Download the Spark package:
https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
Configure the environment variables:
export SPARK_HOME=/home/spark-3.1.2-bin-hadoop3.2
export JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
export PATH=${PATH}:${JAVA_HOME}/bin:${SPARK_HOME}/bin
Install the pyspark package (note: its version must match the Spark version):
pip install pyspark==3.1.2 -i https://pypi.tuna.tsinghua.edu.cn/simple/
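To confirm the two versions actually match, a quick check from Python (a minimal sketch; run it in the environment where pyspark was just installed):

import pyspark

# Should print 3.1.2, the same version as the Spark distribution under SPARK_HOME;
# a mismatch between the two is a common source of startup errors.
print(pyspark.__version__)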
Note:
To use Spark in a Python project, a Linux environment only needs the JDK, Spark, and the pyspark Python module; a Windows environment additionally requires the Hadoop components, otherwise it will fail at runtime.
After completing the two steps above, run pyspark; a response like the following indicates a successful installation.
bash-5.0# pyspark
Python 3.8.1 (default, Jan 18 2020, 02:42:17)
[GCC 9.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
21/09/23 14:22:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.1.2
      /_/

Using Python version 3.8.1 (default, Jan 18 2020 02:42:17)
Spark context Web UI available at http://python-allsql-76bd574f99-xbrv4:4040
Spark context available as 'sc' (master = local[*], app id = local-1632378148340).
SparkSession available as 'spark'.
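Beyond the interactive shell, a short script can confirm that jobs run from plain Python code as well (a minimal sketch, assuming local mode; the app name is arbitrary):

from pyspark.sql import SparkSession

# Build a local SparkSession; "local[*]" uses all available cores.
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("install-check") \
    .getOrCreate()

# A trivial job: build a 100-row DataFrame and count it.
df = spark.range(100)
print(df.count())  # expected output: 100

spark.stop()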
Possible issues:
Running pyspark reports an error:
bash: No such file or directory
Fix: apk add bash (the pyspark launcher script needs bash, which Alpine images do not ship by default).
"Pyspark: Exception: Java gateway process exited before sending the driver its port number"
Fix:
This problem has several possible causes:
1. The Java environment variable is set incorrectly; fixing the JDK environment variable (JAVA_HOME) resolves it (see the sketch after this list).
2. A version incompatibility between pyspark and the Spark distribution.
3. An Alpine-specific issue, which is easy to overlook. First confirm whether pyspark starts at all: if the shell reports that the pyspark command does not exist, it is a PATH problem; if it reports "No such file or directory", it is the same as the first error above.
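For cause 1, JAVA_HOME can also be set from inside the Python process, as long as it happens before the JVM gateway starts (a minimal sketch; the paths below come from the environment-variable step above and may differ on your machine):

import os
from pyspark.sql import SparkSession

# Must be set before getOrCreate(), which launches the Java gateway.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-1.8-openjdk"     # adjust to your JDK path
os.environ["SPARK_HOME"] = "/home/spark-3.1.2-bin-hadoop3.2"  # adjust to your Spark path

spark = SparkSession.builder.master("local[*]").appName("gateway-check").getOrCreate()
print(spark.version)  # prints the Spark version if the gateway came up
spark.stop()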
java.sql.SQLException: No suitable driver
Fix:
The MySQL JDBC jar is missing; download mysql-connector-java and put it in jre\lib\ext.
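In PySpark, an alternative to copying the jar into jre\lib\ext is to hand the driver jar to Spark via the spark.jars config and name the driver class explicitly when reading (a sketch; the jar path, URL, table, and credentials are placeholders, and the driver class shown is the Connector/J 8.x one):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[*]") \
    .appName("jdbc-check") \
    .config("spark.jars", "/path/to/mysql-connector-java.jar") \
    .getOrCreate()

# Read a table over JDBC; naming the driver class avoids "No suitable driver".
df = spark.read.format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/testdb") \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("dbtable", "some_table") \
    .option("user", "root") \
    .option("password", "secret") \
    .load()

df.show()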