tar -zxvf scala-2.11.8.tgz -C /usr/project/
tar -zxvf spark-2.4.4-bin-hadoop2.7.tgz -C /usr/project/
vi /etc/profile
Note: before writing the environment variables, the extracted Scala and Spark directories were renamed to `scala` and `spark` (the original names are long and unwieldy).
#scala
export SCALA_HOME=/usr/project/scala
export PATH=$PATH:$SCALA_HOME/bin
#spark
export SPARK_HOME=/usr/project/spark
export PATH=$PATH:$SPARK_HOME/bin
Reload the environment variables.
Note: the two commands below do the same thing; use either one.
. /etc/profile
source /etc/profile
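The reason `export PATH=$PATH:$SPARK_HOME/bin` makes `spark-shell` callable from anywhere is that the shell resolves commands by searching each directory in PATH. A minimal sketch of that mechanism, using a throwaway directory and a fake `spark-demo` script instead of the real install (both names are made up for illustration):

```shell
# Create a stand-in "install" with a bin/ directory, like $SPARK_HOME/bin.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/bin"
printf '#!/bin/sh\necho spark-ok\n' > "$demo_home/bin/spark-demo"
chmod +x "$demo_home/bin/spark-demo"

# Append it to PATH, exactly as /etc/profile does for SPARK_HOME/bin.
export PATH="$PATH:$demo_home/bin"

# The command now resolves by name alone, no full path needed.
spark-demo
```

After `source /etc/profile`, the real `scala` and `spark-shell` commands resolve the same way.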
Enter the conf directory:
cd /usr/project/spark/conf
mv spark-env.sh.template spark-env.sh
mv slaves.template slaves
Append the following at the bottom of spark-env.sh:
export HADOOP_CONF_DIR=/opt/hadoop-2.7.7/etc/hadoop
export YARN_CONF_DIR=/opt/hadoop-2.7.7/etc/hadoop
cd into the Hadoop configuration directory (the same path that HADOOP_CONF_DIR points to above):
cd /opt/hadoop-2.7.7/etc/hadoop
vi yarn-site.xml
Note: add the following inside the existing <configuration> element:
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>
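These three properties stop YARN from killing Spark containers over memory checks: the first two disable the physical- and virtual-memory checks outright, and the ratio controls how much virtual memory a container may use relative to its physical allocation. A quick sketch of the arithmetic behind `vmem-pmem-ratio` (the 1024 MB figure is just an example allocation, not from the tutorial):

```python
# What yarn.nodemanager.vmem-pmem-ratio = 4 means for one container:
pmem_mb = 1024            # physical memory granted to the container (example)
ratio = 4                 # vmem-pmem-ratio from yarn-site.xml
vmem_limit_mb = pmem_mb * ratio  # virtual memory the container may use
print(vmem_limit_mb)      # 4096
```

With the default ratio (2.1), JVM-heavy processes like Spark executors often exceed the virtual limit and get killed, which is why tutorials commonly raise the ratio or disable the check.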
start-all.sh
or, separately:
start-dfs.sh and start-yarn.sh
Run Spark on YARN:
spark-shell --master yarn --deploy-mode client
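Once the shell starts, you can type Scala at the prompt. As a preview of the style, here is a word count written with plain Scala collections; it runs without any cluster, whereas in spark-shell you would start from an RDD (e.g. `sc.textFile(...)`) instead of a local `List`:

```scala
// Plain-Scala sketch of the flatMap / groupBy / count pattern
// you would use on an RDD inside spark-shell.
val lines = List("hello spark", "hello yarn")
val counts = lines
  .flatMap(_.split(" "))                  // split lines into words
  .groupBy(identity)                      // group identical words
  .map { case (w, ws) => (w, ws.size) }   // count each group
println(counts.toList.sortBy(_._1))
```

The same chain of transformations, applied to an RDD in the shell, is distributed across the YARN containers started in client mode.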
Open the YARN web UI at http://<master-ip>:8088 to confirm the application appears there.
Success!