Linux cluster system setup: [1] Big Data Study Prep [01]: System, Network, SSH
JDK environment: [2] Big Data Study Prep [02]: JDK Installation and Upgrade
ZooKeeper cluster environment: [3] Big Data Study [01]: ZooKeeper Environment Configuration
Hadoop cluster environment: [4] Big Data Study [02]: Hadoop Installation and Configuration
[hadoop@hadoop01 ~]$ wget https://downloads.lightbend.com/scala/2.11.11/scala-2.11.11.tgz
[hadoop@hadoop01 ~]$ tar -zvxf scala-2.11.11.tgz
[hadoop@hadoop01 scala-2.11.11]$ sudo vim /etc/profile
export SCALA_HOME=/home/hadoop/scala-2.11.11
export PATH=${SCALA_HOME}/bin:${PATH}
[hadoop@hadoop01 scala-2.11.11]$ source /etc/profile
[hadoop@hadoop01 ~]$ scala -version
[hadoop@hadoop01 ~]$ scala
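If the REPL starts, a trivial expression is enough to confirm the toolchain works (illustrative only, not part of the original transcript):
scala> 1 + 1
res0: Int = 2
scala> :quit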
[hadoop@hadoop01 ~]$ scp -r scala-2.11.11 hadoop@hadoop02:~/
[hadoop@hadoop01 ~]$ scp -r scala-2.11.11 hadoop@hadoop03:~/
[hadoop@hadoop01 ~]$ wget https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz
[hadoop@hadoop01 ~]$ tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz
[hadoop@hadoop01 ~]$ cp spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh.template spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh
[hadoop@hadoop01 ~]$ vim spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh
export SCALA_HOME=/home/hadoop/scala-2.11.11
export JAVA_HOME=/home/hadoop/jdk1.8.0_144
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop
# The following options enable standalone Master HA via ZooKeeper; spark.deploy.zookeeper.dir is the znode path where Spark stores its recovery state.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:2181,hadoop03:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=512m
export SPARK_EXECUTOR_MEMORY=512m
export SPARK_DRIVER_MEMORY=512m
export SPARK_WORKER_CORES=1
[hadoop@hadoop01 ~]$ cp spark-2.1.1-bin-hadoop2.7/conf/slaves.template spark-2.1.1-bin-hadoop2.7/conf/slaves
[hadoop@hadoop01 ~]$ vim spark-2.1.1-bin-hadoop2.7/conf/slaves
hadoop02
hadoop03
[hadoop@hadoop01 ~]$ vim /etc/profile
export SPARK_HOME=/home/hadoop/spark-2.1.1-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
[hadoop@hadoop01 ~]$ source /etc/profile
[hadoop@hadoop01 ~]$ scp -r spark-2.1.1-bin-hadoop2.7 hadoop@hadoop02:~/
[hadoop@hadoop01 ~]$ scp -r spark-2.1.1-bin-hadoop2.7 hadoop@hadoop03:~/
Set the environment variables on hadoop02 and hadoop03 in the same way, as sketched below.
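For example, on hadoop02 (and again on hadoop03), add the same entries to /etc/profile as on hadoop01, assuming the same install paths under /home/hadoop, and reload it:
[hadoop@hadoop02 ~]$ sudo vim /etc/profile
export SCALA_HOME=/home/hadoop/scala-2.11.11
export PATH=${SCALA_HOME}/bin:${PATH}
export SPARK_HOME=/home/hadoop/spark-2.1.1-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
[hadoop@hadoop02 ~]$ source /etc/profile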
Start the Spark cluster from Spark's sbin directory on hadoop01 (this is Spark's start-all.sh, not Hadoop's):
[hadoop@hadoop01 sbin]$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/hadoop/spark-2.1.1-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop01.out
hadoop03: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.1.1-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop03.out
hadoop02: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.1.1-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop02.out
[hadoop@hadoop01 bin]$ spark-shell
Put a few words or a short text into the file /home/hadoop/testData/test.db; the following code then produces a word count.
scala> val file = sc.textFile("file:/home/hadoop/testData/test.db")
file: org.apache.spark.rdd.RDD[String] = file:/home/hadoop/testData/test.db MapPartitionsRDD[6] at textFile at <console>:24
scala> file.flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
res1: Array[(String, Int)] = Array((him,1), (you,1), (girl,1), (word,1), (hello,7), (boy,1), (good,1), (china,1), (me,1), (her,1))
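As a small variation (not part of the original transcript), the same pipeline can sort the counts by frequency before collecting:
scala> file.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).sortBy(_._2, ascending = false).collect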
Because the cluster was started from hadoop01, the Master process runs on hadoop01 only; hadoop02 and hadoop03 have no Master process yet. Start a Master manually on each of them:
[hadoop@hadoop02 sbin]$ ./start-master.sh
[hadoop@hadoop03 sbin]$ ./start-master.sh
Checking with jps, you can see that hadoop02 and hadoop03 now each have a Master process, for example:
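(The exact jps output will vary with whatever other daemons are running on each host.)
[hadoop@hadoop02 ~]$ jps | grep Master
[hadoop@hadoop03 ~]$ jps | grep Master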
[hadoop@hadoop01 bin]$ spark-shell --master spark://hadoop01:7077,hadoop02:7077,hadoop03:7077
Web UI check: the Master on hadoop01 is ALIVE, while the Masters on hadoop02 and hadoop03 are in STANDBY state.
Now stop the Master service on the hadoop01 server.
[hadoop@hadoop01 ~]$ ./spark-2.1.1-bin-hadoop2.7/sbin/stop-master.sh
The Master on hadoop01 has now stopped.
The status of hadoop02 has changed to ALIVE (the result of the ZooKeeper election).
hadoop03 remains unchanged (still STANDBY).
In addition, the spark-shell session is not affected; Spark keeps working normally.
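A quick way to confirm this from the still-open spark-shell is to rerun the earlier word count against the same test file (a sketch; it should still return a result after the failover):
scala> sc.textFile("file:/home/hadoop/testData/test.db").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).count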
Looking at the znodes in ZooKeeper, you can see Spark's nodes as well as Spark's applications, for example:
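Using ZooKeeper's own client (assuming zkCli.sh from the ZooKeeper installation in [3] is on the PATH):
[hadoop@hadoop01 ~]$ zkCli.sh -server hadoop01:2181
ls /spark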
Tip: when the Master status of the Spark cluster shows RECOVERING, it is most likely because Spark was shut down abnormally; it may stay in that state for a while after a restart. One fix is to delete Spark's znode in ZooKeeper and then restart Spark.
Delete Spark's directory in ZooKeeper:
rmr /spark
Applications that died abnormally need to be deleted from ZooKeeper by hand.
List the applications:
ls /spark/master_status
Delete the stale application entries:
rmr /spark/master_status/app_app-20160219104450-0021
Restart the Spark cluster, for example:
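(Run from Spark's sbin directory on hadoop01; the standby Masters on hadoop02 and hadoop03 are then started again with start-master.sh as above.)
[hadoop@hadoop01 sbin]$ ./stop-all.sh
[hadoop@hadoop01 sbin]$ ./start-all.sh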
[1] Big Data Study Prep [01]: System, Network, SSH
[2] Big Data Study Prep [02]: JDK Installation and Upgrade
[3] Big Data Study [01]: ZooKeeper Environment Configuration
[4] Big Data Study [02]: Hadoop Installation and Configuration
[Author: happyprince, http://blog.csdn.net/ld326/article/details/78023816]