
Big Data Learning [05]: Installing and Configuring Spark for High Availability


Abstract: The goal is to build a highly available Spark computing framework on top of ZooKeeper. First install the Scala environment; then fill in Spark's configuration files; finally start ZooKeeper, Hadoop, and Spark, check the processes on each node, run a small demo, and see what Spark high availability actually looks like.

Prerequisites

Linux cluster setup: [1] Big Data Learning Prelude [01]: System, Network, and SSH
JDK environment: [2] Big Data Learning Prelude [02]: JDK Installation and Upgrade
ZooKeeper cluster: [3] Big Data Learning [01]: ZooKeeper Environment Configuration
Hadoop cluster: [4] Big Data Learning [02]: Hadoop Installation and Configuration

Installing Scala

Download:
[hadoop@hadoop01 ~]$ wget https://downloads.lightbend.com/scala/2.11.11/scala-2.11.11.tgz
Extract:
[hadoop@hadoop01 ~]$ tar -zvxf scala-2.11.11.tgz
Configure the environment variables:
[hadoop@hadoop01 scala-2.11.11]$ sudo vim /etc/profile
export SCALA_HOME=/home/hadoop/scala-2.11.11
export PATH=${SCALA_HOME}/bin:${PATH}
[hadoop@hadoop01 scala-2.11.11]$ source /etc/profile
Test:
[hadoop@hadoop01 ~]$ scala -version
[hadoop@hadoop01 ~]$ scala
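If the installation is correct, scala -version prints the 2.11.11 version string and the scala command opens the REPL, which should evaluate simple expressions, for example:

scala> 1 + 1
res0: Int = 2

scala> :quit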
Copy to the other nodes:
[hadoop@hadoop01 ~]$ scp -r scala-2.11.11 hadoop@hadoop02:~/
[hadoop@hadoop01 ~]$ scp -r scala-2.11.11 hadoop@hadoop03:~/

Installing Spark

Download:
[hadoop@hadoop01 ~]$ wget https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz
Extract:
[hadoop@hadoop01 ~]$ tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz
Configure spark-env.sh:
[hadoop@hadoop01 ~]$ cp spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh.template spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh
[hadoop@hadoop01 ~]$ vim spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh
export SCALA_HOME=/home/hadoop/scala-2.11.11
export JAVA_HOME=/home/hadoop/jdk1.8.0_144
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop
 # spark.deploy.zookeeper.dir is the ZooKeeper path under which Spark stores its recovery state.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:2181,hadoop03:2181 -Dspark.deploy.zookeeper.dir=/spark" 
export SPARK_WORKER_MEMORY=512m
export SPARK_EXECUTOR_MEMORY=512m
export SPARK_DRIVER_MEMORY=512m
export SPARK_WORKER_CORES=1

Configure slaves

[hadoop@hadoop01 ~]$ cp spark-2.1.1-bin-hadoop2.7/conf/slaves.template spark-2.1.1-bin-hadoop2.7/conf/slaves
[hadoop@hadoop01 ~]$ vim spark-2.1.1-bin-hadoop2.7/conf/slaves
hadoop02
hadoop03
Set the environment variables:
[hadoop@hadoop01 ~]$ sudo vim /etc/profile
export SPARK_HOME=/home/hadoop/spark-2.1.1-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
[hadoop@hadoop01 ~]$ source /etc/profile
Copy to the other nodes:
[hadoop@hadoop01 ~]$ scp -r spark-2.1.1-bin-hadoop2.7 hadoop@hadoop02:~/
[hadoop@hadoop01 ~]$ scp -r spark-2.1.1-bin-hadoop2.7 hadoop@hadoop03:~/

Set the same environment variables on hadoop02 and hadoop03 in the same way.

Start the Spark services:
[hadoop@hadoop01 sbin]$ ./start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /home/hadoop/spark-2.1.1-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop01.out
hadoop03: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.1.1-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop03.out
hadoop02: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.1.1-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop02.out
Start the Spark shell:
[hadoop@hadoop01 bin]$ spark-shell

[Screenshot: spark-shell startup output]

Startup page (Web UI):

[Screenshot: Spark Master Web UI]

A small test example in the shell:

Write a few random words, or some text, into the file /home/hadoop/testData/test.db; the code below then counts how many times each word appears.

scala> val file = sc.textFile("file:/home/hadoop/testData/test.db")
file: org.apache.spark.rdd.RDD[String] = file:/home/hadoop/testData/test.db MapPartitionsRDD[6] at textFile at <console>:24

scala> file.flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
res1: Array[(String, Int)] = Array((him,1), (you,1), (girl,1), (word,1), (hello,7), (boy,1), (good,1), (china,1), (me,1), (her,1))
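The same count can also be packaged as a standalone application instead of being typed into the shell. The sketch below is only an illustration (the object name and input path are assumptions, not part of the original post); built against Spark 2.1.1 / Scala 2.11, it would be submitted with spark-submit, passing the master URL on the command line.

import org.apache.spark.{SparkConf, SparkContext}

// Standalone version of the word count run in spark-shell above.
object WordCount {
  def main(args: Array[String]): Unit = {
    // The master URL is supplied by spark-submit, so only the app name is set here.
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)
    val counts = sc.textFile("file:/home/hadoop/testData/test.db")   // same test file as above
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    counts.collect().foreach { case (word, n) => println(s"$word\t$n") }
    sc.stop()
  }
}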
View the application status page:

[Screenshot: the running application in the Spark Web UI]

Spark high availability:

The cluster was started from hadoop01, so the Master process runs on hadoop01 and there is no Master process on hadoop02 or hadoop03. Start a Master manually on each of them:

[hadoop@hadoop02 sbin]$ ./start-master.sh
[hadoop@hadoop03 sbin]$ ./start-master.sh

Checking with jps shows that hadoop02 and hadoop03 now each have a Master process.

Start the shell, listing all three masters:
[hadoop@hadoop01 bin]$ spark-shell --master spark://hadoop01:7077,hadoop02:7077,hadoop03:7077
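For an application (rather than the shell), the same comma-separated list of masters goes into the SparkConf; only the master currently elected ALIVE by ZooKeeper will accept the registration. A minimal sketch, with a hypothetical application name:

import org.apache.spark.{SparkConf, SparkContext}

// Driver-side view of the HA setup: list every candidate master and let the
// election decide which one actually serves the application.
object HaDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("HaDemo")
      .setMaster("spark://hadoop01:7077,hadoop02:7077,hadoop03:7077")
    val sc = new SparkContext(conf)
    println("master URL: " + sc.master)
    sc.stop()
  }
}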

The Master is hadoop01:
[Screenshot: hadoop01 Web UI, status ALIVE]
hadoop02:
[Screenshot: hadoop02 Web UI, status STANDBY]
hadoop03:
[Screenshot: hadoop03 Web UI, status STANDBY]
Now stop the Master service on hadoop01:
[hadoop@hadoop01 ~]$ ./spark-2.1.1-bin-hadoop2.7/sbin/stop-master.sh
hadoop01's Master has stopped, as shown below:
[Screenshot: hadoop01 Master stopped]
hadoop02's status has changed to ALIVE (the result of the ZooKeeper election).
[Screenshot: hadoop02 Web UI, status ALIVE]
hadoop03 is unchanged.
In addition, the running spark-shell is not affected; Spark continues to work normally.
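A simple way to check this is to go back to the spark-shell that was started before the failover and run another action; if recovery worked, the existing SparkContext still schedules work (the path below is the same test file used earlier):

// Typed into the spark-shell that was already running before hadoop01's Master was stopped.
sc.master                                                 // still the URL list the shell was started with
sc.textFile("file:/home/hadoop/testData/test.db").count   // a fresh action still runs and returns a count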

ZooKeeper nodes

Looking at the ZooKeeper tree, you can see Spark's znodes as well as Spark's applications:
[Screenshot: zkCli listing of the /spark znodes]
Tip: when the cluster Master's status shows RECOVERING, it is very likely that Spark was shut down abnormally; the state may persist for a while after a restart. One remedy is to delete Spark's znodes in ZooKeeper and then restart Spark.
Delete Spark's directory in ZooKeeper:

rmr /spark  

Applications that died abnormally need to be deleted from ZooKeeper by hand.
List the applications:

ls /spark/master_status  

Delete the stale application entry:

rmr /spark/master_status/app_app-20160219104450-0021  

Then restart the Spark cluster.
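The same inspection and cleanup can also be done programmatically with the plain ZooKeeper Java client instead of zkCli; the sketch below is only an illustration (the connection string comes from spark-env.sh above, and the application znode name is the example one from this post):

import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
import scala.collection.JavaConverters._

object SparkZkCleanup {
  def main(args: Array[String]): Unit = {
    // Connect to the ensemble configured in spark.deploy.zookeeper.url.
    val zk = new ZooKeeper("hadoop01:2181,hadoop02:2181,hadoop03:2181", 30000,
      new Watcher { override def process(event: WatchedEvent): Unit = () })
    // List the application entries Spark persisted for recovery.
    zk.getChildren("/spark/master_status", false).asScala.foreach(println)
    // A stale entry would be deleted like this (version -1 matches any version):
    // zk.delete("/spark/master_status/app_app-20160219104450-0021", -1)
    zk.close()
  }
}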

References

[1] Big Data Learning Prelude [01]: System, Network, and SSH
[2] Big Data Learning Prelude [02]: JDK Installation and Upgrade
[3] Big Data Learning [01]: ZooKeeper Environment Configuration
[4] Big Data Learning [02]: Hadoop Installation and Configuration

Author: happyprince, http://blog.csdn.net/ld326/article/details/78023816
