
Spark on YARN: Kerberos Integration Configuration

I. Environment:

-- Hadoop cluster

1. Hadoop cluster (dm46, dm47, dm48), with Kerberos security enabled; all components are managed by K8s and run in Docker pods

2. Python on YARN is 3.6; Java is 1.8

-- Client: dm45

1. Open-source Spark version: 3.3.1

2. conda has two environments: base uses Python 3.6, pyspark_env uses Python 3.8

II. Goal:

From node dm45, submit Spark jobs to the Hadoop cluster in YARN mode through the Spark (3.3.1) client.

III. Steps (performed on dm45):

1. Download the Spark client and extract it

spark-3.3.1-bin-hadoop3.tgz

SPARK_HOME: /home/xxxxx/kdh/spark

2. kinit yarn (obtain a Kerberos ticket for the yarn principal)
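If a password prompt is undesirable, the keytab used later in the step-5 submit commands can authenticate directly (a sketch, assuming that keytab and principal):

kinit -kt /home/xxxxx/soft/yarn.keytab yarn@TDH
klist   # confirm a valid ticket for yarn@TDH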

3. Spark configuration

a. Copy hdfs-site.xml, core-site.xml, and yarn-site.xml to /home/xxxxx/kdh/spark/conf
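For example, taking them from the TDH-Client configuration directory referenced in spark-env.sh below (a sketch; adjust the source path to wherever the cluster configs actually live):

cd /home/xxxxx/soft/TDH-Client/conf/hadoop
cp hdfs-site.xml core-site.xml yarn-site.xml /home/xxxxx/kdh/spark/conf/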

b. Edit spark-env.sh

cd /home/xxxxx/kdh/spark/conf

cp spark-env.sh.template spark-env.sh

vi /home/xxxxx/kdh/spark/conf/spark-env.sh

-------------------------------------------------------------------------------------------

HADOOP_CONF_DIR=/home/xxxxx/soft/TDH-Client/conf/hadoop

YARN_CONF_DIR=/home/xxxxx/soft/TDH-Client/conf/hadoop

YARN_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-yarn

HADOOP_YARN_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-yarn

HADOOP_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop

HADOOP_LIBEXEC_DIR=/home/xxxxx/soft/TDH-Client/hadoop/hadoop/libexec/

HADOOP_HDFS_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-hdfs

HADOOP_COMMON_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop

HADOOP_MAPRED_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-mapreduce

SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://dm47:8020/tmp/zzdb/sparklog/ -Dspark.history.fs.cleaner.enabled=true"
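With the cluster configs in place and a ticket from kinit, a quick sanity check that the client can talk to the secured HDFS:

hadoop fs -ls /tmp   # should list the directory without GSS/Kerberos errors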

c. Edit spark-defaults.conf

cd /home/xxxxx/kdh/spark/conf

cp spark-defaults.conf.template spark-defaults.conf

vi /home/xxxxx/kdh/spark/conf/spark-defaults.conf

-------------------------------------------------------------------------------------------

spark.eventLog.enabled true

spark.eventLog.dir hdfs:///tmp/zzdb/sparklog

spark.eventLog.compress true

# link completed apps in the RM UI to the Spark history server started in step 4:
spark.yarn.historyServer.address dm45:18080

spark.yarn.jars hdfs:///tmp/zzdb/spark/jars/*.jar

d. Adjust the log level

cd /home/xxxxx/kdh/spark/conf

cp log4j2.properties.template log4j2.properties
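The copied file can then be tuned; for example, to quiet the console output (a sketch against the stock Spark 3.3 template):

# in log4j2.properties, raise the root logger from info to warn
rootLogger.level = warn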

e. Dependency jars (found inside TDH-Client)

cp guardian-common-guardian-3.1.0.jar /home/xxxxx/kdh/spark/jars/

cp yarn-plugin-transwarp-6.2.0.jar /home/xxxxx/kdh/spark/jars/

hadoop fs -mkdir -p /tmp/zzdb/spark/jars

cd /home/xxxxx/kdh/spark/jars

hadoop fs -put * /tmp/zzdb/spark/jars/
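A quick check that the upload landed where spark.yarn.jars expects it:

hadoop fs -ls /tmp/zzdb/spark/jars | head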

f. Other: create the event log directory referenced in spark-defaults.conf

hadoop fs -mkdir -p /tmp/zzdb/sparklog

4. Start the history server

cd /home/xxxxx/kdh/spark/sbin

./start-history-server.sh

Address: http://dm45:18080
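To confirm it actually came up (a sketch; the exact log file name embeds the local user and hostname):

jps | grep HistoryServer
tail -n 20 /home/xxxxx/kdh/spark/logs/spark-*HistoryServer*.out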

5. Tests

a. Pi job

-- the base environment uses Python 3.6

conda activate base

cd /home/xxxxx/kdh/spark/bin

./spark-submit \

--master yarn \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab \

/home/xxxxx/kdh/spark/examples/src/main/python/pi.py 30
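If the submission succeeds in client mode, the estimate appears near the end of the driver output, something like (the value varies per run):

Pi is roughly 3.14...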

b. spark-shell

./spark-shell \
--master yarn \
--deploy-mode client \
--driver-memory 4g \
--executor-memory 4g \
--num-executors 2 \
--executor-cores 2 \
--principal yarn@TDH \
--keytab /home/xxxxx/soft/yarn.keytab

c. Pi job, gotcha: a newer Python version was specified; the job shows SUCCEEDED on YARN, but the pi output was never actually spotted, possibly an eyesight problem (the log check after this example shows where to look)

conda activate pyspark_env

cd /home/xxxxx/kdh/spark/bin

./spark-submit \

--master yarn \

--conf 'spark.pyspark.driver.python=/software/anaconda3/envs/pyspark_env/bin/python' \

--conf 'spark.pyspark.python=/software/anaconda3/envs/pyspark_env/bin/python' \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab \

/home/xxxxx/kdh/spark/examples/src/main/python/pi.py 30
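When driver output seems to be missing, the aggregated application logs on YARN are the place to look (a sketch; substitute the real application id from the submit output or the RM UI):

yarn logs -applicationId <application_id> | grep -i 'pi is roughly'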

d. Submit wordcount in cluster mode

conda activate base

cd /home/kangwenqi/kdh/spark/bin

./spark-submit \

--master yarn \

--deploy-mode cluster \

--principal yarn@TDH \

--keytab /home/kangwenqi/soft/yarn.keytab \

/home/kangwenqi/workspace/pyspark_learn/02_pyspark_core/main/02_Wordcount_hdfs_yarn_cluster.py
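In cluster mode the driver runs inside a YARN container, so anything the script prints goes to the application logs rather than the local console; a post-run check (sketch, same placeholder id as above):

yarn application -status <application_id>
yarn logs -applicationId <application_id> | less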

IV. Web UIs:

http://dm45:18080 (Spark history server)

http://dm48:19888 (MapReduce JobHistory server)

http://dm46:8088/cluster (YARN ResourceManager)

V. Behind the scenes

1. Swapping the Java version in the yarn pod: the original was 1.7, replaced with 1.8

[root@dm46 ~]# docker images | grep yarn

dm46:5000/transwarp/yarn transwarp-6.2.1-final cb9ccbe898b6 3 years ago 2.22GB

transwarp/yarn transwarp-6.2.1-final cb9ccbe898b6 3 years ago 2.22GB

[root@dm46 ~]#

docker run -id dm46:5000/transwarp/yarn:transwarp-6.2.1-final bash

docker ps -a | grep yarn | grep bash

docker exec -it d0f513cd0780 bash

# park the old 1.7 JDK, then rename the 1.8 JDK into the 1.7 path,
# so every JAVA_HOME reference inside the image picks up 1.8 unchanged:
mv /usr/java/jdk1.7.0_71 /usr/java/jdk1.7.0_71-bak
mv /usr/java/jdk1.8.0_25 /usr/java/jdk1.7.0_71

docker tag dm46:5000/transwarp/yarn:transwarp-6.2.1-final dm46:5000/transwarp/yarn:transwarp-6.2.1-final-jdk17bak

docker commit d0f513cd0780 dm46:5000/transwarp/yarn:transwarp-6.2.1-final

docker push dm46:5000/transwarp/yarn:transwarp-6.2.1-final
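Before relying on the new image, it is worth verifying inside the running container that the swap took (a sketch, using the container id from above):

docker exec d0f513cd0780 /usr/java/jdk1.7.0_71/bin/java -version   # should now report 1.8.0_25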

2. dm45 cannot be resolved inside the pod (it would be a surprise if it could)

The hosts file that TDH pods map in from the host: /etc/transwarp/conf/hosts
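Appending the client there fixes resolution (a sketch; the IP below is a placeholder for dm45's real address):

echo '192.168.0.45  dm45' >> /etc/transwarp/conf/hosts   # hypothetical IP, replace with dm45's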

3. YARN kept logging: Operation category READ is not supported in state standby

This message comes from the standby NameNode of an HA pair; clients normally just retry against the active one, so it is mostly harmless noise. The line below was added anyway (SPARK_MASTER_HOST only matters in standalone mode, so unsurprisingly it changed nothing):

export SPARK_MASTER_HOST=dm47

Calling a truce with it for now; the check below shows how to confirm which NameNode is active.
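To see which NameNode is actually active (a sketch; nn1/nn2 are placeholder service ids, the real ones come from dfs.ha.namenodes.* in hdfs-site.xml):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2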

4. A mistake caught too late: the wrong version, and everything had to be redone!!!

This comes last because it was the detour I actually took; not everything was right from the start.

(base) [root@dm45 bin]# hadoop version

Hadoop 2.7.2-transwarp-6.2.0

Subversion http://xxx:10080/hadoop/hadoop-2.7.2-transwarp.git -r f31230971c2a36e77e4886e0f621366826cec3a3

Compiled by jenkins on 2019-07-27T11:33Z

Compiled with protoc 2.5.0

From source with checksum 42cb923f1631e3c548d6b7e572aa6962

This command was run using /home/xxxxx/soft/TDH-Client/hadoop/hadoop/hadoop-common-2.7.2-transwarp-6.2.0.jar

The fix: the client's Hadoop is 2.7.2-transwarp (see above), so a Spark build for Hadoop 2.7 is needed rather than the hadoop3 build: https://dlcdn.apache.org/spark/spark-3.2.3/spark-3.2.3-bin-hadoop2.7.tgz
