
Spark: spark-submit job submission and parameter reference (YARN)

Can spark.kryoserializer.buffer.max be set when submitting with spark-submit?
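
Yes. Like any other Spark configuration property, it can be passed on the command line with --conf (see the parameter reference in section 2 below). A minimal sketch, where the 512m value is an illustrative assumption and the class name and jar path are reused from the examples later in this article:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn \
  --deploy-mode client \
  --executor-memory 3g \
  --executor-cores 2 \
  --conf spark.kryoserializer.buffer.max=512m \
  /var/business_data/pi-1.0.1-SNAPSHOT.jar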

Spark: understanding the submit command:

https://blog.csdn.net/weixin_38750084/article/details/106973247

spark-submit can submit a job to a Spark standalone cluster or to a Hadoop YARN cluster for execution.

Configuration in code:

util:

import org.apache.spark.serializer.KryoSerializer
import org.apache.spark.sql.SparkSession

object SparkContextUtil {
  /**
   * Creates and wraps a SparkSession instance.
   *
   * @param appName application name
   * @param params  extra Spark configuration supplied by the caller
   * @return the SparkSession
   */
  def createSparkContext(appName: String, params: Map[String, String] = Map.empty): SparkSession = {
    // Entry point
    val spark: SparkSession = SparkSession.builder()
      .appName(appName)
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .master("local[*]") // NOTE: hard-coding the master here causes problems in yarn-cluster mode (see below)
      .config("spark.serializer", classOf[KryoSerializer].getName)
      .config("spark.debug.maxToStringFields", "100")
      .enableHiveSupport()
      .getOrCreate()
    // Apply the caller-supplied configuration
    params.foreach { case (key, value) => spark.conf.set(key, value) }
    spark
  }
}

Usage:

import org.apache.log4j.{Level, Logger}
import org.slf4j.LoggerFactory

object BusinessDataCombineErpJobs {
  Logger.getLogger("org").setLevel(Level.WARN)
  val logger = LoggerFactory.getLogger(BusinessDataCombineErpJobs.getClass.getSimpleName)

  def main(args: Array[String]): Unit = {
    val spark = SparkContextUtil.createSparkContext(BusinessDataCombineErpJobs.getClass.getSimpleName)
    // The underlying SparkContext, used for creating RDDs and managing cluster resources
    val sc = spark.sparkContext
    println("--- data processing started ---")
    test(spark) // job-specific processing logic, defined elsewhere
    println("--- data processing finished ---")
    spark.close()
  }
}

1. Examples

A minimal example: after deploying Spark in standalone mode, submit a job to the standalone master running locally.

./bin/spark-submit \
  --master spark://localhost:7077 \
  examples/src/main/python/pi.py

If Hadoop is deployed and YARN has been started, a job can be submitted to YARN as in the example below.

Note that Spark must be built with YARN support; the build command is:

build/mvn -Pyarn -Phadoop-2.x -Dhadoop.version=2.x.x -DskipTests clean package

where 2.x is the Hadoop version number. Once the build completes, the following command submits a job to the Hadoop YARN cluster.

./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  --queue thequeue \
  examples/target/scala-2.11/jars/spark-examples*.jar 10

Note: the trailing number 10 is an argument passed to the application.

Production examples:

spark2-submit --class bi.tag.TSimilarTagsTable --master yarn-client --executor-memory 6G --num-executors 5 --executor-cores 2 /var/lib/hadoop-hdfs/seijing/ble/tag/spark-sql/pf-spark-master/pi/target/pi-1.0.1-SNAPSHOT.jar

spark2-submit --class resume.mlib.RcoAID \
  --master yarn \
  --deploy-mode client \
  --num-executors 4 \
  --executor-memory 10G \
  --executor-cores 3 \
  --driver-memory 10g \
  --conf "spark.executor.extraJavaOptions='-Xss512m'" \
  --driver-java-options "-Xss512m" \
  /var/lib/hadoop-hdfs/als_ecommend/reserver-1.0-SNAPSHOT.jar $1 $2 >> /var/lib/hadoop-hdfs/als_ecommend/logs/log_spark_out_`date +\%Y\%m\%d`.log

Notes:

1. $1 and $2 are arguments passed in by the calling script, e.g.:
   /bin/bash /root/combine.sh aa bb
   where aa and bb are the arguments.

2. The resulting log files look like this:
   -rw-r--r-- 1 root root   2375 Feb 27 15:25 log_spark_out_20200227.log
   -rw-r--r-- 1 root root 712272 Feb 28 17:03 log_spark_out_20200228.log
   -rw-r--r-- 1 root root   2375 Mar  9 15:36 log_spark_out_20200309.log
   -rw-r--r-- 1 root root 712463 Mar 10 20:24 log_spark_out_20200310.log
   -rw-r--r-- 1 root root  10578 Mar 12 18:51 log_spark_out_20200312.log
   -rw-r--r-- 1 root root 468018 Mar 13 10:06 log_spark_out_20200313.log
   -rw-r--r-- 1 root root 712602 Mar 19 18:26 log_spark_out_20200319.log

   Only println output and DataFrame show() output end up in the log file; logger output is visible on the console while the job runs, but is not written to the log file (a way to capture it as well is sketched right after these notes).
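
The reason is that >> only redirects stdout: println and DataFrame.show() write to stdout, while (assuming Spark's default log4j configuration) logger output goes to stderr. A sketch of capturing both streams in the same log file, reusing the paths from the command above:

spark2-submit --class resume.mlib.RcoAID \
  --master yarn \
  --deploy-mode client \
  /var/lib/hadoop-hdfs/als_ecommend/reserver-1.0-SNAPSHOT.jar $1 $2 \
  >> /var/lib/hadoop-hdfs/als_ecommend/logs/log_spark_out_`date +\%Y\%m\%d`.log 2>&1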

2. spark-submit parameter reference

Parameter                Description
--master                 Master address, i.e. where the job is submitted, e.g. spark://host:port, yarn, local
--deploy-mode            Launch the driver locally (client) or on the cluster (cluster); default is client
--class                  Main class of the application (Java/Scala applications only)
--name                   Application name
--jars                   Comma-separated list of local jars to include on the driver and executor classpaths
--packages               Maven coordinates of jars to include on the driver and executor classpaths
--exclude-packages       Packages to exclude, to avoid dependency conflicts
--repositories           Additional remote repositories
--conf PROP=VALUE        Set an arbitrary Spark configuration property,
                         e.g. --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=256m"
--properties-file        File to load extra properties from; defaults to conf/spark-defaults.conf
--driver-memory          Driver memory, default 1G
--driver-java-options    Extra Java options to pass to the driver
--driver-library-path    Extra library path entries to pass to the driver
--driver-class-path      Extra class path entries to pass to the driver
--driver-cores           Number of driver cores, default 1; used with YARN or standalone
--executor-memory        Memory per executor, default 1G
--total-executor-cores   Total number of cores for all executors; only used with Mesos or standalone
--num-executors          Number of executors to launch, default 2; used with YARN
--executor-cores         Number of cores per executor; used with YARN or standalone
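
Putting several of these options together, a typical YARN submission looks like the sketch below; the class name, jar path, queue, and resource sizes are only illustrative values reused from elsewhere in this article and should be adjusted to the actual job:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --name business_data_combine \
  --master yarn \
  --deploy-mode cluster \
  --queue default \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  --num-executors 5 \
  --conf spark.kryoserializer.buffer.max=512m \
  /var/business_data/pi-1.0.1-SNAPSHOT.jar 10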

Running in yarn-client mode works without problems (even though the code configures .master("local[*]")).

The submit script is:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn-client \
  --executor-memory 3G \
  --num-executors 5 \
  --executor-cores 2 \
  /var/business_data/p-1.0.1-SNAPSHOT.jar > /var/business_data/business_data.log

With .master("local[*]") removed from the code, the job still runs successfully.

However, with .master("local[*]") still present in the code, changing the script to:

--master yarn \
--deploy-mode cluster \

makes the submission fail with an error.

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  /var/business_data/p-1.0.1-SNAPSHOT.jar 10

Note: the number 10 is a custom argument consumed by BusinessDataCombineErpJobs.

The error log is:

From Azkaban:

28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO yarn.Client: Application report for application_1583730534669_117324 (state: FAILED)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO yarn.Client:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - client token: N/A
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - diagnostics: Application application_1583730534669_117324 failed 2 times due to AM Container for appattempt_1583730534669_117324_000002 exited with exitCode: 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - For more detailed output, check application tracking page:http://pf-bigdata4:8088/proxy/application_1583730534669_117324/Then, click on links to logs of each attempt.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Diagnostics: Exception from container-launch.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Container id: container_e87_1583730534669_117324_02_000001
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Exit code: 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Stack trace: ExitCodeException exitCode=13:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell.run(Shell.java:507)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.FutureTask.run(FutureTask.java:266)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.lang.Thread.run(Thread.java:748)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO -
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO -
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Container exited with a non-zero exit code 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Failing this attempt. Failing the application.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - ApplicationMaster host: N/A
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - ApplicationMaster RPC port: -1
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - queue: default
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - start time: 1590649410241
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - final status: FAILED
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - tracking URL: http://pf-bigdata4:8088/cluster/app/application_1583730534669_117324
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - user: root
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Exception in thread "main" org.apache.spark.SparkException: Application application_1583730534669_117324 finished with failed status
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.yarn.Client.run(Client.scala:1153)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1568)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:892)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Shutdown hook called
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-eb1e1b60-ef09-4a58-8e5f-dc988411999e
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Deleting directory /huayong/data/tmp/spark-dba79ec3-1f27-4da0-8e8e-5a98c31c156f
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Process completed unsuccessfully in 55 seconds.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine ERROR - Job run failed!
java.lang.RuntimeException: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
    at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:305)
    at azkaban.execapp.JobRunner.runJob(JobRunner.java:787)
    at azkaban.execapp.JobRunner.doRun(JobRunner.java:602)
    at azkaban.execapp.JobRunner.run(JobRunner.java:563)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
    at azkaban.jobExecutor.utils.process.AzkabanProcess.run(AzkabanProcess.java:125)
    at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:297)
    ... 8 more
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine ERROR - azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1 cause: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Finishing job bi_cal_business_data_table_combine at 1590649460777 with status FAILED

 

Viewing the logs with yarn logs -applicationId application_1583730534669_117324 shows:

20/05/28 15:04:17 WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
20/05/28 15:04:17 WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
20/05/28 15:04:19 ERROR yarn.ApplicationMaster: Uncaught exception:
java.lang.IllegalStateException: User did not initialize spark context!
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:467)
    at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:301)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:241)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:241)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:241)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:782)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:781)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:240)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:806)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Removing the custom trailing argument 10 from the last line of the script makes no difference; the same error occurs.

 

But once .master("local[*]") is removed from the code, the job runs successfully in both client and cluster mode. The likely explanation: master settings made in code take precedence over the --master flag passed to spark-submit, so with local[*] hard-coded the application creates a local-mode context. In yarn-cluster mode the ApplicationMaster launches the user class and waits for a YARN-backed SparkContext to be registered; a local-mode context never registers, so the ApplicationMaster fails with "User did not initialize spark context!" and exit code 13.

 

Summary:

1. With local[*] removed from the code, both modes run successfully; if it is left in, only client mode works.

2. In cluster mode the driver runs inside a container on an arbitrary cluster node and uses that node's resources, while in client mode the driver runs on the machine that submitted the job and uses its resources; in both modes the executors run on the cluster.

3. Working scripts:

client:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn-client \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  /var/business_data/pi-1.0.1-SNAPSHOT-yarn-cluster.jar

cluster:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  /var/business_data/pi-1.0.1-SNAPSHOT-yarn-cluster.jar

 

Spark Streaming submission example:

spark2-submit --master yarn-client \
  --conf spark.driver.memory=2g \
  --class com.tzb.sparkstreaming.prod.DataChangeStreaming \
  --executor-memory 8G \
  --num-executors 5 \
  --executor-cores 2 \
  /test/spark-test-jar-with-dependencies.jar >> /test/sparkstreaming_datachange.log

Reference:

https://www.cnblogs.com/weiweifeng/p/8073553.html

 
