
A Collection of Spark Errors

1. java.lang.ClassNotFoundException: Failed to find data source: kafka. 

The full error:

Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at http://spark.apache.org/third-party-projects.html
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:639)
    at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:159)
    at com.hx.bigdata.spark.Md2Doris.main(Md2Doris.java:20)
Caused by: java.lang.ClassNotFoundException: kafka.DefaultSource
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23$$anonfun$apply$15.apply(DataSource.scala:622)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23$$anonfun$apply$15.apply(DataSource.scala:622)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23.apply(DataSource.scala:622)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23.apply(DataSource.scala:622)
    at scala.util.Try.orElse(Try.scala:84)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:622)
    ... 2 more
24/04/24 09:42:34 INFO SparkContext: Invoking stop() from shutdown hook

Cause: the Kafka dependency is missing. This bites in particular when migrating an older Spark Streaming program, which pulled in Kafka support via:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>${spark.version}</version>
    <!-- <scope>provided</scope> -->
</dependency>

With Spark Structured Streaming, however, you must instead depend on:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
    <version>${spark.version}</version>
    <!-- <scope>provided</scope> -->
</dependency>
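
If changing the pom is inconvenient, the same connector can also be pulled in at submit time with --packages; a sketch, assuming a Spark 2.4.x / Scala 2.11 build (adjust the version to match your cluster; md2doris.jar stands in for your application jar):

```shell
spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.8 \
  --class com.hx.bigdata.spark.Md2Doris \
  md2doris.jar
```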

2. java.io.IOException: (null) entry in command string: null chmod 0644 C:\Users\wsf\AppData\L...

The full error:

24/04/24 09:47:48 ERROR StreamMetadata: Error writing stream metadata StreamMetadata(3c42eeca-593e-40c7-80c8-681693c62ff3) to file:/C:/Users/wsf/AppData/Local/Temp/temporary-49bf6c97-f0dc-4727-b7b2-2fc187abf76d/metadata
java.io.IOException: (null) entry in command string: null chmod 0644 C:\Users\wsf\AppData\Local\Temp\temporary-49bf6c97-f0dc-4727-b7b2-2fc187abf76d\metadata
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:762)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:859)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:842)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:661)
    at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:501)
    at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:482)
    at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:498)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:467)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:433)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
    at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:76)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.apply(StreamExecution.scala:124)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.apply(StreamExecution.scala:122)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:122)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution.<init>(ContinuousExecution.scala:51)
    at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:246)
    at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:299)
    at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:296)

Cause: Spark is running on Windows and the hadoop.dll native library is missing. Download the build matching your Hadoop version from https://github.com/cdarlint/winutils and place hadoop.dll into C:\Windows\System32.
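
A minimal Windows setup sketch (all paths illustrative; pick the folder matching your Hadoop version from that repository):

```shell
:: assuming the winutils binaries were unpacked to C:\hadoop\bin
copy C:\hadoop\bin\hadoop.dll C:\Windows\System32\
setx HADOOP_HOME C:\hadoop
```

Restart the IDE or shell afterwards so the new environment variables are picked up.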

3. WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again.

The full error:

Caused by: org.apache.spark.SparkException: There is no enough memory to build hash map
    at org.apache.spark.sql.execution.joins.UnsafeHashedRelation$.apply(HashedRelation.scala:312)
    at org.apache.spark.sql.execution.joins.HashedRelation$.apply(HashedRelation.scala:108)
    at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:853)
    at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:841)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:86)

Cause: the driver has too little memory; increase it by passing a larger value to --driver-memory.
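
For example (memory value illustrative; Md2Doris is the class from the first trace above, and md2doris.jar stands in for your application jar):

```shell
spark-submit \
  --master yarn \
  --driver-memory 4g \
  --class com.hx.bigdata.spark.Md2Doris \
  md2doris.jar
```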

4. spark-submit fails with: Error: Must specify a primary resource (JAR or Python or R file)

Cause: the application JAR (the application-jar argument) was not given on the command line.

5. A Spark read or write fails with: java.lang.IllegalArgumentException: Can't get JDBC type for null

Cause: some column in the Dataset has a type for which no JDBC type can be derived; declare an explicit type for every column.

6. Running Spark SQL in Hue fails with: java.io.IOException: Failed to create local dir in /tmp/blockmgr-adb70127

Cause: the ThriftServer was idle for so long that the system cleaned up the parent temp directory, or the user has no write permission on it. Fix: restart the ThriftServer, or configure spark.local.dir (default /tmp), either via SPARK_LOCAL_DIRS in the Spark env or in the application itself; several comma-separated paths may be given to spread the I/O load.
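
Both configuration routes, sketched with illustrative paths:

```shell
# spark-defaults.conf: comma-separated directories spread the shuffle I/O
spark.local.dir /data1/spark-tmp,/data2/spark-tmp

# or equivalently in spark-env.sh:
export SPARK_LOCAL_DIRS=/data1/spark-tmp,/data2/spark-tmp
```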

7. cannot assign instance of scala.collection.immutable… 

The full error:

WARN scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.5.159, executor 0): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD

Cause: inconsistent master settings, e.g. the code calls setMaster with a standalone URL but the job is submitted in YARN mode.
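
One way to avoid the mismatch is to never hardcode the master in the application and to supply it only at submit time; a sketch (class and jar names illustrative):

```shell
# in code: new SparkConf().setAppName("my-app") with NO setMaster call
spark-submit --master yarn --deploy-mode cluster \
  --class com.hx.bigdata.spark.Md2Doris md2doris.jar
```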

8. saveAsHadoopFiles报错:class scala.runtime.Nothing$ not org.apache.hadoop.mapred.OutputFormat

Cause: the key/value classes and the OutputFormat were left unspecified; pass them to saveAsHadoopFiles explicitly.

9. is bigger than spark.driver.maxResultSize (1024.0 MiB)

The full error:

org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 17 tasks (1062.8 MiB) is bigger than spark.driver.maxResultSize (1024.0 MiB)

Cause: the result set sent back to the driver is too large; raise spark.driver.maxResultSize (default 1g).
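
The limit can be raised per job; a sketch with an illustrative value (setting it to 0 removes the limit entirely, at the risk of a driver OOM):

```shell
spark-submit --conf spark.driver.maxResultSize=4g ...
```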

10. A Spark job fails with: FileSystem closed

Cause: an HDFS FileSystem obtained in Spark code was closed after use. FileSystem.get returns a cached, shared instance, so if one thread closes it while other threads are still using it, this error appears. Do not close it.

11. An overly complex SQL statement in Spark SQL throws a java.lang.StackOverflowError

Cause: parsing the complex SQL builds a call stack deeper than the JVM's thread stack size. Start spark-sql with --driver-java-options "-Xss10m" to enlarge the stack.

12. Various serialization errors

If a Spark job fails and the error message contains words like Serializable, the failure is probably a serialization problem.
Three rules to observe:
custom classes used as RDD element types must be serializable;
external custom variables referenced inside operator functions must be serializable;
third-party types that cannot be serialized, such as Connection, must not be used as RDD element types or inside operator functions.
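
The first rule can be checked without a cluster: Spark's default JavaSerializer is plain Java serialization, so any class meant to be an RDD element type must survive an ObjectOutputStream round trip. A minimal, Spark-free sketch (UserRecord is a hypothetical element class):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Main {
    // Hypothetical RDD element class: it must implement Serializable,
    // or Spark fails with a NotSerializableException when shipping tasks.
    static class UserRecord implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final int age;
        UserRecord(String name, int age) { this.name = name; this.age = age; }
    }

    // Round-trip through plain Java serialization -- the same mechanism
    // Spark's default JavaSerializer uses for closures and RDD elements.
    static UserRecord roundTrip(UserRecord in) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(in);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (UserRecord) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        UserRecord copy = roundTrip(new UserRecord("alice", 30));
        System.out.println(copy.name + " " + copy.age); // prints: alice 30
    }
}
```

If the class did not implement Serializable, writeObject would throw java.io.NotSerializableException, which is the same root cause Spark reports at task-serialization time.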

13. Various shuffle errors: shuffle file not found / shuffle.FetchFailedException

Cause: usually seen under heavy shuffle load, when the network hiccups or an executor is stuck in GC. Try raising the retry settings: new SparkConf().set("spark.shuffle.io.maxRetries", "60").set("spark.shuffle.io.retryWait", "60s"). If that does not help, give the executors more memory and CPU.

14. Executor & Task Lost

The full error:

WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, aa.local): ExecutorLostFailure (executor lost)
WARN TaskSetManager: Lost task 69.2 in stage 7.0 (TID 1145, 192.168.xx.x): java.io.IOException: Connection from /192.168.xx.x:55483 closed
java.util.concurrent.TimeoutException: Futures timed out after [120 second
ERROR TransportChannelHandler: Connection to /192.168.xx.x:35409 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.network.timeout if this is wrong

Cause: because of network issues or GC pauses, the worker or driver did not receive the heartbeat from the executor or task in time. Raise spark.network.timeout, e.g. to 300s (5 min) or higher; if that still does not help, give the executors more memory and CPU.
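
For example (value illustrative):

```shell
spark-submit --conf spark.network.timeout=300s ...
```

Note that spark.executor.heartbeatInterval should stay well below spark.network.timeout.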

15. Various OOM errors

Check whether the OOM happens on the driver or on the executors, and increase the corresponding memory.

Continuously updated…
