The detailed error is as follows:
- Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at http://spark.apache.org/third-party-projects.html
- at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:639)
- at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:159)
- at com.hx.bigdata.spark.Md2Doris.main(Md2Doris.java:20)
- Caused by: java.lang.ClassNotFoundException: kafka.DefaultSource
- at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
- at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
- at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
- at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
- at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23$$anonfun$apply$15.apply(DataSource.scala:622)
- at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23$$anonfun$apply$15.apply(DataSource.scala:622)
- at scala.util.Try$.apply(Try.scala:192)
- at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23.apply(DataSource.scala:622)
- at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$23.apply(DataSource.scala:622)
- at scala.util.Try.orElse(Try.scala:84)
- at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:622)
- ... 2 more
- 24/04/24 09:42:34 INFO SparkContext: Invoking stop() from shutdown hook
Cause: the Kafka-related dependency is missing. This typically happens when an earlier Spark Streaming program pulled in Spark's Kafka support via
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>${spark.version}</version>
    <!-- <scope>provided</scope> -->
</dependency>
but after switching to Spark Structured Streaming you should instead import:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
    <version>${spark.version}</version>
    <!-- <scope>provided</scope> -->
</dependency>
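For reference, a minimal Structured Streaming read from Kafka in Java (a sketch only; the bootstrap server address and topic name are placeholders, not taken from the original program). The "kafka" source is resolved by spark-sql-kafka-0-10; without that jar, load() throws the ClassNotFoundException shown above.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class KafkaReadSketch {
    public static void main(String[] args) throws StreamingQueryException {
        SparkSession spark = SparkSession.builder()
                .appName("KafkaReadSketch")
                .master("local[*]")          // for local testing only
                .getOrCreate();

        Dataset<Row> df = spark.readStream()
                .format("kafka")             // provided by spark-sql-kafka-0-10
                .option("kafka.bootstrap.servers", "localhost:9092") // placeholder
                .option("subscribe", "test_topic")                   // placeholder
                .load();

        StreamingQuery query = df
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                .writeStream()
                .format("console")
                .start();
        query.awaitTermination();
    }
}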
The detailed error is as follows:
- 24/04/24 09:47:48 ERROR StreamMetadata: Error writing stream metadata StreamMetadata(3c42eeca-593e-40c7-80c8-681693c62ff3) to file:/C:/Users/wsf/AppData/Local/Temp/temporary-49bf6c97-f0dc-4727-b7b2-2fc187abf76d/metadata
- java.io.IOException: (null) entry in command string: null chmod 0644 C:\Users\wsf\AppData\Local\Temp\temporary-49bf6c97-f0dc-4727-b7b2-2fc187abf76d\metadata
- at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:762)
- at org.apache.hadoop.util.Shell.execCommand(Shell.java:859)
- at org.apache.hadoop.util.Shell.execCommand(Shell.java:842)
- at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:661)
- at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:501)
- at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:482)
- at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:498)
- at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:467)
- at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:433)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
- at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:76)
- at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.apply(StreamExecution.scala:124)
- at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.apply(StreamExecution.scala:122)
- at scala.Option.getOrElse(Option.scala:121)
- at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:122)
- at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution.<init>(ContinuousExecution.scala:51)
- at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:246)
- at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:299)
- at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:296)
Cause: Spark is running on Windows and the required hadoop.dll is missing. It can be downloaded from
https://github.com/cdarlint/winutils; place the downloaded hadoop.dll into the C:\Windows\System32 directory.
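An alternative that some setups use is to point Hadoop at a local winutils directory from code before the SparkSession is created. A sketch, assuming the winutils files for your Hadoop version are extracted to C:\hadoop\bin (a path chosen here only for illustration):

import org.apache.spark.sql.SparkSession;

public class WindowsLocalSketch {
    public static void main(String[] args) {
        // Assumption: C:\hadoop\bin contains winutils.exe and hadoop.dll matching
        // the Hadoop version Spark was built against; the path is illustrative.
        System.setProperty("hadoop.home.dir", "C:\\hadoop");

        SparkSession spark = SparkSession.builder()
                .appName("WindowsLocalSketch")
                .master("local[*]")
                .getOrCreate();

        spark.range(10).show();   // quick sanity check that local file I/O works
        spark.stop();
    }
}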
The detailed error is as follows:
- Caused by: org.apache.spark.SparkException: There is no enough memory to build hash map
- at org.apache.spark.sql.execution.joins.UnsafeHashedRelation$.apply(HashedRelation.scala:312)
- at org.apache.spark.sql.execution.joins.HashedRelation$.apply(HashedRelation.scala:108)
- at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:853)
- at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:841)
- at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:86)
Cause: the driver-side memory is too small (the broadcast hash relation in the stack trace above is built on the driver). Increase the driver memory, i.e. set --driver-memory to a larger value.
Cause: the application-jar was not specified on the spark-submit command line.
Cause: a field in the Dataset has no JDBC type mapping; the type of every field must be specified explicitly.
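For illustration only (the column names, table name, connection URL and credentials below are hypothetical), ambiguous columns can be cast to explicit Spark SQL types before writing over JDBC:

import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class JdbcWriteSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("JdbcWriteSketch")
                .master("local[*]")          // for local testing only
                .getOrCreate();

        // Hypothetical source; in practice this is the Dataset that fails to write.
        Dataset<Row> df = spark.read().json("input.json");

        // Cast every column whose JDBC type cannot be inferred (e.g. NullType columns)
        // to an explicit type before the write.
        Dataset<Row> typed = df
                .withColumn("id", col("id").cast("bigint"))
                .withColumn("name", col("name").cast("string"));

        Properties props = new Properties();
        props.put("user", "user");           // placeholder credentials
        props.put("password", "password");

        typed.write().mode("append")
                .jdbc("jdbc:mysql://localhost:3306/test", "target_table", props);
    }
}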
Cause: the ThriftServer has not been used for a long time, so the system cleaned up the parent directory, or the user simply has no write permission on it. Solution: restart the ThriftServer, or configure the directory via spark.local.dir (default /tmp). It can also be set through SPARK_LOCAL_DIRS in spark-env.sh or in the program; multiple paths, separated by commas, can be configured to improve I/O throughput.
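A sketch of setting it programmatically (the paths are placeholders; note that on YARN or standalone the cluster manager's own SPARK_LOCAL_DIRS / LOCAL_DIRS setting takes precedence):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class LocalDirSketch {
    public static void main(String[] args) {
        // Placeholder paths; several directories on different disks can be listed
        // to spread shuffle/temp I/O across them.
        SparkConf conf = new SparkConf()
                .setAppName("LocalDirSketch")
                .setMaster("local[*]")       // for local testing only
                .set("spark.local.dir", "/data1/spark-tmp,/data2/spark-tmp");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        spark.range(10).count();
        spark.stop();
    }
}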
The detailed error is as follows:
- WARN scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.5.159, executor 0): java.lang.ClassCastException:
- cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of
- type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
Cause: the master settings are inconsistent, e.g. the code hard-codes a standalone master via setMaster while the job is submitted in YARN mode.
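One way to avoid the mismatch (a sketch) is to leave the master out of the code entirely and let spark-submit's --master option decide:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MasterFromSubmitSketch {
    public static void main(String[] args) {
        // No setMaster() here: the master comes from spark-submit --master,
        // so the same jar runs unchanged on local, standalone or YARN.
        SparkConf conf = new SparkConf().setAppName("MasterFromSubmitSketch");
        JavaSparkContext sc = new JavaSparkContext(conf);

        long n = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
        System.out.println("count = " + n);
        sc.stop();
    }
}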
Cause: the key class, value class and OutputFormat need to be specified explicitly (see the sketch below).
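Since the original error text is not shown here, only a generic sketch: when saving a pair RDD through the Hadoop output API, pass the key class, value class and OutputFormat explicitly (the output path is a placeholder):

import java.util.Arrays;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SaveWithOutputFormatSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("SaveWithOutputFormatSketch")
                .setMaster("local[*]");      // for local testing only
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaPairRDD<Text, Text> pairs = sc
                .parallelize(Arrays.asList("a", "b", "c"))
                .mapToPair(s -> new Tuple2<>(new Text(s), new Text(s.toUpperCase())));

        // Key class, value class and OutputFormat are all stated explicitly.
        pairs.saveAsNewAPIHadoopFile(
                "/tmp/output-sketch",        // placeholder path
                Text.class,
                Text.class,
                TextOutputFormat.class);

        sc.stop();
    }
}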
The detailed error is as follows:
org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 17 tasks (1062.8 MiB) is bigger than spark.driver.maxResultSize (1024.0 MiB)
Cause: the result set returned to the driver is too large; increase spark.driver.maxResultSize (default 1g).
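A sketch of raising the limit (the value is illustrative; an even better fix is usually to avoid collecting such large results to the driver at all):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class MaxResultSizeSketch {
    public static void main(String[] args) {
        // 2g is only an example value; size it to the results actually returned,
        // and keep it below the driver memory.
        SparkConf conf = new SparkConf()
                .setAppName("MaxResultSizeSketch")
                .setMaster("local[*]")       // for local testing only
                .set("spark.driver.maxResultSize", "2g");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        System.out.println(spark.range(1000).count());
        spark.stop();
    }
}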
Cause: the Spark code obtained an HDFS FileSystem and then closed it when it was no longer needed. FileSystem.get returns a cached, shared instance, so if one thread closes it, other threads still using the same instance hit this bug. Do not close it.
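A sketch of the two safe options: either never close the cached instance, or use FileSystem.newInstance to get a private instance that is safe to close (the path is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileSystemUsageSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Option 1: the cached, shared instance. Do NOT call close() on it;
        // other threads (and Spark itself) may still be using it.
        FileSystem shared = FileSystem.get(conf);
        System.out.println(shared.exists(new Path("/tmp")));   // placeholder path

        // Option 2: a private instance that is safe to close.
        try (FileSystem own = FileSystem.newInstance(conf)) {
            System.out.println(own.exists(new Path("/tmp")));
        }
    }
}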
Cause: the SQL being parsed at runtime is so complex that the resulting call stack is deeper than the JVM's configured stack size. Start spark-sql with the option --driver-java-options "-Xss10m" to solve this.
When a Spark job fails at runtime and the error message contains words such as Serializable, the failure is likely caused by a serialization problem.
Three points to keep in mind about serialization (a sketch follows the list):
custom classes used as RDD element types must be serializable;
external custom variables used inside operator functions must be serializable;
third-party types that cannot be serialized, such as Connection, must not be used as RDD element types or inside operator functions.
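A sketch illustrating the three rules (class names, the JDBC URL and credentials are made up): the element class implements Serializable, the captured variable is serializable, and the JDBC Connection is created inside foreachPartition on the executor instead of being captured from the driver:

import java.io.Serializable;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SerializationRulesSketch {

    // Rule 1: a custom RDD element type must be serializable.
    static class Record implements Serializable {
        final String value;
        Record(String value) { this.value = value; }
    }

    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("SerializationRulesSketch").setMaster("local[*]"));

        // Rule 2: variables captured by operator functions must be serializable.
        String prefix = "row-";   // String is serializable, so capturing it is fine.

        JavaRDD<Record> records = sc
                .parallelize(Arrays.asList("a", "b", "c"))
                .map(s -> new Record(prefix + s));

        // Rule 3: a Connection is not serializable, so it must NOT be captured from
        // the driver; create it inside foreachPartition, on the executor.
        records.foreachPartition(it -> {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password")) {  // placeholder
                while (it.hasNext()) {
                    Record r = it.next();
                    // ... write r.value using conn (omitted in this sketch)
                }
            }
        });

        sc.stop();
    }
}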
Cause: this usually happens during heavy shuffle operations, because of network problems or because the executor is in a GC pause. Try increasing the retry settings, e.g. new SparkConf().set("spark.shuffle.io.maxRetries", "60").set("spark.shuffle.io.retryWait", "60s"); if that still does not help, increase the executors' memory and CPU.
The detailed error is as follows:
- WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, aa.local): ExecutorLostFailure (executor lost)
- WARN TaskSetManager: Lost task 69.2 in stage 7.0 (TID 1145, 192.168.xx.x): java.io.IOException: Connection from /192.168.xx.x:55483 closed
- java.util.concurrent.TimeoutException: Futures timed out after [120 second
- ERROR TransportChannelHandler: Connection to /192.168.xx.x:35409 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.network.timeout if this is wrong
Cause: because of network problems or GC, the worker or executor did not receive the heartbeat from the executor or task. Increase spark.network.timeout, e.g. to 300s (5 min) or higher as needed. If that still does not help, increase the executors' memory and CPU.
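A sketch of the configuration adjustments mentioned above, set together in one place (the values follow the suggestions in the text; tune them to the actual workload):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class TimeoutTuningSketch {
    public static void main(String[] args) {
        // Values follow the suggestions above; adjust to the real workload.
        SparkConf conf = new SparkConf()
                .setAppName("TimeoutTuningSketch")
                .setMaster("local[*]")                   // for local testing only
                .set("spark.network.timeout", "300s")
                .set("spark.shuffle.io.maxRetries", "60")
                .set("spark.shuffle.io.retryWait", "60s");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        System.out.println(spark.range(1000).count());
        spark.stop();
    }
}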
Check whether it is the driver side or the executor side, and increase the memory accordingly.
Continuously updated...