
【Hive】Hive reports an error when launching a MapReduce job: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask


While testing Hive today, I ran into the following error:

hive (default)> INSERT INTO test VALUES ('kuroneko359',20);
Query ID = root_20231207144941_56661aca-5d0c-40c5-83b3-1631434f25a5
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
2023-12-07 14:49:43,919 INFO [bf528afe-a11e-4444-98a7-aad77cef2125 main] client.RMProxy: Connecting to ResourceManager at bigdata1/192.168.72.101:8032
2023-12-07 14:49:44,095 INFO [bf528afe-a11e-4444-98a7-aad77cef2125 main] client.RMProxy: Connecting to ResourceManager at bigdata1/192.168.72.101:8032
Starting Job = job_1701931585546_0001, Tracking URL = http://bigdata1:8088/proxy/application_1701931585546_0001/
Kill Command = /opt/module/hadoop/bin/mapred job -kill job_1701931585546_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2023-12-07 14:49:50,949 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1701931585546_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
hive (default)>

From the error message, it looks like something went wrong when Hive called MapReduce.

Even after checking the logs, there was no clear indication of what caused the error.
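If the Hive console output is not enough, the full logs of the failed YARN application (its id appears in the output above) can also be pulled directly, assuming log aggregation is enabled, for example:

yarn logs -applicationId application_1701931585546_0001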

So I decided to run a MapReduce job directly through Hadoop to check whether MapReduce itself was working:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 3 10

Running this command confirmed that launching MapReduce did indeed fail.

From the error reported by the program, we can tell that MapReduce could not find the location of Java, which is why the job could not run.

Solution:

Add JAVA_HOME to Hadoop's yarn-site.xml, and don't forget to distribute the file to the other nodes afterwards.
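The exact property is not reproduced here; one commonly used way to make JAVA_HOME visible to YARN containers through yarn-site.xml is the yarn.nodemanager.env-whitelist property, shown below as a sketch (the value is an assumption, adjust it to your environment):

<!-- Assumed example: whitelist JAVA_HOME (plus the usual Hadoop variables)
     so that NodeManagers pass them through to launched containers. -->
<property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>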

Running the original command again, a new error appeared.

Solution:

Enter the following in the terminal:

echo $(hadoop classpath)

This prints the Hadoop classpath; add the result to yarn-site.xml:

<property>
    <name>yarn.application.classpath</name>
    <value>/opt/module/hadoop/etc/hadoop:/opt/module/hadoop/share/hadoop/common/lib/*:/opt/module/hadoop/share/hadoop/common/*:/opt/module/hadoop/share/hadoop/hdfs:/opt/module/hadoop/share/hadoop/hdfs/lib/*:/opt/module/hadoop/share/hadoop/hdfs/*:/opt/module/hadoop/share/hadoop/mapreduce/lib/*:/opt/module/hadoop/share/hadoop/mapreduce/*:/opt/module/hadoop/share/hadoop/yarn:/opt/module/hadoop/share/hadoop/yarn/lib/*:/opt/module/hadoop/share/hadoop/yarn/*</value>
</property>

Save and distribute yarn-site.xml, then restart YARN.
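For example, on a cluster laid out like this one (the bigdata2 and bigdata3 hostnames below are assumptions, substitute your own worker nodes):

# Assumed hostnames: copy the edited yarn-site.xml to the other nodes
scp /opt/module/hadoop/etc/hadoop/yarn-site.xml bigdata2:/opt/module/hadoop/etc/hadoop/
scp /opt/module/hadoop/etc/hadoop/yarn-site.xml bigdata3:/opt/module/hadoop/etc/hadoop/

# Restart YARN from the ResourceManager node (bigdata1)
/opt/module/hadoop/sbin/stop-yarn.sh
/opt/module/hadoop/sbin/start-yarn.sh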

Running the earlier Hadoop MapReduce test job again, it now completed successfully.

Going back into Hive and running the original statement, the problem was solved.
