【Exception Description】
When executing SQL directly from the Hive interface:
Task with the most failures(4):
-----
Task ID:task_1479051658015_121922_r_000000
URL:http://hnn01.ns4f.hi.ipm.nokia.com:8088/taskdetails.jsp?jobid=job_1479051658015_121922&tipid=task_1479051658015_121922_r_000000
-----
Diagnostic Messages for this Task:
AttemptID:attempt_1479051658015_121922_r_000000_3 Timed out after 300 secs
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 626 Reduce: 1009 Cumulative CPU: 87485.1 sec HDFS Read: 177424895078 HDFS Write: 150885802688 FAIL
Total MapReduce CPU Time Spent: 1 days 0 hours 18 minutes 5 seconds 100 msec
【Solution 1】
This problem arises when the table holds a very large volume of data. The cause appears to be memory-related, possibly an inadequate garbage-collection setup on the cluster. A temporary workaround is to edit mapred-site.xml under /usr/hdp/2.4.2.0-258/hive/conf.
Before the change:
<proper
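The original snippet is truncated, so the exact property being changed is unknown. As a hedged sketch: the log reports "Timed out after 300 secs", which matches a mapreduce.task.timeout of 300000 ms, so one plausible version of this fix is to raise that timeout in mapred-site.xml. The 1800000 ms value below is an assumption for illustration, not the post's confirmed setting:

```xml
<!-- Hypothetical mapred-site.xml change: raise the MapReduce task timeout.
     mapreduce.task.timeout is a standard Hadoop parameter (milliseconds);
     300000 ms corresponds to the 300-second timeout seen in the log. -->
<property>
  <name>mapreduce.task.timeout</name>
  <!-- assumed new value: 30 minutes, so long-running reduce attempts are
       not killed before they manage to report progress -->
  <value>1800000</value>
</property>
```

Note that a larger timeout only buys time for slow-but-progressing tasks; if the reducers are genuinely stuck (e.g. in GC thrashing), the underlying memory problem still needs to be addressed.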