
Fixing the hive count(*) from table error

My environment:
centos_7.05
hadoop-2.8.3
hive-1.2.1
jdk-1.8

The cluster was configured with a mix of internal and external networks, and that is exactly where the problem lies: mixing Alibaba Cloud public IPs with private IPs is a huge trap. Whatever you do, don't mix them.

After starting the cluster, start Hive's hiveserver2 service, then test the connection to Hive:

  1. On the master node, run ./hiveserver2 from hive/bin
  2. Open a new session window on master and use beeline to test the connection
# replace localhost with the public IP

beeline -u jdbc:hive2://localhost:10000 -n hadoop
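Before firing off queries, it can also help to confirm that HiveServer2 is actually listening. A minimal check, assuming the default Thrift port 10000 and that netstat is installed on the box:

# verify HiveServer2 is listening on the default Thrift port
netstat -nltp | grep 10000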
  1. Running select count(*) from the db_user table threw an error
  2. The output was as follows:
select count(*) from db_user;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
Query ID = root_20171012025616_0d673cbd-6c2b-421b-bffc-98e2c4d8d676
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1507801793754_0002, Tracking URL = http://192.168.163.130:8088/proxy/application_1507801793754_0002/
Kill Command = /usr/hadoop/bin/hadoop job -kill job_1507801793754_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-10-12 02:56:29,615 Stage-1 map = 0%, reduce = 0%
2017-10-12 02:56:36,331 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2017-10-12 02:56:42,700 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.64 sec
MapReduce Total cumulative CPU time: 2 seconds 640 msec
java.io.IOException: java.lang.OutOfMemoryError: PermGen space
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:338)
at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:390)
at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:583)
at org.apache.hadoop.mapreduce.Job$5.run(Job.java:680)
at org.apache.hadoop.mapreduce.Job$5.run(Job.java:677)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:677)
at org.apache.hadoop.mapred.JobClient$NetworkedJob.getTaskCompletionEvents(JobClient.java:349)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.computeReducerTimeStatsPerJob(HadoopJobExecHelper.java:612)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:570)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:424)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2182)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1838)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1525)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1236)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1226)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2625)
at java.lang.Class.getMethod0(Class.java:2866)
at java.lang.Class.getMethod(Class.java:1676)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.getReturnProtoType(ProtobufRpcEngine.java:293)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:258)
at com.sun.proxy.$Proxy72.getTaskAttemptCompletionEvents(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBClientImpl.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:324)
at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:390)
at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:583)
at org.apache.hadoop.mapreduce.Job$5.run(Job.java:680)
at org.apache.hadoop.mapreduce.Job$5.run(Job.java:677)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
Ended Job = job_1507801793754_0002 with exception 'java.io.IOException(java.lang.OutOfMemoryError: PermGen space)'
FAILED: Hive Internal Error: java.lang.OutOfMemoryError(PermGen space)
java.lang.OutOfMemoryError: PermGen space

MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.64 sec HDFS Read: 7728 HDFS Write: 101 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 640 msec
AsyncLogger error handling event seq=105, value='[ERROR calling class org.apache.logging.log4j.core.async.RingBufferLogEvent.toString(): java.lang.NullPointerException]':
java.lang.OutOfMemoryError: PermGen space
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
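A side note on the error itself: what surfaces is a PermGen OutOfMemoryError in the client JVM. PermGen only exists on JDK 7 and earlier (JDK 8 replaced it with Metaspace), and when the OOM really is the problem, the usual mitigation is to enlarge the permanent generation for the Hive client. A minimal sketch, assuming Hive picks up HADOOP_CLIENT_OPTS from conf/hive-env.sh as it normally does:

# in $HIVE_HOME/conf/hive-env.sh -- raise PermGen for the Hive client JVM (JDK <= 7 only)
export HADOOP_CLIENT_OPTS="-XX:MaxPermSize=512m $HADOOP_CLIENT_OPTS"

In my case, though, the OOM turned out to be only a symptom of something else entirely.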
After some digging I found the cause. Back then I followed who-knows-what article and set up the master node with its private IP while the slave nodes used public IPs. Every node should use its own private IP... let's observe a minute of silence here.

The fix: edit /etc/hosts (vim /etc/hosts) on every node and change all of the IPs to private IPs.
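For illustration, the hosts file should look like the sketch below on every node; the addresses and hostnames here are hypothetical placeholders, not the actual cluster's values:

# /etc/hosts -- identical on every node, private (VPC) addresses only
172.16.0.10  master
172.16.0.11  slave1
172.16.0.12  slave2

The point is that the Hadoop daemons resolve each other by hostname, so every hostname must map to an address the nodes can bind and reach from inside the VPC.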

  1. Shut down the cluster and restart it
  2. Start Hive's ./hiveserver2
  3. Open a new session: beeline -u jdbc:hive2://localhost:10000 -n hadoop
  4. select count(*) from db_user now runs perfectly in Hive
  5. select count(1) from db_user; runs perfectly too (a scripted version of this check follows below)
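If you would rather script the check than type it interactively, beeline's -e flag runs a statement and exits. A sketch using the same connection string and the table from this article:

# run the sanity queries non-interactively
beeline -u jdbc:hive2://localhost:10000 -n hadoop -e "select count(*) from db_user;"
beeline -u jdbc:hive2://localhost:10000 -n hadoop -e "select count(1) from db_user;"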

Bottom line: always read the logs.
