When Hive's partitioning feature is used, the generated partition names and partition directories are taken from the data in the partition column. In my case that column held Chinese characters, so Chinese text ended up in the partition names and directories. This later caused DROP TABLE on that table to hang Hive and block every subsequent operation.
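For context, here is a minimal sketch of the kind of statements involved. The original DDL is not shown in the post, so the column list and the source table test.student are assumptions; the table name, partition column, and two buckets are taken from the log below (table test.partition_bucket, partition column sex, two Stage-2 reducers).

-- enable dynamic partitioning so partition values come from the data
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

create table test.partition_bucket (
    id int,
    name string
)
partitioned by (sex string)          -- partition value is taken from the data itself
clustered by (id) into 2 buckets     -- matches the 2 reducers of Stage-2 in the log
row format delimited fields terminated by ',';

-- rows whose sex column holds Chinese text ('男', '女') produce Chinese
-- partition names such as sex=女 and HDFS directories such as .../sex=女/
insert overwrite table test.partition_bucket partition (sex)
select id, name, sex from test.student;

Running an insert like this produced the job output below: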
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20210624210229_677de744-4b78-4cd1-a26b-fab1a4c68b43
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Selecting local mode for task: Stage-1
Starting Job = job_1624457024938_0005, Tracking URL = http://hadoop01:8088/proxy/application_1624457024938_0005/
Kill Command = /usr/local/myHadoop/bin/hadoop job -kill job_1624457024938_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2021-06-24 21:02:45,626 Stage-1 map = 0%, reduce = 0%
2021-06-24 21:02:52,718 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.43 sec
2021-06-24 21:03:04,085 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.56 sec
MapReduce Total cumulative CPU time: 3 seconds 560 msec
Ended Job = job_1624457024938_0005
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Cannot run job locally: Number of reducers (= 2) is more than 1
Starting Job = job_1624457024938_0006, Tracking URL = http://hadoop01:8088/proxy/application_1624457024938_0006/
Kill Command = /usr/local/myHadoop/bin/hadoop job -kill job_1624457024938_0006
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 2
2021-06-24 21:03:20,052 Stage-2 map = 0%, reduce = 0%
2021-06-24 21:03:30,707 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.98 sec
2021-06-24 21:03:38,054 Stage-2 map = 100%, reduce = 50%, Cumulative CPU 2.73 sec
2021-06-24 21:03:43,186 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 4.33 sec
MapReduce Total cumulative CPU time: 4 seconds 330 msec
Ended Job = job_1624457024938_0006
Loading data to table test.partition_bucket partition (sex=null)
Failed with exception MetaException(message:Expecting a partition with name sex=女, but metastore is returning a partition with name sex=?.)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. MetaException(message:Expecting a partition with name sex=女, but metastore is returning a partition with name sex=?.)
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 3.56 sec  HDFS Read: 8555  HDFS Write: 975  SUCCESS
Stage-Stage-2: Map: 1  Reduce: 2  Cumulative CPU: 4.33 sec  HDFS Read: 12029  HDFS Write: 955  SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 890 msec
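The key line is the MetaException: Hive wrote the directory sex=女 to HDFS, but when it asked the metastore for that partition back it got sex=?, meaning the Chinese character was mangled on its way through the metastore database. A common cause is a MySQL-backed metastore whose tables were created as latin1. This fix is an assumption on my part, not something shown in the original post, so back up the metastore database first; if latin1 is indeed the culprit, switching the partition-related columns to utf8 lets Chinese partition names round-trip intact:

-- run in the MySQL client against the metastore database
-- (its name varies per installation; 'hive' is assumed here)
use hive;
alter table PARTITIONS         modify PART_NAME    varchar(767)  character set utf8;
alter table PARTITION_KEY_VALS modify PART_KEY_VAL varchar(256)  character set utf8;
alter table PARTITION_KEYS     modify PKEY_COMMENT varchar(4000) character set utf8;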
When the import into the partitioned table finished, I did not notice that it had produced a FAILED, and I simply carried on with the rest of my work. But when I later tried to drop the table, Hive hung on that single command and nothing else could be done.
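In hindsight the hang is plausible: DROP TABLE makes Hive enumerate and delete the table's partitions through the metastore, and the half-registered sex=? entry left behind by the failed MoveTask presumably keeps that call from completing. One way to confirm this (a sketch against the standard MySQL metastore schema; the table names may differ in other setups, so verify first) is to query the metastore database directly:

-- find the partition rows the failed MoveTask left behind
select p.PART_ID, p.PART_NAME
from PARTITIONS p
join TBLS t on p.TBL_ID = t.TBL_ID
where t.TBL_NAME = 'partition_bucket';
-- a PART_NAME like 'sex=?' is the mangled entry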
So I decided to try brute force and restarted Hive, only to be met with the following error:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/phoenix-4.13.1-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/myHadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:310)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:290)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:266)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:558)
    ... 9 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3643)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:236)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:221)
    ... 14 more
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_all_functions(ThriftHiveMetastore.java:3716)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_all_functions(ThriftHiveMetastore.java:3704)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllFunctions(HiveMetaStoreClient.java:2328)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke
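The TTransportException at the bottom of this trace says only that the Hive CLI could not complete a thrift call (get_all_functions) to the metastore service; in other words, the metastore process was down or wedged, not that anything new had broken. The usual recovery is to restart the metastore service (hive --service metastore) and, if the mangled partition row still blocks the drop afterwards, to remove it from the metastore database by hand. A last-resort sketch, under the same schema assumptions as the queries above (back up the database first):

-- substitute the PART_ID returned by the earlier SELECT (21 is a placeholder);
-- delete the child rows before the partition row itself
delete from PARTITION_KEY_VALS where PART_ID = 21;
delete from PARTITION_PARAMS   where PART_ID = 21;
delete from PARTITIONS         where PART_ID = 21;
-- afterwards, drop table test.partition_bucket should go through in Hive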