First of all, cur.execute("show tables") and
cur.execute("select * from table limit 100")
both run successfully;
the error only appears when the executed statement contains count(1). Now we need to find out why.
First, let's look at the exact error message:
Traceback (most recent call last):
  File "C:/Users/admin/PycharmProjects/pythonhive/0914xwqy.py", line 7, in <module>
    cur.execute("select enttype_cn, count(1) as num from xwqy.onlyxwqy group by enttype_cn")
  File "C:\Users\admin\PycharmProjects\pythonhive\venv\lib\site-packages\impala\hiveserver2.py", line 331, in execute
    self._wait_to_finish()  # make execute synchronous
  File "C:\Users\admin\PycharmProjects\pythonhive\venv\lib\site-packages\impala\hiveserver2.py", line 412, in _wait_to_finish
    raise OperationalError(resp.errorMessage)
impala.error.OperationalError: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask.
Permission denied: user=admin, access=EXECUTE, inode="/tmp/hadoop-yarn":anonymous:supergroup:drwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1879)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1863)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1808)
    at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1905)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:876)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:533)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)
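The key line is `Permission denied: user=admin, access=EXECUTE, inode="/tmp/hadoop-yarn":anonymous:supergroup:drwx------`. The mode string `drwx------` grants rwx to the owner (`anonymous`) and nothing to group or others, so any other user is denied. A small plain-Python sketch (the `check_access` helper is hypothetical, for illustration only) of how such rwx bits are evaluated:

```python
def check_access(mode, owner, group, user, user_groups, want):
    """Evaluate a Unix-style mode string like 'drwx------'.

    mode: 10 chars (file type + rwx triplets for owner/group/other)
    want: one of 'r', 'w', 'x'
    """
    owner_bits, group_bits, other_bits = mode[1:4], mode[4:7], mode[7:10]
    if user == owner:
        bits = owner_bits          # owner triplet applies
    elif group in user_groups:
        bits = group_bits          # group triplet applies
    else:
        bits = other_bits          # everyone else
    return want in bits

# The situation from the traceback: /tmp/hadoop-yarn is
# anonymous:supergroup:drwx------ and the query runs as 'admin'.
print(check_access("drwx------", "anonymous", "supergroup", "admin", [], "x"))      # False -> denied
print(check_access("drwx------", "anonymous", "supergroup", "anonymous", [], "x"))  # True
```

So only `anonymous` (or an HDFS superuser) can traverse that directory, which is exactly what the NameNode reports.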
Following a fix from some blog posts online, I ran hadoop fs -chown -R root:root /tmp.
It didn't help.
I asked my team lead; true to form, he nailed it in one shot.
The point: statements like show tables or select ... limit 100 are answered by HiveServer2 directly, but count(1) with group by launches a MapReduce job (note mr.MapRedTask in the error), which needs access to /tmp/hadoop-yarn on HDFS. The session wasn't connecting as a user with rights there, so the job failed. The fix is simply to connect with an explicit, privileged user.
In DBVisualizer, make the corresponding change in the connection settings (connect as root).
In Python, change the code like this:
from impala.dbapi import connect
import datetime
import time

start = datetime.datetime.now()
print("Start time: " + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time())))

# The fix: pass an explicit user (root), so the Hive session has
# permission to run MapReduce jobs under /tmp/hadoop-yarn.
conn = connect(host='13.1.3.45', port=10000, auth_mechanism='PLAIN', user='root')
cur = conn.cursor()
cur.execute("select enttype_cn, count(1) as num from xwqy.onlyxwqy group by enttype_cn")
row = cur.fetchall()
print(row)

end = datetime.datetime.now()
print("End time: " + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time())))
print("Total time: " + str(end - start))

cur.close()
conn.close()
With that, the query runs successfully.
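For reference, the query itself just counts rows per enttype_cn. The same aggregation, sketched in plain Python over hypothetical sample rows standing in for xwqy.onlyxwqy:

```python
from collections import Counter

# Hypothetical rows as (enttype_cn,) tuples.
rows = [("个体工商户",), ("有限责任公司",), ("个体工商户",), ("个体工商户",)]

# Equivalent of: select enttype_cn, count(1) as num ... group by enttype_cn
num_by_type = Counter(r[0] for r in rows)
print(num_by_type)  # Counter({'个体工商户': 3, '有限责任公司': 1})
```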