
Ubuntu 16.04 LTS: programming against Hive from IDEA fails with "Table or view not found"

Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir

Table of Contents

        System:

        Environment:

        Problem:

        Cause:

        Solution:

        References:


        This problem (Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found: emp2; line 1 pos 14, together with Caused by: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'emp2' not found in database 'default';) bothered me for quite a while. Last year a senior classmate set up the environment for me; doing it myself this time, I ran into plenty of problems. Many blog posts simply copy from one another and never actually solve it. The problem, its cause, and the solution are described in detail below.

        System:

        Ubuntu 16.04 LTS.

        Environment:

        ① IDE: IDEA 191.7479.19;

        ② Hadoop: Hadoop 2.6.0;

        ③ Spark: spark-2.4.3-bin-2.6.0-cdh5.9.3;

        ④ Hive: apache-hive-3.1.1-bin;

        ⑤ Maven: apache-maven-3.6.1;

        ⑥ MySQL: MySQL 5.7.26;

        ⑦ JDBC connector: mysql-connector-java-5.1.28.

        Problem:

        Hive can read the data from the terminal, as shown in Figure 1, but the same access from a program in IDEA fails, as shown in Figure 2; a minimal sketch of such a program follows the figures.

Figure 1: Hive accesses the database successfully from the terminal

Figure 2: the same access from a program in IDEA fails
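        For context, here is a minimal sketch of the kind of program involved (the application name TestHive appears in the log below and the table emp2 in the error above; everything else, including running with a local master, is an assumption rather than the original code):

import org.apache.spark.sql.SparkSession

object TestHive {
  def main(args: Array[String]): Unit = {
    // Hive support must be enabled, otherwise Spark uses only its built-in catalog.
    val spark = SparkSession.builder()
      .appName("TestHive")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()

    // This is the call that fails with
    // AnalysisException: Table or view not found: emp2
    spark.sql("select * from emp2").show()

    spark.stop()
  }
}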

 

        The detailed messages are as follows:

        ① show tables:

  log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
  log4j:WARN Please initialize the log4j system properly.
  log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
  Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
  19/07/26 11:03:13 INFO SparkContext: Running Spark version 2.4.3
  19/07/26 11:03:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  19/07/26 11:03:14 INFO SparkContext: Submitted application: TestHive
  19/07/26 11:03:14 INFO SecurityManager: Changing view acls to: hadoop001
  19/07/26 11:03:14 INFO SecurityManager: Changing modify acls to: hadoop001
  19/07/26 11:03:14 INFO SecurityManager: Changing view acls groups to:
  19/07/26 11:03:14 INFO SecurityManager: Changing modify acls groups to:
  19/07/26 11:03:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop001); groups with view permissions: Set(); users with modify permissions: Set(hadoop001); groups with modify permissions: Set()
  19/07/26 11:03:14 INFO Utils: Successfully started service 'sparkDriver' on port 33193.
  19/07/26 11:03:14 INFO SparkEnv: Registering MapOutputTracker
  19/07/26 11:03:14 INFO SparkEnv: Registering BlockManagerMaster
  19/07/26 11:03:14 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
  19/07/26 11:03:14 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
  19/07/26 11:03:14 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-ec9ca4f5-2f81-4bc6-be79-0d46c2dc94da
  19/07/26 11:03:14 INFO MemoryStore: MemoryStore started with capacity 1940.7 MB
  19/07/26 11:03:14 INFO SparkEnv: Registering OutputCommitCoordinator
  19/07/26 11:03:14 INFO Utils: Successfully started service 'SparkUI' on port 4040.
  19/07/26 11:03:14 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop001:4040
  19/07/26 11:03:14 INFO Executor: Starting executor ID driver on host localhost
  19/07/26 11:03:14 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45071.
  19/07/26 11:03:14 INFO NettyBlockTransferService: Server created on hadoop001:45071
  19/07/26 11:03:14 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
  19/07/26 11:03:14 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop001, 45071, None)
  19/07/26 11:03:14 INFO BlockManagerMasterEndpoint: Registering block manager hadoop001:45071 with 1940.7 MB RAM, BlockManagerId(driver, hadoop001, 45071, None)
  19/07/26 11:03:14 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop001, 45071, None)
  19/07/26 11:03:14 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop001, 45071, None)
  19/07/26 11:03:14 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hadoop001/Documents/code/Scala/preprocessingData/spark-warehouse').
  19/07/26 11:03:14 INFO SharedState: Warehouse path is 'file:/home/hadoop001/Documents/code/Scala/preprocessingData/spark-warehouse'.
  19/07/26 11:03:15 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
  19/0
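        The two INFO SharedState lines near the end are the telltale symptom: hive.metastore.warehouse.dir is 'null', meaning Spark never found hive-site.xml on the classpath, so it quietly created a fresh local metastore and warehouse under the project directory, where no table emp2 exists. Below is a minimal sketch of attaching the real metastore from code, assuming a Hive metastore service is reachable at thrift://hadoop001:9083 (a hypothetical URI; putting hive-site.xml into src/main/resources achieves the same thing declaratively):

import org.apache.spark.sql.SparkSession

object TestHiveWithMetastore {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TestHive")
      .master("local[*]")
      // Assumption: a metastore service was started (hive --service metastore)
      // and this URI matches hive.metastore.uris in hive-site.xml.
      .config("hive.metastore.uris", "thrift://hadoop001:9083")
      .enableHiveSupport()
      .getOrCreate()

    // With the real metastore attached, emp2 should be listed.
    spark.sql("show tables").show()
    spark.stop()
  }
}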