
Configuring Hive on a Spark Cluster

For the Spark cluster setup itself, see: Spark集群搭建-CSDN博客

1. First, download and unpack a Hive tarball that matches your system; this guide uses Hive 3.1.2.

2. Configure the environment variables

  vim ~/.bashrc
  ## add the following two lines to ~/.bashrc
  export HIVE_HOME=/usr/local/hive
  export PATH=$PATH:$HIVE_HOME/bin
  ## reload the file so the current shell picks up the changes
  source ~/.bashrc
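To confirm the variables took effect, a quick check can be run in the shell (the expected paths below assume the values set above; your version string may differ):

```shell
# Verify HIVE_HOME and that the hive launcher is now on PATH
echo $HIVE_HOME      # expect /usr/local/hive
which hive           # expect /usr/local/hive/bin/hive
hive --version       # prints the installed Hive version, e.g. 3.1.2
```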

3. Edit hive-site.xml

  cd $HIVE_HOME/conf
  vim hive-site.xml

  <?xml version="1.0" encoding="UTF-8" standalone="no"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
      <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
      <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
      <description>username to use against metastore database</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hive</value>
      <description>password to use against metastore database</description>
    </property>
  </configuration>
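With a MySQL-backed metastore as configured above, two extra steps are usually needed before the first start: the MySQL JDBC driver jar must be on Hive's classpath, and the metastore schema must be initialized once. A hedged sketch (the connector jar filename is an example — use whatever driver jar you have locally):

```shell
# Put the MySQL JDBC driver into Hive's lib directory
# (mysql-connector-java-5.1.49.jar is an example filename)
cp mysql-connector-java-5.1.49.jar $HIVE_HOME/lib/

# One-time initialization of the metastore schema in MySQL
schematool -dbType mysql -initSchema
```

`schematool` ships with Hive under $HIVE_HOME/bin, so it is on PATH after step 2.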

4. Start Hive

Startup may fail with an error caused by a guava version conflict between Hadoop and Hive.

Fix:

  ## remove the old guava jar shipped with Hive (run in $HIVE_HOME/lib)
  rm -rf guava-19.0.jar
  ## copy Hadoop's newer guava jar into Hive's lib directory
  ## (run from Hadoop's lib directory, e.g. $HADOOP_HOME/share/hadoop/common/lib)
  cp guava-27.0-jre.jar /opt/bigdata/apache-hive-3.1.2-bin/lib
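To see the version mismatch for yourself before (or after) the fix, you can list the guava jars each project bundles; the paths below assume a standard Hadoop 3.x layout and that $HADOOP_HOME is set:

```shell
# Hadoop's bundled guava (the newer one)
ls $HADOOP_HOME/share/hadoop/common/lib/guava-*.jar
# Hive's bundled guava (the older one that gets deleted)
ls $HIVE_HOME/lib/guava-*.jar
```

After the fix, only the guava-27.0-jre.jar copy should remain under Hive's lib.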

Start Hive again:

hive

It should now start successfully.
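A quick smoke test confirms that the CLI can actually reach the metastore (the database name test_db here is just an example):

```shell
# Run a few statements non-interactively; any metastore problem
# will surface as an error here
hive -e "SHOW DATABASES; CREATE DATABASE IF NOT EXISTS test_db; SHOW TABLES IN test_db;"
```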
