
Hadoop Environment Setup (6)

Hive Setup

1. Place the apache-hive-3.1.2-bin.tar.gz archive under /opt/software

2. Extract the archive

cd /opt/software/

tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/

3. Edit the system environment variables

vi /etc/profile

Add the following (the HIVE_HOME path must match the actual install location from step 2):

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin

export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

Reload the environment variables

source /etc/profile
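
A quick sanity check after sourcing (the PATH entry added above means the hive launcher should now resolve):

echo $HIVE_HOME

which hive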

4. Edit the Hive environment script

cd /opt/module/apache-hive-3.1.2-bin/bin

vi hive-config.sh

Append the following

export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf

5. Copy the Hive configuration file

cd /opt/module/apache-hive-3.1.2-bin/conf/

cp hive-default.xml.template hive-site.xml

6. Edit the Hive configuration file (hive-site.xml), locating each of the following properties and setting the values shown

<property>

    <name>javax.jdo.option.ConnectionDriverName</name>

    <value>com.mysql.cj.jdbc.Driver</value>

    <description>Driver class name for a JDBC metastore</description>

  </property>

<property>

    <name>javax.jdo.option.ConnectionUserName</name>

    <value>root</value>

    <description>Username to use against metastore database</description>

  </property>

<property>

    <name>javax.jdo.option.ConnectionPassword</name>

    <value>root123</value>

    <description>password to use against metastore database</description>

  </property>

<property>

    <name>javax.jdo.option.ConnectionURL</name>

<value>jdbc:mysql://192.168.202.131:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>

    <description>

      JDBC connect string for a JDBC metastore.

      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.

      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.

    </description>

  </property>

  <property>

    <name>datanucleus.schema.autoCreateAll</name>

    <value>true</value>

    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>

  </property>

<property>

    <name>hive.metastore.schema.verification</name>

    <value>false</value>

    <description>

      Enforce metastore schema version consistency.

      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic

            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures

            proper metastore schema migration. (Default)

      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.

    </description>

  </property>

<property>

    <name>hive.exec.local.scratchdir</name>

    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>

    <description>Local scratch space for Hive jobs</description>

  </property>

  <property>

<name>system:java.io.tmpdir</name>

<value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>

<description/>

</property>

  <property>

    <name>hive.downloaded.resources.dir</name>

<value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>

    <description>Temporary local directory for added resources in the remote file system.</description>

  </property>

<property>

    <name>hive.querylog.location</name>

    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>

    <description>Location of Hive run time structured log file</description>

  </property>

  <property>

    <name>hive.server2.logging.operation.log.location</name>

<value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>

    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>

  </property>

  <property>

    <name>hive.metastore.db.type</name>

    <value>mysql</value>

    <description>

      Expects one of [derby, oracle, mysql, mssql, postgres].

      Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.

    </description>

  </property>

  <property>

    <name>hive.cli.print.current.db</name>

    <value>true</value>

    <description>Whether to include the current database in the Hive prompt.</description>

  </property>

  <property>

    <name>hive.cli.print.header</name>

    <value>true</value>

    <description>Whether to print the names of the columns in query output.</description>

  </property>

  <property>

    <name>hive.metastore.warehouse.dir</name>

    <value>/user/hive/warehouse</value>

    <description>location of default database for the warehouse</description>

  </property>
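
The scratch and temp directories referenced above may not be created automatically; a safe precaution (assuming the /opt/module install path used throughout this guide) is to create them up front:

mkdir -p /opt/module/apache-hive-3.1.2-bin/tmp

mkdir -p /opt/module/apache-hive-3.1.2-bin/iotmp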

7. Upload the JDBC driver mysql-connector-java-5.1.27.jar to /opt/module/apache-hive-3.1.2-bin/lib/. Note that com.mysql.cj.jdbc.Driver (configured above) is the Connector/J 8.x driver class; if you use the 5.1.27 jar, set javax.jdo.option.ConnectionDriverName to com.mysql.jdbc.Driver instead.

8. Make sure a database named hive exists in MySQL
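
If it does not exist yet, it can be created from the MySQL client (a minimal sketch, using the root/root123 credentials configured in hive-site.xml above):

mysql -uroot -proot123

create database hive;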

9. Initialize the metastore database

 schematool -dbType mysql -initSchema
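
Optionally, the resulting schema can then be inspected with schematool's info mode:

schematool -dbType mysql -info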

10. Make sure Hadoop is running
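
If it is not already up, a typical start sequence is as follows (assuming the standard Hadoop sbin scripts are on the PATH per step 3; jps should then list NameNode, DataNode, and the YARN daemons):

start-dfs.sh

start-yarn.sh

jps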

11. Start Hive

hive

12. Check that the startup succeeded

show databases;
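
The same check can be run non-interactively with Hive's -e option (a small convenience, assuming the PATH setup from step 3):

hive -e "show databases;"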

13. Basic usage of common commands

(1) Start Hive

[linux01 hive]$ bin/hive

(2) Show the databases

hive>show databases;

(3) Use the default database

hive>use default;

(4) Show the tables in the default database

hive>show tables;

(5) Create the student table, declaring '\t' as the field delimiter

hive> create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

(6) Load the file /usr/local/data/student.txt into the student table.

   Create the student.txt file on Linux and edit its contents, then upload the file to HDFS
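
For illustration, student.txt might contain tab-separated rows such as the following (hypothetical sample data); also make sure the target HDFS directory exists before uploading:

1001	zhangsan

1002	lisi

hadoop fs -mkdir -p /data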

hadoop fs -put /usr/local/data/student.txt /data

load data inpath '/data/student.txt' overwrite into table student;

Alternatively, load straight from the local filesystem; note that local inpath takes the Linux path, not the HDFS path:

load data local inpath '/usr/local/data/student.txt' into table student;

(7) Query the table

hive> select * from student;

Query just the id column

hive> select id from student;

(8) Exit the Hive shell:

hive(default)>exit;

(9) Viewing the HDFS filesystem from within the Hive CLI

   hive(default)>dfs -ls /;
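
Because dfs runs through the current session's Hadoop client rather than launching a new JVM, it is convenient for quick checks, e.g. inspecting the warehouse directory configured earlier:

hive(default)> dfs -ls /user/hive/warehouse;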
