
Hive 2.1.1 Cluster Setup (Kylin deployment)

Software environment:

Linux: CentOS 6.7
Hadoop: 2.6.5
ZooKeeper: 3.4.8

Host configuration:

Three machines, m1, m2, and m3; the user on every host is centos.

192.168.179.201: m1
192.168.179.202: m2
192.168.179.203: m3

m1: Zookeeper, NameNode, DataNode, ResourceManager, NodeManager, Master, Worker
m2: Zookeeper, NameNode, DataNode, ResourceManager, NodeManager, Worker
m3: Zookeeper, DataNode, NodeManager, Worker


Cluster setup (note: Hive only needs to be installed on one node)

1. Download the Hive 2.1.1 package

http://www.apache.org/dyn/closer.cgi/hive/

2. Extract

tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /home/centos/soft
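The rest of this guide refers to the install as /home/centos/soft/hive, so it helps to symlink the extracted, versioned directory to that shorter name. A sandboxed sketch of the extract-and-link step (temp directories stand in for the real download and install paths):

```shell
#!/bin/sh
set -eu
# Sandbox: temp dirs stand in for the real paths used in this guide.
tmp=$(mktemp -d)
mkdir "$tmp/apache-hive-2.1.1-bin"
( cd "$tmp" && tar -czf apache-hive-2.1.1-bin.tar.gz apache-hive-2.1.1-bin )

soft="$tmp/soft"            # stands in for /home/centos/soft
mkdir "$soft"
tar -zxf "$tmp/apache-hive-2.1.1-bin.tar.gz" -C "$soft"
# Symlink so $HIVE_HOME can stay version-independent.
ln -s "$soft/apache-hive-2.1.1-bin" "$soft/hive"
ls -d "$soft/hive"
```

With the symlink in place, a future Hive upgrade only requires re-pointing /home/centos/soft/hive rather than editing every config file.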

3. Configure environment variables

vi /etc/profile

# Hive
export HIVE_HOME=/home/centos/soft/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib
export PATH=$PATH:$HIVE_HOME/bin

source /etc/profile
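After sourcing /etc/profile, it is worth confirming that Hive's bin directory actually landed on the PATH. A minimal sketch, assuming the /home/centos/soft/hive layout used in this guide:

```shell
#!/bin/sh
set -eu
# Simulate the profile edits in the current shell.
export HIVE_HOME=/home/centos/soft/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin
# Print each PATH entry on its own line; the last one should be Hive's bin dir.
echo "$PATH" | tr ':' '\n' | tail -n 1
```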

4. Configure MySQL (note: switch to the root user)

4.1. Remove the MySQL that ships with CentOS

rpm -qa | grep mysql

rpm -e mysql-libs-5.1.66-2.el6_3.i686 --nodeps

yum -y install mysql-server

4.2. Initialize MySQL

(1) Change the MySQL password (run as root)

cd  /usr/bin

./mysql_secure_installation

(2) Enter the current password for the MySQL root account; root initially has no password, so just press Enter.

Enter current password for root (enter for none):

(3) Set the password for the MySQL root user (it must match the Hive configuration below, which uses 123)

Set root password? [Y/n] Y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
... Success!

(4) Remove anonymous users

Remove anonymous users? [Y/n] Y
... Success!

(5) Disallow remote root login? Choose N, since we want remote connections

Disallow root login remotely? [Y/n] N
... Success!

(6) Remove the test database

Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

(7) Reload the privilege tables

Reload privilege tables now? [Y/n] Y
... Success!

(8) Done

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.
Thanks for using MySQL!

(9) Log in to MySQL and grant remote access

mysql -uroot -p

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123' WITH GRANT OPTION;
FLUSH PRIVILEGES;

exit;

MySQL configuration is now complete.





5. Configure Hive

1. Edit the hive-env.sh file

Copy hive-env.sh.template to hive-env.sh, then edit hive-env.sh:

JAVA_HOME=/home/centos/soft/jdk
HADOOP_HOME=/home/centos/soft/hadoop
HIVE_HOME=/home/centos/soft/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=$SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.6.0.jar
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
export HADOOP_OPTS="-Dorg.xerial.snappy.tempdir=/tmp -Dorg.xerial.snappy.lib.name=libsnappyjava.jnilib $HADOOP_OPTS"


2. Edit the hive-site.xml file

Copy hive-default.xml.template to hive-site.xml, then edit hive-site.xml (delete all of its content, leaving only an empty <configuration></configuration>).

Configuration reference:
hive.server2.thrift.port – TCP listen port; defaults to 10000.
hive.server2.thrift.bind.host – host the TCP service binds to; defaults to localhost.
hive.server2.thrift.min.worker.threads – minimum number of worker threads; defaults to 5.
hive.server2.thrift.max.worker.threads – maximum number of worker threads; defaults to 500.

hive.server2.transport.mode – defaults to binary (TCP); the alternative is http.
hive.server2.thrift.http.port – HTTP listen port; defaults to 10001.
hive.server2.thrift.http.path – service endpoint name; defaults to cliservice.
hive.server2.thrift.http.min.worker.threads – minimum worker threads in the service pool; defaults to 5.
hive.server2.thrift.http.max.worker.threads – maximum worker threads in the service pool; defaults to 500.
<configuration>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://m1:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123</value>
    <description>password to use against metastore database</description>
</property>
<property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
</property>
<property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
</property>
<property>
    <name>datanucleus.autoCreateColumns</name>
    <value>true</value>
</property>
<!-- Location of the Hive warehouse on HDFS -->
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
    <description>location of default database for the warehouse</description>
</property>
 <!-- Before Hive 0.9, hive.exec.dynamic.partition had to be set to true explicitly; from 0.9 on it defaults to true -->
<property>
    <name>hive.exec.dynamic.partition</name>
    <value>true</value>
 </property>
<property>
    <name>hive.exec.dynamic.partition.mode</name>
    <value>nonstrict</value>
 </property>
<!-- Log locations -->
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/centos/soft/hive/tmp/HiveJobsLog</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/centos/soft/hive/tmp/ResourcesLog</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/home/centos/soft/hive/tmp/HiveRunLog</value>
    <description>Location of Hive run time structured log file</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/home/centos/soft/hive/tmp/OpertitionLog</value>
    <description>Top level directory where operation tmp are stored if logging functionality is enabled</description>
</property>
<!-- HWI (Hive Web Interface) settings -->
<property>  
    <name>hive.hwi.war.file</name>  
    <value>/home/centos/soft/hive/lib/hive-hwi-2.1.1.jar</value>  
    <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}. </description>  
</property>  
<property>  
    <name>hive.hwi.listen.host</name>  
    <value>m1</value>  
    <description>This is the host address the Hive Web Interface will listen on</description>  
</property>  
<property>  
    <name>hive.hwi.listen.port</name>  
    <value>9999</value>  
    <description>This is the port the Hive Web Interface will listen on</description>  
</property>
<!-- HiveServer2 no longer needs the hive.metastore.local setting: an empty hive.metastore.uris means the metastore is local; for a remote metastore, just set hive.metastore.uris -->
<!-- property>
    <name>hive.metastore.uris</name>
    <value>thrift://m1:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property --> 
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>m1</value>
</property>
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>
<property>
    <name>hive.server2.thrift.http.port</name>
    <value>10001</value>
</property>
<property>
    <name>hive.server2.thrift.http.path</name>
    <value>cliservice</value>
</property>
<!-- HiveServer2 web UI -->
<property>
    <name>hive.server2.webui.host</name>
    <value>m1</value>
</property>
<property>
    <name>hive.server2.webui.port</name>
    <value>10002</value>
</property>
<property>
    <name>hive.scratch.dir.permission</name>
    <value>755</value>
</property>
<!-- If the jar referenced by hive.aux.jars.path is a local file, remember the file:// prefix; otherwise it will not be found and you will get an org.apache.hadoop.hive.contrib.serde2.RegexSerDe error -->
<property>
    <name>hive.aux.jars.path</name>
    <value>file:///home/centos/soft/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar</value>
</property>
<property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
</property>   
<!-- property>
    <name>hive.server2.authentication</name>
    <value>NOSASL</value>
</property -->
<property>
    <name>hive.auto.convert.join</name>
    <value>false</value>
</property>
<property>
    <name>spark.dynamicAllocation.enabled</name>
    <value>true</value>
    <description>Enable dynamic allocation of Spark resources</description>
</property>
<!-- When running Hive on Spark, omitting the following setting can trigger an out-of-memory (PermGen) error -->
<property>
    <name>spark.driver.extraJavaOptions</name>
    <value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
</configuration>


3. Configure the log location: edit the hive-log4j.properties file

cp hive-log4j.properties.template hive-log4j.properties

vi hive-log4j.properties

hive.log.dir=/home/centos/soft/hive/tmp     ## move hive.log into the ${HIVE_HOME}/tmp directory

mkdir ${HIVE_HOME}/tmp
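The hive.log.dir edit can also be scripted with sed rather than done by hand in vi. A sandboxed sketch (a temp file stands in for the real conf/hive-log4j.properties; the template's default value shown here is an assumption):

```shell
#!/bin/sh
set -eu
tmp=$(mktemp -d)
conf="$tmp/hive-log4j.properties"   # stands in for $HIVE_HOME/conf/hive-log4j.properties
# A hive.log.dir line as typically shipped in the template (assumed default).
printf 'hive.log.dir=${java.io.tmpdir}/${user.name}\n' > "$conf"
# Point the log dir at the ${HIVE_HOME}/tmp path used in this guide.
sed -i 's|^hive.log.dir=.*|hive.log.dir=/home/centos/soft/hive/tmp|' "$conf"
cat "$conf"
```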


4. Configure the hive-config.sh file

Edit the $HIVE_HOME/conf/hive-config.sh file:

## add the following three lines
export JAVA_HOME=/home/centos/soft/java
export HIVE_HOME=/home/centos/soft/hive
export HADOOP_HOME=/home/centos/soft/hadoop
## and modify this line
HIVE_CONF_DIR=$HIVE_HOME/conf
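The three export lines can be appended non-interactively as well. A sandboxed sketch (a temp file stands in for $HIVE_HOME/conf/hive-config.sh):

```shell
#!/bin/sh
set -eu
tmp=$(mktemp -d)
cfg="$tmp/hive-config.sh"   # stands in for $HIVE_HOME/conf/hive-config.sh
: > "$cfg"
# Append the three exports; the heredoc delimiter is quoted so nothing expands here.
cat >> "$cfg" <<'EOF'
export JAVA_HOME=/home/centos/soft/java
export HIVE_HOME=/home/centos/soft/hive
export HADOOP_HOME=/home/centos/soft/hadoop
EOF
grep -c '^export' "$cfg"
```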


5. Copy the JDBC driver

Put the MySQL JDBC driver jar into the $HIVE_HOME/lib directory:

cp /home/centos/soft/tar.gz/mysql-connector-java-5.1.6-bin.jar  /home/centos/soft/hive/lib/


6. Copy the jline jar

Copy jline-2.12.jar from $HIVE_HOME/lib into $HADOOP_HOME/share/hadoop/yarn/lib, and delete the older jline jar that is already in $HADOOP_HOME/share/hadoop/yarn/lib.
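The swap amounts to deleting Hadoop's old jline and copying Hive's in. A sandboxed sketch (temp dirs stand in for the real lib directories; the old version 0.9.94 is just a placeholder for whatever jline Hadoop shipped):

```shell
#!/bin/sh
set -eu
# Sandbox stand-ins for $HIVE_HOME/lib and $HADOOP_HOME/share/hadoop/yarn/lib.
hive_lib=$(mktemp -d)
yarn_lib=$(mktemp -d)
touch "$hive_lib/jline-2.12.jar" "$yarn_lib/jline-0.9.94.jar"
# Remove the old jline shipped with Hadoop, then copy in Hive's jline-2.12.
rm -f "$yarn_lib"/jline-*.jar
cp "$hive_lib/jline-2.12.jar" "$yarn_lib/"
ls "$yarn_lib"
```

If the two versions coexist, YARN can pick up the old jline first on the classpath and Hive's CLI will fail at startup, which is why the old jar is removed rather than left alongside.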



7. Copy the tools.jar package

Copy tools.jar from $JAVA_HOME/lib into $HIVE_HOME/lib:

cp  $JAVA_HOME/lib/tools.jar  ${HIVE_HOME}/lib


8. Initialize Hive

Choose either MySQL or Derby as the metastore database.
Note: first check whether MySQL contains leftover Hive metadata; if so, delete it before initializing.

schematool -dbType mysql -initSchema    ## MySQL as the metastore database

Here mysql means MySQL stores the Hive metadata. To use Derby as the metastore database instead, run:

schematool -dbType derby -initSchema    ## Derby as the metastore database

The hive-schema-2.1.0.mysql.sql script creates the initial tables in the configured metastore database.



9. Start the Metastore service

The metastore service must be running before you launch Hive, otherwise Hive will fail with an error:

./hive --service metastore

Then open another terminal window and start the Hive process there.



10. Test

hive

show databases;
show tables;
create table book (id bigint, name string) row format delimited fields terminated by '\t';
select * from book;
select count(*) from book;




Statement: this content was contributed by a community member and copyright remains with the original author; this site assumes no legal liability. If you find infringing content, please contact us. When reprinting, please credit the source: wpsshop博客.