Perform the following steps as the hadoop user.
tar -zxvf apache-hive-1.2.1-bin.tar.gz
Create a symlink: ln -s apache-hive-1.2.1-bin hive
cd /home/hadoop/app/apache-hive-1.2.1-bin/conf
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
Add the following line (adjust the path to match your installation):
export HADOOP_HOME=/home/hadoop/app/hadoop
In HDFS, create the two directories /tmp and /user/hive/warehouse (Hive's defaults) and grant group write permission on them; Hive uses them as its scratch and warehouse directories. (/tmp may already exist, in which case skip the first mkdir.)
hdfs dfs -mkdir /tmp
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /user/hive/warehouse
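The directory-creation step can be wrapped in a small idempotent helper, so re-running the setup is safe when /tmp already exists (a sketch, using Hive's default warehouse path /user/hive/warehouse; it assumes the hdfs client is on PATH and is skipped entirely on machines without it):

```shell
#!/bin/sh
# Sketch: idempotent creation of Hive's HDFS directories.
mkdir_if_absent() {
  # Create the directory only if `hdfs dfs -test -d` says it is missing,
  # then make it group-writable as Hive requires.
  hdfs dfs -test -d "$1" || hdfs dfs -mkdir -p "$1"
  hdfs dfs -chmod g+w "$1"
}

# Guard: only run on machines that actually have the hdfs client.
if command -v hdfs >/dev/null 2>&1; then
  mkdir_if_absent /tmp
  mkdir_if_absent /user/hive/warehouse
fi
```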
cd /home/hadoop/app/hadoop/share/hadoop/yarn/lib/
Check whether an old jline jar (e.g. jline-0.9.94.jar) is present there. If it is, move it out of the way:
mv jline-0.9.94.jar jline-0.9.94.jar.bak
Then (or if no jline jar was there at all) copy Hive's newer jline into the YARN lib directory:
cd /home/hadoop/app/hive/lib
cp jline-2.12.jar /home/hadoop/app/hadoop/share/hadoop/yarn/lib
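The jline swap above can be scripted. The sketch below is a hypothetical helper, not from the original post; the directory paths are parameters so the logic works on any pair of directories:

```shell
#!/bin/sh
# Sketch: resolve the jline version conflict between Hadoop's YARN lib
# directory and Hive's lib directory.
fix_jline() {
  yarn_lib="$1"   # e.g. $HADOOP_HOME/share/hadoop/yarn/lib
  hive_lib="$2"   # e.g. $HIVE_HOME/lib
  # Move any old jline 0.x jar out of the way (becomes *.bak).
  for jar in "$yarn_lib"/jline-0.*.jar; do
    [ -e "$jar" ] && mv "$jar" "$jar.bak"
  done
  # Copy Hive's newer jline in, unless a 2.x jar is already present.
  if ! ls "$yarn_lib"/jline-2.*.jar >/dev/null 2>&1; then
    cp "$hive_lib"/jline-2.*.jar "$yarn_lib"/
  fi
  return 0
}
```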
(1) If the environment variables are configured under the root user, switch to root:
su root
(enter the root password)
vi /etc/profile
Add the Hive environment variables (the original post showed these in a screenshot; the lines below follow the install layout used above):
export HIVE_HOME=/home/hadoop/app/hive
export PATH=$PATH:$HIVE_HOME/bin
Apply the changes: source /etc/profile
(2) If the environment variables are configured under the hadoop user, edit: vi ~/.bashrc
Add the same configuration:
export HIVE_HOME=/home/hadoop/app/hive
export PATH=$PATH:$HIVE_HOME/bin
Apply the changes: source ~/.bashrc
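Both cases follow the same pattern: append two export lines to a profile file, then re-source it. A small idempotent sketch (the target file is a parameter; the HIVE_HOME value is the assumed install path from above):

```shell
#!/bin/sh
# Sketch: append the Hive environment variables to a profile file only
# if they are not already there, so re-running setup does not duplicate them.
add_hive_env() {
  profile="$1"   # e.g. /etc/profile (root) or ~/.bashrc (hadoop user)
  grep -q 'HIVE_HOME' "$profile" 2>/dev/null || cat >> "$profile" <<'EOF'
export HIVE_HOME=/home/hadoop/app/hive
export PATH=$PATH:$HIVE_HOME/bin
EOF
}
```

After running it on the appropriate file, apply the change with `source /etc/profile` or `source ~/.bashrc` as before.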
Switch back to the hadoop user and start Hive:
su hadoop
hive
Note: Hive may fail to start with an error saying it cannot find spark-assembly-*.jar (the original post showed the error message as a screenshot). The cause is that from Spark 2 onward, the single large assembly JAR in lib/ was split into many smaller JARs, so spark-assembly-*.jar no longer exists and Hive cannot find it. The fix is to edit the hive launcher script under hive/bin:
cd /home/hadoop/app/hive/bin
(adjust the path to your installation)
vi hive
Find the line that sets sparkAssemblyPath (the original post highlighted it in red; in Hive 1.2.1 it reads):
sparkAssemblyPath=`ls ${SPARK_HOME}/lib/spark-assembly-*.jar`
and change it to:
sparkAssemblyPath=`ls ${SPARK_HOME}/jars/*.jar`
Save the file and restart Hive; the problem is resolved.
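The edit can also be made non-interactively with sed. This is a sketch (a hypothetical helper, not from the original post); it patches whatever file you pass it and leaves a .bak copy, so try it on a scratch copy of the script first:

```shell
#!/bin/sh
# Sketch: replace the Spark 1.x assembly-jar lookup in a hive launcher
# script with the Spark 2.x jars/ directory lookup. Keeps a .bak backup.
patch_spark_assembly() {
  # $1 = path to the hive script
  sed -i.bak \
    's|ls ${SPARK_HOME}/lib/spark-assembly-\*\.jar|ls ${SPARK_HOME}/jars/*.jar|' \
    "$1"
}
```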
Verify the installation by running a couple of statements in the Hive shell:
show tables;
show functions;
mysql-connector-java-5.1.32.jar is the MySQL JDBC driver (you need to download it yourself); place it in hive/lib so Hive can connect to the MySQL metastore.
Query the current database:
hive> select current_database();
To display the current database name in the CLI prompt:
set hive.cli.print.current.db=true;
Note: set this way, the option only applies to the current session. To make it permanent, add the following property to hive-site.xml:
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
The full hive-site.xml used here is as follows:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <!-- MySQL JDBC driver class -->
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- MySQL connection URL; createDatabaseIfNotExist=true makes MySQL
         create the hive database automatically if it does not exist.
         Note: the & in the URL must be escaped as &amp; inside XML. -->
    <value>jdbc:mysql://slave2:3306/hive?characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <!-- MySQL username -->
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <!-- MySQL password -->
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/app/hive/iotmp</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hadoop/app/hive/iotmp</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/hadoop/app/hive/iotmp</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
</configuration>
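A common pitfall in this file is the & in the JDBC URL: inside XML it must be written &amp;amp;, or Hive fails at startup with a parse error. A quick well-formedness check before starting Hive (a sketch using Python's standard-library XML parser via the shell; any XML validator such as xmllint works equally well):

```shell
#!/bin/sh
# Sketch: verify that a Hadoop/Hive XML config file is well-formed.
# Catches errors like an unescaped & in the JDBC connection URL.
check_xml() {
  python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" \
    && echo "OK: $1 is well-formed"
}
```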