Server mode
You can start Hive as an interactive shell:
[root@mini1 ~]# cd apps/hive/bin
[root@mini1 bin]# ll
total 32
-rwxr-xr-x. 1 root root 1031 Apr 30 2015 beeline
drwxr-xr-x. 3 root root 4096 Oct 17 12:38 ext
-rwxr-xr-x. 1 root root 7844 May 8 2015 hive
-rwxr-xr-x. 1 root root 1900 Apr 30 2015 hive-config.sh
-rwxr-xr-x. 1 root root 885 Apr 30 2015 hiveserver2
-rwxr-xr-x. 1 root root 832 Apr 30 2015 metatool
-rwxr-xr-x. 1 root root 884 Apr 30 2015 schematool
[root@mini1 bin]# ./hive
hive>
Client mode
However, that interface is not very pleasant to use. Hive can also be published as a service (the HiveServer2 Thrift service) and accessed with the bundled beeline client, as follows.
Window 1: start the service
[root@mini1 ~]# cd apps/hive/bin
[root@mini1 bin]# ll
total 32
-rwxr-xr-x. 1 root root 1031 Apr 30 2015 beeline
drwxr-xr-x. 3 root root 4096 Oct 17 12:38 ext
-rwxr-xr-x. 1 root root 7844 May 8 2015 hive
-rwxr-xr-x. 1 root root 1900 Apr 30 2015 hive-config.sh
-rwxr-xr-x. 1 root root 885 Apr 30 2015 hiveserver2
-rwxr-xr-x. 1 root root 832 Apr 30 2015 metatool
-rwxr-xr-x. 1 root root 884 Apr 30 2015 schematool
[root@mini1 bin]# ./hiveserver2
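Note that ./hiveserver2 runs in the foreground and ties up this terminal, which is why a second window is used below. As a sketch (assuming the default setup, with output simply redirected to a log file), you could start it in the background instead:
[root@mini1 bin]# nohup ./hiveserver2 > hiveserver2.out 2>&1 &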
Window 2: connect as a client
[root@mini1 bin]# ./beeline
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: root
Enter password for jdbc:hive2://localhost:10000: ******
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000>
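Instead of typing !connect interactively, beeline can also take the JDBC URL and user name on the command line (this assumes HiveServer2 is listening on its default port 10000):
[root@mini1 bin]# ./beeline -u jdbc:hive2://localhost:10000 -n root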
A possible error when connecting:
Error: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop3:supergroup:drwx------
This means user root lacks permissions on /tmp in HDFS; open them up and reconnect:
hadoop fs -chmod -R 777 /tmp
1. Show databases
- 0: jdbc:hive2://localhost:10000> show databases;
- +----------------+--+
- | database_name |
- +----------------+--+
- | default |
- +----------------+--+
- 1 row selected (1.456 seconds)
-
2. Create and use a database, show its tables
- 0: jdbc:hive2://localhost:10000> create database myhive;
- No rows affected (0.576 seconds)
- 0: jdbc:hive2://localhost:10000> show databases;
- +----------------+--+
- | database_name |
- +----------------+--+
- | default |
- | myhive |
- +----------------+--+
- 0: jdbc:hive2://localhost:10000> use myhive;
- No rows affected (0.265 seconds)
- 0: jdbc:hive2://localhost:10000> show tables;
- +-----------+--+
- | tab_name |
- +-----------+--+
- +-----------+--+
-

3. Create a table
- 0: jdbc:hive2://localhost:10000> create table emp(id int,name string);
- No rows affected (0.29 seconds)
- 0: jdbc:hive2://localhost:10000> show tables;
- +-----------+--+
- | tab_name |
- +-----------+--+
- | emp |
- +-----------+--+
- 1 row selected (0.261 seconds)
-
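Each table is backed by a directory in HDFS under the warehouse path. You can check this without leaving beeline (the path below assumes the default hive.metastore.warehouse.dir):
- 0: jdbc:hive2://localhost:10000> dfs -ls /user/hive/warehouse/myhive.db;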
The table corresponds to a directory in HDFS; viewed from the web UI it shows up as a directory. (screenshot omitted)
There are no files in it yet, hence no data, so we need to upload a file into that directory:
- [root@mini1 ~]# cat sz.data
- 1,zhangsan
- 2,lisi
- 3,wangwu
- 4,furong
- 5,fengjie
- [root@mini1 ~]# hadoop fs -put sz.data /user/hive/warehouse/myhive.db/emp
-
Check the web UI again; the file now appears under the table's directory. (screenshot omitted)
4. Query the table
- 0: jdbc:hive2://localhost:10000> select * from emp;
- +---------+-----------+--+
- | emp.id | emp.name |
- +---------+-----------+--+
- | NULL | NULL |
- | NULL | NULL |
- | NULL | NULL |
- | NULL | NULL |
- | NULL | NULL |
- +---------+-----------+--+
-
The results are all NULL, as expected: the table was created without telling Hive to split fields on ",", while the file is comma-delimited. So drop the table, recreate it with the statement below, and re-upload the file:
- 0: jdbc:hive2://localhost:10000> drop table emp;
- No rows affected (1.122 seconds)
- 0: jdbc:hive2://localhost:10000> show tables;
- +-----------+--+
- | tab_name |
- +-----------+--+
- +-----------+--+
- 0: jdbc:hive2://localhost:10000> create table emp(id int,name string)
- 0: jdbc:hive2://localhost:10000> row format delimited
- 0: jdbc:hive2://localhost:10000> fields terminated by ',';
- No rows affected (0.265 seconds)
- 0: jdbc:hive2://localhost:10000>
-
- [root@mini1 ~]# hadoop fs -put sz.data /user/hive/warehouse/myhive.db/emp
- 0: jdbc:hive2://localhost:10000> select * from emp;
- +---------+-----------+--+
- | emp.id | emp.name |
- +---------+-----------+--+
- | 1 | zhangsan |
- | 2 | lisi |
- | 3 | wangwu |
- | 4 | furong |
- | 5 | fengjie |
- +---------+-----------+--+
-
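To double-check that the delimiter took effect, you can inspect the table's storage metadata; in the output of the statement below, the field.delim entry under Storage Desc Params should show the comma (full output omitted here):
- 0: jdbc:hive2://localhost:10000> desc formatted emp;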

5. Conditional queries
- 0: jdbc:hive2://localhost:10000> select id,name from emp where id>2 order by id desc;
- INFO : Number of reduce tasks determined at compile time: 1
- INFO : In order to change the average load for a reducer (in bytes):
- INFO : set hive.exec.reducers.bytes.per.reducer=<number>
- INFO : In order to limit the maximum number of reducers:
- INFO : set hive.exec.reducers.max=<number>
- INFO : In order to set a constant number of reducers:
- INFO : set mapreduce.job.reduces=<number>
- INFO : number of splits:1
- INFO : Submitting tokens for job: job_1508216103995_0004
- INFO : The url to track the job: http://mini1:8088/proxy/application_1508216103995_0004/
- INFO : Starting Job = job_1508216103995_0004, Tracking URL = http://mini1:8088/proxy/application_1508216103995_0004/
- INFO : Kill Command = /root/apps/hadoop-2.6.4/bin/hadoop job -kill job_1508216103995_0004
- INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
- INFO : 2017-10-18 00:35:39,865 Stage-1 map = 0%, reduce = 0%
- INFO : 2017-10-18 00:35:46,275 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
- INFO : 2017-10-18 00:35:51,487 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.34 sec
- INFO : MapReduce Total cumulative CPU time: 2 seconds 340 msec
- INFO : Ended Job = job_1508216103995_0004
- +-----+----------+--+
- | id | name |
- +-----+----------+--+
- | 5 | fengjie |
- | 4 | furong |
- | 3 | wangwu |
- +-----+----------+--+
- 3 rows selected (18.96 seconds)
-

This makes it clear what is happening: the SQL you write is compiled into a MapReduce job and submitted to YARN; Hive essentially provides a large set of ready-made MapReduce templates.
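If you want to see that plan without actually running a job, EXPLAIN prints the stages Hive compiles the statement into:
- 0: jdbc:hive2://localhost:10000> explain select id,name from emp where id>2 order by id desc;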
6. Create an external table
A managed (internal) table keeps its data at a fixed path under the warehouse directory, while an external table can point at data in any location. The difference from a managed table: when the table is dropped, the directory backing an external table is not deleted.
- 0: jdbc:hive2://localhost:10000> create external table emp2(id int,name string)
- 0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ','  -- comma as the field delimiter
- 0: jdbc:hive2://localhost:10000> stored as textfile  -- plain-text storage
- 0: jdbc:hive2://localhost:10000> location '/company';  -- table data lives under /company
- No rows affected (0.101 seconds)
- 0: jdbc:hive2://localhost:10000> dfs -ls /;
- +----------------------------------------------------------------------------------------+--+
- | DFS Output |
- +----------------------------------------------------------------------------------------+--+
- | Found 16 items |
- | -rw-r--r-- 2 angelababy mygirls 7 2017-10-01 20:22 /canglaoshi_wuma.avi |
- | -rw-r--r-- 2 root supergroup 22 2017-10-03 21:12 /cangmumayi.avi |
- | drwxr-xr-x - root supergroup 0 2017-10-18 00:55 /company |
- | drwxr-xr-x - root supergroup 0 2017-10-13 04:44 /flowcount |
- | drwxr-xr-x - root supergroup 0 2017-10-17 03:44 /friends |
- | drwxr-xr-x - root supergroup 0 2017-10-17 06:19 /gc |
- | drwxr-xr-x - root supergroup 0 2017-10-07 07:28 /liushishi.log |
- | -rw-r--r-- 3 12706 supergroup 60 2017-10-04 21:58 /liushishi.love |
- | drwxr-xr-x - root supergroup 0 2017-10-17 07:32 /logenhance |
- | -rw-r--r-- 2 root supergroup 26 2017-10-16 20:49 /mapjoin |
- | drwxr-xr-x - root supergroup 0 2017-10-16 21:16 /mapjoincache |
- | drwxr-xr-x - root supergroup 0 2017-10-13 13:15 /mrjoin |
- | drwxr-xr-x - root supergroup 0 2017-10-16 23:35 /reverse |
- | drwx------ - root supergroup 0 2017-10-17 13:10 /tmp |
- | drwxr-xr-x - root supergroup 0 2017-10-17 13:13 /user |
- | drwxr-xr-x - root supergroup 0 2017-10-14 01:33 /wordcount |
- +----------------------------------------------------------------------------------------+--+
- 0: jdbc:hive2://localhost:10000> show tables;
- +-----------+--+
- | tab_name |
- +-----------+--+
- | emp |
- | emp2 |
- | t_sz_ext |
- +-----------+--+
-

There is now a /company directory, and show tables lists the new table emp2 (t_sz_ext is left over from an earlier exercise); at this point /company is still empty.
7. Load a file into a table
Earlier we uploaded the file into the table's directory with a hadoop command, but you can also load it directly from the beeline prompt:
- 0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table emp2;  (you can also upload the file with hadoop directly; omitting LOCAL means the source path is on HDFS, and the file is moved into the table directory)
- INFO : Loading data to table myhive.emp2 from file:/root/sz.data
- INFO : Table myhive.emp2 stats: [numFiles=0, totalSize=0]
- No rows affected (0.414 seconds)
- 0: jdbc:hive2://localhost:10000> select * from emp2;
- +----------+------------+--+
- | emp2.id | emp2.name |
- +----------+------------+--+
- | 1 | zhangsan |
- | 2 | lisi |
- | 3 | wangwu |
- | 4 | furong |
- | 5 | fengjie |
- +----------+------------+--+
-
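Now that the external table emp2 has data under /company, here is a quick sketch of the guarantee mentioned above: dropping the table removes only its definition, and the files stay in HDFS (recreate the table afterwards if you keep following along):
- 0: jdbc:hive2://localhost:10000> drop table emp2;
- 0: jdbc:hive2://localhost:10000> dfs -ls /company;  -- sz.data is still there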
8. Partitioned tables: the partition column is school; load data into two different partitions
- 0: jdbc:hive2://localhost:10000> create table stu(id int,name string)
- 0: jdbc:hive2://localhost:10000> partitioned by(school string)
- 0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ',';
- No rows affected (0.319 seconds)
- 0: jdbc:hive2://localhost:10000> show tables;
- +-----------+--+
- | tab_name |
- +-----------+--+
- | emp |
- | emp2 |
- | stu |
- | t_sz_ext |
- +-----------+--+
- 0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table stu partition(school='scu');
- INFO : Loading data to table myhive.stu partition (school=scu) from file:/root/sz.data
- INFO : Partition myhive.stu{school=scu} stats: [numFiles=1, numRows=0, totalSize=46, rawDataSize=0]
- No rows affected (0.607 seconds)
- 0: jdbc:hive2://localhost:10000> select * from stu;
- +---------+-----------+-------------+--+
- | stu.id | stu.name | stu.school |
- +---------+-----------+-------------+--+
- | 1 | zhangsan | scu |
- | 2 | lisi | scu |
- | 3 | wangwu | scu |
- | 4 | furong | scu |
- | 5 | fengjie | scu |
- +---------+-----------+-------------+--+
- 5 rows selected (0.286 seconds)
- 0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz2.data' into table stu partition(school='hfut');
- INFO : Loading data to table myhive.stu partition (school=hfut) from file:/root/sz2.data
- INFO : Partition myhive.stu{school=hfut} stats: [numFiles=1, numRows=0, totalSize=46, rawDataSize=0]
- No rows affected (0.671 seconds)
- 0: jdbc:hive2://localhost:10000> select * from stu;
- +---------+-----------+-------------+--+
- | stu.id | stu.name | stu.school |
- +---------+-----------+-------------+--+
- | 1 | Tom | hfut |
- | 2 | Jack | hfut |
- | 3 | Lucy | hfut |
- | 4 | Kitty | hfut |
- | 5 | Lucene | hfut |
- | 6 | Sakura | hfut |
- | 1 | zhangsan | scu |
- | 2 | lisi | scu |
- | 3 | wangwu | scu |
- | 4 | furong | scu |
- | 5 | fengjie | scu |
- +---------+-----------+-------------+--+
-
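The partition column behaves like a regular column in queries, and filtering on it makes Hive read only that partition's directory (partition pruning). For example, this scans just the files under school=scu:
- 0: jdbc:hive2://localhost:10000> select * from stu where school='scu';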

Note: Hive does not follow the three normal forms, so don't go looking for primary keys.
9. Add a partition
- 0: jdbc:hive2://localhost:10000> alter table stu add partition (school='Tokyo');
-
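To verify, list the table's partitions; the newly added (still empty) Tokyo partition appears alongside hfut and scu. A partition can also be removed again with a drop:
- 0: jdbc:hive2://localhost:10000> show partitions stu;
- 0: jdbc:hive2://localhost:10000> alter table stu drop partition (school='Tokyo');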
To see it more intuitively, check the HDFS web UI: each partition appears as its own subdirectory (school=hfut, school=scu, school=Tokyo) under the stu table's directory. (screenshots omitted)