Import:
Export:
- 1. INSERT export
- 1) Export query results to the local filesystem
- hive (default)> insert overwrite local directory '/opt/module/datas/export/student'
- select * from student;
- Specify an export format (data exported this way is not guaranteed to be usable and may still come out as textfile; see the first sketch after this list):
- insert overwrite directory '/aaa/bb' stored as xxx select ...
-
- 2) Export query results to the local filesystem with row formatting
- hive(default)>insert overwrite local directory '/opt/module/datas/export/student1'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' select * from student;
- hive(default)>insert overwrite local directory '/opt/module/datas/export/student1'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' collection items terminated by '\n' select * from student;
-
- 3) Export query results to HDFS (no LOCAL keyword)
- hive (default)> insert overwrite directory '/user/atguigu/student2'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
- select * from student;
- 2. Export to the local filesystem with Hadoop commands
- hive (default)> dfs -get /user/hive/warehouse/student/month=201709/000000_0
- /opt/module/datas/export/student3.txt;
- 3. Export via Hive shell commands
- Basic syntax: (hive -f/-e <statement or script> > file)
- [root@sparkproject1 hive]$ bin/hive -e 'select * from default.student;' >
- /opt/module/datas/export/student4.txt;
- 4. EXPORT to HDFS
- hive (default)> export table default.student to
- '/user/hive/warehouse/export/student';
- EXPORT and IMPORT are mainly used to migrate Hive tables between two Hadoop clusters.
- 5. Sqoop export (see the second sketch after this list)
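A minimal sketch of the format-specific export from method 1 above; '/tmp/student_parquet' is a placeholder path, and parquet stands in for the generic "xxx" (an assumption, not from the original):
hive (default)> insert overwrite directory '/tmp/student_parquet'
stored as parquet
select * from student;
As the note above warns, verify that downstream tools can actually read the files this produces.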
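Method 5 is only named in the list; here is a rough sketch of a Sqoop export, where the MySQL URL, credentials, and paths are placeholders, not from the original:
sqoop export \
--connect jdbc:mysql://mysql_host:3306/test_db \
--username root \
--password 123456 \
--table student \
--export-dir /user/hive/warehouse/student \
--input-fields-terminated-by '\t'
Sqoop export pushes the HDFS files behind a Hive table out to a relational database table; the target table must already exist.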
Examples:
1. hive shell
1) Export to a local directory
- 1. Log in to the hive shell
- hive (default)>
-
- 2. use <database>;
-
- 3. Specify the local directory and export
- INSERT OVERWRITE LOCAL DIRECTORY '/user/local/hive/table-daochu' ROW FORMAT DELIMITED FIELDS TERMINATED by ',' select * from o.aim_base_employees;
Check the result:
-rw-r--r-- 1 root root 11132487 Feb 10 17:03 000000_0 // the result is stored in a file named 000000_0
[root@pf-bigdata1 table-daochu]#
You can open it with EditPlus to inspect it.
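A quick look from the shell works too; a small sketch assuming the export directory above. The fields are ','-separated only because the statement specified that delimiter; without a ROW FORMAT clause Hive falls back to the invisible '\001' separator:
[root@pf-bigdata1 table-daochu]# head -n 3 000000_0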
2) Export to HDFS
Exporting to HDFS is similar to exporting to a local file: just drop LOCAL from the HQL statement.
INSERT OVERWRITE DIRECTORY '/user/local/hive/table-daochu' ROW FORMAT DELIMITED FIELDS TERMINATED by ',' select * from o.aim_base_employees;
Check the HDFS output file:
[hadoop@hadoopcluster78 bin]$ ./hadoop fs -cat /user/local/hive/table-daochu/000000_0
2. Export data with hive's -e and -f options.
Usage of -e: follow it with a SQL statement; the path after >> is the output file.
hive -e "select * from o.aim_base_employees" >> /user/local/hive/table-daochu/test.txt
Usage of -f: follow it with a file containing the SQL; the path after >> is the output file.
- Contents of testsql0.sql:
-
- select * from o.aim_base_employees limit 1000;
-
- Run:
- hive -f testsql0.sql >> /user/local/hive/table-daochu/test1.txt
Prepare the test tables and data files:
- CREATE TABLE tmp.testA (
- id INT,
- name string,
- area string
- ) PARTITIONED BY (create_time string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
- CREATE TABLE tmp.testB (
- id INT,
- name string,
- area string,
- code string
- ) PARTITIONED BY (create_time string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
Data file (sourceA.txt):
- 1,fish1,SZ
- 2,fish2,SH
- 3,fish3,HZ
- 4,fish4,QD
- 5,fish5,SR
Data file (sourceB.txt):
- 1,zy1,SZ,1001
- 2,zy2,SH,1002
- 3,zy3,HZ,1003
- 4,zy4,QD,1004
- 5,zy5,SR,1005
(1) Import a local file into a Hive table
- Open the hive shell:
-
- LOAD DATA LOCAL INPATH '/user/local/hive/testdatas/sourceA.txt' INTO TABLE tmp.testA PARTITION(create_time='2020-02-10');
-
- LOAD DATA LOCAL INPATH '/user/local/hive/testdatas/sourceB.txt' INTO TABLE tmp.testB PARTITION(create_time='2020-02-10');
-
- Note:
- You can also point at a directory, in which case every file under it is imported in one shot:
- LOAD DATA LOCAL INPATH '/user/local/hive/testdatas' INTO TABLE tmp.testB PARTITION(create_time='2020-02-10');
Check the import results:
select * from tmp.testA
select * from tmp.testB
Note:
If the table was created with the partition clause PARTITIONED BY (create_time string), then LOAD DATA into it must also include PARTITION(create_time='2020-02-10').
If the table was created without a partition clause, LOAD DATA must omit it as well.
(2) Import HDFS files into a Hive table
Upload sourceA.txt and sourceB.txt to HDFS, at /home/hadoop/sourceA.txt and /home/hadoop/sourceB.txt respectively.
- hive> LOAD DATA INPATH '/home/hadoop/sourceA.txt' INTO TABLE testA PARTITION(create_time='2015-07-08');
- ... (omitted)
- OK
- Time taken: 0.237 seconds
- hive> LOAD DATA INPATH '/home/hadoop/sourceB.txt' INTO TABLE testB PARTITION(create_time='2015-07-09');
- ... (omitted)
- OK
- Time taken: 0.212 seconds
- hive> select * from testA;
- OK
- 1 fish1 SZ 2015-07-08
- 2 fish2 SH 2015-07-08
- 3 fish3 HZ 2015-07-08
- 4 fish4 QD 2015-07-08
- 5 fish5 SR 2015-07-08
- Time taken: 0.029 seconds, Fetched: 5 row(s)
- hive> select * from testB;
- OK
- 1 zy1 SZ 1001 2015-07-09
- 2 zy2 SH 1002 2015-07-09
- 3 zy3 HZ 1003 2015-07-09
- 4 zy4 QD 1004 2015-07-09
- 5 zy5 SR 1005 2015-07-09
- Time taken: 0.047 seconds, Fetched: 5 row(s)
(3) Insert from one Hive table into another
hive (default)> INSERT INTO TABLE tmp.testA PARTITION(create_time='2020-02-10') select id, name, area from tmp.testB where id = 1;
select * from tmp.testA
Explanation:
1. The row in testB with id = 1 is inserted into testA, under partition 2020-02-10.
2. To insert the id = 2 row from testB into testA with the create_time partition taken from that row's code value, a dynamic-partition insert is needed; see the sketch below.
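A minimal sketch of point 2; the two set statements enable dynamic partitioning and are assumptions, not part of the original commands:
hive (default)> set hive.exec.dynamic.partition=true;
hive (default)> set hive.exec.dynamic.partition.mode=nonstrict;
hive (default)> INSERT INTO TABLE tmp.testA PARTITION(create_time)
select id, name, area, code from tmp.testB where id = 2;
The last SELECT column (code) feeds the create_time partition key, so the id = 2 row lands in partition create_time='1002'.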
(4) Import from another table while creating it (CTAS)
create table tmp.testC as select name, code from tmp.testB;
select * from tmp.testB
select * from tmp.testC
Reference:
https://www.cnblogs.com/duanxz/p/9015937.html
Atguigu notes: ==============================================================================================
1. Syntax
hive> load data [local] inpath '/opt/module/datas/student.txt' [overwrite] into table student [partition (partcol1=val1,…)];
(1) load data: load data into the table
(2) local: load from the local filesystem into the hive table; without it, load from HDFS
(3) inpath: the path of the data to load
(4) overwrite: overwrite data already in the table; without it, append
(5) into table: the table to load into
(6) student: the specific table
(7) partition: load into the specified partition
2. Hands-on examples
(0) Create a table
hive (default)> create table student(id string, name string) row format delimited fields terminated by '\t';
(1) Load a local file into hive
hive (default)> load data local inpath '/opt/module/datas/student.txt' into table default.student;
(2) Load an HDFS file into hive
Upload the file to HDFS
hive (default)> dfs -put /opt/module/datas/student.txt /user/atguigu/hive;
Load the data from HDFS
hive (default)> load data inpath '/user/atguigu/hive/student.txt' into table default.student;
(3) Load data, overwriting what is already in the table
Upload the file to HDFS
hive (default)> dfs -put /opt/module/datas/student.txt /user/atguigu/hive;
Load the data, overwriting the existing rows
hive (default)> load data inpath '/user/atguigu/hive/student.txt' overwrite into table default.student;
1. Create a partitioned table
hive (default)> create table student(id int, name string) partitioned by (month string) row format delimited fields terminated by '\t';
2. Basic insert
hive (default)> insert into table student partition(month='201709') values(1,'wangwu'),(2,'zhaoliu');
3. Basic insert-select (from a single table's query result)
hive (default)> insert overwrite table student partition(month='201708')
select id, name from student where month='201709';
insert into: appends data to the table or partition; existing data is kept
insert overwrite: overwrites data already in the table or partition
Note: insert does not support inserting a subset of columns (see the sketch below)
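A minimal illustration of that note, reusing the student table above: instead of naming a subset of columns, supply a value (here an empty placeholder, an assumption) for every non-partition column:
hive (default)> insert into table student partition(month='201709') values(3, '');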
4. Multi-table (multi-partition) insert mode (one query feeding multiple inserts)
hive (default)> from student
insert overwrite table student partition(month='201707')
select id, name where month='201709'
insert overwrite table student partition(month='201706')
select id, name where month='201709';
See section 4.5.1 on creating tables.
Create a table from a query result (the query's rows populate the newly created table):
create table if not exists student3
as select id, name from student;
1. Upload data to HDFS
hive (default)> dfs -mkdir /student;
hive (default)> dfs -put /opt/module/datas/student.txt /student;
2. Create the table, specifying its location on HDFS
hive (default)> create external table if not exists student5(
id int, name string
)
row format delimited fields terminated by '\t'
location '/student';
3. Query the data
hive (default)> select * from student5;
Note: run EXPORT first, then import the data.
hive (default)> import table student2 partition(month='201709') from
'/user/hive/warehouse/export/student';
5.1.6 Data export
1. Export query results to the local filesystem
hive (default)> insert overwrite local directory '/opt/module/datas/export/student'
select * from student;
2. Export query results to the local filesystem with formatting
hive(default)>insert overwrite local directory '/opt/module/datas/export/student1'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' select * from student;
3. Export query results to HDFS (no LOCAL keyword)
hive (default)> insert overwrite directory '/user/atguigu/student2'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
select * from student;
hive (default)> dfs -get /user/hive/warehouse/student/month=201709/000000_0
/opt/module/datas/export/student3.txt;
Basic syntax: (hive -f/-e <statement or script> > file)
[atguigu@hadoop102 hive]$ bin/hive -e 'select * from default.student;' >
/opt/module/datas/export/student4.txt;
hive (default)> export table default.student to
'/user/hive/warehouse/export/student';
EXPORT and IMPORT are mainly used to migrate Hive tables between two Hadoop clusters (a rough sketch follows).
Covered in detail in a later lesson.
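A rough sketch of that migration; the NameNode addresses nn-a and nn-b are placeholders, not from the original. On the source cluster:
hive (default)> export table default.student to '/user/hive/warehouse/export/student';
Copy the dump across with distcp:
[atguigu@hadoop102 hive]$ hadoop distcp hdfs://nn-a:8020/user/hive/warehouse/export/student hdfs://nn-b:8020/user/hive/warehouse/export/student
Then, on the target cluster:
hive (default)> import table student2 from '/user/hive/warehouse/export/student';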
Note: TRUNCATE can only clear data in managed tables; it cannot delete data in external tables (see the sketch below).
hive (default)> truncate table student;
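Since student5 above was created as an external table, a minimal demonstration of the restriction (assuming that table still exists):
hive (default)> truncate table student5;
This is expected to fail, because Hive refuses to truncate non-managed tables; the underlying files stay in place.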