Of these two tables, the one on the left is the external table and the one on the right is the internal (managed) table. At first glance they look much the same, but the key difference shows up on deletion:
Internal table: dropping the table (or a partition) deletes both the metadata and the data.
External table: dropping the table deletes only the metadata; the data is preserved.
Now execute the following two statements:
drop table food;
drop table food_ex;
After both statements run, both tables are gone, but the results differ. Browsing the NameNode web UI on port 50070 shows that although both tables were dropped, the internal table lost both its metadata and its data, while the external table lost only its metadata (the table definition) and its actual data files remained in place.
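For reference, an external table such as food_ex is typically declared with the EXTERNAL keyword and, optionally, a LOCATION; a minimal sketch (the column lists and the HDFS path below are illustrative, not from the original example):

```sql
-- Internal (managed) table: Hive owns both the metadata and the data.
create table food (id int, name string);

-- External table: Hive owns only the metadata; the data stays at LOCATION.
create external table food_ex (id int, name string)
location '/usr/hive_remote/food_ex';

-- drop table food;     -- removes the metadata AND the HDFS data
-- drop table food_ex;  -- removes the metadata only; files under LOCATION remain
```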
Partitions must be declared when the table is defined.
a. Single-partition CREATE TABLE statement:
- create table psn5
- (
- id int,
- name string,
- likes array<string>,
- address map<string,string>
- )
- partitioned by(age int)
- row format delimited
- fields terminated by ','
- collection items terminated by '-'
- map keys terminated by ':';
A single-partition table, partitioned by age: each age value gets its own directory.
- 1,tom,game-music-book,stu:henan-home:henan-work:beijing
- 2,jack,money-meinv,stu:wuhan-home:wuhan
- 3,lusi,shopping-music,stu:shanghai-home:beijing
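To load the sample rows above into a specific partition of psn5, a LOAD DATA statement can be used (full syntax is covered in section e below); the local file path here is hypothetical:

```sql
-- Loads the sample file into the age=20 partition of psn5;
-- Hive creates the directory .../psn5/age=20/ automatically.
load data local inpath '/root/data01' into table psn5 partition(age=20);
```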
Below is the table structure of log_info, with the partition already created:
b. Double-partition CREATE TABLE statement:
- create table psn6
- (
- id int,
- name string,
- likes array<string>,
- address map<string,string>
- )
- partitioned by(age int,sex string)
- row format delimited
- fields terminated by ','
- collection items terminated by '-'
- map keys terminated by ':';
A double-partition table, partitioned by age and sex; the two partition columns age and sex are added to the table schema. Data is organized first into age directories, then into sex subdirectories.
Below is the table structure of log_info2, with the partitions already created:
View the data:
- hive> dfs -cat /usr/hive_remote/warehouse/psn6/age=28/sex=male/data03;
- 1,lxk,game-music-book,stu:henan-home:henan-work:beijing
- 2,jack,money-meinv,stu:wuhan-home:wuhan
- 3,lusi,shopping-music,stu:shanghai-home:beijing
- hive>
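Filtering on partition columns lets Hive prune directories and scan only the matching subdirectory; a sketch against psn6:

```sql
-- Only files under .../psn6/age=28/sex=male/ are read;
-- the partition columns age and sex are also available as ordinary
-- columns in the select list and in the result.
select id, name, likes, address
from psn6
where age = 28 and sex = 'male';
```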
c. Syntax for adding partitions to a Hive table (the table already exists; partitions are added to it):
- ALTER TABLE table_name ADD partition_spec
- [ LOCATION 'location1' ]
- partition_spec [ LOCATION 'location2' ] ...
-
- Example:
- ALTER TABLE day_table
- ADD PARTITION (dt='2008-08-08', hour='08')
- location '/path/pv1.txt'
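As the syntax above indicates, several partition_specs may appear in a single statement; a sketch adding two partitions to day_table at once (the paths are illustrative):

```sql
-- Registers two partitions in one ALTER TABLE, each with its own location.
ALTER TABLE day_table
ADD PARTITION (dt='2008-08-08', hour='08') LOCATION '/path/pv1'
    PARTITION (dt='2008-08-08', hour='09') LOCATION '/path/pv2';
```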
d. Syntax for dropping a Hive partition:
ALTER TABLE table_name DROP PARTITION partition_spec, partition_spec,...
Users can drop a partition with ALTER TABLE ... DROP PARTITION. The partition's metadata and data are deleted together. Examples:
ALTER TABLE day_hour_table DROP PARTITION (dt='2008-08-08', hour='09');
alter table log_info drop partition (times='20160222');
e. Syntax for loading data into a partitioned Hive table:
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
Examples:
Loading into a single-partition table:
- load data local inpath '/opt/log' overwrite into table log_info partition(times='20160223');
- load data local inpath '/root/data03' overwrite into table psn6 partition(age=28,sex='male');
- hive> select * from log_info;
- OK
- 23.45.66.77 20160222
- 45.66.11.8 20160222
- 2.3.4.5 20160223
- 4.56.77.31 20160223
- 34.55.6.77 20160223
- 34.66.11.6 20160223
- Time taken: 0.125 seconds, Fetched: 6 row(s)
Hive creates the partition directories automatically, named after the partition values (here, times=20160222 and times=20160223).
Loading into a double-partition table:
- load data local inpath '/opt/log3' overwrite into table log_info2 partition(days='23',hours='12');
-
- hive> select * from log_info2;
- OK
- 12.3.33.66 23 12
- 23.44.56.6 23 12
- 12.22.33.4 23 12
- 8.78.99.4 23 12
- 233.23.211.2 23 12
- Time taken: 0.069 seconds, Fetched: 5 row(s)
- # When data is loaded into a table, no transformation is applied to it.
- # A LOAD operation simply copies the data to the location corresponding to the Hive table; loading into a partition automatically creates a directory under the table.
Partition-based query statement:
- SELECT day_table.* FROM day_table WHERE day_table.dt >= '2008-08-08';
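Predicates on multiple partition columns can be combined, so Hive prunes at every directory level; for example, against day_hour_table:

```sql
-- Prunes first on dt, then on hour;
-- only files under .../dt=2008-08-08/hour=08/ are scanned.
SELECT * FROM day_hour_table
WHERE dt = '2008-08-08' AND hour = '08';
```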
f. Statement for viewing Hive partitions:
- hive> show partitions day_hour_table;
- OK
- dt=2008-08-08/hour=08
- dt=2008-08-08/hour=09
- dt=2008-08-09/hour=09
- hive> show partitions log_info;
- OK
- times=20160222
- times=20160223
- Time taken: 0.06 seconds, Fetched: 2 row(s)
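For multi-level partitioned tables, SHOW PARTITIONS also accepts a partial partition spec to list only the matching subtree; a sketch against psn6:

```sql
-- Lists only the partitions under age=28, e.g. age=28/sex=male
show partitions psn6 partition(age=28);
```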
Hive dynamic-partition configuration (the two primary switches):
- set hive.exec.dynamic.partition=true; # enable dynamic partitioning
- set hive.exec.dynamic.partition.mode=nonstrict; # allow all partition columns to be dynamic (strict mode requires at least one static partition)
Other related parameters:
- set hive.exec.max.dynamic.partitions.pernode; # maximum number of dynamic partitions that may be created on each executing MR node (default 100)
- set hive.exec.max.dynamic.partitions; # maximum total number of dynamic partitions across all executing MR nodes (default 1000)
- set hive.exec.max.created.files; # maximum number of files all MR jobs may create (default 100000)
Data preparation:
- 1,tom,female,28,game-music-book,stu:henan-home:henan-work:beijing
- 2,jack,male,21,money-meinv,stu:wuhan-home:wuhan
- 3,lusi,male,18,shopping-music,stu:shanghai-home:beijing
Create the table:
- create table psn9
- (
- id int,
- name string,
- likes array<string>,
- address map<string,string>
- )
- partitioned by(age int,sex string)
- row format delimited
- fields terminated by ','
- collection items terminated by '-'
- map keys terminated by ':';
Dynamic-partition insert: insert into psn9 partition (age,sex) select id, name, likes, address, age, sex from psn8;
- hive> dfs -cat /usr/hive_remote/warehouse/psn9/age=18/sex=male/000000_1000;
- 3,lusi,shopping-music,stu:shanghai-home:beijing
- hive>
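In a dynamic-partition insert, the partition columns must come last in the SELECT list, in the order declared in PARTITIONED BY. Static and dynamic specs can also be mixed, with the static ones first; a sketch assuming the same psn8 source table:

```sql
-- age is fixed (static), sex is resolved per row (dynamic);
-- the static column age must NOT appear in the select list,
-- and the dynamic column sex must come last.
insert into psn9 partition (age=28, sex)
select id, name, likes, address, sex
from psn8
where age = 28;
```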
Repairing partitions: partition information is stored as metadata in the metastore (MySQL here). When an HDFS path already contains multi-level partition directories, you can create an external table over it, but because the partition metadata has not been written to MySQL, queries will return no data until the partitions are registered.
Command: msck repair table tablename;
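A minimal sketch of the repair flow, assuming partition directories such as age=18/sex=male already exist under the (hypothetical) HDFS path below:

```sql
-- External table over pre-existing partition directories;
-- at this point the metastore knows about no partitions, so
-- SELECTs return nothing.
create external table psn10
(
  id int,
  name string,
  likes array<string>,
  address map<string,string>
)
partitioned by(age int, sex string)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':'
location '/usr/hive_remote/warehouse/psn9';

-- Scans the directory tree and registers the missing partitions
-- in the metastore.
msck repair table psn10;

-- The partitions are now visible and queryable.
show partitions psn10;
```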