Hive supports multi-character field delimiters in version 0.14 and later; see:
https://cwiki.apache.org/confluence/display/Hive/MultiDelimitSerDe
Fayson has previously covered how to use multi-character delimiters in Hive on a CDH5 (C5) environment in "Hive多分隔符支持示例" (Hive multi-delimiter support example). This article describes how to enable multi-character delimiter support for Hive on CDH6.
Test environment:
1. RedHat 7.2
2. CDH 6.2.0
3. Hive 2.1
The goal is to load a data file that uses multiple characters as its field delimiter into a Hive table. Sample data, with "@#$" as the field delimiter:
test1@#$test1name@#$test2value
test2@#$test2name@#$test2value
test3@#$test3name@#$test4value
The sample data above will be loaded into a Hive table (multi_delimiter_test) with three string columns: s1, s2, and s3.
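Conceptually, MultiDelimitSerDe splits each row on the full delimiter string rather than on a single character. A minimal Python sketch of that split (the helper function is illustrative only, not part of Hive):

```python
# Illustrative only: mimic how one data line is split on the
# multi-character delimiter "@#$" into the columns (s1, s2, s3).
DELIM = "@#$"

def split_row(line: str) -> list[str]:
    """Split one data line on the full delimiter string."""
    return line.rstrip("\n").split(DELIM)

print(split_row("test1@#$test1name@#$test2value"))
# ['test1', 'test1name', 'test2value']
```

The point is that the whole string "@#$" is the separator; a single-character delimiter (as in the default LazySimpleSerDe configuration) could not express this.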
1. In Cloudera Manager, open the Hive service, click Configuration, and search for "aux". In the Hive Auxiliary JARs Directory field, enter /opt/cloudera/parcels/CDH/lib/hive/contrib, save the change, and restart the service.
2. Prepare a data file that uses the multi-character delimiter and upload it to the target HDFS directory:
[root@cdh1 ~]# ll -h multi_de.txt
-rw-r--r-- 1 root root 1.1G Jan 6 23:14 multi_de.txt
[root@cdh1 ~]# tail -10 multi_de.txt
test2949@#$test2949name@#$test2950value
test2950@#$test2950name@#$test2951value
test2951@#$test2951name@#$test2952value
test2952@#$test2952name@#$test2953value
test2953@#$test2953name@#$test2954value
test2954@#$test2954name@#$test2955value
test2955@#$test2955name@#$test2956value
test2956@#$test2956name@#$test2957value
test2957@#$test2957name@#$test2958value
test2958@#$test2958name@#$test2959value
[root@cdh1 ~]# hadoop fs -put multi_de.txt /test/
[root@cdh1 ~]# hadoop fs -ls /test/
Found 1 items
-rw-r--r-- 3 root supergroup 1079408772 2020-01-06 23:33 /test/multi_de.txt
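The rows in the sample file follow a simple pattern, visible in the tail output above: each line is testN@#$testNname@#$test(N+1)value. A sketch of how such rows could be generated (the row range here is taken from the tail output; the actual file's full range is not shown in the article):

```python
# Sketch: generate sample rows in the pattern seen in the file's tail.
# Only the last 10 rows (N = 2949..2958) are reproduced here.
DELIM = "@#$"

def make_row(n: int) -> str:
    """Build one data line matching the observed pattern."""
    return f"test{n}{DELIM}test{n}name{DELIM}test{n + 1}value"

lines = [make_row(n) for n in range(2949, 2959)]
print(lines[-1])
# test2958@#$test2958name@#$test2959value
```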
3. Create an external table over the prepared multi-delimiter file:
create external table multi_delimiter_test(
  s1 string,
  s2 string,
  s3 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES ("field.delim"="@#$")
stored as textfile
location '/test';
4. Test the table with a few queries:
0: jdbc:hive2://localhost:10000/> select * from multi_delimiter_test limit 10;
0: jdbc:hive2://localhost:10000/> select count(*) from multi_delimiter_test;
1. An error is reported when executing HQL:
Error: Error while compiling statement: FAILED: RuntimeException MetaException(message:java.lang.ClassNotFoundException Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found) (state=42000,code=40000)
This happens because the Hive Auxiliary JARs Directory was not configured, so the SerDe class cannot be found. Set the auxiliary JARs directory in Cloudera Manager to /opt/cloudera/parcels/CDH/lib/hive/contrib, restart the Hive service, and run the query again.