
Custom Delimiters in Hive (with Detailed Hadoop and Hive Installation)

  1. When data files exported to HDFS or the local filesystem are loaded directly into Hive, they sometimes contain special characters. With a given single-character field delimiter, or the default record delimiter, the data that ends up in Hive may not be what we expect, so we need to customize Hive's delimiters.
  2. Hive also only supports single-character delimiters by default. To support multi-character delimiters, you can either rewrite the input format class as shown below, or use a few settings and declare the delimiter when creating the table. Both approaches are covered below. Before that, let's prepare the environment and walk through installing Hadoop and Hive.

Table of Contents

1. Environment Preparation

1.1 Hadoop Installation

1.2 Hive Installation

2. Using Hive's Built-in Multi-Character Delimiter Support

3. Custom Hive Delimiters

3.1 Add the following dependencies to the Maven project's pom.xml

3.2 The custom TextInputFormat class

3.3 The custom LineRecordReader class

3.4 Usage

1. Environment Preparation

The environment uses CentOS 7 with the JDK already installed; we assume the cluster itself is already set up (SSH, NTP, firewall, networking, and so on). Hive depends on Hadoop, so Hadoop is installed first and then Hive. This walkthrough uses Hadoop 2.7.7 and Hive 1.2.2.
Cluster plan:

  Node   | HDFS                | YARN                          | Hive
  node1  | NameNode, DataNode  | ResourceManager, NodeManager  | yes
  node2  | DataNode            | NodeManager                   |
  node3  | DataNode            | NodeManager                   |

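The configuration files below refer to the machines by hostname, so every node must be able to resolve node1, node2, and node3. A minimal /etc/hosts sketch (the IP addresses are placeholders; use your own):

  192.168.56.101  node1
  192.168.56.102  node2
  192.168.56.103  node3
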
1.1 Hadoop Installation

1. Download and extract

  wget http://archive.apache.org/dist/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz
  tar -zxf hadoop-2.7.7.tar.gz -C /opt/

2. Configure the Hadoop environment variables
Edit the current user's environment file with vim ~/.bash_profile, add the following, and apply it with source ~/.bash_profile:

  # Hadoop
  export HADOOP_HOME=/opt/hadoop-2.7.7
  export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

After that, run hadoop version in a terminal to check whether the configuration took effect; if it did, the Hadoop version information is printed.

3. Create the required directories

  # The directory names are arbitrary, as long as they match the configuration files below
  mkdir -p /opt/hadoop/dfs/dn
  mkdir -p /opt/hadoop/dfs/nn
  mkdir -p /opt/hadoop/dfs/snn
  mkdir -p /opt/hadoop/yarn/container-logs

 

4. Edit the hadoop-env.sh configuration file

vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

Configure it as follows; the essential setting is JAVA_HOME:

  export JAVA_HOME=/usr/local/zulu8
  export HADOOP_HOME=/opt/hadoop-2.7.7
  export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
  export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
  export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
  # User settings; if the daemons are started as root, these must be set to root
  export HDFS_NAMENODE_USER=root
  export HDFS_DATANODE_USER=root
  export HDFS_SECONDARYNAMENODE_USER=root
  export YARN_RESOURCEMANAGER_USER=root
  export YARN_NODEMANAGER_USER=root
5. Edit the core-site.xml configuration file

vim $HADOOP_HOME/etc/hadoop/core-site.xml

  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://node1:8020</value>
    </property>
    <!-- Trash retention time in minutes; 0 disables the trash -->
    <property>
      <name>fs.trash.interval</name>
      <value>60</value>
    </property>
    <property>
      <name>fs.trash.checkpoint.interval</name>
      <value>0</value>
    </property>
    <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>*</value>
    </property>
  </configuration>
6. Edit the hdfs-site.xml configuration file

vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml

  <configuration>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///opt/hadoop/dfs/nn</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///opt/hadoop/dfs/dn</value>
    </property>
    <property>
      <name>dfs.namenode.checkpoint.dir</name>
      <value>file:///opt/hadoop/dfs/snn</value>
    </property>
    <!-- Block replication factor; the default is 3 -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.http-address</name>
      <value>node1:50070</value>
    </property>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>node1:50090</value>
    </property>
    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
  </configuration>
7. Edit the mapred-site.xml configuration file

vim $HADOOP_HOME/etc/hadoop/mapred-site.xml

  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>node1:10020</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>node1:19888</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.webapp.https.address</name>
      <value>node1:19890</value>
    </property>
  </configuration>
8. Edit the yarn-site.xml configuration file

vim $HADOOP_HOME/etc/hadoop/yarn-site.xml

  <configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>node1</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!-- NodeManager local directory -->
    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>file:///opt/hadoop/yarn</value>
    </property>
    <!-- NodeManager container log directory -->
    <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>file:///opt/hadoop/yarn/container-logs</value>
    </property>
    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.log.server.url</name>
      <value>http://node1:19888/jobhistory/logs/</value>
    </property>
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>yarn.application.classpath</name>
      <value>
        $HADOOP_HOME/etc/hadoop,
        $HADOOP_HOME/share/hadoop/common/*,
        $HADOOP_HOME/share/hadoop/common/lib/*,
        $HADOOP_HOME/share/hadoop/hdfs/*,
        $HADOOP_HOME/share/hadoop/hdfs/lib/*,
        $HADOOP_HOME/share/hadoop/mapreduce/*,
        $HADOOP_HOME/share/hadoop/mapreduce/lib/*,
        $HADOOP_HOME/share/hadoop/yarn/*,
        $HADOOP_HOME/share/hadoop/yarn/lib/*
      </value>
    </property>
  </configuration>
9. Edit the slaves configuration file

  # vim $HADOOP_HOME/etc/hadoop/workers
  vim $HADOOP_HOME/etc/hadoop/slaves

  node1
  node2
  node3
10. Distribute Hadoop to the other nodes

  scp -r $HADOOP_HOME/ root@node2:/opt/
  scp -r $HADOOP_HOME/ root@node3:/opt/

11. Format the NameNode

$HADOOP_HOME/bin/hdfs namenode -format
12. Start or stop Hadoop

  # $HADOOP_HOME/sbin/start-dfs.sh
  # $HADOOP_HOME/sbin/start-yarn.sh
  $HADOOP_HOME/sbin/start-all.sh
  # to stop everything:
  $HADOOP_HOME/sbin/stop-all.sh

13. Check and test
List the running Java processes with jps.

(Screenshot: jps output showing the Hadoop daemons)

Open http://node1:50070 in a browser to see the Hadoop cluster details.
Run the bundled WordCount example to verify that the cluster can run and compute jobs:

  hadoop fs -mkdir -p /tmp/input
  hadoop fs -put $HADOOP_HOME/README.txt /tmp/input
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar wordcount /tmp/input /tmp/output
  hadoop fs -tail /tmp/output/part-r-00000

(Screenshot: WordCount test output)

 

1.2 Hive Installation

1. Download and extract

  wget http://archive.apache.org/dist/hive/hive-1.2.2/apache-hive-1.2.2-bin.tar.gz
  tar -zxf apache-hive-1.2.2-bin.tar.gz -C /opt/

2. Configure the Hive environment variables

vim ~/.bash_profile

Add the following, save and exit, then source the file to apply it:

  # Hive
  export HIVE_HOME=/opt/apache-hive-1.2.2-bin
  export PATH=$PATH:$HIVE_HOME/bin
3. MySQL needs an account that allows remote access; then create a database for the Hive metastore:

  mysql> use mysql;
  mysql> select host,user from user;
  mysql> grant all privileges on *.* to 'hive'@'%' identified by '<password for remote MySQL access>' with grant option;
  mysql> flush privileges;
  mysql> create database metastore;
  mysql> exit;

Add the MySQL JDBC driver to $HIVE_HOME/lib:

wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar -P $HIVE_HOME/lib/
4. Copy and rename the Hive configuration template files

  cd $HIVE_HOME/conf
  cp hive-env.sh.template hive-env.sh
  cp hive-default.xml.template hive-site.xml
  cp hive-log4j.properties.template hive-log4j.properties
  cp hive-exec-log4j.properties.template hive-exec-log4j.properties
  cp beeline-log4j.properties.template beeline-log4j.properties
5. Edit hive-env.sh as follows:

  export JAVA_HOME=/usr/local/zulu8
  export HADOOP_HOME=/opt/hadoop-2.7.7
  export HIVE_HOME=/opt/apache-hive-1.2.2-bin
  export HIVE_CONF_DIR=$HIVE_HOME/conf
6. Configure hive-log4j.properties
Find the line log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter, comment it out, and change it to:

  #log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter
  log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
7. Edit hive-site.xml

vim $HIVE_HOME/conf/hive-site.xml

  <configuration>
    <!-- HDFS directory where Hive stores its data (databases, tables, etc.) -->
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/hive/warehouse</value>
      <description>location of default database for the warehouse</description>
    </property>
    <!-- Host that the remote HiveServer2 Thrift service binds to -->
    <property>
      <name>hive.server2.thrift.bind.host</name>
      <value>node1</value>
      <description>Bind host on which to run the HiveServer2 Thrift service.</description>
    </property>
    <property>
      <name>hive.metastore.uris</name>
      <!-- <value/> -->
      <value>thrift://node1:9083</value>
      <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
    </property>
    <!-- JDBC connection used to store the metastore metadata -->
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <!--<value>jdbc:derby:;databaseName=metastore_db;create=true</value>-->
      <value>jdbc:mysql://node1:3306/metastore?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false</value>
      <description>
        JDBC connect string for a JDBC metastore.
        To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
        For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
      </description>
    </property>
    <!-- MySQL JDBC driver class -->
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <!--<value>org.apache.derby.jdbc.EmbeddedDriver</value>-->
      <value>com.mysql.jdbc.Driver</value>
      <description>Driver class name for a JDBC metastore</description>
    </property>
    <!-- MySQL username -->
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <!--<value>APP</value>-->
      <value>hive</value>
      <description>Username to use against metastore database</description>
    </property>
    <!-- MySQL password -->
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <!--<value>mine</value>-->
      <value>123456</value>
      <description>password to use against metastore database</description>
    </property>
    <!-- Automatically create the schema on startup -->
    <property>
      <name>datanucleus.schema.autoCreateAll</name>
      <value>true</value>
      <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
    </property>
    <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
      <description>
        Enforce metastore schema version consistency.
        True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
        schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
        proper metastore schema migration. (Default)
        False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
      </description>
    </property>
    <!-- Scratch space for Hive jobs -->
    <property>
      <name>hive.exec.local.scratchdir</name>
      <!-- <value>${system:java.io.tmpdir}/${system:user.name}</value> -->
      <value>/tmp/hive/exec/${user.name}</value>
      <description>Local scratch space for Hive jobs</description>
    </property>
    <property>
      <name>hive.downloaded.resources.dir</name>
      <!-- <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value> -->
      <value>/tmp/hive/${hive.session.id}_resources</value>
      <description>Temporary local directory for added resources in the remote file system.</description>
    </property>
    <property>
      <name>hive.querylog.location</name>
      <!-- <value>${system:java.io.tmpdir}/${system:user.name}</value> -->
      <value>/tmp/hive/log</value>
      <description>Location of Hive run time structured log file</description>
    </property>
    <property>
      <name>hive.server2.logging.operation.log.location</name>
      <!-- <value>${system:java.io.tmpdir}/${system:user.name}/operation_logs</value> -->
      <value>/tmp/hive/server2/${user.name}/operation_logs</value>
      <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
    </property>
    <!-- Dynamic partition mode: nonstrict or strict -->
    <property>
      <name>hive.exec.dynamic.partition.mode</name>
      <value>nonstrict</value>
      <description>
        In strict mode, the user must specify at least one static partition
        in case the user accidentally overwrites all partitions.
        In nonstrict mode all partitions are allowed to be dynamic.
      </description>
    </property>
    <property>
      <name>hive.server2.authentication</name>
      <value>NONE</value>
      <description>
        Expects one of [nosasl, none, ldap, kerberos, pam, custom].
        Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
        (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL: Raw transport
      </description>
    </property>
    <property>
      <name>hive.server2.thrift.client.user</name>
      <!--<value>anonymous</value>-->
      <value>hive</value>
      <description>Username to use against thrift client</description>
    </property>
    <property>
      <name>hive.server2.thrift.client.password</name>
      <!--<value>anonymous</value>-->
      <value>hive</value>
      <description>Password to use against thrift client</description>
    </property>
  </configuration>
8. Initialize Hive
This step creates the metastore tables in the MySQL metastore database.

$HIVE_HOME/bin/schematool -dbType mysql -initSchema

9. Start Hive
If remote connections are needed (JDBC, database tools, beeline, and so on), both of these services must be running:

  hive --service metastore >/dev/null 2>&1 &
  hive --service hiveserver2 >/dev/null 2>&1 &
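
As a quick sanity check of the remote path, you can connect with beeline and run a simple query such as show databases; (a sketch: 10000 is the default HiveServer2 Thrift port, and the hive/hive credentials match the hive.server2.thrift.client.* values in the hive-site.xml above):

  beeline -u jdbc:hive2://node1:10000 -n hive -p hive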
10. Use and quit

  hive
  hive> show databases;
  OK
  default
  Time taken: 0.786 seconds, Fetched: 1 row(s)
  hive> quit;

 

2. Using Hive's Built-in Multi-Character Delimiter Support

By default, Hive only supports single-character field delimiters. However, it ships with a contrib jar that supports defining delimiters as regular expressions or multi-character strings.

1. Find the contrib jar that ships with Hive

find / -name hive-contrib-*.jar

2. Register the jar found above in hive-site.xml

  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///opt/apache-hive-1.2.2-bin/lib/hive-contrib-1.2.2.jar</value>
    <description>Added by tiger.zeng on 20120202.These JAR file are available to all users for all jobs</description>
  </property>

With this configuration there is no need to restart the Hive services; re-entering the Hive CLI is enough, and the setting is permanent. The jar can also be loaded temporarily, for the current session only, by running the following after entering the Hive CLI:

hive> add jar file:///opt/apache-hive-1.2.2-bin/lib/hive-contrib-1.2.2.jar;
3. Usage
Prepare the following data, delimited by |#|:

  3324|#|003|#|20190816 09:16:18|#|0.00|#|2017-11-13 12:00:00
  3330|#|009|#|20190817 15:21:03|#|1234.56|#|2017-11-14 12:01:00

Declare the delimiter when creating the table, then load and query the data:

  drop table if exists split_test;
  CREATE TABLE split_test(
    id INT COMMENT 'loan query ID',
    number STRING COMMENT 'serial number',
    `date` STRING COMMENT 'query return date',
    loanamount DOUBLE COMMENT 'loan amount range',
    createtime TIMESTAMP COMMENT 'creation time'
  )
  ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
  WITH SERDEPROPERTIES ("field.delim"="|#|")
  STORED AS TEXTFILE;
  -- load the data
  LOAD DATA LOCAL INPATH '/root/split_test.txt' OVERWRITE INTO TABLE split_test;

The query result looks like this:

  -- query the data
  hive> select * from split_test;
  OK
  3324    003    20190816 09:16:18    0.0    2017-11-13 12:00:00
  3330    009    20190817 15:21:03    1234.56    2017-11-14 12:01:00
  Time taken: 0.11 seconds, Fetched: 2 row(s)
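
The contrib jar also contains a regex-based SerDe, so the same file could instead be described with a regular expression. A minimal sketch for the data above (the table name split_test_regex is just an example; note that the contrib RegexSerDe only produces STRING columns, so the numeric fields would have to be cast in queries):

  drop table if exists split_test_regex;
  CREATE TABLE split_test_regex(
    id STRING,
    number STRING,
    `date` STRING,
    loanamount STRING,
    createtime STRING
  )
  ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
  WITH SERDEPROPERTIES ("input.regex" = "(.*)\\|#\\|(.*)\\|#\\|(.*)\\|#\\|(.*)\\|#\\|(.*)")
  STORED AS TEXTFILE;
  LOAD DATA LOCAL INPATH '/root/split_test.txt' OVERWRITE INTO TABLE split_test_regex;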

 

3. Custom Hive Delimiters

The custom code is written in Java; create a new Maven project with IDEA.

3.1 Add the following dependencies to the Maven project's pom.xml

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <java.version>1.8</java.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.7</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.7.7</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.7.7</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-core</artifactId>
      <version>2.7.7</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hive</groupId>
      <artifactId>hive-exec</artifactId>
      <version>1.2.2</version>
      <!--<exclusions>
        <exclusion>
          <groupId>org.pentaho</groupId>
          <artifactId>pentaho-aggdesigner-algorithm</artifactId>
        </exclusion>
      </exclusions>-->
    </dependency>
  </dependencies>

3.2 The custom TextInputFormat class

package yore.hive;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;
import org.apache.hadoop.mapred.*;

import java.io.IOException;

/**
 * Custom TextInputFormat class.
 * This class is a rewrite of the TextInputFormat source copied from org.apache.hadoop.mapred.
 *
 * <pre>
 * When Hive imports a file from HDFS it does the following:
 * It calls the InputFormat to split the file into records; each record is one row.
 * It calls the SerDe's Deserializer to split each row into fields.
 *
 * See the class TextInputFormat under org.apache.hadoop.mapred in hadoop-mapreduce-client-core-2.7.7.jar.
 * Before creating the table, a custom multi-character record delimiter can be enabled in the Hive CLI with:
 * set textinputformat.record.delimiter=<custom record delimiter string>;
 * Add the dependencies listed above to the Maven project's pom.xml.
 * </pre>
 *
 * Created by yore on 2019/4/3 17:56
 */
public class SQPTextInputFormat extends FileInputFormat<LongWritable, Text> implements JobConfigurable {

    private CompressionCodecFactory compressionCodecs = null;
    // "US-ASCII" "ISO-8859-1" "UTF-8" "UTF-16BE" "UTF-16LE" "UTF-16"
    private final static String defaultEncoding = "UTF-8";
    private String encoding = null;

    public void configure(JobConf jobConf) {
        this.compressionCodecs = new CompressionCodecFactory(jobConf);
    }

    @Override
    protected boolean isSplitable(FileSystem fs, Path filename) {
        CompressionCodec codec = this.compressionCodecs.getCodec(filename);
        if (null == codec) {
            return true;
        }
        return codec instanceof SplittableCompressionCodec;
    }

    public RecordReader<LongWritable, Text> getRecordReader(InputSplit inputSplit, JobConf jobConf, Reporter reporter) throws IOException {
        reporter.setStatus(inputSplit.toString());
        String delimiter = jobConf.get("textinputformat.record.linesep");
        this.encoding = jobConf.get("textinputformat.record.encoding", defaultEncoding);
        byte[] recordDelimiterBytes = null;
        if (null != delimiter) { // Charsets.UTF_8
            recordDelimiterBytes = delimiter.getBytes(this.encoding);
        }
        return new SQPRecordReader(jobConf, (FileSplit) inputSplit, recordDelimiterBytes);
    }
}

3.3 The custom LineRecordReader class

package yore.hive;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Seekable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.*;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.util.LineReader;

import java.io.IOException;
import java.io.InputStream;

/**
 * Custom LineRecordReader class.
 * This class is a rewrite of the LineRecordReader source copied from org.apache.hadoop.mapred.
 *
 * Created by yore on 2019/4/3 18:03
 */
public class SQPRecordReader implements RecordReader<LongWritable, Text> {

    private static final Log LOG = LogFactory.getLog(SQPRecordReader.class.getName());

    private CompressionCodecFactory compressionCodecs = null;
    private long start;
    private long pos;
    private long end;
    private LineReader in;
    private FSDataInputStream fileIn;
    private final Seekable filePosition;
    int maxLineLength;
    private CompressionCodec codec;
    private Decompressor decompressor;
    // field separator
    private String FieldSep;
    private static final String defaultFSep = "\001";
    // "US-ASCII" "ISO-8859-1" "UTF-8" "UTF-16BE" "UTF-16LE" "UTF-16"
    private final static String defaultEncoding = "UTF-8";
    private String encoding = null;

    public SQPRecordReader(Configuration job, FileSplit split) throws IOException {
        this(job, split, null);
    }

    public SQPRecordReader(Configuration job, FileSplit split, byte[] recordDelimiter) throws IOException {
        this.maxLineLength = job.getInt("mapreduce.input.linerecordreader.line.maxlength", 2147483647);
        this.FieldSep = job.get("textinputformat.record.fieldsep", defaultFSep);
        this.encoding = job.get("textinputformat.record.encoding", defaultEncoding);
        this.start = split.getStart();
        this.end = (this.start + split.getLength());
        Path file = split.getPath();
        this.compressionCodecs = new CompressionCodecFactory(job);
        this.codec = this.compressionCodecs.getCodec(file);
        FileSystem fs = file.getFileSystem(job);
        this.fileIn = fs.open(file);
        if (isCompressedInput()) {
            this.decompressor = CodecPool.getDecompressor(this.codec);
            if ((this.codec instanceof SplittableCompressionCodec)) {
                SplitCompressionInputStream cIn = ((SplittableCompressionCodec) this.codec).createInputStream(this.fileIn, this.decompressor, this.start, this.end, SplittableCompressionCodec.READ_MODE.BYBLOCK);
                this.in = new LineReader(cIn, job, recordDelimiter);
                this.start = cIn.getAdjustedStart();
                this.end = cIn.getAdjustedEnd();
                this.filePosition = cIn;
            } else {
                this.in = new LineReader(this.codec.createInputStream(this.fileIn, this.decompressor), job, recordDelimiter);
                this.filePosition = this.fileIn;
            }
        } else {
            this.fileIn.seek(this.start);
            this.in = new LineReader(this.fileIn, job, recordDelimiter);
            this.filePosition = this.fileIn;
        }
        if (this.start != 0L) {
            this.start += this.in.readLine(new Text(), 0, maxBytesToConsume(this.start));
        }
        this.pos = this.start;
    }

    public SQPRecordReader(InputStream in, long offset, long endOffset, int maxLineLength) {
        this(in, offset, endOffset, maxLineLength, null);
    }

    public SQPRecordReader(InputStream in, long offset, long endOffset, int maxLineLength, byte[] recordDelimiter) {
        this.maxLineLength = maxLineLength;
        this.in = new LineReader(in, recordDelimiter);
        this.start = offset;
        this.pos = offset;
        this.end = endOffset;
        this.filePosition = null;
    }

    public SQPRecordReader(InputStream in, long offset, long endOffset, Configuration job) throws IOException {
        this(in, offset, endOffset, job, null);
    }

    public SQPRecordReader(InputStream in, long offset, long endOffset, Configuration job, byte[] recordDelimiter) throws IOException {
        this.maxLineLength = job.getInt("mapreduce.input.linerecordreader.line.maxlength", 2147483647);
        this.in = new LineReader(in, job, recordDelimiter);
        this.start = offset;
        this.pos = offset;
        this.end = endOffset;
        this.filePosition = null;
    }

    public LongWritable createKey() {
        return new LongWritable();
    }

    public Text createValue() {
        return new Text();
    }

    private boolean isCompressedInput() {
        return this.codec != null;
    }

    private int maxBytesToConsume(long pos) {
        return isCompressedInput() ? 2147483647 : (int) Math.min(2147483647L, this.end - pos);
    }

    private long getFilePosition() throws IOException {
        long retVal;
        if ((isCompressedInput()) && (null != this.filePosition))
            retVal = this.filePosition.getPos();
        else {
            retVal = this.pos;
        }
        return retVal;
    }

    public boolean next(LongWritable longWritable, Text text) throws IOException {
        while (getFilePosition() <= this.end) {
            longWritable.set(this.pos);
            int newSize = this.in.readLine(text, this.maxLineLength, Math.max(maxBytesToConsume(this.pos), this.maxLineLength));
            if (newSize == 0) {
                return false;
            }
            // Re-decode the record if a non-default encoding was requested
            if (encoding.compareTo(defaultEncoding) != 0) {
                String str = new String(text.getBytes(), 0, text.getLength(), encoding);
                text.set(str);
            }
            // Replace the custom field separator with Hive's default \001 separator
            if (FieldSep.compareTo(defaultFSep) != 0) {
                String replacedValue = text.toString().replace(FieldSep, defaultFSep);
                text.set(replacedValue);
            }
            this.pos += newSize;
            if (newSize < this.maxLineLength) {
                return true;
            }
            LOG.info("Skipped line of size " + newSize + " at pos " + (this.pos - newSize));
        }
        return false;
    }

    public long getPos() throws IOException {
        return this.pos;
    }

    public void close() throws IOException {
        try {
            if (this.in != null)
                this.in.close();
        } finally {
            if (this.decompressor != null)
                CodecPool.returnDecompressor(this.decompressor);
        }
    }

    public float getProgress() throws IOException {
        if (this.start == this.end) {
            return 0.0F;
        }
        return Math.min(1.0F, (float) (getFilePosition() - this.start) / (float) (this.end - this.start));
    }
}

3.4 Usage

After starting the Hive CLI, first test and inspect the current Hive setup with the following commands:

  hive> create table test1(id int);
  OK
  Time taken: 0.442 seconds
  hive> show tables;
  OK
  test1
  Time taken: 0.031 seconds, Fetched: 1 row(s)
  hive> describe extended test1;
  OK
  id                      int
  Detailed Table Information   Table(tableName:test1, dbName:default, owner:root, createTime:1554317591, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null)], location:hdfs://node1:8020/hive/warehouse/test1, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{transient_lastDdlTime=1554317591}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)
  Time taken: 0.126 seconds, Fetched: 3 row(s)
  hive>

From the output above we can see the classes Hive uses for input and output:

  • inputFormat: org.apache.hadoop.mapred.TextInputFormat
  • outputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

Next, we set Hive's input format to the custom class we just developed. Once this is done, inspecting the table again will show our class as the inputFormat.

3.4.1 Package the source code

Since this is a Maven project, it can be packaged directly with mvn clean package. In IDEA, you can instead click Maven Projects --> Lifecycle --> package in the panel on the right.

3.4.2 Add the built jar to the lib directories

Build the program into a jar and put it under the lib directories of both Hive and Hadoop. If there are multiple nodes, the jar must be uploaded to every node, for example as shown below.
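
A sketch of the copy step (the jar name hive-separator-1.0.jar is hypothetical; use whatever your Maven build actually produces under target/):

  # on node1
  cp target/hive-separator-1.0.jar $HIVE_HOME/lib/
  cp target/hive-separator-1.0.jar $HADOOP_HOME/lib/
  # on the other nodes
  scp target/hive-separator-1.0.jar root@node2:/opt/hadoop-2.7.7/lib/
  scp target/hive-separator-1.0.jar root@node3:/opt/hadoop-2.7.7/lib/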

3.4.3 Set the encoding, the custom field delimiter, and the custom record delimiter

After adding the jar, re-enter the Hive CLI and run the following commands:

  -- "US-ASCII" "ISO-8859-1" "UTF-8" "UTF-16BE" "UTF-16LE" "UTF-16"
  set textinputformat.record.encoding=UTF-8;
  -- field delimiter
  set textinputformat.record.fieldsep=,;
  -- record (line) delimiter
  set textinputformat.record.linesep=|+|;

These commands set the encoding of the input file to UTF-8, the field delimiter to a comma, and the record delimiter to |+|.

3.4.4 Create a local file with test data

For example, create a file hive_separator.txt in the home directory with the following test data:

  3,Yore|+|9,Yuan|+|11,東

3.4.5 Create a test table

INPUTFORMAT is the fully qualified name of our custom TextInputFormat class: yore.hive.SQPTextInputFormat

  create table test (
    id string,
    name string
  ) stored as
  INPUTFORMAT 'yore.hive.SQPTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

3.4.6 Load the test data into the table

  load data local inpath '/root/hive_separator.txt'
  overwrite into table test;

3.4.7 Query the data

  hive> select * from test;
  OK
  3    Yore
  9    Yuan
  11   東

(Screenshot: query result using the user-defined separator)

 

 
