For details, see: Big Data Technology: Hadoop (Introduction)
1) Test cluster plan:
|      | Server bigdata02   | Server bigdata03             | Server bigdata04            |
|------|--------------------|------------------------------|-----------------------------|
| HDFS | NameNode, DataNode | DataNode                     | DataNode, SecondaryNameNode |
| YARN | NodeManager        | ResourceManager, NodeManager | NodeManager                 |
Note: prefer installing from offline packages. CDH is moving to a paid license, so fewer and fewer people will be using it...
If HDFS storage runs low, the DataNodes need additional disks.
1) Add a disk to each DataNode node and mount it.
2) Configure multiple data directories in hdfs-site.xml; mind the access permissions on the newly mounted disks (see the sketch after the snippet below).
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///${hadoop.tmp.dir}/dfs/data1,file:///hd2/dfs/data2,file:///hd3/dfs/data3,file:///hd4/dfs/data4</value>
</property>
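The DataNode refuses to use a directory it cannot write to, so the new mount must be owned by the HDFS user before it is added to dfs.datanode.data.dir. A minimal sketch of the mount-and-permission step (the device /dev/sdb1, mount point /hd2, and user hadoop are illustrative assumptions):

mkfs.ext4 /dev/sdb1                                    # format the new disk
mkdir -p /hd2
mount /dev/sdb1 /hd2                                   # mount it at the directory used in hdfs-site.xml
echo '/dev/sdb1 /hd2 ext4 defaults 0 0' >> /etc/fstab  # make the mount survive reboots
chown -R hadoop:hadoop /hd2                            # otherwise the DataNode cannot create dfs/data2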
3) After adding disks, rebalance the data.
Start the balancer:
sbin/start-balancer.sh -threshold 10
The parameter 10 means that disk utilization may differ by at most 10 percentage points between nodes; adjust it to your situation. (Note that the balancer evens data out between DataNodes; balancing between the directories of a single node requires the Disk Balancer introduced in Hadoop 3.x.)
Stop the balancer:
sbin/stop-balancer.sh
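To see the per-node utilization figures that the balancer compares against the threshold, one option (a sanity check, not part of the original steps) is:

hdfs dfsadmin -report | grep -E 'Name:|DFS Used%'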
1) Hadoop itself does not ship with LZO compression support, so we use the open-source hadoop-lzo component published by Twitter. hadoop-lzo has to be compiled against hadoop and lzo; the build steps are as follows.
Adding LZO support to Hadoop

0. Prepare the environment
maven (download and install it, configure the environment variables, and add the Aliyun mirror to settings.xml)
gcc-c++, zlib-devel, autoconf, automake, libtool: install them via yum
yum -y install gcc-c++ lzo-devel zlib-devel autoconf automake libtool

1. Download, build, and install LZO

wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz
tar -zxvf lzo-2.10.tar.gz
cd lzo-2.10
./configure --prefix=/usr/local/hadoop/lzo/
make
make install

2. Build the hadoop-lzo source

2.1 Download the hadoop-lzo source from https://github.com/twitter/hadoop-lzo/archive/master.zip
2.2 After extracting it, edit pom.xml:
<hadoop.current.version>2.7.2</hadoop.current.version>
2.3 Declare two temporary environment variables:
export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
export LIBRARY_PATH=/usr/local/hadoop/lzo/lib
2.4 Compile: enter hadoop-lzo-master and run the Maven build
mvn package -Dmaven.test.skip=true
2.5 In target/, hadoop-lzo-0.4.21-SNAPSHOT.jar is the compiled hadoop-lzo component (the exact version suffix depends on the source snapshot; the steps below use hadoop-lzo-0.4.20.jar)
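As a quick sanity check that the build produced a usable jar (an optional step, not in the original guide), list the codec classes it contains:

jar tf target/hadoop-lzo-0.4.21-SNAPSHOT.jar | grep Codec
# expect com/hadoop/compression/lzo/LzoCodec.class and com/hadoop/compression/lzo/LzopCodec.class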

2) Place the compiled hadoop-lzo-0.4.20.jar into hadoop-2.7.2/share/hadoop/common/
[hadoop@bigdata02 common]$ pwd
/opt/module/hadoop-2.7.2/share/hadoop/common
[hadoop@bigdata02 common]$ ls
hadoop-lzo-0.4.20.jar
3) Sync hadoop-lzo-0.4.20.jar to bigdata03 and bigdata04
[hadoop@bigdata02 common]$ xsync hadoop-lzo-0.4.20.jar
4) Add the following to core-site.xml to enable LZO compression
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec,
            com.hadoop.compression.lzo.LzoCodec,
            com.hadoop.compression.lzo.LzopCodec
        </value>
    </property>

    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>

5) Sync core-site.xml to bigdata03 and bigdata04
[hadoop@bigdata02 hadoop]$ xsync core-site.xml
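To confirm the running configuration actually picks up the new codec list (an optional sanity check, not part of the original steps):

hdfs getconf -confKey io.compression.codecs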
6) Start and inspect the cluster
[hadoop@bigdata02 hadoop-2.7.2]$ sbin/start-dfs.sh
[hadoop@bigdata03 hadoop-2.7.2]$ sbin/start-yarn.sh
1) Create an index for the LZO file. An LZO-compressed file is splittable only if it has an index, so we must build one by hand; without it, the LZO file produces a single split.
hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer bigtable.lzo
2) Test
(1) Upload bigtable.lzo (150 MB) into /input on the cluster
[hadoop@bigdata02 module]$ hadoop fs -mkdir /input
[hadoop@bigdata02 module]$ hadoop fs -put bigtable.lzo /input
(2) Run the wordcount program
[hadoop@bigdata02 module]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output1
Note: this LZO file is larger than the default 128 MB block size, so it occupies two blocks; yet the MapReduce job reports only one split. The steps below are required to make the file splittable.
(3) Build an index for the uploaded LZO file
[hadoop@bigdata02 module]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo
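If the indexer succeeds, an index file appears next to the data file (a quick check, not part of the original steps):

hadoop fs -ls /input
# expect both /input/bigtable.lzo and /input/bigtable.lzo.index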
(4) Run the WordCount program again
[hadoop@bigdata02 module]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output2
An interviewer may ask: given 10 TB of data, how long does it take to read it all, and to write it all? The answer should be: we first run benchmarks to measure the cluster's read and write throughput and its compute capacity...
1) Test HDFS write performance
Test: write ten 128 MB files to the HDFS cluster
[hadoop@bigdata02 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 128MB

19/05/02 11:45:23 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
19/05/02 11:45:23 INFO fs.TestDFSIO: Date & time: Thu May 02 11:45:23 CST 2019
19/05/02 11:45:23 INFO fs.TestDFSIO: Number of files: 10
19/05/02 11:45:23 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 11:45:23 INFO fs.TestDFSIO: Throughput mb/sec: 10.69751115716984
19/05/02 11:45:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 14.91699504852295
19/05/02 11:45:23 INFO fs.TestDFSIO: IO rate std deviation: 11.160882132355928
19/05/02 11:45:23 INFO fs.TestDFSIO: Test exec time sec: 52.315

The key figure is Throughput mb/sec: 10.69751115716984, i.e. an average write throughput of roughly 10.70 MB/s.
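For interpreting these numbers (based on how TestDFSIO aggregates its per-file results; a summary, not output of the tool):

Throughput mb/sec      = total MB processed / sum of per-file I/O times
Average IO rate mb/sec = arithmetic mean of the per-file rates

So the ten map tasks above spent roughly 1280 MB / 10.70 MB/s ≈ 120 s of cumulative I/O time, even though the wall-clock test time was only 52 s, because the tasks ran in parallel.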
2) Test HDFS read performance
Test: read ten 128 MB files from the HDFS cluster
[hadoop@bigdata02 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 128MB

19/05/02 11:56:36 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
19/05/02 11:56:36 INFO fs.TestDFSIO: Date & time: Thu May 02 11:56:36 CST 2019
19/05/02 11:56:36 INFO fs.TestDFSIO: Number of files: 10
19/05/02 11:56:36 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 11:56:36 INFO fs.TestDFSIO: Throughput mb/sec: 16.001000062503905
19/05/02 11:56:36 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.202795028686523
19/05/02 11:56:36 INFO fs.TestDFSIO: IO rate std deviation: 4.881590515873911
19/05/02 11:56:36 INFO fs.TestDFSIO: Test exec time sec: 49.116

3) Delete the data generated by the benchmark
[hadoop@bigdata02 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -clean
4) Benchmark MapReduce with the Sort program
(1) Use RandomWriter to generate random numbers: each node runs 10 map tasks, and each map produces roughly 1 GB of random binary data
[hadoop@bigdata02 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar randomwriter random-data
(2) Run the Sort program
[hadoop@bigdata02 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar sort random-data sorted-data
(3) Verify that the data is actually sorted
[hadoop@bigdata02 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data
1) HDFS parameter tuning in hdfs-site.xml
dfs.namenode.handler.count = 20 * log2(Cluster Size); for example, with a cluster of 8 nodes, set it to 60.
Purpose: communication between the DataNodes and the NameNode needs a certain degree of concurrency. If it is too low, DataNodes queue up waiting; if it is too high, the cluster lacks the resources to back the threads. A reasonably chosen level of concurrency is needed...
The number of Namenode RPC server threads that listen to requests from clients. If dfs.namenode.servicerpc-address is not configured then Namenode RPC server threads listen to requests from all nodes.

The NameNode maintains a pool of worker threads that handles concurrent heartbeats from DataNodes and concurrent metadata operations from clients. For large clusters, or clusters with many clients, the default value of 10 for dfs.namenode.handler.count usually needs to be increased. The rule of thumb is 20 times the base-2 logarithm of the cluster size, i.e. 20 * log2(N), where N is the cluster size.
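A convenient way to compute the recommended value for a given cluster size (a helper one-liner, not from the original guide; it assumes Python 3 is available on the node):

python3 -c 'import math; print(int(20 * math.log2(8)))'   # prints 60 for an 8-node cluster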
2) YARN parameter tuning in yarn-site.xml