
Detailed Steps for Building a ZooKeeper-Based Hadoop High-Availability Cluster

Contents

I. Basic concepts

II. Cluster layout

III. Hadoop HA cluster configuration steps

1. Extract hadoop-3.1.3.tar.gz to /opt/soft/ on the first VM

2. Rename the directory and change its owner and group

3. Map the four VMs' IPs in the Windows hosts file

4. Edit the environment variables and the Hadoop configuration files

(1) hadoop-env.sh

(2) workers

(3) core-site.xml

(4) hdfs-site.xml

(5) mapred-site.xml

(6) yarn-site.xml

5. Copy hadoop to the other three VMs

6. Distribute the environment variables

7. Reload the environment variables and verify the installation on all four VMs

IV. Starting the Hadoop cluster for the first time

1. Start ZooKeeper before the HA services

2. Start JournalNode on three machines

3. Format the NameNode on the first machine

4. Start the NameNode on the first machine

5. Sync the NameNode metadata to the second machine

6. Start the NameNode on the second machine

7. Check the NameNode state on each machine: both are standby

8. Stop all DFS-related services

9. Format ZooKeeper

10. Verify with zkCli.sh

11. Start DFS

12. Check the NameNode states

13. Check via the web UI

14. Install the failover fencing tool on every VM

15. Open port 8088 on a ResourceManager host

16. Check the ResourceManager states

17. Start YARN

18. Shut down the cluster


For installing the ZooKeeper cluster, see the post "Detailed Steps for Building a ZooKeeper High-Availability Cluster". Note that the hostnames differ between the two posts; the mapping is ant161=ant165, ant162=ant166, ant163=ant167, ant164=ant168.

I. Basic Concepts

The role of the JournalNode: in an HA deployment, the active NameNode writes its edit log to a quorum of JournalNodes, and the standby NameNode replays those edits to stay in sync.

The role of the DFSZKFailoverController (ZKFC) process in a Hadoop cluster: it runs alongside each NameNode, monitors whether that NameNode is alive, and uses ZooKeeper to elect a new active NameNode when the current one fails.

II. Cluster Layout

|                         | ant161     | ant162     | ant163     | ant164 |
|-------------------------|------------|------------|------------|--------|
| NameNode                | ✓          | ✓          |            |        |
| DataNode                | ✓          | ✓          | ✓          | ✓      |
| NodeManager             | ✓          | ✓          | ✓          | ✓      |
| ResourceManager         |            |            | ✓          | ✓      |
| JournalNode             | ✓          | ✓          | ✓          |        |
| DFSZKFailoverController | ✓          | ✓          |            |        |
| ZooKeeper               | zookeeper0 | zookeeper1 | zookeeper2 |        |
| JobHistory              | ✓          |            |            |        |

The JournalNodes keep the NameNodes in sync; the DFSZKFailoverController processes monitor whether their NameNode is alive.

III. Hadoop HA Cluster Configuration Steps

1. Extract hadoop-3.1.3.tar.gz to /opt/soft/ on the first VM

[root@ant161 install]# tar -zxf ./hadoop-3.1.3.tar.gz -C /opt/soft/

2. Rename the directory and change its owner and group

[root@ant161 soft]# mv ./hadoop-3.1.3/ hadoop313
[root@ant161 soft]# chown -R root:root ./hadoop313/

3. Map the four VMs' IPs in the Windows hosts file

Edit the hosts file under C:\Windows\System32\drivers\etc and add the IP-to-hostname mappings for the four VMs.
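The original post does not list the actual IP addresses. A hypothetical mapping (the 192.168.153.x addresses below are placeholders; substitute your VMs' real addresses) would look like:

```
192.168.153.161 ant161
192.168.153.162 ant162
192.168.153.163 ant163
192.168.153.164 ant164
```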

4. Edit the environment variables and the Hadoop configuration files

vim /etc/profile
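The post does not show what is added to /etc/profile. A minimal sketch, assuming the install paths used elsewhere in this post (/opt/soft/jdk180 and /opt/soft/hadoop313), would be:

```shell
# Java and Hadoop environment; paths match the ones used in this post.
# Adjust if your layout differs.
export JAVA_HOME=/opt/soft/jdk180
export HADOOP_HOME=/opt/soft/hadoop313
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# bin holds the hadoop/hdfs/yarn commands; sbin holds start-dfs.sh etc.
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```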

(1) hadoop-env.sh

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/opt/soft/jdk180
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

(2) workers

List the hostnames of all four VMs:

ant161
ant162
ant163
ant164

(3) core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://gky</value>
    <description>Logical name of the cluster; must match dfs.nameservices in hdfs-site.xml</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/soft/hadoop313/tmpdata</value>
    <description>Local Hadoop temp directory on the NameNode</description>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
    <description>Default static user for the web UIs</description>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
    <description></description>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
    <description></description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Read/write buffer size: 128 KB</description>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>ant161:2181,ant162:2181,ant163:2181</value>
    <description></description>
  </property>
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>10000</value>
    <description>Timeout for Hadoop's ZooKeeper connections: 10 s</description>
  </property>
</configuration>

(4) hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Replication factor for each HDFS block</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/soft/hadoop313/data/dfs/name</value>
    <description>Directory on the NameNode for HDFS namespace metadata</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/soft/hadoop313/data/dfs/data</value>
    <description>Physical storage location of data blocks on each DataNode</description>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>ant161:9869</value>
    <description></description>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>gky</value>
    <description>Nameservice ID of the cluster; must match core-site.xml</description>
  </property>
  <property>
    <name>dfs.ha.namenodes.gky</name>
    <value>nn1,nn2</value>
    <description>gky is the logical cluster name; this maps the two NameNode logical IDs</description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.gky.nn1</name>
    <value>ant161:9000</value>
    <description>RPC address of namenode1</description>
  </property>
  <property>
    <name>dfs.namenode.http-address.gky.nn1</name>
    <value>ant161:9870</value>
    <description>HTTP address of namenode1</description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.gky.nn2</name>
    <value>ant162:9000</value>
    <description>RPC address of namenode2</description>
  </property>
  <property>
    <name>dfs.namenode.http-address.gky.nn2</name>
    <value>ant162:9870</value>
    <description>HTTP address of namenode2</description>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://ant161:8485;ant162:8485;ant163:8485/gky</value>
    <description>Shared storage for the NameNode edit log (the JournalNode quorum)</description>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/soft/hadoop313/data/journaldata</value>
    <description>Local directory where each JournalNode stores its data</description>
  </property>
  <!-- failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
    <description>Enable automatic NameNode failover</description>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.gky</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    <description>Class that implements client-side failover</description>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
    <description>Fencing method used to prevent split-brain</description>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
    <description>The sshfence mechanism requires passwordless SSH</description>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
    <description>Disable HDFS permission checking</description>
  </property>
  <property>
    <name>dfs.image.transfer.bandwidthPerSec</name>
    <value>1048576</value>
    <description>1 MB/s</description>
  </property>
  <property>
    <name>dfs.block.scanner.volume.bytes.per.second</name>
    <value>1048576</value>
    <description>0 disables the DataNode block scanner; a positive value is the number of bytes per second it will try to scan from each volume</description>
  </property>
</configuration>

(5) mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Job execution framework: local, classic or yarn</description>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*:/opt/soft/hadoop313/share/hadoop/yarn/lib/*</value>
    <description></description>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>ant161:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>ant161:19888</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Working memory for map-stage tasks</description>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
    <description>Working memory for reduce-stage tasks</description>
  </property>
</configuration>

(6) yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
    <description>Enable ResourceManager HA</description>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrcabc</value>
    <description>ID of this YARN cluster</description>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
    <description>Logical IDs of the ResourceManagers</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>ant163</value>
    <description>Host for rm1</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>ant164</value>
    <description>Host for rm2</description>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>ant163:8088</value>
    <description></description>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>ant164:8088</value>
    <description></description>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>ant161:2181,ant162:2181,ant163:2181</value>
    <description>ZooKeeper quorum address</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Auxiliary service required to run MapReduce jobs</description>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/opt/soft/hadoop313/tmpdata/yarn/local</value>
    <description>NodeManager local storage directory</description>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/opt/soft/hadoop313/tmpdata/yarn/log</value>
    <description>NodeManager local log directory</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
    <description>Memory (MB) this NodeManager can allocate to containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Number of CPU cores this NodeManager can use</description>
  </property>
  <!-- The settings below suit this test environment; remove them in production -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
    <description></description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description></description>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
    <description>How many seconds to retain aggregated logs</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
    <description></description>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*:/opt/soft/hadoop313/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>

5. Copy hadoop to the other three VMs

[root@ant161 soft]# scp -r ./hadoop313/ root@ant162:/opt/soft/
[root@ant161 soft]# scp -r ./hadoop313/ root@ant163:/opt/soft/
[root@ant161 soft]# scp -r ./hadoop313/ root@ant164:/opt/soft/
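The three copies above can also be collapsed into one loop. A sketch (the SCP variable is only there so the loop can be dry-run, e.g. by shadowing scp with a stub):

```shell
# Copy a directory to the other three nodes in one loop.
SCP="${SCP:-scp}"

distribute() {   # usage: distribute <path>
  for h in ant162 ant163 ant164; do
    $SCP -r "$1" "root@$h:/opt/soft/"
  done
}
```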

6. Distribute the environment variables

[root@ant161 soft]# scp /etc/profile root@ant162:/etc/
profile                100% 2202     1.4MB/s   00:00
[root@ant161 soft]# scp /etc/profile root@ant163:/etc/
profile                100% 2202     1.4MB/s   00:00
[root@ant161 soft]# scp /etc/profile root@ant164:/etc/
profile

7. Reload the environment variables and verify the installation on all four VMs

source /etc/profile
hadoop
hadoop version

IV. Starting the Hadoop Cluster for the First Time

1. Start ZooKeeper before the HA services

[root@ant161 soft]# /opt/shell/zkop.sh start
------------ ant161 zookeeper -----------
JMX enabled by default
Using config: /opt/soft/zk345/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
------------ ant162 zookeeper -----------
JMX enabled by default
Using config: /opt/soft/zk345/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
------------ ant163 zookeeper -----------
JMX enabled by default
Using config: /opt/soft/zk345/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@ant161 soft]# /opt/shell/showjps.sh
---------------- ant161 service status -----------------
2532 QuorumPeerMain
2582 Jps
---------------- ant162 service status -----------------
2283 QuorumPeerMain
2335 Jps
---------------- ant163 service status -----------------
2305 Jps
2259 QuorumPeerMain
---------------- ant164 service status -----------------
2233 Jps
[root@ant161 soft]# /opt/shell/zkop.sh status
------------ ant161 zookeeper -----------
JMX enabled by default
Using config: /opt/soft/zk345/bin/../conf/zoo.cfg
Mode: follower
------------ ant162 zookeeper -----------
JMX enabled by default
Using config: /opt/soft/zk345/bin/../conf/zoo.cfg
Mode: leader
------------ ant163 zookeeper -----------
JMX enabled by default
Using config: /opt/soft/zk345/bin/../conf/zoo.cfg
Mode: follower
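The helper scripts /opt/shell/zkop.sh and /opt/shell/showjps.sh used above are not included in the post. A minimal sketch of zkop.sh, assuming passwordless SSH between the nodes and the ZooKeeper install path /opt/soft/zk345, might look like:

```shell
# Run a zkServer.sh action on every ZooKeeper node.
# RSH exists only so the loop can be dry-run by shadowing ssh with a stub.
ZK_HOSTS="ant161 ant162 ant163"
RSH="${RSH:-ssh}"

zkop() {   # usage: zkop start|stop|status
  for h in $ZK_HOSTS; do
    echo "------------ $h zookeeper -----------"
    $RSH "$h" "/opt/soft/zk345/bin/zkServer.sh $1"
  done
}
```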

2. Start JournalNode on three machines

[root@ant161 soft]# hdfs --daemon start journalnode
WARNING: /opt/soft/hadoop313/logs does not exist. Creating.
[root@ant162 soft]# hdfs --daemon start journalnode
WARNING: /opt/soft/hadoop313/logs does not exist. Creating.
[root@ant163 soft]# hdfs --daemon start journalnode
WARNING: /opt/soft/hadoop313/logs does not exist. Creating.

3. Format the NameNode on the first machine

[root@ant161 soft]# hdfs namenode -format

4. Start the NameNode on the first machine

[root@ant161 hadoop]# hdfs --daemon start namenode

5. Sync the NameNode metadata to the second machine

[root@ant162 soft]# hdfs namenode -bootstrapStandby

6. Start the NameNode on the second machine

[root@ant162 soft]# hdfs --daemon start namenode

7. Check the NameNode state on each machine: both are standby

[root@ant161 soft]# hdfs haadmin -getServiceState nn1
standby
[root@ant161 soft]# hdfs haadmin -getServiceState nn2
standby

8. Stop all DFS-related services

[root@ant161 soft]# stop-dfs.sh

9. Format ZooKeeper

[root@ant161 soft]# hdfs zkfc -formatZK

10. Verify with zkCli.sh

Running zkCli.sh and listing the root znode should now show the hadoop-ha node created by the previous step:

[root@ant161 soft]# zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /

11. Start DFS

[root@ant161 soft]# start-dfs.sh

12. Check the NameNode states

[root@ant161 soft]# hdfs haadmin -getServiceState nn1
standby
[root@ant161 soft]# hdfs haadmin -getServiceState nn2
active
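Querying each ID by hand scales poorly; a small helper can report whichever NameNode is currently active. This is a sketch assuming the nn1/nn2 IDs from hdfs-site.xml; the HDFS variable exists only so the function can be dry-run by shadowing the hdfs command with a stub.

```shell
# Print the ID of the active NameNode (nn1/nn2 as configured in hdfs-site.xml).
HDFS="${HDFS:-hdfs}"

active_namenode() {
  for nn in nn1 nn2; do
    if [ "$($HDFS haadmin -getServiceState "$nn" 2>/dev/null)" = "active" ]; then
      echo "$nn"
      return 0
    fi
  done
  return 1   # no active NameNode found
}
```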

13. Check via the web UI

Open http://ant161:9870 and http://ant162:9870 (the NameNode HTTP addresses from hdfs-site.xml) in a browser; one reports active, the other standby.

14. Install the failover fencing tool on every VM

[root@ant161 soft]# yum install psmisc -y

(psmisc provides fuser, which the sshfence mechanism requires.)

To test failover, kill the active NameNode; here 7218 is the PID of the active NameNode:

[root@ant162 soft]# kill -9 7218

The killed node's web page becomes unreachable, and the other NameNode's page switches to active.

If you then restart the stopped NameNode and refresh both pages, the machine that just became active stays active, while the restarted one comes back in the standby state.

15. Start YARN

[root@ant161 soft]# start-yarn.sh

16. Open port 8088 on a ResourceManager host

Whichever ResourceManager is active, the browser is automatically redirected to that machine's hostname.

17. Check the ResourceManager states

[root@ant161 soft]# yarn rmadmin -getServiceState rm1
active
[root@ant161 soft]# yarn rmadmin -getServiceState rm2
standby

18. Shut down the cluster

(1) Stop DFS

[root@ant161 soft]# stop-dfs.sh

(2) Stop YARN

[root@ant161 soft]# stop-yarn.sh

(3) Stop the JournalNodes

[root@ant161 soft]# hdfs --daemon stop journalnode

(4) Stop ZooKeeper

[root@ant161 soft]# /opt/shell/zkop.sh stop