Hadoop 3.x HA Installation and Deployment

Contents

I. Prerequisites
1.1 JDK installed
1.2 ZooKeeper cluster installed
1.3 Passwordless SSH configured
1.4 Machine preparation

II. Installation and Deployment
2.1 Upload
2.2 Extract
2.3 Configure environment variables (on every node)
2.4 Edit hadoop-env.sh
2.5 Edit core-site.xml
2.6 Edit hdfs-site.xml
2.7 Edit mapred-site.xml
2.8 Edit yarn-site.xml
2.9 Edit workers
2.10 Edit yarn-env.sh
2.11 Copy to the other nodes

III. Verification
3.1 Start the ZooKeeper cluster (on every node)
3.2 Start Hadoop
3.3 Check processes with jps
3.4 Verify port 9870 in a browser
3.5 Verify port 8088 in a browser


Note: for a Hadoop 2.x HA deployment, see:

Hadoop2X HA环境部署_一个人的牛牛的博客-CSDN博客

I. Prerequisites

1.1 JDK installed

If it is not installed yet, see:

Linux系统CentOS7安装jdk_一个人的牛牛的博客-CSDN博客

1.2 ZooKeeper cluster installed

If it is not installed yet, see:

zookeeper单机和集群(全分布)的安装过程_一个人的牛牛的博客-CSDN博客

1.3 Passwordless SSH configured

If it is not configured yet, see:

Linux配置免密登录单机和全分布_一个人的牛牛的博客-CSDN博客

1.4 Machine preparation

Master nodes    Slave nodes
hadoop01        hadoop03
hadoop02        hadoop04
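The hostnames above must resolve on every node. A minimal /etc/hosts sketch, assuming example addresses on the 192.168.12.x network used later in this article (replace the IPs with your own):

# /etc/hosts (IP addresses are assumptions, adjust to your environment)
192.168.12.137 hadoop01
192.168.12.138 hadoop02
192.168.12.139 hadoop03
192.168.12.140 hadoop04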

II. Installation and Deployment

2.1 Upload

Upload the hadoop-3.1.3.tar.gz package to the servers (for example with MobaXterm); see:

MobaXterm_Portable的简单使用_一个人的牛牛的博客-CSDN博客_mobaxterm portable和installer区别

2.2 Extract

Go to the directory containing the package and run:

tar -zxvf hadoop-3.1.3.tar.gz -C /training/

2.3 Configure environment variables (on every node)

vi ~/.bash_profile

#hadoop
export HADOOP_HOME=/training/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Make the variables take effect:

source ~/.bash_profile
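If the variables took effect, the hadoop command is available from any directory; a quick sanity check:

hadoop version
# should report Hadoop 3.1.3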

2.4 Edit hadoop-env.sh

Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-3.1.3/etc/hadoop, and run:

vi hadoop-env.sh

Add the following:

export JAVA_HOME=/training/jdk1.8.0_171
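Note: this guide runs everything as root (see the ssh fencing key and scp commands below). The Hadoop 3.x start/stop scripts refuse to launch daemons as root unless the corresponding *_USER variables are set, so you will likely also need lines such as the following in hadoop-env.sh (a sketch, assuming all daemons run as root):

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root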

2.5 Edit core-site.xml

In the same etc/hadoop directory, run:

vi core-site.xml

Add the following:

<configuration>
    <!-- Logical name of the HA nameservice (defined in hdfs-site.xml) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://HAhadoop01</value>
    </property>
    <!-- Base directory for Hadoop temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/training/hadoop-3.1.3/tmp</value>
    </property>
    <!-- ZooKeeper quorum used for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
    </property>
</configuration>

2.6 Edit hdfs-site.xml

In the same etc/hadoop directory, run:

vi hdfs-site.xml

Add the following:

<configuration>
    <!-- Logical name of the nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>HAhadoop01</value>
    </property>
    <!-- The two NameNodes in the nameservice -->
    <property>
        <name>dfs.ha.namenodes.HAhadoop01</name>
        <value>HAhadoop02,HAhadoop03</value>
    </property>
    <!-- RPC and HTTP addresses of NameNode HAhadoop02 (on hadoop01) -->
    <property>
        <name>dfs.namenode.rpc-address.HAhadoop01.HAhadoop02</name>
        <value>hadoop01:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.HAhadoop01.HAhadoop02</name>
        <value>hadoop01:9870</value>
    </property>
    <!-- RPC and HTTP addresses of NameNode HAhadoop03 (on hadoop02) -->
    <property>
        <name>dfs.namenode.rpc-address.HAhadoop01.HAhadoop03</name>
        <value>hadoop02:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.HAhadoop01.HAhadoop03</name>
        <value>hadoop02:9870</value>
    </property>
    <!-- JournalNodes that hold the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop01:8485;hadoop02:8485/HAhadoop01</value>
    </property>
    <!-- Local directory where the JournalNodes store the edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/training/hadoop-3.1.3/journal</value>
    </property>
    <!-- Enable automatic failover for this nameservice -->
    <property>
        <name>dfs.ha.automatic-failover.enabled.HAhadoop01</name>
        <value>true</value>
    </property>
    <!-- Client-side proxy that resolves the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.HAhadoop01</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing: try ssh first, then fall back to a no-op shell -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <!-- Local storage directories and replication factor -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/training/hadoop-3.1.3/data</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/training/hadoop-3.1.3/name</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

Then go to the Hadoop installation directory and create the tmp, journal, and logs directories with mkdir, as shown below.
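A minimal sketch of that step, using the paths configured above:

cd /training/hadoop-3.1.3
mkdir -p tmp journal logs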

2.7 Edit mapred-site.xml

In the same etc/hadoop directory, run:

vi mapred-site.xml

Add the following:

<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
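This is all the HA setup itself needs. On Hadoop 3.x, MapReduce jobs submitted to YARN sometimes fail with class-not-found errors unless HADOOP_MAPRED_HOME is visible to the ApplicationMaster and tasks; if you hit that, a commonly used optional addition is sketched below, with the path assumed to be this article's install directory:

<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/training/hadoop-3.1.3</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/training/hadoop-3.1.3</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/training/hadoop-3.1.3</value>
</property>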

2.8 Edit yarn-site.xml

In the same etc/hadoop directory, run:

vi yarn-site.xml

Add the following:

<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id and the two ResourceManager ids -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Hosts running the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop01</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop02</value>
    </property>
    <!-- ZooKeeper quorum used for ResourceManager leader election -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
    </property>
    <!-- Shuffle service for MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
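Optionally, ResourceManager HA is often paired with state recovery in ZooKeeper so that running applications survive a failover; a sketch of the two usual properties (not required for this article's setup):

<property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
</property>
<property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>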

2.9 Edit workers

In the same etc/hadoop directory, run:

vi workers

Add the following:

hadoop02
hadoop03

2.10 Edit yarn-env.sh

In the same etc/hadoop directory, edit yarn-env.sh and add:

export JAVA_HOME=/training/jdk1.8.0_171

2.11 Copy to the other nodes

2.11.1 scp

# run from /training
scp -r hadoop-3.1.3/ root@hadoop02:/training/
scp -r hadoop-3.1.3/ root@hadoop03:/training/

2.11.2 rsync

xsync /training/hadoop-3.1.3

For the xsync distribution script, see:

jps、kafka、zookeeper群起脚本和rsync文件分发脚本(超详细)_一个人的牛牛的博客-CSDN博客

III. Verification

3.1 Start the ZooKeeper cluster (on every node)

Go to ZooKeeper's bin directory and run:

zkServer.sh start

Check with jps; for group-start scripts, see:

jps、kafka、zookeeper群起脚本和rsync文件分发脚本(超详细)_一个人的牛牛的博客-CSDN博客
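Optionally, confirm each node's role (one leader, the others followers) from the same bin directory:

zkServer.sh status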

3.2 Start Hadoop

# Start the JournalNode on every node (run once, do not repeat)
hadoop-daemon.sh start journalnode

# Format HDFS (run on hadoop01; run once, do not repeat)
hdfs namenode -format

# Copy /training/hadoop-3.1.3/tmp to /training/hadoop-3.1.3/ on hadoop02
scp -r /training/hadoop-3.1.3/tmp/ root@hadoop02:/training/hadoop-3.1.3/

# Format the failover state in ZooKeeper (run once, do not repeat)
hdfs zkfc -formatZK
# The log should contain: Successfully created /hadoop-ha/HAhadoop01 in ZK.

# Stop the JournalNode on every node
hadoop-daemon.sh stop journalnode

# Start ZKFC on hadoop01 and hadoop02
hadoop-daemon.sh start zkfc

# Start the Hadoop cluster from hadoop01
start-all.sh
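After start-all.sh finishes, the HA state can also be checked from the command line, using the NameNode and ResourceManager ids configured earlier (one of each pair should report active, the other standby):

hdfs haadmin -getServiceState HAhadoop02
hdfs haadmin -getServiceState HAhadoop03
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2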

3.3 Check processes with jps
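Based on the configuration above, roughly the following processes are expected (a sketch; the exact set depends on which daemons run on each node):

hadoop01: NameNode, DFSZKFailoverController, JournalNode, ResourceManager, QuorumPeerMain
hadoop02: NameNode, DFSZKFailoverController, JournalNode, ResourceManager, DataNode, NodeManager, QuorumPeerMain
hadoop03: DataNode, NodeManager, QuorumPeerMain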

3.4 Verify port 9870 in a browser

Open hadoop01:9870 in a browser.

If no hostname-to-IP mapping is configured, use the IP address plus port 9870 instead, e.g. 192.168.12.137:9870.

3.5 Verify port 8088 in a browser

Open hadoop01:8088 in a browser (or, without hostname mapping, the IP address plus port 8088, e.g. 192.168.12.137:8088).
