Note: for Hadoop 2.x HA installation and deployment, see:
Hadoop2X HA环境部署_一个人的牛牛的博客-CSDN博客
If you have not installed the JDK yet, see:
Linux系统CentOS7安装jdk_一个人的牛牛的博客-CSDN博客
If you have not installed ZooKeeper yet, see:
zookeeper单机和集群(全分布)的安装过程_一个人的牛牛的博客-CSDN博客
If you have not configured passwordless SSH yet, see:
Linux配置免密登录单机和全分布_一个人的牛牛的博客-CSDN博客
Cluster node layout:
Master nodes | Worker nodes |
hadoop01 | hadoop03 |
hadoop02 | hadoop04 |
For connecting to the nodes with a terminal tool, see:
MobaXterm_Portable的简单使用_一个人的牛牛的博客-CSDN博客_mobaxterm portable和installer区别
Enter the directory containing the installation package and run:
tar -zxvf hadoop-3.1.3.tar.gz -C /training/
vi ~/.bash_profile
#hadoop
export HADOOP_HOME=/training/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Make the environment variables take effect:
source ~/.bash_profile
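To confirm the profile edit took effect, a quick sketch that checks the Hadoop directories actually landed on PATH (the install prefix is the one used throughout this guide):

```shell
# Append Hadoop's bin and sbin to PATH, then verify they are present.
HADOOP_HOME=/training/hadoop-3.1.3
PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH OK" ;;
  *) echo "PATH missing $HADOOP_HOME/bin" ;;
esac
```

After `source ~/.bash_profile`, `hadoop version` should also resolve from any directory.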
Enter the etc/hadoop directory under the Hadoop install, i.e. /training/hadoop-3.1.3/etc/hadoop, and run:
vi hadoop-env.sh
Add:
export JAVA_HOME=/training/jdk1.8.0_171
In the same directory (/training/hadoop-3.1.3/etc/hadoop), run:
vi core-site.xml
Add:
<configuration>
  <!-- Default filesystem: the HA nameservice id, not a single host -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://HAhadoop01</value>
  </property>
  <!-- Base directory for Hadoop temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/training/hadoop-3.1.3/tmp</value>
  </property>
  <!-- ZooKeeper quorum used for automatic failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
</configuration>
In the same directory (/training/hadoop-3.1.3/etc/hadoop), run:
vi hdfs-site.xml
Add:
<configuration>
  <!-- Logical name of the HA nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>HAhadoop01</value>
  </property>
  <!-- The two NameNode ids within the nameservice -->
  <property>
    <name>dfs.ha.namenodes.HAhadoop01</name>
    <value>HAhadoop02,HAhadoop03</value>
  </property>
  <!-- RPC and HTTP addresses of NameNode HAhadoop02 (runs on hadoop01) -->
  <property>
    <name>dfs.namenode.rpc-address.HAhadoop01.HAhadoop02</name>
    <value>hadoop01:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.HAhadoop01.HAhadoop02</name>
    <value>hadoop01:9870</value>
  </property>
  <!-- RPC and HTTP addresses of NameNode HAhadoop03 (runs on hadoop02) -->
  <property>
    <name>dfs.namenode.rpc-address.HAhadoop01.HAhadoop03</name>
    <value>hadoop02:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.HAhadoop01.HAhadoop03</name>
    <value>hadoop02:9870</value>
  </property>
  <!-- JournalNode quorum that stores the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop01:8485;hadoop02:8485/HAhadoop01</value>
  </property>
  <!-- Local directory where each JournalNode keeps its edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/training/hadoop-3.1.3/journal</value>
  </property>
  <!-- Enable automatic failover via ZKFC -->
  <property>
    <name>dfs.ha.automatic-failover.enabled.HAhadoop01</name>
    <value>true</value>
  </property>
  <!-- Client-side proxy that locates the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.HAhadoop01</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing: try sshfence first, then fall back to an always-succeed shell -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <!-- DataNode block storage -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/training/hadoop-3.1.3/data</value>
  </property>
  <!-- NameNode metadata storage -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/training/hadoop-3.1.3/name</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
Enter the Hadoop install directory and use mkdir to create the tmp, journal, and logs directories.
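The three directories can be created in one step; a sketch assuming the install prefix used above (override HADOOP_HOME to try it somewhere else):

```shell
# Create the working directories referenced by core-site.xml and hdfs-site.xml.
HADOOP_HOME="${HADOOP_HOME:-/training/hadoop-3.1.3}"
mkdir -p "$HADOOP_HOME/tmp" "$HADOOP_HOME/journal" "$HADOOP_HOME/logs"
ls -d "$HADOOP_HOME/tmp" "$HADOOP_HOME/journal" "$HADOOP_HOME/logs"
```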
In the same directory (/training/hadoop-3.1.3/etc/hadoop), run:
vi mapred-site.xml
Add:
<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
In the same directory (/training/hadoop-3.1.3/etc/hadoop), run:
vi yarn-site.xml
Add:
<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Logical cluster id shared by both ResourceManagers -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn</value>
  </property>
  <!-- Ids of the two ResourceManagers -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- rm1 runs on hadoop01, rm2 on hadoop02 -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop02</value>
  </property>
  <!-- ZooKeeper quorum for RM state storage and leader election -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
  <!-- Auxiliary shuffle service required by MapReduce -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
In the same directory (/training/hadoop-3.1.3/etc/hadoop), run:
vi workers
Add:
hadoop02
hadoop03
Add:
export JAVA_HOME=/training/jdk1.8.0_171
2.11.1: scp
scp -r hadoop-3.1.3/ root@hadoop02:/training/
scp -r hadoop-3.1.3/ root@hadoop03:/training/
2.11.2: rsync
xsync /training/hadoop-3.1.3
For the xsync distribution script, see:
jps、kafka、zookeeper群起脚本和rsync文件分发脚本(超详细)_一个人的牛牛的博客-CSDN博客
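If xsync is not set up, a minimal stand-in is a loop over the worker hosts. Shown here as a dry run that only prints each scp command (the hostnames are the ones used in this guide; run `$cmd` instead of echoing it to actually copy):

```shell
# Dry run: print the copy command for each remote node.
SRC=/training/hadoop-3.1.3
for host in hadoop02 hadoop03; do
  cmd="scp -r $SRC root@$host:/training/"
  echo "$cmd"   # replace this echo with: $cmd
done
```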
On each ZooKeeper node, enter ZooKeeper's bin directory and run:
zkServer.sh start
Verify with jps (a QuorumPeerMain process should appear).
For a script that starts ZooKeeper on all nodes at once, see:
jps、kafka、zookeeper群起脚本和rsync文件分发脚本(超详细)_一个人的牛牛的博客-CSDN博客
#Start the JournalNode on every node (run once; do not re-run)
#(in Hadoop 3 you can also use: hdfs --daemon start journalnode)
hadoop-daemon.sh start journalnode


#Format HDFS (on hadoop01; run once, do not re-run)
hdfs namenode -format

#Copy hadoop-3.1.3/tmp to hadoop02 under /training/hadoop-3.1.3/
scp -r /training/hadoop-3.1.3/tmp/ root@hadoop02:/training/hadoop-3.1.3/

#Format the failover state in ZooKeeper (run once; do not re-run)
hdfs zkfc -formatZK
#The log should contain: Successfully created /hadoop-ha/HAhadoop01 in ZK.

#Stop the JournalNode on every node
hadoop-daemon.sh stop journalnode


#Start the ZKFC on hadoop01 and hadoop02
hadoop-daemon.sh start zkfc

#Start the Hadoop cluster from hadoop01
start-all.sh
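Once the cluster is up, the HA state can be checked with `hdfs haadmin` and `yarn rmadmin`, using the NameNode ids (HAhadoop02, HAhadoop03) and ResourceManager ids (rm1, rm2) configured above. Shown as a dry run that prints each check; run the commands directly on a cluster node:

```shell
# Dry run: list the HA health checks (run each command on the cluster itself).
for check in \
  "hdfs haadmin -getServiceState HAhadoop02" \
  "hdfs haadmin -getServiceState HAhadoop03" \
  "yarn rmadmin -getServiceState rm1" \
  "yarn rmadmin -getServiceState rm2"
do
  echo "$check"   # expect one NameNode/RM to report active, the other standby
done
```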
Open hadoop01:9870 in a browser.
If you have not mapped hostnames to IP addresses, use ip:9870 instead, e.g. 192.168.12.137:9870.