
Hadoop 3.3.1 Distributed Cluster Setup


I. Basic Configuration

1. Host mapping (add the same entries on all three nodes)

vim /etc/hosts
192.168.200.154 master
192.168.200.155 worker1
192.168.200.156 worker2
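A quick way to confirm the mappings resolve on a node (a minimal check using getent):

for h in master worker1 worker2; do getent hosts $h; done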

Generate a key pair and copy it to every node to enable passwordless SSH (press Enter through the prompts):

ssh-keygen

ssh-copy-id master

ssh-copy-id worker1

ssh-copy-id worker2
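With the keys distributed, passwordless login can be verified from master (a minimal sketch; BatchMode makes ssh fail instead of prompting for a password):

for h in master worker1 worker2; do ssh -o BatchMode=yes $h hostname; done

Each iteration should print the remote hostname without a password prompt.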

yum install -y vim

2. Disable the firewall and SELinux (on all three nodes)

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
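Note that setenforce 0 only switches SELinux to permissive mode until the next reboot. To make the change persistent, a common approach is to edit /etc/selinux/config (a sketch, assuming the stock SELINUX=enforcing line is present):

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config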

II. Install the JDK and Hadoop

1. Extract the packages (the tarballs are assumed to be in the current directory)

mkdir /usr/local/jdk
tar -zxf jdk-8u152-linux-x64.tar.gz -C /usr/local/jdk/
tar -zxf hadoop-3.3.1.tar.gz -C /usr/local/
mv /usr/local/hadoop-3.3.1/ /usr/local/hadoop

2. Edit the environment variables (repeat on all three nodes, or copy /etc/profile to the workers)

vim /etc/profile
# JAVA_HOME
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_152/
export PATH=$PATH:$JAVA_HOME/bin
# Hadoop
export HADOOP_HOME=/usr/local/hadoop/
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

3. Apply and verify the environment variables

source /etc/profile
java -version
hadoop version

III. Configure the Cluster

1. hadoop-env.sh (set JAVA_HOME explicitly, and define the service users, which Hadoop 3.x requires when the daemons run as root)

vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_152/
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

2-1 core-site.xml (for every file in this section, add the <property> blocks between the existing <configuration> and </configuration> tags)

cd /usr/local/hadoop/etc/hadoop/
vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <!-- Location for temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
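For reference, fs.defaultFS (hdfs://master:9000) is the RPC endpoint HDFS clients connect to; once the cluster is up, a fully qualified path such as the following resolves against it:

hdfs dfs -ls hdfs://master:9000/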
2-2 hdfs-site.xml
vim hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<!-- Where the NameNode stores its metadata; older versions used dfs.name.dir -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop/name</value>
</property>
<!-- Where the DataNode stores its blocks; older versions used dfs.data.dir -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop/data</value>
</property>
<!-- Disable HDFS permission checking -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
<!-- Which node serves the NameNode web UI; default 0.0.0.0:9870 (in Hadoop 3.x the port changed from 50070 to 9870) -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>master:9870</value>
</property>
<!-- Which node runs the SecondaryNameNode; default 0.0.0.0:9868 -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master:9868</value>
</property>
2-3 yarn-site.xml
vim yarn-site.xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<!-- How NodeManagers serve shuffle data to reducers -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<!-- Disable the virtual-memory check -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
2-4 mapred-site.xml
vim mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<!-- If you set the property above, set this one too, or MapReduce jobs will fail to find their main class. This is the classpath for MR applications. -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*</value>
</property>
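Before distributing the files, it can be worth confirming they are still well-formed XML. A minimal sketch using xmllint (assuming the libxml2 tools are installed):

cd /usr/local/hadoop/etc/hadoop/
for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
  xmllint --noout $f && echo "$f OK"
done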
2-5 workers (delete the existing localhost line and add the following two lines; start-dfs.sh and start-yarn.sh use this file to decide where to start DataNodes and NodeManagers)
vim workers
worker1
worker2

3. Copy the configuration to the other nodes with scp

These commands assume the JDK, Hadoop, and the /etc/profile changes are already in place on worker1 and worker2 (for example, copied over with scp -r).

scp /usr/local/hadoop/etc/hadoop/core-site.xml worker1:/usr/local/hadoop/etc/hadoop/core-site.xml
scp /usr/local/hadoop/etc/hadoop/core-site.xml worker2:/usr/local/hadoop/etc/hadoop/core-site.xml
scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml worker2:/usr/local/hadoop/etc/hadoop/hdfs-site.xml
scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml worker1:/usr/local/hadoop/etc/hadoop/hdfs-site.xml
scp /usr/local/hadoop/etc/hadoop/yarn-site.xml worker2:/usr/local/hadoop/etc/hadoop/yarn-site.xml
scp /usr/local/hadoop/etc/hadoop/yarn-site.xml worker1:/usr/local/hadoop/etc/hadoop/yarn-site.xml
scp /usr/local/hadoop/etc/hadoop/mapred-site.xml worker1:/usr/local/hadoop/etc/hadoop/mapred-site.xml
scp /usr/local/hadoop/etc/hadoop/mapred-site.xml worker2:/usr/local/hadoop/etc/hadoop/mapred-site.xml
scp /usr/local/hadoop/etc/hadoop/workers worker1:/usr/local/hadoop/etc/hadoop/workers
scp /usr/local/hadoop/etc/hadoop/workers worker2:/usr/local/hadoop/etc/hadoop/workers
scp /usr/local/hadoop/etc/hadoop/hadoop-env.sh worker1:/usr/local/hadoop/etc/hadoop/hadoop-env.sh
scp /usr/local/hadoop/etc/hadoop/hadoop-env.sh worker2:/usr/local/hadoop/etc/hadoop/hadoop-env.sh
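Equivalently, the whole list collapses into a loop (same files and paths as above):

for node in worker1 worker2; do
  for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml workers hadoop-env.sh; do
    scp /usr/local/hadoop/etc/hadoop/$f $node:/usr/local/hadoop/etc/hadoop/$f
  done
done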

4. Format the NameNode (on master, once)

hdfs namenode -format
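Format only once. If a cluster that already holds data must ever be re-formatted, stop the services and clear the storage directories on every node first, otherwise the DataNodes will fail to register due to mismatched cluster IDs. A sketch (this destroys all HDFS data; the paths are the ones configured above):

stop-all.sh
for h in master worker1 worker2; do
  ssh $h 'rm -rf /usr/local/hadoop/tmp /usr/local/hadoop/name /usr/local/hadoop/data'
done
hdfs namenode -format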

5. Start the services (on master)

start-all.sh
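After startup, jps should show NameNode, SecondaryNameNode, and ResourceManager on master, and DataNode and NodeManager on each worker. A quick check from master:

for h in master worker1 worker2; do echo "== $h =="; ssh $h jps; done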

6. Browse to http://<master-ip>:9870 (HDFS NameNode web UI) and http://<master-ip>:8088 (YARN ResourceManager web UI)
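As a final smoke test, write to HDFS and run the bundled example job (a sketch; the examples jar ships inside the Hadoop distribution):

hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -ls /
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 2 10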

 
