
Building a Hadoop Cluster (A Detailed Step-by-Step Guide)


Hadoop is an open-source software framework under the Apache Foundation, implemented in Java. It is a platform for developing and running large-scale data processing, allowing large datasets to be processed in a distributed fashion across clusters of many machines using a simple programming model. Before you start, make sure the prerequisites for building a Hadoop cluster are already in place; the details are covered in my earlier article: http://t.csdn.cn/FzkES

Hadoop installation package (Baidu Netdisk): https://pan.baidu.com/s/12R1q8ygEnosP9pVbX5rvxg (extraction code: LZZY)

I. Upload and Extract the Hadoop Archive

1. Upload the Hadoop archive to the CentOS 7 system. You can simply drag the archive into the system (the package is on the Baidu Netdisk link above for anyone who needs it). Alternatively, you can download it from the Apache archive with wget, but that download tends to be very slow, so I still recommend the first option. The full wget command is:

    wget http://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz

2. Next, extract the Hadoop archive. Use the command tar -zxvf hadoop-3.1.3.tar.gz -C /export/server/ to unpack it into /export/server. If the extraction fails, the archive was most likely corrupted during the upload; delete the archive on the VM and upload it again.
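To double-check the result, a quick listing is enough (a minimal sanity check; what else appears under /export/server depends on what you installed earlier):

    ls /export/server                # hadoop-3.1.3 should now be listed here
    ls /export/server/hadoop-3.1.3   # expect bin, etc, sbin, share and so on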

3. As with the JDK installation earlier, create a soft link for Hadoop to make later commands more convenient.

ln -s /export/server/hadoop-3.1.3 /export/server/hadoop
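To confirm the link resolves to the right place (an optional check):

    readlink /export/server/hadoop   # should print /export/server/hadoop-3.1.3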

II. Edit the Hadoop Configuration Files

1. First, change into the directory that holds the configuration files. The etc directory under the Hadoop installation is where they all live:

    cd /export/server/hadoop/etc/hadoop

 

2. Edit the configuration file hadoop-env.sh.

Open it with the command vim hadoop-env.sh and add the following block at the top of the file.

    # Java installation path
    export JAVA_HOME=/export/server/jdk
    # Hadoop installation path
    export HADOOP_HOME=/export/server/hadoop
    # Hadoop HDFS configuration file path
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    # Hadoop YARN configuration file path
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    # Hadoop YARN log directory
    export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
    # Hadoop HDFS log directory
    export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
    # Users the Hadoop daemons start as
    export HDFS_NAMENODE_USER=root
    export HDFS_DATANODE_USER=root
    export HDFS_SECONDARYNAMENODE_USER=root
    export YARN_RESOURCEMANAGER_USER=root
    export YARN_NODEMANAGER_USER=root
    export YARN_PROXYSERVER_USER=root

Once the block has been inserted, save and quit. (To save in vim: press Esc to leave insert mode and return to normal mode, then type :wq, where w writes the file and q quits. I will publish a separate post later on vim usage and its common commands.)

3. Edit the configuration file core-site.xml, replacing its entire contents with the code below.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
      </property>
      <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
      </property>
    </configuration>

4. Edit the configuration file hdfs-site.xml, replacing its entire contents with the code below.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/nn</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
      </property>
      <property>
        <name>dfs.namenode.hosts</name>
        <value>node1,node2,node3</value>
        <description>List of permitted DataNodes.</description>
      </property>
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
      </property>
      <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/dn</value>
      </property>
    </configuration>

5. Edit the configuration file mapred-env.sh, adding the following lines at the top of the file.

    export JAVA_HOME=/export/server/jdk
    export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
    export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA

6. Edit the configuration file mapred-site.xml, replacing its entire contents with the code below.

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/data/mr-history/tmp</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/data/mr-history/done</value>
      </property>
      <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
      </property>
      <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
      </property>
      <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
      </property>
    </configuration>

7. Edit the configuration file yarn-env.sh, replacing its entire contents with the code below.

    export JAVA_HOME=/export/server/jdk
    export HADOOP_HOME=/export/server/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
    export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs

8. Edit the configuration file yarn-site.xml, replacing its entire contents with the code below.

    <?xml version="1.0"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <configuration>
      <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.log.server.url</name>
        <value>http://node1:19888/jobhistory/logs</value>
      </property>
      <property>
        <name>yarn.web-proxy.address</name>
        <value>node1:8089</value>
        <description>Proxy server hostname and port.</description>
      </property>
      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
        <description>Configuration to enable or disable log aggregation.</description>
      </property>
      <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
        <description>Directory where aggregated application logs are stored.</description>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
      </property>
      <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/nm-local</value>
        <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
      </property>
      <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/data/nm-log</value>
        <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
      </property>
      <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
        <description>Default time (in seconds) to retain log files on the NodeManager. Only applicable if log aggregation is disabled.</description>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>Shuffle service that needs to be set for MapReduce applications.</description>
      </property>
    </configuration>
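Before moving on, it is worth catching XML typos early, since a malformed *-site.xml will make the daemons fail at startup. A small sketch, assuming the xmllint tool (from the libxml2 package) is available on the system:

    cd /export/server/hadoop/etc/hadoop
    for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
        xmllint --noout "$f" && echo "$f is well-formed"
    done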

9. Insert the following lines into the workers configuration file.

    node1
    node2
    node3

III. Set Up Hadoop on All Three Nodes

1. Distribute Hadoop to the other virtual machines; this step is performed on node1 only. Change into the Hadoop installation directory with the command: cd /export/server

Distribute to node2:

    scp -r hadoop-3.1.3 node2:`pwd`/

Distribute to node3:

    scp -r hadoop-3.1.3 node3:`pwd`/
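The two copies can also be done in one short loop (equivalent, just less typing):

    cd /export/server
    for host in node2 node3; do
        scp -r hadoop-3.1.3 "$host":"$(pwd)"/
    done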

2. After the distribution finishes, create the Hadoop soft link on node2 and node3, the same way as before:

ln -s /export/server/hadoop-3.1.3 /export/server/hadoop

3. Create the working directories.

On node1, create the following directories:

    mkdir -p /data/nn
    mkdir -p /data/dn
    mkdir -p /data/nm-log
    mkdir -p /data/nm-local

On node2, create the following directories:

    mkdir -p /data/dn
    mkdir -p /data/nm-log
    mkdir -p /data/nm-local

On node3, create the following directories:

    mkdir -p /data/dn
    mkdir -p /data/nm-log
    mkdir -p /data/nm-local
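If passwordless SSH between the nodes was set up in the prerequisite article, the node2 and node3 directories can also be created remotely from node1 (a convenience sketch, assuming root SSH access works):

    for host in node2 node3; do
        ssh "$host" "mkdir -p /data/dn /data/nm-log /data/nm-local"
    done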

4. Configure the environment variables.

On node1, node2, and node3, edit /etc/profile and append the following lines at the very end of the file:

    export HADOOP_HOME=/export/server/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Note: this must be done on all three virtual machines. After saving and quitting, run source /etc/profile so the change takes effect.
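A quick check that the variables took effect (run on each node):

    source /etc/profile
    echo $HADOOP_HOME    # should print /export/server/hadoop
    hadoop version       # should report Hadoop 3.1.3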

5. Format the NameNode.

This only needs to be done on node1. Run the command: hadoop namenode -format. The hadoop command comes from $HADOOP_HOME/bin; because PATH was configured above, it can be run from any directory.
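If the format succeeded, the NameNode metadata directory configured in hdfs-site.xml should now contain a current/ subdirectory:

    ls /data/nn/current    # expect VERSION, fsimage* and related files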

IV. Start the Hadoop Cluster

1. Start the HDFS cluster; running this on node1 is enough:

    start-dfs.sh
    # To stop it, run:
    stop-dfs.sh

2. Start the YARN cluster; running this on node1 is enough:

    start-yarn.sh
    # To stop it, run:
    stop-yarn.sh

3. Start the history server:

    mapred --daemon start historyserver
    # To stop it, run:
    mapred --daemon stop historyserver

Note: if jps shows that the history server did not start, change into the sbin directory under the Hadoop installation and run: mr-jobhistory-daemon.sh start historyserver

4. Start the web proxy server:

    yarn-daemon.sh start proxyserver
    # To stop it, run:
    yarn-daemon.sh stop proxyserver

V. Verify That the Hadoop Cluster Was Built Successfully

1. On node1, node2, and node3, run jps to verify that all expected processes have started.
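As a rough guide, with the role layout used in this article, jps should list something like the following (daemon names only, not the exact output):

    jps
    # node1:        NameNode, SecondaryNameNode, DataNode,
    #               ResourceManager, NodeManager,
    #               JobHistoryServer, WebAppProxyServer
    # node2, node3: DataNode, NodeManager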

With that, our Hadoop cluster is fully set up. Thanks for reading! I hope this post gave you something new to think about. If you have any questions about the points covered here, or a topic you would like to dig into further, leave a comment and I will do my best to answer. I look forward to exploring more interesting topics with you; see you next time!
