Big data platform series:
1. Building a Big Data Platform (1): Preparing the Base Environment
2. Building a Big Data Platform (2): Setting Up the Hadoop Cluster
3. Building a Big Data Platform (3): Hadoop Cluster HA with Zookeeper
4. Building a Big Data Platform (4): HBase Cluster HA with Zookeeper
5. Building a Big Data Platform (5): Setting Up Hive
The platform is built on Apache Hadoop 3.3.4. Start on the master node, hnode1: create the data directory, unpack the distribution, and set the environment variables.
[root@hnode1 ~]# cd /opt/hadoop/
[root@hnode1 hadoop]# mkdir data
[root@hnode1 hadoop]# tar -xzvf hadoop-3.3.4.tar.gz
[root@hnode1 hadoop]# vim /etc/profile
#Hadoop
export HADOOP_HOME=/opt/hadoop/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@hnode1 hadoop]# source /etc/profile
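A quick, optional sanity check that the new PATH entries are active:
[root@hnode1 hadoop]# hadoop version
The first line of the output should report version 3.3.4.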
The configuration files live under /opt/hadoop/hadoop-3.3.4/etc/hadoop/; the files to edit are listed in the table below.

| # | File | Purpose |
|---|---|---|
| 1 | core-site.xml | Core Hadoop daemon settings |
| 2 | hdfs-site.xml | HDFS NameNode settings |
| 3 | yarn-site.xml | YARN ResourceManager settings |
| 4 | mapred-site.xml | MapReduce application settings |
| 5 | hadoop-env.sh | Hadoop environment variables |
| 6 | yarn-env.sh | YARN environment variables |
| 7 | workers | Hadoop cluster worker nodes |
[root@hnode1 hadoop]# cd hadoop-3.3.4
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/core-site.xml
<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hnode1:8020</value>
    </property>
    <!-- Buffer size (128 KB) used when reading and writing SequenceFiles -->
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <!-- Base directory for Hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/data</value>
    </property>
    <!-- Static user for the HDFS web UI: root -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
</configuration>
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/hdfs-site.xml
<configuration>
    <!-- Web UI address of the NameNode (master node) -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hnode1:9870</value>
    </property>
    <!-- Web UI address of the SecondaryNameNode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hnode2:9868</value>
    </property>
</configuration>
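Note that hdfs-site.xml here only sets the two web addresses, so block replication stays at the HDFS default of 3. If you want to pin it explicitly for the five DataNodes, a minimal addition (illustrative, not part of the original setup) would be:
    <!-- Optional: explicit block replication; 3 is already the default -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>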
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Use the MapReduce shuffle auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hnode1</value>
    </property>
    <!-- Environment variables inherited by containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- URL of the log aggregation server -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://hnode2:19888/jobhistory/logs</value>
    </property>
    <!-- Retain aggregated logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hnode2:10020</value>
    </property>
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hnode2:19888</value>
    </property>
</configuration>
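A common Hadoop 3.x pitfall: if MapReduce jobs later fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster", HADOOP_MAPRED_HOME is not visible inside the YARN containers. A possible fix, following the approach in the official cluster-setup docs and assuming the install path used here, is to add to mapred-site.xml:
    <!-- Make the MapReduce install visible to AM, map, and reduce containers -->
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/opt/hadoop/hadoop-3.3.4</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/opt/hadoop/hadoop-3.3.4</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/opt/hadoop/hadoop-3.3.4</value>
    </property>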
Hadoop 3.x refuses to start daemons as root unless the matching *_USER variables are set, so they are declared in hadoop-env.sh along with JAVA_HOME:
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_271
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_271
[root@hnode1 hadoop-3.3.4]# vim etc/hadoop/workers
hnode1
hnode2
hnode3
hnode4
hnode5
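start-dfs.sh and start-yarn.sh reach every host listed in workers over SSH, so passwordless root SSH from hnode1 must already be in place (prepared with the base environment). A quick connectivity check, assuming all five hostnames resolve:
[root@hnode1 hadoop-3.3.4]# for h in hnode1 hnode2 hnode3 hnode4 hnode5; do ssh $h hostname; done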
The remaining nodes each get a copy of the configured installation from hnode1 plus the same environment variables; hnode2 is shown first, and hnode3 through hnode5 repeat the identical steps.
[root@hnode2 ~]# cd /opt/hadoop/
[root@hnode2 hadoop]# mkdir data
[root@hnode2 hadoop]# scp -r hnode1:/opt/hadoop/hadoop-3.3.4/ ./
[root@hnode2 hadoop]# vim /etc/profile
#Hadoop
export HADOOP_HOME=/opt/hadoop/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@hnode2 hadoop]# source /etc/profile
[root@hnode3 ~]# cd /opt/hadoop/
[root@hnode3 hadoop]# mkdir data
[root@hnode3 hadoop]# scp -r hnode1:/opt/hadoop/hadoop-3.3.4/ ./
[root@hnode3 hadoop]# vim /etc/profile
#Hadoop
export HADOOP_HOME=/opt/hadoop/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@hnode3 hadoop]# source /etc/profile
[root@hnode4 ~]# cd /opt/hadoop/
[root@hnode4 hadoop]# mkdir data
[root@hnode4 hadoop]# scp -r hnode1:/opt/hadoop/hadoop-3.3.4/ ./
[root@hnode4 hadoop]# vim /etc/profile
#Hadoop
export HADOOP_HOME=/opt/hadoop/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@hnode4 hadoop]# source /etc/profile
[root@hnode5 ~]# cd /opt/hadoop/
[root@hnode5 hadoop]# mkdir data
[root@hnode5 hadoop]# scp -r hnode1:/opt/hadoop/hadoop-3.3.4/ ./
[root@hnode5 hadoop]# vim /etc/profile
#Hadoop
export HADOOP_HOME=/opt/hadoop/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@hnode5 hadoop]# source /etc/profile
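The four per-node blocks above differ only in the hostname, so they can also be driven from hnode1 in one loop. A minimal sketch, assuming passwordless root SSH and the same /opt/hadoop layout everywhere (the combined PATH line is equivalent to the two separate exports):
for h in hnode2 hnode3 hnode4 hnode5; do
    ssh "$h" "mkdir -p /opt/hadoop/data"                # data directory on each node
    scp -r /opt/hadoop/hadoop-3.3.4 "$h":/opt/hadoop/   # copy the configured install
    ssh "$h" "echo 'export HADOOP_HOME=/opt/hadoop/hadoop-3.3.4' >> /etc/profile"
    ssh "$h" "echo 'export PATH=\$PATH:\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin' >> /etc/profile"
done
Each node still needs a fresh login (or source /etc/profile) before the variables take effect.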
The shell script below is adapted from FlyingCodes' post 大数据平台搭建详细流程（二）Hadoop集群搭建:
[root@hnode1 ~]#cd /opt/hadoop/
[root@hnode1 hadoop]# vim hadoop.sh
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "Missing argument..."
    exit 1
fi
case $1 in
"start")
    echo " ===================| Starting the Hadoop cluster |==================="
    echo " -------------------| Starting HDFS |---------------"
    ssh hnode1 "/opt/hadoop/hadoop-3.3.4/sbin/start-dfs.sh"
    echo " -------------------| Starting YARN |---------------"
    ssh hnode1 "/opt/hadoop/hadoop-3.3.4/sbin/start-yarn.sh"
    echo " -------------------| Starting the HistoryServer |---------------"
    ssh hnode2 "/opt/hadoop/hadoop-3.3.4/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " ===================| Stopping the Hadoop cluster |==================="
    echo " -------------------| Stopping the HistoryServer |---------------"
    ssh hnode2 "/opt/hadoop/hadoop-3.3.4/bin/mapred --daemon stop historyserver"
    echo " -------------------| Stopping YARN |---------------"
    ssh hnode1 "/opt/hadoop/hadoop-3.3.4/sbin/stop-yarn.sh"
    echo " -------------------| Stopping HDFS |---------------"
    ssh hnode1 "/opt/hadoop/hadoop-3.3.4/sbin/stop-dfs.sh"
;;
*)
    echo "Invalid argument..."
;;
esac
[root@hnode1 hadoop]# chmod +x ./hadoop.sh
On the very first startup the NameNode must be formatted before any daemons are brought up (formatting later causes cluster-ID mismatches on the DataNodes):
[root@hnode1 hadoop]# hdfs namenode -format
Then bring up HDFS and YARN:
[root@hnode1 hadoop]# start-dfs.sh
[root@hnode1 hadoop]# start-yarn.sh
From then on the whole cluster, including the JobHistory server, can be cycled with the helper script:
[root@hnode1 hadoop]# ./hadoop.sh stop
[root@hnode1 hadoop]# ./hadoop.sh start
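If startup succeeded, jps on each node should show the daemons implied by the configuration above; roughly (expected process names, not captured output):
[root@hnode1 hadoop]# jps   # NameNode, ResourceManager, DataNode, NodeManager
[root@hnode2 ~]# jps        # SecondaryNameNode, JobHistoryServer, DataNode, NodeManager
hnode3 through hnode5 should each show a DataNode and a NodeManager.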
Once the cluster is up, the web UIs are available at:
- YARN ResourceManager: http://hnode1:8088
- HDFS NameNode: http://hnode1:9870
- MapReduce JobHistory: http://hnode2:19888/jobhistory
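These hostnames also have to resolve from the machine running the browser; if they exist only in the cluster's own hosts files, add matching entries to the client's hosts file (the IP addresses below are placeholders for your actual ones):
192.168.1.101  hnode1
192.168.1.102  hnode2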