Cluster plan
1 NameNode node
1 SecondaryNameNode node
1 ResourceManager node
1 JobHistory node
2 Slave (DataNode/NodeManager) nodes
1 Client node
Prerequisites (the Hadoop and JDK tarballs are extracted in advance and kept ready):
- docker
- docker-compose
- hadoop-3.2.3.tar.gz
- jdk-11.0.15_linux-x64_bin.tar.gz
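The base container can be prepared along these lines. This is a sketch; the container name `hadoop-base` and the Ubuntu base image are assumptions, not from the original:

```shell
# Start a base container (name and image are assumptions)
docker run -itd --name hadoop-base ubuntu:20.04

# Install the SSH server the cluster scripts rely on
docker exec hadoop-base apt-get update
docker exec hadoop-base apt-get install -y openssh-server

# Copy the tarballs in and extract them under /root,
# matching the paths used in the environment variables below
docker cp hadoop-3.2.3.tar.gz hadoop-base:/root/
docker cp jdk-11.0.15_linux-x64_bin.tar.gz hadoop-base:/root/
docker exec hadoop-base tar -xzf /root/hadoop-3.2.3.tar.gz -C /root
docker exec hadoop-base tar -xzf /root/jdk-11.0.15_linux-x64_bin.tar.gz -C /root
```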
Start the SSH service inside the container (logging its startup output):
service ssh start >>/root/start_ssh.log
Set the environment variables (e.g. in /etc/profile or ~/.bashrc):
export JAVA_HOME=/root/jdk-11.0.15
export HADOOP_HOME=/root/hadoop-3.2.3
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
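After sourcing the profile, the installation can be sanity-checked inside the container:

```shell
# Both commands come from the directories just added to PATH
java -version        # should report 11.0.15
hadoop version       # should report Hadoop 3.2.3
```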
2.4 Configure the Hadoop configuration files
vim /root/hadoop-3.2.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/root/jdk-11.0.15
# hdfs
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
# yarn
export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=yarn
export YARN_NODEMANAGER_USER=root
============================================
vim /root/hadoop-3.2.3/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
</configuration>
============================================
vim /root/hadoop-3.2.3/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resourcemanager</value>
  </property>
</configuration>
============================================
vim /root/hadoop-3.2.3/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>jobhistory:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>jobhistory:19888</value>
  </property>
</configuration>
============================================
vim /root/hadoop-3.2.3/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>secondarynamenode:9868</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>namenode:9870</value>
  </property>
</configuration>
============================================
The workers file lists the hosts that run DataNode/NodeManager; per the cluster plan these are the two slave nodes only:
vim /root/hadoop-3.2.3/etc/hadoop/workers
slave1
slave2
2.5 Commit the container as an image
docker commit <container-id> <image-name>:<tag>
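For example, assuming the prepared container is the `hadoop-base` container and tagging the image with the name referenced later in docker-compose.yml:

```shell
# Snapshot the configured container into a reusable image
docker commit hadoop-base myhadoop:v2
docker images myhadoop    # confirm the myhadoop:v2 image exists
```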
3. Create the network
docker network create hadoop_nw
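The network can be verified before wiring the containers to it:

```shell
# Should show a user-defined bridge network with no containers attached yet
docker network inspect hadoop_nw
```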
4. Write the docker-compose.yml file
version: "3.9"
services:
  namenode:
    ports:
      - "9870:9870"
      - "9000:9000"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "namenode"
    tty: true
  secondarynamenode:
    ports:
      - "9868:9868"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "secondarynamenode"
    tty: true
  resourcemanager:
    ports:
      - "8032:8032"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "resourcemanager"
    tty: true
  jobhistory:
    ports:
      - "19888:19888"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "jobhistory"
    tty: true
  slave1:
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "slave1"
    tty: true
  slave2:
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "slave2"
    tty: true
  client:
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "client"
    tty: true
networks:
  hadoop_nw:
    external: true
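With the compose file in place, the cluster can be brought up from the directory containing docker-compose.yml:

```shell
# Start all seven containers in the background
docker-compose up -d

# All container names from the compose file should appear as running
docker ps --format '{{.Names}}'
```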
5. Format HDFS and start the cluster
# On the namenode container (first start only):
hdfs namenode -format
start-dfs.sh
# On the resourcemanager container:
start-yarn.sh
# On the jobhistory container (mr-jobhistory-daemon.sh is deprecated in Hadoop 3):
mapred --daemon start historyserver
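Once everything is up, the cluster can be sanity-checked and exercised with the bundled example job (the examples jar path follows the standard Hadoop 3.2.3 layout):

```shell
hdfs dfsadmin -report    # should list 2 live DataNodes (slave1, slave2)
yarn node -list          # should list 2 NodeManagers

# End-to-end test: estimate pi with 2 map tasks of 10 samples each
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.3.jar pi 2 10
```

The web UIs are also reachable on the mapped ports: the NameNode at host port 9870, the SecondaryNameNode at 9868, and the JobHistory server at 19888.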