
Building a fully distributed Hadoop cluster with Docker (1 master node, 3 slave nodes)

Cluster plan
1 NameNode node
1 SecondaryNameNode node
1 ResourceManager node
1 JobHistory node
2 slave nodes
1 client node
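
For reference, these roles map onto the containers defined in the docker-compose.yml later in this article (per the workers file, namenode and secondarynamenode also run DataNodes):

Role                    Container name        Published port(s)
NameNode                namenode              9870, 9000
SecondaryNameNode       secondarynamenode     9868
ResourceManager         resourcemanager       8032
JobHistory              jobhistory            19888
DataNode/NodeManager    slave1, slave2        (none)
Client                  client                (none)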

1. Environment preparation
Have the Hadoop and JDK tarballs ready to extract later:
 - docker
 - docker-compose
 - hadoop-3.2.3.tar.gz
 - jdk-11.0.15_linux-x64_bin.tar.gz
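
A quick sanity check before starting (this assumes the two tarballs sit in the working directory):

docker --version
docker compose version          # or docker-compose --version, depending on how it was installed
ls -lh hadoop-3.2.3.tar.gz jdk-11.0.15_linux-x64_bin.tar.gz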
2. Build the base image
2.1 Install the required tools, e.g. ssh and vim
2.2 Upload hadoop and the JDK into the container (both steps are sketched below)
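As a concrete illustration of steps 2.1 and 2.2 (the base image and container name here are assumptions; the original does not name them):

docker run -it --name hadoop-base ubuntu:20.04 bash    # hypothetical base image and container name
# inside the container:
apt-get update && apt-get install -y openssh-server vim
# from the host, copy the tarballs in:
docker cp hadoop-3.2.3.tar.gz hadoop-base:/root/
docker cp jdk-11.0.15_linux-x64_bin.tar.gz hadoop-base:/root/
# back inside the container, extract both to /root:
cd /root && tar -xzf hadoop-3.2.3.tar.gz && tar -xzf jdk-11.0.15_linux-x64_bin.tar.gz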
2.3 Start ssh on boot and make the environment variables take effect
Edit the .bashrc file; it is executed when the container starts. Note: if the copy under /root is not executed, cp the file to the root directory (/) instead. Passwordless SSH is also required between nodes (a minimal sketch follows the snippet below).
Append the following to the file:
service ssh start >>/root/start_ssh.log
export JAVA_HOME=/root/jdk-11.0.15
export HADOOP_HOME=/root/hadoop-3.2.3
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
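
The original leaves passwordless SSH to a web search; a minimal sketch, run once as root in the base container before committing the image, might be:

mkdir -p /root/.ssh
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa          # empty passphrase
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
printf 'Host *\n  StrictHostKeyChecking no\n' >> /root/.ssh/config   # so start-dfs.sh is not blocked by host-key prompts

Because every container is started from the same committed image, all nodes share this key pair, so each node can ssh to every other node without a password.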

2.4 Configure each of the Hadoop configuration files

vim /root/hadoop-3.2.3/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/root/jdk-11.0.15   # path to the JDK extracted earlier
# hdfs
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
# yarn
export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=yarn    # deprecated in Hadoop 3; only relevant for secure DataNodes
export YARN_NODEMANAGER_USER=root

============================================
vim /root/hadoop-3.2.3/etc/hadoop/core-site.xml

<configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/data</value>
    </property>
    <property>
	    <name>hadoop.http.staticuser.user</name>
	    <value>root</value>
    </property>
</configuration>

============================================
vim /root/hadoop-3.2.3/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>resourcemanager</value>
    </property>
</configuration>

============================================
vim /root/hadoop-3.2.3/etc/hadoop/mapred-site.xml

<configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>jobhistory:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>jobhistory:19888</value>
    </property>
</configuration>
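
One caveat worth flagging (not in the original): on Hadoop 3.x, jobs submitted to YARN commonly fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster" unless the MapReduce classpath is configured. If that happens, adding the following property to mapred-site.xml (path matching the install location used here) is the usual fix:

    <property>
        <name>mapreduce.application.classpath</name>
        <value>/root/hadoop-3.2.3/share/hadoop/mapreduce/*:/root/hadoop-3.2.3/share/hadoop/mapreduce/lib/*</value>
    </property>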

============================================
vim /root/hadoop-3.2.3/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>
    <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:///hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>secondarynamenode:9868</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>namenode:9870</value>
    </property>
</configuration>

============================================
vim /root/hadoop-3.2.3/etc/hadoop/workers

slave2
slave1
namenode
secondarynamenode
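
Note: every host listed in workers runs a DataNode and a NodeManager when the start scripts run, so in this layout namenode and secondarynamenode store HDFS blocks alongside slave1 and slave2.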


2.5 Commit the image
docker commit <container-id> myhadoop:v2    # the compose file below references myhadoop:v2
3. Create the network
docker network create hadoop_nw
4. Write the docker-compose.yml file

version: "3.9"
services:
  namenode:
    ports:
      - "9870:9870"
      - "9000:9000"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "namenode"
    tty: true
  secondarynamenode:
    ports:
      - "9868:9868"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "secondarynamenode"
    tty: true
  resourcemanager:
    ports:
      - "8032:8032"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "resourcemanager"
    tty: true
  jobhistory:
    ports:
      - "19888:19888"
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "jobhistory"
    tty: true
  slave1:
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "slave1"
    tty: true
  slave2:
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "slave2"
    tty: true
  client:
    image: "myhadoop:v2"
    networks:
      - hadoop_nw
    container_name: "client"
    tty: true
networks:
  hadoop_nw:
    external: true
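
Once the stack is up, containers attached to hadoop_nw resolve one another by container name, which is what the hostnames in the Hadoop configs rely on. A quick check (getent is present in most base images; adjust if yours differs):

docker exec -it client getent hosts namenode          # should print namenode's IP
docker exec -it slave1 getent hosts resourcemanager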

5. Start the stack: docker compose up -d
6. Enter the namenode container (the NameNode must be formatted on the node where it runs) and start the cluster:
hdfs namenode -format
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh --config $HADOOP_HOME/etc/hadoop start historyserver   # run inside the jobhistory container; deprecated in Hadoop 3 in favor of "mapred --daemon start historyserver"
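
To exercise the whole pipeline, one option is the examples jar that ships with Hadoop 3.2.3, run from the client container:

docker exec -it client bash
hdfs dfsadmin -report         # should list the live DataNodes
hadoop jar /root/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.3.jar pi 2 10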
7. Verify: the web UIs are reachable on the ports published above, e.g. the NameNode at http://localhost:9870/, the SecondaryNameNode at http://localhost:9868/, and the JobHistory server at http://localhost:19888/.