
Hadoop Cluster Setup: Detailed Steps (JDK and Hadoop Only)


Contents

I. Prepare the Template VM Environment
1. Create the VM hadoop100 and configure its network
2. Install epel-release
3. Other tools
4. Give the normal user root privileges for later sudo commands
5. Delete everything under /opt/
6. Create directories under /opt/ and change their owner and group
7. Uninstall the JDK bundled with the VM
8. Reboot the VM

II. Clone the VMs
1. Shut down the template VM
2. Clone three VMs
3. Change each clone's IP address and hostname, then reboot
4. Map the cluster IP addresses in the Windows hosts file

III. Install the JDK on hadoop102
1. Check whether hadoop102 already has a JDK
2. Upload the JDK 8 tarball to /opt/software with Xftp
3. Extract the JDK to /opt/module
4. Configure the JDK environment variables
5. Reload the environment variables and verify the JDK installation

IV. Install Hadoop on hadoop102
1. Upload the Hadoop tarball to /opt/software with Xftp
2. Extract the Hadoop tarball to /opt/module and configure environment variables
3. Reload the environment variables and verify the Hadoop installation

V. Run the Official Hadoop WordCount
1. Create a wcinput directory under /opt/module/hadoop-3.1.3
2. Create word.txt inside wcinput and add some content
3. Run the wordcount job
4. Check the result

VI. Passwordless SSH Login
1. Log in to hadoop103 from hadoop102
2. Type yes
3. Enter the hadoop103 password
4. Log out
5. Inspect the .ssh directory generated in the home directory
6. Generate a private/public key pair
7. Copy the public key to every machine you want passwordless access to
8. From hadoop102, try passwordless logins to hadoop102, 103, and 104
9. Repeat steps 6-8 on hadoop103 and hadoop104

VII. Configure the Cluster
1. Cluster deployment plan
2. Go to /opt/module/hadoop-3.1.3/etc/hadoop/
3. Configure core-site.xml
4. Configure hdfs-site.xml
5. Configure mapred-site.xml
6. Configure yarn-site.xml

VIII. The xsync Distribution Script
1. Write an xsync script that can be called from anywhere
2. Create a bin directory under the home directory
3. The xsync script; adjust the hostnames
4. Make xsync executable
5. Test the script
6. Copy the script to /bin for global use
7. Distribute /opt/module
8. Sync the environment variable configuration

IX. Time Synchronization

X. Configure workers
1. Enter the hostnames
2. Distribute workers

XI. Start the Cluster
1. Format the NameNode before the first start
2. Start DFS on hadoop102
3. Start YARN on hadoop103
4. Check with jps
5. Test in a browser

XII. Configure the History Server and Log Aggregation
1. Configure mapred-site.xml
2. Configure yarn-site.xml
3. Distribute
4. Restart the Hadoop cluster
5. Start the history server on hadoop102

XIII. Run wordcount

XIV. Hadoop Cluster Start/Stop Script
1. The myhadoop.sh script
2. Make it executable
3. Distribute the script

XV. The jpsall Script
1. The jpsall script
2. Make it executable
3. Distribute the script


I. Prepare the Template VM Environment

1. Create the VM hadoop100 and configure its network

For the VM parameters and creation steps, see the companion post 《CentOS7中新建虚拟机详细步骤》.

Note: 1. choose the desktop installation; 2. create a normal (non-root) user.

For the VM network setup, see the companion post 《Linux网关设置》.

2. Install epel-release

[root@hadoop100 ~]# yum install -y epel-release

3. Other tools

If you chose the minimal installation, also run the commands below; the desktop installation does not need them.

[root@hadoop100 ~]# yum install -y net-tools
[root@hadoop100 ~]# yum install -y vim

4. Give the normal user root privileges so it can run root commands with sudo later

[root@hadoop100 ~]# vim /etc/sudoers

Below the %wheel line, add an entry for your user, as sketched below.
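A minimal sketch of the sudoers entry, assuming your normal user is lxm (substitute your own username). Keep it below the %wheel line: sudoers applies the last matching rule, so placing it earlier would let the %wheel rule override it.

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
# passwordless sudo for lxm; must come AFTER the %wheel line
lxm     ALL=(ALL)       NOPASSWD:ALL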

5. Delete everything under /opt/

[root@hadoop100 ~]# rm -rf /opt/*

6. Create directories under /opt/ and change their owner and group

[root@hadoop100 ~]# mkdir /opt/module
[root@hadoop100 ~]# mkdir /opt/software
[root@hadoop100 ~]# chown lxm:lxm /opt/module
[root@hadoop100 ~]# chown lxm:lxm /opt/software

(Use your own normal user as the owner here; the prompts later in this post use lxm.)

7. Uninstall the JDK bundled with the VM

[root@hadoop100 ~]# rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
# check that everything was removed
[root@hadoop100 ~]# rpm -qa | grep -i java

rpm -qa: list all installed rpm packages
grep -i: ignore case
xargs -n1: pass one argument at a time
rpm -e --nodeps: remove a package without checking dependencies

8. Reboot the VM

[root@hadoop100 ~]# reboot

II. Clone the VMs

1. Shut down the template VM

2. Clone three VMs

Right-click the template VM → Manage → Clone to create hadoop102.

Repeat to clone two more VMs, hadoop103 and hadoop104.

3. Change each clone's IP address and hostname, then reboot

(1) Power on the cloned VMs

(2) Change the IP address

[root@hadoop100 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
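A sketch of the fields to set in ifcfg-ens33, assuming the 192.168.180.0/24 subnet that appears later in this post; IPADDR is hadoop102's address, and the GATEWAY/DNS1 values are typical VMware NAT defaults, so match them to your own network and give hadoop103/104 their own addresses:

# switch from DHCP to a fixed address
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.180.152
NETMASK=255.255.255.0
GATEWAY=192.168.180.2
DNS1=192.168.180.2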

(3) Change the hostname

[root@hadoop100 ~]# vim /etc/hostname

Set the file's single line to the new hostname, e.g. hadoop102.

(4) Reboot hadoop102

(5) Disable the firewall on hadoop102

[root@hadoop102 ~]# systemctl stop firewalld
[root@hadoop102 ~]# systemctl disable firewalld.service
[root@hadoop102 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

(6) Enable the network service and test the connection


[root@hadoop102 network-scripts]# systemctl stop NetworkManager
[root@hadoop102 network-scripts]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
[root@hadoop102 network-scripts]# systemctl restart network
[root@hadoop102 network-scripts]# ping www.baidu.com
PING www.a.shifen.com (180.101.50.242) 56(84) bytes of data.
64 bytes from 180.101.50.242 (180.101.50.242): icmp_seq=1 ttl=128 time=2.99 ms
64 bytes from 180.101.50.242 (180.101.50.242): icmp_seq=2 ttl=128 time=3.26 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.999/3.134/3.269/0.135 ms
[root@hadoop102 network-scripts]# hostname
hadoop102
[root@hadoop102 network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.180.152 netmask 255.255.255.0 broadcast 192.168.180.255
        inet6 fe80::20c:29ff:fec5:2a4e prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:c5:2a:4e txqueuelen 1000 (Ethernet)
        RX packets 5 bytes 524 (524.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 30 bytes 4235 (4.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 1 (Local Loopback)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
        ether 52:54:00:55:4e:cf txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

(7) Repeat (1)-(6) on the other two VMs, hadoop103 and hadoop104

(8) If systemctl restart network fails with "Job for network.service failed because the control process exited with error code", see the companion post 《关于Job for network.service failed because the control process exited with error code》.

4. Map the cluster IP addresses in the Windows hosts file

Go to C:\Windows\System32\drivers\etc and add IP mappings for the cluster to the hosts file, as sketched below.
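A sketch of the mappings, assuming hadoop102 is 192.168.180.152 as shown earlier and the other machines follow consecutively (use your actual addresses):

192.168.180.152 hadoop102
192.168.180.153 hadoop103
192.168.180.154 hadoop104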

III. Install the JDK on hadoop102

1. Check whether hadoop102 already has a JDK

Note: before installing the JDK, make sure the JDK bundled with the VM has been removed; see Part I, step 7.

[lxm@hadoop102 ~]$ rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
rpm:未给出要擦除的软件包

(i.e. nothing matched, so there is nothing left to remove)

2. Upload the JDK 8 tarball to /opt/software with Xftp

3. Extract the JDK to /opt/module

[lxm@hadoop102 module]$ tar -zxvf /opt/software/jdk-8u321-linux-x64.tar.gz -C /opt/module/
[lxm@hadoop102 module]$ ll
总用量 0
drwxr-xr-x. 8 lxm lxm 273 12月 16 2021 jdk1.8.0_321

4. Configure the JDK environment variables

[lxm@hadoop102 module]$ sudo vim /etc/profile.d/my_env.sh

# JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin

5. Reload the environment variables and verify the JDK installation

[lxm@hadoop102 module]$ source /etc/profile
[lxm@hadoop102 module]$ javac
[lxm@hadoop102 module]$ java -version
java version "1.8.0_321"
Java(TM) SE Runtime Environment (build 1.8.0_321-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.321-b07, mixed mode)

IV. Install Hadoop on hadoop102

1. Upload the Hadoop tarball to /opt/software with Xftp

2. Extract the Hadoop tarball to /opt/module and configure environment variables
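First extract the archive, mirroring the JDK step; a sketch, assuming the tarball carries the standard name hadoop-3.1.3.tar.gz:

[lxm@hadoop102 module]$ tar -zxvf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/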

Then append Hadoop's environment variables to my_env.sh:

[lxm@hadoop102 module]$ sudo vim /etc/profile.d/my_env.sh

# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

3. Reload the environment variables and verify the Hadoop installation

[lxm@hadoop102 module]$ source /etc/profile
[lxm@hadoop102 module]$ hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar

V. Run the Official Hadoop WordCount

1. Create a wcinput directory under /opt/module/hadoop-3.1.3

[lxm@hadoop102 hadoop-3.1.3]$ mkdir wcinput

2. Create word.txt inside wcinput and add some content

[lxm@hadoop102 hadoop-3.1.3]$ cd ./wcinput/
[lxm@hadoop102 wcinput]$ vim word.txt

hello world
hello java
hello hadoop

3. Run the wordcount job

[lxm@hadoop102 wcinput]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /opt/module/hadoop-3.1.3/wcinput/ /opt/module/hadoop-3.1.3/wcoutput/

4. Check the result

[lxm@hadoop102 hadoop-3.1.3]$ cat ./wcoutput/part-r-00000
hadoop 1
hello 3
java 1
world 1

Note: before submitting the wordcount job a second time, delete the wcoutput directory under /opt/module/hadoop-3.1.3, since MapReduce refuses to start if the output directory already exists.
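A one-liner to clear the old output before resubmitting, run from the hadoop-3.1.3 directory:

[lxm@hadoop102 hadoop-3.1.3]$ rm -rf ./wcoutput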

VI. Passwordless SSH Login

Passwordless login means SSHing from one VM into another without typing a password.

1. Log in to hadoop103 from hadoop102

[lxm@hadoop102 ~]$ ssh hadoop103

2. Type yes

Are you sure you want to continue connecting (yes/no)? yes

3. Enter the hadoop103 password

lxm@hadoop103's password: 

4. Log out

[lxm@hadoop103 ~]$ exit
登出
Connection to hadoop103 closed.
[lxm@hadoop102 ~]$

5. Inspect the .ssh directory generated in the home directory

[lxm@hadoop102 ~]$ ll -al
[lxm@hadoop102 ~]$ cd .ssh

6. Generate a private/public key pair

[lxm@hadoop102 .ssh]$ ssh-keygen -t rsa -P ''

Press Enter to accept the default file location.

[lxm@hadoop102 .ssh]$ ll
-rw-------. 1 lxm lxm 1675 2月 12 11:10 id_rsa
-rw-r--r--. 1 lxm lxm 395 2月 12 11:10 id_rsa.pub
-rw-r--r--. 1 lxm lxm 561 2月 12 11:11 known_hosts

7. Copy the public key to every machine you want passwordless access to

[lxm@hadoop102 .ssh]$ ssh-copy-id hadoop102
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/lxm/.ssh/id_rsa.pub"
The authenticity of host 'hadoop102 (192.168.180.152)' can't be established.
ECDSA key fingerprint is SHA256:GFkoioPRq02rdq9bMm4L+OWi08ZceOxyBAaKGj4pObQ.
ECDSA key fingerprint is MD5:8f:f8:55:6a:f6:ba:13:a4:81:3e:04:d2:3c:07:1c:a5.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
lxm@hadoop102's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop102'"
and check to make sure that only the key(s) you wanted were added.

Then do the same for the other two machines:

[lxm@hadoop102 .ssh]$ ssh-copy-id hadoop103
[lxm@hadoop102 .ssh]$ ssh-copy-id hadoop104

8. From hadoop102, try passwordless logins to hadoop102, 103, and 104

[lxm@hadoop102 .ssh]$ ssh hadoop102
Last login: Sun Feb 12 10:11:45 2023 from 192.168.180.1
[lxm@hadoop102 ~]$ exit
登出
Connection to hadoop102 closed.
[lxm@hadoop102 .ssh]$ ssh hadoop103
Last login: Sun Feb 12 11:10:15 2023 from 192.168.180.152
[lxm@hadoop103 ~]$ exit
登出
Connection to hadoop103 closed.
[lxm@hadoop102 .ssh]$ ssh hadoop104
Last login: Sun Feb 12 09:57:31 2023 from 192.168.180.1
[lxm@hadoop104 ~]$ exit
登出
Connection to hadoop104 closed.

9. Repeat steps 6-8 on hadoop103 and hadoop104

VII. Configure the Cluster

1. Cluster deployment plan
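A deployment plan consistent with the config files later in this section:

          hadoop102             hadoop103                       hadoop104
HDFS      NameNode, DataNode    DataNode                        SecondaryNameNode, DataNode
YARN      NodeManager           ResourceManager, NodeManager    NodeManager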

Notes:

  • Do not put the NameNode and the SecondaryNameNode on the same server;
  • The ResourceManager is also memory-hungry, so keep it off the machines running the NameNode and SecondaryNameNode.

2. Go to /opt/module/hadoop-3.1.3/etc/hadoop/

[lxm@hadoop102 hadoop]$ pwd
/opt/module/hadoop-3.1.3/etc/hadoop

Insert the snippets below inside the <configuration></configuration> tags of the following four XML files, adjusting the hostnames to match your own machines.

3. Configure core-site.xml

<!-- Address of the NameNode -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop102:8020</value>
</property>
<!-- Hadoop data storage directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-3.1.3/data</value>
</property>
<!-- Static user for the HDFS web UI; use your own normal user (lxm here) -->
<property>
    <name>hadoop.http.staticuser.user</name>
    <value>lxm</value>
</property>

4. Configure hdfs-site.xml

<!-- NameNode web UI address -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop102:9870</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop104:9868</value>
</property>

5. Configure mapred-site.xml

<!-- Run MapReduce on YARN -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

6. Configure yarn-site.xml

<!-- Use mapreduce_shuffle for the MR shuffle -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Address of the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop103</value>
</property>
<!-- Environment variables to inherit -->
<property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>

VIII. The xsync Distribution Script

1. Write an xsync script that can be called from anywhere

Put the script in a directory that is on the globally declared PATH.

2. Create a bin directory under the home directory

[lxm@hadoop102 ~]$ cd /home/lxm
[lxm@hadoop102 ~]$ mkdir bin
[lxm@hadoop102 ~]$ cd bin
[lxm@hadoop102 bin]$ vim xsync

3. The xsync script; adjust the hostnames to your own

#!/bin/bash

#1. make sure at least one argument was given
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi

#2. loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ==================== $host ====================
    #3. send every file or directory given on the command line
    for file in "$@"
    do
        #4. only send things that exist
        if [ -e "$file" ]
        then
            #5. resolve the parent directory (following symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            #6. bare name of the file
            fname=$(basename "$file")
            ssh $host "mkdir -p $pdir"
            rsync -av "$pdir/$fname" $host:"$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done

4. Make xsync executable

[lxm@hadoop102 bin]$ chmod 777 xsync

5. Test the script

[lxm@hadoop102 ~]$ xsync /home/lxm/bin

It should report a successful sync for each host.

6. Copy the script to /bin for global use

[lxm@hadoop102 ~]$ sudo cp /home/lxm/bin/xsync /bin/

7. Distribute /opt/module

[lxm@hadoop102 ~]$ xsync /opt/module/

8. Sync the environment variable configuration

# sync the environment variable file
# note: when using sudo, give xsync its full path
[lxm@hadoop102 ~]$ sudo ./bin/xsync /etc/profile.d/my_env.sh
# reload the environment on the other machines
[lxm@hadoop103 ~]$ source /etc/profile
[lxm@hadoop104 ~]$ source /etc/profile

IX. Time Synchronization

See the companion post 《Linux中CentOS7时间与网络时间orWindows同步的方法》.

X. Configure workers

1. Enter the hostnames

[lxm@hadoop102 ~]$ vim /opt/module/hadoop-3.1.3/etc/hadoop/workers

hadoop102
hadoop103
hadoop104

Note: this file must contain no trailing spaces and no blank lines.

2. Distribute workers

[lxm@hadoop102 ~]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/workers

XI. Start the Cluster

1. Format the NameNode before the first start

[lxm@hadoop102 ~]$ hdfs namenode -format

Note: format only before the very first start. If you ever need to re-format, first stop all daemons and delete the data and logs directories on every node, otherwise the new cluster ID will clash with the old one.
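A cleanup sketch for that re-format case only; this erases all HDFS data, so use it with care:

for host in hadoop102 hadoop103 hadoop104
do
    # wipe the data and log directories on each node before re-formatting
    ssh $host "rm -rf /opt/module/hadoop-3.1.3/data /opt/module/hadoop-3.1.3/logs"
done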

2. Start DFS on hadoop102

[lxm@hadoop102 ~]$ start-dfs.sh

3. Start YARN on hadoop103

[lxm@hadoop103 ~]$ start-yarn.sh

4. Check the running daemons on each machine with jps, as sketched below
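Under the deployment plan above, jps should list the following daemons on each host (jps also lists itself, and process IDs will differ):

hadoop102: NameNode, DataNode, NodeManager
hadoop103: ResourceManager, NodeManager, DataNode
hadoop104: SecondaryNameNode, DataNode, NodeManager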

5. Test in a browser

HDFS NameNode web UI: http://hadoop102:9870
YARN ResourceManager web UI (default port 8088): http://hadoop103:8088

XII. Configure the History Server and Log Aggregation

To review how jobs ran in the past, configure the history server; log aggregation then lets you browse container logs from its web UI.

1. Configure mapred-site.xml

<!-- History server IPC address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop102:10020</value>
</property>
<!-- History server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop102:19888</value>
</property>

2. Configure yarn-site.xml

<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<!-- Log server URL -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://hadoop102:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>

3. Distribute the configuration

[lxm@hadoop102 ~]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/

4. Restart the Hadoop cluster

[lxm@hadoop102 ~]$ stop-dfs.sh
[lxm@hadoop103 ~]$ stop-yarn.sh
[lxm@hadoop102 ~]$ start-dfs.sh
[lxm@hadoop103 ~]$ start-yarn.sh

5. Start the history server on hadoop102

[lxm@hadoop102 hadoop]$ mapred --daemon start historyserver

jps on hadoop102 should now also show a JobHistoryServer process, and its web UI is at http://hadoop102:19888/jobhistory.

XIII. Run wordcount

See the companion post 《Hadoop之——WordCount案例与执行本地jar包》.

XIV. Hadoop Cluster Start/Stop Script

1. The myhadoop.sh script

[lxm@hadoop102 bin]$ vim myhadoop.sh

#!/bin/bash

if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit
fi

case $1 in
"start")
    echo " =================== starting the hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== stopping the hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
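Usage, once the script is executable and distributed (next steps):

[lxm@hadoop102 ~]$ myhadoop.sh start
[lxm@hadoop102 ~]$ myhadoop.sh stop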

2. Make it executable

[lxm@hadoop102 bin]$ chmod +x myhadoop.sh

3. Distribute the script

[lxm@hadoop102 ~]$ sudo ./bin/xsync /home/lxm/bin/

XV. The jpsall Script

1. The jpsall script

[lxm@hadoop102 bin]$ vim jpsall

#!/bin/bash

for host in hadoop102 hadoop103 hadoop104
do
    echo =============== $host ===============
    ssh $host jps
done
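Once it is executable and distributed (next steps), a single command lists every Java daemon in the cluster:

[lxm@hadoop102 ~]$ jpsall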

2. Make it executable

[lxm@hadoop102 bin]$ chmod +x jpsall

3. Distribute the script

[lxm@hadoop102 ~]$ sudo ./bin/xsync /home/lxm/bin/