
Installing a Hadoop 3.3.0 Cluster on CentOS 7



I. Environment setup: VMware, CentOS 7, Ubuntu

1. Illustrated guide to installing VMware Workstation 15 Pro

See: https://www.shuzhiduo.com/A/Gkz1PGeQdR/

2. Installing CentOS 7 with a graphical desktop in VMware Workstation 15 Pro

See: https://www.shuzhiduo.com/A/qVde08Gg5P/

3. Installing Ubuntu 18.04 in VMware Workstation 15 Pro

See: https://www.shuzhiduo.com/A/MAzAPGWnd9/
Note: installing VMware Tools is strongly recommended; it enables copy/paste between host and guest and auto-fits the screen resolution.
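If the VMware Tools installer is not handy, installing the open-vm-tools package from the CentOS repositories gives the same copy/paste and screen-resize support (a minimal sketch, assuming the VM can reach a yum repository):

[root@localhost ~]# yum install -y open-vm-tools
[root@localhost ~]# systemctl enable vmtoolsd
[root@localhost ~]# systemctl start vmtoolsd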

II. Create a regular user (I created one at first, but later ended up doing everything directly as root)

1. Create the hadoop user from the current (root) account

[root@localhost ~]# useradd -m hadoop -s /bin/bash
[root@localhost ~]# passwd hadoop

2. If you are not the root user

[user@localhost ~]$ sudo useradd -m hadoop -s /bin/bash
[user@localhost ~]$ sudo passwd hadoop

3. Grant the hadoop user administrator privileges

[root@localhost ~]# usermod -aG wheel hadoop    # on CentOS, sudo rights come from the wheel group; the Debian-style "adduser hadoop sudo" does not work here
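A quick check (a minimal sketch, run as root) that the new account can actually use sudo:

[root@localhost ~]# su - hadoop
[hadoop@localhost ~]$ sudo whoami     # should print "root" after entering hadoop's password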

III. Install the JDK

[hadoop@localhost ~]$ su
密码:
[root@localhost hadoop]# ls
hadoop-3.3.0.tar.gz  jdk-11.0.10_linux-x64_bin.rpm  模板  图片  下载  桌面
java                 公共                           视频  文档  音乐
[root@localhost hadoop]# pwd
/home/hadoop
[root@localhost hadoop]# cd ..
[root@localhost home]# cd ..
[root@localhost /]# ls
bin   dev  Hello.java  lib    media  opt     proc  run   software  sys  usr
boot  etc  home        lib64  mnt    passwd  root  sbin  srv       tmp  var
[root@localhost /]# cd /home/hadoop
[root@localhost hadoop]# ls
hadoop-3.3.0.tar.gz  jdk-11.0.10_linux-x64_bin.rpm  模板  图片  下载  桌面
java                 公共                           视频  文档  音乐
[root@localhost hadoop]# rpm -i jdk-11.0.10_linux-x64_bin.rpm 
警告:jdk-11.0.10_linux-x64_bin.rpm: 头V3 RSA/SHA256 Signature, 密钥 ID ec551f03: NOKEY

[root@localhost hadoop]# 
[root@localhost hadoop]# ls
hadoop-3.3.0.tar.gz  jdk-11.0.10_linux-x64_bin.rpm  模板  图片  下载  桌面
java                 公共                           视频  文档  音乐
[root@localhost hadoop]# java
(java prints its full usage/help text here, which confirms the launcher is installed and on the PATH; the lengthy output is omitted)
[root@localhost hadoop]# ls
hadoop-3.3.0.tar.gz  jdk-11.0.10_linux-x64_bin.rpm  模板  图片  下载  桌面
java                 公共                           视频  文档  音乐
[root@localhost hadoop]# cd /usr
[root@localhost usr]# ls
bin  games    java  lib64    local  share  tmp
etc  include  lib   libexec  sbin   src
[root@localhost usr]# cd java
[root@localhost java]# ls
default  jdk-11.0.10  jdk-11.0.10_linux-x64_bin.rpm  latest
[root@localhost java]# vim /etc/profile
[root@localhost java]# source /etc/profile
[root@localhost java]# javac -version
javac 11.0.10
[root@localhost java]# java -version
java version "11.0.10" 2021-01-19 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.10+8-LTS-162)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.10+8-LTS-162, mixed mode)
[root@localhost java]# 
[root@localhost java]# vim /etc/profile

Add the environment variables at the bottom of the file:

#Java_HOME
export JAVA_HOME=/usr/java/jdk-11.0.10
export PATH=${PATH}:${JAVA_HOME}/bin
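To confirm the variables took effect in the current shell (a quick check; the java/javac version commands already appear in the transcript above):

[root@localhost java]# source /etc/profile
[root@localhost java]# echo $JAVA_HOME
/usr/java/jdk-11.0.10
[root@localhost java]# java -version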

IV. "hadoop is not in the sudoers file. This incident will be reported."

See: https://my.oschina.net/u/4410397/blog/3428595

V. Virtual machine network configuration

[root@hadoop001 ~]# cd /etc
[root@hadoop001 etc]# ls
abrt                        hosts                     purple
adjtime                     hosts.allow               python
aliases                     hosts.deny                qemu-ga
aliases.db                  hp                        qemu-kvm
alsa                        idmapd.conf               radvd.conf
alternatives                init.d                    ras
anacrontab                  inittab                   rc0.d
asound.conf                 inputrc                   rc1.d
at.deny                     iproute2                  rc2.d
audisp                      ipsec.conf                rc3.d
audit                       ipsec.d                   rc4.d
avahi                       ipsec.secrets             rc5.d
bash_completion.d           iscsi                     rc6.d
bashrc                      issue                     rc.d
binfmt.d                    issue.net                 rc.local
bluetooth                   kdump.conf                rdma
brltty                      kernel                    redhat-release
brltty.conf                 krb5.conf                 request-key.conf
centos-release              krb5.conf.d               request-key.d
centos-release-upstream     ksmtuned.conf             resolv.conf
chkconfig.d                 ld.so.cache               resolv.conf.YW7P00
chrony.conf                 ld.so.conf                rpc
chrony.keys                 ld.so.conf.d              rpm
cifs-utils                  libaudit.conf             rsyncd.conf
cron.d                      libibverbs.d              rsyslog.conf
cron.daily                  libnl                     rsyslog.d
cron.deny                   libreport                 rwtab
cron.hourly                 libuser.conf              rwtab.d
cron.monthly                libvirt                   samba
crontab                     locale.conf               sane.d
cron.weekly                 localtime                 sasl2
crypttab                    login.defs                scl
csh.cshrc                   logrotate.conf            securetty
csh.login                   logrotate.d               security
cups                        lsm                       selinux
cupshelpers                 lvm                       services
dbus-1                      machine-id                sestatus.conf
dconf                       magic                     setroubleshoot
default                     mail.rc                   setuptool.d
depmod.d                    makedumpfile.conf.sample  sgml
dhcp                        man_db.conf               shadow
DIR_COLORS                  mcelog                    shadow-
DIR_COLORS.256color         mke2fs.conf               shells
DIR_COLORS.lightbgcolor     modprobe.d                skel
dleyna-server-service.conf  modules-load.d            smartmontools
dnsmasq.conf                motd                      sos.conf
dnsmasq.d                   mtab                      speech-dispatcher
dracut.conf                 mtools.conf               ssh
dracut.conf.d               multipath                 ssl
drirc                       my.cnf                    statetab
e2fsck.conf                 my.cnf.d                  statetab.d
enscript.cfg                nanorc                    subgid
environment                 netconfig                 subuid
ethertypes                  NetworkManager            sudo.conf
exports                     networks                  sudoers
exports.d                   nfs.conf                  sudoers.d
favicon.png                 nfsmount.conf             sudo-ldap.conf
fcoe                        nsswitch.conf             sysconfig
festival                    nsswitch.conf.bak         sysctl.conf
filesystems                 nsswitch.conf.rpmnew      sysctl.d
firefox                     ntp                       systemd
firewalld                   numad.conf                system-release
flatpak                     oddjob                    system-release-cpe
fonts                       oddjobd.conf              tcsd.conf
fprintd.conf                oddjobd.conf.d            terminfo
fstab                       openldap                  tmpfiles.d
fuse.conf                   opt                       trusted-key.key
gconf                       os-release                tuned
gcrypt                      PackageKit                udev
gdbinit                     pam.d                     udisks2
gdbinit.d                   passwd                    updatedb.conf
gdm                         passwd-                   UPower
geoclue                     pbm2ppa.conf              usb_modeswitch.conf
GeoIP.conf                  pinforc                   vconsole.conf
GeoIP.conf.default          pkcs11                    vimrc
ghostscript                 pki                       virc
gnupg                       plymouth                  vmware-tools
GREP_COLORS                 pm                        wgetrc
groff                       pnm2ppa.conf              wpa_supplicant
group                       polkit-1                  wvdial.conf
group-                      popt.d                    X11
grub2.cfg                   postfix                   xdg
grub.d                      ppp                       xinetd.d
gshadow                     prelink.conf.d            xml
gshadow-                    printcap                  yum
gss                         profile                   yum.conf
gssproxy                    profile.d                 yum.repos.d
host.conf                   protocols
hostname                    pulse
[root@hadoop001 etc]# cd sysconfig/
[root@hadoop001 sysconfig]# ls
anaconda         ip6tables         network          run-parts
atd              ip6tables-config  network-scripts  samba
authconfig       iptables          nfs              saslauthd
cbq              iptables-config   ntpdate          selinux
chronyd          iptables.save     qemu-ga          smartmontools
console          irqbalance        radvd            sshd
cpupower         kdump             raid-check       sysstat
crond            kernel            rdisc            sysstat.ioconf
ebtables-config  ksm               readonly-root    virtlockd
fcoe             libvirtd          rpcbind          virtlogd
firewalld        man-db            rpc-rquotad      wpa_supplicant
grub             modules           rsyncd
init             netconsole        rsyslog
[root@hadoop001 sysconfig]# cd network-scripts/
[root@hadoop001 network-scripts]# ls
ifcfg-ens33   ifdown-post      ifup-eth     ifup-sit
ifcfg-ens33~  ifdown-ppp       ifup-ib      ifup-Team
ifcfg-lo      ifdown-routes    ifup-ippp    ifup-TeamPort
ifdown        ifdown-sit       ifup-ipv6    ifup-tunnel
ifdown-bnep   ifdown-Team      ifup-isdn    ifup-wireless
ifdown-eth    ifdown-TeamPort  ifup-plip    init.ipv6-global
ifdown-ib     ifdown-tunnel    ifup-plusb   network-functions
ifdown-ippp   ifup             ifup-post    network-functions-ipv6
ifdown-ipv6   ifup-aliases     ifup-ppp
ifdown-isdn   ifup-bnep        ifup-routes
    
# Edit the ifcfg-ens33 configuration file
[root@hadoop001 network-scripts]# vi ifcfg-ens33

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static  # changed: use a static IP instead of DHCP
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=1167f30e-5f62-4002-9627-43659abebb33
DEVICE=ens33
ONBOOT=yes   # changed: bring the interface up at boot
IPADDR=192.168.1.100   # static IP address
MACADDR=00:0C:29:80:73:BE   # MAC address: VM --> Settings --> Network Adapter (NAT) --> Advanced --> MAC Address
NETMASK=255.255.255.0   # subnet mask
GATEWAY=192.168.1.2   # easy to get wrong: Edit --> Virtual Network Editor --> VMnet8 --> NAT Settings --> Gateway
DNS1=192.168.1.2   # using the same address as the gateway is fine
DNS2=8.8.8.8
IPV6_PRIVACY=no

    
# Restart the network service
[root@hadoop001 network-scripts]# service network restart
Restarting network (via systemctl):                        [  确定  ]
[root@hadoop001 network-scripts]# 

    
# Check the IP address
[root@hadoop001 network-scripts]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:80:73:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::467a:1602:431:275/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:80:d1:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:80:d1:02 brd ff:ff:ff:ff:ff:ff
[root@hadoop001 network-scripts]# 
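With the static address in place, it is worth confirming that the gateway and DNS actually work before moving on (a minimal check; the addresses are the ones set in ifcfg-ens33 above):

[root@hadoop001 network-scripts]# ping -c 3 192.168.1.2       # the VMnet8 NAT gateway
[root@hadoop001 network-scripts]# ping -c 3 www.baidu.com     # verifies DNS resolution and outbound NAT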


VI. Hostname configuration

1. Edit the network configuration file

[root@hadoop001 network-scripts]# vim /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=hadoop001

2. Set the hostname permanently

[root@hadoop001 network-scripts]# hostnamectl set-hostname hadoop001

3. Reboot

[root@hadoop001 network-scripts]# reboot

4. Check that the hostname took effect

[root@hadoop001 ~]# hostname
hadoop001
Do the same on the other machines, setting their hostnames to hadoop002 and hadoop003; concretely:
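# On the second machine:
hostnamectl set-hostname hadoop002
# On the third machine:
hostnamectl set-hostname hadoop003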

VII. hosts file configuration

Map the IP addresses to the hostnames

[root@hadoop001 ~]# vim /etc/hosts

# add the following entries
192.168.1.100  hadoop001
192.168.1.101  hadoop002
192.168.1.102  hadoop003
The other machines need the same entries; one way to push them over is shown below.
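Rather than retyping the mappings, the finished /etc/hosts can simply be copied to the other nodes (a small sketch; SSH keys are not set up yet at this point, so each copy prompts for the root password):

[root@hadoop001 ~]# scp /etc/hosts root@hadoop002:/etc/hosts
[root@hadoop001 ~]# scp /etc/hosts root@hadoop003:/etc/hosts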

VIII. Set up passwordless SSH login

1. Disable the firewall (permanently) and SELinux (requires root)

[root@hadoop001 ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled in /etc/selinux/config, then reboot.
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled    # changed to disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted


~                                                                               
~                                                                               
~                                                                               
~                                                                               
"/etc/selinux/config" 15L, 565C

2. Passwordless SSH login

[root@hadoop001 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:F0VeLRWK+eOPDIccQgNhUfKWbo1tYrp28j9NHzw3HYk root@hadoop001
The key's randomart image is:
+---[RSA 2048]----+
|       *+. .o .+o|
|      . + .oo.o .|
|         *.o.....|
|        + =..E o |
|        SB.= o. o|
|        +.= +..=o|
|       .   +oo. =|
|       o.. .+.o. |
|      ..+....o . |
+----[SHA256]-----+
[root@hadoop001 ~]# 

3. Append the public key to the local authorized_keys file

[root@hadoop001 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

4. Log in to hadoop001 to verify (the first time, type yes and press Enter; getting a shell without a password prompt means passwordless login is configured correctly)

[root@hadoop001 ~]# ssh hadoop001
The authenticity of host 'hadoop001 (192.168.1.100)' can't be established.
ECDSA key fingerprint is SHA256:RcMX0Yg6dBCZlIAM3cS0Rh/fSwNjJSCr2TNb3GPup4I.
ECDSA key fingerprint is MD5:d5:1a:ac:1f:33:c0:30:3f:95:68:41:a5:33:17:20:4f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001,192.168.1.100' (ECDSA) to the list of known hosts.
Last login: Sun Mar 28 14:40:59 2021
[root@hadoop001 ~]# 

5. Copy the public key to the other two virtual machines (you will be asked for their passwords along the way)

[root@hadoop001 ~]# ssh-copy-id -i hadoop002
[root@hadoop001 ~]# ssh-copy-id -i hadoop003    

6. On hadoop001, use the ssh command to confirm that passwordless login across the cluster works; if you can log in without a password, everything is fine.

[root@hadoop001 ~]# ssh hadoop002
Last login: Sun Mar 28 14:45:37 2021
[root@hadoop002 ~]# exit
登出
Connection to hadoop002 closed.
[root@hadoop001 ~]# ssh hadoop003
Last login: Sun Mar 28 14:48:32 2021
[root@hadoop003 ~]# exit
登出
Connection to hadoop003 closed.
[root@hadoop001 ~]# 
[root@hadoop001 ~]# ssh-copy-id -i hadoop002
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop002 (192.168.1.101)' can't be established.
ECDSA key fingerprint is SHA256:RcMX0Yg6dBCZlIAM3cS0Rh/fSwNjJSCr2TNb3GPup4I.
ECDSA key fingerprint is MD5:d5:1a:ac:1f:33:c0:30:3f:95:68:41:a5:33:17:20:4f.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop002's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop002'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop001 ~]# ssh-copy-id -i hadoop002
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
		(if you think this is a mistake, you may want to use -f option)

[root@hadoop001 ~]# ssh-copy-id -i hadoop003
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop003 (192.168.1.102)' can't be established.
ECDSA key fingerprint is SHA256:RcMX0Yg6dBCZlIAM3cS0Rh/fSwNjJSCr2TNb3GPup4I.
ECDSA key fingerprint is MD5:d5:1a:ac:1f:33:c0:30:3f:95:68:41:a5:33:17:20:4f.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop003's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop003'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop001 ~]# ssh-copy-id -i hadoop003
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
		(if you think this is a mistake, you may want to use -f option)

[root@hadoop001 ~]# ssh hadoop002
Last login: Sun Mar 28 14:45:37 2021
[root@hadoop002 ~]# exit
登出
Connection to hadoop002 closed.
[root@hadoop001 ~]# ssh hadoop003
Last login: Sun Mar 28 14:48:32 2021
[root@hadoop003 ~]# exit
登出
Connection to hadoop003 closed.
[root@hadoop001 ~]# 

Alternatively (roughly equivalent):

2. Configure sshd (requires root)

[hadoop@localhost root]$ sudo vi /etc/ssh/sshd_config
Open the file in vim and change these settings:
43 PubkeyAuthentication yes
    
47 AuthorizedKeysFile      .ssh/authorized_keys

The part to add:
140 RSAAuthentication yes

Restart the sshd service, using the command systemctl restart sshd.service:
[hadoop@localhost root]$ systemctl restart sshd.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: hadoop
Password: 
==== AUTHENTICATION COMPLETE ===
[hadoop@localhost root]$ 

3. Generate the key pair (as the regular user)

Switch from root to the hadoop user with su hadoop, then run ssh-keygen -t rsa to generate the key pair. No passphrase is needed; just press Enter at each prompt. When the command finishes, two files appear under the hadoop user's home directory (/home/hadoop/.ssh):
id_rsa (private key)
id_rsa.pub (public key)
[hadoop@localhost root]$ su hadoop
密码:
[hadoop@localhost root]$ ssh-keygen -trsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3E8juUADf2abvIQypfJSeaecamsXbe7ddtzFXAkFBZw hadoop@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
|      .     .o*o |
|       o     E   |
|        = +   . .|
|       * O +   ..|
|    . * S.X o  o.|
|     + =.*o* .  +|
|    . . ++o .  .o|
|     .o.. .. .. +|
|     ooo .. .... |
+----[SHA256]-----+
[hadoop@localhost root]$ 
[hadoop@localhost root]$ vi /home/hadoop/.ssh
[hadoop@localhost root]$ cd  /home/hadoop/.ssh
[hadoop@localhost .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@localhost .ssh]$ cd id_rsa
bash: cd: id_rsa: 不是目录
[hadoop@localhost .ssh]$ cat id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA0YJvPvHbZPVjORoqfi2yZy4kBDuK8NWZDA4XXKaAq6K6aQD9
nWXn54xYNpLIp402FOheOcWcYXoZAaNwZyif7JkNlOE3F+z6SAX5AUnO0FbmqU9i
A63elh9l074CRB/kFR5FyS7wVbMMf9pBzPH0YwvkCVHnIIJv43LojLZqZkkm18LK
vw0Jexo5mcPilLrO5GFjWwk6r9JncBVBOIvinffhnUEh8yw84iMaHTyCF7eQtyyV
YQ+sbt7dzx2kDIPDHtaqCs3KzTTshPONrXOwnV8uy9qrIXlSdJZ+uDKF3iwBjua/
krv7uFOIXL/oSybUI3i+dLGjsU9cC2KyTTZM+wIDAQABAoIBAE8IFxL1hT92WbGm
rzTd5BiMDUYTd/wVdyBXCHUI0izsu8j0eLcxQ+PSy0v63vpliGsmpLTCWJVky54U
m0guyPUlXzw0IOZDnL4ikqXrw7pPrb9clKqyoe6bdXwEhzJPnWhh/Q1BSaPqYOKx
4HSBKSFb0O+7F6bpzW9NX3AFN+nRK1gnGPA0iGOGNUyP/0UVDoMfctW5VXhD1iu4
RptvpLUh1nPwOzhZRBrJnbPyE94bJH18KJPlgw01phG1TwnlqqM6u2tFbc6ThtgH
wEgdh5f7HDGmej9Aza48jRCUmiY3r77MUOaTrivohi3cjOBQz2BCJ6+uGlEZ9jN+
hyhloVECgYEA98aZH0k2egS8edxK9cqYjNJD3267QQIQvLo+rjsf+7QhVFxwO2ge
KOYg0V9YUIbpttRaE2f+X6xmTcH2ybL18wK60y9AEQiGHHN1XfaQiNFvsMnn/eFY
U3RDTCGlfc0e7tpQboHxfJrKwiB6iCpPj4YqdwthfZ0USWR+34xp4zcCgYEA2Hau
Gc/fC6QGtz2tKHcqPFpVH3dKJvnvqO8OXUU4tGSaZISyga2S8115d4KZniXYfXMQ
FCqVwMq/cNj9LRMxmuqeHv3XUaMJGE95AW2jD2JeKo7VSjd2CysMRaaK7qWGX6jt
OF6ijzDZiyppQkFME8oLfSK2v2KIk/j/q/6cTl0CgYBpSuIDI4+c5qpZdr38GW4e
WbQyHNJCW+hU6yh7zfBXfEK1oNqoxCQc6T6E+umCvvJOmYr1uDmm2pJW9Ng0+nH+
JOjTmb61/lNPf9keZwsguS+nhwWpI7vvKvb2QU4cWbCNfAS2EU5Xz0femwK3HpPU
wAUHtbRmNvxJ/ATWZssQnQKBgQDR5pOAmB/TK+UPPxFwEc205GtyrbwL+4S4Lcei
DgOkeYF1Q2/Na8D5mIS2rL/FqTE6xJ6sz3aTkob9KIyobtpFPIjDyKZIlW22Uyol
hmj9/AcQAZ018H3Y6o9l4s8KBxw8GpSderbrXxU0a5XSF3tsHRny5/yJrUR/KI7T
+3saSQKBgQCybQycXkJNhBQQxpXGA/moo0dxNdKxdp0YFnkRbDC7F3gYqH9/v3J8
A9BfmBImJjHA1ihZm2JAO03rp2sB3TYQwIlkRejCUNp7V4FA/QcR+RU8D2guB1rO
YSQQRb227YmP0W5jkZ//M0ysEjI6RZa+CXP2gAW5IlxUHUyKwqom5g==
-----END RSA PRIVATE KEY-----
[hadoop@localhost .ssh]$ cd ..
[hadoop@localhost ~]$ cd ..
[hadoop@localhost home]$ 

4. Import the public key into the authorized_keys file (as the regular user)

[hadoop@localhost home]$ cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
[hadoop@localhost home]$ 

5. Set permissions on the key directory and the authorized_keys file (as the regular user)

[hadoop@localhost home]$ chmod 700 /home/hadoop/.ssh
[hadoop@localhost home]$ chmod 600 /home/hadoop/.ssh/authorized_keys
[hadoop@localhost home]$ 
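As with the root-based method above, a quick way to confirm this setup worked is to SSH into the local machine as hadoop; after accepting the host key on the first connection, no password should be requested:

[hadoop@localhost home]$ ssh localhost
[hadoop@localhost ~]$ exit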

IX. Install Hadoop 3.3.0

1. Unpack Hadoop

[root@localhost hadoop330]# cd /home
[root@localhost home]# ls
hadoop  sf  zkycentos7
[root@localhost home]# cd hadoop/
[root@localhost hadoop]# ls
hadoop-3.3.0.tar.gz  java  jdk-11.0.10_linux-x64_bin.rpm  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@localhost hadoop]# sudo cp hadoop-3.3.0.tar.gz /usr/hadoop330/
[root@localhost hadoop]# 
    
    
[root@localhost hadoop]# cd /usr/hadoop330/
[root@localhost hadoop330]# tar -zxvf hadoop-3.3.0.tar.gz
[root@localhost hadoop330]# ls
hadoop-3.3.0  hadoop-3.3.0.tar.gz
[root@localhost hadoop330]# 
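Note that the cp above assumes the /usr/hadoop330 directory already exists; if it does not, create it first (a one-line sketch):

[root@localhost ~]# mkdir -p /usr/hadoop330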

Configure hadoop-env.sh, adding the environment variables:
[root@hadoop001 hadoop]# cd /usr/hadoop330/hadoop-3.3.0/etc/hadoop/
[root@hadoop001 hadoop]# ls
capacity-scheduler.xml            kms-site.xml
configuration.xsl                 log4j.properties
container-executor.cfg            mapred-env.cmd
core-site.xml                     mapred-env.sh
hadoop-env.cmd                    mapred-queues.xml.template
hadoop-env.sh                     mapred-site.xml
hadoop-metrics2.properties        shellprofile.d
hadoop-policy.xml                 ssl-client.xml.example
hadoop-user-functions.sh.example  ssl-server.xml.example
hdfs-rbf-site.xml                 user_ec_policies.xml.template
hdfs-site.xml                     vim
httpfs-env.sh                     workers
httpfs-log4j.properties           yarn-env.cmd
httpfs-site.xml                   yarn-env.sh
kms-acls.xml                      yarnservice-log4j.properties
kms-env.sh                        yarn-site.xml
kms-log4j.properties
[root@hadoop001 hadoop]# vim hadoop-env.sh
#add the following
export JAVA_HOME=/usr/java/jdk-11.0.10
export HADOOP_LOG_DIR=/usr/hadoop330/hadoop-3.3.0/tmp/logs/hadoop
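HADOOP_LOG_DIR points into the installation's tmp folder; pre-creating it removes any doubt about write permissions when the daemons first start (optional, since Hadoop will normally create it itself if the parent directory is writable):

[root@hadoop001 hadoop]# mkdir -p /usr/hadoop330/hadoop-3.3.0/tmp/logs/hadoop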

2. Configure the Hadoop environment variables

(1) Edit /etc/profile with vi and add the following:
#hadoop3.3.0
export HADOOP_HOME=/usr/hadoop330/hadoop-3.3.0
export PATH=$HADOOP_HOME/bin:$PATH
(2) Then reload the configuration: source /etc/profile
(3) Afterwards run hadoop version; if the version number is printed, the setup succeeded.
[root@localhost hadoop330]# hadoop version
Hadoop 3.3.0
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r aa96f1871bfd858f9bac59cf2a81ec470da649af
Compiled by brahma on 2020-07-06T18:44Z
Compiled with protoc 3.7.1
From source with checksum 5dc29b802d6ccd77b262ef9d04d19c4
This command was run using /usr/hadoop330/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar
[root@localhost hadoop330]# 
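Only bin/ is put on the PATH above, which is why the start/stop scripts later on are invoked from the sbin directory with explicit paths. If you would rather call start-dfs.sh and friends directly, the /etc/profile entry can optionally include sbin as well:

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH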

X. Edit the Hadoop configuration files

1. Switch to the configuration directory

[root@hadoop001 hadoop]# cd /usr/hadoop330/hadoop-3.3.0/etc/hadoop
[root@hadoop001 hadoop]# ls
capacity-scheduler.xml            kms-log4j.properties
configuration.xsl                 kms-site.xml
container-executor.cfg            log4j.properties
core-site.xml                     mapred-env.cmd
hadoop-env.cmd                    mapred-env.sh
hadoop-env.sh                     mapred-queues.xml.template
hadoop-metrics2.properties        mapred-site.xml
hadoop-policy.xml                 shellprofile.d
hadoop-user-functions.sh.example  ssl-client.xml.example
hdfs-rbf-site.xml                 ssl-server.xml.example
hdfs-site.xml                     user_ec_policies.xml.template
httpfs-env.sh                     workers
httpfs-log4j.properties           yarn-env.cmd
httpfs-site.xml                   yarn-env.sh
kms-acls.xml                      yarnservice-log4j.properties
kms-env.sh                        yarn-site.xml
[root@hadoop001 hadoop]# 

2. Configure core-site.xml, adding the following:

[root@hadoop001 hadoop]# vim core-site.xml 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <!-- put your own hostname or IP here; the port is the default -->
        <!-- this apparently also works: <value>hdfs://192.168.187.128:9000</value> -->
        <value>hdfs://hadoop001:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- your custom Hadoop working directory -->
        <value>/usr/hadoop330/hadoop-3.3.0/tmp</value>
    </property>
</configuration>
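hadoop.tmp.dir points at /usr/hadoop330/hadoop-3.3.0/tmp; creating it up front is a harmless safeguard (optional, since Hadoop creates it on demand when the NameNode is formatted):

[root@hadoop001 hadoop]# mkdir -p /usr/hadoop330/hadoop-3.3.0/tmp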

3. Configure hdfs-site.xml, setting the HDFS replication factor to 2

[root@hadoop001 hadoop]# vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop001:50090</value>
    </property>
</configuration>
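Once the Hadoop environment variables from section IX are loaded, the effective value can be sanity-checked with hdfs getconf, which only reads the local configuration files:

[root@hadoop001 hadoop]# hdfs getconf -confKey dfs.replication
2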

4. Configure mapred-site.xml, setting YARN as the framework MapReduce runs on

[root@hadoop001 hadoop]# vim mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
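A note on a common follow-up problem: with only mapreduce.framework.name set, MapReduce jobs on Hadoop 3.x sometimes fail later with a "Could not find or load main class ... MRAppMaster" classpath error. The usual fix from the Apache setup documentation is to also declare the MapReduce classpath here; this is optional and only needed if you hit that error (HADOOP_MAPRED_HOME must resolve to the installation directory):

    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>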

5. Configure yarn-site.xml, setting the auxiliary service YARN runs and the environment-variable whitelist

[root@hadoop001 hadoop]# vim yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop001</value>
    </property>
</configuration>

6. Edit the workers file, adding every worker node's hostname, one per line

[root@hadoop001 hadoop]# vim workers

#delete localhost and add the following
hadoop002
hadoop003

7. Modify the start/stop scripts

[root@hadoop001 hadoop]# cd /usr/hadoop330/hadoop-3.3.0/sbin
[root@hadoop001 sbin]# ls
distribute-exclude.sh    start-all.sh         stop-balancer.sh
FederationStateStore     start-balancer.sh    stop-dfs.cmd
hadoop-daemon.sh         start-dfs.cmd        stop-dfs.sh
hadoop-daemons.sh        start-dfs.sh         stop-secure-dns.sh
httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
kms.sh                   start-yarn.cmd       stop-yarn.sh
mr-jobhistory-daemon.sh  start-yarn.sh        workers.sh
refresh-namenodes.sh     stop-all.cmd         yarn-daemon.sh
start-all.cmd            stop-all.sh          yarn-daemons.sh
[root@hadoop001 sbin]# 

8. Add the following to both start-dfs.sh and stop-dfs.sh

[root@hadoop001 sbin]# vim start-dfs.sh
[root@hadoop001 sbin]# vim stop-dfs.sh
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

9. Add the following to both start-yarn.sh and stop-yarn.sh

[root@hadoop001 sbin]# vim start-yarn.sh
[root@hadoop001 sbin]# vim stop-yarn.sh
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
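Editing the four scripts works; as an alternative (a sketch of the same root-user choice, not something shown in the transcript above), Hadoop 3 also picks these variables up from etc/hadoop/hadoop-env.sh, so they can be set once there instead:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root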

10. Format the NameNode

[root@hadoop001 sbin]# cd /usr/hadoop330/hadoop-3.3.0/
[root@hadoop001 hadoop-3.3.0]# ls
bin            include  LICENSE-binary   NOTICE-binary  sbin
core-site.xml  lib      licenses-binary  NOTICE.txt     share
etc            libexec  LICENSE.txt      README.txt
[root@hadoop001 hadoop-3.3.0]# bin/hdfs namenode -format
    
    
#or simply
[root@hadoop001 ~]# cd $HADOOP_HOME
[root@hadoop001 hadoop-3.3.0]# bin/hdfs namenode -format
WARNING: /usr/hadoop330/hadoop-3.3.0/logs does not exist. Creating.
2021-03-28 15:58:37,512 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop001/192.168.1.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.3.0
STARTUP_MSG:   classpath = /usr/hadoop330/hadoop-3.3.0/etc/hadoop:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/zookeeper-jute-3.5.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/avro-1.7.7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/javax.activation-api-1.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/hadoop-auth-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-webapp-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-server-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-annotations-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/gson-2.2.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/httpclient-4.5.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/hadoop-annotations-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/asm-5.0.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-util-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-security-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jsch-0.1.55.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-compress-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/curator-framework-4.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-net-3.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jersey-json-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/dnsjava-2.1.7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/curator-recipes-4.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-io-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/comm
on/lib/guava-27.0-jre.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/curator-client-4.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-codec-1.11.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-xml-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_7-1.0.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/httpcore-4.4.10.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-lang3-3.7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-databind-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jetty-http-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/json-smart-2.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jackson-core-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-text-1.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/netty-3.10.6.Final.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/commons-io-2.5.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/re2j-1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/zookeeper-3.5.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/failureaccess-1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/nimbus-jose-jwt-7.9.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/usr/hadoop330/hadoop-3
.3.0/share/hadoop/common/lib/jetty-servlet-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/hadoop-registry-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/hadoop-nfs-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/hadoop-kms-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0-tests.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/zookeeper-jute-3.5.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/javax.activation-api-1.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/hadoop-auth-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-server-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-annotations-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/httpclient-4.5.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/hadoop-annotations-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-util-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-security-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-compress-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/curator-framework-4.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/hadoop
330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/curator-recipes-4.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-io-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/curator-client-4.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/netty-all-4.1.50.Final.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-xml-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_7-1.0.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/httpcore-4.4.10.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-databind-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-http-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jackson-core-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-text-1.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/netty-3.10.6.Final.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/
lib/zookeeper-3.5.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-7.9.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-client-3.3.0-tests.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.3.0-tests.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-3.3.0-tests.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.3.0-tests.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-client-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/hdfs/hadoop-hdfs-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.0-tests.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/bcprov-jdk15on-1.60.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/asm-tree-7.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/asm-analysis-7.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jetty-jndi-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jetty-plus-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/websocket-server-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/guice-4.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/asm-commons-7.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jline-3.9.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.10.3.jar:/usr/hadoop
330/hadoop-3.3.0/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/websocket-servlet-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/websocket-common-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jetty-client-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jakarta.activation-api-1.2.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/websocket-api-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.60.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.10.3.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/websocket-client-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/fst-2.50.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/jetty-annotations-9.4.20.v20190813.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-services-api-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-api-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-client-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-services-core-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-common-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-common-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-registry-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-router-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.3.0.jar:/usr/hadoop330/hadoop-3.3.0/shar
e/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.3.0.jar
STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r aa96f1871bfd858f9bac59cf2a81ec470da649af; compiled by 'brahma' on 2020-07-06T18:44Z
STARTUP_MSG:   java = 11.0.10
************************************************************/
2021-03-28 15:58:37,525 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-03-28 15:58:37,654 INFO namenode.NameNode: createNameNode [-format]
2021-03-28 15:58:38,366 INFO namenode.NameNode: Formatting using clusterid: CID-40be2aef-0936-4b00-8bcc-083c49a6f93d
2021-03-28 15:58:38,456 INFO namenode.FSEditLog: Edit logging is async:true
2021-03-28 15:58:38,493 INFO namenode.FSNamesystem: KeyProvider: null
2021-03-28 15:58:38,495 INFO namenode.FSNamesystem: fsLock is fair: true
2021-03-28 15:58:38,495 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-03-28 15:58:38,552 INFO namenode.FSNamesystem: fsOwner                = root (auth:SIMPLE)
2021-03-28 15:58:38,553 INFO namenode.FSNamesystem: supergroup             = supergroup
2021-03-28 15:58:38,553 INFO namenode.FSNamesystem: isPermissionEnabled    = true
2021-03-28 15:58:38,553 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2021-03-28 15:58:38,553 INFO namenode.FSNamesystem: HA Enabled: false
2021-03-28 15:58:38,630 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-03-28 15:58:38,639 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-03-28 15:58:38,639 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-03-28 15:58:38,641 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-03-28 15:58:38,642 INFO blockmanagement.BlockManager: The block deletion will start around 2021 328 15:58:38
2021-03-28 15:58:38,643 INFO util.GSet: Computing capacity for map BlocksMap
2021-03-28 15:58:38,643 INFO util.GSet: VM type       = 64-bit
2021-03-28 15:58:38,645 INFO util.GSet: 2.0% max memory 498 MB = 10.0 MB
2021-03-28 15:58:38,645 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2021-03-28 15:58:38,651 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2021-03-28 15:58:38,651 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-03-28 15:58:38,656 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: defaultReplication         = 2
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: maxReplication             = 512
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: minReplication             = 1
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2021-03-28 15:58:38,657 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-03-28 15:58:38,677 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2021-03-28 15:58:38,677 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2021-03-28 15:58:38,677 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2021-03-28 15:58:38,677 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2021-03-28 15:58:38,694 INFO util.GSet: Computing capacity for map INodeMap
2021-03-28 15:58:38,694 INFO util.GSet: VM type       = 64-bit
2021-03-28 15:58:38,695 INFO util.GSet: 1.0% max memory 498 MB = 5.0 MB
2021-03-28 15:58:38,695 INFO util.GSet: capacity      = 2^19 = 524288 entries
2021-03-28 15:58:38,695 INFO namenode.FSDirectory: ACLs enabled? true
2021-03-28 15:58:38,695 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-03-28 15:58:38,695 INFO namenode.FSDirectory: XAttrs enabled? true
2021-03-28 15:58:38,695 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-03-28 15:58:38,701 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-03-28 15:58:38,703 INFO snapshot.SnapshotManager: SkipList is disabled
2021-03-28 15:58:38,707 INFO util.GSet: Computing capacity for map cachedBlocks
2021-03-28 15:58:38,707 INFO util.GSet: VM type       = 64-bit
2021-03-28 15:58:38,707 INFO util.GSet: 0.25% max memory 498 MB = 1.2 MB
2021-03-28 15:58:38,707 INFO util.GSet: capacity      = 2^17 = 131072 entries
2021-03-28 15:58:38,716 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-03-28 15:58:38,716 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-03-28 15:58:38,716 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-03-28 15:58:38,718 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-03-28 15:58:38,718 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-03-28 15:58:38,720 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-03-28 15:58:38,720 INFO util.GSet: VM type       = 64-bit
2021-03-28 15:58:38,720 INFO util.GSet: 0.029999999329447746% max memory 498 MB = 153.0 KB
2021-03-28 15:58:38,720 INFO util.GSet: capacity      = 2^14 = 16384 entries
2021-03-28 15:58:38,747 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1630466842-192.168.1.100-1616918318738
    
# success!
    
2021-03-28 15:58:38,769 INFO common.Storage: Storage directory /usr/hadoop330/hadoop-3.3.0/tmp/dfs/name has been successfully formatted.
2021-03-28 15:58:38,817 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/hadoop330/hadoop-3.3.0/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-03-28 15:58:38,946 INFO namenode.FSImageFormatProtobuf: Image file /usr/hadoop330/hadoop-3.3.0/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 396 bytes saved in 0 seconds .
2021-03-28 15:58:38,967 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-03-28 15:58:38,973 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-03-28 15:58:38,975 INFO namenode.NameNode: SHUTDOWN_MSG: 

# this part of the output means the format succeeded
    
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop001/192.168.1.100
************************************************************/
[root@hadoop001 hadoop-3.3.0]# 

Note: if you format the namenode more than once, you must first delete the /usr/hadoop330/hadoop-3.3.0/tmp directory on the VM (the directory configured in core-site.xml); only then will the format succeed.
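If a previous format left data behind, a cleanup along these lines could be run first (a minimal sketch, assuming the paths used in this tutorial; run it on every node that already has the tmp directory):

# remove the old HDFS metadata and logs before re-formatting
rm -rf /usr/hadoop330/hadoop-3.3.0/tmp
rm -rf /usr/hadoop330/hadoop-3.3.0/logs/*
# then format again on hadoop001
/usr/hadoop330/hadoop-3.3.0/bin/hdfs namenode -format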

11. Start the cluster (I ran into a problem here: JAVA_HOME had not been added to hadoop-env.sh under /usr/hadoop330/hadoop-3.3.0/etc/hadoop/; see Section IX "Install hadoop 3.3.0", step 2 "Configure the hadoop environment variables" for details)
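For reference, the missing piece is a single line in hadoop-env.sh; a sketch of the fix looks like this (assuming the JDK path used elsewhere in this tutorial, /usr/java/jdk-11.0.10; adjust if yours differs):

# append to /usr/hadoop330/hadoop-3.3.0/etc/hadoop/hadoop-env.sh on every node
export JAVA_HOME=/usr/java/jdk-11.0.10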

# re-copy the Java environment to the worker nodes
[root@hadoop001 hadoop-3.3.0]# scp -rq /usr/java/jdk-11.0.10 hadoop002:/usr/java/
[root@hadoop001 hadoop-3.3.0]# scp -rq /etc/profile hadoop002:/etc/
[root@hadoop001 hadoop-3.3.0]# scp -rq /usr/java/jdk-11.0.10 hadoop003:/usr/java/
[root@hadoop001 hadoop-3.3.0]# scp -rq /etc/profile hadoop003:/etc/
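After copying, it does not hurt to confirm the JDK actually works on the workers (a quick hedged check; /etc/profile is sourced explicitly because a non-interactive ssh session does not load it):

# verify the copied JDK on each worker
ssh hadoop002 "source /etc/profile; java -version"
ssh hadoop003 "source /etc/profile; java -version"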
After that, copy the Hadoop directory over to the other two nodes, then format the namenode following the earlier steps:
scp -rq /usr/hadoop330/hadoop-3.3.0 hadoop002:/usr/hadoop330/
scp -rq /usr/hadoop330/hadoop-3.3.0 hadoop003:/usr/hadoop330/
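One caveat: with the trailing slash, scp needs /usr/hadoop330 to already exist on the workers; if it does not, creating it first is the easy fix (a sketch):

# create the target directory on the workers before the scp above
ssh hadoop002 "mkdir -p /usr/hadoop330"
ssh hadoop003 "mkdir -p /usr/hadoop330"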
# took a quick look at the copied files on hadoop002 (just double-checking)
[root@hadoop002 ~]# cd /usr/hadoop330/hadoop-3.3.0/etc/hadoop
[root@hadoop002 hadoop]# ls
capacity-scheduler.xml            kms-log4j.properties
configuration.xsl                 kms-site.xml
container-executor.cfg            log4j.properties
core-site.xml                     mapred-env.cmd
hadoop-env.cmd                    mapred-env.sh
hadoop-env.sh                     mapred-queues.xml.template
hadoop-metrics2.properties        mapred-site.xml
hadoop-policy.xml                 shellprofile.d
hadoop-user-functions.sh.example  ssl-client.xml.example
hdfs-rbf-site.xml                 ssl-server.xml.example
hdfs-site.xml                     user_ec_policies.xml.template
httpfs-env.sh                     workers
httpfs-log4j.properties           yarn-env.cmd
httpfs-site.xml                   yarn-env.sh
kms-acls.xml                      yarnservice-log4j.properties
kms-env.sh                        yarn-site.xml
[root@hadoop002 hadoop]# vim core-site.xml 
[root@hadoop002 hadoop]# vim hdfs-site.xml
[root@hadoop002 hadoop]# vim mapred-site.xml
[root@hadoop002 hadoop]# vim yarn-site.xml
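Instead of opening every file in vim, the copied configs can be compared against hadoop001 in one go (a sketch, run from hadoop001; the loop relies on bash process substitution):

# diff the four main configs on hadoop002 against the local copies
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  diff /usr/hadoop330/hadoop-3.3.0/etc/hadoop/$f \
       <(ssh hadoop002 cat /usr/hadoop330/hadoop-3.3.0/etc/hadoop/$f) \
    && echo "$f matches hadoop001"
done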

十一、Verification

1、Start the cluster and check with jps

[root@hadoop001 ~]# $HADOOP_HOME
bash: /usr/hadoop330/hadoop-3.3.0: 是一个目录
[root@hadoop001 ~]# cd $HADOOP_HOME
[root@hadoop001 hadoop-3.3.0]# sbin/start-all.sh
Starting namenodes on [hadoop001]
上一次登录:日 328 19:22:59 CST 2021pts/1 上
hadoop001: namenode is running as process 27966.  Stop it first.
Starting datanodes
上一次登录:日 328 19:32:59 CST 2021pts/0 上
hadoop002: datanode is running as process 18915.  Stop it first.
hadoop003: datanode is running as process 18581.  Stop it first.
Starting secondary namenodes [hadoop001]
上一次登录:日 328 19:33:00 CST 2021pts/0 上
hadoop001: secondarynamenode is running as process 28264.  Stop it first.
Starting resourcemanager
上一次登录:日 328 19:33:03 CST 2021pts/0 上
resourcemanager is running as process 28546.  Stop it first.
Starting nodemanagers
上一次登录:日 328 19:33:12 CST 2021pts/0 上
hadoop002: nodemanager is running as process 19051.  Stop it first.
hadoop003: nodemanager is running as process 18716.  Stop it first.

    
#jps
[root@hadoop001 hadoop-3.3.0]# jps
28546 ResourceManager
28264 SecondaryNameNode
27966 NameNode
30607 Jps
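The "Stop it first" messages above only mean the daemons were already running from an earlier start, so nothing new was launched. To restart from a clean state and check the workers, something like this could be used (a sketch; on hadoop002/hadoop003, jps should then show DataNode and NodeManager):

# restart everything from hadoop001
cd $HADOOP_HOME
sbin/stop-all.sh
sbin/start-all.sh
# check the daemons on the workers
ssh hadoop002 "source /etc/profile; jps"
ssh hadoop003 "source /etc/profile; jps"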

2、Check the Web UIs

# open Firefox inside the Linux VM and verify that these pages load
    
http://192.168.1.100:8088/cluster

http://192.168.1.100:9870/dfshealth.html#tab-overview
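If the browser is not handy, the same endpoints can be probed from the shell (a quick sketch; both should return HTTP 200):

# probe the HDFS and YARN web UIs
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.100:9870/
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.100:8088/cluster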

3、Run the PI example to verify the cluster works

[root@hadoop001 mapreduce]# hadoop jar /usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 10 10
[root@hadoop001 hadoop-3.3.0]# ls
bin            hadoop-env.sh  libexec          LICENSE.txt    NOTICE.txt  share
core-site.xml  include        LICENSE-binary   logs           README.txt  tmp
etc            lib            licenses-binary  NOTICE-binary  sbin
[root@hadoop001 hadoop-3.3.0]# cd share/
[root@hadoop001 share]# ls
doc  hadoop
[root@hadoop001 share]# cd hadoop/
[root@hadoop001 hadoop]# ls
client  common  hdfs  mapreduce  tools  yarn
[root@hadoop001 hadoop]# cd mapreduce/
[root@hadoop001 mapreduce]# ls
hadoop-mapreduce-client-app-3.3.0.jar
hadoop-mapreduce-client-common-3.3.0.jar
hadoop-mapreduce-client-core-3.3.0.jar
hadoop-mapreduce-client-hs-3.3.0.jar
hadoop-mapreduce-client-hs-plugins-3.3.0.jar
hadoop-mapreduce-client-jobclient-3.3.0.jar
hadoop-mapreduce-client-jobclient-3.3.0-tests.jar
hadoop-mapreduce-client-nativetask-3.3.0.jar
hadoop-mapreduce-client-shuffle-3.3.0.jar
hadoop-mapreduce-client-uploader-3.3.0.jar
hadoop-mapreduce-examples-3.3.0.jar
jdiff
lib-examples
sources
[root@hadoop001 mapreduce]# pwd
/usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce
[root@hadoop001 mapreduce]# hadoop jar /usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 10 10
Number of Maps  = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
2021-03-28 19:42:10,007 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hadoop001/192.168.1.100:8032
2021-03-28 19:42:10,414 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1616930585147_0001
2021-03-28 19:42:10,582 INFO input.FileInputFormat: Total input files to process : 10
2021-03-28 19:42:10,663 INFO mapreduce.JobSubmitter: number of splits:10
2021-03-28 19:42:10,887 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1616930585147_0001
2021-03-28 19:42:10,887 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-03-28 19:42:11,041 INFO conf.Configuration: resource-types.xml not found
2021-03-28 19:42:11,042 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-03-28 19:42:11,442 INFO impl.YarnClientImpl: Submitted application application_1616930585147_0001
2021-03-28 19:42:11,472 INFO mapreduce.Job: The url to track the job: http://hadoop001:8088/proxy/application_1616930585147_0001/
2021-03-28 19:42:11,472 INFO mapreduce.Job: Running job: job_1616930585147_0001
2021-03-28 19:42:21,650 INFO mapreduce.Job: Job job_1616930585147_0001 running in uber mode : false
2021-03-28 19:42:21,653 INFO mapreduce.Job:  map 0% reduce 0%
2021-03-28 19:42:39,321 INFO mapreduce.Job:  map 20% reduce 0%
2021-03-28 19:43:01,602 INFO mapreduce.Job:  map 40% reduce 7%
2021-03-28 19:43:04,633 INFO mapreduce.Job:  map 70% reduce 7%
2021-03-28 19:43:05,682 INFO mapreduce.Job:  map 80% reduce 7%
2021-03-28 19:43:06,690 INFO mapreduce.Job:  map 100% reduce 7%
2021-03-28 19:43:07,700 INFO mapreduce.Job:  map 100% reduce 100%
2021-03-28 19:43:08,721 INFO mapreduce.Job: Job job_1616930585147_0001 completed successfully
2021-03-28 19:43:08,858 INFO mapreduce.Job: Counters: 54
	File System Counters
		FILE: Number of bytes read=226
		FILE: Number of bytes written=2904869
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=2630
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=45
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
		HDFS: Number of bytes read erasure-coded=0
	Job Counters 
		Launched map tasks=10
		Launched reduce tasks=1
		Data-local map tasks=10
		Total time spent by all maps in occupied slots (ms)=347743
		Total time spent by all reduces in occupied slots (ms)=26244
		Total time spent by all map tasks (ms)=347743
		Total time spent by all reduce tasks (ms)=26244
		Total vcore-milliseconds taken by all map tasks=347743
		Total vcore-milliseconds taken by all reduce tasks=26244
		Total megabyte-milliseconds taken by all map tasks=356088832
		Total megabyte-milliseconds taken by all reduce tasks=26873856
	Map-Reduce Framework
		Map input records=10
		Map output records=20
		Map output bytes=180
		Map output materialized bytes=280
		Input split bytes=1450
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=280
		Reduce input records=20
		Reduce output records=0
		Spilled Records=40
		Shuffled Maps =10
		Failed Shuffles=0
		Merged Map outputs=10
		GC time elapsed (ms)=3557
		CPU time spent (ms)=24070
		Physical memory (bytes) snapshot=2350845952
		Virtual memory (bytes) snapshot=31071526912
		Total committed heap usage (bytes)=1931476992
		Peak Map Physical memory (bytes)=257482752
		Peak Map Virtual memory (bytes)=2827128832
		Peak Reduce Physical memory (bytes)=166195200
		Peak Reduce Virtual memory (bytes)=2830069760
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=1180
	File Output Format Counters 
		Bytes Written=97
Job Finished in 59.022 seconds
Estimated value of Pi is 3.20000000000000000000
[root@hadoop001 mapreduce]# 
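The estimate of 3.2 is coarse because only 10 maps with 10 samples each were used; re-running with more samples per map should land much closer to π (a sketch, same jar and syntax, just a longer run):

# more samples per map -> a better estimate
hadoop jar /usr/hadoop330/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 10 1000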

This was my first time setting this up, so if there are any mistakes, criticism and corrections are welcome~~~~
