# You can check whether a hadoop group already exists with: cat /etc/group | grep hadoop
$ cat /etc/group | grep hadoop
hadoop:x:1000:
# Create a user named hadoop (any valid username works) with /bin/bash as its login shell
sudo useradd -m hadoop -s /bin/bash
# Alternatively, specify the UID, primary group, and home directory when creating the user:
# sudo useradd -u 3000 -d /home/hadoop -g 1271 -m hadoop
sudo passwd hadoop
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
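The steps below run sudo as the hadoop user, so the account needs sudo privileges. A minimal sketch for CentOS 7, where the wheel group is enabled in /etc/sudoers by default (run as root):
usermod -aG wheel hadoop   # takes effect on the next login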
Reference: https://blog.csdn.net/lyhkmm/article/details/79524712
Option 1: install OpenJDK from the yum repositories.
1) List the JDK versions available in the yum repositories:
[root@localhost:] # yum search java|grep jdk
2) Install the chosen version. Note the trailing *, and note that the yum repositories ship OpenJDK, which is not the same build as the Oracle JDK:
[root@localhost:] # yum install java-1.8.0-openjdk*
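Afterwards you can confirm the installation:
java -version   # should report an openjdk 1.8.0 build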
Option 2: install the JDK manually from the tarball.
1) Download the JDK.
2) Create a java directory under /usr/local and extract the package into it:
mkdir -p /usr/local/java
tar -zxvf jdk-8u121-linux-x64.tar.gz -C /usr/local/java
3) Configure the environment variables:
vi /etc/profile
# Press G to jump to the last line, press i to enter insert mode, then append the following lines:
export JAVA_HOME=/usr/local/java/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
4) Apply the environment variables:
source /etc/profile
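You can check that the variables are in effect (the expected value assumes the paths above):
echo $JAVA_HOME   # /usr/local/java/jdk1.8.0_121
java -version     # should report the JDK just installed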
Pre-installation preparation
There are two ways to disable SELinux: ① temporarily, ② permanently. You can check the current SELinux state with the sestatus command:
[hadoop@hadoop1 ~]$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: disabled
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
[hadoop@hadoop1 ~]$ setenforce 0
setenforce: setenforce() failed  # if you get this error, run it with sudo
[hadoop@hadoop1 ~]$ sudo setenforce 0
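Confirm the temporary change took effect:
getenforce   # should now print Permissive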
[hadoop@hadoop2 ~]$ sudo sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
Usually you apply the temporary change first and then edit the config file; that way SELinux stays disabled even after the server reboots.
Set the timezone to Asia/Shanghai (run as root):
[root@hadoop1 ~]# rm -rf /etc/localtime && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Stop and disable the firewall so the cluster ports are reachable:
[root@hadoop3 ~]# systemctl stop firewalld
[root@hadoop3 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@hadoop3 ~]#
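Confirm that the firewall is stopped and will not start at boot:
sudo systemctl is-active firewalld    # inactive
sudo systemctl is-enabled firewalld   # disabled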
hostnamectl is a command introduced in RHEL 7, so CentOS 7 (which is based on RHEL 7) can use it to change the hostname:
sudo hostnamectl set-hostname newHostName
The new hostname is written immediately, but the shell prompt only reflects it after you log in again (or reboot). Then add the cluster hosts to /etc/hosts on every node. # Change the IPs to match your cluster machines
192.168.0.109 hadoop1
192.168.0.110 hadoop2
192.168.0.111 hadoop3
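A quick check that the names resolve, run from any node:
ping -c 1 hadoop2   # should resolve to 192.168.0.110 and get a reply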
Generate a key pair on each node with ssh-keygen (just press Enter at every prompt):
[hadoop@hadoop3 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:eWAow8nEMyN8gDeiaSRw18XuNGXnasJJe00cj/3QU2U hadoop@hadoop3
The key's randomart image is:
+---[RSA 2048]----+
|ooooo. o. E|
|+o==*.... o o ..|
|++ +*+..oo + = ..|
|o. o .=o = +..|
|. =S+.+ o.|
| *.+ . .|
| + |
| |
| |
+----[SHA256]-----+
Use ssh-copy-id to send the public key; run it between the cluster machines in turn (a distribution script is recommended; see the sketch after the transcript below):
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'hadoop2 (192.168.0.110)' can't be established.
ECDSA key fingerprint is SHA256:lECmiRCReMDEjO9Azb4aYxF0D++qjFtPZdw50R24uf8.
ECDSA key fingerprint is MD5:25:a1:bf:18:40:1d:56:d9:2c:4d:26:f0:60:8b:93:4e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@hadoop2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop2'"
and check to make sure that only the key(s) you wanted were added.
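A minimal distribution sketch (an assumption, not from the original; it assumes the hadoop user and the three hostnames above, and ssh-copy-id prompts once per host):
# run as the hadoop user on each node to authorize every host, including itself
for host in hadoop1 hadoop2 hadoop3; do
    ssh-copy-id "$host"
done
# verify: this should print the remote hostname without asking for a password
ssh hadoop2 hostname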
1) Download Hadoop.
# If you see "-bash: wget: command not found", install wget with: yum install wget
wget http://archive.apache.org/dist/hadoop/core/hadoop-3.1.0/hadoop-3.1.0.tar.gz
2) Extract it to a directory of your choice:
mkdir -p /home/hadoop/module
tar -zxvf hadoop-3.1.0.tar.gz -C /home/hadoop/module
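Optionally (my addition, not part of the original steps), put Hadoop on the PATH so the later commands do not need full paths:
# append to /etc/profile (or ~/.bashrc), then source it
export HADOOP_HOME=/home/hadoop/module/hadoop-3.1.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH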
3) Edit hadoop-env.sh and set JAVA_HOME explicitly:
vi /home/hadoop/module/hadoop-3.1.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/java/jdk1.8.0_121
4) Edit core-site.xml:
vi /home/hadoop/module/hadoop-3.1.0/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/module/hadoop-3.1.0/tmp</value>
        <description>Temporary file directory</description>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>4320</value>
        <description>Keep trash checkpoints for 72 hours (4320 minutes)</description>
    </property>
</configuration>
5) Edit hdfs-site.xml:
vi /home/hadoop/module/hadoop-3.1.0/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/module/hadoop-3.1.0/data/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/module/hadoop-3.1.0/data/data</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
6) Edit mapred-site.xml:
vi /home/hadoop/module/hadoop-3.1.0/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
7) Edit yarn-site.xml:
vi /home/hadoop/module/hadoop-3.1.0/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
8) Edit the workers file and list the worker (DataNode) hostnames:
vi /home/hadoop/module/hadoop-3.1.0/etc/hadoop/workers
hadoop2
hadoop3
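The configured Hadoop directory (plus the JDK and the /etc/profile changes) must also be present on hadoop2 and hadoop3. A minimal sketch, assuming rsync is installed and /home/hadoop/module exists on the workers (scp -r works too):
for host in hadoop2 hadoop3; do
    rsync -a /home/hadoop/module/hadoop-3.1.0 "$host":/home/hadoop/module/
done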
9) Format HDFS (run once, on the NameNode host hadoop1):
/home/hadoop/module/hadoop-3.1.0/bin/hdfs namenode -format
10) Start the cluster (on hadoop1):
/home/hadoop/module/hadoop-3.1.0/sbin/start-dfs.sh
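Since yarn-site.xml is configured above, you will normally also start YARN and then check the daemons; the jps listings below are what I would expect from this configuration, not output from the original:
/home/hadoop/module/hadoop-3.1.0/sbin/start-yarn.sh
# on hadoop1, jps should show roughly: NameNode, SecondaryNameNode, ResourceManager
# on hadoop2/hadoop3: DataNode, NodeManager
jps
# print HDFS capacity and the live DataNodes
/home/hadoop/module/hadoop-3.1.0/bin/hdfs dfsadmin -report
The NameNode web UI listens on http://hadoop1:9870 and the ResourceManager UI on http://hadoop1:8088 (Hadoop 3.x defaults).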