This guide builds the environment on a CentOS 7 system.
Related resource downloads
Link: https://pan.baidu.com/s/1FW228OfyURxEgnXW0qqpmA Password: 18uc
# Check the firewall status
[root@localhost ~]# firewall-cmd --state
# Stop the firewall
[root@localhost ~]# systemctl stop firewalld.service
# Keep the firewall from starting on boot
[root@localhost ~]# systemctl disable firewalld.service
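Checking the status again should now report that the firewall is down:
[root@localhost ~]# firewall-cmd --state
not running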
[root@localhost ~]# vi /etc/selinux/config
Change SELINUX=enforcing
to SELINUX=disabled
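The config-file change only takes effect after a reboot; to switch SELinux to permissive mode immediately for the current session as well, run:
[root@localhost ~]# setenforce 0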
[root@localhost ~]# ifconfig
The IP address here is 192.168.78.100
[root@localhost ~]# vi /etc/hostname   # set the hostname to CentOS
[root@localhost ~]# vi /etc/hosts      # map the IP to the hostname:
192.168.78.100 CentOS
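On CentOS 7 the hostname can also be set in one step, and the mapping verified with a ping (both commands assume the IP and hostname used above):
[root@localhost ~]# hostnamectl set-hostname CentOS
[root@localhost ~]# ping -c 1 CentOS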
[root@localhost ~]# ssh-keygen -t rsa # generate an RSA key pair
# Press Enter three times to accept the defaults
# Copy the public key to every machine that needs passwordless login to this host;
# there is only one machine here, so send it to itself
[root@localhost ~]# ssh-copy-id root@CentOS
# Test the SSH login
[root@localhost ~]# ssh root@CentOS
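If the key was copied correctly, this login completes without a password prompt; type exit to leave the nested SSH session and return to the original shell:
[root@localhost ~]# exit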
Create an install directory
[root@localhost ~]# mkdir /opt/install/
JDK 8 is used here
[root@localhost ~]# tar -zxvf jdk-8u144-linux-x64.tar.gz -C /opt/install/
[root@localhost jdk1.8.0_144]# vi /etc/profile
# Append the following configuration at the end of the file
export JAVA_HOME=/opt/install/jdk1.8.0_144
export PATH=$PATH:$JAVA_HOME/bin
# Save the file, then reload the environment variables
[root@localhost jdk1.8.0_144]# source /etc/profile
# Then verify that the JDK is installed correctly
[root@localhost jdk1.8.0_144]# java -version
On success it prints the JDK version:
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
[root@localhost ~]# tar -zxvf hadoop-2.9.2.tar.gz -C /opt/install/
[root@localhost ~]# cd /opt/install/hadoop-2.9.2/etc/hadoop
hadoop-env.sh
export JAVA_HOME=/opt/install/jdk1.8.0_144
core-site.xml
<!-- Set the NameNode address; this is also the access entry point for Java client programs -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:8020</value>
</property>
<!-- Directory for the NameNode's persisted metadata and the DataNode's block data -->
<!-- Create $HADOOP_HOME/data/tmp manually -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/install/hadoop-2.9.2/data/tmp</value>
</property>
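The hadoop.tmp.dir directory is not created automatically, so create it now, before formatting HDFS:
[root@localhost ~]# mkdir -p /opt/install/hadoop-2.9.2/data/tmp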
hdfs-site.xml
<!-- There is only a single DataNode in this pseudo-distributed setup, so use a replication factor of 1 -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- Disable HDFS permission checks for convenience in a test environment -->
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
<!-- Address of the NameNode web UI -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>CentOS:50070</value>
</property>
mapred-site.xml
mapred-site.xml does not exist by default, so copy it from the template first:
[root@localhost hadoop]# cp mapred-site.xml.template mapred-site.xml
<!-- Run MapReduce on YARN -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
yarn-site.xml
<!-- Auxiliary shuffle service required by MapReduce -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
slaves
This file lists the DataNode hostnames. In pseudo-distributed mode the NameNode host also acts as the DataNode, so it contains a single entry:
CentOS
[root@localhost hadoop-2.9.2]# vim /etc/profile
# Add the following
export HADOOP_HOME=/opt/install/hadoop-2.9.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Reload the environment variables
[root@localhost hadoop-2.9.2]# source /etc/profile
[root@localhost hadoop-2.9.2]# hadoop version
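The first line of the output should report the version:
Hadoop 2.9.2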
Purpose: format the HDFS filesystem and generate the directories used to store metadata and data blocks
[root@localhost hadoop-2.9.2]# hadoop namenode -format
(hadoop namenode -format is deprecated in Hadoop 2.x; hdfs namenode -format is the current equivalent.)
A successful format prints a line like "Storage directory /opt/install/hadoop-2.9.2/data/tmp/dfs/name has been successfully formatted."
Start and stop all daemons with:
start-all.sh
stop-all.sh
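start-all.sh is itself deprecated in Hadoop 2.x; it simply invokes the two scripts below, which can also be run separately:
[root@localhost hadoop-2.9.2]# start-dfs.sh
[root@localhost hadoop-2.9.2]# start-yarn.sh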
# After startup, run jps to check the daemon processes
[root@localhost hadoop-2.9.2]# jps
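With everything running, jps should list one process per daemon (PIDs omitted here):
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps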
Access the HDFS web UI at http://CentOS:50070
Access the YARN web UI at http://CentOS:8088
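To open these pages from a browser outside the VM, either add the same 192.168.78.100 CentOS entry to that machine's hosts file, or use the IP directly, e.g. http://192.168.78.100:50070.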
Hadoop 3.0 and later adds user-identity checks. With the configuration above, starting the cluster as root aborts with the following errors:
Starting namenodes on [namenode]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [datanode1]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
To fix this, make a small change to the startup scripts in Hadoop's sbin directory.
In start-dfs.sh and stop-dfs.sh, add:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
In start-yarn.sh and stop-yarn.sh, add:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Save the changes and restart the cluster; the errors go away.
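Alternatively, the same user variables can be defined once in etc/hadoop/hadoop-env.sh instead of patching the scripts (the values below assume everything runs as root, as in this guide):
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root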