VM OS: CentOS 7
JDK version: 1.8
Hadoop version: 3.1.3
Check the firewall status:
firewall-cmd --state
Stop the service:
systemctl stop firewalld.service
(Optional) keep it off across reboots:
systemctl disable firewalld.service
Make sure the SSH service is running:
systemctl start sshd.service
Configure passwordless SSH login:
cd ~/.ssh
If the directory does not exist, log in once over SSH as root to create it:
ssh root@localhost
ssh-keygen -t rsa
Press Enter at every prompt to accept the defaults.
cat id_rsa.pub >> authorized_keys
chmod 644 authorized_keys
Log in again to confirm that no password is requested:
ssh root@localhost
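Passwordless login fails silently when the file permissions are wrong. The sketch below shows the modes sshd expects, demonstrated on a scratch directory so it is safe to run anywhere; on the VM, point `SSH_DIR` at `~/.ssh` instead:

```shell
# Verify the permission bits sshd requires for key-based login.
# SSH_DIR is a throwaway stand-in for ~/.ssh (assumption for the demo).
SSH_DIR=$(mktemp -d)
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # ~/.ssh must be rwx for the owner only
chmod 644 "$SSH_DIR/authorized_keys"  # authorized_keys must not be group/world writable
# sshd refuses the key when these modes are wrong:
ls -ld "$SSH_DIR" | cut -c1-10            # drwx------
ls -l "$SSH_DIR/authorized_keys" | cut -c1-10  # -rw-r--r--
```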
Upload the JDK archive to /usr/java on the CentOS 7 machine with a tool such as Xftp or WinSCP.
Enter the directory and unpack the archive:
- cd /usr/java
- tar -zxvf jdk-8u191-linux-x64.tar.gz
Set the environment variables:
- vim ~/.bashrc
-
- # Append at the bottom of the file:
- export JAVA_HOME=/usr/java/jdk1.8.0_191
- export PATH=$JAVA_HOME/bin:$PATH
-
- # Apply the changes:
- source ~/.bashrc
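A quick sanity check that the exports took effect after `source ~/.bashrc` (the path assumes the jdk1.8.0_191 directory unpacked above):

```shell
# Confirm JAVA_HOME is set and its bin directory is on PATH.
export JAVA_HOME=/usr/java/jdk1.8.0_191
export PATH=$JAVA_HOME/bin:$PATH
echo "JAVA_HOME=$JAVA_HOME"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH ok" ;;
  *)                    echo "PATH missing JAVA_HOME/bin" ;;
esac
# On the VM itself, `java -version` should now report 1.8.0_191.
```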
Upload hadoop-3.1.3.tar.gz to /usr/java in the same way, then unpack it:
- cd /usr/java
- tar -zxvf hadoop-3.1.3.tar.gz
vim ~/.bashrc
Append at the bottom of the file:
- export HADOOP_HOME=/usr/java/hadoop-3.1.3
- export HADOOP_INSTALL=$HADOOP_HOME
- export HADOOP_MAPRED_HOME=$HADOOP_HOME
- export HADOOP_COMMON_HOME=$HADOOP_HOME
- export HADOOP_HDFS_HOME=$HADOOP_HOME
- export YARN_HOME=$HADOOP_HOME
- export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
- export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
- export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
Apply the changes:
source ~/.bashrc
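The Hadoop scripts expect every `HADOOP_*` variable to resolve to the same install root; this sketch mirrors the exports above and flags any mismatch:

```shell
# Check that all Hadoop-related variables point at the same root
# (values mirror the ~/.bashrc exports added above).
export HADOOP_HOME=/usr/java/hadoop-3.1.3
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
for v in HADOOP_MAPRED_HOME HADOOP_COMMON_HOME HADOOP_HDFS_HOME YARN_HOME; do
  eval "val=\$$v"
  if [ "$val" = "$HADOOP_HOME" ]; then echo "$v ok"; else echo "$v MISMATCH: $val"; fi
done
# On the VM, `hadoop version` should now print Hadoop 3.1.3.
```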
Edit /usr/java/hadoop-3.1.3/etc/hadoop/core-site.xml and add the following inside the <configuration> tag:
- <property>
- <!-- Base directory for files Hadoop generates at run time -->
- <name>hadoop.tmp.dir</name>
- <value>/usr/java/hadoop-3.1.3/tmp</value>
- <description>A base for other temporary directories.</description>
- </property>
- <property>
- <!-- Communication address of the HDFS NameNode -->
- <name>fs.defaultFS</name>
- <value>hdfs://127.0.0.1:9000</value>
- </property>
- <property>
- <name>hadoop.native.lib</name>
- <value>false</value>
- </property>
Edit /usr/java/hadoop-3.1.3/etc/hadoop/hdfs-site.xml and add the following inside the <configuration> tag:
- <property>
- <!-- Number of replicas for each HDFS block; the default is 3 -->
- <name>dfs.replication</name>
- <value>1</value>
- </property>
- <property>
- <!-- Directory where the NameNode stores the name table -->
- <name>dfs.namenode.name.dir</name>
- <value>file:/usr/java/hadoop-3.1.3/tmp/dfs/name</value>
- </property>
- <property>
- <!-- Directory where the DataNode stores data blocks -->
- <name>dfs.datanode.data.dir</name>
- <value>file:/usr/java/hadoop-3.1.3/tmp/dfs/data</value>
- </property>
- <property>
- <!-- Address and port of the HDFS web UI -->
- <name>dfs.http.address</name>
- <value>0.0.0.0:9870</value>
- </property>
Edit /usr/java/hadoop-3.1.3/etc/hadoop/mapred-site.xml and add the following inside the <configuration> tag:
- <property>
- <!-- Run the MapReduce framework on YARN -->
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
Edit /usr/java/hadoop-3.1.3/etc/hadoop/yarn-site.xml and add the following inside the <configuration> tag:
- <!-- Site specific YARN configuration properties -->
- <property>
- <name>yarn.resourcemanager.hostname</name>
- <value>localhost</value>
- </property>
- <property>
- <name>yarn.resourcemanager.webapp.address</name>
- <value>${yarn.resourcemanager.hostname}:8088</value>
- </property>
- <property>
- <name>yarn.nodemanager.vmem-check-enabled</name>
- <value>false</value>
- </property>
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
- <property>
- <name>yarn.application.classpath</name>
- <value>
- ${HADOOP_HOME}/etc/hadoop/conf,
- ${HADOOP_HOME}/share/hadoop/common/lib/*,
- ${HADOOP_HOME}/share/hadoop/common/*,
- ${HADOOP_HOME}/share/hadoop/hdfs,
- ${HADOOP_HOME}/share/hadoop/hdfs/lib/*,
- ${HADOOP_HOME}/share/hadoop/hdfs/*,
- ${HADOOP_HOME}/share/hadoop/mapreduce/*,
- ${HADOOP_HOME}/share/hadoop/yarn,
- ${HADOOP_HOME}/share/hadoop/yarn/lib/*,
- ${HADOOP_HOME}/share/hadoop/yarn/*
- </value>
- </property>
Add the following to both start-dfs.sh and stop-dfs.sh in /usr/java/hadoop-3.1.3/sbin:
- HDFS_DATANODE_USER=root
- HDFS_DATANODE_SECURE_USER=hdfs
- HDFS_NAMENODE_USER=root
- HDFS_SECONDARYNAMENODE_USER=root
Add the following to both start-yarn.sh and stop-yarn.sh in /usr/java/hadoop-3.1.3/sbin:
- YARN_RESOURCEMANAGER_USER=root
- HADOOP_SECURE_DN_USER=yarn
- YARN_NODEMANAGER_USER=root
Format the NameNode (do this only once):
hdfs namenode -format
The older form `hadoop namenode -format` still works but is deprecated.
From the sbin directory, start everything:
start-all.sh
Check the running processes:
jps
If the five daemons NameNode, SecondaryNameNode, DataNode, ResourceManager, and NodeManager are all listed (alongside jps itself), the startup succeeded.
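The check above can be scripted. In this sketch the here-doc holds sample data so the snippet runs anywhere; on the VM, replace it with `JPS_OUT=$(jps)`:

```shell
# Scan jps output for the five expected daemons and report any that are missing.
# The here-doc is sample output (an assumption for the demo); use JPS_OUT=$(jps) for real.
JPS_OUT=$(cat <<'EOF'
2101 NameNode
2245 DataNode
2433 SecondaryNameNode
2688 ResourceManager
2801 NodeManager
2950 Jps
EOF
)
missing=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$JPS_OUT" | grep -qw "$d" || missing="$missing $d"
done
if [ -z "$missing" ]; then
  echo "all five daemons running"
else
  echo "missing:$missing"
fi
```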