Prerequisite environment setup:
Upload the Hadoop release package to the /usr/local/soft/ directory with MobaXterm, then unpack it:
- cd /usr/local/soft/
- tar -zxvf hadoop-3.2.0.tar.gz
Configure the Java path in /usr/local/soft/hadoop-3.2.0/etc/hadoop/hadoop-env.sh:
vi /usr/local/soft/hadoop-3.2.0/etc/hadoop/hadoop-env.sh
Append the following at the end of the file:
- export JAVA_HOME=/usr/local/soft/jdk1.8.0_11
-
- export HDFS_NAMENODE_USER=root
- export HDFS_DATANODE_USER=root
- export HDFS_SECONDARYNAMENODE_USER=root
- export YARN_RESOURCEMANAGER_USER=root
- export YARN_NODEMANAGER_USER=root
- export HADOOP_PID_DIR=/data/hadoop/pids
- export HADOOP_LOG_DIR=/data/hadoop/logs
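HADOOP_PID_DIR and HADOOP_LOG_DIR point to /data/hadoop, which does not exist yet. A minimal sketch of pre-creating those directories (assuming everything runs as root, as configured above):
- mkdir -p /data/hadoop/pids /data/hadoop/logs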
vi /etc/profile
Add the following:
- export HADOOP_HOME=/usr/local/soft/hadoop-3.2.0
- export PATH=$PATH:$HADOOP_HOME/bin
- export PATH=$PATH:$HADOOP_HOME/sbin
- export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
- export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Apply the environment variables:
source /etc/profile
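To verify that the variables took effect, one quick check is:
- hadoop version
- echo $HADOOP_HOME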
Enter the /usr/local/soft/hadoop-3.2.0/etc/hadoop/ directory and edit core-site.xml:
- <configuration>
- <property>
- <name>fs.defaultFS</name>
- <value>hdfs://hadoop100:9000</value>
- </property>
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/data/hadoop/tmp</value>
- </property>
- <property>
- <name>dfs.webhdfs.enabled</name>
- <value>true</value>
- </property>
- </configuration>
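hadoop.tmp.dir points at /data/hadoop/tmp; a small precautionary sketch of pre-creating it (Hadoop can usually create it on its own):
- mkdir -p /data/hadoop/tmp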
In the same directory, edit hdfs-site.xml:
- <configuration>
- <property>
- <name>dfs.namenode.secondary.http-address</name>
- <value>hadoop100:50090</value>
- </property>
- <property>
- <name>dfs.replication</name>
- <value>2</value>
- </property>
- <property>
- <name>dfs.namenode.name.dir</name>
- <value>file:/data/hadoop/hdfs/name</value>
- </property>
- <property>
- <name>dfs.datanode.data.dir</name>
- <value>file:/data/hadoop/hdfs/data</value>
- </property>
- <property>
- <name>dfs.permissions.enabled</name>
- <value>false</value>
- </property>
- <property>
- <name>hadoop.proxyuser.root.hosts</name>
- <value>*</value>
- </property>
- <property>
- <name>hadoop.proxyuser.root.groups</name>
- <value>*</value>
- </property>
- </configuration>

dfs.namenode.secondary.http-address specifies the HTTP address and port of the SecondaryNameNode; in this plan, hadoop100 is designated as the SecondaryNameNode server.
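One way to confirm which value Hadoop actually picks up is hdfs getconf, for example:
- hdfs getconf -confKey dfs.namenode.secondary.http-address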
Next, edit yarn-site.xml:
- <configuration>
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
- <property>
- <name>yarn.nodemanager.localizer.address</name>
- <value>0.0.0.0:8140</value>
- </property>
- <property>
- <name>yarn.resourcemanager.hostname</name>
- <value>hadoop100</value>
- </property>
- <property>
- <name>yarn.log-aggregation-enable</name>
- <value>true</value>
- </property>
- <property>
- <name>yarn.log-aggregation.retain-seconds</name>
- <value>604800</value>
- </property>
- <property>
- <name>yarn.log.server.url</name>
- <value>http://hadoop100:19888/jobhistory/logs</value>
- </property>
- </configuration>

As planned, yarn.resourcemanager.hostname points the ResourceManager at hadoop100.
yarn.log-aggregation-enable controls whether log aggregation is enabled.
yarn.log-aggregation.retain-seconds sets how long aggregated logs are kept on HDFS (604800 seconds = 7 days).
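With log aggregation enabled, the aggregated logs of a finished application can later be read with yarn logs (the application ID below is a placeholder):
- yarn logs -applicationId application_xxxxxxxxxxxxx_xxxx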
Then edit mapred-site.xml:
- <configuration>
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
- <property>
- <name>yarn.app.mapreduce.am.env</name>
- <value>HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.2.0</value>
- </property>
- <property>
- <name>mapreduce.map.env</name>
- <value>HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.2.0</value>
- </property>
- <property>
- <name>mapreduce.reduce.env</name>
- <value>HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.2.0</value>
- </property>
-
- <property>
- <name>mapreduce.jobhistory.address</name>
- <value>hadoop100:10020</value>
- </property>
- <property>
- <name>mapreduce.jobhistory.webapp.address</name>
- <value>hadoop100:19888</value>
- </property>
- </configuration>

mapreduce.framework.name sets MapReduce jobs to run on YARN.
mapreduce.jobhistory.address places the MapReduce JobHistory server on hadoop100.
mapreduce.jobhistory.webapp.address sets the web address and port of the JobHistory server.
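Note that start-all.sh does not start the JobHistory server; in Hadoop 3 it is started separately, for example:
- mapred --daemon start historyserver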
In the same directory, edit the workers file, which specifies the hosts that run DataNodes in HDFS. Here it contains a single line:
- hadoop100
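This single entry makes hadoop100 the only DataNode. If hadoop101 and hadoop102 (mapped in the Windows hosts file below) were also meant to run DataNodes, the workers file would simply list one hostname per line:
- hadoop100
- hadoop101
- hadoop102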
Format the NameNode:
hdfs namenode -format
Strong warning: run the format command only once, right after configuration. Do not format again after the cluster has been started; doing so leaves the cluster IDs inconsistent and the worker nodes can no longer connect. If this happens, the fix is to delete the files under /data and then format again (to avoid this mistake, remember to run the format exactly once).
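A minimal sketch of that recovery, assuming the data directories configured above and that losing the existing HDFS data is acceptable:
- stop-all.sh
- rm -rf /data/hadoop/tmp /data/hadoop/hdfs
- hdfs namenode -format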
Start:
start-all.sh
Stop:
stop-all.sh
Check running processes:
jps
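With all roles on hadoop100 as configured above, jps should normally list something like the following (process IDs will differ):
- NameNode
- SecondaryNameNode
- DataNode
- ResourceManager
- NodeManager
- Jps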
On the Windows client, open the C:\Windows\System32\drivers\etc folder, edit the hosts file, and add the following entries:
- 192.168.1.100 hadoop100
- 192.168.1.101 hadoop101
- 192.168.1.102 hadoop102
NameNode web UI: http://hadoop100:9870/
YARN ResourceManager web UI: http://hadoop100:8088/
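To confirm from the command line that the DataNode has registered with the NameNode, one option is:
- hdfs dfsadmin -report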
Create a test directory on HDFS:
hadoop fs -mkdir /test
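As a further smoke test, a sketch that uploads a file and runs the bundled example job (the jar path assumes the default Hadoop 3.2.0 layout):
- hadoop fs -put /etc/profile /test
- hadoop fs -ls /test
- hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar wordcount /test /test-out
- hadoop fs -cat /test-out/part-r-00000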