1. Required environment: CentOS 7 (any minor version), three cloned and pre-configured machines, the Hadoop and JDK installation packages, and the Xftp client (all of them are required)
1. Install the VMware virtual machine software
2. Install a CentOS 7 virtual machine
3. Prepare three fully configured virtual machines
4. Build a three-node Hadoop cluster
1. Link: https://pan.baidu.com/s/1gWpQ7Dh5dgXyjKfHUYuC5Q
Extraction code: k2q8
During the setup, please keep exactly the same configuration as the author.
(The author already has VMware installed, so the installation itself is not repeated here.)
VMware installation guide: https://blog.csdn.net/Alger_/article/details/111193639
1. Configure the ISO image in the virtual machine settings
1. Configure the virtual machine's virtual network settings
2. Check the NAT default gateway, IP address, and subnet mask
3. Set the Windows VMnet8 network address
4. Configure the network settings on Linux
vim /etc/sysconfig/network-scripts/ifcfg-ens33
Add the required network settings
IPADDR=192.168.253.100    # IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.253.1     # gateway
DNS1=8.8.8.8              # DNS server
Set a static IP and bring the interface up on boot
BOOTPROTO=static
ONBOOT=yes
Restart the network service
systemctl restart network
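Taken together, the ifcfg-ens33 file on the base machine might look like the sketch below (keep whatever TYPE/NAME/DEVICE/UUID lines your installer already generated; only the lines shown here need to match):
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.253.100
NETMASK=255.255.255.0
GATEWAY=192.168.253.1
DNS1=8.8.8.8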
Install vim and common network tools
yum -y install vim
yum -y install net-tools
On each clone, edit the same file and change only the IP address:
vim /etc/sysconfig/network-scripts/ifcfg-ens33
hadoop101
IPADDR=192.168.253.101    # IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.253.1     # gateway
DNS1=8.8.8.8              # DNS server
hadoop102
IPADDR=192.168.253.102    # IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.253.1     # gateway
DNS1=8.8.8.8              # DNS server
hadoop103
IPADDR=192.168.253.103    # IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.253.1     # gateway
DNS1=8.8.8.8              # DNS server
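After editing ifcfg-ens33 on a clone, restart the network service (systemctl restart network) and run a quick sanity check such as the one below; the addresses are the ones assigned above:
ip addr show ens33            # should list the static address, e.g. 192.168.253.101
ping -c 3 192.168.253.1       # the gateway should be reachable
ping -c 3 192.168.253.102     # the other clones should be reachable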
systemctl status firewalld
If the status text is shown in green (active), the firewall is still running
If the status text is shown in black (inactive/dead), the firewall has been stopped
systemctl stop firewalld
systemctl disable firewalld
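To confirm that the firewall is really stopped and will stay off after a reboot, the following checks can be run on each clone:
systemctl status firewalld        # should report inactive (dead)
systemctl is-enabled firewalld    # should report disabled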
vim /etc/hostname
Clone 1:
hadoop101
Clone 2:
hadoop102
Clone 3:
hadoop103
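Note that a hostname edited in /etc/hostname only takes effect after a reboot; alternatively, hostnamectl applies it immediately. For example, on the first clone:
hostnamectl set-hostname hadoop101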
vim /etc/hosts
192.168.253.100 hadoop-base
192.168.253.101 hadoop101
192.168.253.102 hadoop102
192.168.253.103 hadoop103
192.168.253.104 hadoop104
192.168.253.105 hadoop105
useradd user001
passwd user001
vim /etc/sudoers
Add the following line
user001 ALL=(ALL) NOPASSWD:ALL
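Editing /etc/sudoers by hand is risky because a syntax error can break sudo entirely; as a safer variant you can validate the file afterwards, or (as root) put the rule in a drop-in file instead. The file name user001 below is just an example:
visudo -c                                                         # validate sudoers syntax
echo 'user001 ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/user001    # drop-in alternative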
On all three clones, create a bigdata directory under the root directory with two subdirectories, project and software
Change the ownership of both directories to user001 (see the sketch after this step)
chown user001:user001 project/ software/
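A minimal command sequence for this step, assuming it is run as root on each clone, might be:
mkdir -p /bigdata/project /bigdata/software
cd /bigdata
chown user001:user001 project/ software/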
On all three clones, switch to the user001 user with su
su user001
123123    # the password set for user001 (entered only if prompted)
hadoop101:
ssh-keygen
ssh-copy-id hadoop102
ssh-copy-id hadoop103
ssh-copy-id hadoop101
hadoop102:
ssh-keygen
ssh-copy-id hadoop101
ssh-copy-id hadoop103
ssh-copy-id hadoop102
hadoop103:
ssh-keygen
ssh-copy-id hadoop101
ssh-copy-id hadoop102
ssh-copy-id hadoop103
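Passwordless login can then be verified from each machine; every command below should print the remote hostname without asking for a password:
ssh hadoop101 hostname
ssh hadoop102 hostname
ssh hadoop103 hostname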
Upload the jdk and hadoop tarballs to /bigdata/software with Xftp, then extract and rename them:
cd /bigdata/software
ll    # the two uploaded archives (the jdk and hadoop tarballs) should be listed
tar -zxvf jdk-<version>.tar.gz -C /bigdata/project
tar -zxvf hadoop-<version>.tar.gz -C /bigdata/project
cd /bigdata/project
mv hadoop-<version> hadoop
mv jdk-<version> jdk
sudo vim /etc/profile.d/my_env.sh
Contents:
# JAVA_HOME
export JAVA_HOME=/bigdata/project/jdk
export PATH=$PATH:$JAVA_HOME/bin
# HADOOP_HOME
export HADOOP_HOME=/bigdata/project/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
source /etc/profile    # reload the environment variables
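A quick check on each machine confirms that the variables were picked up (the exact version strings depend on the packages you downloaded):
echo $JAVA_HOME     # should print /bigdata/project/jdk
java -version       # JDK version
hadoop version      # Hadoop version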
cd /bigdata/project/hadoop/etc/hadoop    # the configuration files edited below live in this directory
vim core-site.xml
core-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Specify the NameNode address -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop101:8020</value>
</property>
<!-- Specify the Hadoop data storage directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/bigdata/project/hadoop/data</value>
</property>
</configuration>
vim hdfs-site.xml
hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<!-- NameNode web UI address -->
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop101:9870</value>
</property>
<!-- Secondary NameNode (2NN) web UI address -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop103:9868</value>
</property>
</configuration>
vim mapred-site.xml
mapred-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Run MapReduce programs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Run the following on the first clone
vim yarn-site.xml
yarn-site.xml:
<configuration>
<!-- Use mapreduce_shuffle as the auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Specify the ResourceManager address -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop102</value>
</property>
<!-- Environment variables inherited by containers -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
Run the following on the first clone
vim workers
Replace the contents with:
hadoop101
hadoop102
hadoop103
Copy these four configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) into /bigdata/project/hadoop/etc/hadoop/ via Xftp, overwriting the existing ones
Distribute the configuration files from 101 to the 102 and 103 clones
Hint: the following commands must be run from the /bigdata/project/hadoop/etc/hadoop directory
rsync -a -v ./ hadoop102:/bigdata/project/hadoop/etc/hadoop
rsync -a -v ./ hadoop103:/bigdata/project/hadoop/etc/hadoop
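A simple way to confirm the distribution worked is to list the configuration directory on the other two machines over SSH:
ssh hadoop102 ls /bigdata/project/hadoop/etc/hadoop
ssh hadoop103 ls /bigdata/project/hadoop/etc/hadoop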
After the distribution, run the following in the /bigdata/project/hadoop directory on hadoop101
hdfs namenode -format    # format the NameNode, i.e. initialize the HDFS file system
If no errors are reported, the format succeeded
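If you want more than the absence of errors, the freshly created NameNode metadata can be inspected. With hadoop.tmp.dir set to /bigdata/project/hadoop/data as above, the default layout should look roughly like this:
ls /bigdata/project/hadoop/data/dfs/name/current
# expected files include VERSION, seen_txid and an fsimage_* pair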
Clone 101: in the Hadoop root directory, run sbin/start-dfs.sh to start HDFS
Clone 102: in the Hadoop root directory, run sbin/start-yarn.sh to start YARN
Open in a browser:
http://hadoop101:9870 (or the IP address followed by :9870) for the HDFS web UI
http://hadoop102:8088 (or the IP address followed by :8088) for the YARN web UI
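With this layout (NameNode on hadoop101, ResourceManager on hadoop102, SecondaryNameNode on hadoop103), running jps on each node should show roughly the following daemons:
jps
# hadoop101: NameNode, DataNode, NodeManager
# hadoop102: ResourceManager, DataNode, NodeManager
# hadoop103: SecondaryNameNode, DataNode, NodeManager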
Stopping the services
In the Hadoop root directory on hadoop101, run sbin/stop-dfs.sh
In the Hadoop root directory on hadoop102, run sbin/stop-yarn.sh