ZooKeeper: From Basics to High Availability (HA)

Previous: ZooKeeper basic operations

1. Overview

ZooKeeper is an open-source, distributed Apache project that provides coordination services for distributed applications.


2. How ZooKeeper Works

ZooKeeper works like a file system combined with a notification mechanism: servers register themselves and store state as znodes, clients set watches on the znodes they care about, and ZooKeeper notifies the watchers whenever the watched data or set of children changes.

3. ZooKeeper Features

- The cluster consists of one leader and multiple followers.
- The cluster keeps serving as long as more than half of its nodes are alive, which is why an odd number of servers is recommended.
- Every server keeps an identical copy of the data, so clients get the same view no matter which server they connect to.
- Update requests from a single client are applied in the order they were sent, and each update either succeeds or fails as a whole.
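ZooKeeper stays available only while a strict majority (a quorum) of the ensemble is alive. A small sketch of the arithmetic, using a hypothetical `quorum` helper:

```shell
# Smallest strict majority of an ensemble of n servers.
quorum() { echo $(( $1 / 2 + 1 )); }

# An ensemble of n servers tolerates n - quorum(n) failures:
for n in 1 3 5 6; do
  echo "n=$n: needs $(quorum "$n") alive, tolerates $(( n - $(quorum "$n") )) failure(s)"
done
```

Note that 6 servers tolerate no more failures than 5 do, which is why odd ensemble sizes are preferred.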


4. Data Structure

The ZooKeeper data model is a tree, much like a Unix file system. Each node of the tree is called a znode, is addressed by its full path, and can store a small amount of data (1 MB by default).


5. Typical Use Cases

The services ZooKeeper provides include unified naming, unified configuration management, unified cluster management, dynamic server membership (nodes going online and offline), and soft load balancing.

5.1 Unified Naming Service
In a distributed environment, applications and services often need to be reached by a single well-known name; ZooKeeper can map such a name to the set of underlying servers.

5.2 Unified Configuration Management
Configuration shared by all nodes can be written to a znode; every node watches that znode and is notified as soon as the configuration changes.

5.3 Unified Cluster Management
Each member writes its state to a znode, so the real-time state of every node in the cluster can be observed, and reacted to, from one place.

5.4 Dynamic Server Membership
Servers register ephemeral znodes when they start; when a server goes offline its znode disappears, so clients learn about servers joining and leaving in real time.

5.5 Soft Load Balancing
ZooKeeper can record how many requests each server has handled and steer new client requests to the least-loaded server.


Download
1. Official website:
https://zookeeper.apache.org/

2. Follow the Download link on the home page and choose a release from one of the mirrors.


ZooKeeper Installation

1. Local (standalone) mode deployment
1. Preparation
(1) Install the JDK
(2) Copy the ZooKeeper tarball to the Linux machine
(3) Extract it to the target directory (note: -C must be a capital C):

[root@hadoop100 zookeeper]# tar -zxvf zookeeper-3.4.5.tar.gz -C /usr/local/java/zookeeper

2. Configuration changes
(1) In /usr/local/java/zookeeper/zookeeper3.4.5/conf, rename (or copy) zoo_sample.cfg to zoo.cfg
(2) Open zoo.cfg and set the dataDir path as follows:

dataDir=/usr/local/java/zookeeper/zookeeper3.4.5/zkData

(3) Create the zkData directory under /usr/local/java/zookeeper/zookeeper3.4.5/:

[root@hadoop100 zookeeper3.4.5]# mkdir zkData
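For reference, after these edits a minimal standalone zoo.cfg would look like the following (every value except dataDir is the zoo_sample.cfg default):

```
# basic time unit in milliseconds
tickTime=2000
# ticks a follower may take to connect and sync with the leader
initLimit=10
# ticks allowed between sending a request and getting an acknowledgement
syncLimit=5
# where the snapshot (and, by default, the transaction log) is stored
dataDir=/usr/local/java/zookeeper/zookeeper3.4.5/zkData
# the port clients connect to
clientPort=2181
```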

3. Operating ZooKeeper
(1) Start ZooKeeper:

[root@hadoop100 zookeeper3.4.5]# bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop100 zookeeper3.4.5]# 


(2) Check that the process is running:

[root@hadoop100 zookeeper3.4.5]# jps
7713 QuorumPeerMain
7734 Jps
7662 NameNode
[root@hadoop100 zookeeper3.4.5]# 


(3) Check the status:

[root@hadoop100 zookeeper3.4.5]# bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf/zoo.cfg
Mode: standalone
[root@hadoop100 zookeeper3.4.5]# 


(4) Start the client:

[root@hadoop100 zookeeper3.4.5]# bin/zkCli.sh

Connecting to localhost:2181
2019-12-01 07:45:18,728 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2019-12-01 07:45:18,732 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=hadoop100
2019-12-01 07:45:18,732 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_221
2019-12-01 07:45:18,733 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2019-12-01 07:45:18,734 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/java/jdk/jdk1.8.0_221/jre
2019-12-01 07:45:18,734 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/java/zookeeper/zookeeper3.4.5/bin/../build/classes:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../build/lib/*.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../lib/netty-3.2.2.Final.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../lib/log4j-1.2.15.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../lib/jline-0.9.94.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../zookeeper-3.4.5.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../src/java/lib/*.jar:/usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf:.:/usr/local/java/jdk/jdk1.8.0_221/lib/dt.jar:/usr/local/java/jdk/jdk1.8.0_221/lib/tools.jar
2019-12-01 07:45:18,735 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-12-01 07:45:18,744 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2019-12-01 07:45:18,744 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2019-12-01 07:45:18,744 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2019-12-01 07:45:18,744 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2019-12-01 07:45:18,744 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-957.el7.x86_64
2019-12-01 07:45:18,745 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2019-12-01 07:45:18,746 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2019-12-01 07:45:18,747 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/usr/local/java/zookeeper/zookeeper3.4.5
2019-12-01 07:45:18,750 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921

(5) Quit the client:

[zk: localhost:2181(CONNECTED) 0] quit
Quitting...
2019-12-01 07:46:20,749 [myid:] - INFO  [main:ZooKeeper@684] - Session: 0x16ec06a14e60000 closed
2019-12-01 07:46:20,750 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@509] - EventThread shut down
[root@hadoop100 zookeeper3.4.5]# 


(6) Stop ZooKeeper:

[root@hadoop100 zookeeper3.4.5]# bin/zkServer.sh stop
JMX enabled by default
Using config: /usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[root@hadoop100 zookeeper3.4.5]# 

Configuration summary:

  1. Extract the tarball.
  2. Copy the sample config file in the conf folder to its real name:
    cp zoo_sample.cfg zoo.cfg
  3. Edit zoo.cfg and configure dataDir:
    dataDir=/usr/local/java/zookeeper/zookeeper3.4.5/zkData
  4. Configure the cluster machines, assigning each a distinct server id:
    server.0=hadoop100:2888:3888
    server.1=hadoop101:2888:3888
    server.2=hadoop102:2888:3888
    The 0, 1, 2 above are the server ids; 2888 is the follower-to-leader port and 3888 the leader-election port.
  5. In the zkData folder, create a myid file whose only content is this machine's server id:

[root@hadoop100 zkData]# echo 0 > myid
[root@hadoop100 zkData]# cat myid
0

  6. Configure ZooKeeper's log directory by editing bin/zkEnv.sh:

[root@hadoop100 zookeeper3.4.5]# cd bin/
[root@hadoop100 bin]# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh  zookeeper.out

[root@hadoop100 bin]# vim zkEnv.sh 

ZOO_LOG_DIR="/usr/local/java/zookeeper/zookeeper3.4.5/logs"

export JAVA_HOME=/usr/local/java/jdk/jdk1.8.0_221


Save and quit.
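The hostname-to-server-id mapping from steps 4 and 5 can be captured in a small shell helper (the function name and the mapping table are illustrative; adjust them to your own hosts):

```shell
# Map a hostname to its ZooKeeper server id, mirroring
# server.0=hadoop100, server.1=hadoop101, server.2=hadoop102.
myid_for() {
  case "$1" in
    hadoop100) echo 0 ;;
    hadoop101) echo 1 ;;
    hadoop102) echo 2 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# On each machine you would then write the id into zkData/myid, e.g.:
#   myid_for "$(hostname)" > /usr/local/java/zookeeper/zookeeper3.4.5/zkData/myid
myid_for hadoop100
```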

Distribute the installation to the other machines:

[root@hadoop100 zookeeper]#  scp -r /usr/local/java/zookeeper/zookeeper3.4.5/* root@hadoop101:/usr/local/java/zookeeper/zookeeper3.4.5/

[root@hadoop100 zookeeper]#  scp -r /usr/local/java/zookeeper/zookeeper3.4.5/* root@hadoop102:/usr/local/java/zookeeper/zookeeper3.4.5/

Update the configuration on hadoop101:

[root@hadoop101 zookeeper3.4.5]# cd zkData/
[root@hadoop101 zkData]# ls
myid  version-2
[root@hadoop101 zkData]# echo 1 > myid
[root@hadoop101 zkData]# cat myid
1
[root@hadoop101 zkData]# 


Update the configuration on hadoop102:

[root@hadoop102 ~]# cd /usr/local/java/zookeeper/zookeeper3.4.5/zkData/
[root@hadoop102 zkData]# ls
myid  version-2
[root@hadoop102 zkData]# echo 2 > myid
[root@hadoop102 zkData]# cat myid
2
[root@hadoop102 zkData]# 


Next, start ZooKeeper on every machine.

ZooKeeper on hadoop100:

[root@hadoop100 zookeeper3.4.5]# bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop100 zookeeper3.4.5]# jps
8050 QuorumPeerMain
8068 Jps
[root@hadoop100 zookeeper3.4.5]# 


ZooKeeper on hadoop101:

[root@hadoop101 zkData]# cd ..
[root@hadoop101 zookeeper3.4.5]# bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop101 zookeeper3.4.5]# jps
7568 QuorumPeerMain
7586 Jps
[root@hadoop101 zookeeper3.4.5]# 


ZooKeeper on hadoop102:

[root@hadoop102 zkData]# cd ..
[root@hadoop102 zookeeper3.4.5]# bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/java/zookeeper/zookeeper3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop102 zookeeper3.4.5]# jps
7641 QuorumPeerMain
7659 Jps
[root@hadoop102 zookeeper3.4.5]# 

With all three servers up, bin/zkServer.sh status should now report Mode: leader on one machine and Mode: follower on the other two.

Building an HA (High-Availability) Cluster

1. First, create the ha directory:

[root@hadoop100 opt]# mkdir ha

2. Switch to the regular user MissZhou:

[root@hadoop100 opt]# su MissZhou
[MissZhou@hadoop100 opt]$ 

3. Grant the regular user ownership of ha. sudo fails because MissZhou is not in the sudoers file, so the chown has to be run as root:

[MissZhou@hadoop100 opt]$ sudo  chown MissZhou:MissZhou ha
[sudo] password for MissZhou: 
Sorry, user MissZhou is not allowed to execute '/bin/chown MissZhou:MissZhou ha' as root on hadoop100.
[MissZhou@hadoop100 opt]$ su root
Password: 
[root@hadoop100 opt]# chown MissZhou:MissZhou ha
[root@hadoop100 opt]# 


Copy the /opt/hadoop/module/hadoop-2.7.2.tar.gz archive into the ha directory and extract it:

[root@hadoop100 module]# cp hadoop-2.7.2.tar.gz /opt/ha/
[root@hadoop100 module]# cd /opt/ha/

[root@hadoop100 ha]# ll
total 193028
-rw-r--r-- 1 root root 197657687 Dec  1 12:06 hadoop-2.7.2.tar.gz
[root@hadoop100 ha]# tar -zxvf hadoop-2.7.2.tar.gz 

[root@hadoop100 ha]# ls
hadoop-2.7.2  hadoop-2.7.2.tar.gz


Next, go to the configuration directory and configure core-site.xml:

[root@hadoop100 hadoop-2.7.2]# cd etc/hadoop/
[root@hadoop100 hadoop]# ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-server.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  yarn-env.cmd
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml.template    yarn-env.sh
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      slaves                      yarn-site.xml
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        ssl-client.xml.example
[root@hadoop100 hadoop]# 

[root@hadoop100 hadoop]# vim core-site.xml 


<configuration>
  <!-- Assemble the two NameNode addresses into one cluster, mycluster
       (no whitespace is allowed inside the value) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>

  <!-- Directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/ha/hadoop-2.7.2/data/tmp</value>
  </property>
</configuration>


Save and quit.

Next, configure hdfs-site.xml:

[root@hadoop100 hadoop]# vim hdfs-site.xml 

<configuration>
  <!-- Logical name of the nameservice; it must match the suffix of the
       dfs.ha.* and dfs.namenode.* keys below -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>

  <!-- The NameNodes that make up the cluster -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>

  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hadoop100:9000</value>
  </property>

  <!-- RPC address of nn2 (hadoop101 is assumed to host the second NameNode) -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hadoop101:9000</value>
  </property>

  <!-- Shared edit log on the JournalNode quorum, required for quorum-journal
       HA (8485 is the default JournalNode port) -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop100:8485;hadoop101:8485;hadoop102:8485/mycluster</value>
  </property>

  <!-- Fencing: only one NameNode may respond to clients at a time -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <!-- sshfence requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>

  <!-- Local storage directory of the JournalNodes -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/ha/hadoop-2.7.2/data/jn</value>
  </property>

  <!-- Disable permission checking -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>

  <!-- Proxy class clients use to locate the active NameNode, enabling
       automatic client-side failover -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>


Save and quit.

Next, distribute it to the other machines:

[root@hadoop100 hadoop]# scp -r /opt/ha/*  root@hadoop101:/opt/

[root@hadoop100 hadoop]# scp -r /opt/ha/*  root@hadoop102:/opt/
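The two scp invocations above can also be driven by a loop; the sketch below only prints each command (the helper name is illustrative — drop the echo inside it to actually copy):

```shell
# Build the scp command for one target host.
distribute_cmd() { echo "scp -r /opt/ha/* root@${1}:/opt/"; }

for host in hadoop101 hadoop102; do
  distribute_cmd "$host"
done
```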


Next, start the daemons from hadoop100. (start-all.sh is deprecated and ignores trailing arguments; to start only the JournalNodes, the usual command is sbin/hadoop-daemon.sh start journalnode on each of hadoop100, hadoop101 and hadoop102.)

[root@hadoop100 hadoop-2.7.2]# sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /opt/ha/hadoop-2.7.2/logs/hadoop-root-namenode-hadoop100.out
localhost: starting datanode, logging to /opt/ha/hadoop-2.7.2/logs/hadoop-root-datanode-hadoop100.out
Starting journal nodes [hadoop100 hadoop101 hadoop102]
hadoop101: starting journalnode, logging to /opt/ha/hadoop-2.7.2/logs/hadoop-root-journalnode-hadoop101.out
hadoop100: starting journalnode, logging to /opt/ha/hadoop-2.7.2/logs/hadoop-root-journalnode-hadoop100.out
hadoop102: starting journalnode, logging to /opt/ha/hadoop-2.7.2/logs/hadoop-root-journalnode-hadoop102.out

starting yarn daemons
starting resourcemanager, logging to /opt/ha/hadoop-2.7.2/logs/yarn-root-resourcemanager-hadoop100.out
localhost: starting nodemanager, logging to /opt/ha/hadoop-2.7.2/logs/yarn-root-nodemanager-hadoop100.out
[root@hadoop100 hadoop-2.7.2]# 


Next, format the NameNode:

[root@hadoop100 hadoop-2.7.2]# bin/hdfs namenode -format

Then start the NameNode (sbin/hadoop-daemon.sh start namenode). On the standby machine, run bin/hdfs namenode -bootstrapStandby once before starting its NameNode, so that it copies the formatted metadata instead of reformatting.
