On each server, choose a directory as the ZooKeeper installation directory, e.g. /opt/zookeeper. Unpack the ZooKeeper distribution and move its contents into this directory.
On each server, edit the zoo.cfg file (usually under the conf subdirectory of the installation directory) and set the configuration for each node. The key settings are:
dataDir: the data storage directory for each node; it should be a dedicated path on that server, e.g. /var/lib/zookeeper/data.
clientPort: the port on which each node accepts client connections, usually 2181.
server.X: a cluster membership entry, where X is a unique integer identifier (e.g. 1, 2, 3). The format is hostname:peerPort:leaderElectionPort, for example:
- server.1=server1.example.com:2888:3888
- server.2=server2.example.com:2888:3888
- server.3=server3.example.com:2888:3888
hostname: the host name or IP address of the corresponding server.
peerPort: the port used for communication between cluster members (conventionally 2888).
leaderElectionPort: the port used for leader election (conventionally 3888).
Make sure the zoo.cfg file on every node lists the complete set of cluster members; a consolidated example follows below.
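Putting these settings together, a minimal zoo.cfg sketch for the three-node example (tickTime, initLimit, and syncLimit are the stock values shipped in zoo_sample.cfg):
- tickTime=2000
- initLimit=10
- syncLimit=5
- dataDir=/var/lib/zookeeper/data
- clientPort=2181
- server.1=server1.example.com:2888:3888
- server.2=server2.example.com:2888:3888
- server.3=server3.example.com:2888:3888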
In the data directory of each server (the path set by dataDir), create a file named myid containing a single integer: the X from that server's server.X entry in zoo.cfg. For example, on server.1 the myid file must contain 1.
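On server.1 this is a single command (a sketch; adjust the path to match your dataDir):
- echo 1 > /var/lib/zookeeper/data/myid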
Make sure the relevant ports on all servers (client port 2181, plus the internal ports 2888 and 3888) are open in the firewall so the cluster members can communicate with each other.
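If the servers run firewalld (an assumption; substitute your distribution's firewall tooling otherwise), a sketch of opening the ports:
- sudo firewall-cmd --permanent --add-port=2181/tcp
- sudo firewall-cmd --permanent --add-port=2888/tcp
- sudo firewall-cmd --permanent --add-port=3888/tcp
- sudo firewall-cmd --reload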
On each server, start the ZooKeeper service with the following commands:
- cd /opt/zookeeper/bin
- ./zkServer.sh start
Check the startup log and confirm it contains no errors.
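Where the log lands depends on the version; with the default logback configuration of ZooKeeper 3.8 it is written under the installation's logs/ directory (a hedged sketch):
- tail -n 50 /opt/zookeeper/logs/*.out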
Connect to any one of the nodes with the ZooKeeper CLI to check the cluster state:
- ./zkCli.sh -server server1.example.com:2181
To check each node's role, run zkServer.sh status on every server and look at the Mode: field; exactly one node should report leader and the rest follower. (Note that the stat command inside zkCli.sh shows znode metadata, not the server role.) In the CLI you can also run ls / to list the znodes under the root and confirm that every node returns the same data.
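The role can also be queried over the client port with the four-letter-word commands; srvr is on the default whitelist in ZooKeeper 3.5+ (this sketch assumes nc is installed):
- echo srvr | nc server1.example.com 2181 | grep Mode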
This is the basic procedure for building a real (multi-host) ZooKeeper cluster. In practice, adjust it to your environment and requirements. Make sure every node is configured correctly, network communication between them is open, and each node can join the cluster and take part in leader election. After the setup is complete, check the cluster's health regularly to keep it running stably.
Next we demonstrate the detailed commands for this configuration, using three virtual machines node1, node2, and node3 as an example, with the installation directory /opt/apps/zookeeper.
- [zhang@node3 soft]$ ls
- apache-flume-1.11.0-bin.tar.gz apache-zookeeper-3.8.4-bin.tar.gz jdk-8u281-linux-x64.tar.gz
- apache-hive-3.1.3-bin.tar.gz hadoop-3.2.4.tar.gz
- [zhang@node3 soft]$ tar -zxvf apache-zookeeper-3.8.4-bin.tar.gz -C /opt/apps/
- # ..... output omitted
-
- # rename the directory
- [zhang@node3 soft]$ cd /opt/apps/
- [zhang@node3 apps]$ ls
- apache-zookeeper-3.8.4-bin flume hadoop-3.2.4 hive3.1 jdk1.8.0_281 temp
- [zhang@node3 apps]$ mv apache-zookeeper-3.8.4-bin/ zookeeper
- [zhang@node3 apps]$ ls
- flume hadoop-3.2.4 hive3.1 jdk1.8.0_281 temp zookeeper
For convenience on the command line, add zookeeper/bin to the PATH environment variable in ~/.bashrc; see the earlier single-node setup for the details. A minimal sketch follows below.
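A minimal ~/.bashrc sketch (the ZOOKEEPER_HOME variable name is a common convention, not something mandated by the earlier chapter):
- # ~/.bashrc (sketch; assumes the install path used in this walkthrough)
- export ZOOKEEPER_HOME=/opt/apps/zookeeper
- export PATH=$PATH:$ZOOKEEPER_HOME/bin
Run source ~/.bashrc (or log in again) for the change to take effect.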
Configure the zoo.cfg file under the ZooKeeper installation's conf directory; the configuration is identical on all three machines. The key settings:
- dataDir=/opt/apps/zookeeper/zkdata
- # the port at which the clients will connect
- clientPort=2181
- server.1=node1:2888:3888
- server.2=node2:2888:3888
- server.3=node3:2888:3888
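One note before editing: a fresh unpack ships only zoo_sample.cfg, so if conf/ has no zoo.cfg yet, copy the sample first (a hedged sketch; the listing below shows both files present afterwards):
- [zhang@node3 conf]$ cp zoo_sample.cfg zoo.cfg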
The concrete commands:
- [zhang@node3 conf]$ vim zoo.cfg
- # edit in the key settings shown above
-
- [zhang@node3 conf]$ ls
- configuration.xsl logback.xml zoo.cfg zoo_sample.cfg
- [zhang@node3 conf]$ cd ..
- [zhang@node3 zookeeper]$ ls
- bin conf docs lib LICENSE.txt NOTICE.txt README.md README_packaging.md
- # create the directory for the data and log files
- [zhang@node3 zookeeper]$ mkdir zkdata
- [zhang@node3 zookeeper]$ ls
- bin conf docs lib LICENSE.txt NOTICE.txt README.md README_packaging.md zkdata
- [zhang@node3 zookeeper]$
A myid file must be created on each of the three hosts node1, node2, and node3.
Note: the myid value must be different on every host.
- [zhang@node3 zookeeper]$ cd zkdata/
- [zhang@node3 zkdata]$ ls
- [zhang@node3 zkdata]$ echo 3 >> myid # set node3's id to 3
- # this 3 corresponds to server.3 above
- [zhang@node3 zkdata]$ ls
- myid
- [zhang@node3 zkdata]$
Repeat the same steps on the other nodes, creating myid with a different value on each.
Creating myid on node2:
- [zhang@node2 zookeeper]$ mkdir zkdata
- [zhang@node2 zookeeper]$ ls
- bin conf docs lib LICENSE.txt logs NOTICE.txt README.md README_packaging.md zkdata
- [zhang@node2 zookeeper]$ cd zkdata
- [zhang@node2 zkdata]$ echo 2 >> myid
- [zhang@node2 zkdata]$
Creating myid on node1:
- [zhang@node1 zookeeper]$ mkdir zkdata
- [zhang@node1 zookeeper]$ cd zkdata
- [zhang@node1 zkdata]$ echo 1 >> myid
- [zhang@node1 zkdata]$
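A quick way to double-check all three myid files at once (a sketch; it assumes the passwordless SSH between the nodes set up in the earlier chapters):
- [zhang@node1 ~]$ for h in node1 node2 node3; do echo -n "$h: "; ssh $h cat /opt/apps/zookeeper/zkdata/myid; done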
The configuration file is identical on every node of the cluster, so once one machine is configured, distribute the file to the others:
- [zhang@node3 zookeeper]$ cd conf
- [zhang@node3 conf]$ ls
- configuration.xsl logback.xml zoo.cfg zoo_sample.cfg
- # distribute the config file
- [zhang@node3 conf]$ xsync zoo.cfg
- ==================== node1 ====================
- sending incremental file list
- zoo.cfg
-
- sent 845 bytes received 47 bytes 1,784.00 bytes/sec
- total size is 1,270 speedup is 1.42
- ==================== node2 ====================
- sending incremental file list
- zoo.cfg
-
- sent 885 bytes received 47 bytes 621.33 bytes/sec
- total size is 1,270 speedup is 1.36
- ==================== node3 ====================
- sending incremental file list
-
- sent 59 bytes received 12 bytes 47.33 bytes/sec
- total size is 1,270 speedup is 17.89
- [zhang@node3 conf]$
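xsync is the rsync-based distribution script written in the earlier Hadoop chapters. If you skipped that part, here is a minimal sketch of the idea (the host list and rsync options are assumptions, not the original script; rsync must be installed on every node):
- #!/bin/bash
- # xsync (sketch): copy the given files to the same directory on every node
- for host in node1 node2 node3
- do
- echo ==================== $host ====================
- for file in "$@"
- do
- rsync -av "$file" "$host:$PWD/"
- done
- done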

To make starting and stopping all cluster nodes convenient, next create and configure a cluster script.
In the ~/mybin directory from earlier, create a file named zk and put in the content shown further below.
Then configure an environment variable so PATH points at the mybin directory (no need to repeat this if it was configured before).
- [zhang@node1 zkdata]$ cd # back to the home directory
- [zhang@node1 ~]$ cd mybin/
- [zhang@node1 mybin]$ ls
- jpsall xsync
- # edit zk and paste in the script content shown below
- [zhang@node1 mybin]$ vim zk
- [zhang@node1 mybin]$ cd ..
- [zhang@node1 ~]$ zk status
- -bash: /home/zhang/mybin/zk: Permission denied
- [zhang@node1 ~]$ cd mybin
- [zhang@node1 mybin]$ ll
- total 12
- -rwxrwxr-x. 1 zhang zhang 100 Mar 16 22:27 jpsall
- -rwxrwxr-x. 1 zhang zhang 645 Mar 16 23:06 xsync
- -rw-rw-r--. 1 zhang zhang 500 Apr 26 17:51 zk
- # add the execute permission
- [zhang@node1 mybin]$ chmod u+x zk
-
- ########### sync the zk script to the other nodes ######
- [zhang@node1 mybin]$ xsync zk
- # (rsync output omitted here; the later examples run zk from node3 as well)

The content of the zk file:
- #!/bin/bash
- # start/stop/query ZooKeeper on all cluster nodes over SSH
- # (relies on the passwordless SSH configured in the earlier chapters)
- case $1 in
- "start"){
- for i in node1 node2 node3
- do
- echo ---------- zookeeper $i start ------------
- ssh $i "/opt/apps/zookeeper/bin/zkServer.sh start"
- done
- };;
- "stop"){
- for i in node1 node2 node3
- do
- echo ---------- zookeeper $i stop ------------
- ssh $i "/opt/apps/zookeeper/bin/zkServer.sh stop"
- done
- };;
- "status"){
- for i in node1 node2 node3
- do
- echo ---------- zookeeper $i status ------------
- ssh $i "/opt/apps/zookeeper/bin/zkServer.sh status"
- done
- };;
- *)
- echo "Usage: zk {start|stop|status}"
- ;;
- esac

Start the cluster:
- [zhang@node1 ~]$ zk start
- ---------- zookeeper node1 start ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- ---------- zookeeper node2 start ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- ---------- zookeeper node3 start ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
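You can also confirm the Java processes with jps; every node should show a QuorumPeerMain process (output sketch, the PIDs will differ):
- [zhang@node1 ~]$ jps
- 2581 QuorumPeerMain
- 2634 Jps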
Check the status.
Note the different roles:
- [zhang@node3 ~]$ zk status
- ---------- zookeeper node1 status ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost. Client SSL: false.
- Mode: follower
- ---------- zookeeper node2 status ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost. Client SSL: false.
- Mode: leader
- ---------- zookeeper node3 status ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost. Client SSL: false.
- Mode: follower
- [zhang@node3 ~]$

Stop the cluster:
- [zhang@node3 ~]$ zk stop
- ---------- zookeeper node1 stop ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Stopping zookeeper ... STOPPED
- ---------- zookeeper node2 stop ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Stopping zookeeper ... STOPPED
- ---------- zookeeper node3 stop ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Stopping zookeeper ... STOPPED
- [zhang@node3 ~]$
Connect to ZooKeeper with zkCli.sh -server host:port:
- # node1
- [zhang@node1 mybin]$ zkCli.sh -server node1:2181
- Connecting to node1:2181
-
- # node2
- [zhang@node2 ~]$ zkCli.sh
- Connecting to localhost:2181
-
- # node3
- [zhang@node3 ~]$ zkCli.sh # defaults to port 2181 on the local machine
- Connecting to localhost:2181
Log in to ZooKeeper on each of the nodes.
Create a znode on one node, then check whether the other machines have the same node after replication.
On node1, create a znode book with the data "图书信息" (book info):
- [zk: node1:2181(CONNECTED) 0] ls /
- [zookeeper]
- [zk: node1:2181(CONNECTED) 2] create /book 图书信息
- Created /book
- [zk: node1:2181(CONNECTED) 3] get /book
- 图书信息
- [zk: node1:2181(CONNECTED) 4]
Then query it on node2: the data is replicated automatically to every node in the cluster.
- [zk: localhost:2181(CONNECTED) 3] get /book
- 图书信息
- [zk: localhost:2181(CONNECTED) 4]
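Replication of updates can be observed the same way with a watch (ZooKeeper 3.5+ CLI syntax; the prompt counters and the new value here are only illustrative):
- # on node2: read /book and leave a one-shot watch on it
- [zk: localhost:2181(CONNECTED) 5] get -w /book
- 图书信息
- # on node1: change the data
- [zk: node1:2181(CONNECTED) 5] set /book 新图书信息
- # node2's session is then notified (output sketch):
- WATCHER::
- WatchedEvent state:SyncConnected type:NodeDataChanged path:/book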
The voting-based leader election is an important part of ZooKeeper; next we verify that automatic re-election keeps the cluster highly available.
The idea: find out which node is the leader and stop it; the roles of the remaining two followers will then change through a new election.
We saw above that node2 is the leader, so stop the ZooKeeper service on node2:
- [zhang@node2 ~]$ zkServer.sh stop
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Stopping zookeeper ... STOPPED
-
- # after stopping the service, check every node's status again
- [zhang@node2 ~]$ zk status
- ---------- zookeeper node1 status ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost. Client SSL: false.
- Mode: follower
- ---------- zookeeper node2 status ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost. Client SSL: false.
- Error contacting service. It is probably not running.
- ---------- zookeeper node3 status ------------
- ZooKeeper JMX enabled by default
- Using config: /opt/apps/zookeeper/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost. Client SSL: false.
- Mode: leader
- [zhang@node2 ~]$

As shown above, after node2 stopped, node3 was promoted to the leader role.
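If node2 is started again, it rejoins the ensemble as a follower, since node3 already holds leadership (the expected result; output omitted):
- [zhang@node2 ~]$ zkServer.sh start
- [zhang@node2 ~]$ zkServer.sh status # Mode: follower is expected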
That completes the ZooKeeper environment setup series: standalone mode, pseudo-cluster, and real cluster have all been covered. Feel free to leave a comment if you have questions!