Note: JDK 8 is deprecated as of Kafka 3.0, so install JDK 11 or JDK 17 in advance.
Baidu Netdisk download for ZooKeeper and Kafka: https://pan.baidu.com/s/1ZPLvNcy6gzJ19mQls5QUNg extraction code: w4cb
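To confirm which JDK is currently active before continuing, you can run:
java -version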
ZooKeeper deployment overview
Prepare three Linux servers with the IP addresses shown below:
Node IP
node1 192.168.245.129
node2 192.168.245.130
node3 192.168.245.131
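Optionally, if you prefer to refer to the servers by hostname instead of IP, you can add them to /etc/hosts on each machine; a minimal sketch, assuming the node1-node3 names from the table above and root privileges:
cat >> /etc/hosts << 'EOF'
192.168.245.129 node1
192.168.245.130 node2
192.168.245.131 node3
EOF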
ZooKeeper cluster deployment
Perform the following steps on each of the three servers to set up and configure the ZooKeeper cluster.
1. Download the installation package on all three machines
- mkdir /usr/local/zookeeper && cd /usr/local/zookeeper
- wget http://archive.apache.org/dist/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz
2. Extract the package
- tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz
- cd apache-zookeeper-3.8.0-bin/
3. Create the data directory
mkdir -p /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas
4. Create the configuration file by copying the sample file to a new zoo.cfg
- cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf
- cp zoo_sample.cfg zoo.cfg
5. Edit the configuration file
- cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf
- vi zoo.cfg
6. Configuration file contents
- # The number of milliseconds of each tick
- tickTime=2000
- # The number of ticks that the initial
- # synchronization phase can take
- initLimit=10
- # The number of ticks that can pass between
- # sending a request and getting an acknowledgement
- syncLimit=5
- # the directory where the snapshot is stored.
- # do not use /tmp for storage, /tmp here is just
- # example sakes.
- # ZooKeeper data directory; point it at the zkdatas directory created above
- dataDir=/usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas
- # the port at which the clients will connect
- clientPort=2181
- # the maximum number of client connections.
- # increase this if you need to handle more clients
- #maxClientCnxns=60
- #
- # Be sure to read the maintenance section of the
- # administrator guide before turning on autopurge.
- #
- # https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
- #
- # The number of snapshots to retain in dataDir
- # Number of snapshots to retain; this option is already in the sample file, just uncomment it
- autopurge.snapRetainCount=3
- # Purge task interval in hours
- # Set to "0" to disable auto purge feature
- # Purge interval in hours; this option is already in the sample file, just uncomment it
- autopurge.purgeInterval=1
-
- ## Metrics Providers
- #
- # https://prometheus.io Metrics Exporter
- #metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
- #metricsProvider.httpHost=0.0.0.0
- #metricsProvider.httpPort=7000
- #metricsProvider.exportJvmInfo=true
- # Cluster server addresses with the ports used for quorum communication and leader election; append these lines at the end of the file
- server.1=192.168.245.129:2888:3888
- server.2=192.168.245.130:2888:3888
- server.3=192.168.245.131:2888:3888
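Since zoo.cfg is identical on all three nodes, you can also edit it once on node1 and copy it to the other two instead of repeating the edit; a sketch assuming root SSH access between the servers:
scp /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf/zoo.cfg root@192.168.245.130:/usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf/
scp /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf/zoo.cfg root@192.168.245.131:/usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf/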
7. Configure the myid file
The value in the myid file corresponds to the server.x=nodex:2888:3888 entries in the zoo.cfg configuration file; it identifies the local ZooKeeper node and is used during cluster startup to complete leader election.
1) Run on node1
echo 1 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
2) Run on node2
echo 2 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
3) Run on node3
echo 3 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
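To double-check the values before starting the cluster, print the file on each node; the three results should be 1, 2 and 3 respectively:
cat /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid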
8. Start the service
Starting with the first server, run the following commands on each node in turn
cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin
# start the service
sh bin/zkServer.sh start
# check the Java process
jps
9. Check the cluster status
sh bin/zkServer.sh status
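To verify that the ensemble is actually serving requests, you can connect with the bundled CLI from any node and create and read a test znode (the path /zk-test is just an example):
sh bin/zkCli.sh -server 192.168.245.129:2181
# inside the CLI shell:
create /zk-test "hello"
get /zk-test
quit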
Kafka cluster deployment
Deployment plan
The official site does not currently provide a cluster-setup document; after downloading the package you can read the README.md under config/kraft, which describes KRaft mode. Since a ZooKeeper cluster was already deployed above, this guide instead sets up the Kafka cluster in ZooKeeper mode, using version 3.3.1.
HOSTNAME IP OS
kafka01 192.168.245.129 centos7.9
kafka02 192.168.245.130 centos7.9
kafka03 192.168.245.131 centos7.9
1. Download Kafka; all three machines need the package
- mkdir /usr/local/kafka/ && cd /usr/local/kafka/
- wget https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz
2. Extract and install: once the Kafka package has been downloaded, extract it and create a log directory
- tar -zxvf kafka_2.12-3.3.1.tgz
- cd kafka_2.12-3.3.1 && mkdir logs
3. Configure server.properties: edit Kafka's configuration file, located at config/server.properties under the installation directory
4. kafka01 configuration
vi config/server.properties
1) Changes to the file
# broker ID
broker.id=1
# listener for this node
listeners=PLAINTEXT://192.168.245.129:9092
# advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.129:9092
# log directory (where Kafka stores partition data); changed from the default
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
# ZooKeeper connection string
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
5. kafka02 configuration
vi config/server.properties
1) Changes to the file
# broker ID
broker.id=2
# listener for this node
listeners=PLAINTEXT://192.168.245.130:9092
# advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.130:9092
# log directory (where Kafka stores partition data); changed from the default
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
# ZooKeeper connection string
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
6. kafka03 configuration
vi config/server.properties
1) Changes to the file
# broker ID
broker.id=3
# listener for this node
listeners=PLAINTEXT://192.168.245.131:9092
# advertised listener for this node
advertised.listeners=PLAINTEXT://192.168.245.131:9092
# log directory (where Kafka stores partition data); changed from the default
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
# ZooKeeper connection string
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
7. Start the cluster by running the following command on each node
nohup sh bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &
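After starting each broker, you can confirm the process is up and that it answers on its listener; a quick check, assuming it is run from the Kafka directory and that kafka01 is used as the bootstrap address:
# the Kafka broker process should appear
jps | grep -i kafka
# query the broker's supported API versions over the network
sh bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.245.129:9092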
8. Create a topic
sh bin/kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 --bootstrap-server 192.168.245.129:9092
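The topic above has a single partition and a single replica, so it only exercises one broker. To spread data and replicas across all three nodes, you could instead create a topic like the following (the name test-replicated is just an example):
sh bin/kafka-topics.sh --create --topic test-replicated --partitions 3 --replication-factor 3 --bootstrap-server 192.168.245.129:9092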
9. List topics
sh bin/kafka-topics.sh --list --bootstrap-server 192.168.245.129:9092
10. Describe the topic
sh bin/kafka-topics.sh --bootstrap-server 192.168.245.129:9092 --describe --topic test
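As a final end-to-end check, you can produce a few messages to the test topic in one terminal and consume them from another (press Ctrl+C to exit either tool):
# terminal 1: console producer
sh bin/kafka-console-producer.sh --bootstrap-server 192.168.245.129:9092 --topic test
# terminal 2: console consumer, reading from the beginning of the topic
sh bin/kafka-console-consumer.sh --bootstrap-server 192.168.245.129:9092 --topic test --from-beginning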