Kafka Cluster Setup (ZooKeeper Mode): kafka_2.12 Cluster Deployment

Note: as of Kafka 3.0, JDK 8 is deprecated; install JDK 11 or JDK 17 beforehand.

ZooKeeper and Kafka packages on Baidu Netdisk: https://pan.baidu.com/s/1ZPLvNcy6gzJ19mQls5QUNg (extraction code: w4cb)

ZooKeeper deployment plan

    Prepare three Linux servers with the following IP addresses:

    Node     IP
    node1    192.168.245.129
    node2    192.168.245.130
    node3    192.168.245.131

ZooKeeper cluster deployment

    Perform the following steps on each of the three servers to set up and configure the ZooKeeper cluster.

    1. Download the installation package on all three machines

  mkdir /usr/local/zookeeper && cd /usr/local/zookeeper
  wget http://archive.apache.org/dist/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz

    2. Extract the archive

  tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz
  cd apache-zookeeper-3.8.0-bin/

    3. Create the data directory

  mkdir -p /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas

    4. Create the configuration file by copying the sample

  cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf
  cp zoo_sample.cfg zoo.cfg

    5. Edit the configuration file

  cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf
  vi zoo.cfg

    6. Configuration file contents

  # The number of milliseconds of each tick
  tickTime=2000
  # The number of ticks that the initial
  # synchronization phase can take
  initLimit=10
  # The number of ticks that can pass between
  # sending a request and getting an acknowledgement
  syncLimit=5
  # the directory where the snapshot is stored.
  # do not use /tmp for storage, /tmp here is just
  # example sakes.
  # ZooKeeper data directory; change it to the path created above
  dataDir=/usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas
  # the port at which the clients will connect
  clientPort=2181
  # the maximum number of client connections.
  # increase this if you need to handle more clients
  #maxClientCnxns=60
  #
  # Be sure to read the maintenance section of the
  # administrator guide before turning on autopurge.
  #
  # https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
  #
  # The number of snapshots to retain in dataDir
  # Number of snapshots to keep; this setting is already in the sample file, just uncomment it
  autopurge.snapRetainCount=3
  # Purge task interval in hours
  # Set to "0" to disable auto purge feature
  # How often (in hours) to purge old snapshots and logs; also already in the sample file, just uncomment it
  autopurge.purgeInterval=1
  ## Metrics Providers
  #
  # https://prometheus.io Metrics Exporter
  #metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
  #metricsProvider.httpHost=0.0.0.0
  #metricsProvider.httpPort=7000
  #metricsProvider.exportJvmInfo=true
  # Cluster member addresses and the ports used for inter-node communication; append these at the end of the file
  server.1=192.168.245.129:2888:3888
  server.2=192.168.245.130:2888:3888
  server.3=192.168.245.131:2888:3888

    7. Configure the myid file

        The value in the myid file corresponds to the server.x=nodex:2888:3888 entries in the zoo.cfg configuration file. It identifies the current ZooKeeper node and is used to complete leader election when the cluster starts.

        1) Run on node1

echo 1 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid

        2) Run on node2

echo 2 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid

        3) Run on node3

echo 3 > /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
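A mismatch between myid and zoo.cfg only surfaces when the ensemble fails to form, so it is worth catching early. The helper below is a hypothetical sanity check, not part of the original steps; it assumes the paths used above and simply verifies that the node's myid value has a matching server.N entry in zoo.cfg.

```shell
# Hypothetical sanity check: confirm this node's myid has a matching
# server.N entry in zoo.cfg before starting the ensemble.
check_myid() {
  zoo_cfg=$1
  myid_file=$2
  id=$(cat "$myid_file")
  if grep -q "^server\.${id}=" "$zoo_cfg"; then
    echo "myid $id matches server.$id in $zoo_cfg"
  else
    echo "myid $id has no server.$id entry in $zoo_cfg" >&2
    return 1
  fi
}

# On a node:
# check_myid /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/conf/zoo.cfg \
#            /usr/local/zookeeper/apache-zookeeper-3.8.0-bin/zkdatas/myid
```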

    8. Start the service

Starting with the first server, run the following commands in order:
cd /usr/local/zookeeper/apache-zookeeper-3.8.0-bin
# start the service
sh bin/zkServer.sh start
# check the process
jps

    9. Check the cluster status

sh bin/zkServer.sh status
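On a healthy three-node ensemble, one node reports leader and the other two report follower. The small filter below is a hedged helper for checking this from a loop over all nodes; it assumes the "Mode:" line format printed by zkServer.sh in ZooKeeper 3.8, so adjust the pattern if your version prints something different.

```shell
# Hedged helper: extract the role line from `zkServer.sh status` output
# (assumes the "Mode: leader|follower" format of ZooKeeper 3.8).
zk_mode() {
  sed -n 's/^Mode: //p'
}

# On a live node:
# sh bin/zkServer.sh status 2>/dev/null | zk_mode    # prints leader or follower
```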

Kafka cluster deployment

    Deployment plan

This guide deploys a Kafka 3.3.1 cluster in ZooKeeper mode on the three servers below. (Kafka also ships a ZooKeeper-less KRaft mode, described in the README.md under config/kraft in the release archive, but it is not used here.)
HOSTNAME   IP                OS
kafka01    192.168.245.129   centos7.9
kafka02    192.168.245.130   centos7.9
kafka03    192.168.245.131   centos7.9

1. Download Kafka on all three machines

  mkdir /usr/local/kafka/ && cd /usr/local/kafka/
  wget https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz

2. Extract the archive and create the log directory

  tar -zxvf kafka_2.12-3.3.1.tgz
  cd kafka_2.12-3.3.1 && mkdir logs

3. Configure server.properties; the file lives at config/server.properties under the Kafka installation directory

4. kafka01 configuration

vi config/server.properties
  1) Changes to make
      # broker ID, unique per node
      broker.id=1
      # listener for this node
      listeners=PLAINTEXT://192.168.245.129:9092
      # address advertised to clients for this node
      advertised.listeners=PLAINTEXT://192.168.245.129:9092
      # Kafka data (log segment) directory, changed from the default
      log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
      # ZooKeeper connection string
      zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181

5. kafka02 configuration

vi config/server.properties
  1) Changes to make
      # broker ID, unique per node
      broker.id=2
      # listener for this node
      listeners=PLAINTEXT://192.168.245.130:9092
      # address advertised to clients for this node
      advertised.listeners=PLAINTEXT://192.168.245.130:9092
      # Kafka data (log segment) directory, changed from the default
      log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
      # ZooKeeper connection string
      zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181

6. kafka03 configuration

vi config/server.properties
  1) Changes to make
      # broker ID, unique per node
      broker.id=3
      # listener for this node
      listeners=PLAINTEXT://192.168.245.131:9092
      # address advertised to clients for this node
      advertised.listeners=PLAINTEXT://192.168.245.131:9092
      # Kafka data (log segment) directory, changed from the default
      log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
      # ZooKeeper connection string
      zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
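The three per-node configurations above differ only in broker.id and the listener IP. As a hypothetical convenience (not part of the original steps), the sketch below prints the overrides for a given broker id and IP so the three files need not be edited by hand; IDs, IPs, and paths are taken from the deployment plan above.

```shell
# Hypothetical helper: print the per-node overrides from steps 4-6
# for a given broker id and IP address.
gen_overrides() {
  id=$1
  ip=$2
  cat <<EOF
broker.id=$id
listeners=PLAINTEXT://$ip:9092
advertised.listeners=PLAINTEXT://$ip:9092
log.dirs=/usr/local/kafka/kafka_2.12-3.3.1/logs
zookeeper.connect=192.168.245.129:2181,192.168.245.130:2181,192.168.245.131:2181
EOF
}

# Example: gen_overrides 1 192.168.245.129   # overrides for kafka01
```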

7. Start the cluster by running the following command on each node

nohup sh bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &
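Because the nohup command above returns immediately, the broker may still be starting when it comes back. The sketch below is a hedged readiness check, not part of the original steps: it polls the listener port until it accepts TCP connections, assuming nc (netcat) is installed on the host.

```shell
# Hedged readiness check: poll a host:port until it accepts TCP
# connections, or give up after a number of one-second tries.
wait_for_port() {
  host=$1
  port=$2
  tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# wait_for_port 192.168.245.129 9092 && echo "kafka01 broker is up"
```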

8. Create a topic (a single-partition, single-replica topic is fine for testing; on a three-node cluster you would typically use --replication-factor 3 for real topics)

sh bin/kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 --bootstrap-server 192.168.245.129:9092

9. List topics

sh bin/kafka-topics.sh --list --bootstrap-server 192.168.245.129:9092

10. Describe a topic

sh bin/kafka-topics.sh --bootstrap-server 192.168.245.129:9092 --describe --topic test
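As a final smoke test, you can count the partitions in the describe output and round-trip one message through the console producer and consumer that ship in Kafka's bin/ directory. The filter below assumes the `--describe` output format of Kafka 3.3, where each partition is reported on its own "Partition:" line; the live-cluster commands are shown commented out.

```shell
# Hedged helper: count partitions in `kafka-topics.sh --describe` output
# (assumes one "Partition: N" line per partition, as in Kafka 3.3).
partition_count() {
  grep -c 'Partition: [0-9]'
}

# On the live cluster, from the Kafka install directory:
# sh bin/kafka-topics.sh --bootstrap-server 192.168.245.129:9092 \
#     --describe --topic test | partition_count
# echo "hello kafka" | sh bin/kafka-console-producer.sh \
#     --bootstrap-server 192.168.245.129:9092 --topic test
# sh bin/kafka-console-consumer.sh --bootstrap-server 192.168.245.129:9092 \
#     --topic test --from-beginning --max-messages 1
```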

Disclaimer: this article was contributed by a community member and does not represent the position of the wpsshop blog; copyright belongs to the original author. Source: https://www.wpsshop.cn/w/笔触狂放9/article/detail/678809