Kafka download: Apache Kafka
Environment: CentOS 7
Version used in this article: kafka_2.12-2.0.0.tgz
A Kafka cluster requires a ZooKeeper cluster underneath it, so we set up the ZooKeeper cluster first.
For JDK installation, see:
linux-安装oracleJDK替换默认openJDK_Java菜鸟小白~的博客-CSDN博客
Nodes used in this article: 192.168.160.128, 192.168.160.129, 192.168.160.130
Connectivity between the three IPs has been verified, and each VM can reach the physical host on both the internal and external networks.
For details, see:
linux安装虚拟机_Java菜鸟小白~的博客-CSDN博客
linux搭建虚拟机与本机物理机互联_Java菜鸟小白~的博客-CSDN博客
Edit the hosts file to provide hostname mappings:
vim /etc/hosts
Insert the following:
- 192.168.160.128 node1
- 192.168.160.129 node2
- 192.168.160.130 node3
This article uses node1 as the master node and node2/node3 as slave nodes. The entries above must be added on all three VMs.
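A quick way to verify the mapping is to ping each hostname from every node; each name should resolve to the IP configured above:
- ping -c 1 node1
- ping -c 1 node2
- ping -c 1 node3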
ZooKeeper version used in this article: apache-zookeeper-3.6.4-bin.tar.gz
Download: Apache ZooKeeper
tar -zxvf apache-zookeeper-3.6.4-bin.tar.gz
Create the data and log directories:
- cd /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin
- mkdir data && mkdir log
Create a myid file in the data directory and save it with the content 1:
cd data
On 192.168.160.129 and 192.168.160.130 the myid file must contain 2 and 3 respectively!
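For reference, one way to create the myid files (paths as used in this article; run the matching line on its node):
- # on 192.168.160.128
- echo 1 > /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/data/myid
- # on 192.168.160.129
- echo 2 > /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/data/myid
- # on 192.168.160.130
- echo 3 > /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/data/myid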
cd /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/conf
Rename the sample ZooKeeper configuration file:
mv zoo_sample.cfg zoo.cfg
Edit zoo.cfg:
vim zoo.cfg
Comment out the original dataDir and add the following:
- #dataDir=/tmp/zookeeper
- dataDir=/usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/data
- dataLogDir=/usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/log
- server.1=192.168.160.128:2888:3888
- server.2=192.168.160.129:2888:3888
- server.3=192.168.160.130:2888:3888
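For reference, the resulting zoo.cfg looks roughly like this; the tickTime/initLimit/syncLimit/clientPort lines are the defaults already present in zoo_sample.cfg, and in server.X=host:2888:3888 the X must match that node's myid (2888 is the quorum port, 3888 the leader-election port):
- tickTime=2000
- initLimit=10
- syncLimit=5
- clientPort=2181
- #dataDir=/tmp/zookeeper
- dataDir=/usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/data
- dataLogDir=/usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/log
- server.1=192.168.160.128:2888:3888
- server.2=192.168.160.129:2888:3888
- server.3=192.168.160.130:2888:3888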
Configure the environment variables:
vim /etc/profile
Add the following:
- export ZOOKEEPER_HOME=/usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin
- export PATH=$PATH:$ZOOKEEPER_HOME/bin
Apply the changes:
source /etc/profile
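A quick check that the variables took effect:
- echo $ZOOKEEPER_HOME
- which zkServer.sh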
Repeat the steps above on 192.168.160.129 and 192.168.160.130, making sure the myid value under data matches each node!
The following must be executed on all three nodes:
- cd /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin
-
- ./bin/zkServer.sh start
Check the status:
./bin/zkServer.sh status
After a successful start, the status output shows that 128 and 130 are followers and 129 is the leader.
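The status output looks roughly like the following (exact lines vary slightly by version); the Mode line is the part to check:
- ZooKeeper JMX enabled by default
- Using config: /usr/local/kafka/zookeeper/apache-zookeeper-3.6.4-bin/bin/../conf/zoo.cfg
- Mode: follower   (or Mode: leader on the elected leader, 129 here)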
At this point the ZooKeeper cluster is up and running.
Official Kafka download:
https://kafka.apache.org/downloads
Or download with wget:
wget http://mirrors.hust.edu.cn/apache/kafka/2.0.0/kafka_2.12-2.0.0.tgz
tar -xzvf kafka_2.12-2.0.0.tgz
cd /usr/local/kafka/kafka_2.12-2.0.0
Create a log directory:
mkdir log
Edit the configuration file:
- cd /usr/local/kafka/kafka_2.12-2.0.0/config
- vim server.properties
Settings to change:
Node 128:
- broker.id=1
-
- listeners=PLAINTEXT://:9092
- advertised.listeners=PLAINTEXT://192.168.160.128:9092
-
- log.dirs=/usr/local/kafka/kafka_2.12-2.0.0/log
-
- # default number of partitions for new topics; keep it consistent with the number of brokers
- num.partitions=3
-
- # ZooKeeper cluster addresses and ports:
- zookeeper.connect=192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181
-
Node 129:
- broker.id=2
-
- listeners=PLAINTEXT://:9092
- advertised.listeners=PLAINTEXT://192.168.160.129:9092
-
- log.dirs=/usr/local/kafka/kafka_2.12-2.0.0/log
-
- # default number of partitions for new topics; keep it consistent with the number of brokers
- num.partitions=3
-
- # ZooKeeper cluster addresses and ports:
- zookeeper.connect=192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181
-
Node 130:
- broker.id=3
-
- listeners=PLAINTEXT://:9092
- advertised.listeners=PLAINTEXT://192.168.160.130:9092
-
- log.dirs=/usr/local/kafka/kafka_2.12-2.0.0/log
-
- # default number of partitions for new topics; keep it consistent with the number of brokers
- num.partitions=3
-
- # ZooKeeper cluster addresses and ports:
- zookeeper.connect=192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181
-
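Since the three configurations differ only in broker.id and advertised.listeners, a small sed sketch like the one below can patch the default server.properties instead of editing it by hand. This is only an illustration, not part of the original steps; adjust ID and IP per node:
- # example values for node 128; use ID=2 / IP=192.168.160.129 and ID=3 / IP=192.168.160.130 on the other nodes
- ID=1
- IP=192.168.160.128
- cd /usr/local/kafka/kafka_2.12-2.0.0/config
- sed -i "s/^broker.id=.*/broker.id=${ID}/" server.properties
- sed -i "s|^#*listeners=PLAINTEXT.*|listeners=PLAINTEXT://:9092|" server.properties
- sed -i "s|^#*advertised.listeners=.*|advertised.listeners=PLAINTEXT://${IP}:9092|" server.properties
- sed -i "s|^log.dirs=.*|log.dirs=/usr/local/kafka/kafka_2.12-2.0.0/log|" server.properties
- sed -i "s/^num.partitions=.*/num.partitions=3/" server.properties
- sed -i "s/^zookeeper.connect=.*/zookeeper.connect=192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181/" server.properties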
- Start Kafka:
- bin/kafka-server-start.sh config/server.properties
- Start Kafka as a daemon (background process):
- bin/kafka-server-start.sh -daemon config/server.properties
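To confirm the broker actually started, one option is to check the process, the listening port, and the server log (jps comes with the JDK, ss with CentOS 7's iproute package):
- jps | grep -i kafka
- ss -lntp | grep 9092
- tail -n 20 /usr/local/kafka/kafka_2.12-2.0.0/logs/server.log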
Stop Kafka:
bin/kafka-server-stop.sh
After starting, each of the three nodes (128, 129, 130) shows a running Kafka broker.
Create a topic on node 128:
- bin/kafka-topics.sh --create --zookeeper 192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181 --replication-factor 1 --partitions 1 --topic test
-
- --replication-factor: number of replicas
- --partitions: number of partitions
- --topic: topic name
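To check how the partitions and replicas were placed, the same tool's describe option can be used, e.g.:
- bin/kafka-topics.sh --describe --zookeeper 192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181 --topic test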
List topics:
bin/kafka-topics.sh --zookeeper 192.168.160.128:2181,192.168.160.129:2181,192.168.160.130:2181 --list
Running the list command on nodes 128, 129, and 130 shows the test topic on all three.
- Create a consumer group
- bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.128:9092 --topic test --group consumer-test
- List consumer groups
- bin/kafka-consumer-groups.sh --bootstrap-server 192.168.160.128:9092 --list
- Consume data
- bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.128:9092 --topic test --from-beginning
By changing the IP, the same consumer group can be seen on the other nodes.
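The group's offsets and lag can be inspected from any node, for example:
- bin/kafka-consumer-groups.sh --bootstrap-server 192.168.160.128:9092 --describe --group consumer-test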
Produce data on node 128:
bin/kafka-console-producer.sh --broker-list 192.168.160.128:9092 --topic test
Consume the data on nodes 129 and 130 (run the console consumer command above on each).
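A minimal end-to-end check using the commands above: type a few lines into the producer on 128 and watch them appear in the consumers on 129/130 (the > prompt is printed by the console producer):
- # on 128
- bin/kafka-console-producer.sh --broker-list 192.168.160.128:9092 --topic test
- >hello
- >kafka cluster ok
- # on 129 and 130, the same lines show up
- bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.129:9092 --topic test --from-beginning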
At this point the whole cluster setup is complete.