Create a directory to hold the Docker Compose YAML file for deploying the Kafka cluster:
mkdir -p /root/composefile/kafka/
Write the YAML file:
vim /root/composefile/kafka/kafka.yaml
The content is as follows:
version: '3'
networks:
  kafka-networks:
    driver: bridge
services:
  kafka1:
    image: wurstmeister/kafka
    container_name: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
  kafka2:
    image: wurstmeister/kafka
    container_name: kafka2
    ports:
      - "9093:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9093
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
  kafka3:
    image: wurstmeister/kafka
    container_name: kafka3
    ports:
      - "9094:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9094
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
192.168.1.9 is the host machine's IP address, and 192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003 is the ZooKeeper cluster that has already been deployed. KAFKA_BROKER_ID distinguishes the nodes within the cluster.
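Because each broker advertises a distinct host port (9092, 9093, and 9094), a client can bootstrap against any or all of them. A minimal sketch of the client-side configuration, assuming the same host IP as above (this only builds a java.util.Properties object and makes no broker connection):

```java
import java.util.Properties;

public class ClientConfigSketch {

    // Build the bootstrap configuration a Kafka client would use against this cluster.
    // The client only needs one reachable broker from this list to discover the rest.
    static Properties bootstrapProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers",
                "192.168.1.9:9092,192.168.1.9:9093,192.168.1.9:9094");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(bootstrapProps().getProperty("bootstrap.servers"));
    }
}
```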
Deploy the Kafka cluster:
docker compose -f /root/composefile/kafka/kafka.yaml up -d
[+] Running 8/8
⠿ kafka3 Pulled 68.6s
⠿ 540db60ca938 Pull complete 6.2s
⠿ f0698009749d Pull complete 25.2s
⠿ d67ee08425e3 Pull complete 25.3s
⠿ 1a56bfced4ac Pull complete 62.7s
⠿ dccb9e5a402a Pull complete 62.8s
⠿ kafka1 Pulled 68.7s
⠿ kafka2 Pulled 68.6s
[+] Running 3/3
⠿ Container kafka3 Started 1.9s
⠿ Container kafka1 Started 1.9s
⠿ Container kafka2 Started 2.0s
Check the container status:
docker compose ls
Both the Kafka cluster and the ZooKeeper cluster are running:
NAME STATUS
kafka running(3)
zookeeper running(3)
The Kafka cluster's information has been registered with the ZooKeeper cluster:
[zk: 192.168.1.9:9001(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: 192.168.1.9:9001(CONNECTED) 1] ls -R /brokers
/brokers
/brokers/ids
/brokers/seqid
/brokers/topics
/brokers/ids/1
/brokers/ids/2
/brokers/ids/3
[zk: 192.168.1.9:9001(CONNECTED) 2] get /brokers/ids/1
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9092"],"jmx_port":-1,"port":9092,"host":"192.168.1.9","version":5,"timestamp":"1644751714958"}
[zk: 192.168.1.9:9001(CONNECTED) 3] get /brokers/ids/2
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9093"],"jmx_port":-1,"port":9093,"host":"192.168.1.9","version":5,"timestamp":"1644751714877"}
[zk: 192.168.1.9:9001(CONNECTED) 4] get /brokers/ids/3
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9094"],"jmx_port":-1,"port":9094,"host":"192.168.1.9","version":5,"timestamp":"1644751714887"}
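The endpoints field in each broker znode is the address clients ultimately connect to, which is why KAFKA_ADVERTISED_LISTENERS must be reachable from outside the containers. As a rough illustration (parsing the JSON shown above with a plain regex, not with Kafka's own deserializer), extracting the advertised endpoint might look like this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BrokerEndpoint {

    // Pull the advertised PLAINTEXT endpoint out of a broker znode's JSON payload.
    // This is a simplified sketch; a real client would use a proper JSON parser.
    static String extractEndpoint(String brokerJson) {
        Matcher m = Pattern.compile("PLAINTEXT://([^\"\\]]+)").matcher(brokerJson);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String json = "{\"features\":{},"
                + "\"endpoints\":[\"PLAINTEXT://192.168.1.9:9092\"],"
                + "\"port\":9092,\"host\":\"192.168.1.9\",\"version\":5}";
        System.out.println(extractEndpoint(json)); // prints 192.168.1.9:9092
    }
}
```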
Let's test it with code. The project's pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.kaven</groupId>
    <artifactId>kafka</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.0.0</version>
        </dependency>
    </dependencies>
</project>
Test code:
package com.kaven.kafka.admin;

import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.KafkaFuture;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

public class Admin {

    // Create an AdminClient instance from the Kafka server address and request timeout
    private static final AdminClient adminClient = Admin.getAdminClient("192.168.1.9:9092", "40000");

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Admin admin = new Admin();
        // Create a topic named new-topic with 2 partitions and a replication factor of 1
        admin.createTopic("new-topic", 2, (short) 1);
    }

    public static AdminClient getAdminClient(String address, String requestTimeoutMS) {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, address);
        properties.setProperty(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, requestTimeoutMS);
        return AdminClient.create(properties);
    }

    public void createTopic(String name, int numPartitions, short replicationFactor) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        CreateTopicsResult topics = adminClient.createTopics(
                Collections.singleton(new NewTopic(name, numPartitions, replicationFactor))
        );
        Map<String, KafkaFuture<Void>> values = topics.values();
        values.forEach((name__, future) -> {
            future.whenComplete((a, throwable) -> {
                if (throwable != null) {
                    System.out.println(throwable.getMessage());
                }
                System.out.println(name__);
                latch.countDown();
            });
        });
        latch.await();
    }
}
Output:
new-topic
The newly created topic can be queried through the ZooKeeper cluster:
[zk: 192.168.1.9:9001(CONNECTED) 9] ls -R /brokers/topics
/brokers/topics
/brokers/topics/new-topic
/brokers/topics/new-topic/partitions
/brokers/topics/new-topic/partitions/0
/brokers/topics/new-topic/partitions/1
/brokers/topics/new-topic/partitions/0/state
/brokers/topics/new-topic/partitions/1/state
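The two partition znodes above correspond to the numPartitions of 2 passed to createTopic. As a simplified sketch of how keyed records map onto those partitions (Kafka's real default partitioner hashes the serialized key bytes with murmur2; String.hashCode is used here purely for illustration):

```java
public class PartitionSketch {

    // Simplified stand-in for Kafka's default partitioner: hash the key,
    // mask off the sign bit, then take it modulo the partition count.
    // Illustrative only; Kafka actually uses murmur2 over the key bytes.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // With new-topic's 2 partitions, every record with the same key
        // lands on the same partition, which preserves per-key ordering.
        System.out.println(partitionFor("order-42", 2));
        System.out.println(partitionFor("order-42", 2));
    }
}
```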
That's all for deploying a Kafka cluster with Docker Compose. If anything here is wrong, or you see things differently, feel free to add a comment.