
Kafka: Deploying a Kafka Cluster with Docker Compose

Create a directory to hold the Docker Compose YAML file for the Kafka cluster:

mkdir -p /root/composefile/kafka/

Open the YAML file for editing:

vim /root/composefile/kafka/kafka.yaml 

Its contents are as follows:

version: '3'
networks:
  kafka-networks:
    driver: bridge
services:
  kafka1:
    image: wurstmeister/kafka
    container_name: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
  kafka2:
    image: wurstmeister/kafka
    container_name: kafka2
    ports:
      - "9093:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9093
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
  kafka3:
    image: wurstmeister/kafka
    container_name: kafka3
    ports:
      - "9094:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9094
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks

192.168.1.9 is the host machine's IP address, and 192.168.1.9:9001, 192.168.1.9:9002, and 192.168.1.9:9003 are the nodes of an already-deployed ZooKeeper cluster.

KAFKA_BROKER_ID uniquely identifies each node in the cluster, so every broker must be given a different value.

Deploy the Kafka cluster:

docker compose -f /root/composefile/kafka/kafka.yaml up -d
[+] Running 8/8
 ⠿ kafka3 Pulled                                                                                                                                                                                           68.6s
   ⠿ 540db60ca938 Pull complete                                                                                                                                                                             6.2s
   ⠿ f0698009749d Pull complete                                                                                                                                                                            25.2s
   ⠿ d67ee08425e3 Pull complete                                                                                                                                                                            25.3s
   ⠿ 1a56bfced4ac Pull complete                                                                                                                                                                            62.7s
   ⠿ dccb9e5a402a Pull complete                                                                                                                                                                            62.8s
 ⠿ kafka1 Pulled                                                                                                                                                                                           68.7s
 ⠿ kafka2 Pulled                                                                                                                                                                                           68.6s
[+] Running 3/3
 ⠿ Container kafka3  Started                                                                                                                                                                                1.9s
 ⠿ Container kafka1  Started                                                                                                                                                                                1.9s
 ⠿ Container kafka2  Started                                                                                                                                                                                2.0s

Check the container status:

docker compose ls

Both the Kafka cluster and the ZooKeeper cluster are running:

NAME                STATUS
kafka               running(3)
zookeeper           running(3)

The Kafka brokers have registered themselves with the ZooKeeper cluster:

[zk: 192.168.1.9:9001(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: 192.168.1.9:9001(CONNECTED) 1] ls -R /brokers 
/brokers
/brokers/ids
/brokers/seqid
/brokers/topics
/brokers/ids/1
/brokers/ids/2
/brokers/ids/3
[zk: 192.168.1.9:9001(CONNECTED) 2] get /brokers/ids/1
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9092"],"jmx_port":-1,"port":9092,"host":"192.168.1.9","version":5,"timestamp":"1644751714958"}
[zk: 192.168.1.9:9001(CONNECTED) 3] get /brokers/ids/2
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9093"],"jmx_port":-1,"port":9093,"host":"192.168.1.9","version":5,"timestamp":"1644751714877"}
[zk: 192.168.1.9:9001(CONNECTED) 4] get /brokers/ids/3
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9094"],"jmx_port":-1,"port":9094,"host":"192.168.1.9","version":5,"timestamp":"1644751714887"}
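Besides the zkCli checks above, the broker registrations can also be verified from the client side with `AdminClient.describeCluster()`. This is a minimal sketch, not from the original post, assuming the cluster deployed above is reachable at 192.168.1.9:9092 (adjust the address to your host):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.common.Node;

import java.util.Collection;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class DescribeCluster {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // Assumed address: the host IP from the compose file above
        props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.9:9092");
        try (AdminClient client = AdminClient.create(props)) {
            DescribeClusterResult result = client.describeCluster();
            // Blocks until the cluster metadata is fetched from the bootstrap broker
            Collection<Node> nodes = result.nodes().get();
            nodes.forEach(node ->
                    System.out.println(node.id() + " -> " + node.host() + ":" + node.port()));
        }
    }
}
```

With the three brokers above, this should list broker ids 1, 2, and 3 on ports 9092 through 9094, matching the ZooKeeper entries.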

Let's test it with some code. The project's pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.kaven</groupId>
    <artifactId>kafka</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.0.0</version>
        </dependency>
    </dependencies>
</project>

Test code:

package com.kaven.kafka.admin;

import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.KafkaFuture;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

public class Admin {

    // Create an AdminClient instance from the Kafka broker address and request timeout
    private static final AdminClient adminClient = Admin.getAdminClient("192.168.1.9:9092", "40000");

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Admin admin = new Admin();
        // Create a topic named new-topic with 2 partitions and a replication factor of 1
        admin.createTopic("new-topic", 2, (short) 1);
    }

    public static AdminClient getAdminClient(String address, String requestTimeoutMS) {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, address);
        properties.setProperty(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, requestTimeoutMS);
        return AdminClient.create(properties);
    }

    public void createTopic(String name, int numPartitions, short replicationFactor) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        CreateTopicsResult topics = adminClient.createTopics(
                Collections.singleton(new NewTopic(name, numPartitions, replicationFactor))
        );
        Map<String, KafkaFuture<Void>> values = topics.values();
        values.forEach((name__, future) -> {
            future.whenComplete((a, throwable) -> {
                if(throwable != null) {
                    System.out.println(throwable.getMessage());
                }
                System.out.println(name__);
                latch.countDown();
            });
        });
        latch.await();
    }
}
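Creating the topic only exercises the metadata path; a quick produce/consume round trip verifies the data path as well. The following sketch is not from the original post; it assumes the three brokers above and the new-topic just created:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class RoundTrip {
    // Assumed addresses: the three brokers advertised in the compose file above
    private static final String BOOTSTRAP =
            "192.168.1.9:9092,192.168.1.9:9093,192.168.1.9:9094";

    public static void main(String[] args) {
        // Produce one record to new-topic
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("new-topic", "key", "hello kafka"));
        }

        // Consume it back from the beginning of the topic
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "round-trip-group");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singleton("new-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            records.forEach(r -> System.out.println(r.key() + " = " + r.value()));
        }
    }
}
```

If the cluster is healthy, the consumer prints back the record the producer just wrote. Note that KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR is set to 1 above, so the consumer group still works even though the offsets topic is not replicated.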

Output:

new-topic

The newly created topic can be queried through the ZooKeeper cluster:

[zk: 192.168.1.9:9001(CONNECTED) 9] ls -R /brokers/topics 
/brokers/topics
/brokers/topics/new-topic
/brokers/topics/new-topic/partitions
/brokers/topics/new-topic/partitions/0
/brokers/topics/new-topic/partitions/1
/brokers/topics/new-topic/partitions/0/state
/brokers/topics/new-topic/partitions/1/state
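The same check works from the client side without touching ZooKeeper, via `AdminClient.listTopics()`. A short sketch under the same assumed broker address as before:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;

public class ListTopics {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // Assumed address: the host IP from the compose file above
        props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.9:9092");
        try (AdminClient client = AdminClient.create(props)) {
            // names() resolves to the set of topic names visible to the client
            Set<String> names = client.listTopics().names().get();
            // Expect new-topic to appear in the listing
            System.out.println(names);
        }
    }
}
```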

That's it for deploying a Kafka cluster with Docker Compose. If I've gotten anything wrong, or you see things differently, feel free to add a comment.
