
Kafka Cluster Setup and SASL Configuration

1 Install the JDK (required by Kafka). Kafka needs a Java runtime; a quick check is shown below.
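
A minimal way to check for (or install) a JDK, assuming a CentOS/RHEL-style system; the yum package name is only an example, and later steps assume a JDK installed under /usr/java/jdk1.8.0_131:

  java -version
  # if no JDK is found, install one (example for CentOS/RHEL; adjust JAVA_HOME in step 6 to match)
  yum install -y java-1.8.0-openjdk-devel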

2 Download Kafka

  cd /usr/local
  # download the Kafka release
  wget https://archive.apache.org/dist/kafka/3.1.0/kafka_2.13-3.1.0.tgz

3 Extract

  # extract the Kafka archive
  tar -zxvf kafka_2.13-3.1.0.tgz
  mv kafka_2.13-3.1.0 kafka

4 Edit the broker configuration (config/server.properties)

  listeners=SASL_PLAINTEXT://0.0.0.0:9092
  security.inter.broker.protocol=SASL_PLAINTEXT
  sasl.mechanism.inter.broker.protocol=PLAIN
  sasl.enabled.mechanisms=PLAIN
  allow.everyone.if.no.acl.found=false
  # superuser with full privileges
  super.users=User:admin
  advertised.listeners=SASL_PLAINTEXT://<public IP>:9092
  log.dirs=/tmp/kafka-logs
  zookeeper.connect=172.19.115.100:2181,172.19.115.99:2181,172.19.115.98:2181
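
The default server.properties also contains a broker.id entry; it must be unique on every broker in the cluster, and advertised.listeners must point at each node's own address. A minimal sketch of the per-node differences (the mapping of IDs to hosts here is only an example):

  # on the first node (e.g. 172.19.115.100)
  broker.id=0
  # on the other two nodes use broker.id=1 and broker.id=2,
  # each with advertised.listeners set to its own public IP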

5 Distribute the Kafka installation directory

  # copy the Kafka installation to the other cluster nodes
  scp -r /usr/local/kafka/ <private IP of node 2>:/usr/local
  scp -r /usr/local/kafka/ <private IP of node 3>:/usr/local

6 Configure the environment

  # set up the Java environment
  vim /etc/profile
  # add the following (note: use your own Java installation path)
  export JAVA_HOME=/usr/java/jdk1.8.0_131
  export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
  export PATH=$PATH:$JAVA_HOME/bin
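
The new variables only apply to new shells; to load and verify them in the current session:

  source /etc/profile
  java -version
  echo $JAVA_HOME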

7 Configure ZooKeeper (config/zookeeper.properties)

  dataDir=/usr/local/kafka/zkdata
  # the port at which the clients will connect
  clientPort=2181
  # disable the per-ip limit on the number of connections since this is a non-production config
  maxClientCnxns=0
  # Disable the adminserver by default to avoid port conflicts.
  # Set the port to something non-conflicting if choosing to enable this
  admin.enableServer=false
  # admin.serverPort=8080
  tickTime=2000
  initLimit=10
  syncLimit=5
  server.0=172.19.115.100:2888:3888
  server.1=172.19.115.98:2888:3888
  server.2=172.19.115.99:2888:3888
  authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
  requireClientAuthScheme=sasl
  jaasLoginRenew=3600000
  #zookeeper.sasl.client=true

Change dataDir=/tmp/zookeeper to dataDir=/usr/local/kafka/zkdata.

Create the zkdata directory under the kafka directory: mkdir zkdata

Directories under /tmp are periodically cleaned by the system, so be sure to change this path.

8 Create the /tmp/kafka-logs directory (this matches log.dirs in server.properties)

9 Under zkdata/, create a myid file containing this node's number (0 here); on the other nodes write 1 and 2 respectively, matching the server.N entries in zookeeper.properties. Example commands for steps 8 and 9 are shown below.
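
A short sketch of steps 8 and 9 on the first node, using the paths from this article (write 1 or 2 into myid on the other nodes):

  mkdir -p /tmp/kafka-logs
  mkdir -p /usr/local/kafka/zkdata
  # this node is server.0 in zookeeper.properties
  echo 0 > /usr/local/kafka/zkdata/myid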

10 Create kafka_server_jaas.conf under kafka/config/

The KafkaServer section defines the users allowed to access Kafka; the user_admin entry must match the username and password above.

The Client section defines the username and password used to talk to ZooKeeper; they must match those defined in zoo_jaas.conf below.

  KafkaServer {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="admin"
      user_admin="admin"
      user_producer="producer@123"
      user_consumer="consumer@123";
  };

  Client {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="admin";
  };

11 Configure consumer.properties

  # see org.apache.kafka.clients.consumer.ConsumerConfig for more details
  # list of brokers used for bootstrapping knowledge about the rest of the cluster
  # format: host1:port1,host2:port2 ...
  bootstrap.servers=localhost:9092
  # consumer group id
  group.id=test-consumer-group
  # username and password correspond to those defined in kafka_server_jaas.conf
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="consumer" password="consumer@123";
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  # What to do when there is no initial offset in Kafka or if the current
  # offset does not exist any more on the server: latest, earliest, none
  #auto.offset.reset=

12 Configure producer.properties

  # see org.apache.kafka.clients.producer.ProducerConfig for more details
  ############################# Producer Basics #############################
  # list of brokers used for bootstrapping knowledge about the rest of the cluster
  # format: host1:port1,host2:port2 ...
  bootstrap.servers=localhost:9092
  # specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
  compression.type=none
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="producer" password="producer@123";
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  # name of the partitioner class for partitioning events; default partition spreads data randomly
  #partitioner.class=
  # the maximum amount of time the client will wait for the response of a request
  #request.timeout.ms=
  # how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
  #max.block.ms=
  # the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
  #linger.ms=
  # the maximum size of a request in bytes
  #max.request.size=
  # the default batch size in bytes when batching multiple records sent to a partition
  #batch.size=
  # the total bytes of memory the producer can use to buffer records waiting to be sent to the server
  #buffer.memory=

13 kafka_consumer_jaas.conf

  KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="consumer"
      password="consumer@123";
  };

14 kafka_producer_jaas.conf

  KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="producer"
      password="producer@123";
  };
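
The two files above use the KafkaClient context name, which is what a Kafka client looks up when it is handed a static JAAS file via java.security.auth.login.config. They are optional here: the console scripts in step 17 point at kafka_server_jaas.conf, and the sasl.jaas.config lines in consumer.properties and producer.properties take precedence over any static JAAS file. If you prefer the static files, point KAFKA_OPTS at them instead, for example:

  # in kafka-console-producer.sh, instead of kafka_server_jaas.conf
  export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_producer_jaas.conf"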

15 zoo_jaas.conf — the credentials ZooKeeper uses to authenticate Kafka; the user_admin entry must match the username and password.

  Server {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="admin"
      user_admin="admin";
  };

16 sasl.properties (client settings for the admin CLI tools, passed via --command-config in step 18)

  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin";

17 Modify the scripts under bin/. In each script, add the export KAFKA_OPTS line shown below before the final exec line so the JAAS file is loaded.

  # zookeeper-server-start.sh
  export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/zoo_jaas.conf"
  exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"
  # kafka-server-start.sh
  export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
  exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
  # kafka-console-consumer.sh
  export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
  exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
  # kafka-console-producer.sh
  export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
  exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
  # kafka-topics.sh
  export KAFKA_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
  exec $(dirname $0)/kafka-run-class.sh kafka.admin.TopicCommand "$@"

18 Kafka command reference

  # change to the installation directory
  cd /usr/local/kafka/
  # start ZooKeeper
  ./bin/zookeeper-server-start.sh ./config/zookeeper.properties
  # start ZooKeeper as a daemon
  ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
  # start Kafka
  ./bin/kafka-server-start.sh ./config/server.properties
  # start Kafka as a daemon
  ./bin/kafka-server-start.sh -daemon ./config/server.properties
  # stop Kafka
  ./bin/kafka-server-stop.sh
  # stop ZooKeeper
  ./bin/zookeeper-server-stop.sh
  # start order: ZooKeeper first, then Kafka
  # stop order: Kafka first, then ZooKeeper
  # check running services
  jps
  # force-kill a service
  kill -s KILL <pid>
  # list topics
  ./bin/kafka-topics.sh --list --bootstrap-server 172.19.115.100:9092 --command-config ./config/sasl.properties
  # describe topics
  ./bin/kafka-topics.sh --describe --bootstrap-server 172.19.115.100:9092 --command-config ./config/sasl.properties
  # create a topic with 3 partitions and 3 replicas
  ./bin/kafka-topics.sh --create --bootstrap-server 172.19.115.100:9092 --replication-factor 3 --partitions 3 --topic chint02 --command-config ./config/sasl.properties
  # produce messages
  ./bin/kafka-console-producer.sh --broker-list 172.19.115.100:9092 --topic chint01 --producer.config ./config/producer.properties
  # consume messages
  ./bin/kafka-console-consumer.sh --bootstrap-server 172.19.115.100:9092 --topic chint01 --group test-consumer-group --consumer.config ./config/consumer.properties
  # list consumer groups
  ./bin/kafka-consumer-groups.sh --bootstrap-server 172.19.115.100:9092 --list --command-config ./config/sasl.properties
  # describe a consumer group
  ./bin/kafka-consumer-groups.sh --bootstrap-server 172.19.115.100:9092 --group test-consumer-group --describe --command-config ./config/sasl.properties
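
Because the broker sets allow.everyone.if.no.acl.found=false, only the admin superuser is authorized until ACLs are granted. A sketch of granting the producer and consumer users access to the chint01 topic used above (topic, group, and user names follow the examples in this article; depending on client settings the producer may also need idempotent-write permission on the cluster):

  # allow user "producer" to write to topic chint01
  ./bin/kafka-acls.sh --bootstrap-server 172.19.115.100:9092 --command-config ./config/sasl.properties --add --allow-principal User:producer --operation Write --topic chint01
  # allow user "consumer" to read topic chint01 as part of group test-consumer-group
  ./bin/kafka-acls.sh --bootstrap-server 172.19.115.100:9092 --command-config ./config/sasl.properties --add --allow-principal User:consumer --operation Read --topic chint01 --group test-consumer-group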
