
Kafka cluster startup error: failed authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512

I. Problem Description


During a deployment, a frontline team reported that Kafka failed to start with the following error:

ERROR [Controller id=2, targetBrokerId=2] Connection to node 2 (5-hb-217-BClinux/172.18.1.217:8074) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
Environment: standalone ZooKeeper 3.8.2; Kafka 2.13-3.5.1.

Resources: Authentication using SASL/SCRAM; ZooKeeper Authentication; ZooKeeper Deprecation

II. Analysis, Debugging and Resolution

1) The error shows the broker explicitly failing authentication. The likely causes are:

1. Wrong username or password — ruled out after checking with the site;
2. ZooKeeper holds no credentials for the account, so the broker cannot authenticate against the ZooKeeper endpoint and the connection fails;
3. A problem with support for the SCRAM-SHA-512 authentication mechanism;
4. A listener configuration problem on the Kafka controller and broker instances, i.e. the control plane and data plane are not separated.
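Cause 2 can be checked directly: Kafka's default SASL/SCRAM implementation stores credentials in ZooKeeper under `/config/users/<name>`. A quick inspection sketch (the address is illustrative; adjust to your ZooKeeper endpoint):

```shell
# Look up the admin user's SCRAM credentials in ZooKeeper.
# A missing node (or one without SCRAM-SHA-512 data) means the
# credentials were never created, matching cause 2 above.
bin/zookeeper-shell.sh localhost:2181 get /config/users/admin
```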

2) Check the Kafka configuration file; it was modified as follows:

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# ACL authorizer class
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# Make admin a super user in this example
super.users=User:admin
# Per-listener JAAS config: the key must reference a listener name defined
# below (repeat for the other SASL listeners as needed); note the trailing semicolon.
listener.name.internal.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="123456";
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
listeners=INTERNAL://172.18.1.217:8073,EXTERNAL://172.31.77.237:8072,CONTROLLER://172.18.1.217:8074
inter.broker.listener.name=INTERNAL
control.plane.listener.name=CONTROLLER

After these changes, restarting Kafka brought no obvious improvement in the error.

3) Suspecting a problem with the metadata stored in ZooKeeper, we consulted the official documentation:

The SCRAM (Salted Challenge Response Authentication Mechanism) implementation in Kafka uses Zookeeper as credential store. Credentials can be created in Zookeeper using kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created by adding a config with the mechanism name. Credentials for inter-broker communication must be created before Kafka brokers are started. Client credentials may be created and updated dynamically and updated credentials will be used to authenticate new connections.

In other words, the SCRAM credentials must first be created with the kafka-configs.sh script before the brokers are started. Execute:

bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=123456t],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name admin
# Verify
bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file /data1/zookeeper/conf/zoo.cfg --describe --entity-type users --entity-name admin
# Delete
bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name bclinux

With the credentials in place, restart again with: /data1/kafka/bin/kafka-server-start.sh -daemon /data1/kafka/config/server.properties. Kafka still logs some errors, but the server now starts up normally.
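For reference, once at least one broker is running, SCRAM credentials can also be managed through the brokers instead of ZooKeeper (supported since Kafka 2.7; the ZooKeeper path is still needed for bootstrapping the inter-broker user). A sketch, assuming an admin.properties file carrying valid SASL client settings:

```shell
# Manage SCRAM credentials via a running broker (address and properties
# file are examples; admin.properties must authenticate as a super user).
bin/kafka-configs.sh --bootstrap-server 172.18.1.217:8073 \
  --command-config admin.properties \
  --alter --add-config 'SCRAM-SHA-512=[password=123456]' \
  --entity-type users --entity-name admin
```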


III. Appendix: Other SASL Configurations

1) Kafka broker configuration reference

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};
# The username/password above are created for inter-broker communication
# when Kafka initializes its connections after startup.

# Edit kafka-run-class.sh to add the JAAS file to the Kafka startup options
vim ./bin/kafka-run-class.sh   # add the following next to any JVM option, e.g. KAFKA_OPTS=""

KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"

# Modify kafka server.properties
vim server.properties  # for reference:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 (or SCRAM-SHA-512)
sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)
# Kafka client configuration
vim producer.properties  # the same settings apply to consumer.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
# Kafka client JAAS configuration; the producer connects to the broker as this user
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" \
  password="alice-secret";

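The client settings above can be smoke-tested with the console tools shipped with Kafka. A sketch (topic name and address are examples):

```shell
# Produce and consume over the SASL listener using the client
# properties files configured above.
bin/kafka-console-producer.sh --bootstrap-server host.name:port --topic test \
  --producer.config config/producer.properties

bin/kafka-console-consumer.sh --bootstrap-server host.name:port --topic test \
  --from-beginning --consumer.config config/consumer.properties
```

If authentication is misconfigured, these tools fail with the same "failed authentication" class of errors seen in section I, which makes them a convenient first check.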

Security notes:

  1. The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This is suitable for production use in installations where Zookeeper is secure and on a private network.
  2. Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count of 4096. Strong hash functions combined with strong passwords and high iteration counts protect against brute force attacks if Zookeeper security is compromised.
  3. SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.
  4. From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers by configuring sasl.server.callback.handler.class in installations where Zookeeper is not secure.
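The override in note 4 is configured per listener and mechanism in server.properties. A sketch, where the handler class name is a hypothetical placeholder for your own implementation:

```
# server.properties: replace the ZooKeeper-backed SCRAM credential store
# (Kafka >= 2.0). com.example.CustomScramCallbackHandler is a placeholder.
listener.name.internal.scram-sha-512.sasl.server.callback.handler.class=com.example.CustomScramCallbackHandler
```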