A field team once reported that Kafka failed to start during deployment with the error: ERROR [Controller id=2, targetBrokerId=2] Connection to node 2 (5-hb-217-BClinux/172.18.1.217:8074) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
On-site environment: standalone ZooKeeper 3.8.2, Kafka 2.13-3.5.1 (Scala 2.13, Kafka 3.5.1)
References: Authentication using SASL/SCRAM, ZooKeeper Authentication, ZooKeeper Deprecation (Kafka official documentation)
1) The error message makes clear that the broker failed authentication. Possible causes:
1. Wrong username or password (ruled out after checking with the field team);
2. ZooKeeper holds no credentials for the account, so the broker cannot authenticate against the ZooKeeper-backed credential store and the connection is refused;
3. A problem with support for the SCRAM-SHA-512 SASL mechanism;
4. Controller and broker listeners not separated (listener configuration problem).
A successful startup looks like this:
2) Checked the Kafka configuration file and made the following changes:
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Configure the ACL authorizer class
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# Make admin the super user for this example
super.users=User:admin
listener.name.sasl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="123456";
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
listeners=INTERNAL://172.18.1.217:8073,EXTERNAL://172.31.77.237:8072,CONTROLLER://172.18.1.217:8074
inter.broker.listener.name=INTERNAL
control.plane.listener.name=CONTROLLER
After these changes, restarting Kafka did not noticeably improve the error.
3) Suspecting a problem with the metadata stored in ZooKeeper, we consulted the official documentation:
The SCRAM(Salted Challenge Response Authentication Mechanism ) implementation in Kafka uses Zookeeper as credential store. Credentials can be created in Zookeeper using kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created by adding a config with the mechanism name. Credentials for inter-broker communication must be created before Kafka brokers are started. Client credentials may be created and updated dynamically and updated credentials will be used to authenticate new connections.
That is, the SCRAM credential configuration must first be added with the kafka-configs.sh script. Run:
bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=123456t],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name admin
# Verify
bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file /data1/zookeeper/conf/zoo.cfg --describe --entity-type users --entity-name admin
# Delete
bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name bclinux
After that, restart again with: /data1/kafka/bin/kafka-server-start.sh -daemon /data1/kafka/config/server.properties. Kafka still logs a few errors, but the server now starts up normally.
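To double-check that the broker now accepts SCRAM logins, a minimal client config can be pointed at the INTERNAL listener. This is a sketch: the admin/123456 credentials and the 172.18.1.217:8073 address come from the server.properties above, and client.properties is just an arbitrary file name.

```properties
# client.properties -- SASL_PLAINTEXT matches listener.security.protocol.map above
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="123456";
```

Running, for example, bin/kafka-topics.sh --bootstrap-server 172.18.1.217:8073 --command-config client.properties --list should then complete without an authentication error.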
1) Kafka broker configuration reference
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin" # after Kafka starts, these two properties are used for inter-broker connections
password="admin-secret";
};
# Edit kafka-run-class.sh to add the JAAS file to Kafka's startup options
vim ./bin/kafka-run-class.sh  # add the following alongside any JVM option, e.g. where KAFKA_OPTS="" is set
KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
# Modify Kafka server.properties
vim server.properties  # reference below
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 (or SCRAM-SHA-512)
sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)
# Kafka client configuration
vim producer.properties  # the same settings apply to consumer.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
# Kafka client JAAS configuration (the producer connects to the broker as user alice)
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" \
  password="alice-secret";
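Putting the client-side pieces together, a complete producer.properties could look like the following sketch. The truststore path and password are placeholders (SASL_SSL requires the client to trust the broker's certificate); alice/alice-secret are the example credentials from above.

```properties
# producer.properties -- complete SASL_SSL + SCRAM client example
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
# placeholder truststore; point it at a store that trusts the broker certificate
ssl.truststore.location=/etc/kafka/client.truststore.jks
ssl.truststore.password=changeit
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" \
  password="alice-secret";
```

It can then be used with, for example, bin/kafka-console-producer.sh --bootstrap-server host.name:port --topic test --producer.config producer.properties.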
Security notes:
- The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This is suitable for production use in installations where Zookeeper is secure and on a private network.
- Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count of 4096. Strong hash functions combined with strong passwords and high iteration counts protect against brute force attacks if Zookeeper security is compromised.
- SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.
- From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers by configuring sasl.server.callback.handler.class in installations where Zookeeper is not secure.
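To make the iteration-count knob concrete, here is a small Python sketch of how a SCRAM-SHA-512 server-side credential is derived per RFC 5802 (SaltedPassword via PBKDF2, then StoredKey and ServerKey). This is an illustration of the mechanism only, not Kafka's actual code; Kafka does the equivalent in Java, and the function name here is ours.

```python
import base64
import hashlib
import hmac
import os

def scram_sha512_credential(password, iterations=4096, salt=None):
    """Derive the values a SCRAM-SHA-512 server stores (RFC 5802 naming)."""
    if iterations < 4096:
        raise ValueError("Kafka requires a minimum iteration count of 4096")
    salt = salt or os.urandom(16)
    # SaltedPassword := Hi(password, salt, iterations), i.e. PBKDF2 with HMAC-SHA-512
    salted_password = hashlib.pbkdf2_hmac(
        "sha512", password.encode("utf-8"), salt, iterations
    )
    client_key = hmac.new(salted_password, b"Client Key", hashlib.sha512).digest()
    stored_key = hashlib.sha512(client_key).digest()  # what the server keeps
    server_key = hmac.new(salted_password, b"Server Key", hashlib.sha512).digest()
    return {
        "salt": base64.b64encode(salt).decode(),
        "stored_key": base64.b64encode(stored_key).decode(),
        "server_key": base64.b64encode(server_key).decode(),
        "iterations": iterations,
    }
```

The server never stores the password itself, only salt, StoredKey, ServerKey, and the iteration count, which is why a higher iteration count raises the cost of brute-forcing a leaked credential store.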