When the data volume spikes, you can add partitions to a Kafka topic to increase its processing parallelism. The steps are as follows (note that kafka-topics can only increase the partition count, never decrease it):
kafka-topics --zookeeper hadoop004:2181 --alter --topic flink-test-04 --partitions 3
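After altering the topic, you can confirm the new partition count with a read-only describe against the same ZooKeeper quorum (the exact output layout depends on your Kafka version):

```shell
# Read-only check: list the topic's partitions, leaders and replicas
kafka-topics --zookeeper hadoop004:2181 --describe --topic flink-test-04
```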
To generate a migration plan, first create a JSON file by hand (topic.json, the name the command below expects) listing the topics to move:
{
"topics": [
{"topic": "flink-test-03"}
],
"version": 1
}
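The file above can be written and sanity-checked from the shell; the json.tool step is just a local validation before handing the file to Kafka, not part of the Kafka tooling:

```shell
# Write the topics-to-move file exactly as the reassignment tool expects it
cat > topic.json <<'EOF'
{
  "topics": [
    {"topic": "flink-test-03"}
  ],
  "version": 1
}
EOF

# Local sanity check: fail fast on malformed JSON before calling Kafka
python -m json.tool topic.json > /dev/null && echo "topic.json OK"
```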
Generate the migration plan:
kafka-reassign-partitions --zookeeper hadoop004:2181 --topics-to-move-json-file topic.json --broker-list "120,121,122" --generate
The --generate command prints two JSON blocks: the current replica assignment (keep a copy of it in case you need to roll back) and a proposed reassignment. Current partition replica assignment:
{"version":1,"partitions":[{"topic":"flink-test-02","partition":5,"replicas":[120]},{"topic":"flink-test-02","partition":0,"replicas":[121]},{"topic":"flink-test-02","partition":2,"replicas":[120]},{"topic":"flink-test-02","partition":1,"replicas":[122]},{"topic":"flink-test-02","partition":4,"replicas":[122]},{"topic":"flink-test-02","partition":3,"replicas":[121]}]}
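--generate does not move any data by itself; the second block it prints is only a proposal. A hypothetical proposed block for this 6-partition topic, spreading single replicas round-robin across brokers 120, 121 and 122, might look like the following (illustrative only; the tool's actual broker assignments will differ):

```json
{"version":1,"partitions":[
  {"topic":"flink-test-02","partition":0,"replicas":[120]},
  {"topic":"flink-test-02","partition":1,"replicas":[121]},
  {"topic":"flink-test-02","partition":2,"replicas":[122]},
  {"topic":"flink-test-02","partition":3,"replicas":[120]},
  {"topic":"flink-test-02","partition":4,"replicas":[121]},
  {"topic":"flink-test-02","partition":5,"replicas":[122]}
]}
```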
Create a new file, reassignment.json, and save the proposed reassignment configuration from the --generate output into it (saving the current assignment separately lets you roll back later). Then execute the migration and verify its progress:
kafka-reassign-partitions --zookeeper hadoop004:2181 --reassignment-json-file reassignment.json --execute
kafka-reassign-partitions --zookeeper hadoop004:2181 --reassignment-json-file reassignment.json --verify