
Kafka rebalance and the duplicate consumption problem


Problem and symptoms:

A program consuming data from Kafka kept re-consuming the same records, as if the offsets were never committed after the messages had been processed, even though auto-commit was enabled in the program by setting enable.auto.commit to true.
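For context, here is a minimal sketch of the kind of consumer setup described above, written against the plain Java kafka-clients API. The broker address is a placeholder, the topic name is inferred from the partition "E2C-GDFS-0" in the log below, and the clientId/groupId are taken from that log; everything else is assumed to be at its defaults.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AutoCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer-group");     // groupId from the log below
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-1");             // clientId from the log below
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Offsets are committed automatically (on the next poll and on close),
        // so the code never calls commitSync()/commitAsync() explicitly.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("E2C-GDFS")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // if this is slow, the next poll() is delayed
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // stands in for the real (slow) per-record business logic
    }
}
```

If the per-batch processing time exceeds max.poll.interval.ms, the group coordinator considers the consumer dead, rebalances its partitions to another member, and the pending auto-commit fails, which is exactly the situation the warning below describes.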
Checking the logs turned up the following warning:

2020-03-26 17:20:21.414  WARN 28800 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-1, groupId=test-consumer-group]
Synchronous auto-commit of offsets {E2C-GDFS-0=OffsetAndMetadata{offset=9632, leaderEpoch=8, metadata=''}} failed:
Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member.
This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing.
You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
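Following the two remedies the warning itself suggests, here is a hedged sketch of the corresponding consumer settings. The concrete values are illustrative only and should be chosen from the actual per-record processing time, not copied verbatim.

```java
// Illustrative tuning only; combine with the basic settings from the sketch above.
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000"); // allow more time between poll() calls (default 300000 ms = 5 min)
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");        // smaller batches per poll() (default 500 records)
```

The thread name [ntainer#2-0-C-1] in the log suggests a Spring Kafka listener container; in a Spring Boot application the same options can typically be supplied through spring.kafka.consumer.max-poll-records and spring.kafka.consumer.properties.max.poll.interval.ms in application.properties.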