
High CPU usage and failure to consume messages caused by the kafka-python client

One of our three Kafka brokers was also showing very high CPU usage.

Today I hit a problem while using kafka-python 1.3.3 to consume from a Kafka cluster running broker version 1.0.1. After a rebalance, partitions were assigned to the client, but CPU usage shot up and no messages could be consumed.

I first ruled out connection problems and bugs in our own code; once those checked out, I turned my attention to the kafka client library itself.

Searching for related problems, the first hit was kafka-python issue 1033:

When no module exists to handle Snappy decompression, the KafkaConsumer returns no messages, rather than reporting the problem. This differs from the legacy Consumer API which provides a much more useful error message.

Background

I was attempting to fetch some data from a Kafka topic which was using snappy compression. No data was ever returned even though I knew data was being landed in the topic (confirmed with the Kafka CLI tools). This had me very confused.

    >>> consumer = kafka.KafkaConsumer("test", bootstrap_servers=["svr:9092"])
    >>> consumer.poll(5000)
    {}

I then attempted to use the legacy consumer API which pointed me to the exact problem.

    >>> client = kafka.SimpleClient("svr:9092")
    >>> consumer.close()
    >>> consumer = kafka.SimpleConsumer(client, "group", "test")
    >>> for message in consumer:
    ...     print(message)
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/kafka/consumer/simple.py", line 353, in __iter__
        message = self.get_message(True, timeout)
      File "/usr/lib/python2.7/site-packages/kafka/consumer/simple.py", line 305, in get_message
        return self._get_message(block, timeout, get_partition_info)
      File "/usr/lib/python2.7/site-packages/kafka/consumer/simple.py", line 320, in _get_message
        self._fetch()
      File "/usr/lib/python2.7/site-packages/kafka/consumer/simple.py", line 379, in _fetch
        fail_on_error=False
      File "/usr/lib/python2.7/site-packages/kafka/client.py", line 665, in send_fetch_request
        KafkaProtocol.decode_fetch_response)
      File "/usr/lib/python2.7/site-packages/kafka/client.py", line 295, in _send_broker_aware_request
        for payload_response in decoder_fn(future.value):
      File "/usr/lib/python2.7/site-packages/kafka/protocol/legacy.py", line 212, in decode_fetch_response
        for partition, error, highwater_offset, messages in partitions
      File "/usr/lib/python2.7/site-packages/kafka/protocol/legacy.py", line 219, in decode_message_set
        inner_messages = message.decompress()
      File "/usr/lib/python2.7/site-packages/kafka/protocol/message.py", line 121, in decompress
        assert has_snappy(), 'Snappy decompression unsupported'
    AssertionError: Snappy decompression unsupported

All I needed to do was install the python-snappy module to handle the decompression.

pip install python-snappy

This looked very similar to what I was running into.

Sure enough, python-snappy was missing from our requirements file, and the producer we use does compress messages with snappy.
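There is also a quick programmatic way to make the same check (a minimal sketch, not from the original post): kafka-python's kafka.codec module exposes has_gzip() and has_snappy() helpers, the same has_snappy() that appears in the assertion in the traceback above, which report whether each codec can be decoded on this machine. The try/except ImportError guard is only there so the snippet also runs where kafka-python itself is absent:

```python
# Sketch: report which compression codecs this kafka-python install
# can actually decode. has_gzip()/has_snappy() live in kafka.codec;
# the ImportError guard covers environments without kafka-python.
try:
    from kafka.codec import has_gzip, has_snappy
    codecs = {"gzip": has_gzip(), "snappy": has_snappy()}
except ImportError:
    codecs = {}  # kafka-python itself is not installed here

for name, ok in sorted(codecs.items()):
    print("%s: %s" % (name, "OK" if ok else "missing decoder library"))
```

If "snappy: missing decoder library" is printed, the client is in exactly the state described in the issue.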

kafka-python has fixed this in newer releases: if python-snappy is not installed, the error is raised instead of leaving you guessing.

So I upgraded kafka-python, installed python-snappy, and everything ran happily again.
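With the upgraded client the failure mode changes from a silent empty poll() to an explicit exception, which a consumer loop can catch and turn into an actionable message. A hedged sketch: kafka.errors.UnsupportedCodecError is the exception recent kafka-python raises for a missing decompression library; the fallback alias only covers environments without kafka-python, and poll_safely is a hypothetical helper name, not part of the library:

```python
# Sketch: in recent kafka-python, a missing decompression library
# surfaces as UnsupportedCodecError instead of an empty poll result.
try:
    from kafka.errors import UnsupportedCodecError
except ImportError:
    UnsupportedCodecError = RuntimeError  # kafka-python not installed here

def poll_safely(consumer, timeout_ms=5000):
    """Poll and translate a missing-codec failure into a clear hint.

    `consumer` is anything with a poll(timeout_ms) method, e.g. a
    kafka.KafkaConsumer instance.
    """
    try:
        return consumer.poll(timeout_ms)
    except UnsupportedCodecError as exc:
        raise RuntimeError(
            "messages use a compression codec this client cannot decode; "
            "try: pip install python-snappy") from exc
```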

 

 

References:

https://github.com/dpkp/kafka-python/issues/1033  KafkaConsumer Fails to Report Problem with Compression

https://github.com/dpkp/kafka-python/issues/1315  High CPU usage in KafkaConsumer.poll() when subscribed to many topics with no new messages (possibly SSL related)

 

Originally published at: https://www.cnblogs.com/piperck/p/10265706.html
