
KafkaProducer Usage, Configuration Parameters, and Core Source Code Analysis

Introduction

Kafka is currently the most widely used message queue in the big data space, and its internal implementation and design reward close study. Working with Kafka usually starts with developing a producer, then a consumer. Since version 0.8.2.x, Kafka has shipped a Java producer to replace the earlier Scala producer; the analysis below covers that producer.

Producer Design Overview

Simplified send flow (diagram omitted)

KafkaProducer first serializes the data with the configured serializers, then uses a partitioner to determine the target partition of the topic. Kafka provides a default partitioner: if the record specifies a key, the partitioner derives the target partition from the hash of the key; if not, partitions are chosen in a round-robin fashion, which balances the load across partitions as evenly as possible. Once the partition is determined, the producer looks up the leader of that partition (the node that handles reads and writes for it). The record is then placed in a buffer pool and sent in batches once enough records have accumulated or a size/time threshold is reached.

Producer Development

  • Sending messages synchronously with the producer

package com.huawei.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

/**
 * @author: xuqiangnj@163.com
 * @date: 2019/4/15 21:59
 * @description: synchronous producer example
 */
public class SynKafkaProducer {

    public static final Properties props = new Properties();

    static {
        props.put("bootstrap.servers", "192.168.142.139:9092");
        // 0:  the producer does not wait for an ack from the broker
        // 1:  the leader sends an ack once it has received the message
        // -1: an ack is sent only after all followers have replicated the message
        props.put("acks", "-1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    }

    final KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(props);

    private String topicName;

    public SynKafkaProducer(String topicName) {
        this.topicName = topicName;
    }

    public RecordMetadata send(String key, String value) {
        RecordMetadata recordMetadata = null;
        try {
            recordMetadata = kafkaProducer.send(new ProducerRecord<String, String>(topicName,
                    key, value)).get(); // get() blocks until the RecordMetadata is returned
        } catch (InterruptedException e) {
            // log or handle the exception
        } catch (ExecutionException e) {
            // log or handle the exception
        }
        return recordMetadata;
    }

    public void close() {
        if (kafkaProducer != null) {
            kafkaProducer.close();
        }
    }

    public static void main(String[] args) {
        SynKafkaProducer synKafkaProducer = new SynKafkaProducer("kafka-test");
        for (int i = 0; i < 10; i++) {
            RecordMetadata metadata = synKafkaProducer.send(String.valueOf(i), "This is " + i +
                    " message");
            System.out.println("TopicName : " + metadata.topic() + " Partition : " + metadata
                    .partition() + " Offset : " + metadata.offset());
        }
        synKafkaProducer.close();
    }
}

Run output (screenshot omitted)

  • Sending messages asynchronously with the producer

package com.huawei.kafka.producer;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;

/**
 * @author: xuqiangnj@163.com
 * @date: 2019/4/15 21:59
 * @description: asynchronous producer example
 */
public class AsynKafkaProducer {

    public static final Properties props = new Properties();

    static {
        props.put("bootstrap.servers", "192.168.142.139:9092");
        // 0:  the producer does not wait for an ack from the broker
        // 1:  the leader sends an ack once it has received the message
        // -1: an ack is sent only after all followers have replicated the message
        props.put("acks", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    }

    final KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(props);

    private String topicName;

    public AsynKafkaProducer(String topicName) {
        this.topicName = topicName;
    }

    public void send(String key, String value) {
        kafkaProducer.send(new ProducerRecord<String, String>(topicName,
                key, value), new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception == null) {
                    System.out.println("TopicName : " + metadata.topic() + " Partition : " + metadata
                            .partition() + " Offset : " + metadata.offset());
                } else {
                    // handle the exception
                }
            }
        });
    }

    public void close() {
        if (kafkaProducer != null) {
            kafkaProducer.close();
        }
    }

    public static void main(String[] args) {
        AsynKafkaProducer asynKafkaProducer = new AsynKafkaProducer("kafka-test");
        for (int i = 0; i < 10; i++) {
            asynKafkaProducer.send(String.valueOf(i), "This is " + i + " message");
        }
        try {
            Thread.sleep(2000); // block for two seconds so the callbacks have time to print their results
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        asynKafkaProducer.close();
    }
}

Run output (screenshot omitted)

Synchronous sending

Advantages: every message is confirmed to have been written to the broker, which suits cases where the result is needed immediately; even if the producer fails or crashes, the outcome of each send is known.

Disadvantages: each message must be sent to the broker and acknowledged before the next one, so there is no buffering or batching and throughput is low.

Asynchronous sending

Advantages: messages are collected in the buffer pool and sent in batches, which greatly reduces the number of round trips to the broker and gives much higher throughput; the send result can still be obtained through the callback mechanism.

Disadvantages: if the producer loses power or restarts, the results of in-flight sends may be lost, so it is not suitable for scenarios with strict accuracy requirements on every message.

Using the producer is straightforward:

1. Configure the producer parameters

2. Construct a ProducerRecord

3. Call send() to send the record

4. Close the producer to release its resources

Producer Parameters

  • bootstrap.servers: the list of Kafka brokers used to bootstrap the connection to the cluster. Even in a large cluster only a few hosts need to be listed; no matter how many are given, the producer discovers all brokers in the cluster through any one of them. Listing three hosts is usually enough. Format: hostname1:port,hostname2:port,hostname3:port
  • key.serializer: every message is sent to the broker as a byte array, so it must be serialized before being sent. This parameter specifies the serializer for the key of the ProducerRecord.
  • value.serializer: the serializer for the value of the ProducerRecord.

Kafka ships serializers for the common types, and you can also define your own by implementing org.apache.kafka.common.serialization.Serializer; see the message serialization section below.

  • acks: takes three values: 0, 1, and -1 (all)

acks=0: the producer does not wait for any result from the broker before sending the next record; this gives the highest throughput.

acks=-1 (or all): the leader broker must persist the message and all ISR replicas (in-sync replicas, the set of replicas that are caught up with the leader) must also persist it before a response is returned; this gives the lowest throughput.

acks=1: a compromise between the two; a response is returned as soon as the leader broker has persisted the message.

  • buffer.memory: the size of the send buffer pool in bytes. Default: 33554432 bytes (32 MB)
  • compression.type: the compression type; Kafka currently supports four codecs: gzip, snappy, lz4, and zstd. Default: none (no compression)
  • retries: number of retries; 0 disables retries. Default: 2147483647
  • batch.size: the batch size; increasing it usually improves producer throughput
  • linger.ms: how long to wait before sending; a batch is sent as soon as either this or batch.size is satisfied
  • max.block.ms: how long the producer blocks when its buffer is full
  • max.request.size: limits the size of a producer request
  • receive.buffer.bytes: the size of the TCP receive buffer (SO_RCVBUF) used when reading data; -1 means use the OS default. Default: 32768
  • send.buffer.bytes: the size of the TCP send buffer (SO_SNDBUF) used when sending data; -1 means use the OS default. Default: 131072
  • request.timeout.ms: how long the producer waits for a response from the server after sending a request. Default: 30000 ms
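
To make these parameters concrete, here is a minimal configuration sketch. The broker address and all tuning values are placeholders chosen for illustration and should be adapted to your own cluster and workload; only the parameter names come from the list above.

package com.huawei.kafka.producer;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerConfigSketch {

    public static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        // Connection and serialization
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.142.139:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Durability: "all" is equivalent to "-1"
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);                 // example value
        // Batching and buffering
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);          // 16 KB per batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);               // wait up to 5 ms to fill a batch
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L);   // 32 MB buffer pool
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000L);       // block up to 60 s when the buffer is full
        // Request limits and timeouts
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);  // 1 MB
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        return new KafkaProducer<>(props);
    }
}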

Message Partitioning

  • Partitioning strategy: before sending a message to the broker, Kafka must determine its target partition. With the default partitioner, if the ProducerRecord specifies a key, the partition is computed from a hash of that key; otherwise partitions are chosen in round-robin fashion. A partition can also be specified explicitly when constructing the ProducerRecord; Kafka then uses it directly and skips the computation, but the client becomes responsible for keeping data balanced across partitions.
  • Custom partitioning: you can implement org.apache.kafka.clients.producer.Partitioner to define your own strategy and set the partitioner.class parameter to your class when constructing the KafkaProducer; a sketch is shown below.
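
The following is a minimal custom-partitioner sketch. The class name AuditFirstPartitioner, the "audit" key prefix, and the routing rule are invented for this example; only the Partitioner interface and the partitioner.class setting come from Kafka.

package com.huawei.kafka.producer;

import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

// Hypothetical partitioner: keys starting with "audit" always go to partition 0,
// all other records are spread over the remaining partitions.
public class AuditFirstPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int numPartitions = partitions.size();
        if (numPartitions <= 1 || (key != null && key.toString().startsWith("audit"))) {
            return 0; // dedicated partition for audit records
        }
        if (keyBytes == null) {
            // no key: pick one of the non-audit partitions at random
            return 1 + ThreadLocalRandom.current().nextInt(numPartitions - 1);
        }
        // keyed records: hash the key over the non-audit partitions
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}

It is enabled with props.put("partitioner.class", AuditFirstPartitioner.class.getName()) before constructing the KafkaProducer.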

Message Serialization

  • Default serializers: Kafka ships default serializers and deserializers for the common types (String, Integer, Long, Double, ByteArray, ByteBuffer, and Bytes, among others).

  • Custom serializers: you can implement org.apache.kafka.common.serialization.Serializer to define your own serializer; a sketch is shown below.
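
A minimal custom-serializer sketch follows, assuming a simple User POJO. Both the User class and its "id|name" wire format are invented for this example; only the Serializer interface comes from Kafka.

package com.huawei.kafka.producer;

import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

public class UserSerializer implements Serializer<User> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure in this sketch
    }

    @Override
    public byte[] serialize(String topic, User user) {
        if (user == null) {
            return null;
        }
        try {
            // "id|name" is an arbitrary wire format chosen for illustration
            String wireFormat = user.getId() + "|" + user.getName();
            return wireFormat.getBytes(StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new SerializationException("Error serializing User for topic " + topic, e);
        }
    }

    @Override
    public void close() {
        // no resources to release
    }
}

// Hypothetical POJO used only for this example
class User {
    private final int id;
    private final String name;
    User(int id, String name) { this.id = id; this.name = name; }
    int getId() { return id; }
    String getName() { return name; }
}

Configure it with props.put("value.serializer", UserSerializer.class.getName()), and pair it with a matching Deserializer on the consumer side.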

Message Compression

  • Compression types: compression.type accepts five values: none (the default), gzip, snappy, lz4, and zstd
  • Trade-offs: gzip and zstd generally achieve better compression ratios, while lz4 and snappy generally compress and decompress faster, with zstd often striking a good balance; the best choice depends on your data and should be benchmarked. A small configuration sketch follows below.
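
Compression is enabled purely on the producer side; brokers and consumers handle the compressed batches transparently. The helper and values below are illustrative only, not recommendations; because compression is applied per batch, slightly larger, lingering batches give the codec more data to work with.

package com.huawei.kafka.producer;

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class CompressionConfigSketch {

    // Adds compression settings to existing producer properties; values are examples only
    public static Properties withCompression(Properties props) {
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32768); // 32 KB batches
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);     // wait up to 10 ms per batch
        return props;
    }
}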

Multi-threaded Usage

A producer instance is thread-safe; in a multi-threaded environment there are two common patterns (a sketch of the first follows this list):

  • Multiple threads, single instance: simple to implement; all threads share one KafkaProducer instance, so memory overhead is small (a single buffer pool), but if sending blocks or the instance crashes, every thread is affected.
  • Multiple threads, multiple instances: either one instance per thread, or M threads sharing N instances. One instance per thread is easy to manage and a crash in one thread does not affect the others; the M-to-N form maximizes resource utilization, since the number of KafkaProducer instances can be tuned to the actual thread count and work can fail over to another instance if one crashes, so overall it is usually the better choice. Both patterns use more memory than a single instance and are more complex to implement.
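
Below is a sketch of the first pattern: multiple worker threads sharing a single KafkaProducer instance. The broker address, topic name, thread count, and message counts are placeholders for illustration.

package com.huawei.kafka.producer;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducerDemo {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.142.139:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // KafkaProducer is thread-safe, so one instance is shared by all workers
        final KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int t = 0; t < 4; t++) {
            final int workerId = t;
            pool.submit(() -> {
                for (int i = 0; i < 100; i++) {
                    // send() may be called concurrently from multiple threads
                    producer.send(new ProducerRecord<>("kafka-test",
                            "worker-" + workerId, "message-" + i));
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        producer.close(); // close once, after all workers have finished
    }
}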

Producer Send Flow: Source Code Analysis

As the examples above show, sending a message with the producer comes down to calling the send() method.

The producer send() method

// Asynchronously send a record to a topic
@Override
public Future<RecordMetadata> send(ProducerRecord<K, V> record) {
    return send(record, null);
}

// Asynchronously send a record to a topic and register a callback
// that will receive the response of the send
@Override
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // intercept the record, which can be potentially modified; this method does not throw exceptions
    ProducerRecord<K, V> interceptedRecord = this.interceptors.onSend(record);
    return doSend(interceptedRecord, callback);
}

send() is overloaded to accept an optional callback; both variants end up calling doSend().

The producer doSend() method

// Asynchronously send a record to the corresponding topic
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
    TopicPartition tp = null;
    try {
        // 1. Check whether the producer has been closed; if so, throw an exception
        throwIfProducerClosed();
        ClusterAndWaitTime clusterAndWaitTime;
        try {
            // 2. Make sure the metadata for the topic is available
            clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
        } catch (KafkaException e) {
            if (metadata.isClosed())
                throw new KafkaException("Producer closed while send in progress", e);
            throw e;
        }
        long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
        Cluster cluster = clusterAndWaitTime.cluster;
        // 3. Serialize the key
        byte[] serializedKey;
        try {
            serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in key.serializer", cce);
        }
        // 4. Serialize the value
        byte[] serializedValue;
        try {
            serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in value.serializer", cce);
        }
        // 5. Determine the partition: use the one specified in the ProducerRecord if present,
        //    otherwise compute it with the partitioner
        int partition = partition(record, serializedKey, serializedValue, cluster);
        tp = new TopicPartition(record.topic(), partition);
        setReadOnly(record.headers());
        Header[] headers = record.headers().toArray();
        // 6. Estimate the serialized size of the record
        int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
                compressionType, serializedKey, serializedValue, headers);
        // 7. Make sure the record does not exceed max.request.size or buffer.memory
        ensureValidRecordSize(serializedSize);
        long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
        log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
        // producer callback will make sure to call both 'callback' and interceptor callback
        Callback interceptCallback = new InterceptorCallback<>(callback, this.interceptors, tp);
        // 8. If transactions are enabled, register the partition with the transaction manager
        if (transactionManager != null && transactionManager.isTransactional())
            transactionManager.maybeAddPartitionToTransaction(tp);
        // 9. Append the record to the accumulator
        RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                serializedValue, headers, interceptCallback, remainingWaitMs);
        // 10. If the batch is full or a new batch was created, wake up the sender thread
        if (result.batchIsFull || result.newBatchCreated) {
            log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
            this.sender.wakeup();
        }
        return result.future;
        // handling exceptions and record the errors;
        // for API exceptions return them in the future,
        // for other exceptions throw directly
    } catch (ApiException e) {
        .............
    }
    ..........
}

doSend() completes the send in 10 main steps:

1. Check whether the producer instance has been closed; if so, throw an exception

2. Make sure the metadata for the topic is available

3. Serialize the key of the ProducerRecord

4. Serialize the value of the ProducerRecord

5. Determine the partition: use the partition specified when the ProducerRecord was constructed if present, otherwise compute one

6. Estimate the serialized size of the whole record

7. Make sure the record does not exceed the maximum request size (max.request.size) or the send buffer pool size (buffer.memory); throw an exception if it does

8. If transactions are in use, follow the transactional path

9. Append the record to the accumulator

10. If the batch is full, wake up the sender thread to send the data

The send process in detail

Fetching the topic metadata

The producer fetches the metadata for a topic through the waitOnMetadata() method, implemented as follows:

// If the topic is not yet in the metadata topic list, add it and reset the expiry time
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
    // First fetch the cluster info from the cached metadata
    Cluster cluster = metadata.fetch();
    // If the cluster's invalid-topic list contains this topic, throw an exception
    if (cluster.invalidTopics().contains(topic))
        throw new InvalidTopicException(topic);
    // Otherwise add the topic to the metadata topic list
    metadata.add(topic);
    // Number of partitions for this topic
    Integer partitionsCount = cluster.partitionCountForTopic(topic);
    // If cached metadata exists and either no partition was specified or the requested
    // partition is within the known range, return the cached metadata
    if (partitionsCount != null && (partition == null || partition < partitionsCount))
        return new ClusterAndWaitTime(cluster, 0);

    long begin = time.milliseconds();
    long remainingWaitMs = maxWaitMs;
    long elapsed;
    // Issue metadata requests until we have metadata for the topic and the requested partition,
    // or until maxWaitTimeMs is exceeded. This is necessary in case the metadata
    // is stale and the number of partitions for this topic has increased in the meantime.
    do {
        if (partition != null) {
            log.trace("Requesting metadata update for partition {} of topic {}.", partition, topic);
        } else {
            log.trace("Requesting metadata update for topic {}.", topic);
        }
        metadata.add(topic);
        int version = metadata.requestUpdate();
        sender.wakeup();
        try {
            metadata.awaitUpdate(version, remainingWaitMs);
        } catch (TimeoutException ex) {
            // Rethrow with original maxWaitMs to prevent logging exception with remainingWaitMs
            throw new TimeoutException(
                    String.format("Topic %s not present in metadata after %d ms.",
                            topic, maxWaitMs));
        }
        cluster = metadata.fetch();
        elapsed = time.milliseconds() - begin;
        if (elapsed >= maxWaitMs) {
            throw new TimeoutException(partitionsCount == null ?
                    String.format("Topic %s not present in metadata after %d ms.",
                            topic, maxWaitMs) :
                    String.format("Partition %d of topic %s with partition count %d is not present in metadata after %d ms.",
                            partition, topic, partitionsCount, maxWaitMs));
        }
        if (cluster.unauthorizedTopics().contains(topic))
            throw new TopicAuthorizationException(topic);
        if (cluster.invalidTopics().contains(topic))
            throw new InvalidTopicException(topic);
        remainingWaitMs = maxWaitMs - elapsed;
        partitionsCount = cluster.partitionCountForTopic(topic);
    } while (partitionsCount == null || (partition != null && partition >= partitionsCount));
    // Return the cluster info together with the time spent waiting for it
    return new ClusterAndWaitTime(cluster, elapsed);
}

Serializing the key and value

Everything sent to the broker must be serialized, using the serializers specified in the producer's configuration parameters or passed to its constructor.

Determining the partition

The partition value is computed in one of three ways:

  1. If a partition is specified, that value is used directly as the partition.
  2. If no partition is specified but a key is present, the partition is the hash of the key modulo the topic's partition count.
  3. If neither a partition nor a key is given, a random integer is generated on the first call (and incremented on each subsequent call); this value modulo the number of available partitions gives the partition, i.e. the commonly cited round-robin algorithm.

The implementation is as follows:

private int partition(ProducerRecord<K, V> record, byte[] serializedKey, byte[] serializedValue, Cluster cluster) {
    Integer partition = record.partition();
    // If a partition was specified, use it directly; otherwise let the partitioner compute one
    return partition != null ? partition : partitioner.partition(
            record.topic(), record.key(), serializedKey, record.value(), serializedValue, cluster);
}

// The default partitioner is org.apache.kafka.clients.producer.internals.DefaultPartitioner,
// whose partition() method is implemented as follows
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
    List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
    int numPartitions = partitions.size();
    if (keyBytes == null) {
        // No key: get the per-topic counter value
        int nextValue = nextValue(topic);
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
        if (availablePartitions.size() > 0) {
            int part = Utils.toPositive(nextValue) % availablePartitions.size();
            return availablePartitions.get(part).partition();
        } else {
            // no partitions are available, give a non-available partition
            return Utils.toPositive(nextValue) % numPartitions;
        }
    } else {
        // A key was specified: hash the key and take it modulo the total partition count
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}

private int nextValue(String topic) {
    AtomicInteger counter = topicCounterMap.get(topic);
    if (null == counter) {
        // Initialized with a random value on first use
        counter = new AtomicInteger(ThreadLocalRandom.current().nextInt());
        AtomicInteger currentCounter = topicCounterMap.putIfAbsent(topic, counter);
        if (currentCounter != null) {
            counter = currentCounter;
        }
    }
    // Incremented on every subsequent call
    return counter.getAndIncrement();
}

Estimating and validating the message size

The record is validated to prevent oversized or excessive messages from causing memory problems. The implementation:

// Estimate the serialized size of the record
int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
        compressionType, serializedKey, serializedValue, headers);
// Make sure the record size is within the allowed limits
ensureValidRecordSize(serializedSize);

private void ensureValidRecordSize(int size) {
    // Throw if the record is larger than the maximum request size
    if (size > this.maxRequestSize)
        throw new RecordTooLargeException("The message is " + size +
                " bytes when serialized which is larger than the maximum request size you have configured with the " +
                ProducerConfig.MAX_REQUEST_SIZE_CONFIG +
                " configuration.");
    // Throw if the record is larger than the total memory buffer
    if (size > this.totalMemorySize)
        throw new RecordTooLargeException("The message is " + size +
                " bytes when serialized which is larger than the total memory buffer you have configured with the " +
                ProducerConfig.BUFFER_MEMORY_CONFIG +
                " configuration.");
}

Appending data to the accumulator

public RecordAppendResult append(TopicPartition tp,
                                 long timestamp,
                                 byte[] key,
                                 byte[] value,
                                 Header[] headers,
                                 Callback callback,
                                 long maxTimeToBlock) throws InterruptedException {
    // We keep track of the number of appending thread to make sure we do not miss batches in
    // abortIncompleteBatches().
    appendsInProgress.incrementAndGet();
    ByteBuffer buffer = null;
    if (headers == null) headers = Record.EMPTY_HEADERS;
    try {
        // Return the existing queue for this topic-partition, or create a new one
        Deque<ProducerBatch> dq = getOrCreateDeque(tp);
        // Operations on a queue are synchronized to keep them thread-safe
        synchronized (dq) {
            if (closed)
                throw new KafkaException("Producer closed while send in progress");
            RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
            if (appendResult != null)
                return appendResult;
        }
        // Create a new RecordBatch for this topic-partition
        byte maxUsableMagic = apiVersions.maxUsableProduceMagic();
        int size = Math.max(this.batchSize, AbstractRecords.estimateSizeInBytesUpperBound(maxUsableMagic, compression, key, value, headers));
        log.trace("Allocating a new {} byte message buffer for topic {} partition {}", size, tp.topic(), tp.partition());
        // Allocate a buffer for the new RecordBatch
        buffer = free.allocate(size, maxTimeToBlock);
        synchronized (dq) {
            if (closed)
                throw new KafkaException("Producer closed while send in progress");
            RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
            if (appendResult != null) {
                // Another thread appended in the meantime; the allocated buffer is released in the finally block
                return appendResult;
            }
            // Build a RecordBatch for this topic-partition
            MemoryRecordsBuilder recordsBuilder = recordsBuilder(buffer, maxUsableMagic);
            ProducerBatch batch = new ProducerBatch(tp, recordsBuilder, time.milliseconds());
            // Append the record to the new RecordBatch
            FutureRecordMetadata future = Utils.notNull(batch.tryAppend(timestamp, key, value, headers, callback, time.milliseconds()));
            // Add the RecordBatch to the corresponding queue
            dq.addLast(batch);
            // Add this batch to the set of unacknowledged batches
            incomplete.add(batch);
            buffer = null;
            // If dq.size() > 1 there is already at least one batch in this queue ready to be sent
            return new RecordAppendResult(future, dq.size() > 1 || batch.isFull(), true);
        }
    } finally {
        if (buffer != null)
            free.deallocate(buffer);
        appendsInProgress.decrementAndGet();
    }
}

Batch sending via the sender thread

// Main loop of the KafkaProducer I/O (sender) thread
public void run() {
    while (running) {
        try {
            // Run a single iteration of the I/O loop
            run(time.milliseconds());
        } catch (Exception e) {
            log.error("Uncaught error in kafka producer I/O thread: ", e);
        }
    }
    // The accumulator may still hold records, or requests may be waiting for acknowledgement;
    // stop accepting new requests and keep iterating until these are done
    while (!forceClose && (this.accumulator.hasUndrained() || this.client.inFlightRequestCount() > 0)) {
        try {
            run(time.milliseconds());
        } catch (Exception e) {
            log.error("Uncaught error in kafka producer I/O thread: ", e);
        }
    }
    if (forceClose) {
        this.accumulator.abortIncompleteBatches();
    }
    try {
        this.client.close();
    } catch (Exception e) {
        log.error("Failed to close network client", e);
    }
}

void run(long now) {
    // First check whether sends are transactional
    if (transactionManager != null) {
        try {
            if (transactionManager.shouldResetProducerStateAfterResolvingSequences())
                transactionManager.resetProducerId();
            if (!transactionManager.isTransactional()) {
                maybeWaitForProducerId();
            } else if (transactionManager.hasUnresolvedSequences() && !transactionManager.hasFatalError()) {
                transactionManager.transitionToFatalError(
                        new KafkaException("The client hasn't received acknowledgment for " +
                                "some previously sent messages and can no longer retry them. It isn't safe to continue."));
            } else if (transactionManager.hasInFlightTransactionalRequest() || maybeSendTransactionalRequest(now)) {
                // as long as there are outstanding transactional requests, we simply wait for them to return
                client.poll(retryBackoffMs, now);
                return;
            }
            // If the transaction manager is in a fatal error state or the producer has no id, stop sending
            if (transactionManager.hasFatalError() || !transactionManager.hasProducerId()) {
                RuntimeException lastError = transactionManager.lastError();
                if (lastError != null)
                    maybeAbortBatches(lastError);
                client.poll(retryBackoffMs, now);
                return;
            } else if (transactionManager.hasAbortableError()) {
                accumulator.abortUndrainedBatches(transactionManager.lastError());
            }
        } catch (AuthenticationException e) {
            transactionManager.authenticationFailed(e);
        }
    }
    // Send the accumulated producer data
    long pollTimeout = sendProducerData(now);
    client.poll(pollTimeout, now);
}

private long sendProducerData(long now) {
    Cluster cluster = metadata.fetch();
    // Get the list of partitions with data ready to send
    RecordAccumulator.ReadyCheckResult result = this.accumulator.ready(cluster, now);
    // If the leader of any partition is unknown, force a metadata update
    if (!result.unknownLeaderTopics.isEmpty()) {
        for (String topic : result.unknownLeaderTopics)
            this.metadata.add(topic);
        this.metadata.requestUpdate();
    }
    // Remove nodes that are not ready to receive requests
    Iterator<Node> iter = result.readyNodes.iterator();
    long notReadyTimeout = Long.MAX_VALUE;
    while (iter.hasNext()) {
        Node node = iter.next();
        if (!this.client.ready(node, now)) {
            iter.remove();
            notReadyTimeout = Math.min(notReadyTimeout, this.client.pollDelayMs(node, now));
        }
    }
    // Create the content of the produce requests
    Map<Integer, List<ProducerBatch>> batches = this.accumulator.drain(cluster, result.readyNodes, this.maxRequestSize, now);
    addToInflightBatches(batches);
    if (guaranteeMessageOrder) {
        // Mute all the partitions drained
        for (List<ProducerBatch> batchList : batches.values()) {
            for (ProducerBatch batch : batchList)
                this.accumulator.mutePartition(batch.topicPartition);
        }
    }
    // Reset the accumulator's next batch expiry time
    accumulator.resetNextBatchExpiryTime();
    List<ProducerBatch> expiredInflightBatches = getExpiredInflightBatches(now);
    List<ProducerBatch> expiredBatches = this.accumulator.expiredBatches(now);
    expiredBatches.addAll(expiredInflightBatches);
    if (!expiredBatches.isEmpty())
        log.trace("Expired {} batches in accumulator", expiredBatches.size());
    for (ProducerBatch expiredBatch : expiredBatches) {
        String errorMessage = "Expiring " + expiredBatch.recordCount + " record(s) for " + expiredBatch.topicPartition
                + ":" + (now - expiredBatch.createdMs) + " ms has passed since batch creation";
        failBatch(expiredBatch, -1, NO_TIMESTAMP, new TimeoutException(errorMessage), false);
        if (transactionManager != null && expiredBatch.inRetry()) {
            // This ensures that no new batches are drained until the current in flight batches are fully resolved.
            transactionManager.markSequenceUnresolved(expiredBatch.topicPartition);
        }
    }
    sensors.updateProduceRequestMetrics(batches);
    long pollTimeout = Math.min(result.nextReadyCheckDelayMs, notReadyTimeout);
    pollTimeout = Math.min(pollTimeout, this.accumulator.nextExpiryTimeMs() - now);
    pollTimeout = Math.max(pollTimeout, 0);
    if (!result.readyNodes.isEmpty()) {
        pollTimeout = 0;
    }
    // Send the ProduceRequests
    sendProduceRequests(batches, now);
    return pollTimeout;
}

private void sendProduceRequests(Map<Integer, List<ProducerBatch>> collated, long now) {
    for (Map.Entry<Integer, List<ProducerBatch>> entry : collated.entrySet())
        sendProduceRequest(now, entry.getKey(), acks, requestTimeoutMs, entry.getValue());
}

private void sendProduceRequest(long now, int destination, short acks, int timeout, List<ProducerBatch> batches) {
    if (batches.isEmpty())
        return;
    Map<TopicPartition, MemoryRecords> produceRecordsByPartition = new HashMap<>(batches.size());
    final Map<TopicPartition, ProducerBatch> recordsByPartition = new HashMap<>(batches.size());
    // Find the minimum magic version used when the record sets were created
    byte minUsedMagic = apiVersions.maxUsableProduceMagic();
    for (ProducerBatch batch : batches) {
        if (batch.magic() < minUsedMagic)
            minUsedMagic = batch.magic();
    }
    for (ProducerBatch batch : batches) {
        TopicPartition tp = batch.topicPartition;
        MemoryRecords records = batch.records();
        if (!records.hasMatchingMagic(minUsedMagic))
            records = batch.records().downConvert(minUsedMagic, 0, time).records();
        produceRecordsByPartition.put(tp, records);
        recordsByPartition.put(tp, batch);
    }
    String transactionalId = null;
    if (transactionManager != null && transactionManager.isTransactional()) {
        transactionalId = transactionManager.transactionalId();
    }
    ProduceRequest.Builder requestBuilder = ProduceRequest.Builder.forMagic(minUsedMagic, acks, timeout,
            produceRecordsByPartition, transactionalId);
    RequestCompletionHandler callback = new RequestCompletionHandler() {
        public void onComplete(ClientResponse response) {
            handleProduceResponse(response, recordsByPartition, time.milliseconds());
        }
    };
    String nodeId = Integer.toString(destination);
    ClientRequest clientRequest = client.newClientRequest(nodeId, requestBuilder, now, acks != 0,
            requestTimeoutMs, callback);
    // Send the request through the network client
    client.send(clientRequest, now);
    log.trace("Sent produce request to {}: {}", nodeId, requestBuilder);
}

Summary

The design of Kafka's send path is remarkably careful, and it is a big part of why Kafka achieves such high performance. The same ideas can be borrowed in real projects to improve the efficiency of the overall system.

References:

   Kafka official documentation

《Apache Kafka实战》

(Please credit the source when reposting.)

 
