
4.2.10 Kafka Source Code Analysis: Reading Environment Setup, Broker Startup Flow, Topic Creation Flow, Producer Flow, Consumer Flow


Contents

4.1 Kafka Source Code Analysis: Setting Up the Source Reading Environment
4.1.1 Installing and Configuring Gradle
4.1.2 Installing and Configuring Scala
4.1.3 IDEA Configuration
4.1.4 Working with the Source Code
4.2 Kafka Source Code Analysis: Broker Startup Flow
4.2.1 Starting Kafka
4.2.2 Reading the kafka.Kafka Source
4.3 Kafka Source Code Analysis: Topic Creation Flow
4.3.1 Topic Creation
4.3.2 Manual Creation
4.3.3 The Topic Command Entry Point
4.3.4 Creating a Topic
4.4 Kafka Source Code Analysis: Producer Flow
4.4.1 Producer Example
4.4.1.1 Synchronous Send
4.4.1.2 Asynchronous Send
4.4.2 KafkaProducer Instantiation
4.4.3 The Message Send Process
4.4.3.1 Interceptors
4.4.3.2 Interceptor Core Logic
4.4.3.3 The Five Steps of Sending
4.4.3.4 The Metadata Update Mechanism
4.5 Kafka Source Code Analysis: Consumer Flow
4.5.1 Consumer Example
4.5.2 KafkaConsumer Instantiation
4.5.3 Subscribing to Topics
4.5.4 The Message Consumption Process
4.5.4.1 poll
4.5.4.2 pollOnce
4.5.5 Automatic Commit
4.5.6 Manual Commit
4.5.6.1 Synchronous Commit
4.5.6.2 Asynchronous Commit


 

4.1 Kafka Source Code Analysis: Setting Up the Source Reading Environment

First, download the source: http://archive.apache.org/dist/kafka/1.0.2/kafka-1.0.2-src.tgz
Gradle 4.8.1 download: https://services.gradle.org/distributions/gradle-4.8.1-bin.zip
Scala 2.12.12 download: https://downloads.lightbend.com/scala/2.12.12/scala-2.12.12.msi

 

4.1.1 Installing and Configuring Gradle

Unpack gradle-4.8.1-bin.zip into a directory.
Configure the environment variables: GRADLE_HOME points to the directory Gradle was unpacked into, and GRADLE_USER_HOME points to the location of the local Gradle repository.

Then add %GRADLE_HOME%\bin to the PATH environment variable.

 

Go into the GRADLE_USER_HOME directory and add an init.gradle file that configures Gradle's repositories.
The content of init.gradle:

allprojects {
    repositories {
        maven { url 'https://maven.aliyun.com/repository/public/' }
        maven { url 'https://maven.aliyun.com/nexus/content/repositories/google' }
        maven { url 'https://maven.aliyun.com/nexus/content/groups/public/' }
        maven { url 'https://maven.aliyun.com/nexus/content/repositories/jcenter' }
        all { ArtifactRepository repo ->
            if (repo instanceof MavenArtifactRepository) {
                def url = repo.url.toString()
                if (url.startsWith('https://repo.maven.apache.org/maven2/')
                        || url.startsWith('https://repo.maven.org/maven2')
                        || url.startsWith('https://repo1.maven.org/maven2')
                        || url.startsWith('https://jcenter.bintray.com/')) {
                    // project.logger.lifecycle "Repository ${repo.url} replaced by $REPOSITORY_URL."
                    remove repo
                }
            }
        }
    }
    buildscript {
        repositories {
            maven { url 'https://maven.aliyun.com/repository/public/' }
            maven { url 'https://maven.aliyun.com/nexus/content/repositories/google' }
            maven { url 'https://maven.aliyun.com/nexus/content/groups/public/' }
            maven { url 'https://maven.aliyun.com/nexus/content/repositories/jcenter' }
            all { ArtifactRepository repo ->
                if (repo instanceof MavenArtifactRepository) {
                    def url = repo.url.toString()
                    if (url.startsWith('https://repo1.maven.org/maven2')
                            || url.startsWith('https://jcenter.bintray.com/')) {
                        // project.logger.lifecycle "Repository ${repo.url} replaced by $REPOSITORY_URL."
                        remove repo
                    }
                }
            }
        }
    }
}

Save and exit, then open cmd and run the gradle command to confirm the setup succeeded.

 

4.1.2 Installing and Configuring Scala

Double-click the installer to install.

 

Add Scala's bin directory to the PATH.

 

Open cmd and type scala to verify the installation.

Type :quit to exit the Scala REPL.

 

 

4.1.3 IDEA Configuration

Install the Scala plugin in IDEA:

 

 

 

4.1.4 Working with the Source Code

Unpack the source archive.

Open cmd, change into the kafka-1.0.2-src directory, and run: gradle
When that finishes, run: gradle idea (do not use the generated gradlew.bat for these steps)
Import the source into IDEA:

Choose Gradle.

 

 

4.2 Kafka Source Code Analysis: Broker Startup Flow

4.2.1 Starting Kafka

Kafka is started with: kafka-server-start.sh /opt/kafka_2.12-1.0.2/config/server.properties
The content of kafka-server-start.sh:

if [ $# -lt 1 ];
then
    echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
    exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"

 

 

4.2.2 Reading the kafka.Kafka Source

def main(args: Array[String]): Unit = {
  try {
    // Read the startup configuration
    val serverProps = getPropsFromArgs(args)
    // Wrap it in a KafkaServerStartable
    val kafkaServerStartable = KafkaServerStartable.fromProps(serverProps)

    // register signal handler to log termination due to SIGTERM, SIGHUP and SIGINT (control-c)
    registerLoggingSignalHandler()

    // attach shutdown handler to catch terminating signals as well as normal termination
    // Add a shutdown hook
    Runtime.getRuntime().addShutdownHook(new Thread("kafka-shutdown-hook") {
      override def run(): Unit = kafkaServerStartable.shutdown()
    })

    // Start the server
    kafkaServerStartable.startup()
    // Wait until shutdown
    kafkaServerStartable.awaitShutdown()
  }
  catch {
    case e: Throwable =>
      fatal(e)
      Exit.exit(1)
  }
  Exit.exit(0)
}

The kafkaServerStartable above wraps a KafkaServer; it is ultimately the KafkaServer that executes startup.

class KafkaServerStartable(val serverConfig: KafkaConfig, reporters: Seq[KafkaMetricsReporter]) extends Logging {
  private val server = new KafkaServer(serverConfig, kafkaMetricsReporters = reporters)

  def this(serverConfig: KafkaConfig) = this(serverConfig, Seq.empty)

  // Start the server
  def startup() {
    try server.startup()
    catch {
      case _: Throwable =>
        // KafkaServer.startup() calls shutdown() in case of exceptions, so we invoke `exit` to set the status code
        fatal("Exiting Kafka.")
        Exit.exit(1)
    }
  }

  // Shut the server down
  def shutdown() {
    try server.shutdown()
    catch {
      case _: Throwable =>
        fatal("Halting Kafka.")
        Exit.halt(1)
    }
  }

  def setServerState(newState: Byte) {
    server.brokerState.newState(newState)
  }

  def awaitShutdown(): Unit = server.awaitShutdown()
}

Now look at KafkaServer's startup method. It starts many components that will come up again later; comments have been added in the code.

def startup() {
  try {
    info("starting")
    // Is the server shutting down?
    if (isShuttingDown.get)
      throw new IllegalStateException("Kafka server is still shutting down, cannot re-start!")
    // Has startup already completed?
    if (startupComplete.get)
      return
    // Begin startup and record that startup is in progress
    val canStartup = isStartingUp.compareAndSet(false, true)
    if (canStartup) {
      // Set the broker state to Starting
      brokerState.newState(Starting)
      /* start scheduler */
      // Start the scheduler
      kafkaScheduler.startup()
      /* setup zookeeper */
      // Initialize the zookeeper connection
      zkUtils = initZk()
      /* Get or create cluster_id */
      // Generate the cluster id in zookeeper
      _clusterId = getOrGenerateClusterId(zkUtils)
      info(s"Cluster ID = $clusterId")
      /* generate brokerId */
      // Read the brokerId from the configuration
      val (brokerId, initialOfflineDirs) = getBrokerIdAndOfflineDirs
      config.brokerId = brokerId
      // Logging context
      logContext = new LogContext(s"[KafkaServer id=${config.brokerId}] ")
      this.logIdent = logContext.logPrefix
      /* create and configure metrics */
      // Instantiate the MetricsReporter implementations named in the configuration
      val reporters = config.getConfiguredInstances(KafkaConfig.MetricReporterClassesProp, classOf[MetricsReporter],
        Map[String, AnyRef](KafkaConfig.BrokerIdProp -> (config.brokerId.toString)).asJava)
      // A JMX reporter is added by default
      reporters.add(new JmxReporter(jmxPrefix))
      val metricConfig = KafkaServer.metricConfig(config)
      // Create the metrics object
      metrics = new Metrics(metricConfig, reporters, time, true)
      /* register broker metrics */
      _brokerTopicStats = new BrokerTopicStats
      // Initialize the quota managers, which can limit the produce or consume rate of each producer or consumer
      quotaManagers = QuotaFactory.instantiate(config, metrics, time, threadNamePrefix.getOrElse(""))
      // Register cluster listeners
      notifyClusterListeners(kafkaMetricsReporters ++ reporters.asScala)
      logDirFailureChannel = new LogDirFailureChannel(config.logDirs.size)
      // Create the log manager; at creation it checks for a .kafka_cleanshutdown file in the log directories,
      // and if it is missing the broker enters the RecoveringFromUncleanShutdown state
      /* start log manager */
      logManager = LogManager(config, initialOfflineDirs, zkUtils, brokerState, kafkaScheduler, time, brokerTopicStats, logDirFailureChannel)
      logManager.startup()
      // Create the metadata cache
      metadataCache = new MetadataCache(config.brokerId)
      // Create the credential provider
      credentialProvider = new CredentialProvider(config.saslEnabledMechanisms)
      // Create and start the socket server acceptor threads so that the bound port is known.
      // Delay starting processors until the end of the initialization sequence to ensure
      // that credentials have been loaded before processing authentications.
      // Create the SocketServer and start it; once started it begins accepting requests
      socketServer = new SocketServer(config, metrics, time, credentialProvider)
      socketServer.startup(startupProcessors = false)
      // Create and start the replica manager
      /* start replica manager */
      replicaManager = createReplicaManager(isShuttingDown)
      replicaManager.startup()
      // Create and start the Kafka controller; once started, the broker tries to create a zk node and compete to become controller
      /* start kafka controller */
      kafkaController = new KafkaController(config, zkUtils, time, metrics, threadNamePrefix)
      kafkaController.startup()
      // Create the admin manager
      adminManager = new AdminManager(config, metrics, metadataCache, zkUtils)
      // Create and start the group coordinator
      /* start group coordinator */
      // Hardcode Time.SYSTEM for now as some Streams tests fail otherwise, it would be good to fix the underlying issue
      groupCoordinator = GroupCoordinator(config, zkUtils, replicaManager, Time.SYSTEM)
      groupCoordinator.startup()
      // Start the transaction coordinator, with a separate background scheduler for transaction expiration and log loading
      /* start transaction coordinator, with a separate background thread scheduler for transaction expiration and log loading */
      // Hardcode Time.SYSTEM for now as some Streams tests fail otherwise, it would be good to fix the underlying issue
      transactionCoordinator = TransactionCoordinator(config, replicaManager, new KafkaScheduler(threads = 1, threadNamePrefix = "transaction-log-manager-"), zkUtils, metrics, metadataCache, Time.SYSTEM)
      transactionCoordinator.startup()
      // Build the authorizer
      /* Get the authorizer and initialize it if one is specified.*/
      authorizer = Option(config.authorizerClassName).filter(_.nonEmpty).map { authorizerClassName =>
        val authZ = CoreUtils.createObject[Authorizer](authorizerClassName)
        authZ.configure(config.originals())
        authZ
      }
      // Build the API layer, which handles the different request types
      /* start processing requests */
      apis = new KafkaApis(socketServer.requestChannel, replicaManager, adminManager, groupCoordinator, transactionCoordinator,
        kafkaController, zkUtils, config.brokerId, config, metadataCache, metrics, authorizer, quotaManagers,
        brokerTopicStats, clusterId, time)
      // Request handler pool
      requestHandlerPool = new KafkaRequestHandlerPool(config.brokerId, socketServer.requestChannel, apis, time,
        config.numIoThreads)
      Mx4jLoader.maybeLoad()
      // Handlers for the dynamic configuration types
      /* start dynamic config manager */
      dynamicConfigHandlers = Map[String, ConfigHandler](ConfigType.Topic -> new TopicConfigHandler(logManager, config, quotaManagers),
        ConfigType.Client -> new ClientIdConfigHandler(quotaManagers),
        ConfigType.User -> new UserConfigHandler(quotaManagers, credentialProvider),
        ConfigType.Broker -> new BrokerConfigHandler(config, quotaManagers))
      // Initialize and start the dynamic config manager
      // Create the config manager. start listening to notifications
      dynamicConfigManager = new DynamicConfigManager(zkUtils, dynamicConfigHandlers)
      dynamicConfigManager.startup()
      // Notify listeners
      /* tell everyone we are alive */
      val listeners = config.advertisedListeners.map { endpoint =>
        if (endpoint.port == 0)
          endpoint.copy(port = socketServer.boundPort(endpoint.listenerName))
        else
          endpoint
      }
      // Kafka health check component
      kafkaHealthcheck = new KafkaHealthcheck(config.brokerId, listeners, zkUtils, config.rack,
        config.interBrokerProtocolVersion)
      kafkaHealthcheck.startup()
      // Checkpoint the broker id
      // Now that the broker id is successfully registered via KafkaHealthcheck, checkpoint it
      checkpointBrokerId(config.brokerId)
      // Start the processors and switch the broker state to RunningAsBroker
      socketServer.startProcessors()
      brokerState.newState(RunningAsBroker)
      shutdownLatch = new CountDownLatch(1)
      startupComplete.set(true)
      isStartingUp.set(false)
      AppInfoParser.registerAppInfo(jmxPrefix, config.brokerId.toString, metrics)
      info("started")
    }
  }
  catch {
    case e: Throwable =>
      fatal("Fatal error during KafkaServer startup. Prepare to shutdown", e)
      isStartingUp.set(false)
      shutdown()
      throw e
  }
}

 

 

4.3 Kafka Source Code Analysis: Topic Creation Flow

4.3.1 Topic Creation

There are two ways to create a topic: automatically and manually. With auto.create.topics.enable=true set in server.properties, Kafka automatically creates a topic with the default settings whenever it finds that the topic does not exist. Automatic creation is triggered in two cases:

1. A Producer writes messages to a topic that does not exist
2. A Consumer reads messages from a topic that does not exist

 

4.3.2 Manual Creation

When auto.create.topics.enable=false, the topic must be created manually or sending messages to it will fail. A topic is created manually like this:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 10 --topic kafka_test

--replication-factor: number of replicas
--partitions: number of partitions
--topic: topic name

 

 

4.3.3 The Topic Command Entry Point

Look at the script kafka-topics.sh:

exec $(dirname $0)/kafka-run-class.sh kafka.admin.TopicCommand "$@"

It ultimately invokes the TopicCommand class. The command first checks that arguments were given and that exactly one of create, list, alter, describe and delete is present, validates the arguments, and creates a ZooKeeper connection; if create is among the arguments it starts creating the topic, and the other actions are handled in the same way.

def main(args: Array[String]): Unit = {
  // Parse the arguments
  val opts = new TopicCommandOptions(args)
  // Check that arguments were given
  if (args.length == 0)
    CommandLineUtils.printUsageAndDie(opts.parser, "Create, delete, describe, or change a topic.")

  // Exactly one of create, list, alter, describe, delete is allowed
  // should have exactly one action
  val actions = Seq(opts.createOpt, opts.listOpt, opts.alterOpt, opts.describeOpt, opts.deleteOpt).count(opts.options.has _)
  if (actions != 1)
    CommandLineUtils.printUsageAndDie(opts.parser, "Command must include exactly one action: --list, --describe, --create, --alter or --delete")

  // Validate the arguments
  opts.checkArgs()

  // Initialize the zookeeper connection
  val zkUtils = ZkUtils(opts.options.valueOf(opts.zkConnectOpt),
                        30000,
                        30000,
                        JaasUtils.isZkSecurityEnabled())
  var exitCode = 0
  try {
    if (opts.options.has(opts.createOpt))
      // Create a topic
      createTopic(zkUtils, opts)
    else if (opts.options.has(opts.alterOpt))
      // Alter a topic
      alterTopic(zkUtils, opts)
    else if (opts.options.has(opts.listOpt))
      // List all topics: bin/kafka-topics.sh --list --zookeeper localhost:2181
      listTopics(zkUtils, opts)
    else if (opts.options.has(opts.describeOpt))
      // Describe a topic: bin/kafka-topics.sh --describe --zookeeper localhost:2181
      describeTopic(zkUtils, opts)
    else if (opts.options.has(opts.deleteOpt))
      // Delete a topic
      deleteTopic(zkUtils, opts)
  } catch {
    case e: Throwable =>
      println("Error while executing topic command : " + e.getMessage)
      error(Utils.stackTrace(e))
      exitCode = 1
  } finally {
    zkUtils.close()
    Exit.exit(exitCode)
  }
}

 

 

4.3.4 Creating a Topic

Now look at how createTopic executes:

def createTopic(zkUtils: ZkUtils, opts: TopicCommandOptions) {
  // The topic name given on the command line
  val topic = opts.options.valueOf(opts.topicOpt)
  // Per-topic configs passed for this topic, e.g. --config max.message.bytes=1048576
  val configs = parseTopicConfigsToBeAdded(opts)
  // Whether --if-not-exists was given
  val ifNotExists = opts.options.has(opts.ifNotExistsOpt)
  if (Topic.hasCollisionChars(topic))
    println("WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.")
  try {
    // Was an explicit replica assignment given?
    if (opts.options.has(opts.replicaAssignmentOpt)) {
      // If the client specified the replica assignment of the topic's partitions, the topic metadata is
      // persisted straight to zk: the topic's properties go under /config/topics/{topic},
      // and the topic's PartitionAssignment goes under /brokers/topics/{topic}
      val assignment = parseReplicaAssignment(opts.options.valueOf(opts.replicaAssignmentOpt))
      AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkUtils, topic, assignment, configs, update = false)
    } else {
      // Otherwise the topic's PartitionAssignment has to be generated automatically
      // Check the required arguments: parser, options, partition count, replication factor
      CommandLineUtils.checkRequiredArgs(opts.parser, opts.options, opts.partitionsOpt, opts.replicationFactorOpt)
      // Number of partitions
      val partitions = opts.options.valueOf(opts.partitionsOpt).intValue
      // Replication factor
      val replicas = opts.options.valueOf(opts.replicationFactorOpt).intValue
      // Since 0.10.x Kafka supports broker rack information; if racks are configured, replica assignment
      // tries as far as possible to spread a partition's replicas across different racks.
      // The rack is configured with the broker.rack parameter in config/server.properties
      val rackAwareMode = if (opts.options.has(opts.disableRackAware)) RackAwareMode.Disabled
      else RackAwareMode.Enforced
      // Create the topic
      AdminUtils.createTopic(zkUtils, topic, partitions, replicas, configs, rackAwareMode)
    }
    println("Created topic \"%s\".".format(topic))
  } catch {
    case e: TopicExistsException => if (!ifNotExists) throw e
  }
}

 

1. If the client specified the replica assignment of the topic's partitions, all of the topic's metadata is written directly to ZooKeeper: the topic's properties go under /config/topics/{topic} and the topic's PartitionAssignment goes under /brokers/topics/{topic}.
2. Otherwise the topic's PartitionAssignment is generated automatically from the partition count, the replication factor and whether rack information is specified.

3. Next, look at the AdminUtils.createTopic method:

def createTopic(zkUtils: ZkUtils,
                topic: String,
                partitions: Int,
                replicationFactor: Int,
                topicConfig: Properties = new Properties,
                rackAwareMode: RackAwareMode = RackAwareMode.Enforced) {
  // Get the brokerId and rack information of every broker in the cluster, used for the assignment below
  val brokerMetadatas = getBrokerMetadatas(zkUtils, rackAwareMode)
  // Assign the partition replicas to brokers according to the assignment strategy
  val replicaAssignment = AdminUtils.assignReplicasToBrokers(brokerMetadatas, partitions, replicationFactor)
  // Create or update the topic partition assignment path in zookeeper
  AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkUtils, topic, replicaAssignment, topicConfig)
  // At this point topic creation is done
}

4. Next, look at the AdminUtils.assignReplicasToBrokers method:

def assignReplicasToBrokers(brokerMetadatas: Seq[BrokerMetadata],
                            nPartitions: Int,
                            replicationFactor: Int,
                            fixedStartIndex: Int = -1,
                            startPartitionId: Int = -1): Map[Int, Seq[Int]] = {
  if (nPartitions <= 0)
    // The number of partitions must be greater than 0
    throw new InvalidPartitionsException("Number of partitions must be larger than 0.")
  if (replicationFactor <= 0)
    // The replication factor must be greater than 0
    throw new InvalidReplicationFactorException("Replication factor must be larger than 0.")
  if (replicationFactor > brokerMetadatas.size)
    // The replication factor cannot exceed the number of brokers
    throw new InvalidReplicationFactorException(s"Replication factor: $replicationFactor larger than available brokers: ${brokerMetadatas.size}.")
  if (brokerMetadatas.forall(_.rack.isEmpty))
    // No broker has rack information
    assignReplicasToBrokersRackUnaware(nPartitions, replicationFactor, brokerMetadatas.map(_.id), fixedStartIndex,
      startPartitionId)
  else {
    // Rack information is present; this case is a bit more involved
    if (brokerMetadatas.exists(_.rack.isEmpty))
      throw new AdminOperationException("Not all brokers have rack information for replica rack aware assignment.")
    assignReplicasToBrokersRackAware(nPartitions, replicationFactor, brokerMetadatas, fixedStartIndex,
      startPartitionId)
  }
}

 

1. Without rack information

private def assignReplicasToBrokersRackUnaware(nPartitions: Int,
                                               replicationFactor: Int,
                                               brokerList: Seq[Int],
                                               fixedStartIndex: Int,
                                               startPartitionId: Int): Map[Int, Seq[Int]] = {
  val ret = mutable.Map[Int, Seq[Int]]()
  val brokerArray = brokerList.toArray
  val startIndex = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(brokerArray.length)
  var currentPartitionId = math.max(0, startPartitionId)
  var nextReplicaShift = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(brokerArray.length)
  for (_ <- 0 until nPartitions) {
    if (currentPartitionId > 0 && (currentPartitionId % brokerArray.length == 0))
      nextReplicaShift += 1
    val firstReplicaIndex = (currentPartitionId + startIndex) % brokerArray.length
    val replicaBuffer = mutable.ArrayBuffer(brokerArray(firstReplicaIndex))
    for (j <- 0 until replicationFactor - 1)
      replicaBuffer += brokerArray(replicaIndex(firstReplicaIndex, nextReplicaShift, j, brokerArray.length))
    ret.put(currentPartitionId, replicaBuffer)
    currentPartitionId += 1
  }
  ret
}

For each partition, replicationFactor brokerIds are selected from brokerArray (the list of broker ids) and assigned to that partition.

A mutable Map is created to hold the result the method returns, i.e. the mapping from partition to its assigned replicas. Because fixedStartIndex is -1, startIndex is a random number used to compute the broker at which assignment starts; and because startPartitionId is -1, currentPartitionId starts at 0, so by default topic creation always assigns partitions in order starting from partition 0.

nextReplicaShift is the shift of the next replica assignment relative to the previous one, which is easier to see with an example. Suppose the cluster has 3 broker nodes (the brokerArray in the code) and a topic is created with 3 replicas and 6 partitions. Assignment starts from partitionId 0. Suppose the randomly computed (rand.nextInt(brokerArray.length)) nextReplicaShift is 2 and the randomly computed startIndex is 2. Then the position (index into brokerArray) of partition 0's first replica is firstReplicaIndex = (currentPartitionId + startIndex) % brokerArray.length = (0 + 2) % 3 = 2, and the position of the second replica is replicaIndex(firstReplicaIndex, nextReplicaShift, j, brokerArray.length) = replicaIndex(2, 2, 0, 3) = ?

Continuing: replicaIndex(2, 2, 0, 3) = (2 + (1 + (2 + 0) % (3 - 1))) % 3 = 0. The next replica's position is replicaIndex(2, 2, 1, 3) = (2 + (1 + (2 + 1) % (3 - 1))) % 3 = 1. So the replica positions for partition 0 are [2, 0, 1]. If brokerArray happens to be numbered from 0 with no gaps, i.e. brokerArray is [0, 1, 2], then partition 0's replica assignment is [2, 0, 1]. If the broker ids do not start at zero or are not consecutive (some of the cluster's earlier brokers may have gone offline), say brokerArray is [2, 5, 8], then partition 0's assignment is [8, 2, 5]. For simplicity, assume brokerArray is [0, 1, 2].

The same computation applies to the next partition, partitionId 1. nextReplicaShift is still 2, since the condition for incrementing it has not been met. This partition's firstReplicaIndex = (1 + 2) % 3 = 0, the second replica's position is replicaIndex(0, 2, 0, 3) = (0 + (1 + (2 + 0) % (3 - 1))) % 3 = 1, and the third replica's position is replicaIndex(0, 2, 1, 3) = 2, so partition 1's assignment is [0, 1, 2]. The sketch below reproduces this arithmetic.
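To make the arithmetic above easy to verify, here is a minimal, self-contained Java sketch. The replicaIndex formula is written from the expansion used in the worked example (it is not copied from the Kafka source), and the main method simply reproduces the numbers for partitions 0 and 1 with startIndex = 2 and nextReplicaShift = 2.

public class ReplicaIndexDemo {
    // Same formula as the expansion above:
    // (firstReplicaIndex + (1 + (secondReplicaShift + replicaPos) % (nBrokers - 1))) % nBrokers
    static int replicaIndex(int firstReplicaIndex, int secondReplicaShift, int replicaPos, int nBrokers) {
        int shift = 1 + (secondReplicaShift + replicaPos) % (nBrokers - 1);
        return (firstReplicaIndex + shift) % nBrokers;
    }

    public static void main(String[] args) {
        int nBrokers = 3, replicationFactor = 3, startIndex = 2, nextReplicaShift = 2;
        for (int partition = 0; partition <= 1; partition++) {
            int firstReplicaIndex = (partition + startIndex) % nBrokers;
            StringBuilder assignment = new StringBuilder("[" + firstReplicaIndex);
            for (int j = 0; j < replicationFactor - 1; j++)
                assignment.append(", ").append(replicaIndex(firstReplicaIndex, nextReplicaShift, j, nBrokers));
            // Prints [2, 0, 1] for partition 0 and [0, 1, 2] for partition 1
            System.out.println("partition " + partition + " -> " + assignment + "]");
        }
    }
}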

 

2. With rack information

private def assignReplicasToBrokersRackAware(nPartitions: Int,
                                             replicationFactor: Int,
                                             brokerMetadatas: Seq[BrokerMetadata],
                                             fixedStartIndex: Int,
                                             startPartitionId: Int): Map[Int, Seq[Int]] = {
  val brokerRackMap = brokerMetadatas.collect { case BrokerMetadata(id, Some(rack)) => id -> rack }.toMap
  val numRacks = brokerRackMap.values.toSet.size
  val arrangedBrokerList = getRackAlternatedBrokerList(brokerRackMap)
  val numBrokers = arrangedBrokerList.size
  val ret = mutable.Map[Int, Seq[Int]]()
  val startIndex = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(arrangedBrokerList.size)
  var currentPartitionId = math.max(0, startPartitionId)
  var nextReplicaShift = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(arrangedBrokerList.size)
  for (_ <- 0 until nPartitions) {
    if (currentPartitionId > 0 && (currentPartitionId % arrangedBrokerList.size == 0))
      nextReplicaShift += 1
    val firstReplicaIndex = (currentPartitionId + startIndex) % arrangedBrokerList.size
    val leader = arrangedBrokerList(firstReplicaIndex)
    val replicaBuffer = mutable.ArrayBuffer(leader)
    val racksWithReplicas = mutable.Set(brokerRackMap(leader))
    val brokersWithReplicas = mutable.Set(leader)
    var k = 0
    for (_ <- 0 until replicationFactor - 1) {
      var done = false
      while (!done) {
        val broker = arrangedBrokerList(replicaIndex(firstReplicaIndex, nextReplicaShift * numRacks, k, arrangedBrokerList.size))
        val rack = brokerRackMap(broker)
        // Skip this broker if
        // 1. there is already a broker in the same rack that has assigned a replica AND there is one or more racks
        //    that do not have any replica, or
        // 2. the broker has already assigned a replica AND there is one or more brokers that do not have replica assigned
        if ((!racksWithReplicas.contains(rack) || racksWithReplicas.size == numRacks)
            && (!brokersWithReplicas.contains(broker) || brokersWithReplicas.size == numBrokers)) {
          replicaBuffer += broker
          racksWithReplicas += rack
          brokersWithReplicas += broker
          done = true
        }
        k += 1
      }
    }
    ret.put(currentPartitionId, replicaBuffer)
    currentPartitionId += 1
  }
  ret
}

1. assignReplicasToBrokersRackUnaware runs only when none of the brokers is configured with rack information, while assignReplicasToBrokersRackAware requires all brokers to have rack information. If some brokers have rack information and others do not, an AdminOperationException is thrown; to create the topic anyway in that case, add the --disable-rack-aware option.

 

2. The first step builds brokerRackMap, the mapping from brokerId to rack, and then getRackAlternatedBrokerList() processes it into a list of broker ids. For example, suppose there are three racks rack1, rack2 and rack3 and nine brokers, mapped as follows:

rack1: 0, 1, 2
rack2: 3, 4, 5
rack3: 6, 7, 8

After getRackAlternatedBrokerList() this becomes the list [0, 3, 6, 1, 4, 7, 2, 5, 8], which is clearly produced by round-robining over the racks (a minimal sketch of this alternation follows the two conditions below). You can then simply treat this list as the list of broker ids, corresponding to brokerArray in assignReplicasToBrokersRackUnaware(), except that it already encodes a simple rack ordering. The remaining steps resemble the rack-unaware algorithm, with the same startIndex, currentPartitionId and nextReplicaShift concepts, looping over the partitions to assign replicas. Apart from the first replica, the others are also chosen via replicaIndex, but unlike assignReplicasToBrokersRackUnaware() the chosen broker is not simply appended to the partition's replica list: it must first pass a filter, and a broker satisfying either of the following conditions cannot be added to the partition's replica list:

1. The broker's rack already contains a broker holding a replica of this partition, while one or more other racks have no replica of it yet. This corresponds to (!racksWithReplicas.contains(rack) || racksWithReplicas.size == numRacks) in the code.

2. The broker already holds a replica of this partition, while one or more other brokers do not hold one yet. This corresponds to (!brokersWithReplicas.contains(broker) || brokersWithReplicas.size == numBrokers) in the code.
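As a rough illustration of the alternation only (not of the filtering logic above, and not the actual getRackAlternatedBrokerList implementation), the following self-contained Java sketch groups the hypothetical brokers 0..8 by rack and then takes one broker from each rack in turn, which yields the [0, 3, 6, 1, 4, 7, 2, 5, 8] ordering described above.

import java.util.*;

public class RackAlternationDemo {
    // Round-robin over racks: take the first remaining broker of each rack in turn.
    static List<Integer> rackAlternatedBrokerList(Map<Integer, String> brokerRack) {
        // Group broker ids by rack, preserving insertion order
        Map<String, Deque<Integer>> byRack = new LinkedHashMap<>();
        brokerRack.forEach((broker, rack) ->
                byRack.computeIfAbsent(rack, r -> new ArrayDeque<>()).add(broker));
        List<Integer> result = new ArrayList<>();
        while (result.size() < brokerRack.size())
            for (Deque<Integer> brokers : byRack.values())
                if (!brokers.isEmpty())
                    result.add(brokers.poll());
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical cluster from the example: rack1 -> 0,1,2; rack2 -> 3,4,5; rack3 -> 6,7,8
        Map<Integer, String> brokerRack = new LinkedHashMap<>();
        for (int id = 0; id < 9; id++)
            brokerRack.put(id, "rack" + (id / 3 + 1));
        // Prints [0, 3, 6, 1, 4, 7, 2, 5, 8]
        System.out.println(rackAlternatedBrokerList(brokerRack));
    }
}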

3. Whether or not rack information is used, the upper-level call AdminUtils.assignReplicasToBrokers() ends up with a replica assignment of type Map[Int, Seq[Int]], which is written as the data of the ZooKeeper node /brokers/topics/{topic-name}. That completes topic creation. Some readers may wonder why this whole discussion (including the previous part) is only about computing a replica assignment and never actually creates the replicas; that observation is correct. Creating a topic via the kafka-topics.sh script only produces an assignment plan and creates the corresponding ZooKeeper nodes. The broker side watches the /brokers/topics/ path for new nodes; when a new node appears it is notified and creates the replicas according to the node's data, i.e. the topic's partition replica assignment.

 

 

4.4 Kafka Source Code Analysis: Producer Flow

4.4.1 Producer Example

First, a piece of code demonstrating how KafkaProducer is used: the example sends messages to Kafka with a KafkaProducer. The configuration used by the KafkaProducer is first written into a Properties object, with the meaning of each setting explained in the comments; this Properties object is then passed to the KafkaProducer constructor, and finally messages are sent with the send method. The code covers both synchronous and asynchronous sending.
public static void main(String[] args) throws ExecutionException, InterruptedException {
    Properties props = new Properties();
    // Client id
    props.put("client.id", "KafkaProducerDemo");
    // Kafka bootstrap servers, in the form host1:port1,host2:port2,...; there is no need to list the whole
    // cluster, Kafka discovers the other brokers from the ones given (list several in case one is down)
    props.put("bootstrap.servers", "localhost:9092");
    // Acknowledgement mode:
    // 0: the producer does not wait for any acknowledgement from the cluster; lowest safety, highest throughput.
    // 1: the producer continues as soon as the leader acknowledges; only the leader is guaranteed to have the message.
    // -1 or all: the producer waits until all ISR followers have replicated from the leader; highest safety, lowest throughput.
    props.put("acks", "all");
    // Number of retries
    props.put("retries", 0);
    // Retry backoff
    props.put("retry.backoff.ms", 100);
    // Batch size
    props.put("batch.size", 16384);
    // How long after a batch is created it must be sent, whether or not it is full
    props.put("linger.ms", 10);
    // Total buffer memory
    props.put("buffer.memory", 33554432);
    // Key serializer
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // Value serializer
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // Topic
    String topic = "lagou_edu";
    Producer<String, String> producer = new KafkaProducer<>(props);
    AtomicInteger count = new AtomicInteger();
    while (true) {
        int num = count.get();
        String key = Integer.toString(num);
        String value = Integer.toString(num);
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, key, value);
        if (num % 2 == 0) {
            // Even numbers: asynchronous send
            // The first argument, record, wraps topic, key and value
            // The second argument is a callback; when the producer receives the ACK from Kafka,
            // the callback's onCompletion method is invoked
            producer.send(record, (recordMetadata, e) -> {
                System.out.println("num:" + num + " topic:" + recordMetadata.topic() + " offset:" + recordMetadata.offset());
            });
        } else {
            // Odd numbers: synchronous send
            // KafkaProducer.send returns a Future<RecordMetadata>; get() blocks the current thread
            // until the Kafka server responds with an ACK
            producer.send(record).get();
        }
        count.incrementAndGet();
        TimeUnit.MILLISECONDS.sleep(100);
    }
}

 

 

4.4.1.1 Synchronous Send

1. KafkaProducer.send returns a Future<RecordMetadata>; calling get() blocks the current thread until the Kafka server responds with an ACK.

producer.send(record).get()

4.4.1.2 Asynchronous Send

1. The first argument, record, wraps the topic, key and value.
2. The second argument is a callback object; when the producer receives the ACK from Kafka, the callback's onCompletion method is invoked.

producer.send(record, (recordMetadata, e) -> {
    System.out.println("num:" + num + " topic:" + recordMetadata.topic() + " offset:" + recordMetadata.offset());
});

 

 

4.4.2 KafkaProducer Instantiation

Having seen the basic usage of KafkaProducer, we can dig into its principles and implementation, starting with the core logic of the constructor.

  1. private KafkaProducer(ProducerConfig config, Serializer<K> keySerializer,
  2. Serializer<V> valueSerializer) {
  3.     try {
  4.       // 获取用户的配置
  5.       Map<String, Object> userProvidedConfigs = config.originals();
  6.       this.producerConfig = config;
  7.       // 系统时间
  8.       this.time = Time.SYSTEM;
  9.       // 获取client.id配置
  10.       String clientId = config.getString(ProducerConfig.CLIENT_ID_CONFIG);
  11.       // 如果client.id为空,设置默认值:producer-num num递增
  12.       if (clientId.length() <= 0)
  13.         clientId = "producer-" + PRODUCER_CLIENT_ID_SEQUENCE.getAndIncrement();
  14.       this.clientId = clientId;
  15.       // 获取事务id,如果没有配置则为null
  16.       String transactionalId = userProvidedConfigs.containsKey(ProducerConfig.TRANSACTIONAL_ID_CONFIG) ?
  17.          (String) userProvidedConfigs.get(ProducerConfig.TRANSACTIONAL_ID_CONFIG) : null;
  18.       LogContext logContext;
  19.       if (transactionalId == null)
  20.         logContext = new LogContext(String.format("[Producer clientId=%s] ", clientId));
  21.       else
  22.         logContext = new LogContext(String.format("[Producer clientId=%s, transactionalId=%s] ", clientId, transactionalId));
  23.       log = logContext.logger(KafkaProducer.class);
  24.       log.trace("Starting the Kafka producer");
  25.       // 创建client-id的监控map
  26.       Map<String, String> metricTags = Collections.singletonMap("client-id", clientId);
  27.       // 设置监控配置,包含样本量、取样时间窗口、记录级别
  28.       MetricConfig metricConfig = new MetricConfig().samples(config.getInt(ProducerConfig.METRICS_NUM_SAMPLES_CONFIG))
  29.          .timeWindow(config.getLong(ProducerConfig.METRICS_SAMPLE_WINDOW_MS_CONFIG), TimeUnit.MILLISECONDS).recordLevel(Sensor.RecordingLevel.forName(config.getString(ProducerConfig.METRICS_RECORDING_LEVEL_CONFIG)))
  30.          .tags(metricTags);
  31.       // 监控数据上报类
  32.       List<MetricsReporter> reporters = config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
  33.           MetricsReporter.class);
  34.       reporters.add(new JmxReporter(JMX_PREFIX));
  35.       this.metrics = new Metrics(metricConfig, reporters, time);
  36.       // 生成生产者监控
  37.       ProducerMetrics metricsRegistry = new ProducerMetrics(this.metrics);
  38.       // 获取用户设置的分区器
  39.       this.partitioner = config.getConfiguredInstance(ProducerConfig.PARTITIONER_CLASS_CONFIG, Partitioner.class);
  40.       // 重试时间 retry.backoff.ms 默认100ms
  41.       long retryBackoffMs = config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG);
  42.       if (keySerializer == null) {
  43.         // 反射生成key序列化方式
  44.         this.keySerializer = ensureExtended(config.getConfiguredInstance(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
  45.             Serializer.class));
  46.         this.keySerializer.configure(config.originals(), true);
  47.      } else {
  48.         config.ignore(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG);
  49.         this.keySerializer = ensureExtended(keySerializer);
  50.      }
  51.       if (valueSerializer == null) {
  52.         // 反射生成value序列化方式
  53.         this.valueSerializer = ensureExtended(config.getConfiguredInstance(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
  54.             Serializer.class));
  55.         this.valueSerializer.configure(config.originals(), false);
  56.      } else {
  57.         config.ignore(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG);
  58.         this.valueSerializer = ensureExtended(valueSerializer);
  59.      }
  60.       // load interceptors and make sure they get clientId
  61.       // 确认client.id添加到用户的配置里面
  62.       userProvidedConfigs.put(ProducerConfig.CLIENT_ID_CONFIG, clientId);
  63.       // 获取用户设置的多个拦截器,为空则不处理
  64.       List<ProducerInterceptor<K, V>> interceptorList = (List) (new ProducerConfig(userProvidedConfigs, false)).getConfiguredInstances(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
  65.           ProducerInterceptor.class);
  66.       this.interceptors = interceptorList.isEmpty() ? null : new ProducerInterceptors<>(interceptorList);
  67.       // 集群资源监听器,在元数据变更时会有通知
  68.       ClusterResourceListeners clusterResourceListeners = configureClusterResourceListeners(keySerializer, valueSerializer, interceptorList, reporters);
  69.       // 生产者每隔一段时间都要去更新一下集群的元数据,默认5分钟
  70.       this.metadata = new Metadata(retryBackoffMs, config.getLong(ProducerConfig.METADATA_MAX_AGE_CONFIG),
  71.           true, true, clusterResourceListeners);
  72.       // 生产者往服务端发送消息的时候,规定一条消息最大多大?
  73.       // 如果你超过了这个规定消息的大小,你的消息就不能发送过去。
  74.       // 默认是1M,这个值偏小,在生产环境中,我们需要修改这个值。
  75.       // 经验值是10M。但是大家也可以根据自己公司的情况来。
  76.       this.maxRequestSize = config.getInt(ProducerConfig.MAX_REQUEST_SIZE_CONFIG);
  77.       //指的是缓存总大小
  78.       //默认值是32M,这个值一般是够用,如果有特殊情况的时候,我们可以去修改这个值。
  79.       this.totalMemorySize = config.getLong(ProducerConfig.BUFFER_MEMORY_CONFIG);
  80.       // kafka是支持压缩数据的,可以设置压缩格式,默认是不压缩,支持gzip、 snappy、lz4
  81.       // 一次发送出去的消息就更多。生产者这儿会消耗更多的cpu.
  82.       this.compressionType = CompressionType.forName(config.getString(ProducerConfig.COMPRESSION_TYPE_CONFIG));
  83.       // 配置控制了KafkaProducer.send()并将KafkaProducer.partitionsFor()被阻塞多长时间,由于缓冲区已满或元数据不可用,这些方法可能会被阻塞止
  84.       this.maxBlockTimeMs = config.getLong(ProducerConfig.MAX_BLOCK_MS_CONFIG);
  85.       // 控制客户端等待请求响应的最长时间。如果在超时过去之前未收到响应,客户端将在必要时重新发送请求,或者如果重试耗尽,请求失败
  86.       this.requestTimeoutMs = config.getInt(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG);
  87.       // 事务管理器
  88.       this.transactionManager = configureTransactionState(config, logContext, log);
  89.       // 重试次数,
  90.       int retries = configureRetries(config, transactionManager != null, log);
  91.       // 使用幂等性,需要将 enable.idempotence 配置项设置为true。并且它对单个分区的发送,一次性最多发送5条
  92. // 正在等待请求结果的请求数
  93.       int maxInflightRequests = configureInflightRequests(config, transactionManager != null);
  94.       // 如果开启了幂等性,但是用户指定的ack不为 -1,则会抛出异常
  95.       short acks = configureAcks(config, transactionManager != null, log);
  96.       this.apiVersions = new ApiVersions();
  97.       // 创建核心组件:记录累加器
  98. // 用户消息记录
  99.       this.accumulator = new RecordAccumulator(logContext,
  100.           config.getInt(ProducerConfig.BATCH_SIZE_CONFIG),
  101.           this.totalMemorySize,
  102.           this.compressionType,
  103.           config.getLong(ProducerConfig.LINGER_MS_CONFIG),
  104.           retryBackoffMs,
  105.           metrics,
  106.           time,
  107.           apiVersions,
  108.           transactionManager);
  109.       // 获取broker地址列表
  110.       List<InetSocketAddress> addresses = ClientUtils.parseAndValidateAddresses(config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
  111.       // 更新元数据
  112.       this.metadata.update(Cluster.bootstrap(addresses), Collections.<String>emptySet(), time.milliseconds());
  113.       // 创建通道,是否需要加密
  114.       ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(config);
  115.       Sensor throttleTimeSensor = Sender.throttleTimeSensor(metricsRegistry.senderMetrics);
  116.       // 初始化了一个重要的管理网路的组件, 真正发送消息的网络客户端
  117.       // connections.max.idle.ms: 默认值是9分钟, 一个网络连接最多空闲多久,超过这个空闲时间,就关闭这个网络连接。
  118.       // max.in.flight.requests.per.connection:默认是5, producer向 broker发送数据的时候,其实是有多个网络连接。每个网络连接可以忍受 producer端发送给 broker 消息然后消息没有响应的个数
  119.       NetworkClient client = new NetworkClient(
  120.           new Selector(config.getLong(ProducerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG),
  121.               this.metrics, time, "producer", channelBuilder, logContext),
  122.           this.metadata,
  123.           clientId,
  124.           maxInflightRequests,
  125.           config.getLong(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG),
  126.           config.getLong(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG),
  127.           config.getInt(ProducerConfig.SEND_BUFFER_CONFIG),
  128.           config.getInt(ProducerConfig.RECEIVE_BUFFER_CONFIG),
  129.           this.requestTimeoutMs,
  130.           time,
  131.           true,
  132.           apiVersions,
  133.           throttleTimeSensor,
  134.           logContext);
  135.       // 发送线程
  136.       this.sender = new Sender(logContext,
  137.           client,
  138.           this.metadata,
  139.           this.accumulator,
  140.           maxInflightRequests == 1,
  141.           config.getInt(ProducerConfig.MAX_REQUEST_SIZE_CONFIG),
  142.           acks,
  143.           retries,
  144.           metricsRegistry.senderMetrics,
  145.           Time.SYSTEM,
  146.           this.requestTimeoutMs,
  147.           config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG),
  148.           this.transactionManager,
  149.           apiVersions);
  150.       // 线程名称
  151.       String ioThreadName = NETWORK_THREAD_PREFIX + " | " + clientId;
  152.       // 启动守护线程
  153.       this.ioThread = new KafkaThread(ioThreadName, this.sender, true);
  154. // 启动发送消息的线程
  155.       this.ioThread.start();
  156.       this.errors = this.metrics.sensor("errors");
  157.       // 把用户配置的参数,但是没有用到的打印出来
  158.       config.logUnused();
  159.       AppInfoParser.registerAppInfo(JMX_PREFIX, clientId, metrics);
  160.       log.debug("Kafka producer started");
  161.    } catch (Throwable t) {
  162.       // call close methods if internal objects are already constructed this is to prevent resource leak. see KAFKA-2121
  163.       close(0, TimeUnit.MILLISECONDS, true);
  164.       // now propagate the exception
  165.       throw new KafkaException("Failed to construct kafka producer", t);
  166.    }
  167.  }

 

4.4.3 The Message Send Process

The actual sending of a Kafka message starts at the send method:

@Override
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // intercept the record, which can be potentially modified; this method does not throw exceptions
    // If interceptors are configured, invoke them, in the order in which they were configured
    ProducerRecord<K, V> interceptedRecord = this.interceptors == null ? record : this.interceptors.onSend(record);
    // Delegate to doSend
    return doSend(interceptedRecord, callback);
}

 

4.4.3.1 Interceptors

The method first goes through the interceptor collection ProducerInterceptors, whose onSend method iterates over the onSend methods of the individual interceptors. Interceptors exist to transform the record before it is sent; Kafka itself ships no default interceptor implementation. To use interceptors you must implement the ProducerInterceptor interface yourself.

public ProducerRecord<K, V> onSend(ProducerRecord<K, V> record) {
    ProducerRecord<K, V> interceptRecord = record;
    // Iterate over all interceptors in order; exceptions are only logged, never propagated
    for (ProducerInterceptor<K, V> interceptor : this.interceptors) {
        try {
            interceptRecord = interceptor.onSend(interceptRecord);
        } catch (Exception e) {
            // do not propagate interceptor exception, log and continue calling other interceptors
            // be careful not to throw exception from here
            if (record != null)
                log.warn("Error executing interceptor onSend callback for topic: {}, partition: {}", record.topic(), record.partition(), e);
            else
                log.warn("Error executing interceptor onSend callback", e);
        }
    }
    return interceptRecord;
}

 

4.4.3.2 Interceptor Core Logic

The ProducerInterceptor interface has three methods (a minimal implementation is sketched after this list):

1. onSend(ProducerRecord): wrapped inside KafkaProducer.send, so it runs on the user's main thread. The producer guarantees it is called before the message is serialized and before its partition is computed. The user may manipulate the message freely here, but should avoid changing the topic or partition it belongs to, otherwise the computation of the target partition is affected.

2. onAcknowledgement(RecordMetadata, Exception): called when the message has been acknowledged or when sending fails, and normally before the producer's own callback logic runs. It executes on the producer's I/O thread, so do not put heavy logic here or the producer's send throughput will suffer.

3. close: closes the interceptor and is mainly used for resource cleanup.

4. Interceptors may be run from multiple threads, so the implementation must take care of thread safety itself. If several interceptors are configured, the producer calls them in the configured order, and it merely catches any exception an interceptor throws and writes it to the error log instead of propagating it upwards.
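For reference, here is a minimal sketch of a custom interceptor; the class name and the log messages are made up for illustration. It implements the three methods above plus configure() from the Configurable interface that ProducerInterceptor extends, and it is enabled by adding the class name to the interceptor.classes producer configuration.

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class LoggingProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Runs on the user thread before serialization and partitioning;
        // return the (possibly modified) record, but avoid changing its topic/partition.
        System.out.println("about to send: " + record.value());
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Runs on the producer I/O thread; keep it lightweight.
        if (exception != null)
            System.err.println("send failed: " + exception.getMessage());
    }

    @Override
    public void close() {
        // Release any resources held by the interceptor.
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // Called once with the producer configuration.
    }
}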

 

4.4.3.3 The Five Steps of Sending

Now look closely at how the doSend method runs:

private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
    // The topic-partition the record will go to
    TopicPartition tp = null;
    try {
        // first make sure the metadata for the topic is available
        ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
        long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
        Cluster cluster = clusterAndWaitTime.cluster;
        // Serialize the record's key and value
        byte[] serializedKey;
        try {
            serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in key.serializer", cce);
        }
        byte[] serializedValue;
        try {
            serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in value.serializer", cce);
        }
        // Determine the partition this record will be sent to
        int partition = partition(record, serializedKey, serializedValue, cluster);
        // The topic and partition the message goes to
        tp = new TopicPartition(record.topic(), partition);
        // Make the headers read-only
        setReadOnly(record.headers());
        Header[] headers = record.headers().toArray();
        int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
                compressionType, serializedKey, serializedValue, headers);
        ensureValidRecordSize(serializedSize);
        long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
        log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
        // producer callback will make sure to call both 'callback' and interceptor callback
        // Once the broker acknowledges the message, the interceptors are invoked in their configured order
        Callback interceptCallback = this.interceptors == null ? callback : new InterceptorCallback<>(callback, this.interceptors, tp);
        // For transactional sends, register the partition with the transaction manager
        if (transactionManager != null && transactionManager.isTransactional())
            transactionManager.maybeAddPartitionToTransaction(tp);
        // Append the record to the accumulator; the data is buffered first
        RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                serializedValue, headers, interceptCallback, remainingWaitMs);
        // If, after appending, the RecordBatch has reached batch.size (or the batch has no room for the
        // next record), wake up the sender thread to send the data.
        // Likewise, if a new batch had to be created (e.g. linger.ms elapsed), wake up the sender thread.
        if (result.batchIsFull || result.newBatchCreated) {
            log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
            this.sender.wakeup();
        }
        // Return the future
        return result.future;
        // handling exceptions and record the errors;
        // for API exceptions return them in the future,
        // for other exceptions throw directly
    } catch (ApiException e) {
        log.debug("Exception occurred during message send:", e);
        if (callback != null)
            callback.onCompletion(null, e);
        this.errors.record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        return new FutureFailure(e);
    } catch (InterruptedException e) {
        this.errors.record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw new InterruptException(e);
    } catch (BufferExhaustedException e) {
        this.errors.record();
        this.metrics.sensor("buffer-exhausted-records").record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw e;
    } catch (KafkaException e) {
        this.errors.record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw e;
    } catch (Exception e) {
        // we notify interceptor about all exceptions, since onSend is called before anything else in this method
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw e;
    }
}

1. The Producer fetches the metadata of the target topic via waitOnMetadata(), which first makes sure the topic is available.
2. The Producer serializes the record's key and value; the Consumer deserializes them correspondingly on the other side.
3. The partition value is determined, in one of three ways:

1. If a partition is specified explicitly, that value is used directly.

2. If no partition is given but a key is, the hash of the key modulo the topic's partition count gives the partition.

3. If neither a partition nor a key is given, a random integer is generated on the first call (and incremented on each subsequent call), and that value modulo the number of available partitions gives the partition; this is the well-known round-robin algorithm.

 

The partitioner the Producer uses by default is org.apache.kafka.clients.producer.internals.DefaultPartitioner; a simplified sketch of this selection logic is shown below.
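The following self-contained Java sketch illustrates the three cases above in simplified form. It is not the DefaultPartitioner source: the real implementation hashes the serialized key bytes with murmur2 and, in the keyless case, prefers the partitions that currently have a leader; here an ordinary hashCode and a plain counter stand in for those details.

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class PartitionSelectionDemo {
    // Counter seeded randomly on first use, incremented per keyless record (round-robin)
    private static final AtomicInteger COUNTER =
            new AtomicInteger(ThreadLocalRandom.current().nextInt());

    static int choosePartition(Integer explicitPartition, Object key, int numPartitions) {
        // Case 1: the record names a partition explicitly
        if (explicitPartition != null)
            return explicitPartition;
        // Case 2: no partition but a key: hash the key and take it modulo the partition count
        if (key != null)
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        // Case 3: neither partition nor key: round-robin over the partitions
        return (COUNTER.getAndIncrement() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 6;
        System.out.println(choosePartition(3, "ignored", numPartitions));     // always 3
        System.out.println(choosePartition(null, "user-42", numPartitions));  // stable per key
        System.out.println(choosePartition(null, null, numPartitions));       // rotates per call
    }
}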

4. The record is written into the accumulator: it is first written into a buffer, and once a batch.size worth of data has accumulated, the sender thread is woken up to send the RecordBatch. In detail, the Producer writes into the buffer as follows:

1. Get the queue for this topic-partition, creating an empty queue if there is none.
2. Append to the queue: take the most recently added RecordBatch; if it does not exist, or exists but has too little free space for this record, return null; if the write succeeds, return the result directly (the write is done).
3. Otherwise create a new RecordBatch, whose initial size is max(batch.size, Records.LOG_OVERHEAD + Record.recordSize(key, value)) (to handle a single record larger than batch.size).
4. Write the record into the new RecordBatch, add the batch to the queue, and return the result; the write is done.

5. Send the RecordBatch. When a record has been written successfully and a RecordBatch meets the sending condition (usually the queue holds several batches, so the earliest ones are certainly ready), the sender thread is woken up to send the RecordBatch. The sender thread handles RecordBatches in its run() method, roughly as follows:

1. Collect the nodes whose RecordBatches are ready to send.
2. If there is no connection to a node (initiating one if possible), the node cannot receive data for now and is removed temporarily.
3. Gather, per node, all sendable RecordBatches into batches (keyed by node.id) and remove them from their queues.
4. Remove the RecordBatches that have timed out because metadata was unavailable.
5. Send the RecordBatches.

 

4.4.3.4 The Metadata Update Mechanism

1. metadata.requestUpdate() sets the metadata's needUpdate flag to true (forcing an update) and returns the current version; the version number is used to tell whether the update has completed.
2. sender.wakeup() wakes the sender thread, which in turn wakes the NetworkClient to perform the update.
3. metadata.awaitUpdate(version, remainingWaitMs) waits for the metadata update to finish.
4. So each time the Producer requests a metadata update, one of the following happens:

1. If the node can accept a request, the request is sent directly.
2. If the node is in the middle of establishing a connection, return immediately.
3. If the node has no connection yet, initiate a connection to the broker.
5. NetworkClient's poll method decides whether the metadata needs updating; handleCompletedReceives processes the metadata update, which is ultimately handled by handleCompletedMetadataResponse in DefaultMetadataUpdater.

 

 

4.5 Kafka Source Code Analysis: Consumer Flow

4.5.1 Consumer Example

KafkaConsumer
The consumer's fundamental purpose is to pull messages from the Kafka server and hand them to the business logic for processing.
Developers do not need to manage the network connections to the Kafka brokers, heartbeats, request timeouts and retries, nor care about the number of partitions of the subscribed topics, the network topology of the partition leader replicas, or the details of consumer-group rebalancing; automatic offset committing is provided as well.

Example:

public static void main(String[] args) throws InterruptedException {
    // Whether offsets are committed automatically
    Boolean autoCommit = false;
    // Whether manual commits are synchronous
    Boolean isSync = true;
    Properties props = new Properties();
    // Kafka bootstrap servers, in the form host1:port1,host2:port2,...; there is no need to list the whole
    // cluster, Kafka discovers the other brokers from the ones given (list several in case one is down)
    props.put("bootstrap.servers", "localhost:9092");
    // Consumer group
    props.put("group.id", "test");
    // Enable/disable automatic offset commits
    props.put("enable.auto.commit", autoCommit.toString());
    // Auto-commit every 1s
    props.put("auto.commit.interval.ms", "1000");
    // Maximum time between heartbeats to the group coordinator; beyond it the consumer is considered
    // dead or failed and is evicted from the consumer group
    props.put("session.timeout.ms", "60000");
    // Maximum allowed interval between two polls
    props.put("max.poll.interval.ms", "1000");
    // Where to restart when the stored offset is invalid: the default is latest (records produced after
    // the consumer started); the other option is earliest (start from the smallest valid offset)
    props.put("auto.offset.reset", "latest");
    // Maximum number of bytes the consumer fetches in one request
    props.put("fetch.max.bytes", "1024000");
    // Key deserializer
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    // Value deserializer
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    String topic = "lagou_edu";
    // Subscribe to the topic list
    consumer.subscribe(Arrays.asList(topic));
    while (true) {
        // Pull messages
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
        if (!autoCommit) {
            if (isSync) {
                // After processing this batch of messages, commit the current offsets;
                // on failure this retries until it succeeds
                consumer.commitSync();
            } else {
                // Asynchronous commit
                consumer.commitAsync((offsets, exception) -> {
                    if (exception != null)
                        exception.printStackTrace();
                    System.out.println(offsets.size());
                });
            }
        }
        TimeUnit.SECONDS.sleep(3);
    }
}

 

The Kafka server does not record consumers' positions; each consumer decides for itself how to store and record its consumed offsets. The server provides an internal topic named "__consumer_offsets" to store the offsets committed by consumers. When consumers come online or go offline, the consumer group goes through a rebalance and the partitions are reassigned; once the rebalance finishes, a consumer can read the offsets recorded in that topic and resume consumption from there. Using this topic to record offsets is only the default; developers may store offsets elsewhere according to their business needs.

 

While consuming messages, the moment at which offsets are committed matters a great deal, because it determines where consumption resumes after a failure and restart. In the example above, setting enable.auto.commit to true enables automatic offset commits, and auto.commit.interval.ms sets the commit interval. Every call to KafkaConsumer.poll() checks whether an auto-commit is due and, if so, commits the offset of the last message returned by the previous poll(). To avoid message loss, it is best to finish processing all messages from the previous poll() before calling poll() again. KafkaConsumer also offers two manual commit methods, commitSync() and commitAsync(); both can be given explicit offsets to commit, the difference being that the former is synchronous and the latter asynchronous. A small sketch of committing explicit offsets follows.
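As a rough sketch (the helper name and the per-partition strategy are illustrative, not taken from the text), manual commits with explicit offsets can look like this; note that the committed offset is the offset of the next message to consume, hence the + 1.

import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitDemo {
    // Process each partition's records, then commit that partition's position explicitly.
    static void processAndCommit(KafkaConsumer<String, String> consumer,
                                 ConsumerRecords<String, String> records) {
        for (TopicPartition tp : records.partitions()) {
            long lastOffset = -1L;
            for (ConsumerRecord<String, String> record : records.records(tp)) {
                // business logic would go here
                lastOffset = record.offset();
            }
            if (lastOffset >= 0) {
                // Synchronous commit of "last processed offset + 1" for this partition only
                consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(lastOffset + 1)));
            }
        }
    }
}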

 

4.5.2 KafkaConsumer Instantiation

Having seen the basic usage of KafkaConsumer, we can dig into its principles and implementation, starting with the core logic of the constructor.

  1. private KafkaConsumer(ConsumerConfig config,
  2.              Deserializer<K> keyDeserializer,
  3.              Deserializer<V> valueDeserializer) {
  4.     try {
  5.       // 获取client.id,如果为空则默认生成一个,默认:consumer-1
  6.       String clientId = config.getString(ConsumerConfig.CLIENT_ID_CONFIG);
  7.       if (clientId.isEmpty())
  8.         clientId = "consumer-" + CONSUMER_CLIENT_ID_SEQUENCE.getAndIncrement();
  9.       this.clientId = clientId;
  10.       // 获取消费组名
  11.       String groupId = config.getString(ConsumerConfig.GROUP_ID_CONFIG);
  12.       LogContext logContext = new LogContext("[Consumer clientId=" + clientId + ", groupId=" + groupId + "] ");
  13.       this.log = logContext.logger(getClass());
  14.       log.debug("Initializing the Kafka consumer");
  15.       this.requestTimeoutMs = config.getInt(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG);
  16.       int sessionTimeOutMs = config.getInt(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG);
  17.       int fetchMaxWaitMs = config.getInt(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG);
  18.       if (this.requestTimeoutMs <= sessionTimeOutMs || this.requestTimeoutMs <= fetchMaxWaitMs)
  19.         throw new ConfigException(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG + " should be greater than " + ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG + " and " + ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG);
  20.       this.time = Time.SYSTEM;
  21.       // 与生产者逻辑相同
  22.       Map<String, String> metricsTags = Collections.singletonMap("client-id", clientId);
  23.       MetricConfig metricConfig = new MetricConfig().samples(config.getInt(ConsumerConfig.METRICS_NUM_SAMPLES_CO NFIG))
  24.          .timeWindow(config.getLong(ConsumerConfig.METRICS_SAMPLE_WINDOW_MS_CONFIG) , TimeUnit.MILLISECONDS)
  25.          .recordLevel(Sensor.RecordingLevel.forName(config.getString(ConsumerConfig .METRICS_RECORDING_LEVEL_CONFIG)))
  26.          .tags(metricsTags);
  27.       List<MetricsReporter> reporters = config.getConfiguredInstances(ConsumerConfig.METRIC_REPORTER_CLASSES_CONFIG,
  28.           MetricsReporter.class);
  29.       reporters.add(new JmxReporter(JMX_PREFIX));
  30.       this.metrics = new Metrics(metricConfig, reporters, time);
  31.       this.retryBackoffMs = config.getLong(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG);
  32.       // 消费者拦截器
  33.       // load interceptors and make sure they get clientId
  34.       Map<String, Object> userProvidedConfigs = config.originals();
  35.       userProvidedConfigs.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId);
  36.       List<ConsumerInterceptor<K, V>> interceptorList = (List) (new ConsumerConfig(userProvidedConfigs, false)).getConfiguredInstances(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
  37.           ConsumerInterceptor.class);
  38.       this.interceptors = interceptorList.isEmpty() ? null : new ConsumerInterceptors<>(interceptorList);
  39.       // key反序列化
  40.       if (keyDeserializer == null) {
  41.         this.keyDeserializer = config.getConfiguredInstance(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
  42.             Deserializer.class);
  43.         this.keyDeserializer.configure(config.originals(), true);
  44.      } else { config.ignore(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG);
  45.         this.keyDeserializer = keyDeserializer;
  46.      }
  47.       // value反序列化
  48.       if (valueDeserializer == null) {
  49.         this.valueDeserializer = config.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
  50.             Deserializer.class);
  51.         this.valueDeserializer.configure(config.originals(), false);
  52.      } else {
  53.        config.ignore(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG);
  54.         this.valueDeserializer = valueDeserializer;
  55.      }
  56.       ClusterResourceListeners clusterResourceListeners = configureClusterResourceListeners(keyDeserializer, valueDeserializer, reporters, interceptorList);
  57.       this.metadata = new Metadata(retryBackoffMs, config.getLong(ConsumerConfig.METADATA_MAX_AGE_CONFIG),
  58.           true, false, clusterResourceListeners);
  59.       List<InetSocketAddress> addresses = ClientUtils.parseAndValidateAddresses(config.getList(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG));
  60. // 更新集群元数据
  61.       this.metadata.update(Cluster.bootstrap(addresses), Collections.<String>emptySet(), 0);
  62.       String metricGrpPrefix = "consumer";
  63.       ConsumerMetrics metricsRegistry = new ConsumerMetrics(metricsTags.keySet(), "consumer");
  64.       ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(config);
  65.       // 事务隔离级别
  66.       IsolationLevel isolationLevel = IsolationLevel.valueOf(
  67.          config.getString(ConsumerConfig.ISOLATION_LEVEL_CONFIG).toUpperCase(Locale.ROOT));
  68.       Sensor throttleTimeSensor = Fetcher.throttleTimeSensor(metrics, metricsRegistry.fetcherMetrics);
  69.       // 网络组件
  70.       NetworkClient netClient = new NetworkClient(
  71.           new Selector(config.getLong(ConsumerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG), metrics, time, metricGrpPrefix, channelBuilder, logContext),
  72.           this.metadata,
  73.           clientId,
  74.           100, // a fixed large enough value will suffice for max in-flight requests
  75.           config.getLong(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG),
  76.           config.getLong(ConsumerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG),
  77.           config.getInt(ConsumerConfig.SEND_BUFFER_CONFIG),
  78.           config.getInt(ConsumerConfig.RECEIVE_BUFFER_CONFIG), config.getInt(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG),
  79.           time,
  80.           true,
  81.           new ApiVersions(),
  82.           throttleTimeSensor,
  83.           logContext);
  84.       // 实例化消费者网络客户端
  85.       this.client = new ConsumerNetworkClient(
  86.           logContext,
  87.           netClient,
  88.           metadata,
  89.           time,
  90.           retryBackoffMs,
  91.           config.getInt(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG));
  92.       // offset重置策略(auto.offset.reset),默认是 latest
  93.       OffsetResetStrategy offsetResetStrategy = OffsetResetStrategy.valueOf(config.getString(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toUpperCase(Locale.ROOT));
  94. // 创建订阅的对象, 封装订阅信息
  95.       this.subscriptions = new SubscriptionState(offsetResetStrategy);
  96. // 消费组和主题分区分配器
  97.       this.assignors = config.getConfiguredInstances(
  98.           ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
  99.           PartitionAssignor.class);
  100.       // offset协调者
  101.       this.coordinator = new ConsumerCoordinator(logContext,
  102.           this.client,
  103.           groupId,
  104.           config.getInt(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG),
  105.           config.getInt(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG),
  106.           config.getInt(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG),
  107.           assignors,
  108.           this.metadata,
  109.           this.subscriptions,
  110.           metrics,
  111.           metricGrpPrefix,
  112.           this.time,
  113.           retryBackoffMs,
  114.           config.getBoolean(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG),
  115.           config.getInt(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG),
  116.           this.interceptors,
  117.           config.getBoolean(ConsumerConfig.EXCLUDE_INTERNAL_TOPICS_CONFIG),
  118.           config.getBoolean(ConsumerConfig.LEAVE_GROUP_ON_CLOSE_CONFIG));
  119.       // 拉取器, 通过网络获取消息的对象
  120.       this.fetcher = new Fetcher<>(
  121.           logContext,
  122.           this.client,
  123.           config.getInt(ConsumerConfig.FETCH_MIN_BYTES_CONFIG),
  124.           config.getInt(ConsumerConfig.FETCH_MAX_BYTES_CONFIG),
  125.          config.getInt(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG),
  126.           config.getInt(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG),
  127.           config.getInt(ConsumerConfig.MAX_POLL_RECORDS_CONFIG),
  128.           config.getBoolean(ConsumerConfig.CHECK_CRCS_CONFIG),
  129.           this.keyDeserializer,
  130.           this.valueDeserializer,
  131.           this.metadata,
  132.           this.subscriptions,
  133.           metrics,
  134.           metricsRegistry.fetcherMetrics,
  135.           this.time,
  136.           this.retryBackoffMs,
  137.           isolationLevel);
  138.       // 打印用户设置,但是没有使用的配置项
  139.       config.logUnused();
  140.       AppInfoParser.registerAppInfo(JMX_PREFIX, clientId, metrics);
  141.       log.debug("Kafka consumer initialized");
  142.    } catch (Throwable t) {
  143.       // call close methods if internal objects are already constructed
  144.       // this is to prevent resource leak. see KAFKA-2121
  145.       close(0, true);
  146.       // now propagate the exception
  147.       throw new KafkaException("Failed to construct kafka consumer", t);
  148.    }
  149.  }

1. 初始化参数配置

  • client.id、group.id、消费者拦截器、key/value反序列化器、事务隔离级别

2. 初始化网络客户端 NetworkClient
3. 初始化消费者网络客户端 ConsumerNetworkClient
4. 初始化offset重置策略(auto.offset.reset,默认 latest)和订阅状态 SubscriptionState
5. 初始化消费者协调器 ConsumerCoordinator
6. 初始化拉取器 Fetcher
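结合上面的初始化步骤,下面给出一个构造 KafkaConsumer 的最小配置示意(broker 地址、group.id、client.id 等均为假设值,只用于对照构造函数中读取的配置项,并非源码内容):

  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  // 构造函数会依次读取这些配置:client.id/group.id、key/value 反序列化器、事务隔离级别、自动提交等
  Properties props = new Properties();
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // 假设的 broker 地址
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");                // group.id,后面 ConsumerCoordinator 会用到
  props.put(ConsumerConfig.CLIENT_ID_CONFIG, "demo-consumer-1");          // client.id,不配置时会自动生成
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
  props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");     // 事务隔离级别,默认 read_uncommitted
  props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");            // 自动提交,默认即为 true
  KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);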

 

 

4.5.3 订阅Topic

下面我们先来看一下subscribe方法都有哪些逻辑:

  1. public void subscribe(Collection<String> topics, ConsumerRebalanceListener listener) {
  2.  // 轻量级锁
  3.  acquireAndEnsureOpen();
  4.  try {
  5.   if (topics == null) {
  6.    throw new IllegalArgumentException("Topic collection to subscribe to cannot be null");
  7.  } else if (topics.isEmpty()) {
  8.    // topics为空,则开始取消订阅的逻辑
  9.    this.unsubscribe();
  10.  } else {
  11.    // topic合法性判断,包含null或者空字符串直接抛异常
  12.    for (String topic : topics) {
  13.     if (topic == null || topic.trim().isEmpty())
  14.      throw new IllegalArgumentException("Topic collection to subscribe to cannot contain null or empty topic");
  15.    }
  16.    // 如果没有消费协调者直接抛异常
  17.    throwIfNoAssignorsConfigured();
  18.    log.debug("Subscribed to topic(s): {}", Utils.join(topics, ", "));
  19.    // 开始订阅
  20.    this.subscriptions.subscribe(new HashSet<>(topics), listener);
  21.    // 更新元数据,如果metadata当前不包括所有的topics则标记强制更新
  22.    metadata.setTopics(subscriptions.groupSubscription());
  23.   }
  24. } finally {
  25.   release();
  26. }
  27. }
  28. public void subscribe(Set<String> topics, ConsumerRebalanceListener listener) {
  29.  if (listener == null)
  30.   throw new IllegalArgumentException("RebalanceListener cannot be null");
  31.  // 按照指定的Topic名字进行订阅,自动分配分区
  32.  setSubscriptionType(SubscriptionType.AUTO_TOPICS);
  33.  // 监听
  34.  this.listener = listener;
  35.  // 修改订阅信息
  36.  changeSubscription(topics);
  37. }
  38.  
  39. private void changeSubscription(Set<String> topicsToSubscribe) {
  40.  if (!this.subscription.equals(topicsToSubscribe)) {
  41.   // 如果使用AUTO_TOPICS或AUTO_PATTERN模式,则使用此集合记录所有订阅的Topic
  42.   this.subscription = topicsToSubscribe;
  43.   // Consumer Group中会选一个Leader,Leader会使用这个集合记录Consumer Group中所有消费者订阅的Topic,而其他的Follower的这个集合只会保存自身订阅的Topic
  44.   this.groupSubscription.addAll(topicsToSubscribe);
  45. }
  46. }

1. KafkaConsumer不是线程安全的类,所以先获取"轻量级锁";topics为null直接抛异常;topics是空集合则执行取消订阅的逻辑;否则逐个校验topic,包含null或空字符串就抛异常,并检查是否配置了分区分配器(assignor);校验都通过后才真正订阅对应topic。listener默认为 NoOpConsumerRebalanceListener,是一个空操作

轻量级锁:分别记录了当前使用KafkaConsumer的线程id和重入次数,KafkaConsumer的acquire()和release()方法实现了一个"轻量级锁",它并非真正的锁,仅是检测是否有多线程并发操作KafkaConsumer而已(实现思路见本节末尾的示意代码)

2. 每一个KafkaConsumer实例内部都拥有一个SubscriptionState对象,KafkaConsumer.subscribe内部调用了SubscriptionState.subscribe方法,把订阅信息记录到 SubscriptionState 中,多次订阅会覆盖旧数据。
3. 更新metadata:如果metadata中不包含当前groupSubscription的全部topic,则标记需要强制更新(具体更新逻辑在后面分析),并且消费者侧订阅的topic不会过期
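上面提到的 acquire()/release() "轻量级锁",其核心思路是用 CAS 记录当前占用 KafkaConsumer 的线程 id 和重入次数,检测到并发访问时直接抛异常,而不是真正阻塞等待。下面是一段示意代码,细节与源码可能略有出入:

  import java.util.ConcurrentModificationException;
  import java.util.concurrent.atomic.AtomicInteger;
  import java.util.concurrent.atomic.AtomicLong;

  // "轻量级锁"思路示意:并非互斥锁,只用于检测多线程并发操作 KafkaConsumer
  public class LightweightLockSketch {
      private static final long NO_CURRENT_THREAD = -1L;
      private final AtomicLong currentThread = new AtomicLong(NO_CURRENT_THREAD);
      private final AtomicInteger refcount = new AtomicInteger(0);

      void acquire() {
          long threadId = Thread.currentThread().getId();
          // 既不是当前持有线程,CAS 抢占又失败,说明存在其他线程并发使用
          if (threadId != currentThread.get() && !currentThread.compareAndSet(NO_CURRENT_THREAD, threadId))
              throw new ConcurrentModificationException("KafkaConsumer is not safe for multi-threaded access");
          refcount.incrementAndGet();  // 同一线程重入时只增加计数
      }

      void release() {
          // 重入计数归零时才清空持有线程,相当于"释放锁"
          if (refcount.decrementAndGet() == 0)
              currentThread.set(NO_CURRENT_THREAD);
      }
  }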

 

4.5.4 消息消费过程

下面KafkaConsumer的核心方法poll是如何拉取消息的,先来看一下下面的代码:

4.5.4.1 poll

  1. public ConsumerRecords<K, V> poll(long timeout) {
  2.     // 使用轻量级锁检测KafkaConsumer是否被其他线程并发使用
  3.     acquireAndEnsureOpen();
  4.     try {
  5.       // 超时时间小于0抛异常
  6.       if (timeout < 0)
  7.         throw new IllegalArgumentException("Timeout must not be negative");
  8.       // 订阅类型为NONE抛异常,表示当前消费者没有订阅任何topic或者没有分配分区
  9.       if (this.subscriptions.hasNoSubscriptionOrUserAssignment())
  10.         throw new IllegalStateException("Consumer is not subscribed to any topics or assigned any partitions");
  11.       // poll for new data until the timeout expires
  12.       long start = time.milliseconds();
  13.       long remaining = timeout;
  14.       do {
  15.         // 核心方法,拉取消息
  16.         Map<TopicPartition, List<ConsumerRecord<K, V>>> records = pollOnce(remaining);
  17.         if (!records.isEmpty()) {
  18.           // before returning the fetched records, we can send off the next round of fetches
  19.           // and avoid block waiting for their responses to enable pipelining while the user
  20.           // is handling the fetched records.
  21.           //
  22.           // NOTE: since the consumed position has already been updated, we must not allow
  23.           // wakeups or any other errors to be triggered prior to returning the fetched records.
  24.           // 如果拉取到了消息,发送一次消息拉取的请求,不会阻塞不会被中断
  25.           // 在返回数据之前,发送下次的 fetch 请求,避免用户在下次获取数据时线程 block
  26.           if (fetcher.sendFetches() > 0 || client.hasPendingRequests())
  27.             client.pollNoWakeup();
  28.           // 经过拦截器处理后返回
  29.           if (this.interceptors == null)
  30.             return new ConsumerRecords<>(records);
  31.           else
  32. // 消费的消息, 经过拦截器的处理
  33.             return this.interceptors.onConsume(new ConsumerRecords<>(records));
  34.        }
  35.         long elapsed = time.milliseconds() - start;
  36.         // 拉取超时就结束
  37.         remaining = timeout - elapsed;
  38.      } while (remaining > 0);
  39.       return ConsumerRecords.empty();
  40.    } finally {
  41.       release();
  42.    }
  43.  }

1. 使用轻量级锁检测kafkaConsumer是否被其他线程使用
2. 检查超时时间是否小于0,小于0抛出异常,停止消费
3. 检查这个 consumer 是否订阅的相应的 topic-partition
4. 调用 pollOnce() 方法获取相应的 records
5. 在返回获取的 records 前,发送下一次的 fetch 请求,避免用户在下次请求时线程 block 在 pollOnce() 方法中
6. 如果在给定的时间(timeout)内获取不到可用的 records,返回空数据

 

这里可以看出,poll 方法的真正实现是在 pollOnce 方法中,poll 方法通过 pollOnce 方法获取可用的数据
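另外补充一点:当没有可用数据时,用户线程会一直阻塞在 poll 中直到超时。如果需要从另一个线程安全地终止消费,可以调用 wakeup()(KafkaConsumer 中少数线程安全的方法之一),让阻塞中的 poll 抛出 WakeupException。下面是一个用法示意,假设 consumer 是已创建好的 KafkaConsumer 实例,topic 名称与超时时间均为假设值:

  // 通过 wakeup() 安全退出 poll 循环的示意
  Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));  // 进程退出时唤醒阻塞中的 poll
  try {
      consumer.subscribe(Collections.singletonList("demo-topic"));
      while (true) {
          ConsumerRecords<String, String> records = consumer.poll(1000);
          for (ConsumerRecord<String, String> record : records) {
              System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
          }
      }
  } catch (WakeupException e) {
      // wakeup() 触发的异常,表示需要退出循环,忽略即可
  } finally {
      consumer.close();
  }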

 

4.5.4.2 pollOnce

  1. // 除了获取新数据外,还会做一些必要的 offset-commit和reset-offset的操作
  2. private Map<TopicPartition, List<ConsumerRecord<K, V>>> pollOnce(long timeout) {
  3.     client.maybeTriggerWakeup();
  4.     // 1. 获取 GroupCoordinator 地址并连接、加入 Group、sync Group、自动 commit, join 及 sync 期间 group 会进行 rebalance
  5.     coordinator.poll(time.milliseconds(), timeout);
  6.     // 2. 更新订阅的 topic-partition 的 offset(如果订阅的 topic-partition list 没有有效的 offset 的情况下)
  7.     if (!subscriptions.hasAllFetchPositions())
  8.      updateFetchPositions(this.subscriptions.missingFetchPositions());
  9.     // 3. 获取 fetcher 已经拉取到的数据
  10.     Map<TopicPartition, List<ConsumerRecord<K, V>>> records = fetcher.fetchedRecords();
  11.     if (!records.isEmpty())
  12.       return records;
  13.     // 4. 发送 fetch 请求,会从多个 topic-partition 拉取数据(只要对应的 topic- partition 没有未完成的请求)
  14.     fetcher.sendFetches();
  15.     long now = time.milliseconds();
  16.     long pollTimeout = Math.min(coordinator.timeToNextPoll(now), timeout);
  17.     // 5. 调用 poll 方法发送请求(底层发送请求的接口)
  18.     client.poll(pollTimeout, now, new PollCondition() {
  19.       @Override
  20.       public boolean shouldBlock() {
  21.         // since a fetch might be completed by the background thread, we need this poll condition
  22.         // to ensure that we do not block unnecessarily in poll()
  23.         return !fetcher.hasCompletedFetches();
  24.      }
  25.    });
  26.     // 6. 如果 group 需要 rebalance,直接返回空数据,这样可以让 group 更快地进入稳定状态
  27.     if (coordinator.needRejoin())
  28.       return Collections.emptyMap();
  29.     // 获取到请求的结果
  30.     return fetcher.fetchedRecords();
  31.  }

pollOnce 可以简单分为6步来看,其作用分别如下:

1 coordinator.poll()

获取 GroupCoordinator 的地址,并建立相应 tcp 连接,发送 join-group、sync-group,之后才真正加入到了一个 group 中,这时会获取其要消费的 topic-partition 列表;如果设置了自动 commit,也会在这一步进行 commit。总之,对于一个新建的 group,group 状态将会从 Empty –> PreparingRebalance –> AwaitingSync –> Stable;

1. 获取 GroupCoordinator 的地址,并建立相应 tcp 连接;
2. 发送 join-group 请求,然后 group 将会进行 rebalance(rebalance 期间的回调用法见下面的示例);
3. 发送 sync-group 请求,之后才真正加入到了一个 group 中,这时会通过请求获取其要消费的 topic-partition 列表;
4. 如果设置了自动 commit,也会在这一步进行 commit offset
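join-group/sync-group 的过程就是 rebalance,订阅时传入的 ConsumerRebalanceListener 会在这个阶段被回调。一个常见用法是在分区被回收前把已处理的 offset 提交掉,减少重复消费。下面是一个示意,其中 consumer、topic 名以及由业务代码在消费时维护的 currentOffsets(分区到 offset 的映射)均为假设:

  // rebalance 回调示意:分区被收回前同步提交已消费位置
  Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();  // 业务代码消费时维护

  consumer.subscribe(Collections.singletonList("demo-topic"), new ConsumerRebalanceListener() {
      @Override
      public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
          // 分区即将被重新分配,先把已处理的 offset 提交掉
          consumer.commitSync(currentOffsets);
      }

      @Override
      public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
          // 拿到新分配的分区,可以在这里按外部存储的 offset 执行 seek(此处留空)
      }
  });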

 

2 updateFetchPositions()

这个方法主要是用来更新这个 consumer 实例订阅的 topic-partition 列表的 fetch-offset 信息。目的就是为了获取其订阅的每个 topic-partition 对应的 position,这样 Fetcher 才知道从哪个 offset 开始去拉取这个 topic-partition 的数据

  1. private void updateFetchPositions(Set<TopicPartition> partitions) {
  2.     // 先重置那些调用 seekToBegin 和 seekToEnd 的 offset 的 tp,设置其 the fetch position 的 offset
  3.     fetcher.resetOffsetsIfNeeded(partitions);
  4.     if (!subscriptions.hasAllFetchPositions(partitions)) {
  5.       // 获取所有分配 tp 的 offset, 即 committed offset, 更新到 TopicPartitionState 中的 committed offset 中
  6.       coordinator.refreshCommittedOffsetsIfNeeded();
  7.       // 如果 the fetch position 值无效,则将上步获取的 committed offset 设置为 the fetch position
  8.       fetcher.updateFetchPositions(partitions);
  9.    }
  10.  }

在 Fetcher 中,这个 consumer 实例订阅的每个 topic-partition 都会有一个对应的TopicPartitionState 对象,在这个对象中会记录以下这些内容:

  1. private static class TopicPartitionState {
  2.     // Fetcher 下次去拉取时的 offset,Fetcher 在拉取时需要知道这个值
  3.     private Long position; // last consumed position
  4.     // 最后一次获取的高水位标记
  5.     private Long highWatermark; // the high watermark from last fetch
  6.     private Long lastStableOffset;
  7.     // consumer 已经处理完的最新一条消息的 offset,consumer 主动调用 offset- commit 时会更新这个值;
  8.     private OffsetAndMetadata committed;  // last committed position
  9.     // 是否暂停
  10.     private boolean paused;  // whether this partition has been paused by the user
  11.     // 这 topic-partition offset 重置的策略,重置之后,这个策略就会改为 null, 防止再次操作
  12.     private OffsetResetStrategy resetStrategy;  // the strategy to use if the offset needs resetting
  13. }

 

3 fetcher.fetchedRecords()

返回其 fetched records,并更新其 fetch-position offset,只有在 offset-commit 时(自动commit 时,是在第一步实现的),才会更新其 committed offset;

  1. public Map<TopicPartition, List<ConsumerRecord<K, V>>> fetchedRecords() {
  2.     Map<TopicPartition, List<ConsumerRecord<K, V>>> fetched = new HashMap<>();
  3.     // max.poll.records 设置单次拉取的最大条数
  4.     int recordsRemaining = maxPollRecords;
  5.     try {
  6.       while (recordsRemaining > 0) {
  7.         if (nextInLineRecords == null || nextInLineRecords.isFetched) {
  8.           // 从队列中获取但不移除此队列的头;如果此队列为空,返回null
  9.           CompletedFetch completedFetch = completedFetches.peek();
  10.           if (completedFetch == null) break;
  11.           // 获取下一个要处理的 nextInLineRecords
  12.           nextInLineRecords = parseCompletedFetch(completedFetch);
  13.           completedFetches.poll();
  14.        } else {
  15.           // 拉取records,更新 position
  16.           List<ConsumerRecord<K, V>> records = fetchRecords(nextInLineRecords, recordsRemaining);
  17.           TopicPartition partition = nextInLineRecords.partition;
  18.           if (!records.isEmpty()) {
  19.             List<ConsumerRecord<K, V>> currentRecords = fetched.get(partition);
  20.             if (currentRecords == null) {
  21.               fetched.put(partition, records);
  22.            } else {
  23.               List<ConsumerRecord<K, V>> newRecords = new ArrayList<>(records.size() + currentRecords.size());
  24.               newRecords.addAll(currentRecords);
  25.               newRecords.addAll(records);
  26.               fetched.put(partition, newRecords);
  27.            }
  28.             recordsRemaining -= records.size();
  29.          }
  30.        }
  31.      }
  32.    } catch (KafkaException e) {
  33.       if (fetched.isEmpty())
  34.         throw e;
  35.    }
  36.     return fetched;
  37.  }
  38.   private List<ConsumerRecord<K, V>> fetchRecords(PartitionRecords partitionRecords, int maxRecords) {
  39.     if (!subscriptions.isAssigned(partitionRecords.partition)) {
  40.       log.debug("Not returning fetched records for partition {} since it is no longer assigned",
  41.           partitionRecords.partition);
  42.    } else {
  43.       long position = subscriptions.position(partitionRecords.partition);
  44.       // 这个 tp 不能来消费了,比如调用 pause方法暂停消费
  45.       if (!subscriptions.isFetchable(partitionRecords.partition)) {
  46.         log.debug("Not returning fetched records for assigned partition {} since it is no longer fetchable",
  47.             partitionRecords.partition);
  48.      } else if (partitionRecords.nextFetchOffset == position) {
  49.         // 获取该 tp 对应的records,并更新 partitionRecords 的 fetchOffset(用于判断是否顺序)
  50.         List<ConsumerRecord<K, V>> partRecords = partitionRecords.fetchRecords(maxRecords);
  51.         long nextOffset = partitionRecords.nextFetchOffset;
  52.         log.trace("Returning fetched records at offset {} for assigned partition {} and update " +
  53.             "position to {}", position, partitionRecords.partition, nextOffset);
  54.         // 更新消费的到 offset( the fetch position)
  55.         subscriptions.position(partitionRecords.partition, nextOffset);
  56.         // 获取 Lag(即 position与 hw 之间差值),hw 为 null 时,才返回 null
  57.         Long partitionLag = subscriptions.partitionLag(partitionRecords.partition, isolationLevel);
  58.         if (partitionLag != null)
  59.           this.sensors.recordPartitionLag(partitionRecords.partition, partitionLag);
  60.         return partRecords;
  61.      } else {
  62.         log.debug("Ignoring fetched records for {} at offset {} since the current position is {}",
  63.             partitionRecords.partition, partitionRecords.nextFetchOffset, position);
  64.      }
  65.    }
  66.     partitionRecords.drain();
  67.     return emptyList();
  68.  }

 

4 fetcher.sendFetches()

只要订阅的 topic-partition list 没有未处理的 fetch 请求,就发送对这个 topic-partition 的 fetch请求,在真正发送时,还是会按 node 级别去发送,leader 是同一个 node 的 topic-partition 会合成一个请求去发送;

  1. // 向订阅的所有 partition (只要该 leader 暂时没有拉取请求)所在 leader 发送 fetch请求
  2.   public int sendFetches() {
  3.     // 1. 创建 Fetch Request
  4.     Map<Node, FetchRequest.Builder> fetchRequestMap = createFetchRequests();
  5.     for (Map.Entry<Node, FetchRequest.Builder> fetchEntry : fetchRequestMap.entrySet()) {
  6.       final FetchRequest.Builder request = fetchEntry.getValue();
  7.       final Node fetchTarget = fetchEntry.getKey();
  8.       log.debug("Sending {} fetch for partitions {} to broker {}", isolationLevel, request.fetchData().keySet(), fetchTarget);
  9.       // 2 发送 Fetch Request
  10.       client.send(fetchTarget, request)
  11.         .addListener(new RequestFutureListener<ClientResponse>() {
  12.             @Override
  13.             public void onSuccess(ClientResponse resp) {
  14.               FetchResponse response = (FetchResponse)resp.responseBody();
  15.               if (!matchesRequestedPartitions(request,response)) {
  16.                 log.warn("Ignoring fetch response containing partitions {} since it does not match "
  17.                     + "the requested partitions {}", response.responseData().keySet(), request.fetchData().keySet());
  18.                 return;
  19.               }
  20.               Set<TopicPartition> partitions = new HashSet<>(response.responseData().keySet());
  21.               FetchResponseMetricAggregator metricAggregator = new FetchResponseMetricAggregator(sensors, partitions);
  22.               for (Map.Entry<TopicPartition, FetchResponse.PartitionData> entry : response.responseData().entrySet()) {
  23.                 TopicPartition partition = entry.getKey();
  24.                 long fetchOffset = request.fetchData().get(partition).fetchOffset;
  25.                 FetchResponse.PartitionData fetchData = entry.getValue();
  26.                 log.debug("Fetch {} at offset {} for partition {} returned fetch data {}",
  27.                     isolationLevel, fetchOffset, partition, fetchData);
  28.                 completedFetches.add(new CompletedFetch(partition, fetchOffset, fetchData, metricAggregator,
  29.                    resp.requestHeader().apiVersion()));
  30.              }
  31.             
  32.  sensors.fetchLatency.record(resp.requestLatencyMs());
  33.             }
  34.             @Override
  35.             public void onFailure(RuntimeException e) {
  36.               log.debug("Fetch request {} to {} failed", request.fetchData(), fetchTarget, e);
  37.            }
  38.          });
  39.    }
  40.     return fetchRequestMap.size();
  41.  }

1. createFetchRequests():为订阅的所有 topic-partition list 创建 fetch 请求(只要该topic-partition 没有还在处理的请求),创建的 fetch 请求依然是按照 node 级别创建的;
2. client.send():发送 fetch 请求,并设置相应的 Listener,请求处理成功的话,就加入到completedFetches 中,在加入这个 completedFetches 集合时,是按照 topic-partition 级别去加入,这样也就方便了后续的处理。

从这里可以看出,每次发送 fetch 请求时,都会向所有可发送的 topic-partition 发送 fetch 请求;调用一次 fetcher.sendFetches 拉取到的数据,可能需要多次 pollOnce 循环才能处理完。由于 fetch 请求的发送与响应处理是异步进行的,这保证了尽可能少地阻塞用户的处理线程;如果 Fetcher 中没有可处理的数据,用户线程就会阻塞在 poll 方法中

 

5 client.poll()

调用底层 NetworkClient 提供的接口去发送相应的请求;

6 coordinator.needRejoin()

如果当前实例分配的 topic-partition 列表发送了变化,那么这个 consumer group 就需要进行rebalance

 

 

4.5.5 自动提交

最简单的提交方式是让消费者自动提交偏移量。如果 enable.auto.commit 被设为 true,消费者会自动把从 poll() 方法接收到的最大偏移量提交上去。提交时间间隔由 auto.commit.interval.ms 控制,默认值是 5s。与消费者里的其他东西一样,自动提交也是在轮询(poll())里进行的。消费者每次在进行轮询时会检查是否该提交偏移量了,如果是,那么就会提交从上一次轮询返回的偏移量。

不过,这种简便的方式也会带来一些问题,来看一下下面的例子:假设我们仍然使用默认的 5s 提交时间间隔,在最近一次提交之后的 3s 发生了再均衡,再均衡之后,消费者从最后一次提交的偏移量位置开始读取消息。这个时候偏移量已经落后了 3s,所以在这 3s 内到达的消息会被重复处理。可以通过修改提交时间间隔来更频繁地提交偏移量,减小可能出现重复消息的时间窗,不过这种情况是无法完全避免的
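自动提交只涉及两个配置项,下面是一个配置示意(props 为构造消费者时使用的 Properties,取值仅供参考):

  // 自动提交相关配置示意
  props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");        // 默认即为 true
  props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");   // 默认 5000ms,调小可缩短重复消费的时间窗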

 

 

4.5.6 手动提交

4.5.6.1 同步提交

取消自动提交,把 enable.auto.commit 设为 false,让应用程序决定何时提交偏移量。使用 commitSync() 提交偏移量最简单也最可靠。这个 API 会提交由 poll() 方法返回的最新偏移量,提交成功后马上返回,如果提交失败就抛出异常

  1. while (true) {
  2.  // 消息拉取
  3.  ConsumerRecords<String, String> records = consumer.poll(100);
  4.  for (ConsumerRecord<String, String> record : records) {
  5.   System.out.printf("offset = %d, key = %s, value = %s%n"
  6. , record.offset(),record.key(), record.value());
  7. }
  8.  // 处理完本次拉取的消息以后,同步提交当前的offset,如果提交失败就抛出异常
  9.  consumer.commitSync();
  10. }
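除了无参的 commitSync(),还有一个按分区提交指定 offset 的重载 commitSync(Map&lt;TopicPartition, OffsetAndMetadata&gt;)。注意提交的值应是"下一条要消费的位置",即当前 record.offset() + 1。下面是一个示意(records 为 poll() 返回的结果):

  // 按分区提交指定 offset 的示意
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  for (ConsumerRecord<String, String> record : records) {
      // 提交下一条要消费的位置:record.offset() + 1
      offsets.put(new TopicPartition(record.topic(), record.partition()),
              new OffsetAndMetadata(record.offset() + 1));
  }
  consumer.commitSync(offsets);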

 

 

4.5.6.2 异步提交

同步提交有一个不足之处,在 broker对提交请求作出回应之前,应用程序会一直阻塞,这样会限制应用程序的吞吐量。我们可以通过降低提交频率来提升吞吐量,但如果发生了再均衡, 会增加重复消息的数量。这个时候可以使用异步提交 API。我们只管发送提交请求,无需等待 broker的响应。

  1. while (true) {
  2.  // 消息拉取
  3.  ConsumerRecords<String, String> records = consumer.poll(100);
  4.  for (ConsumerRecord<String, String> record : records) {
  5.   System.out.printf("offset = %d, key = %s, value = %s%n"
  6. , record.offset(), record.key(), record.value());
  7.  }
  8.  // 异步提交,在回调中检查提交结果
  9.  consumer.commitAsync((offsets, exception) -> {
  10.   if (exception != null) {
  11.    // 只有提交失败时 exception 才不为 null,不判空直接调用会抛空指针
  12.    exception.printStackTrace();
  13.   } else {
  14.    System.out.println(offsets.size());
  15.   }
  16.  });
  17. }
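实际使用中常把两种方式结合:循环内用 commitAsync() 保证吞吐量,退出前再用 commitSync() 兜底,确保最后一次提交成功。下面是一个示意,running 为假设的退出标志:

  // 异步提交 + 退出前同步提交兜底的常见写法(示意)
  try {
      while (running) {
          ConsumerRecords<String, String> records = consumer.poll(100);
          for (ConsumerRecord<String, String> record : records) {
              // 处理消息……
          }
          consumer.commitAsync();        // 循环内异步提交,失败不重试
      }
  } finally {
      try {
          consumer.commitSync();         // 退出前同步提交,确保最终 offset 提交成功
      } finally {
          consumer.close();
      }
  }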

 
