
Deploying a Kafka Connect Cluster Based on Confluent Kafka and Loading the Debezium Plugin

1. Download Confluent Kafka

Download Confluent Kafka from the official Confluent website, and choose the free Community edition.

2. The connect-distributed.properties configuration file

The core parameters of /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties are as follows:
# Kafka brokers the Connect worker bootstraps against
bootstrap.servers=realtime-kafka-001:9092,realtime-kafka-003:9092,realtime-kafka-002:9092

# Workers sharing the same group.id form one Connect cluster
group.id=datasight-confluent-test-debezium-cluster-status

# Converters for record keys and values (JSON with embedded schemas)
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

key.converter.schemas.enable=true
value.converter.schemas.enable=true

# Internal topics for connector configs, source offsets, and task status
config.storage.topic=offline_confluent_test_debezium_cluster_connect_configs
offset.storage.topic=offline_confluent_test_debezium_cluster_connect_offsets
status.storage.topic=offline_confluent_test_debezium_cluster_connect_statuses

config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3

# Note: the config topic must have exactly one partition
offset.storage.partitions=25
status.storage.partitions=5
config.storage.partitions=1

# Deprecated internal converter settings (removed as of Apache Kafka 3.0,
# KIP-738); kept here as in the original config, newer workers ignore them
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=true
internal.value.converter.schemas.enable=true

# REST listener settings (defaults, left commented out)
#rest.host.name=0.0.0.0
#rest.port=8083

#rest.advertised.host.name=0.0.0.0
#rest.advertised.port=8083

# Directory scanned for connector plugins such as Debezium
plugin.path=/data/service/debezium/connectors2
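After the first start, the worker creates the three internal topics automatically, assuming the Kafka principal is allowed to create topics. A quick hedged check, using the CLI tools bundled with the same Confluent installation:

/data/src/confluent-7.3.3/bin/kafka-topics --bootstrap-server realtime-kafka-001:9092 --list \
  | grep offline_confluent_test_debezium_cluster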

3. The connect-distributed startup script

  • /data/src/confluent-7.3.3/bin/connect-distributed

  • The contents of the connect-distributed script are shown below; it does not need to be modified.

  • If you want to export Kafka Connect JMX metrics, you need to set a JMX port and a JMX exporter (see the sketch after the script); the detailed deployment steps are covered in a separate post by the author.

if [ $# -lt 1 ];
then
        echo "USAGE: $0 [-daemon] connect-distributed.properties"
        exit 1
fi

base_dir=$(dirname $0)

###
### Classpath additions for Confluent Platform releases (LSB-style layout)
###
#cd -P deals with symlink from /bin to /usr/bin
java_base_dir=$( cd -P "$base_dir/../share/java" && pwd )

# confluent-common: required by kafka-serde-tools
# kafka-serde-tools (e.g. Avro serializer): bundled with confluent-schema-registry package
for library in "confluent-security/connect" "kafka" "confluent-common" "kafka-serde-tools" "monitoring-interceptors"; do
  dir="$java_base_dir/$library"
  if [ -d "$dir" ]; then
    classpath_prefix="$CLASSPATH:"
    if [ "x$CLASSPATH" = "x" ]; then
      classpath_prefix=""
    fi
    CLASSPATH="$classpath_prefix$dir/*"
  fi
done

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
  LOG4J_CONFIG_DIR_NORMAL_INSTALL="/etc/kafka"
  LOG4J_CONFIG_NORMAL_INSTALL="${LOG4J_CONFIG_DIR_NORMAL_INSTALL}/connect-log4j.properties"
  LOG4J_CONFIG_DIR_ZIP_INSTALL="$base_dir/../etc/kafka"
  LOG4J_CONFIG_ZIP_INSTALL="${LOG4J_CONFIG_DIR_ZIP_INSTALL}/connect-log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL} -Dlog4j.config.dir=${LOG4J_CONFIG_DIR_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL} -Dlog4j.config.dir=${LOG4J_CONFIG_DIR_ZIP_INSTALL}"
  else # Fallback to normal default
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties -Dlog4j.config.dir=$base_dir/../config"
  fi
fi
export KAFKA_LOG4J_OPTS

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
  export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

export CLASSPATH
exec $(dirname $0)/kafka-run-class $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@"
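For JMX, kafka-run-class (executed on the last line of the script above) honors the JMX_PORT and KAFKA_OPTS environment variables. A minimal sketch using the Prometheus JMX exporter Java agent; the jar path, exporter config file, and both ports are illustrative assumptions, not values from the original post:

# Illustrative paths and ports; adjust to your environment
export JMX_PORT=9581
export KAFKA_OPTS="-javaagent:/data/service/jmx/jmx_prometheus_javaagent.jar=9876:/data/service/jmx/kafka-connect.yml"
/data/src/confluent-7.3.3/bin/connect-distributed /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties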

4. Starting the Kafka Connect cluster

The startup command is as follows:

/data/src/confluent-7.3.3/bin/connect-distributed /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties
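The script also accepts a -daemon flag (handled by the case statement shown above) to run the worker in the background:

/data/src/confluent-7.3.3/bin/connect-distributed -daemon /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties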

A normal startup of the Kafka Connect cluster ends with log output like the following:

[2023-06-21 16:43:01,249] INFO EnrichedConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.debezium.connector.mysql.MySqlConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        exactly.once.support = requested
        header.converter = null
        key.converter = null
        name = mysql-dw-valuekey-test
        offsets.storage.topic = null
        predicates = []
        tasks.max = 1
        topic.creation.default.exclude = []
        topic.creation.default.include = [.*]
        topic.creation.default.partitions = 12
        topic.creation.default.replication.factor = 3
        topic.creation.groups = []
        transaction.boundary = poll
        transaction.boundary.interval.ms = null
        transforms = [unwrap, moveFieldsToHeader, moveHeadersToValue, Reroute]
        transforms.Reroute.key.enforce.uniqueness = true
        transforms.Reroute.key.field.regex = null
        transforms.Reroute.key.field.replacement = null
        transforms.Reroute.logical.table.cache.size = 16
        transforms.Reroute.negate = false
        transforms.Reroute.predicate = 
        transforms.Reroute.topic.regex = debezium-dw-encryption-test.dw.(.*)
        transforms.Reroute.topic.replacement = debezium-test-dw-encryption-all3
        transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
        transforms.moveFieldsToHeader.fields = [cdc_code, product]
        transforms.moveFieldsToHeader.headers = [product_code, productname]
        transforms.moveFieldsToHeader.negate = false
        transforms.moveFieldsToHeader.operation = copy
        transforms.moveFieldsToHeader.predicate = 
        transforms.moveFieldsToHeader.type = class org.apache.kafka.connect.transforms.HeaderFrom$Value
        transforms.moveHeadersToValue.fields = [product_code2, productname2]
        transforms.moveHeadersToValue.headers = [product_code, productname]
        transforms.moveHeadersToValue.negate = false
        transforms.moveHeadersToValue.operation = copy
        transforms.moveHeadersToValue.predicate = 
        transforms.moveHeadersToValue.type = class io.debezium.transforms.HeaderToValue
        transforms.unwrap.add.fields = []
        transforms.unwrap.add.headers = []
        transforms.unwrap.delete.handling.mode = drop
        transforms.unwrap.drop.tombstones = true
        transforms.unwrap.negate = false
        transforms.unwrap.predicate = 
        transforms.unwrap.route.by.field = 
        transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
        value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)
[2023-06-21 16:43:01,253] INFO [mysql-dw-valuekey-test|task-0] Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy (io.debezium.config.CommonConnectorConfig:849)
Jun 21, 2023 4:43:01 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2023-06-21 16:43:01,482] INFO Started o.e.j.s.ServletContextHandler@2b80497f{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921)
[2023-06-21 16:43:01,482] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:324)
[2023-06-21 16:43:01,482] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
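Once "Kafka Connect started" appears, the worker can be probed over its REST interface. A quick check, assuming the default REST port 8083 (the commented-out rest.port in the configuration above):

curl -s http://localhost:8083/
# Returns the worker version and the Kafka cluster id as JSON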

5. Loading the Debezium plugin

  • Download the Debezium plugin into the directory configured by plugin.path=/data/service/debezium/connectors2 (see the download sketch below).
  • Then restart the Kafka Connect cluster, and the Debezium plugin will be loaded.
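A minimal download sketch; the connector version matches the 2.2.1.Final reported below, but the Maven Central URL pattern is an assumption to verify against the Debezium release page:

mkdir -p /data/service/debezium/connectors2
cd /data/service/debezium/connectors2
# Plugin tarball from Maven Central (URL pattern assumed; verify the release)
curl -LO https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/2.2.1.Final/debezium-connector-mysql-2.2.1.Final-plugin.tar.gz
tar -xzf debezium-connector-mysql-2.2.1.Final-plugin.tar.gz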

After restarting the Kafka Connect cluster, check whether the Debezium plugin was loaded. As the listing below shows, the Debezium MySQL connector is now available:
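The listing is the response of the Connect REST API's connector-plugins endpoint; a hedged query, again assuming the default REST port 8083:

curl -s http://localhost:8083/connector-plugins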

[
  { "class": "io.debezium.connector.mysql.MySqlConnector", "type": "source", "version": "2.2.1.Final" },
  { "class": "org.apache.kafka.connect.mirror.MirrorCheckpointConnector", "type": "source", "version": "7.3.3-ce" },
  { "class": "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector", "type": "source", "version": "7.3.3-ce" },
  { "class": "org.apache.kafka.connect.mirror.MirrorSourceConnector", "type": "source", "version": "7.3.3-ce" }
]

6. Summary and next steps

Summary:

  • This completes the deployment of a single-node Kafka Connect cluster. If more nodes are needed, start Kafka Connect with the same configuration (in particular the same group.id and internal topics) on additional servers; together they form a multi-node Kafka Connect cluster.

For more on loading Debezium plugins with Kafka Connect, see the author's other posts and the Debezium column.

Next steps:

  • Once the cluster is formed, start several connectors to test its stability and reliability; a hedged registration example follows this list.
  • A UI for the Kafka Connect cluster can be deployed as a further step.
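As a starting point for such tests, a connector can be registered through the REST API. A sketch using the Debezium MySQL connector class seen in the startup log above; the database host, credentials, server id, and table list are hypothetical placeholders, not values from the original post:

# Hypothetical database host/credentials; replace with real values
curl -s -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "mysql-dw-valuekey-test",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "tasks.max": "1",
      "database.hostname": "mysql-host",
      "database.port": "3306",
      "database.user": "debezium",
      "database.password": "secret",
      "database.server.id": "5400",
      "topic.prefix": "debezium-dw-encryption-test",
      "table.include.list": "dw.example_table",
      "schema.history.internal.kafka.bootstrap.servers": "realtime-kafka-001:9092",
      "schema.history.internal.kafka.topic": "schema-history.dw"
    }
  }'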