Confluent Kafka download page:
Download the free Community edition:
The core worker parameters are as follows:
bootstrap.servers=realtime-kafka-001:9092,realtime-kafka-003:9092,realtime-kafka-002:9092
group.id=datasight-confluent-test-debezium-cluster-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
config.storage.topic=offline_confluent_test_debezium_cluster_connect_configs
offset.storage.topic=offline_confluent_test_debezium_cluster_connect_offsets
status.storage.topic=offline_confluent_test_debezium_cluster_connect_statuses
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
offset.storage.partitions=25
status.storage.partitions=5
config.storage.partitions=1
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=true
internal.value.converter.schemas.enable=true
#rest.host.name=0.0.0.0
#rest.port=8083
#rest.advertised.host.name=0.0.0.0
#rest.advertised.port=8083
plugin.path=/data/service/debezium/connectors2
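Before starting the worker, it can help to sanity-check that the properties file defines the keys a distributed worker cannot start without. A minimal sketch; the file path and the sample contents written below are assumptions for illustration, not part of the Confluent distribution:

```shell
# Pre-flight check: verify a Connect worker properties file defines the
# mandatory distributed-mode keys. The path is hypothetical; point PROPS
# at your real file instead of the generated sample.
PROPS=${PROPS:-/tmp/connect-distributed.properties}

# Write a small sample file so this sketch is self-contained.
cat > "$PROPS" <<'EOF'
bootstrap.servers=realtime-kafka-001:9092
group.id=datasight-confluent-test-debezium-cluster-status
config.storage.topic=offline_confluent_test_debezium_cluster_connect_configs
offset.storage.topic=offline_confluent_test_debezium_cluster_connect_offsets
status.storage.topic=offline_confluent_test_debezium_cluster_connect_statuses
EOF

missing=0
for key in bootstrap.servers group.id \
           config.storage.topic offset.storage.topic status.storage.topic; do
  if ! grep -q "^${key}=" "$PROPS"; then
    echo "MISSING: $key"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "properties OK"
```

A check like this catches the most common startup failure (a worker that cannot create or find its internal topics) before the JVM is even launched.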
/data/src/confluent-7.3.3/bin/connect-distributed
The content of the connect-distributed script is shown below; it can be used as-is without modification.
If you need to export Kafka Connect's JMX metrics, you must configure a JMX port and a JMX exporter. For detailed deployment steps, see the author's post below:
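The usual way to attach a JMX exporter is through environment variables that kafka-run-class picks up before launching the worker JVM. A sketch; the port numbers and the exporter jar/config paths are assumptions to adapt to your deployment:

```shell
# Expose the Connect worker's JMX metrics.
# JMX_PORT is read by kafka-run-class to enable remote JMX; the
# jmx_prometheus_javaagent jar path, its port (9404), and the config
# file path are hypothetical examples - adjust to your install.
export JMX_PORT=9999
export KAFKA_OPTS="-javaagent:/data/service/jmx/jmx_prometheus_javaagent.jar=9404:/data/service/jmx/kafka-connect.yml"
echo "KAFKA_OPTS=$KAFKA_OPTS"
```

With these exported in the same shell, the connect-distributed startup command below inherits them automatically.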
if [ $# -lt 1 ]; then
  echo "USAGE: $0 [-daemon] connect-distributed.properties"
  exit 1
fi

base_dir=$(dirname $0)

###
### Classpath additions for Confluent Platform releases (LSB-style layout)
###
# cd -P deals with symlink from /bin to /usr/bin
java_base_dir=$( cd -P "$base_dir/../share/java" && pwd )

# confluent-common: required by kafka-serde-tools
# kafka-serde-tools (e.g. Avro serializer): bundled with confluent-schema-registry package
for library in "confluent-security/connect" "kafka" "confluent-common" "kafka-serde-tools" "monitoring-interceptors"; do
  dir="$java_base_dir/$library"
  if [ -d "$dir" ]; then
    classpath_prefix="$CLASSPATH:"
    if [ "x$CLASSPATH" = "x" ]; then
      classpath_prefix=""
    fi
    CLASSPATH="$classpath_prefix$dir/*"
  fi
done

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
  LOG4J_CONFIG_DIR_NORMAL_INSTALL="/etc/kafka"
  LOG4J_CONFIG_NORMAL_INSTALL="${LOG4J_CONFIG_DIR_NORMAL_INSTALL}/connect-log4j.properties"
  LOG4J_CONFIG_DIR_ZIP_INSTALL="$base_dir/../etc/kafka"
  LOG4J_CONFIG_ZIP_INSTALL="${LOG4J_CONFIG_DIR_ZIP_INSTALL}/connect-log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then
    # Normal install layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL} -Dlog4j.config.dir=${LOG4J_CONFIG_DIR_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then
    # Simple zip file layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL} -Dlog4j.config.dir=${LOG4J_CONFIG_DIR_ZIP_INSTALL}"
  else
    # Fallback to normal default
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties -Dlog4j.config.dir=$base_dir/../config"
  fi
fi
export KAFKA_LOG4J_OPTS

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
  export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

export CLASSPATH
exec $(dirname $0)/kafka-run-class $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@"
The startup command is as follows:
/data/src/confluent-7.3.3/bin/connect-distributed /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties
The complete output of a successful Kafka Connect cluster startup is shown below:
[2023-06-21 16:43:01,249] INFO EnrichedConnectorConfig values:
	config.action.reload = restart
	connector.class = io.debezium.connector.mysql.MySqlConnector
	errors.log.enable = false
	errors.log.include.messages = false
	errors.retry.delay.max.ms = 60000
	errors.retry.timeout = 0
	errors.tolerance = none
	exactly.once.support = requested
	header.converter = null
	key.converter = null
	name = mysql-dw-valuekey-test
	offsets.storage.topic = null
	predicates = []
	tasks.max = 1
	topic.creation.default.exclude = []
	topic.creation.default.include = [.*]
	topic.creation.default.partitions = 12
	topic.creation.default.replication.factor = 3
	topic.creation.groups = []
	transaction.boundary = poll
	transaction.boundary.interval.ms = null
	transforms = [unwrap, moveFieldsToHeader, moveHeadersToValue, Reroute]
	transforms.Reroute.key.enforce.uniqueness = true
	transforms.Reroute.key.field.regex = null
	transforms.Reroute.key.field.replacement = null
	transforms.Reroute.logical.table.cache.size = 16
	transforms.Reroute.negate = false
	transforms.Reroute.predicate =
	transforms.Reroute.topic.regex = debezium-dw-encryption-test.dw.(.*)
	transforms.Reroute.topic.replacement = debezium-test-dw-encryption-all3
	transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
	transforms.moveFieldsToHeader.fields = [cdc_code, product]
	transforms.moveFieldsToHeader.headers = [product_code, productname]
	transforms.moveFieldsToHeader.negate = false
	transforms.moveFieldsToHeader.operation = copy
	transforms.moveFieldsToHeader.predicate =
	transforms.moveFieldsToHeader.type = class org.apache.kafka.connect.transforms.HeaderFrom$Value
	transforms.moveHeadersToValue.fields = [product_code2, productname2]
	transforms.moveHeadersToValue.headers = [product_code, productname]
	transforms.moveHeadersToValue.negate = false
	transforms.moveHeadersToValue.operation = copy
	transforms.moveHeadersToValue.predicate =
	transforms.moveHeadersToValue.type = class io.debezium.transforms.HeaderToValue
	transforms.unwrap.add.fields = []
	transforms.unwrap.add.headers = []
	transforms.unwrap.delete.handling.mode = drop
	transforms.unwrap.drop.tombstones = true
	transforms.unwrap.negate = false
	transforms.unwrap.predicate =
	transforms.unwrap.route.by.field =
	transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
	value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)
[2023-06-21 16:43:01,253] INFO [mysql-dw-valuekey-test|task-0] Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy (io.debezium.config.CommonConnectorConfig:849)
Jun 21, 2023 4:43:01 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected:
WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2023-06-21 16:43:01,482] INFO Started o.e.j.s.ServletContextHandler@2b80497f{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921)
[2023-06-21 16:43:01,482] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:324)
[2023-06-21 16:43:01,482] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
Restart the Kafka Connect cluster and check whether the debezium plugin loaded successfully. As the listing below shows, the debezium plugin was loaded:
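The plugin listing below is obtained from the worker's /connector-plugins REST endpoint. A sketch of pretty-printing such a response; here a captured sample stands in for the live call, and the worker host/port is an assumption:

```shell
# Against a live worker this would be:
#   curl -s http://localhost:8083/connector-plugins
# Here a captured sample response stands in for the live call so the
# sketch runs without a cluster.
RESPONSE='[{"class":"io.debezium.connector.mysql.MySqlConnector","type":"source","version":"2.2.1.Final"}]'
echo "$RESPONSE" | python3 -m json.tool
```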
[
  {"class": "io.debezium.connector.mysql.MySqlConnector", "type": "source", "version": "2.2.1.Final"},
  {"class": "org.apache.kafka.connect.mirror.MirrorCheckpointConnector", "type": "source", "version": "7.3.3-ce"},
  {"class": "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector", "type": "source", "version": "7.3.3-ce"},
  {"class": "org.apache.kafka.connect.mirror.MirrorSourceConnector", "type": "source", "version": "7.3.3-ce"}
]
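With the plugin loaded, a Debezium MySQL connector is registered by POSTing its configuration to the worker's REST API. A hedged sketch: the connection details (hostname, port, user, password, server id, topic prefix, history topic) are placeholder assumptions to replace with your own values:

```shell
# Build a Debezium MySQL connector registration payload.
# All connection values are placeholders; substitute your own
# MySQL and Kafka details before submitting.
cat > /tmp/mysql-connector.json <<'EOF'
{
  "name": "mysql-dw-valuekey-test",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "changeme",
    "database.server.id": "184054",
    "topic.prefix": "debezium-dw-encryption-test",
    "schema.history.internal.kafka.bootstrap.servers": "realtime-kafka-001:9092",
    "schema.history.internal.kafka.topic": "schema-changes.dw"
  }
}
EOF

# Validate the payload is well-formed JSON before submitting it.
python3 -c 'import json; json.load(open("/tmp/mysql-connector.json")); print("payload OK")'

# Against a live worker the registration would be:
#   curl -s -X POST -H "Content-Type: application/json" \
#        --data @/tmp/mysql-connector.json http://localhost:8083/connectors
```

A subsequent GET on /connectors/mysql-dw-valuekey-test/status would then show whether the connector and its task are RUNNING.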
Summary:
For more on loading the debezium plugin with Kafka Connect, see the author's related posts below or the Debezium column:
Further reading: