
Real-Time Synchronization of PostgreSQL Data with Flink CDC

Versions:

JDK:1.8

Flink:1.16.2

Scala:2.11

Hadoop:3.1.3

GitHub repository: https://github.com/rockets0421/FlinkCDC-PG.git

I. Preliminary Setup on PostgreSQL

1. Edit the postgresql.conf configuration file

# Change the WAL level to logical
wal_level = logical             # minimal, replica, or logical

# Increase the maximum number of replication slots (default 10); by default Flink CDC uses one slot per table
max_replication_slots = 20      # max number of replication slots

# Increase the maximum number of WAL sender processes (default 10); keep this in line with max_replication_slots
max_wal_senders = 20            # max number of walsender processes
# Terminate replication connections that have been inactive longer than this; a larger value is safer (default 60s)
wal_sender_timeout = 180s       # in milliseconds; 0 disables

Only wal_level must be changed; the other parameters are optional. If you synchronize more than 10 tables, raise max_replication_slots and max_wal_senders to suitable values.

After editing postgresql.conf, restart the PostgreSQL service for the changes to take effect.

2. Create a user and grant it replication privileges

-- Create a new user
CREATE USER user WITH PASSWORD 'pwd';

-- Grant the user replication privileges
ALTER ROLE user REPLICATION;

-- Allow the user to connect to the database
GRANT CONNECT ON DATABASE test TO user;

-- Grant the user SELECT on all tables in the public schema of the current database
GRANT SELECT ON ALL TABLES IN SCHEMA public TO user;

3. Publish the tables

-- Set puballtables to true for any existing publications
update pg_publication set puballtables=true where pubname is not null;
-- Publish all tables
CREATE PUBLICATION dbz_publication FOR ALL TABLES;
-- Check which tables have been published
select * from pg_publication_tables;

4. Change the replica identity of the tables so that updates and deletes carry the old values

-- Change the replica identity so that updates and deletes include the old row values
ALTER TABLE xxxxxx REPLICA IDENTITY FULL;
-- Check the replica identity ('f' means FULL was set successfully)
select relreplident from pg_class where relname='xxxxxx';

II. Reading PostgreSQL Data with Flink

1. Add the dependencies

<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-postgres-cdc</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.3.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.version}</artifactId>
    <version>${flink.version}</version>
    <exclusions>
        <exclusion>
            <artifactId>kafka-clients</artifactId>
            <groupId>org.apache.kafka</groupId>
        </exclusion>
    </exclusions>
</dependency>

Note: if flink-connector-kafka is among the dependencies, its transitive artifacts may conflict; exclude the conflicting artifacts manually with <exclusions>, as shown above.

2. Create the PostgreSQL source with Flink CDC

import com.ververica.cdc.connectors.postgres.PostgreSQLSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

import java.util.Properties;

public static SourceFunction<String> getPGSource(String database, String schemaList, String tableList, String slotName) {
    Properties properties = new Properties();
    properties.setProperty("snapshot.mode", "always");      // always: snapshot + incremental; never: incremental only
    properties.setProperty("debezium.slot.name", "pg_cdc");
    // Drop the replication slot automatically when the job stops
    properties.setProperty("debezium.slot.drop.on.stop", "true");
    properties.setProperty("include.schema.changes", "true");

    // PostgreSQL source
    SourceFunction<String> sourceFunction = PostgreSQLSource.<String>builder()
            .hostname("localhost")
            .port(5432)
            .database(database)          // database to monitor
            .schemaList(schemaList)      // schemas to monitor
            .tableList(tableList)        // tables to monitor; regular expressions are supported
            .username("postgres")
            .password("postgres")
            .decodingPluginName("pgoutput")
            // Flink CDC uses one slot per table by default; connections that do not set slot.name will conflict
            .slotName(slotName)
            .deserializer(new MyDebezium())   // custom deserializer that handles PostgreSQL date formats and time zones
            //.deserializer(new JsonDebeziumDeserializationSchema())
            .debeziumProperties(properties)
            .build();
    return sourceFunction;
}
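
For context, here is a minimal sketch of wiring this source into a job. The class name, the checkpoint interval, the database/schema/table values, the slot name and the print sink are all illustrative, and getPGSource is assumed to live in the same class:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class PgCdcDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing lets the connector acknowledge its progress on the replication slot
        env.enableCheckpointing(10_000L);

        // Placeholder arguments: database, schema list, table list, slot name
        SourceFunction<String> pgSource = getPGSource("test", "public", "public.orders", "orders_slot");

        env.addSource(pgSource)
           .print();   // replace with a real sink (Kafka, JDBC, ...) in an actual job

        env.execute("flink-cdc-pg-demo");
    }

    // getPGSource(...) from the snippet above is assumed to be defined here
}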

3. Custom deserializer

import com.alibaba.fastjson.JSONObject;
import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import com.ververica.cdc.debezium.utils.TemporalConversions;
import io.debezium.time.*;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class MyDebezium implements DebeziumDeserializationSchema<String> {

    // Time zone used for date/time conversions
    private static String serverTimeZone = "Asia/Shanghai";

    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) throws Exception {
        // 1. JSONObject that will hold the final record
        JSONObject result = new JSONObject();

        // 2. Parse the primary key
        Struct key = (Struct) sourceRecord.key();
        JSONObject keyJs = parseStruct(key);

        // 3. Parse the value
        Struct value = (Struct) sourceRecord.value();
        Struct source = value.getStruct("source");
        JSONObject beforeJson = parseStruct(value.getStruct("before"));
        JSONObject afterJson = parseStruct(value.getStruct("after"));

        // Assemble the output record
        result.put("db", source.get("db").toString().toLowerCase());
        //result.put("schema", source.get("schema").toString().toLowerCase()); // schema name, include if needed
        result.put("table", source.get("table").toString().toLowerCase());
        result.put("key", keyJs);
        result.put("op", value.get("op").toString());
        result.put("op_ts", LocalDateTime.ofInstant(Instant.ofEpochMilli(source.getInt64("ts_ms")), ZoneId.of(serverTimeZone)));
        result.put("current_ts", LocalDateTime.ofInstant(Instant.ofEpochMilli(value.getInt64("ts_ms")), ZoneId.of(serverTimeZone)));
        result.put("before", beforeJson);
        result.put("after", afterJson);

        // Emit the record downstream
        collector.collect(result.toJSONString());
    }

    private JSONObject parseStruct(Struct valueStruct) {
        if (valueStruct == null) return null;
        JSONObject dataJson = new JSONObject();
        for (Field field : valueStruct.schema().fields()) {
            Object v = valueStruct.get(field);
            String type = field.schema().name();
            Object val = null;
            if (v instanceof Long) {
                long vl = (Long) v;
                val = convertLongToTime(vl, type);
            } else if (v instanceof Integer) {
                int iv = (Integer) v;
                val = convertIntToDate(iv, type);
            } else if (v == null) {
                val = null;
            } else {
                val = convertObjToTime(v, type);
            }
            dataJson.put(field.name().toLowerCase(), val);
        }
        return dataJson;
    }

    private Object convertObjToTime(Object obj, String type) {
        Object val = obj;
        if (Time.SCHEMA_NAME.equals(type) || MicroTime.SCHEMA_NAME.equals(type) || NanoTime.SCHEMA_NAME.equals(type)) {
            val = java.sql.Time.valueOf(TemporalConversions.toLocalTime(obj)).toString();
        } else if (Timestamp.SCHEMA_NAME.equals(type) || MicroTimestamp.SCHEMA_NAME.equals(type) || NanoTimestamp.SCHEMA_NAME.equals(type) || ZonedTimestamp.SCHEMA_NAME.equals(type)) {
            val = java.sql.Timestamp.valueOf(TemporalConversions.toLocalDateTime(obj, ZoneId.of(serverTimeZone))).toString();
        }
        return val;
    }

    private Object convertIntToDate(int obj, String type) {
        SchemaBuilder date_schema = SchemaBuilder.int64().name("org.apache.kafka.connect.data.Date");
        Object val = obj;
        if (Date.SCHEMA_NAME.equals(type)) {
            val = org.apache.kafka.connect.data.Date.toLogical(date_schema, obj).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalDate().toString();
        }
        return val;
    }

    private Object convertLongToTime(long obj, String type) {
        SchemaBuilder time_schema = SchemaBuilder.int64().name("org.apache.kafka.connect.data.Time");
        SchemaBuilder date_schema = SchemaBuilder.int64().name("org.apache.kafka.connect.data.Date");
        SchemaBuilder timestamp_schema = SchemaBuilder.int64().name("org.apache.kafka.connect.data.Timestamp");
        Object val = obj;
        if (Time.SCHEMA_NAME.equals(type)) {
            val = org.apache.kafka.connect.data.Time.toLogical(time_schema, (int) obj).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalTime().toString();
        } else if (MicroTime.SCHEMA_NAME.equals(type)) {
            val = org.apache.kafka.connect.data.Time.toLogical(time_schema, (int) (obj / 1000)).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalTime().toString();
        } else if (NanoTime.SCHEMA_NAME.equals(type)) {
            val = org.apache.kafka.connect.data.Time.toLogical(time_schema, (int) (obj / 1000 / 1000)).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalTime().toString();
        } else if (Timestamp.SCHEMA_NAME.equals(type)) {
            LocalDateTime t = org.apache.kafka.connect.data.Timestamp.toLogical(timestamp_schema, obj).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalDateTime();
            val = java.sql.Timestamp.valueOf(t).toString();
        } else if (MicroTimestamp.SCHEMA_NAME.equals(type)) {
            LocalDateTime t = org.apache.kafka.connect.data.Timestamp.toLogical(timestamp_schema, obj / 1000).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalDateTime();
            val = java.sql.Timestamp.valueOf(t).toString();
        } else if (NanoTimestamp.SCHEMA_NAME.equals(type)) {
            LocalDateTime t = org.apache.kafka.connect.data.Timestamp.toLogical(timestamp_schema, obj / 1000 / 1000).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalDateTime();
            val = java.sql.Timestamp.valueOf(t).toString();
        } else if (Date.SCHEMA_NAME.equals(type)) {
            val = org.apache.kafka.connect.data.Date.toLogical(date_schema, (int) obj).toInstant().atZone(ZoneId.of(serverTimeZone)).toLocalDate().toString();
        }
        return val;
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}

Note: this code is adapted from the Scala version at https://huaweicloud.csdn.net/63356ef0d3efff3090b56c01.html.
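
To make the emitted format concrete, the snippet below builds, purely for illustration, the same JSON layout that deserialize() produces for an insert event; every value in it is made up:

import com.alibaba.fastjson.JSONObject;

public class MyDebeziumOutputSample {
    public static void main(String[] args) {
        // Hypothetical record for an INSERT ("op" = "c") on an imaginary public.orders table
        JSONObject key = new JSONObject();
        key.put("id", 1001);

        JSONObject after = new JSONObject();
        after.put("id", 1001);
        after.put("order_dt", "2023-05-01");   // DATE/TIMESTAMP columns arrive as strings after conversion

        JSONObject result = new JSONObject();
        result.put("db", "test");
        result.put("table", "orders");
        result.put("key", key);
        result.put("op", "c");
        result.put("op_ts", "2023-05-01T10:15:30");
        result.put("current_ts", "2023-05-01T10:15:31");
        result.put("after", after);            // "before" is null for inserts and omitted here

        System.out.println(result.toJSONString());
    }
}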

III. Pitfalls Encountered in the Project

1. Type-mismatch errors

When the DataStream API is used to read several tables with one source, a fixed record type cannot be declared, so every change event is converted to a JSONObject. If values from this stream are then used to fill SQL placeholders and the statements are executed against PostgreSQL, the database reports type-mismatch errors.

Option 1: cast explicitly with PostgreSQL's cast syntax, e.g.

a.a1 = b.b1::int8   or   a.a1::varchar = b.b1

Drawback: the SQL statements have to be modified.

Option 2: automatic implicit type conversion

Databases such as MySQL and Oracle implicitly convert between string types (varchar and the like) and numbers by default. PostgreSQL does not, but it lets you define custom casts, so the desired implicit conversions can be added as shown below.

-- Create the casts
-- Note: creating a cast requires privileges on the pg_cast system catalog
-- Note: if more than one implicit cast matches an expression, PostgreSQL cannot decide which one to apply and raises an error;
--       in that case add an explicit cast by hand or drop the redundant cast definitions
CREATE CAST (INTEGER AS VARCHAR) WITH INOUT AS IMPLICIT;
CREATE CAST (VARCHAR AS INTEGER) WITH INOUT AS IMPLICIT;
CREATE CAST (BIGINT AS VARCHAR) WITH INOUT AS IMPLICIT;
CREATE CAST (VARCHAR AS BIGINT) WITH INOUT AS IMPLICIT;
CREATE CAST (DATE AS VARCHAR) WITH INOUT AS IMPLICIT;
CREATE CAST (VARCHAR AS DATE) WITH INOUT AS IMPLICIT;

-- List the casts currently defined
select
    (select typname from pg_type where oid = t.castsource) as "castsource",
    (select typname from pg_type where oid = t.casttarget) as "casttarget",
    castcontext,
    castmethod
from pg_cast as t;

Note: to rely on these implicit conversions, add stringtype=unspecified to the JDBC connection URL so that the PostgreSQL JDBC driver does not bind string parameters to a fixed type.
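
For example, here is a minimal sketch of opening a connection with this flag; host, port, database and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class PgConnectionExample {
    public static void main(String[] args) throws Exception {
        // stringtype=unspecified makes the driver send string parameters without a declared type,
        // so the implicit casts defined above can be applied on the server side
        String url = "jdbc:postgresql://localhost:5432/test?stringtype=unspecified";
        try (Connection conn = DriverManager.getConnection(url, "user", "pwd")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}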

2. When not all tables are published

If all tables have already been published, this problem does not apply:
CREATE PUBLICATION dbz_publication FOR ALL TABLES;

If the upstream database has published only some of the tables, the CDC job must specify the publication name (publication.name) explicitly. Otherwise Debezium tries to create a publication for all tables on its own; if the account lacks the necessary privileges, the database raises an error such as permission denied for database xxx and no data can be read. A sketch of passing the publication name is shown below.
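
Below is a minimal sketch of pointing the connector at an existing publication. It assumes that the Debezium PostgreSQL connector properties publication.name and publication.autocreate.mode are forwarded unchanged through debeziumProperties(...); verify this against the flink-connector-postgres-cdc version in use. The publication name is a placeholder:

import java.util.Properties;

public class PublicationProps {

    // Build Debezium properties that reference a manually created publication
    public static Properties forExistingPublication(String publicationName) {
        Properties properties = new Properties();
        // Use the given publication instead of letting Debezium create dbz_publication itself
        properties.setProperty("publication.name", publicationName);
        // Do not auto-create a publication (auto-creation needs elevated privileges)
        properties.setProperty("publication.autocreate.mode", "disabled");
        return properties;
    }
}

The returned properties can then be handed to PostgreSQLSource.<String>builder()....debeziumProperties(...) in the same way as in section II.2.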
