Flink Real-Time Data Warehouse (6): DWD Layer Build (Part 4) - Trade, Interaction, and User Domain Implementation

Preface

        Today's task is to finish the remaining fact tables of the DWD layer. Autumn recruiting opened earlier than in previous years, so I have to hurry; from what I hear headcount is still tight this year, and applying late would be a sure way to get rejected.

        It's also a weekend today, though as far as I can remember I never really relaxed on a weekend in college, since weekends are when empty classrooms are easiest to find and you can grab a seat with a power outlet. Well... once everything settles down I'll take a proper break.

1. Trade Domain: Order Placement Transactional Fact Table

        Yesterday we created the order pre-processing table (which joins the order detail, order info, order detail activity, and order detail coupon tables with the MySQL lookup dictionary table), precisely so that the order-related fact tables built afterwards don't have to repeat those joins and cause redundant computation.

1.1 Implementation Approach

  • Read the order pre-processing data from the Kafka dwd_trade_order_pre_process topic
  • Filter out the order placement details: newly inserted rows (type = 'insert')
  • Write them to the Kafka order detail topic

1.2 Code

1.2.1 Read the Order Pre-Processing Table

Only the useful columns are kept here; fields such as old and order_status, which are only needed by the cancel-order table, are simply dropped.

// TODO 2. Read the order pre-processing table
tableEnv.executeSql("create table dwd_trade_order_pre_process(" +
        "id string," +
        "order_id string," +
        "user_id string," +
        "sku_id string," +
        "sku_name string," +
        "province_id string," +
        "activity_id string," +
        "activity_rule_id string," +
        "coupon_id string," +
        "date_id string," +
        "create_time string," +
        "operate_date_id string," +
        "operate_time string," +
        "source_id string," +
        "source_type_id string," +
        "source_type_name string," +
        "sku_num string," +
        "split_original_amount string," +
        "split_activity_amount string," +
        "split_coupon_amount string," +
        "split_total_amount string," +
        "`type` string" +
        ")" + MyKafkaUtil.getKafkaDDL("dwd_trade_order_pre_process", "trade_detail"));

1.2.2 Filter Out the Order Placement Data

The granularity of the order pre-processing table is one SKU per row, so when an order is placed the number of new rows here equals the number of SKUs in that order.

// TODO 3. Filter out the order placement data, i.e. the newly inserted rows
Table filterTable = tableEnv.sqlQuery("SELECT " +
        "id ," +
        "order_id ," +
        "user_id ," +
        "sku_id ," +
        "sku_name ," +
        "sku_num ," +
        "province_id ," +
        "activity_id ," +
        "activity_rule_id ," +
        "coupon_id ," +
        "create_time ," +
        "operate_date_id ," +
        "operate_time ," +
        "source_id ," +
        "source_type_id ," +
        "source_type_name ," +
        "split_activity_amount ," +
        "split_coupon_amount ," +
        "split_total_amount " +
        "FROM dwd_trade_order_pre_process " +
        "WHERE `type`='insert'"
);
tableEnv.createTemporaryView("filter_table", filterTable);

1.2.3 Create the Order Detail Table

// TODO 4. Create the DWD-layer order detail table
tableEnv.executeSql("CREATE TABLE dwd_trade_order_detail (" +
        " id STRING," +
        " order_id STRING," +
        " user_id STRING," +
        " sku_id STRING," +
        " sku_name STRING," +
        " sku_num STRING," +
        " province_id STRING," +
        " activity_id STRING," +
        " activity_rule_id STRING," +
        " coupon_id STRING," +
        " create_time STRING," +
        " operate_date_id STRING," +
        " operate_time STRING," +
        " source_id STRING," +
        " source_type_id STRING," +
        " source_type_name STRING," +
        " split_activity_amount STRING," +
        " split_coupon_amount STRING," +
        " split_total_amount STRING" +
        ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_trade_order_detail"));

1.2.4 Write the Data to Kafka

// TODO 5. Write the data out to Kafka
tableEnv.executeSql("INSERT INTO dwd_trade_order_detail SELECT * FROM filter_table");

2. Trade Domain: Order Cancellation Transactional Fact Table

This requirement is also very simple.

Approach

  • Read the pre-processing table (the order placement table above did not need the old and order_status fields, but here they must be kept)
  • Filter out the order cancellation data:
    • type = 'update'
    • order_status = '1003'
    • old['order_status'] is not null (this condition is optional)
  • Write the result to Kafka
public class DwdTradeCancelDetail {
    public static void main(String[] args) throws Exception {
        // TODO 1. Get the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1); // in production, set this to the number of Kafka partitions
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // 1.1 Enable checkpointing
        env.enableCheckpointing(5 * 60000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointStorage("hdfs://hadoop102:8020/s/ck");
        env.getCheckpointConfig().setCheckpointTimeout(10 * 60000L);
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(2); // max number of concurrent checkpoints
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000L)); // fixed-delay restart: 3 attempts, 5 s apart
        // 1.2 Set the state backend
        env.setStateBackend(new HashMapStateBackend());
        // TODO 2. Read the order pre-processing table
        tableEnv.executeSql("create table dwd_trade_order_pre_process(" +
                "id string," +
                "order_id string," +
                "user_id string," +
                "order_status string," +
                "sku_id string," +
                "sku_name string," +
                "province_id string," +
                "activity_id string," +
                "activity_rule_id string," +
                "coupon_id string," +
                "date_id string," +
                "create_time string," +
                "operate_date_id string," +
                "operate_time string," +
                "source_id string," +
                "source_type_id string," +
                "source_type_name string," +
                "sku_num string," +
                "split_original_amount string," +
                "split_activity_amount string," +
                "split_coupon_amount string," +
                "split_total_amount string," +
                "`type` string," +
                "`old` map<string,string>" +
                ")" + MyKafkaUtil.getKafkaDDL("dwd_trade_order_pre_process", "trade_detail"));
        // TODO 3. Filter out the order cancellation data
        Table filterTable = tableEnv.sqlQuery("" +
                "SELECT " +
                " id ," +
                " order_id ," +
                " user_id ," +
                " order_status ," +
                " sku_id ," +
                " sku_name ," +
                " province_id ," +
                " activity_id ," +
                " activity_rule_id ," +
                " coupon_id ," +
                " date_id ," +
                " create_time ," +
                " operate_date_id ," +
                " operate_time ," +
                " source_id ," +
                " source_type_id ," +
                " source_type_name ," +
                " sku_num ," +
                " split_original_amount ," +
                " split_activity_amount ," +
                " split_coupon_amount ," +
                " split_total_amount ," +
                " `type` ," +
                " `old` " +
                "FROM dwd_trade_order_pre_process " +
                "WHERE `type`='update' " +
                "AND old['order_status'] is not null " +
                "AND order_status='1003'"
        );
        tableEnv.createTemporaryView("filter_table", filterTable);
        // TODO 4. Create the DWD-layer order cancellation table
        tableEnv.executeSql("CREATE TABLE dwd_trade_cancel_detail(" +
                "id STRING," +
                "order_id STRING," +
                "user_id STRING," +
                "order_status STRING," +
                "sku_id STRING," +
                "sku_name STRING," +
                "province_id STRING," +
                "activity_id STRING," +
                "activity_rule_id STRING," +
                "coupon_id STRING," +
                "date_id STRING," +
                "create_time STRING," +
                "operate_date_id STRING," +
                "operate_time STRING," +
                "source_id STRING," +
                "source_type_id STRING," +
                "source_type_name STRING," +
                "sku_num STRING," +
                "split_original_amount STRING," +
                "split_activity_amount STRING," +
                "split_coupon_amount STRING," +
                "split_total_amount STRING," +
                "`type` STRING," +
                "`old` map<string,string>" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_trade_cancel_detail")
        );
        // TODO 5. Write out the data
        tableEnv.executeSql("INSERT INTO dwd_trade_cancel_detail SELECT * FROM filter_table");
        // TODO 6. Start the job
        env.execute("DwdTradeCancelDetail");
    }
}

3. Trade Domain: Payment Success Transactional Fact Table

        The driving table for the payment success fact table is not the order table or the order detail table but the payment table: the business system has a payment_info table in which each row records the payment result (success or failure) of one order.

        We should still join it with the order detail fact table, because any later analysis of payment records benefits from a payment fact table with the finest possible granularity and rich dimensions. After joining with the order detail table the granularity becomes one SKU per row, and many sku-related dimension fields (for example sku_name and the source info) are added.

3.1 Implementation Approach

1) Set the TTL

        The payment success fact table joins the payment_info data from the business database with the order detail table. Order detail rows are generated at order time and go through several processing steps before reaching the order detail topic, while a payment usually only has to be completed within 15 minutes of placing the order, so payment data can lag the order detail data by up to 15 minutes. Allowing for possible out-of-order events, the TTL is set to 15 min + 5 s.
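As a minimal sketch (using the same setIdleStateRetention call that appears in the refund job later in this post), the 15 min + 5 s state TTL could be configured like this:

// Assumes: import java.time.Duration; and the StreamTableEnvironment `tableEnv` created in the job.
tableEnv.getConfig().setIdleStateRetention(Duration.ofSeconds(15 * 60 + 5)); // 905 s = 15 min + 5 s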

2) Get the order detail data

        A user must place an order before a payment can succeed, so the payment success details are necessarily a subset of the order detail data; we therefore read directly from the DWD order detail topic.

3) Filter the payment table data

        We need the payment type, the callback time (payment success time), and the payment success timestamp.

        In production, when a user initiates a payment, a row is inserted into the payment table with empty callback time and callback content. The backend then calls a third-party payment API; if the payment succeeds, the callback information is non-empty, the payment row is updated to fill in the callback time and content, and payment_status is set to the success code (1602 in this project). After that the payment row no longer changes. Therefore a row whose operation type is update and whose status code is 1602 is a payment success record.

From the analysis above, a payment success change-log record must satisfy two conditions:

(1) payment_status equals 1602;

(2) the operation type is update.

4) Build the MySQL-Lookup dictionary table

        This is used to degenerate the payment type field into its readable dimension value.
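MysqlUtil.getBaseDicLookUpDDL() is used as a black box throughout this post; the following is only a plausible sketch of the lookup table it registers (the JDBC URL, credentials, and cache settings are placeholders, and the base_dic column list is assumed from the standard gmall schema):

// Hypothetical sketch of the base_dic lookup-table DDL returned by MysqlUtil.getBaseDicLookUpDDL().
public static String getBaseDicLookUpDDL() {
    return "create table `base_dic`(" +
            " `dic_code` string," +
            " `dic_name` string," +
            " `parent_code` string," +
            " `create_time` timestamp," +
            " `operate_time` timestamp," +
            " primary key(`dic_code`) not enforced" +
            ") with (" +
            " 'connector' = 'jdbc'," +
            " 'url' = 'jdbc:mysql://hadoop102:3306/gmall'," + // assumed host and database
            " 'table-name' = 'base_dic'," +
            " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
            " 'username' = 'root'," +                         // placeholder credentials
            " 'password' = '000000'," +
            " 'lookup.cache.max-rows' = '10'," +
            " 'lookup.cache.ttl' = '1 hour'" +
            ")";
}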

5) Join the three tables into a payment success wide table and write it to the Kafka payment success topic

        The finest granularity of the payment success business process is the payment success of one SKU; the payment_info data matches that granularity, so it is used as the driving table.

(1) Every payment_info row necessarily has matching rows in the order detail table, so the driving table has no rows of its own that would be lost; an inner join with the order detail table is therefore used.

(2) The join with the dictionary table only retrieves the payment type name for payment_type; again no rows exist only in the driving table, so an inner join is used. The same reasoning applies to all dictionary-table joins below and will not be repeated.

3.2 Code

3.2.1 Read the topic_db Data

 Note: this job consumes three tables, two of which come from Kafka and therefore need a consumer group id. Since they live in the same job, we can simply give them the same group id:

// TODO 2. Read the topic_db data (keep this consumer group id consistent with the one used below)
tableEnv.executeSql(MyKafkaUtil.getTopicDb("dwd_trade_pay_detail_suc"));
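MyKafkaUtil.getTopicDb(groupId) is another helper taken as given; a reasonable guess at its implementation is simply the generic topic_db change-log DDL that sections 6 and 7 spell out explicitly, plus a processing-time column for lookup joins (column names assumed to match how pt is used below):

// Hypothetical sketch of MyKafkaUtil.getTopicDb(groupId).
public static String getTopicDb(String groupId) {
    return "create table topic_db(" +
            " `database` string," +
            " `table` string," +
            " `type` string," +
            " `data` map<string, string>," +
            " `old` map<string, string>," +
            " `ts` string," +
            " `pt` as PROCTIME()" + // processing time, used for lookup joins
            ")" + getKafkaDDL("topic_db", groupId);
}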

3.2.2 Filter Out the Payment Success Data

// TODO 3. Filter out the payment success data
Table paymentInfo = tableEnv.sqlQuery("SELECT " +
        "data['user_id'] user_id, " +
        "data['order_id'] order_id, " +
        "data['payment_type'] payment_type, " +
        "data['callback_time'] callback_time, " +
        "pt " + // used for the lookup join
        "FROM topic_db " +
        "WHERE `database` = 'gmall' " +
        "AND `table` = 'payment_info' " +
        "AND `type` = 'update' " +
        "AND data['payment_status'] = '1602'"
);
tableEnv.createTemporaryView("payment_info", paymentInfo);

3.2.3 Consume the Order Detail Fact Table

// TODO 4. Consume the order detail fact table created in 1.2.3
// (the column list mirrors that sink DDL; the consumer group id follows the note in 3.2.1)
tableEnv.executeSql("CREATE TABLE dwd_trade_order_detail (" +
        " id STRING," +
        " order_id STRING," +
        " user_id STRING," +
        " sku_id STRING," +
        " sku_name STRING," +
        " sku_num STRING," +
        " province_id STRING," +
        " activity_id STRING," +
        " activity_rule_id STRING," +
        " coupon_id STRING," +
        " create_time STRING," +
        " operate_date_id STRING," +
        " operate_time STRING," +
        " source_id STRING," +
        " source_type_id STRING," +
        " source_type_name STRING," +
        " split_activity_amount STRING," +
        " split_coupon_amount STRING," +
        " split_total_amount STRING" +
        ")" + MyKafkaUtil.getKafkaDDL("dwd_trade_order_detail", "dwd_trade_pay_detail_suc"));

3.2.4 Read the MySQL Lookup Table

// TODO 5. Read the MySQL lookup table
tableEnv.executeSql(MysqlUtil.getBaseDicLookUpDDL());

3.2.5 Join the 3 Tables

Join the payment success data, the order detail data, and the dictionary table:

// TODO 6. Join the 3 tables
Table resultTable = tableEnv.sqlQuery("SELECT " +
        " od.id order_detail_id," +
        " od.order_id," +
        " od.user_id," +
        " od.sku_id," +
        " od.sku_name," +
        " od.province_id," +
        " od.activity_id," +
        " od.activity_rule_id," +
        " od.coupon_id," +
        " pi.payment_type payment_type_code," +
        " dic.dic_name payment_type_name," +
        " pi.callback_time," +
        " od.source_id," +
        " od.source_type_id source_type_code," +
        " od.source_type_name," +
        " od.sku_num," +
        " od.split_activity_amount," +
        " od.split_coupon_amount," +
        " od.split_total_amount split_payment_amount " +
        "FROM payment_info pi " +
        "join dwd_trade_order_detail od " +
        "on pi.order_id = od.order_id " +
        "join `base_dic` for system_time as of pi.pt as dic " +
        "on pi.payment_type = dic.dic_code ");
tableEnv.createTemporaryView("result_table", resultTable);

3.2.6 Create the Kafka Payment Success Fact Table and Write the Data

// TODO 7. Create the Kafka payment success table
tableEnv.executeSql(
        "CREATE TABLE dwd_trade_pay_detail_suc (" +
        " order_detail_id string," +
        " order_id string," +
        " user_id string," +
        " sku_id string," +
        " sku_name string," +
        " province_id string," +
        " activity_id string," +
        " activity_rule_id string," +
        " coupon_id string," +
        " payment_type_code string," +
        " payment_type_name string," +
        " callback_time string," +
        " source_id string," +
        " source_type_code string," +
        " source_type_name string," +
        " sku_num string," +
        " split_activity_amount string," +
        " split_coupon_amount string," +
        " split_payment_amount string," +
        " primary key(order_detail_id) not enforced " +
        ")" + MyKafkaUtil.getUpsertKafkaDDL("dwd_trade_pay_detail_suc")
);
// TODO 8. Write the data out
tableEnv.executeSql("INSERT INTO dwd_trade_pay_detail_suc SELECT * FROM result_table");

The primary key is added so that rows with the same key land in the same partition, which makes deduplication easier for downstream consumers (it can also be omitted).

The final env.execute() is not needed here, because the statements are submitted through tableEnv; calling env.execute() would only produce a warning that there are no operators to execute.
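Because the table declares a primary key, the sink has to be an upsert-kafka table rather than a plain Kafka sink. MyKafkaUtil.getUpsertKafkaDDL is again only sketched here as an assumption (the broker address is a placeholder):

// Hypothetical sketch of MyKafkaUtil.getUpsertKafkaDDL(topic): an upsert-kafka sink keyed by the declared primary key.
public static String getUpsertKafkaDDL(String topic) {
    return " with ('connector' = 'upsert-kafka'," +
            " 'topic' = '" + topic + "'," +
            " 'properties.bootstrap.servers' = 'hadoop102:9092'," + // assumed broker list
            " 'key.format' = 'json'," +
            " 'value.format' = 'json')";
}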

4. Trade Domain: Order Refund Transactional Fact Table

A refund (order return) is cancelling an order after payment has succeeded, while an order cancellation happens before the order has been paid.

4.1 Approach

        The refund fact table does not need to join the order placement fact table, because the refund table's granularity is already one SKU per row (order_refund_info contains sku_id, which is enough to join the sku dimension table in the DIM layer later). What we do join is the order table, in order to pick up some analysis dimensions from it (for example province_id; in my opinion activity id, coupon id, and similar dimension foreign keys could also be added).

A refund also updates the order table (order_status changes from 1002 'paid' to 1005 'refund in progress'), so while filtering the order table we can filter out the refund-related rows at the same time and save some processing.

Note: we join the order table here because a refund always triggers an update of the order table and the two changes happen almost simultaneously, so both can be read from topic_db in one pass. There is no need to read the order pre-processing table (it itself depends on the order table, so going through it would be pointless).

So the overall approach is:

  • Create the topic_db table
  • Filter out the refund table rows and the refund-related rows of the order table
    • refund table filter (table = 'order_refund_info')
    • refund rows in the order table (order_status = '1005' & type = 'update' & the order_status field actually changed)
  • Create the MySQL dictionary lookup table (to get the refund type and refund reason type names)
  • Join the 3 tables
  • Create the Kafka dwd_trade_order_refund table
  • Write the data

4.2 Implementation

4.2.1 Create the topic_db Table and Filter Out the Refund Data

// TODO 2. Read the topic_db data
tableEnv.executeSql(MyKafkaUtil.getTopicDb("dwd_trade_order_refund"));
// TODO 3. Filter out the refund table
Table refundTable = tableEnv.sqlQuery("SELECT " +
        "data['id'] id, " +
        "data['user_id'] user_id, " +
        "data['order_id'] order_id, " +
        "data['sku_id'] sku_id, " +
        "data['refund_type'] refund_type, " +
        "data['refund_num'] refund_num, " +
        "data['refund_amount'] refund_amount, " +
        "data['refund_reason_type'] refund_reason_type, " +
        "data['refund_reason_txt'] refund_reason_txt, " +
        "data['create_time'] create_time, " +
        "pt " +
        "FROM topic_db " +
        "WHERE `database`='gmall' " +
        "AND `table`='order_refund_info' "
);
tableEnv.createTemporaryView("order_refund_info", refundTable);

4.2.2 Filter Out the Refund Rows of the Order Table

// TODO 4. Filter out the refund-related rows of the order table
Table orderInfoRefund = tableEnv.sqlQuery("SELECT " +
        "data['id'] id," +
        "data['province_id'] province_id," +
        "`old` " +
        "FROM topic_db " +
        "WHERE `database`='gmall' " +
        "AND `table`='order_info' " +
        "AND `type` = 'update' " +
        "AND data['order_status'] = '1005' " +
        "AND old['order_status'] is not null"
);
tableEnv.createTemporaryView("order_info_refund", orderInfoRefund);

4.2.3 Create the MySQL Lookup Dictionary Table

This is used to get the refund type name and the refund reason type name:

// TODO 5. Read the MySQL lookup table
tableEnv.executeSql(MysqlUtil.getBaseDicLookUpDDL());

4.2.4 Join the Data of the 3 Tables

// TODO 6. Join the 3 tables
Table resultTable = tableEnv.sqlQuery("SELECT " +
        "ri.id," +
        "ri.user_id," +
        "ri.order_id," +
        "ri.sku_id," +
        "oi.province_id," +
        "date_format(ri.create_time,'yyyy-MM-dd') date_id," +
        "ri.create_time," +
        "ri.refund_type refund_type_code," +
        "type_dic.dic_name refund_type_name," +
        "ri.refund_reason_type refund_reason_type_code," +
        "reason_dic.dic_name refund_reason_type_name," +
        "ri.refund_reason_txt," +
        "ri.refund_num," +
        "ri.refund_amount " +
        "from order_refund_info ri " +
        "join " +
        "order_info_refund oi " +
        "on ri.order_id = oi.id " +
        "join " +
        "base_dic for system_time as of ri.pt as type_dic " +
        "on ri.refund_type = type_dic.dic_code " +
        "join " +
        "base_dic for system_time as of ri.pt as reason_dic " +
        "on ri.refund_reason_type = reason_dic.dic_code"
);
tableEnv.createTemporaryView("result_table", resultTable);

4.2.5 Create the Refund Fact Table and Write the Data

// TODO 7. Create the Kafka refund fact table
tableEnv.executeSql("CREATE TABLE dwd_trade_order_refund (" +
        "id string," +
        "user_id string," +
        "order_id string," +
        "sku_id string," +
        "province_id string," +
        "date_id string," +
        "create_time string," +
        "refund_type_code string," +
        "refund_type_name string," +
        "refund_reason_type_code string," +
        "refund_reason_type_name string," +
        "refund_reason_txt string," +
        "refund_num string," +
        "refund_amount string" +
        ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_trade_order_refund")
);
// TODO 8. Write the data
tableEnv.executeSql("INSERT INTO dwd_trade_order_refund SELECT * FROM result_table");

5. Trade Domain: Refund Success Transactional Fact Table

The refund success business process affects the refund payment table (a row is inserted), the order table (the order status changes), and the refund table (the refund status changes).

Main tasks

  • Extract the successful refunds from the refund payment table and join the dictionary table to degenerate the payment type dimension
  • Extract the successfully refunded orders from the order table
  • Extract the successfully refunded product details from the refund table

Notes

  • The refund payment table's granularity is also one SKU per row (it contains sku_id)
  • Joining the order table provides province_id and user_id
  • Joining the refund table provides the refunded quantity of the product
  • Joining the dictionary table degenerates the payment type of the refund payment table

5.1 Approach

The approach is the same as before:

  • Create the topic_db table
  • Filter the successful refunds out of the refund payment table
    • refund_status = '0705'
  • Filter the refund-success rows out of the order table and the refund table
    • order table: order_status = '1006'
    • refund table: refund_status = '0705'
  • Create the dictionary lookup table (to degenerate the payment type field)
  • Join the 4 tables
    • join on order_id
  • Create the Kafka topic table and write the data

5.2 Implementation

Same pattern as before, so no further explanation:

public class DwdTradeRefundPaySuc {
    public static void main(String[] args) {
        // TODO 1. Get the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1); // in production, set this to the number of Kafka partitions
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // 1.1 Enable checkpointing
        env.enableCheckpointing(5 * 60000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointStorage("hdfs://hadoop102:8020/s/ck");
        env.getCheckpointConfig().setCheckpointTimeout(10 * 60000L);
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(2); // max number of concurrent checkpoints
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000L)); // fixed-delay restart: 3 attempts, 5 s apart
        // 1.2 Set the state backend
        env.setStateBackend(new HashMapStateBackend());
        // 1.3 Set the state TTL
        tableEnv.getConfig().setIdleStateRetention(Duration.ofSeconds(5));
        // TODO 2. Read the topic_db data
        tableEnv.executeSql(MyKafkaUtil.getTopicDb("dwd_trade_refund_pay_suc"));
        // TODO 3. Filter out the refund payment table
        Table refundPayment = tableEnv.sqlQuery("SELECT " +
                "data['id'] id," +
                "data['order_id'] order_id," +
                "data['sku_id'] sku_id," +
                "data['payment_type'] payment_type," +
                "data['callback_time'] callback_time," +
                "data['total_amount'] total_amount," +
                "pt " +
                "FROM topic_db " +
                "WHERE `database`='gmall' " +
                "AND `table` = 'refund_payment' " +
                "AND data['refund_status']='0705' " +
                "AND `type` = 'update' " +
                "AND old['refund_status'] is not null"
        );
        tableEnv.createTemporaryView("refund_payment", refundPayment);
        // TODO 4. Filter the refund-success rows out of the order table
        Table orderRefundSuc = tableEnv.sqlQuery("SELECT " +
                "data['id'] id, " +
                "data['user_id'] user_id, " +
                "data['province_id'] province_id, " +
                "pt " +
                "FROM topic_db " +
                "WHERE `database`='gmall' " +
                "AND `table`='order_info' " +
                "AND `type` = 'update' " +
                "AND data['order_status'] = '1006' " +
                "AND old['order_status'] is not null"
        );
        tableEnv.createTemporaryView("order_refund_suc", orderRefundSuc);
        // TODO 5. Filter the refund-success rows out of the refund table
        Table refundSuc = tableEnv.sqlQuery("SELECT " +
                "data['order_id'] order_id, " +
                "data['sku_id'] sku_id, " +
                "data['refund_num'] refund_num, " +
                "pt " +
                "FROM topic_db " +
                "WHERE `database`='gmall' " +
                "AND `table`='order_refund_info' " +
                "AND `type` = 'update' " +
                "AND data['refund_status'] = '0705' " +
                "AND old['refund_status'] is not null"
        );
        tableEnv.createTemporaryView("order_refund_info", refundSuc);
        // TODO 6. Create the MySQL lookup table
        tableEnv.executeSql(MysqlUtil.getBaseDicLookUpDDL());
        // TODO 7. Join the 4 tables
        Table resultTable = tableEnv.sqlQuery("select " +
                "rp.id," +
                "oi.user_id," +
                "rp.order_id," +
                "rp.sku_id," +
                "oi.province_id," +
                "rp.payment_type payment_type_code," +
                "dic.dic_name payment_type_name," +
                "date_format(rp.callback_time,'yyyy-MM-dd') date_id," +
                "rp.callback_time," +
                "ri.refund_num," +
                "rp.total_amount refund_amount " +
                "from refund_payment rp " +
                "join " +
                "order_refund_suc oi " +
                "on rp.order_id = oi.id " +
                "join " +
                "order_refund_info ri " +
                "on rp.order_id = ri.order_id " +
                "and rp.sku_id = ri.sku_id " +
                "join " +
                "base_dic for system_time as of rp.pt as dic " +
                "on rp.payment_type = dic.dic_code");
        tableEnv.createTemporaryView("result_table", resultTable);
        // TODO 8. Create the Kafka refund success fact table
        tableEnv.executeSql("create table dwd_trade_refund_pay_suc(" +
                "id string," +
                "user_id string," +
                "order_id string," +
                "sku_id string," +
                "province_id string," +
                "payment_type_code string," +
                "payment_type_name string," +
                "date_id string," +
                "callback_time string," +
                "refund_num string," +
                "refund_amount string " +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_trade_refund_pay_suc"));
        // TODO 9. Write the data
        tableEnv.executeSql("INSERT INTO dwd_trade_refund_pay_suc SELECT * FROM result_table");
    }
}

6. Tool Domain: Coupon Transactional Fact Tables

Let's first look at the data in the coupon_use table of the business system:

        The granularity of this table is one coupon usage per order. Each row records which user claimed which coupon and when, when the coupon was used (using_time is the time it was used at order placement, used_time is the payment time), when it expires, and so on.

6.1 Approach

        Every time a coupon claim happens, a new row is inserted into coupon_use, so for coupon claims we simply filter type = 'insert' records out of topic_db.

        For coupon use at order time and coupon use at payment time, besides filtering type = 'update', we only need to check whether using_time (order) or used_time (payment) is non-null.

Summary

  • Coupon claim
    • type = 'insert'
  • Coupon use (order)
    • type = 'update' AND using_time is not null AND used_time is null
    • or: type = 'update' AND data['coupon_status'] = '1402' AND old['coupon_status'] = '1401'
  • Coupon use (payment)
    • type = 'update' AND used_time is not null
    • or: type = 'update' AND data['coupon_status'] = '1402' AND old['coupon_status'] = '1401'

6.2 Implementation

6.2.1 Tool Domain: Coupon Claim Transactional Fact Table

Very straightforward, so no further commentary:

public class DwdToolCouponGet {
    public static void main(String[] args) throws Exception {
        // TODO 1. Prepare the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // TODO 2. Checkpointing and state backend settings
        env.enableCheckpointing(3000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000L);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3000L);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION
        );
        env.setRestartStrategy(RestartStrategies.failureRateRestart(
                3, Time.days(1), Time.minutes(1)
        ));
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig().setCheckpointStorage(
                "hdfs://hadoop102:8020/ck"
        );
        System.setProperty("HADOOP_USER_NAME", "lyh");
        // TODO 3. Read the business data from Kafka and register it as a Flink SQL table
        tableEnv.executeSql("create table `topic_db`(" +
                "`database` string," +
                "`table` string," +
                "`data` map<string, string>," +
                "`type` string," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaDDL("topic_db", "dwd_tool_coupon_get"));
        // TODO 4. Read the coupon usage data and register it as a table
        Table resultTable = tableEnv.sqlQuery("select " +
                "data['id'] id," +
                "data['coupon_id'] coupon_id," +
                "data['user_id'] user_id," +
                "date_format(data['get_time'],'yyyy-MM-dd') date_id," +
                "data['get_time'] get_time," +
                "ts " +
                "from topic_db " +
                "where `table` = 'coupon_use' " +
                "and `type` = 'insert'");
        tableEnv.createTemporaryView("result_table", resultTable);
        // TODO 5. Create the Kafka-Connector dwd_tool_coupon_get table
        tableEnv.executeSql("create table dwd_tool_coupon_get (" +
                "id string," +
                "coupon_id string," +
                "user_id string," +
                "date_id string," +
                "get_time string," +
                "ts string" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_tool_coupon_get"));
        // TODO 6. Write the data into the Kafka-Connector table
        tableEnv.executeSql("insert into dwd_tool_coupon_get select * from result_table");
    }
}

6.2.2 Tool Domain: Coupon Use (Order) Transactional Fact Table

public class DwdToolCouponOrder {
    public static void main(String[] args) throws Exception {
        // TODO 1. Prepare the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // TODO 2. Checkpointing and state backend settings
        env.enableCheckpointing(3000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000L);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3000L);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION
        );
        env.setRestartStrategy(RestartStrategies.failureRateRestart(
                3, Time.days(1), Time.minutes(1)
        ));
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig().setCheckpointStorage(
                "hdfs://hadoop102:8020/ck"
        );
        System.setProperty("HADOOP_USER_NAME", "lyh");
        // TODO 3. Read the business data from Kafka and register it as a Flink SQL table
        tableEnv.executeSql("create table `topic_db` (" +
                "`database` string," +
                "`table` string," +
                "`data` map<string, string>," +
                "`type` string," +
                "`old` map<string, string>," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaDDL("topic_db", "dwd_tool_coupon_order"));
        // TODO 4. Read the coupon usage data and filter the coupon-at-order records
        Table couponUseOrder = tableEnv.sqlQuery("select " +
                "data['id'] id," +
                "data['coupon_id'] coupon_id," +
                "data['user_id'] user_id," +
                "data['order_id'] order_id," +
                "date_format(data['using_time'],'yyyy-MM-dd') date_id," +
                "data['using_time'] using_time," +
                "ts " +
                "from topic_db " +
                "where `table` = 'coupon_use' " +
                "and `type` = 'update' " +
                "and data['coupon_status'] = '1402' " +
                "and `old`['coupon_status'] = '1401'");
        tableEnv.createTemporaryView("result_table", couponUseOrder);
        // TODO 5. Create the Kafka-Connector dwd_tool_coupon_order table
        tableEnv.executeSql("create table dwd_tool_coupon_order(" +
                "id string," +
                "coupon_id string," +
                "user_id string," +
                "order_id string," +
                "date_id string," +
                "order_time string," +
                "ts string" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_tool_coupon_order"));
        // TODO 6. Write the data into the Kafka-Connector table
        tableEnv.executeSql("" +
                "insert into dwd_tool_coupon_order select " +
                "id," +
                "coupon_id," +
                "user_id," +
                "order_id," +
                "date_id," +
                "using_time order_time," +
                "ts from result_table");
    }
}

6.2.3 Tool Domain: Coupon Use (Payment) Transactional Fact Table

public class DwdToolCouponPay {
    public static void main(String[] args) throws Exception {
        // TODO 1. Prepare the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // TODO 2. Checkpointing and state backend settings
        env.enableCheckpointing(3000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000L);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3000L);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION
        );
        env.setRestartStrategy(RestartStrategies.failureRateRestart(
                3, Time.days(1), Time.minutes(1)
        ));
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig().setCheckpointStorage(
                "hdfs://hadoop102:8020/ck"
        );
        System.setProperty("HADOOP_USER_NAME", "lyh");
        // TODO 3. Read the business data from Kafka and register it as a Flink SQL table
        tableEnv.executeSql("create table `topic_db` (" +
                "`database` string," +
                "`table` string," +
                "`data` map<string, string>," +
                "`type` string," +
                "`old` map<string, string>," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaDDL("topic_db", "dwd_tool_coupon_pay"));
        // TODO 4. Read the coupon usage data and filter the coupon-at-payment records
        Table couponUsePay = tableEnv.sqlQuery("select " +
                "data['id'] id," +
                "data['coupon_id'] coupon_id," +
                "data['user_id'] user_id," +
                "data['order_id'] order_id," +
                "date_format(data['used_time'],'yyyy-MM-dd') date_id," +
                "data['used_time'] used_time," +
                "`old`," +
                "ts " +
                "from topic_db " +
                "where `table` = 'coupon_use' " +
                "and `type` = 'update' " +
                "and data['used_time'] is not null");
        tableEnv.createTemporaryView("coupon_use_pay", couponUsePay);
        // TODO 5. Create the Kafka-Connector dwd_tool_coupon_pay table
        tableEnv.executeSql("create table dwd_tool_coupon_pay(" +
                "id string," +
                "coupon_id string," +
                "user_id string," +
                "order_id string," +
                "date_id string," +
                "payment_time string," +
                "ts string" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_tool_coupon_pay"));
        // TODO 6. Write the data into the Kafka-Connector table
        tableEnv.executeSql("" +
                "insert into dwd_tool_coupon_pay select " +
                "id," +
                "coupon_id," +
                "user_id," +
                "order_id," +
                "date_id," +
                "used_time payment_time," +
                "ts from coupon_use_pay");
    }
}

7. Interaction Domain


7.1 Interaction Domain: Product Favorites Transactional Fact Table

 Let's take a look at the favorites table (favor_info) in the business system:

        As you can see, the granularity of the favorites table is one product per row. Every time a product is favorited, a row is inserted into favor_info; when a favorite is cancelled the row is not deleted; instead the is_cancel field is set to 1 and cancel_time is filled in. So the filter conditions are simple:

  1. insert
  2. update with is_cancel = 1 (the product is favorited again after the favorite was cancelled)
public class DwdInteractionFavorAdd {
    public static void main(String[] args) throws Exception {
        // TODO 1. Prepare the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // TODO 2. Checkpointing and state backend settings
        env.enableCheckpointing(3000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000L);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3000L);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION
        );
        env.setRestartStrategy(RestartStrategies.failureRateRestart(
                3, Time.days(1), Time.minutes(1)
        ));
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig().setCheckpointStorage(
                "hdfs://hadoop102:8020/ck"
        );
        System.setProperty("HADOOP_USER_NAME", "lyh");
        // TODO 3. Read the business data from Kafka and register it as a Flink SQL table
        tableEnv.executeSql("create table topic_db(" +
                "`database` string," +
                "`table` string," +
                "`type` string," +
                "`data` map<string, string>," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaDDL("topic_db", "dwd_interaction_favor_add"));
        // TODO 4. Read the favorites table data
        Table favorInfo = tableEnv.sqlQuery("select " +
                "data['id'] id," +
                "data['user_id'] user_id," +
                "data['sku_id'] sku_id," +
                "date_format(data['create_time'],'yyyy-MM-dd') date_id," +
                "data['create_time'] create_time," +
                "ts " +
                "from topic_db " +
                "where `table` = 'favor_info' " +
                "and (`type` = 'insert' or (`type` = 'update' and data['is_cancel'] = '1'))");
        tableEnv.createTemporaryView("favor_info", favorInfo);
        // TODO 5. Create the Kafka-Connector dwd_interaction_favor_add table
        tableEnv.executeSql("create table dwd_interaction_favor_add (" +
                "id string," +
                "user_id string," +
                "sku_id string," +
                "date_id string," +
                "create_time string," +
                "ts string" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_interaction_favor_add"));
        // TODO 6. Write the data into the Kafka-Connector table
        tableEnv.executeSql("" +
                "insert into dwd_interaction_favor_add select * from favor_info");
    }
}

7.2 Interaction Domain: Comment Transactional Fact Table

Task: build a MySQL-Lookup dictionary table, read the comment table data, join it with the dictionary table to get the appraise name (good / medium / bad / auto), and write the result to the Kafka comment topic.

So the logic for this table is also very simple:

  • Filter the comment table records out of topic_db
  • Keep only the rows with type = 'insert' (in practice insert is the only type anyway)
  • Create the MySQL lookup dictionary table (to degenerate the appraise level into its name)
  • Join the two tables
  • Write the result to Kafka
public class DwdInteractionComment {
    public static void main(String[] args) throws Exception {
        // TODO 1. Prepare the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // Get the configuration object
        Configuration configuration = tableEnv.getConfig().getConfiguration();
        // Set an expiration time for the state kept by the table join
        configuration.setString("table.exec.state.ttl", "5 s");
        // TODO 2. Checkpointing and state backend settings
        env.enableCheckpointing(3000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000L);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3000L);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION
        );
        env.setRestartStrategy(RestartStrategies.failureRateRestart(
                3, Time.days(1), Time.minutes(1)
        ));
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig().setCheckpointStorage(
                "hdfs://hadoop102:8020/ck"
        );
        System.setProperty("HADOOP_USER_NAME", "lyh");
        // TODO 3. Read the business data from Kafka and register it as a Flink SQL table
        tableEnv.executeSql("create table topic_db(" +
                "`database` string," +
                "`table` string," +
                "`type` string," +
                "`data` map<string, string>," +
                "`proc_time` as PROCTIME()," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaDDL("topic_db", "dwd_interaction_comment"));
        // TODO 4. Read the comment table data
        Table commentInfo = tableEnv.sqlQuery("select " +
                "data['id'] id," +
                "data['user_id'] user_id," +
                "data['sku_id'] sku_id," +
                "data['order_id'] order_id," +
                "data['create_time'] create_time," +
                "data['appraise'] appraise," +
                "proc_time," +
                "ts " +
                "from topic_db " +
                "where `table` = 'comment_info' " +
                "and `type` = 'insert'");
        tableEnv.createTemporaryView("comment_info", commentInfo);
        // TODO 5. Create the MySQL-Lookup dictionary table
        tableEnv.executeSql(MysqlUtil.getBaseDicLookUpDDL());
        // TODO 6. Join the two tables
        Table resultTable = tableEnv.sqlQuery("select " +
                "ci.id," +
                "ci.user_id," +
                "ci.sku_id," +
                "ci.order_id," +
                "date_format(ci.create_time,'yyyy-MM-dd') date_id," +
                "ci.create_time," +
                "ci.appraise appraise_code," +
                "dic.dic_name appraise_name," +
                "ts " +
                "from comment_info ci " +
                "join " +
                "base_dic for system_time as of ci.proc_time as dic " +
                "on ci.appraise = dic.dic_code");
        tableEnv.createTemporaryView("result_table", resultTable);
        // TODO 7. Create the Kafka-Connector dwd_interaction_comment table
        tableEnv.executeSql("create table dwd_interaction_comment(" +
                "id string," +
                "user_id string," +
                "sku_id string," +
                "order_id string," +
                "date_id string," +
                "create_time string," +
                "appraise_code string," +
                "appraise_name string," +
                "ts string" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_interaction_comment"));
        // TODO 8. Write the join result into the Kafka-Connector table
        tableEnv.executeSql("" +
                "insert into dwd_interaction_comment select * from result_table");
    }
}

8. User Domain: User Registration Transactional Fact Table

Main task: read the user table data, extract the registration time, and write the user registration records to the Kafka user registration topic.

Telling new users apart is easy: every newly inserted row is a new user, so the approach is simple:

  • Filter the user table records out of topic_db
  • Keep only the rows with type = 'insert'
  • Write them to Kafka
public class DwdUserRegister {
    public static void main(String[] args) throws Exception {
        // TODO 1. Prepare the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        tableEnv.getConfig().setLocalTimeZone(ZoneId.of("GMT+8"));
        // TODO 2. Checkpointing and state backend settings
        env.enableCheckpointing(3000L, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000L);
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION
        );
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3000L);
        env.setRestartStrategy(
                RestartStrategies.failureRateRestart(3, Time.days(1L), Time.minutes(3L))
        );
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig().setCheckpointStorage("hdfs://hadoop102:8020/ck");
        System.setProperty("HADOOP_USER_NAME", "lyh");
        // TODO 3. Read the business data from Kafka and register it as a Flink SQL table
        tableEnv.executeSql("create table topic_db(" +
                "`database` string," +
                "`table` string," +
                "`type` string," +
                "`data` map<string, string>," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaDDL("topic_db", "dwd_user_register"));
        // TODO 4. Read the user table data
        Table userInfo = tableEnv.sqlQuery("select " +
                "data['id'] user_id," +
                "data['create_time'] create_time," +
                "ts " +
                "from topic_db " +
                "where `table` = 'user_info' " +
                "and `type` = 'insert'");
        tableEnv.createTemporaryView("user_info", userInfo);
        // TODO 5. Create the Kafka-Connector dwd_user_register table
        tableEnv.executeSql("create table `dwd_user_register`(" +
                "`user_id` string," +
                "`date_id` string," +
                "`create_time` string," +
                "`ts` string" +
                ")" + MyKafkaUtil.getKafkaSinkDDL("dwd_user_register"));
        // TODO 6. Write the data into the Kafka-Connector table
        tableEnv.executeSql("insert into dwd_user_register " +
                "select " +
                "user_id," +
                "date_format(create_time, 'yyyy-MM-dd') date_id," +
                "create_time," +
                "ts " +
                "from user_info");
    }
}

Summary

        With this, the DWD layer is complete. The real-time warehouse's DWD layer doesn't have as many table types as the offline warehouse (periodic snapshot, accumulating snapshot, and so on), so while it isn't exactly easy, it does follow a clear pattern.

        Tomorrow I'll start building the DWS layer, where ClickHouse finally comes into play; looking forward to it.

        This real-time warehouse project is going much faster than the offline one, but not because I'm skimming. After three years of watching course videos, I know the difference between watching just to tick a box and genuinely wanting to understand a project. The data sources here are the same as in the offline warehouse I studied before, so I'm already familiar with this simulated business system, and offline and real-time share a lot, which is why progress is fast; still, it takes plenty of time, eight in the morning to eight at night every day.

        Many people treat projects or new technologies like tasks to be checked off: they see dozens of hours of video, skim through, and convince themselves that having watched it means having learned it, as if they'll never need to look at it again. If you don't build the fundamentals early, those flashes of insight and connections between topics rarely come later, because the knowledge base just isn't there. I'm the same with reading: I keep telling myself to read (I ordered a copy of 《乡土中国》 today), yet I keep reaching for short videos instead, telling myself they're nourishing enough, and then I find myself short of words whenever I write or try to express something.
