
Notes on a sharding-jdbc demo: sharding tables by day and dynamically updating the sharding node configuration


Preface

At a previous company our project used sharding-jdbc, but I had never set up the related configuration from start to finish myself, so I wrote this introductory demo.


I. Project setup

The demo uses Spring Boot + MyBatis-Plus + Hutool, with MySQL as the database, and of course sharding-jdbc.
In this demo the sharding node configuration is updated dynamically.

II. Implementation

The demo shards by day, using the table column create_time as the sharding column; the two example tables are related one-to-one.

1. Database

The tables here are created by hand; in a real project you could create them from a scheduled task instead (see the sketch after the DDL below).

CREATE TABLE `user_20220523` (
  `user_id` varchar(30) NOT NULL,
  `user_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
CREATE TABLE `user_20220524` (
  `user_id` varchar(30) NOT NULL,
  `user_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
CREATE TABLE `user_address_20220523` (
  `id` int NOT NULL AUTO_INCREMENT,
  `user_id` varchar(30) NOT NULL,
  `address` varchar(100) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=45 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
CREATE TABLE `user_address_20220524` (
  `id` int NOT NULL AUTO_INCREMENT,
  `user_id` varchar(30) NOT NULL,
  `address` varchar(100) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=45 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
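
Only two days of tables are created here. As a rough sketch of the scheduled-task approach mentioned above (this class is not part of the original demo; the JdbcTemplate wiring, the template table and the cron expression are all assumptions), the next day's tables could be created ahead of time like this:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import javax.annotation.Resource;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Requires @EnableScheduling on a configuration class.
@Component
public class DailyTableCreator {

    // Assumed to be built on the plain MySQL DataSource, so the DDL is not routed through sharding-jdbc.
    @Resource
    private JdbcTemplate jdbcTemplate;

    // Shortly before midnight, create tomorrow's tables by copying the structure of an existing dated table.
    @Scheduled(cron = "0 50 23 * * ?")
    public void createTomorrowTables() {
        String suffix = LocalDate.now().plusDays(1).format(DateTimeFormatter.ofPattern("yyyyMMdd"));
        jdbcTemplate.execute("CREATE TABLE IF NOT EXISTS user_" + suffix + " LIKE user_20220523");
        jdbcTemplate.execute("CREATE TABLE IF NOT EXISTS user_address_" + suffix + " LIKE user_address_20220523");
        // After creating the tables, the sharding data nodes also have to be refreshed (see section 5.3).
    }
}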

2. Dependencies

The dependencies include Spring Boot, MySQL and Druid, MyBatis-Plus, Lombok, Hutool and sharding-jdbc.

<dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
            <version>1.1.22</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.5.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
            <version>4.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-all</artifactId>
            <version>5.8.1</version>
        </dependency>
    </dependencies>

3. Project structure

The entities, services and mappers were generated with MybatisX, so I won't paste them here.
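
Based on the DDL above, the User entity presumably looks roughly like the following sketch (the class and field names are inferred from the columns, so treat it as an assumption rather than the generated code):

import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import java.time.LocalDateTime;

@Data
@TableName("user")
public class User {

    // assigned manually with a Hutool snowflake id in the test controller below
    @TableId
    private String userId;

    private String userName;

    // the sharding column, mapped to LocalDateTime
    private LocalDateTime createTime;
}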

4. Configuration

The sharding settings live in the application configuration file; only the sharding-related part is pasted here.
The actual-data-nodes value is just an arbitrary initial placeholder; the real node list is generated dynamically at startup (see section 5.3). With physical tables named like user_20220524, the usual ShardingSphere 4.x inline form would be something like sharding0.user_$->{'20220524'}, but since the value is overwritten at startup it only serves as a placeholder.
Sharding uses the standard strategy, specifying the class names of the range algorithm and the precise algorithm.
The two logical tables are related, so binding-tables declares them as bound tables.

spring:
  shardingsphere:
    datasource:
      names: 'sharding0'  # data source name
      sharding0:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://...
        username: ...
        password: '...'
    sharding:
      tables:
        user:
          actual-data-nodes: sharding0.user{'20220524'} # arbitrary initial value; the real nodes are generated at startup
          table-strategy:
            standard:
              sharding-column: create_time
              range-algorithm-class-name: com.wxp.config.CreateTimeDayTableShardingAlgorithm
              precise-algorithm-class-name: com.wxp.config.CreateTimeDayTableShardingAlgorithm
        user_address:
          actual-data-nodes: sharding0.user_address{'20220524'} # arbitrary initial value; the real nodes are generated at startup
          table-strategy:
            standard:
              sharding-column: create_time
              range-algorithm-class-name: com.wxp.config.CreateTimeDayTableShardingAlgorithm
              precise-algorithm-class-name: com.wxp.config.CreateTimeDayTableShardingAlgorithm
      binding-tables:
        - user,user_address

5. Supporting classes

5.1 LocalDateTime type handler

@Component
public class LocalDateTimeTypeHandler extends BaseTypeHandler<LocalDateTime> {

    @Override
    public void setNonNullParameter(PreparedStatement preparedStatement, int i, LocalDateTime localDateTime, JdbcType jdbcType) throws SQLException {
        preparedStatement.setObject(i, localDateTime);
    }

    @Override
    public LocalDateTime getNullableResult(ResultSet resultSet, String columnName) throws SQLException {
        Timestamp timestamp = resultSet.getTimestamp(columnName);
        return getLocalDateTime(timestamp);
    }

    @Override
    public LocalDateTime getNullableResult(ResultSet resultSet, int columnIndex) throws SQLException {
        Timestamp timestamp = resultSet.getTimestamp(columnIndex);
        return getLocalDateTime(timestamp);
    }

    @Override
    public LocalDateTime getNullableResult(CallableStatement callableStatement, int columnIndex) throws SQLException {
        Timestamp timestamp = callableStatement.getTimestamp(columnIndex);
        return getLocalDateTime(timestamp);
    }

    private LocalDateTime getLocalDateTime(Timestamp timestamp) {
        if (timestamp != null) {
            return timestamp.toLocalDateTime();
        }
        return null;
    }
}
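
The handler is declared as a Spring bean, and depending on the starter version it may already be picked up automatically. If it is not, one way to register it is the MyBatis-Plus type-handlers-package property (the package name below is an assumption based on the algorithm classes in this demo):

mybatis-plus:
  type-handlers-package: com.wxp.config  # package containing LocalDateTimeTypeHandler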

5.2 Sharding algorithm

@Component
@Data
public class CreateTimeDayTableShardingAlgorithm
        implements PreciseShardingAlgorithm<LocalDateTime>, RangeShardingAlgorithm<LocalDateTime> {

    /**
     * Precise algorithm: returns a single table node.
     */
    @Override
    public String doSharding(Collection<String> collection, PreciseShardingValue<LocalDateTime> preciseShardingValue) {
        LocalDateTime time = preciseShardingValue.getValue();
        // format the column value as the table suffix
        String format = LocalDateTimeUtil.format(time, "yyyyMMdd");
        for (String s : collection) {
            if (s.endsWith(format)) {
                return s;
            }
        }
        return null;
    }

    /**
     * Range algorithm.
     * Used for BETWEEN AND, >, <, >=, <= conditions in the SQL.
     * Returns one or more table nodes.
     */
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<LocalDateTime> rangeShardingValue) {
        // Get the range of the sharding column from the SQL. valueRange has two endpoints:
        // lowerEndpoint: start value; for < or <= it is unbounded and throws, so fall back to the first node's date (hard-coded here, could be made dynamic)
        // upperEndpoint: end value; for > or >= it is unbounded and throws, so fall back to the last node's date (same as above)
        // BETWEEN supplies both endpoints
        Range<LocalDateTime> valueRange = rangeShardingValue.getValueRange();
        LocalDateTime start = null;
        try {
            start = valueRange.lowerEndpoint();
        } catch (Exception e) {
            // < or <=: no lower bound
            start = LocalDateTimeUtil.parse("2022-05-23 00:00:00", "yyyy-MM-dd HH:mm:ss");
        }
        LocalDateTime end = null;
        try {
            end = valueRange.upperEndpoint();
        } catch (Exception e) {
            // > or >=: no upper bound
            end = LocalDateTimeUtil.parse("2022-05-24 00:00:00", "yyyy-MM-dd HH:mm:ss");
        }
        Set<String> suffixList = new HashSet<>();
        // take one physical table name and strip the date suffix to get the base name
        Iterator<String> iterator = availableTargetNames.iterator();
        String tableName = iterator.next();
        String name = tableName.substring(0, tableName.lastIndexOf("_"));
        // build the candidate table names
        if (ObjectUtils.isNotNull(start) && ObjectUtils.isNotNull(end)) {
            String startName = LocalDateTimeUtil.format(start, "yyyyMMdd");
            String endName = LocalDateTimeUtil.format(end, "yyyyMMdd");
            // walk day by day from start to end; add a table only if it exists among the configured nodes
            while (!startName.equals(endName)) {
                if (availableTargetNames.contains(name + "_" + startName)) {
                    suffixList.add(name + "_" + startName);
                }
                start = start.plusDays(1L);
                startName = LocalDateTimeUtil.format(start, "yyyyMMdd");
            }
            if (availableTargetNames.contains(name + "_" + endName)) {
                suffixList.add(name + "_" + endName);
            }
        }
        if (CollectionUtils.isNotEmpty(suffixList)) {
            return suffixList;
        }
        return availableTargetNames;
    }
}
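
As a quick illustration of when each algorithm fires (a sketch in the same mybatis-plus chain-query style used in the test section below, not taken from the original post):

// '=' and IN on the sharding column go through the precise algorithm and are routed to a single table.
userService.lambdaQuery()
        .eq(User::getCreateTime, LocalDateTimeUtil.parse("2022-05-24 11:11:11", "yyyy-MM-dd HH:mm:ss"))
        .list();

// BETWEEN, >, <, >= and <= on the sharding column go through the range algorithm and may hit several tables.
userService.lambdaQuery()
        .between(User::getCreateTime,
                LocalDateTimeUtil.parse("2022-05-23 00:00:00", "yyyy-MM-dd HH:mm:ss"),
                LocalDateTimeUtil.parse("2022-05-24 23:59:59", "yyyy-MM-dd HH:mm:ss"))
        .list();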

5.3 Dynamically updating the node configuration at startup

If tables are created dynamically by a scheduled task, that task also needs to execute this method afterwards to refresh the node configuration (see the sketch after the class below).

@Component
public class AutoConfigShardingDateNodes implements ApplicationRunner {

    @Resource
    private DataSource dataSource;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // the sharding data source
        ShardingDataSource shardingDataSource = (ShardingDataSource) this.dataSource;
        // all sharded table rules
        ShardingRule rule = shardingDataSource.getRuntimeContext().getRule();
        Collection<TableRule> tableRules = rule.getTableRules();
        for (TableRule tableRule : tableRules) {
            // current data nodes of the table
            List<DataNode> dataNodes = tableRule.getActualDataNodes();
            // refresh the private final actualDataNodes field via reflection
            Field actualDataNodesField = TableRule.class.getDeclaredField("actualDataNodes");
            Field modifiersField = Field.class.getDeclaredField("modifiers");
            modifiersField.setAccessible(true);
            // strip the FINAL modifier so the field can be reassigned
            modifiersField.setInt(actualDataNodesField, actualDataNodesField.getModifiers() & ~Modifier.FINAL);
            // I only have two hard-coded tables here; in a real project this would be LocalDateTime.now()
            // the loop count below controls how many days of tables the application can see
            LocalDateTime localDateTime = LocalDateTimeUtil.parse("2022-05-24 00:00:00", "yyyy-MM-dd HH:mm:ss");
            // the new node list
            List<DataNode> newDataNodes = new ArrayList<>();
            // data source name
            String dataSourceName = dataNodes.get(0).getDataSourceName();
            // logical table name
            String logicTableName = tableRule.getLogicTable();
            StringBuilder stringBuilder = new StringBuilder().append(dataSourceName).append(".").append(logicTableName);
            int length = stringBuilder.length();
            // build the physical table name for each day
            for (int i = 0; i < 2; i++) {
                stringBuilder.setLength(length);
                stringBuilder.append(localDateTime.format(DateTimeFormatter.ofPattern("_yyyyMMdd")));
                DataNode dataNode = new DataNode(stringBuilder.toString());
                newDataNodes.add(dataNode);
                localDateTime = localDateTime.minusDays(1);
            }
            actualDataNodesField.setAccessible(true);
            // write the new nodes back into the TableRule
            actualDataNodesField.set(tableRule, newDataNodes);
            Set<String> actualTables = Sets.newHashSet();
            Map<DataNode, Integer> dataNodeIntegerMap = Maps.newHashMap();

            AtomicInteger a = new AtomicInteger(0);
            newDataNodes.forEach((dataNode -> {
                actualTables.add(dataNode.getTableName());
                if (a.intValue() == 0) {
                    a.incrementAndGet();
                    dataNodeIntegerMap.put(dataNode, 0);
                } else {
                    dataNodeIntegerMap.put(dataNode, a.intValue());
                    a.incrementAndGet();
                }
            }));

            // refresh actualTables
            Field actualTablesField = TableRule.class.getDeclaredField("actualTables");
            actualTablesField.setAccessible(true);
            actualTablesField.set(tableRule, actualTables);

            // refresh dataNodeIndexMap
            Field dataNodeIndexMapField = TableRule.class.getDeclaredField("dataNodeIndexMap");
            dataNodeIndexMapField.setAccessible(true);
            dataNodeIndexMapField.set(tableRule, dataNodeIntegerMap);

            // refresh datasourceToTablesMap
            Map<String, Collection<String>> datasourceToTablesMap = Maps.newHashMap();
            datasourceToTablesMap.put(dataSourceName, actualTables);
            Field datasourceToTablesMapField = TableRule.class.getDeclaredField("datasourceToTablesMap");
            datasourceToTablesMapField.setAccessible(true);
            datasourceToTablesMapField.set(tableRule, datasourceToTablesMap);
        }
    }
}
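
If new tables are created by a scheduled job, the same refresh has to be re-run after each creation, as mentioned above. A minimal sketch that reuses the class above (the cron expression and wiring are assumptions):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import javax.annotation.Resource;

// Requires @EnableScheduling; scheduled to run right after the nightly table-creation job.
@Component
public class ShardingNodeRefreshJob {

    @Resource
    private AutoConfigShardingDateNodes autoConfigShardingDateNodes;

    @Scheduled(cron = "0 55 23 * * ?")
    public void refresh() throws Exception {
        // run() ignores its argument, so passing null is fine here.
        autoConfigShardingDateNodes.run(null);
    }
}

Note that the Field#modifiers reflection trick used above no longer works on JDK 12 and later, so on newer JDKs the node refresh needs a different approach.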

III. Testing

1. Insert

@GetMapping("/initdata")
    public void initData() {
        LocalDateTime now = LocalDateTimeUtil.parse("2022-05-24 11:11:11", "yyyy-MM-dd HH:mm:ss");
        String userId = IdUtil.getSnowflakeNextIdStr();
        User user = new User();
        user.setUserId(userId);
        user.setUserName("张三");
        user.setCreateTime(now);
        userService.save(user);
        UserAddress address = new UserAddress();
        address.setUserId(user.getUserId());
        address.setAddress("北京市");
        address.setCreateTime(now);
        userAddressService.save(address);
        User user2 = new User();
        userId = IdUtil.getSnowflakeNextIdStr();
        user2.setUserId(userId);
        user2.setUserName("李四");
        user2.setCreateTime(now);
        userService.save(user2);
        UserAddress address2 = new UserAddress();
        address2.setUserId(user2.getUserId());
        address2.setAddress("南京市");
        address2.setCreateTime(now);
        userAddressService.save(address2);
        LocalDateTime yesterday = now.minusDays(1);
        userId = IdUtil.getSnowflakeNextIdStr();
        User user3 = new User();
        user3.setUserId(userId);
        user3.setUserName("王五");
        user3.setCreateTime(yesterday);
        userService.save(user3);
        UserAddress address3 = new UserAddress();
        address3.setUserId(user3.getUserId());
        address3.setAddress("天津市");
        address3.setCreateTime(yesterday);
        userAddressService.save(address3);
        userId = IdUtil.getSnowflakeNextIdStr();
        User user4 = new User();
        user4.setUserId(userId);
        user4.setUserName("天天");
        user4.setCreateTime(yesterday);
        userService.save(user4);
        UserAddress address4 = new UserAddress();
        address4.setUserId(user4.getUserId());
        address4.setAddress("深圳市");
        address4.setCreateTime(yesterday);
        userAddressService.save(address4);
    }

The execution log was only captured as a screenshot, so it is omitted here; you can infer the routing from the data above: the rows with create_time on 2022-05-24 (张三, 李四) go to user_20220524 / user_address_20220524, and the 2022-05-23 rows (王五, 天天) go to the *_20220523 tables.


2. Select

2.1 A >= query: the log shows only one routed SQL was executed, so the range algorithm took effect.

	@GetMapping("/get")
    public List<User> get() {
        LocalDateTime now = LocalDateTimeUtil.parse("2022-05-24 00:00:00", "yyyy-MM-dd HH:mm:ss");
        return userService.lambdaQuery().ge(User::getCreateTime,now).list();
    }

2.2 Join query for all records

	@GetMapping("/getAll")
    public List<UserVo> getAll() {
        return userService.listVO();
    }

The service layer in between is omitted; the mapper.xml is below. Because user and user_address are declared as binding tables, sharding-jdbc joins each user_YYYYMMDD table only with its matching user_address_YYYYMMDD table instead of producing a Cartesian product of routes.

	<select id="listVO" resultType="com.wxp.model.vo.UserVo">
        select  u.user_id,u.user_name,u.create_time,ua.address
        from user u
        left join user_address ua
        on u.user_id = ua.user_id
    </select>

That's it for the demo.

Finally

This is only a simple table-sharding demo. If there are any mistakes, please bear with me!
