
Integrating ShardingSphere with Spring Boot (Manual Configuration)

In medium and large projects, storing large volumes of data is an unavoidable problem, and how it is handled directly affects system stability and availability. I was once responsible for a key legacy system at my company (inherited through a handover), where the data growth had not been estimated correctly during development. Two or three years later, several tables had each grown past fifty million rows with no sharding in place. As a common rule of thumb, a single MySQL table should stay under roughly ten million rows for performance reasons. The database had other problems as well, such as sharing a schema with other systems, which left it with more than 1,500 tables, several of them very large. Even after adding query rate limiting, forced indexes, restricted query conditions, data archiving, read/write splitting, and other measures, the system remained very hard to maintain. For data volumes like this, a combined database-and-table sharding strategy is necessary.

There are many widely used open-source sharding middlewares. I had previously worked with MyCat, a proxy-based database middleware. Recently I looked into ShardingSphere, which shards at the client (JDBC) level; it also offers a proxy mode, though that mode is less widely adopted. See the official documentation: https://shardingsphere.apache.org/document/legacy/3.x/document/cn/overview/. Below are the main steps for integrating ShardingSphere with Spring Boot to implement database and table sharding. The topic is split across two posts: this one covers manual configuration based on the sharding-jdbc-core dependency, and the next covers auto-configuration based on sharding-jdbc-spring-boot-starter. Besides Spring Boot, the stack uses MyBatis, Druid, and other common components. This post walks through the manual approach, whose main advantage is flexibility.

  • First, create a Spring Boot project (I used version 2.1.6) and add the sharding-jdbc-core, druid, and MyBatis dependencies. During development I also discovered hutool, a handy utility library. At this point the pom.xml looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.6.RELEASE</version>
        <relativePath/>
    </parent>
    <groupId>com.hyc</groupId>
    <artifactId>shard3-manual</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>shard3-manual</name>
    <description>Spring Boot integration with ShardingSphere 3.x</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>2.0.1</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- shardingsphere start -->
        <dependency>
            <groupId>io.shardingsphere</groupId>
            <artifactId>sharding-jdbc-core</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.9</version>
        </dependency>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-all</artifactId>
            <version>4.5.13</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.58</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
            <version>1.1.17</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
  • Locally, create two databases, test1 and test2, each containing the same set of sharded tables: a user table, a user address table, an order table, an order item table, and a product table. The user address table depends on the user table, and the order item table depends on the order table. A given user's user rows and order rows land in the same database, though not necessarily in tables with the same suffix (ideally they would share one table group). All of these tables are sharded except the product table: product data at our company is only a few hundred thousand rows, so it is kept unsharded as a global table in every database. Sharding the rest improves query performance. The DDL is as follows:
SET NAMES utf8mb4;
-- ----------------------------
-- Table structure for t_order_0
-- ----------------------------
DROP TABLE IF EXISTS `t_order_0`;
CREATE TABLE `t_order_0` (
  `order_id` bigint(32) NOT NULL COMMENT 'primary key',
  `order_no` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'order number',
  `user_id` bigint(32) NOT NULL COMMENT 'user id',
  `order_amount` decimal(20, 2) NOT NULL DEFAULT 0.00 COMMENT 'order total',
  `order_status` int(4) NOT NULL DEFAULT 1 COMMENT 'order status: 1-in progress, 2-completed, 3-cancelled',
  `remark` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'remark',
  PRIMARY KEY (`order_id`),
  INDEX `idx_order_user_id`(`user_id`),
  INDEX `idx_order_order_no`(`order_no`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'order table';
-- ----------------------------
-- Table structure for t_order_1
-- ----------------------------
DROP TABLE IF EXISTS `t_order_1`;
CREATE TABLE `t_order_1` (
  `order_id` bigint(32) NOT NULL COMMENT 'primary key',
  `order_no` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'order number',
  `user_id` bigint(32) NOT NULL COMMENT 'user id',
  `order_amount` decimal(20, 2) NOT NULL DEFAULT 0.00 COMMENT 'order total',
  `order_status` int(4) NOT NULL DEFAULT 1 COMMENT 'order status: 1-in progress, 2-completed, 3-cancelled',
  `remark` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'remark',
  PRIMARY KEY (`order_id`),
  INDEX `idx_order_user_id`(`user_id`),
  INDEX `idx_order_order_no`(`order_no`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'order table';
-- ----------------------------
-- Table structure for t_order_item_0
-- ----------------------------
DROP TABLE IF EXISTS `t_order_item_0`;
CREATE TABLE `t_order_item_0` (
  `order_item_id` bigint(32) NOT NULL COMMENT 'primary key',
  `order_id` bigint(32) NOT NULL COMMENT 'order id',
  `product_id` bigint(32) NOT NULL COMMENT 'product id',
  `item_price` decimal(20, 2) NOT NULL DEFAULT 0.00 COMMENT 'unit price',
  `total_num` int(4) NOT NULL DEFAULT 1 COMMENT 'quantity',
  `total_price` decimal(20, 2) NOT NULL DEFAULT 0.00 COMMENT 'order total',
  `order_time` datetime(0) NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'order time',
  `user_id` bigint(32) NOT NULL COMMENT 'user id',
  PRIMARY KEY (`order_item_id`),
  INDEX `idx_order_item_order_id`(`order_id`),
  INDEX `idx_order_item_user_id`(`user_id`),
  INDEX `idx_order_item_product_id`(`product_id`),
  INDEX `idx_order_item_order_time`(`order_time`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'order item table';
-- ----------------------------
-- Table structure for t_order_item_1
-- ----------------------------
DROP TABLE IF EXISTS `t_order_item_1`;
CREATE TABLE `t_order_item_1` (
  `order_item_id` bigint(32) NOT NULL COMMENT 'primary key',
  `order_id` bigint(32) NOT NULL COMMENT 'order id',
  `product_id` bigint(32) NOT NULL COMMENT 'product id',
  `item_price` decimal(20, 2) NOT NULL DEFAULT 0.00 COMMENT 'unit price',
  `total_num` int(4) NOT NULL DEFAULT 1 COMMENT 'quantity',
  `total_price` decimal(20, 2) NOT NULL DEFAULT 0.00 COMMENT 'order total',
  `order_time` datetime(0) NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'order time',
  `user_id` bigint(32) NOT NULL COMMENT 'user id',
  PRIMARY KEY (`order_item_id`),
  INDEX `idx_order_item_order_id`(`order_id`),
  INDEX `idx_order_item_user_id`(`user_id`),
  INDEX `idx_order_item_product_id`(`product_id`),
  INDEX `idx_order_item_order_time`(`order_time`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'order item table';
-- ----------------------------
-- Table structure for t_product
-- ----------------------------
DROP TABLE IF EXISTS `t_product`;
CREATE TABLE `t_product` (
  `product_id` bigint(32) NOT NULL COMMENT 'primary key',
  `code` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'product code',
  `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'product name',
  `desc` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'product description',
  PRIMARY KEY (`product_id`),
  INDEX `idx_user_product_code`(`code`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'product table';
-- ----------------------------
-- Table structure for t_user_0
-- ----------------------------
DROP TABLE IF EXISTS `t_user_0`;
CREATE TABLE `t_user_0` (
  `user_id` bigint(32) NOT NULL COMMENT 'primary key',
  `id_number` varchar(18) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'ID card number',
  `name` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'name',
  `age` int(4) DEFAULT NULL COMMENT 'age',
  `gender` int(2) DEFAULT 1 COMMENT 'gender: 1-male; 2-female',
  `birth_date` date DEFAULT NULL COMMENT 'birth date',
  PRIMARY KEY (`user_id`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'user table';
-- ----------------------------
-- Table structure for t_user_1
-- ----------------------------
DROP TABLE IF EXISTS `t_user_1`;
CREATE TABLE `t_user_1` (
  `user_id` bigint(32) NOT NULL COMMENT 'primary key',
  `id_number` varchar(18) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'ID card number',
  `name` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'name',
  `age` int(4) DEFAULT NULL COMMENT 'age',
  `gender` int(2) DEFAULT 1 COMMENT 'gender: 1-male; 2-female',
  `birth_date` date DEFAULT NULL COMMENT 'birth date',
  PRIMARY KEY (`user_id`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'user table';
-- ----------------------------
-- Table structure for t_user_address_0
-- ----------------------------
DROP TABLE IF EXISTS `t_user_address_0`;
CREATE TABLE `t_user_address_0` (
  `address_id` bigint(32) NOT NULL COMMENT 'primary key',
  `user_id` bigint(32) NOT NULL COMMENT 'user id',
  `province` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'province',
  `city` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'city',
  `district` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'district',
  `detail` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'detailed address',
  `sort` int(4) DEFAULT 1 COMMENT 'sort order',
  `gender` int(2) DEFAULT 1 COMMENT 'gender: 1-male; 2-female',
  PRIMARY KEY (`address_id`),
  INDEX `idx_user_address_user_id`(`user_id`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'address table';
-- ----------------------------
-- Table structure for t_user_address_1
-- ----------------------------
DROP TABLE IF EXISTS `t_user_address_1`;
CREATE TABLE `t_user_address_1` (
  `address_id` bigint(32) NOT NULL COMMENT 'primary key',
  `user_id` bigint(32) NOT NULL COMMENT 'user id',
  `province` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'province',
  `city` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'city',
  `district` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'district',
  `detail` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'detailed address',
  `sort` int(4) DEFAULT 1 COMMENT 'sort order',
  `gender` int(2) DEFAULT 1 COMMENT 'gender: 1-male; 2-female',
  PRIMARY KEY (`address_id`),
  INDEX `idx_user_address_user_id`(`user_id`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'address table';
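The routing design described above can be sketched as plain functions. This is only an illustration of the modulo and gender mappings, not part of the project code: databases are chosen by user_id modulo 2; user tables are split by gender, order tables by order_id modulo 2.

```java
public class RoutingSketch {
    // Database suffix: user_id modulo the number of databases (2).
    static String dbSuffix(long userId) {
        return String.valueOf(userId % 2);
    }

    // User table suffix: gender 1 (male) maps to t_user_0, anything else to t_user_1.
    static String userTableSuffix(int gender) {
        return gender == 1 ? "0" : "1";
    }

    // Order table suffix: order_id modulo the number of tables (2).
    static String orderTableSuffix(long orderId) {
        return String.valueOf(orderId % 2);
    }

    public static void main(String[] args) {
        // A female user with id 1001: her user row and her orders land in the same database.
        System.out.println("user 1001 (gender 2) -> ds" + dbSuffix(1001) + ".t_user_" + userTableSuffix(2));
        System.out.println("order 42 of user 1001 -> ds" + dbSuffix(1001) + ".t_order_" + orderTableSuffix(42));
    }
}
```

Because both the user and order tables shard databases by user_id, all of one user's data stays co-located in a single database, which keeps single-user queries from fanning out across databases.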
  • Use MyBatis Generator to produce the data-access code. Only the user table is shown here; the other tables are similar and omitted for brevity.
package com.hyc.entity;

import lombok.Data;

import java.util.Date;

@Data
public class User {
    private Long userId;
    private String idNumber;
    private String name;
    private Integer age;
    private Integer gender;
    private Date birthDate;
}
package com.hyc.dao;

import com.hyc.entity.User;
import com.hyc.vo.ListUserVo;
import org.apache.ibatis.annotations.Param;

import java.util.List;

public interface UserMapper {
    int deleteByPrimaryKey(Long userId);

    int insert(User record);

    int insertSelective(User record);

    User selectByPrimaryKey(Long userId);

    int updateByPrimaryKeySelective(User record);

    int updateByPrimaryKey(User record);

    List<User> listByCondition(@Param("condition") ListUserVo query);

    int count(@Param("condition") ListUserVo query);
}
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.hyc.dao.UserMapper" >
    <resultMap id="BaseResultMap" type="com.hyc.entity.User" >
        <id column="user_id" property="userId" jdbcType="BIGINT" />
        <result column="id_number" property="idNumber" jdbcType="VARCHAR" />
        <result column="name" property="name" jdbcType="VARCHAR" />
        <result column="age" property="age" jdbcType="INTEGER" />
        <result column="gender" property="gender" jdbcType="INTEGER" />
        <result column="birth_date" property="birthDate" jdbcType="DATE" />
    </resultMap>
    <sql id="Base_Column_List" >
        user_id, id_number, name, age, gender, birth_date
    </sql>
    <sql id="queryListCondition">
        <if test="condition.userId != null">
            and user_id = #{condition.userId, jdbcType=BIGINT}
        </if>
        <if test="condition.name != null and condition.name != ''">
            and name = #{condition.name, jdbcType=VARCHAR}
        </if>
        <if test="condition.age != null">
            and age = #{condition.age, jdbcType=INTEGER}
        </if>
        <if test="condition.gender != null">
            and gender = #{condition.gender, jdbcType=INTEGER}
        </if>
    </sql>
    <select id="listByCondition" resultMap="BaseResultMap" parameterType="java.util.Map" >
        select
        <include refid="Base_Column_List" />
        from t_user
        <where>
            <include refid="queryListCondition"/>
        </where>
        order by user_id desc
        <if test="condition.start != null and condition.pageSize != null ">
            limit #{condition.start},#{condition.pageSize}
        </if>
    </select>
    <select id="count" resultType="java.lang.Integer" parameterType="java.util.Map" >
        select
        count(1)
        from t_user
        <where>
            <include refid="queryListCondition"/>
        </where>
    </select>
    <select id="selectByPrimaryKey" resultMap="BaseResultMap" parameterType="java.lang.Long" >
        select
        <include refid="Base_Column_List" />
        from t_user
        where user_id = #{userId,jdbcType=BIGINT}
    </select>
    <delete id="deleteByPrimaryKey" parameterType="java.lang.Long" >
        delete from t_user
        where user_id = #{userId,jdbcType=BIGINT}
    </delete>
    <insert id="insert" parameterType="com.hyc.entity.User" useGeneratedKeys="true" keyProperty="userId">
        insert into t_user (user_id, id_number, name,
        age, gender, birth_date
        )
        values (#{userId,jdbcType=BIGINT}, #{idNumber,jdbcType=VARCHAR}, #{name,jdbcType=VARCHAR},
        #{age,jdbcType=INTEGER}, #{gender,jdbcType=INTEGER}, #{birthDate,jdbcType=DATE}
        )
    </insert>
    <insert id="insertSelective" parameterType="com.hyc.entity.User" useGeneratedKeys="true" keyProperty="userId">
        insert into t_user
        <trim prefix="(" suffix=")" suffixOverrides="," >
            <if test="userId != null" >
                user_id,
            </if>
            <if test="idNumber != null" >
                id_number,
            </if>
            <if test="name != null" >
                name,
            </if>
            <if test="age != null" >
                age,
            </if>
            <if test="gender != null" >
                gender,
            </if>
            <if test="birthDate != null" >
                birth_date,
            </if>
        </trim>
        <trim prefix="values (" suffix=")" suffixOverrides="," >
            <if test="userId != null" >
                #{userId,jdbcType=BIGINT},
            </if>
            <if test="idNumber != null" >
                #{idNumber,jdbcType=VARCHAR},
            </if>
            <if test="name != null" >
                #{name,jdbcType=VARCHAR},
            </if>
            <if test="age != null" >
                #{age,jdbcType=INTEGER},
            </if>
            <if test="gender != null" >
                #{gender,jdbcType=INTEGER},
            </if>
            <if test="birthDate != null" >
                #{birthDate,jdbcType=DATE},
            </if>
        </trim>
    </insert>
    <update id="updateByPrimaryKeySelective" parameterType="com.hyc.entity.User" >
        update t_user
        <set >
            <if test="idNumber != null" >
                id_number = #{idNumber,jdbcType=VARCHAR},
            </if>
            <if test="name != null" >
                name = #{name,jdbcType=VARCHAR},
            </if>
            <if test="age != null" >
                age = #{age,jdbcType=INTEGER},
            </if>
            <if test="gender != null" >
                gender = #{gender,jdbcType=INTEGER},
            </if>
            <if test="birthDate != null" >
                birth_date = #{birthDate,jdbcType=DATE},
            </if>
        </set>
        where user_id = #{userId,jdbcType=BIGINT}
    </update>
    <update id="updateByPrimaryKey" parameterType="com.hyc.entity.User" >
        update t_user
        set id_number = #{idNumber,jdbcType=VARCHAR},
        name = #{name,jdbcType=VARCHAR},
        age = #{age,jdbcType=INTEGER},
        gender = #{gender,jdbcType=INTEGER},
        birth_date = #{birthDate,jdbcType=DATE}
        where user_id = #{userId,jdbcType=BIGINT}
    </update>
</mapper>
  • Next, write the sharding algorithms. Sharding always keys off columns in the tables: the user and user address tables are sharded across databases by user_id modulo and across tables by gender; the order and order item tables are sharded across databases by user_id modulo and across tables by order_id modulo. Two sharding algorithms are therefore needed, one keyed on gender and one keyed on id.
package com.hyc.dbstrategy;

import com.google.common.collect.Range;
import com.hyc.enums.GenderEnum;
import io.shardingsphere.api.algorithm.sharding.PreciseShardingValue;
import io.shardingsphere.api.algorithm.sharding.RangeShardingValue;
import io.shardingsphere.api.algorithm.sharding.standard.PreciseShardingAlgorithm;
import io.shardingsphere.api.algorithm.sharding.standard.RangeShardingAlgorithm;

import java.util.Collection;
import java.util.LinkedHashSet;

public class GenderShardingAlgorithm implements PreciseShardingAlgorithm<Integer>, RangeShardingAlgorithm<Integer> {
    /**
     * Sharding.
     *
     * @param availableTargetNames available data source or table names
     * @param shardingValue        sharding value
     * @return the chosen data source or table name
     */
    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Integer> shardingValue) {
        String target = availableTargetNames.stream().findFirst().get();
        for (String targetName : availableTargetNames) {
            if (targetName.endsWith(genderToTableSuffix(shardingValue.getValue()))) {
                target = targetName;
            }
        }
        return target;
    }

    /**
     * Sharding.
     *
     * @param availableTargetNames available data source or table names
     * @param shardingValue        sharding value
     * @return the chosen data source or table names
     */
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Integer> shardingValue) {
        Collection<String> targets = new LinkedHashSet<>(availableTargetNames.size());
        Range<Integer> range = shardingValue.getValueRange();
        for (int i = range.lowerEndpoint(); i <= range.upperEndpoint(); i++) {
            for (String targetName : availableTargetNames) {
                if (targetName.endsWith(genderToTableSuffix(i))) {
                    targets.add(targetName);
                }
            }
        }
        return targets;
    }

    /**
     * Mapping from the gender value to the target suffix.
     *
     * @param gender gender value (1-male, 2-female)
     * @return target suffix ("0" or "1")
     */
    private String genderToTableSuffix(Integer gender) {
        return gender.equals(GenderEnum.MALE.getCode()) ? "0" : "1";
    }
}
package com.hyc.dbstrategy;

import com.google.common.collect.Range;
import io.shardingsphere.api.algorithm.sharding.PreciseShardingValue;
import io.shardingsphere.api.algorithm.sharding.RangeShardingValue;
import io.shardingsphere.api.algorithm.sharding.standard.PreciseShardingAlgorithm;
import io.shardingsphere.api.algorithm.sharding.standard.RangeShardingAlgorithm;

import java.util.Collection;
import java.util.LinkedHashSet;

public class IdShardingAlgorithm implements PreciseShardingAlgorithm<Long>, RangeShardingAlgorithm<Long> {
    /**
     * Sharding.
     *
     * @param availableTargetNames available data source or table names
     * @param shardingValue        sharding value
     * @return the chosen data source or table name
     */
    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        String target = availableTargetNames.stream().findFirst().get();
        for (String targetName : availableTargetNames) {
            if (targetName.endsWith(idToTableSuffix(shardingValue.getValue()))) {
                target = targetName;
            }
        }
        return target;
    }

    /**
     * Sharding.
     *
     * @param availableTargetNames available data source or table names
     * @param shardingValue        sharding value
     * @return the chosen data source or table names
     */
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Long> shardingValue) {
        Collection<String> targets = new LinkedHashSet<>(availableTargetNames.size());
        Range<Long> range = shardingValue.getValueRange();
        for (long i = range.lowerEndpoint(); i <= range.upperEndpoint(); i++) {
            for (String targetName : availableTargetNames) {
                if (targetName.endsWith(idToTableSuffix(i))) {
                    targets.add(targetName);
                }
            }
        }
        return targets;
    }

    /**
     * Mapping from the id to the target suffix.
     *
     * @param id sharding id
     * @return target suffix ("0" or "1")
     */
    private String idToTableSuffix(Long id) {
        return String.valueOf(id % 2);
    }
}
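The range branch above simply enumerates every value in the queried range and collects the matching targets. A standalone sketch of that enumeration, with plain strings in place of the ShardingSphere types:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;

public class RangeRoutingSketch {
    // Collect the table names hit by "order_id BETWEEN lower AND upper",
    // mirroring the loop in IdShardingAlgorithm's range doSharding.
    static Collection<String> route(List<String> tables, long lower, long upper) {
        Collection<String> hit = new LinkedHashSet<>();
        for (long i = lower; i <= upper; i++) {
            for (String t : tables) {
                if (t.endsWith(String.valueOf(i % 2))) {
                    hit.add(t);
                }
            }
        }
        return hit;
    }

    public static void main(String[] args) {
        // Any range spanning two or more consecutive ids hits both shards.
        System.out.println(route(Arrays.asList("t_order_0", "t_order_1"), 10, 13));
    }
}
```

Note that value-by-value enumeration is only practical for narrow ranges; for a wide or unbounded range it would be cheaper to return all targets directly, since every suffix will be hit anyway.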
  • Write the ID generation algorithm. In a sharded setup, MySQL auto-increment primary keys are no longer suitable, so I used the snowflake algorithm to generate keys (code adapted from the web). ShardingSphere supports custom key generation: just implement the KeyGenerator interface.
package com.hyc.keygen;

import io.shardingsphere.core.keygen.KeyGenerator;

import java.util.Random;

public final class SnowflakeShardingKeyGenerator implements KeyGenerator {
    /**
     * Start epoch (2015-01-01)
     */
    private final long twepoch = 1420041600000L;
    /**
     * Number of bits for the worker id
     */
    private final long workerIdBits = 5L;
    /**
     * Number of bits for the data center id
     */
    private final long dataCenterIdBits = 5L;
    /**
     * Maximum worker id, 31 (this shift trick quickly yields the largest value representable in the given number of bits)
     */
    private final long maxWorkerId = -1L ^ (-1L << workerIdBits);
    /**
     * Maximum data center id, 31
     */
    private final long maxDataCenterId = -1L ^ (-1L << dataCenterIdBits);
    /**
     * Number of bits for the per-millisecond sequence
     */
    private final long sequenceBits = 12L;
    /**
     * Worker id is shifted left by 12 bits
     */
    private final long workerIdShift = sequenceBits;
    /**
     * Data center id is shifted left by 17 bits (12 + 5)
     */
    private final long dataCenterIdShift = sequenceBits + workerIdBits;
    /**
     * Timestamp is shifted left by 22 bits (5 + 5 + 12)
     */
    private final long timestampLeftShift = sequenceBits + workerIdBits + dataCenterIdBits;
    /**
     * Mask for the sequence, 4095 (0b111111111111 = 0xfff = 4095)
     */
    private final long sequenceMask = -1L ^ (-1L << sequenceBits);
    /**
     * Worker id (0~31)
     */
    private long workerId;
    /**
     * Data center id (0~31)
     */
    private long dataCenterId;
    /**
     * Per-millisecond sequence (0~4095)
     */
    private long sequence = 0L;
    /**
     * Timestamp of the last generated id
     */
    private long lastTimestamp = -1L;

    private Random random = new Random();

    /**
     * Constructor.
     *
     * @param workerId     worker id (0~31)
     * @param dataCenterId data center id (0~31)
     */
    public SnowflakeShardingKeyGenerator(long workerId, long dataCenterId) {
        if (workerId > maxWorkerId || workerId < 0) {
            throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));
        }
        if (dataCenterId > maxDataCenterId || dataCenterId < 0) {
            throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDataCenterId));
        }
        this.workerId = workerId;
        this.dataCenterId = dataCenterId;
    }

    /**
     * Block until the next millisecond to obtain a fresh timestamp.
     *
     * @param lastTimestamp timestamp of the last generated id
     * @return the new timestamp
     */
    private long tilNextMillis(long lastTimestamp) {
        long timestamp = timeGen();
        while (timestamp <= lastTimestamp) {
            timestamp = timeGen();
        }
        return timestamp;
    }

    /**
     * Current time in milliseconds.
     *
     * @return current time (ms)
     */
    private long timeGen() {
        return System.currentTimeMillis();
    }

    /**
     * Generate key.
     *
     * @return generated key
     */
    @Override
    public Number generateKey() {
        long timestamp = timeGen();
        // If the current time is before the last generation timestamp, the system clock
        // has moved backwards; refuse to generate ids in that case.
        if (timestamp < lastTimestamp) {
            throw new RuntimeException(
                    String.format("Clock moved backwards. Refusing to generate id for %d milliseconds", lastTimestamp - timestamp));
        }
        // Same millisecond as the previous id: advance the per-millisecond sequence
        if (lastTimestamp == timestamp) {
            sequence = (sequence + 1) & sequenceMask;
            // Sequence overflowed within this millisecond
            if (sequence == 0) {
                // Block until the next millisecond for a fresh timestamp
                timestamp = tilNextMillis(lastTimestamp);
            }
        }
        // Timestamp changed: reset the sequence
        else {
            sequence = 0L;
        }
        // Remember the timestamp of this generation
        lastTimestamp = timestamp;
        // Shift the pieces and OR them together into a 64-bit id
        long result = ((timestamp - twepoch) << timestampLeftShift)
                | (dataCenterId << dataCenterIdShift)
                | (workerId << workerIdShift)
                | sequence;
        int randomNum = random.nextInt(10);
        // Add a small random offset (presumably to spread ids across shards).
        // Caution: this sacrifices the strict uniqueness guarantee of snowflake ids,
        // since two ids generated in the same millisecond may collide.
        return result + randomNum;
    }

    public static void main(String[] args) {
        SnowflakeShardingKeyGenerator generator = new SnowflakeShardingKeyGenerator(0, 0);
        for (int i = 0; i < 20; i++) {
            System.out.println(generator.generateKey());
        }
    }
}
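Because the id is assembled by shifting and OR-ing fixed-width fields, it can be unpacked again with shifts and masks. The standalone sketch below uses the same bit widths as the generator above (12-bit sequence, 5-bit worker, 5-bit data center, timestamp in the high bits); note that the random offset added in generateKey would blur the low sequence bits of real output:

```java
public class SnowflakeLayout {
    static final long SEQUENCE_BITS = 12L;
    static final long WORKER_ID_BITS = 5L;
    static final long DATA_CENTER_ID_BITS = 5L;
    static final long WORKER_SHIFT = SEQUENCE_BITS;                        // 12
    static final long DATA_CENTER_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS;  // 17
    static final long TIMESTAMP_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS + DATA_CENTER_ID_BITS; // 22

    // Pack the four fields into one 64-bit id, the same way generateKey
    // assembles its result (minus the random offset).
    static long pack(long timestampDelta, long dataCenterId, long workerId, long sequence) {
        return (timestampDelta << TIMESTAMP_SHIFT)
                | (dataCenterId << DATA_CENTER_SHIFT)
                | (workerId << WORKER_SHIFT)
                | sequence;
    }

    // Unpack by shifting back and masking off each field's width.
    static long[] unpack(long id) {
        return new long[]{
                id >>> TIMESTAMP_SHIFT,              // millisecond delta from the epoch
                (id >>> DATA_CENTER_SHIFT) & 0x1FL,  // data center id (5 bits)
                (id >>> WORKER_SHIFT) & 0x1FL,       // worker id (5 bits)
                id & 0xFFFL                          // sequence (12 bits)
        };
    }

    public static void main(String[] args) {
        long id = pack(123456789L, 2, 1, 7);
        long[] f = unpack(id);
        System.out.println("ts=" + f[0] + " dc=" + f[1] + " worker=" + f[2] + " seq=" + f[3]);
    }
}
```

A round trip through pack and unpack recovers all four fields, which is handy when debugging which worker or millisecond produced a given key.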
  • Configure the database connections, the MyBatis integration, and the property files.
# data source 0
sharding.ds0.type=com.alibaba.druid.pool.DruidDataSource
sharding.ds0.jdbcUrl=jdbc:mysql://127.0.0.1:3306/test1?serverTimezone=GMT%2B8&useSSL=false
sharding.ds0.username=root
sharding.ds0.password=123456
# data source 1
sharding.ds1.type=com.alibaba.druid.pool.DruidDataSource
sharding.ds1.jdbcUrl=jdbc:mysql://127.0.0.1:3306/test2?serverTimezone=GMT%2B8&useSSL=false
sharding.ds1.username=root
sharding.ds1.password=123456
# SQL logging
logging.level.com.hyc.dao=debug
# actuator port
management.server.port=9001
# expose all endpoints (only health and info are exposed by default)
management.endpoints.web.exposure.include=*
# always show health details (hidden by default)
management.endpoint.health.show-details=always
snow.work.id=1
snow.datacenter.id=2
mybatis.configuration.map-underscore-to-camel-case=true
mybatis.type-aliases-package=com.hyc.entity
mybatis.mapper-locations=classpath:mappers/*.xml

 

package com.hyc.props;

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;

@Data
@ConfigurationProperties(prefix = "sharding.ds0")
public class FirstDsProp {
    private String jdbcUrl;
    private String username;
    private String password;
    private String type;
}

package com.hyc.props;

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;

@Data
@ConfigurationProperties(prefix = "sharding.ds1")
public class SecondDsProp {
    private String jdbcUrl;
    private String username;
    private String password;
    private String type;
}

 

  • Now for the core part: the data sharding configuration. There is a lot to configure, including a TableRuleConfiguration for each table and the DataSource itself; the official documentation describes each option.
package com.hyc.config;

import com.alibaba.druid.filter.Filter;
import com.alibaba.druid.filter.stat.StatFilter;
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.support.http.StatViewServlet;
import com.google.common.collect.Lists;
import com.hyc.dbstrategy.GenderShardingAlgorithm;
import com.hyc.dbstrategy.IdShardingAlgorithm;
import com.hyc.keygen.SnowflakeShardingKeyGenerator;
import com.hyc.props.FirstDsProp;
import com.hyc.props.SecondDsProp;
import com.hyc.util.DataSourceUtil;
import io.shardingsphere.api.config.rule.ShardingRuleConfiguration;
import io.shardingsphere.api.config.rule.TableRuleConfiguration;
import io.shardingsphere.api.config.strategy.StandardShardingStrategyConfiguration;
import io.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

@Configuration
@EnableConfigurationProperties({FirstDsProp.class, SecondDsProp.class})
@EnableTransactionManagement(proxyTargetClass = true)
@MapperScan(basePackages = "com.hyc.dao", sqlSessionTemplateRef = "sqlSessionTemplate")
public class DataSourceConfig {
    @Value("${snow.work.id:0}")
    private Long workId;
    @Value("${snow.datacenter.id:0}")
    private Long datacenterId;
    @Autowired
    private Environment env;

    /**
     * Druid data source 1.
     *
     * @param firstDsProp connection properties for the first data source
     * @return the data source
     */
    @Bean("ds0")
    public DataSource ds0(FirstDsProp firstDsProp) {
        Map<String, Object> dsMap = new HashMap<>();
        dsMap.put("type", firstDsProp.getType());
        dsMap.put("url", firstDsProp.getJdbcUrl());
        dsMap.put("username", firstDsProp.getUsername());
        dsMap.put("password", firstDsProp.getPassword());
        DruidDataSource ds = (DruidDataSource) DataSourceUtil.buildDataSource(dsMap);
        ds.setProxyFilters(Lists.newArrayList(statFilter()));
        // maximum number of active connections in the pool
        ds.setMaxActive(20);
        // minimum number of idle connections in the pool
        ds.setMinIdle(5);
        return ds;
    }

    /**
     * Druid data source 2.
     *
     * @param secondDsProp connection properties for the second data source
     * @return the data source
     */
    @Bean("ds1")
    public DataSource ds1(SecondDsProp secondDsProp) {
        Map<String, Object> dsMap = new HashMap<>();
        dsMap.put("type", secondDsProp.getType());
        dsMap.put("url", secondDsProp.getJdbcUrl());
        dsMap.put("username", secondDsProp.getUsername());
        dsMap.put("password", secondDsProp.getPassword());
        DruidDataSource ds = (DruidDataSource) DataSourceUtil.buildDataSource(dsMap);
        ds.setProxyFilters(Lists.newArrayList(statFilter()));
        // maximum number of active connections in the pool
        ds.setMaxActive(20);
        // minimum number of idle connections in the pool
        ds.setMinIdle(5);
        return ds;
    }

    @Bean
    public Filter statFilter() {
        StatFilter filter = new StatFilter();
        filter.setSlowSqlMillis(5000);
        filter.setLogSlowSql(true);
        filter.setMergeSql(true);
        return filter;
    }

    @Bean
    public ServletRegistrationBean statViewServlet() {
        // register the Druid stat view servlet
        ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(new StatViewServlet(), "/druid/*");
        // IP whitelist
        servletRegistrationBean.addInitParameter("allow", "127.0.0.1");
        // console admin account
        servletRegistrationBean.addInitParameter("loginUsername", "admin");
        servletRegistrationBean.addInitParameter("loginPassword", "123456");
  109. //是否可以重置数据
  110. servletRegistrationBean.addInitParameter("resetEnable", "false");
  111. return servletRegistrationBean;
  112. }
  113. /**
  114. * shardingjdbc数据源
  115. *
  116. * @return
  117. * @throws SQLException
  118. */
  119. @Bean("dataSource")
  120. public DataSource dataSource(@Qualifier("ds0") DataSource ds0, @Qualifier("ds1") DataSource ds1) throws SQLException {
  121. // 配置真实数据源
  122. Map<String, DataSource> dataSourceMap = new HashMap<>();
  123. dataSourceMap.put("ds0", ds0);
  124. dataSourceMap.put("ds1", ds1);
  125. // 配置分片规则
  126. ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
  127. shardingRuleConfig.getTableRuleConfigs().add(userRuleConfig());
  128. shardingRuleConfig.getTableRuleConfigs().add(addressRuleConfig());
  129. shardingRuleConfig.getTableRuleConfigs().add(orderRuleConfig());
  130. shardingRuleConfig.getTableRuleConfigs().add(orderItemRuleConfig());
  131. shardingRuleConfig.setDefaultDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("user_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  132. shardingRuleConfig.getBindingTableGroups().add("t_user, t_user_address");
  133. shardingRuleConfig.getBindingTableGroups().add("t_order, t_order_item");
  134. shardingRuleConfig.getBroadcastTables().add("t_product");
  135. Properties p = new Properties();
  136. p.setProperty("sql.show",Boolean.TRUE.toString());
  137. // 获取数据源对象
  138. DataSource dataSource = ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, new ConcurrentHashMap(), p);
  139. return dataSource;
  140. }
  141. /**
  142. * 需要手动配置事务管理器
  143. * @param dataSource
  144. * @return
  145. */
  146. @Bean
  147. public DataSourceTransactionManager transactitonManager(@Qualifier("dataSource") DataSource dataSource){
  148. return new DataSourceTransactionManager(dataSource);
  149. }
  150. @Bean("sqlSessionFactory")
  151. @Primary
  152. public SqlSessionFactory sqlSessionFactory(@Qualifier("dataSource") DataSource dataSource) throws Exception {
  153. SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
  154. bean.setDataSource(dataSource);
  155. bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:mappers/*.xml"));
  156. return bean.getObject();
  157. }
  158. @Bean("sqlSessionTemplate")
  159. @Primary
  160. public SqlSessionTemplate sqlSessionTemplate(@Qualifier("sqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
  161. return new SqlSessionTemplate(sqlSessionFactory);
  162. }
  163. private TableRuleConfiguration userRuleConfig() {
  164. TableRuleConfiguration tableRuleConfig = new TableRuleConfiguration();
  165. tableRuleConfig.setLogicTable("t_user");
  166. tableRuleConfig.setActualDataNodes("ds${0..1}.t_user_${0..1}");
  167. tableRuleConfig.setKeyGeneratorColumnName("user_id");
  168. tableRuleConfig.setKeyGenerator(new SnowflakeShardingKeyGenerator(workId, datacenterId));
  169. tableRuleConfig.setDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("user_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  170. tableRuleConfig.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("gender", new GenderShardingAlgorithm(), new GenderShardingAlgorithm()));
  171. return tableRuleConfig;
  172. }
  173. private TableRuleConfiguration addressRuleConfig() {
  174. TableRuleConfiguration tableRuleConfig = new TableRuleConfiguration();
  175. tableRuleConfig.setLogicTable("t_user_address");
  176. tableRuleConfig.setActualDataNodes("ds${0..1}.t_user_address_${0..1}");
  177. tableRuleConfig.setKeyGeneratorColumnName("address_id");
  178. tableRuleConfig.setKeyGenerator(new SnowflakeShardingKeyGenerator(workId, datacenterId));
  179. tableRuleConfig.setDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("user_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  180. tableRuleConfig.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("gender", new GenderShardingAlgorithm(), new GenderShardingAlgorithm()));
  181. return tableRuleConfig;
  182. }
  183. private TableRuleConfiguration orderRuleConfig() {
  184. TableRuleConfiguration tableRuleConfig = new TableRuleConfiguration();
  185. tableRuleConfig.setLogicTable("t_order");
  186. tableRuleConfig.setActualDataNodes("ds${0..1}.t_order_${0..1}");
  187. tableRuleConfig.setKeyGeneratorColumnName("order_id");
  188. tableRuleConfig.setKeyGenerator(new SnowflakeShardingKeyGenerator(workId, datacenterId));
  189. tableRuleConfig.setDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("user_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  190. tableRuleConfig.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("order_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  191. return tableRuleConfig;
  192. }
  193. private TableRuleConfiguration orderItemRuleConfig() {
  194. TableRuleConfiguration tableRuleConfig = new TableRuleConfiguration();
  195. tableRuleConfig.setLogicTable("t_order_item");
  196. tableRuleConfig.setActualDataNodes("ds${0..1}.t_order_item_${0..1}");
  197. tableRuleConfig.setKeyGeneratorColumnName("order_item_id");
  198. tableRuleConfig.setKeyGenerator(new SnowflakeShardingKeyGenerator(workId, datacenterId));
  199. tableRuleConfig.setDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("user_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  200. tableRuleConfig.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("order_id", new IdShardingAlgorithm(), new IdShardingAlgorithm()));
  201. return tableRuleConfig;
  202. }
  203. }
package com.hyc.util;

import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.jdbc.DataSourceBuilder;

import javax.sql.DataSource;
import java.util.Map;

@Slf4j
public class DataSourceUtil {

    private static final String DATASOURCE_TYPE_DEFAULT = "com.zaxxer.hikari.HikariDataSource";

    @SuppressWarnings("unchecked")
    public static DataSource buildDataSource(Map<String, Object> dataSourceMap) {
        Object type = dataSourceMap.get("type");
        if (type == null) {
            // fall back to HikariCP when no pool type is specified
            type = DATASOURCE_TYPE_DEFAULT;
        }
        try {
            Class<? extends DataSource> dataSourceType = (Class<? extends DataSource>) Class.forName((String) type);
            String url = dataSourceMap.get("url").toString();
            String username = dataSourceMap.get("username").toString();
            String password = dataSourceMap.get("password").toString();
            // build a DataSource of the requested type with the given connection settings
            return DataSourceBuilder.create().url(url).username(username).password(password).type(dataSourceType).build();
        } catch (Exception e) {
            log.error("Failed to build data source of type " + type, e);
        }
        return null;
    }
}

 

  • The application startup class
package com.hyc;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// exclude DataSourceAutoConfiguration because the data sources are configured manually
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
public class Shard3ManualApplication {

    public static void main(String[] args) {
        SpringApplication.run(Shard3ManualApplication.class, args);
    }
}
  • Write the controller and service
package com.hyc.service;

import com.hyc.dao.*;
import com.hyc.entity.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.util.HashMap;
import java.util.Map;

@Service
public class BussinessService {

    @Autowired
    private UserMapper userMapper;
    @Autowired
    private UserAddressMapper addressMapper;
    @Autowired
    private OrderMapper orderMapper;
    @Autowired
    private OrderItemMapper orderItemMapper;
    @Autowired
    private ProductMapper productMapper;

    /**
     * Save a user together with an address, an order and an order item in one transaction.
     */
    @Transactional
    public void saveAll(User user, UserAddress address, Order order, OrderItem orderItem) {
        userMapper.insertSelective(user);
        // the generated user_id is the sharding key for the related tables
        address.setUserId(user.getUserId());
        addressMapper.insertSelective(address);
        order.setUserId(user.getUserId());
        orderMapper.insertSelective(order);
        orderItem.setOrderId(order.getOrderId());
        orderItem.setUserId(user.getUserId());
        orderItemMapper.insertSelective(orderItem);
    }

    @Transactional
    public void saveProduct(Product product) {
        productMapper.insertSelective(product);
    }

    public Map<String, Object> findAll() {
        Map<String, Object> result = new HashMap<>();
        // query by a user_id generated in a previous test run
        Long userId = 594099642262884355L;
        User user = userMapper.selectByPrimaryKey(userId);
        result.put("user", user);
        UserAddress address = addressMapper.selectByUserId(userId);
        result.put("address", address);
        Order order = orderMapper.selectByUserId(userId);
        result.put("order", order);
        OrderItem orderItem = orderItemMapper.selectByOrderId(order.getOrderId());
        result.put("orderItem", orderItem);
        return result;
    }
}
package com.hyc.controller;

import cn.hutool.core.date.DateUtil;
import com.alibaba.fastjson.JSON;
import com.hyc.entity.*;
import com.hyc.enums.GenderEnum;
import com.hyc.enums.OrderStatusEnum;
import com.hyc.service.BussinessService;
import com.hyc.util.SnowflakeIdGenerator;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.propertyeditors.CustomDateEditor;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.InitBinder;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.WebRequest;

import java.math.BigDecimal;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Map;

@Slf4j
@RestController
public class BussinessController {

    @Autowired
    private BussinessService bussinessService;
    @Autowired
    private SnowflakeIdGenerator snowflakeIdGenerator;

    @InitBinder
    public void initBinder(WebDataBinder binder, WebRequest request) {
        // convert date request parameters
        DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        binder.registerCustomEditor(Date.class, new CustomDateEditor(dateFormat, true));
    }

    @GetMapping("/buss/createProduct")
    public String createProduct() {
        for (int i = 1; i < 10; i++) {
            Product product = new Product();
            product.setProductId(snowflakeIdGenerator.nextId());
            product.setCode("P00" + i);
            product.setName("商品" + i);
            product.setDesc("商品描述" + i);
            bussinessService.saveProduct(product);
        }
        return "成功";
    }

    @GetMapping("/buss/create")
    public String create() {
        // insert 21 male users, each with an address, an order and an order item
        for (int i = 1; i <= 21; i++) {
            User user = new User();
            user.setName("王大毛" + i);
            user.setGender(GenderEnum.MALE.getCode());
            user.setAge(20 + i);
            user.setBirthDate(DateUtil.parseDate("1989-08-16"));
            user.setIdNumber("4101231989691" + i);
            UserAddress address = new UserAddress();
            address.setCity("某某市");
            address.setDetail("某某街道");
            address.setDistrict("某某区");
            address.setProvince("江苏省");
            address.setSort(1);
            address.setGender(user.getGender());
            Order order = new Order();
            order.setOrderAmount(new BigDecimal(100));
            order.setOrderNo("ORDER_00" + i);
            order.setOrderStatus(OrderStatusEnum.PROCESSING.getCode());
            order.setRemark("测试");
            OrderItem orderItem = new OrderItem();
            orderItem.setItemPrice(new BigDecimal(5));
            orderItem.setOrderTime(DateUtil.parse("2019-06-27 17:50:05"));
            orderItem.setProductId(593860920283758592L);
            orderItem.setTotalNum(20);
            orderItem.setTotalPrice(new BigDecimal(100));
            bussinessService.saveAll(user, address, order, orderItem);
        }
        // insert 21 female users the same way
        for (int i = 1; i <= 21; i++) {
            User user = new User();
            user.setName("王大莉" + i);
            user.setGender(GenderEnum.FEMALE.getCode());
            user.setAge(20 + i);
            user.setBirthDate(DateUtil.parseDate("1989-08-16"));
            user.setIdNumber("1101231989691" + i);
            UserAddress address = new UserAddress();
            address.setCity("某某市");
            address.setDetail("某某街道");
            address.setDistrict("某某区");
            address.setProvince("江苏省");
            address.setSort(1);
            address.setGender(user.getGender());
            Order order = new Order();
            order.setOrderAmount(new BigDecimal(100));
            order.setOrderNo("ORDER_00" + i);
            order.setOrderStatus(OrderStatusEnum.PROCESSING.getCode());
            order.setRemark("测试");
            OrderItem orderItem = new OrderItem();
            orderItem.setItemPrice(new BigDecimal(5));
            orderItem.setOrderTime(DateUtil.parse("2019-06-27 17:50:05"));
            orderItem.setProductId(593860924259958784L);
            orderItem.setTotalNum(20);
            orderItem.setTotalPrice(new BigDecimal(100));
            bussinessService.saveAll(user, address, order, orderItem);
        }
        return "成功";
    }

    @GetMapping("/buss/all")
    public String findAll() {
        Map<String, Object> result = bussinessService.findAll();
        return JSON.toJSONString(result);
    }
}
  • Testing

Request http://localhost:8080/buss/createProduct to create products. Because t_product is configured as a broadcast table, you can see that two identical copies of the rows have been inserted, one in each database.

Request http://localhost:8080/buss/create to create users, orders and so on, then inspect the databases. Taking the user table as an example, rows are distributed between the two databases by the parity (odd/even) of user_id,

and within each database they are further split into different table shards by gender.
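The routing just described can be sketched in plain Java. This is an illustration only: the odd/even database routing matches what we observed above, but the gender-to-table-suffix mapping (code 1 → `t_user_0`, other codes → `t_user_1`) is an assumption and not taken from the actual GenderShardingAlgorithm:

```java
// Illustration of the expected shard routing; the gender-to-suffix mapping is assumed.
public class ShardRoutingDemo {

    // Database routing: ds0 for even user_id, ds1 for odd user_id.
    static String databaseFor(long userId) {
        return "ds" + (userId % 2);
    }

    // Table routing for t_user: one shard per gender code (assumed mapping).
    static String userTableFor(int genderCode) {
        return "t_user_" + (genderCode == 1 ? 0 : 1);
    }

    public static void main(String[] args) {
        long userId = 594099642262884355L; // the sample user_id queried in findAll()
        // odd id -> ds1; gender code 1 -> t_user_0
        System.out.println(databaseFor(userId) + "." + userTableFor(1)); // prints ds1.t_user_0
    }
}
```

Running this for the sample user shows exactly where a given row should land, which is a handy sanity check when verifying the shards by hand.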

 

The data is sharded across databases and tables exactly as we intended. You can also run further tests on queries, transactions and so on; they are not listed one by one here.
