
Starting DB2 with Docker and Integrating It with Spring Boot (Druid Connection Pool)


1 Introduction

DB2 is a solid relational database from IBM. This article takes a quick, hands-on look at getting it running and wiring it into a Spring Boot application.

2 Installing DB2 with Docker

The fastest way to get a database up is Docker. Pull the image first:

```shell
docker pull ibmcom/db2
# or pin a version
docker pull ibmcom/db2:11.5.0.0
```

Then start the database:

```shell
docker run -itd \
  --name mydb2 \
  --privileged=true \
  -p 50000:50000 \
  -e LICENSE=accept \
  -e DB2INST1_PASSWORD=pkslow \
  -e DBNAME=testdb \
  ibmcom/db2:11.5.0.0

# or, with persistent storage mounted from the host
docker run -itd \
  --name mydb2 \
  --privileged=true \
  -p 50000:50000 \
  -e LICENSE=accept \
  -e DB2INST1_PASSWORD=<choose an instance password> \
  -e DBNAME=testdb \
  -v <db storage dir>:/database \
  ibmcom/db2

# the variant actually used in this article
docker run -itd \
  --name mydb2 \
  --privileged=true \
  -p 50000:50000 \
  -e LICENSE=accept \
  -e DB2INST1_PASSWORD=HoupuTech__ \
  -e DBNAME=langpf \
  -v /data/db2:/database/ibmcom/db2 \
  ibmcom/db2:latest
```

The resulting database can be reached with:

  • Connection URL: jdbc:db2://localhost:50000/testdb
  • Username: db2inst1
  • Password: pkslow
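The URL above follows DB2's JDBC format: extra connection properties (such as currentSchema, which appears in the datasource configuration later in this article) are appended after a colon, each terminated with a semicolon. A minimal sketch of how such a URL is assembled; the `Db2Url` helper name is made up for illustration:

```java
// Hypothetical helper: assembles a DB2 JDBC URL like the one used in this article.
public class Db2Url {

    // DB2 JDBC URLs append properties after a ':', each ending with ';'
    static String build(String host, int port, String db, String schema) {
        return "jdbc:db2://" + host + ":" + port + "/" + db
                + ":currentSchema=" + schema + ";";
    }

    public static void main(String[] args) {
        // -> jdbc:db2://localhost:50000/testdb:currentSchema=TEST_SCHEMA;
        System.out.println(build("localhost", 50000, "testdb", "TEST_SCHEMA"));
    }
}
```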

Connecting from IDEA looks like this:

The default schema is DB2INST1. Some SQL to try it out:

Create a table:

```sql
-- Table with an auto-increment primary key
CREATE TABLE TEST_SCHEMA.mytest_increment_key_table(
    id             BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1, NO CACHE) PRIMARY KEY,
    timer_wheel_id BIGINT NOT NULL,
    create_time    TIMESTAMP NOT NULL
);
```
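GENERATED ALWAYS AS IDENTITY means DB2 itself assigns the id values, starting at 1 and stepping by 1 (NO CACHE disables pre-allocation of a block of values). A toy sketch of that numbering rule, purely for illustration; the `IdentitySim` class is not part of the project:

```java
// Illustration only: mimics the counter behind
// GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1).
public class IdentitySim {
    private long next;
    private final long step;

    public IdentitySim(long startWith, long incrementBy) {
        this.next = startWith;
        this.step = incrementBy;
    }

    // value handed to the next inserted row
    public long nextValue() {
        long v = next;
        next += step;
        return v;
    }
}
```

Because the column is GENERATED ALWAYS, inserts must leave id out of the column list; supplying an explicit value is rejected unless OVERRIDING SYSTEM VALUE is used.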

Insert a row:

```sql
INSERT INTO TEST_SCHEMA.mytest_increment_key_table(timer_wheel_id, create_time)
VALUES (1, '2022-03-06 23:12:21.333');
```

Query it:

```sql
SELECT * FROM TEST_SCHEMA.mytest_increment_key_table;
```

3 Integrating DB2 with Spring Boot

Add the dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- shared dependency versions come from the Spring Boot parent -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.12</version>
        <relativePath/>
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>11</java.version>
        <project-name>druid-hibernate-db2</project-name>
    </properties>

    <groupId>org.find</groupId>
    <artifactId>${project-name}</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>
    <name>${project-name}</name>
    <description>${project-name}</description>

    <dependencies>
        <!-- ============= db2 database: start ============== -->
        <!-- Alibaba Druid connection pool -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.2.16</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        <!-- JPA (pulls in spring-boot-starter-jdbc) -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <!-- transaction management (atomicity, consistency, isolation, durability)
             comes via spring-jdbc, so the explicit declarations below are optional -->
        <!--<dependency>-->
        <!--    <groupId>org.springframework.boot</groupId>-->
        <!--    <artifactId>spring-boot-starter-jdbc</artifactId>-->
        <!--</dependency>-->
        <!--<dependency>-->
        <!--    <groupId>org.springframework</groupId>-->
        <!--    <artifactId>spring-jdbc</artifactId>-->
        <!--    <version>4.1.0.RELEASE</version>-->
        <!--</dependency>-->
        <!-- DB2 JDBC driver -->
        <!--<dependency>-->
        <!--    <groupId>com.ibm.db2</groupId>-->
        <!--    <artifactId>db2jcc4</artifactId>-->
        <!--    <version>4.32.28</version>-->
        <!--</dependency>-->
        <dependency>
            <groupId>com.ibm.db2</groupId>
            <artifactId>jcc</artifactId>
        </dependency>
        <!-- ============= db2 database: end ============== -->

        <!-- ================== application =================== -->
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
        </dependency>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>31.1-jre</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>2.0.32</version>
        </dependency>
        <!-- javax api -->
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
        <!-- the parent above manages the versions below -->
        <!-- includes mvc, aop, and friends -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- fix the encoding of resource files -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-resources-plugin</artifactId>
                <configuration>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <!-- attach a sources jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-source-plugin</artifactId>
                <executions>
                    <execution>
                        <id>attach-sources</id>
                        <!--<phase>verify</phase>-->
                        <goals>
                            <!-- jar or jar-no-fork -->
                            <goal>jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Spring Boot picks a default JDK version at compile time; pin it here -->
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>11</source>
                    <target>11</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <!-- copy dependency jars into a dedicated directory -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy-dependencies</id>
                        <phase>package</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/lib</outputDirectory>
                            <overWriteReleases>false</overWriteReleases>
                            <overWriteSnapshots>false</overWriteSnapshots>
                            <overWriteIfNewer>true</overWriteIfNewer>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <!-- maven-jar-plugin generates part of META-INF/MANIFEST.MF -->
                            <addClasspath>true</addClasspath>
                            <!-- directory holding the dependency jars -->
                            <classpathPrefix>lib/</classpathPrefix>
                            <!-- Main-Class entry in MANIFEST.MF -->
                            <mainClass>org.fiend.MySpringbootApp</mainClass>
                            <useUniqueVersions>false</useUniqueVersions>
                        </manifest>
                    </archive>
                    <excludes>
                        <!--<exclude>*.properties</exclude>-->
                        <!--<exclude>*.yml</exclude>-->
                        <!--<exclude>*.xml</exclude>-->
                        <!--<exclude>org/fiend/controller/HomeController.class</exclude>-->
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

Configure the application properties (indentation reconstructed to valid YAML):

```yaml
server:
  port: 8100
  max-http-header-size: 8192
  tomcat:
    max-connections: 10000   # maximum connections, default 10000
    accept-count: 500        # maximum queued connections, default 100
    threads:
      max: 475               # maximum worker threads, default 200
      min-spare: 400         # minimum worker threads, default 10
  servlet:
    encoding:
      charset: UTF-8
      force: true
      enabled: true

logging:
  level:
    root: info

spring:
  application:
    name: druid-hibernate-db2
  main:
    allow-bean-definition-overriding: true
  datasource:
    url: jdbc:db2://192.168.1.22:50000/MYTEST:currentSchema=TEST_SCHEMA;
    username: username
    password: 123123
    driver-class-name: com.ibm.db2.jcc.DB2Driver
    type: com.alibaba.druid.pool.DruidDataSource
    # pool settings
    initial-size: 5
    max-active: 20
    max-idle: 10
    min-idle: 5
    # ============================== druid ============================== #
    druid:
      maxActive: 20      # maximum active connections
      initialSize: 1     # initial pool size
      maxWait: 60000     # maximum wait for a connection (ms)
      # enable PSCache and set its size per connection
      poolPreparedStatements: true
      maxPoolPreparedStatementPerConnectionSize: 20
      # connectionProperties can enable mergeSql and slow-SQL logging
      #connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
      minIdle: 1
      timeBetweenEvictionRunsMillis: 60000
      minEvictableIdleTimeMillis: 300000
      # validationQuery: select 1 from dual
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      # monitoring filters; without 'stat' the monitoring page cannot collect SQL stats,
      # 'wall' is the SQL firewall
      filters: stat, wall, log4j
  # ============================= spring jpa ============================= #
  jpa:
    show-sql: false   # suppress Hibernate SQL output
    database-platform: org.hibernate.dialect.DB2Dialect
    hibernate:
      # five options: create, create-drop, update, validate, none
      # create: create the schema and destroy previous data on every start,
      #   even when nothing has changed
      # create-drop: create, then drop the schema when the session factory closes
      # update: the most common choice; update the schema if necessary,
      #   existing rows are kept even when the table structure changes
      # validate: compare the schema against the entities only; make no changes
      # none: disable DDL handling
      # this key has lower priority than hibernate.hbm2ddl.auto below,
      # so the value here does not take effect
      ddl-auto: none
    properties:
      hibernate:
        dialect: org.hibernate.dialect.DB2Dialect
        # controls automatic creation/update/validation of tables:
        # 1. create: drop and recreate tables on every start; all existing data is lost.
        # 2. update: diff the entity mappings against the tables and add missing
        #    tables/columns/indexes; never drops or alters existing columns.
        #    Common in dev/test environments.
        # 3. validate: verify the tables match the mappings on startup;
        #    throw and abort startup on any mismatch.
        # 4. none: no automatic DDL; the schema is managed by hand.
        # Convenient in dev/test, but use with care in production; there,
        # manage the schema manually and version it with a migration tool
        # such as Flyway or Liquibase.
        hbm2ddl.auto: none
        jdbc.lob.non_contextual_creation: true
        format_sql: true
        temp:
          # Spring Boot 2.x compatibility: stop Hibernate probing JDBC metadata (CLOB support)
          use_jdbc_metadata_defaults: false
```

Create the entity:

```java
package org.fiend.entity;

import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonProperty;
import org.springframework.format.annotation.DateTimeFormat;

import javax.persistence.*;
import javax.validation.constraints.NotNull;
import java.io.Serializable;
import java.sql.Timestamp;

/**
 * @author langpf 2023/06/10
 */
@Entity
@Table(catalog = "TEST_SCHEMA", name = "mytest_increment_key_table")
public class IncrementKeyEntity implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "timer_wheel_id", nullable = false)
    private Long timerWheelId;

    @NotNull(message = "create_time must not be null!")
    @JsonProperty("create_time")
    @Column(name = "create_time", nullable = false, columnDefinition = "timestamp")
    @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS")
    @DateTimeFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS")
    private Timestamp createTime;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public Long getTimerWheelId() {
        return timerWheelId;
    }

    public void setTimerWheelId(Long timerWheelId) {
        this.timerWheelId = timerWheelId;
    }

    public Timestamp getCreateTime() {
        return createTime;
    }

    public void setCreateTime(Timestamp createTime) {
        this.createTime = createTime;
    }
}
```

Create a repository for database access:

```java
package org.fiend.repository;

import org.fiend.entity.IncrementKeyEntity;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

/**
 * @author langpf 2018/12/10
 */
@Repository
public interface IncrementKeyRepo extends JpaRepository<IncrementKeyEntity, Long> {
}
```

Test endpoints

IncrementKeyController.java

```java
package org.fiend.c.web.controller;

import com.alibaba.fastjson.JSONObject;
import org.fiend.c.web.service.IncrementKeyTestService;
import org.fiend.entity.IncrementKeyEntity;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * @author 86133 2023-07-11 15:43:00
 */
@RestController
public class IncrementKeyController {
    Logger log = LoggerFactory.getLogger(getClass());

    @Autowired
    IncrementKeyTestService incrementKeyTestService;

    @RequestMapping(value = "getData")
    public List<IncrementKeyEntity> getData() {
        return incrementKeyTestService.getData();
    }

    @RequestMapping(value = "insertByJdbc")
    public String insertByJdbc() {
        return incrementKeyTestService.insertByJdbc();
    }

    @RequestMapping(value = "insertByHibernate")
    public String insertByHibernate() {
        return incrementKeyTestService.insertByHibernate();
    }

    @RequestMapping(value = "batchInsertByJdbc")
    public JSONObject batchInsertByJdbc(int count) {
        incrementKeyTestService.batchInsertByJdbc(count);
        return incrementKeyTestService.getIncrementSpendTime();
    }

    @RequestMapping(value = "batchInsertByHibernate")
    public JSONObject batchInsertByHibernate(int count) {
        incrementKeyTestService.batchInsertByHibernate(count);
        return incrementKeyTestService.getIncrementSpendTime();
    }

    @RequestMapping(value = "clearSpendTime")
    public String clearSpendTime() {
        return incrementKeyTestService.clearSpendTime();
    }

    @RequestMapping(value = "getIncrementSpendTime")
    public JSONObject getIncrementSpendTime() {
        return incrementKeyTestService.getIncrementSpendTime();
    }
}
```

IncrementKeyTestService.java

```java
package org.fiend.c.web.service;

import com.alibaba.fastjson.JSONObject;
import com.google.common.collect.Lists;
import org.fiend.entity.IncrementKeyEntity;
import org.fiend.repository.IncrementKeyRepo;
import org.fiend.util.DateUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * @author Administrator
 */
@Service
public class IncrementKeyTestService {
    Logger log = LoggerFactory.getLogger(getClass());

    @Autowired
    JdbcTemplate jdbcTemplate;

    @Autowired
    IncrementKeyRepo incrementKeyRepo;

    AtomicLong maxSpendTime = new AtomicLong(0);
    AtomicLong totalSpendTime = new AtomicLong(0);
    AtomicInteger totalCount = new AtomicInteger(0);

    private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = readWriteLock.writeLock();

    @Transactional
    public List<IncrementKeyEntity> getData() {
        return incrementKeyRepo.findAll();
    }

    public String insertByJdbc() {
        long start = System.currentTimeMillis();
        String msg = "insert user failed!";
        String sql2 = "insert into TEST_SCHEMA.mytest_increment_key_table(timer_wheel_id, create_time) values (?, ?)";
        int result = jdbcTemplate.update(sql2, 2, DateUtil.getTimestamp());
        if (result < 1) {
            log.error(msg);
            throw new RuntimeException(msg);
        }
        long spendTime = System.currentTimeMillis() - start;
        calcSpendTime(spendTime);
        log.info("spend time: {}ms", spendTime);
        return "ok";
    }

    public String insertByHibernate() {
        IncrementKeyEntity incrementKeyEntity = new IncrementKeyEntity();
        incrementKeyEntity.setTimerWheelId((long) 2);
        incrementKeyEntity.setCreateTime(DateUtil.getTimestamp());
        long start = System.currentTimeMillis();
        incrementKeyRepo.save(incrementKeyEntity);
        long spendTime = System.currentTimeMillis() - start;
        calcSpendTime(spendTime);
        log.info("spend time: {}ms", spendTime);
        return "ok";
    }

    public void batchInsertByJdbc(int count) {
        clearSpendTime();
        int loopTimes = 10;
        int i = loopTimes;
        while (i > 0) {
            List<Object[]> batchArgs = Lists.newArrayList();
            int j = count;
            while (j > 0) {
                batchArgs.add(new Object[]{j, DateUtil.getTimestamp()});
                j--;
            }
            String sql2 = "insert into mytest_increment_key_table(timer_wheel_id, create_time) values (?, ?)";
            long start = System.currentTimeMillis();
            int[] result = jdbcTemplate.batchUpdate(sql2, batchArgs);
            long spendTime = System.currentTimeMillis() - start;
            log.info("spendTime: {}ms, result: {}", spendTime, result);
            calcSpendTime(spendTime);
            i--;
        }
        log.info("batch size: {}, maxSpendTime: {}ms, avg spend time: {}ms", count,
                maxSpendTime.get(), totalSpendTime.get() / loopTimes);
    }

    public void batchInsertByHibernate(int count) {
        clearSpendTime();
        int loopTimes = 10;
        int i = loopTimes;
        while (i > 0) {
            List<IncrementKeyEntity> list = Lists.newArrayList();
            int j = count;
            IncrementKeyEntity incrementKeyEntity;
            while (j > 0) {
                incrementKeyEntity = new IncrementKeyEntity();
                incrementKeyEntity.setTimerWheelId((long) 2);
                incrementKeyEntity.setCreateTime(DateUtil.getTimestamp());
                list.add(incrementKeyEntity);
                j--;
            }
            long start = System.currentTimeMillis();
            incrementKeyRepo.saveAll(list);
            long spendTime = System.currentTimeMillis() - start;
            log.info("spend time: {}ms", spendTime);
            calcSpendTime(spendTime);
            i--;
        }
        log.info("batch size: {}, maxSpendTime: {}ms, avg spend time: {}ms", count,
                maxSpendTime.get(), totalSpendTime.get() / loopTimes);
    }

    public String clearSpendTime() {
        maxSpendTime.set(0);
        totalSpendTime.set(0);
        totalCount.set(0);
        log.info("max spend time: {}ms, avg spend time: {}ms",
                maxSpendTime.get(), totalCount.get() == 0 ? 0 : totalSpendTime.get() / totalCount.get());
        return "ok";
    }

    public JSONObject getIncrementSpendTime() {
        JSONObject json = new JSONObject();
        json.put("maxSpendTime", maxSpendTime.get() + "ms");
        json.put("avgSpendTime", totalCount.get() == 0 ? 0 + "ms" : totalSpendTime.get() / totalCount.get() + "ms");
        log.info(JSONObject.toJSONString(json));
        return json;
    }

    private void calcSpendTime(long spendTime) {
        writeLock.lock();
        if (spendTime > maxSpendTime.get()) {
            maxSpendTime.set(spendTime);
        }
        writeLock.unlock();
        totalCount.incrementAndGet();
        totalSpendTime.addAndGet(spendTime);
    }
}
```
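The max/avg bookkeeping done by calcSpendTime and getIncrementSpendTime can be pulled out into a small standalone class, which makes the logic easy to test in isolation. This sketch replaces the service's write lock with an accumulateAndGet CAS loop, which is an equivalent lock-free way to update the maximum:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Standalone version of the spend-time statistics kept by IncrementKeyTestService.
public class SpendTimeStats {
    private final AtomicLong maxSpendTime = new AtomicLong(0);
    private final AtomicLong totalSpendTime = new AtomicLong(0);
    private final AtomicInteger totalCount = new AtomicInteger(0);

    public void record(long spendTimeMillis) {
        // lock-free max update instead of the ReentrantReadWriteLock used in the service
        maxSpendTime.accumulateAndGet(spendTimeMillis, Math::max);
        totalCount.incrementAndGet();
        totalSpendTime.addAndGet(spendTimeMillis);
    }

    public long max() {
        return maxSpendTime.get();
    }

    public long avg() {
        int n = totalCount.get();
        return n == 0 ? 0 : totalSpendTime.get() / n;
    }
}
```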

The resulting table data:

Benchmark: Hibernate vs. JDBC

```text
Table and insert statement: the same CREATE TABLE / INSERT shown earlier.

---------------- IncrementKey benchmark ----------------
Spring Boot startup
  Test machine: 2 cores, 1 GB RAM
  Startup command: java -jar -Xms868m -Xmx868m druid-hibernate-db2-1.0.jar

Single-row insert
  Parameters: JMeter, 90 threads, constant throughput 21000.0/min, duration 90 s
  insertByJdbc:      "avgSpendTime": "9ms",  "maxSpendTime": "277ms", throughput: 333/sec
  insertByHibernate: "avgSpendTime": "11ms", "maxSpendTime": "351ms", throughput: 329/sec
  Conclusion: JDBC writes beat Hibernate slightly on both throughput and average
  latency, so JDBC is the recommended write path.

Batch insert
  batchInsertByJdbc:
    batch size: 10, "avgSpendTime":"16ms",  "maxSpendTime":"44ms"
    batch size: 20, "avgSpendTime":"22ms",  "maxSpendTime":"28ms"
    batch size: 30, "avgSpendTime":"34ms",  "maxSpendTime":"41ms"
    batch size: 40, "avgSpendTime":"43ms",  "maxSpendTime":"69ms"
    batch size: 50, "avgSpendTime":"53ms",  "maxSpendTime":"63ms"
  batchInsertByHibernate:
    batch size: 10, "avgSpendTime":"79ms",  "maxSpendTime":"381ms"
    batch size: 20, "avgSpendTime":"88ms",  "maxSpendTime":"143ms"
    batch size: 30, "avgSpendTime":"118ms", "maxSpendTime":"126ms"
    batch size: 40, "avgSpendTime":"147ms", "maxSpendTime":"161ms"
    batch size: 50, "avgSpendTime":"192ms", "maxSpendTime":"253ms"
  Conclusion: at the same batch size, JDBC batch writes beat Hibernate by a wide
  margin on both average and maximum latency.
```
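Dividing the batch averages above by the batch size makes the gap concrete: at batch size 50, JDBC spends about 1.06 ms per row versus roughly 3.84 ms per row for Hibernate. A quick check of that arithmetic:

```java
// Per-row cost computed from the measured batch averages above.
public class PerRowCost {

    static double perRow(long avgSpendTimeMillis, int batchSize) {
        return (double) avgSpendTimeMillis / batchSize;
    }

    public static void main(String[] args) {
        System.out.printf("jdbc,      batch 50: %.2f ms/row%n", perRow(53, 50));
        System.out.printf("hibernate, batch 50: %.2f ms/row%n", perRow(192, 50));
    }
}
```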


