When bulk-inserting data into PostgreSQL, the batch of rows was too large and triggered an IO exception.
For now the only workaround is to split the data into smaller chunks and insert each chunk as its own batch. This doesn't feel like the optimal solution, but it holds up for the moment. The List-splitting utility is as follows:
package cn.ucmed.otaka.healthcare.cloud.util;

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionArray<T> {

    /**
     * Splits tArray into consecutive chunks of at most capacity elements,
     * keyed by chunk index (0, 1, 2, ...).
     */
    public Map<Integer, List<T>> partition(List<T> tArray, int capacity) {
        // Return an empty map instead of null, so callers can iterate without a null check.
        if (tArray.isEmpty() || capacity < 1) {
            return Collections.emptyMap();
        }
        Map<Integer, List<T>> result = new HashMap<>();
        int size = tArray.size();
        int count = (int) Math.ceil(size * 1.0 / capacity);
        for (int i = 0; i < count; i++) {
            int end = Math.min(capacity * (i + 1), size);
            // subList returns a view backed by the original list, which is
            // fine for read-only batching but must not be mutated.
            result.put(i, tArray.subList(capacity * i, end));
        }
        return result;
    }
}
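As a quick sanity check of the chunking behavior, here is a minimal, self-contained sketch; the demo class and sample data are illustrative, not part of the project:

import cn.ucmed.otaka.healthcare.cloud.util.PartitionArray;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PartitionArrayDemo {
    public static void main(String[] args) {
        // 10 elements with capacity 3 should yield 4 chunks:
        // [0,1,2], [3,4,5], [6,7,8], [9]
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            data.add(i);
        }
        Map<Integer, List<Integer>> chunks = new PartitionArray<Integer>().partition(data, 3);
        // HashMap iteration order is not guaranteed, so look chunks up by index.
        for (int i = 0; i < chunks.size(); i++) {
            System.out.println(i + " -> " + chunks.get(i));
        }
    }
}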
Then the original batch-insert call site gets a small adjustment:
try {
    PartitionArray<MDynamicFuncReleaseHistory> partitionArray = new PartitionArray<>();
    Map<Integer, List<MDynamicFuncReleaseHistory>> batchList =
            partitionArray.partition(releaseHistoryList, INSERT_CAPACITY);
    // Insert one bounded chunk at a time instead of the whole list in a single statement.
    batchList.forEach((k, v) -> mDynamicFuncReleaseHistoryMapper.batchInsert(v));
} catch (Exception e) {
    log.error(e.getMessage(), e); // log the stack trace, not just the message
    throw new BusinessException(500, "Failed to batch-insert MDynamicFuncReleaseHistory");
}
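As for why an oversized batch blows up in the first place: assuming batchInsert builds a single multi-row INSERT (the usual MyBatis foreach pattern), every row contributes one bind parameter per column, and PostgreSQL's extended query protocol encodes a statement's parameter count as a signed 16-bit integer, so a single statement can carry at most 32767 bind parameters; very large statements also inflate the protocol buffers, which can surface as an IO error rather than a clean SQL error. Under that assumption, INSERT_CAPACITY can be bounded by the column count. The helper below is a hypothetical sketch, not project code:

// Hypothetical sizing helper; MAX_BIND_PARAMS reflects the 16-bit
// parameter count in PostgreSQL's extended query protocol.
public final class BatchSizing {
    private static final int MAX_BIND_PARAMS = Short.MAX_VALUE; // 32767

    // Upper bound on rows per multi-row INSERT for a table with the given column count.
    static int safeCapacity(int columnsPerRow) {
        return Math.max(1, MAX_BIND_PARAMS / columnsPerRow);
    }

    public static void main(String[] args) {
        System.out.println(safeCapacity(20)); // a 20-column table: at most 1638 rows per batch
    }
}

If splitting in application code ever needs to go away entirely, the usual next steps are pgjdbc's reWriteBatchedInserts=true connection property (the driver itself rewrites accumulated single-row inserts into multi-row statements) or PostgreSQL's COPY, though both require changes below the mapper layer.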