
Chunked Upload of Large Files with FastDFS

I. Installing FastDFS with Docker

1. Pull the image

docker pull delron/fastdfs

2. Start a tracker container (the tracker schedules client requests across storage nodes)

docker run -dti --network=host --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker

3. Start a storage container (storage nodes hold the file data and replicas; TRACKER_SERVER must point at the tracker host's IP)

docker run -dti  --network=host --name storage -e TRACKER_SERVER=10.0.0.18:22122 -v /var/fdfs/storage:/var/fdfs  -v /etc/localtime:/etc/localtime  delron/fastdfs storage

4. Test the setup

[root@vizhuo-zabbix-server ~]# docker exec -it storage bash
[root@vizhuo-zabbix-server nginx-1.12.2]# cd /var/fdfs
[root@vizhuo-zabbix-server fdfs]# echo hello 这是一个测试用例>a.txt
[root@vizhuo-zabbix-server fdfs]# ll
total 16
-rw-r--r--   1 root root   31 Jan 27 14:59 a.txt
drwxr-xr-x 259 root root 8192 Jan 27 14:58 data
drwxr-xr-x   2 root root   26 Jan 27 14:58 logs
[root@vizhuo-zabbix-server fdfs]# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf a.txt
group1/M00/00/00/CgAAEmHyQvmAWM6zAAAAH93k9Eg435.txt

Reference: https://www.cnblogs.com/braveym/p/15540132.html

II. Integrating fastdfs-client

1. Add the dependencies

<!-- FastDFS client -->
<dependency>
    <groupId>com.github.tobato</groupId>
    <artifactId>fastdfs-client</artifactId>
    <version>1.25.2-RELEASE</version>
</dependency>
<!-- Hutool utility library -->
<dependency>
    <groupId>cn.hutool</groupId>
    <artifactId>hutool-all</artifactId>
    <version>4.0.12</version>
</dependency>
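Having the dependency on the classpath is not enough on its own: the tobato client also needs the tracker address. A minimal Spring Boot configuration sketch, assuming the Docker tracker started above (the address and timeout values are examples; adjust them to your environment), together with `@Import(FdfsClientConfig.class)` on a `@Configuration` class to register the client beans:

```yaml
fdfs:
  # Socket and connect timeouts for the FastDFS connection pool (ms)
  so-timeout: 1500
  connect-timeout: 600
  # Tracker address; matches the tracker container started above
  tracker-list:
    - 10.0.0.18:22122
```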

2. Pre-upload check

@GetMapping("/check_before_upload")
@ApiOperation("Pre-upload check for chunked upload")
public RespMsgBean checkBeforeUpload(@RequestParam("userId") Long userId,
                                     @RequestParam("fileMd5") String fileMd5) throws RedisConnectException {
    return fileService.checkFile(userId, fileMd5);
}
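The `fileMd5` parameter is computed by the client over the whole file before the check call. A minimal sketch of that computation on the JVM side (the `Md5Util.md5Hex` helper is illustrative, not part of the project's code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Util {

    // Stream the file through MD5 so large files are never fully loaded into memory
    public static String md5Hex(InputStream in) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
        // Left-pad to 32 hex digits (leading zero bytes would otherwise be dropped)
        return String.format("%032x", new BigInteger(1, md.digest()));
    }
}
```

The same digest must be produced by the browser (e.g. with a JS MD5 library) so the server-side dedup and resume checks line up.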
public RespMsgBean checkFile(Long userId, String fileMd5) throws RedisConnectException {
    if (StringUtils.isEmpty(fileMd5)) {
        return RespMsgBean.failure("fileMd5 must not be empty");
    }
    if (userId == null) {
        return RespMsgBean.failure("userId must not be empty");
    }
    String userIdStr = userId.toString();
    CheckFileDto checkFileDto = new CheckFileDto();

    // Look up the MD5s of completed files. A real system would query the file
    // table in MySQL; here the completed list is kept in Redis.
    List<String> fileList = redisUtils.getListAll(UpLoadConstant.completedList);
    if (CollUtil.isNotEmpty(fileList)) {
        for (String e : fileList) {
            JSONObject obj = JSONUtil.parseObj(e);
            if (obj.get("md5").equals(fileMd5)) {
                // Same MD5 already uploaded: return its size and URL directly
                checkFileDto.setTotalSize(obj.getLong("length"));
                checkFileDto.setViewPath(obj.getStr("url"));
                return RespMsgBean.success(checkFileDto);
            }
        }
    }

    // Database variant of the same dedup check:
    // FileDo fileDo = fileDao.findOneByColumn("scode", fileMd5);
    // if (fileDo != null) {
    //     return RespMsgBean.success("File already exists", doToVo(fileDo));
    // }

    // Check whether the upload lock for this MD5 is already held
    String lockName = UpLoadConstant.currLocks + fileMd5;
    long lock = redisUtils.incrBy(lockName, 1);
    String lockOwner = UpLoadConstant.lockOwner + fileMd5;
    String chunkCurrKey = UpLoadConstant.chunkCurr + fileMd5;
    if (lock > 1) {
        checkFileDto.setLock(1);
        // The lock is held; let the request through only if this user owns it
        String owner = redisUtils.get(lockOwner);
        if (StringUtils.isEmpty(owner)) {
            return RespMsgBean.failure("Cannot determine the owner of the file lock");
        }
        if (owner.equals(userIdStr)) {
            // Resume: report which chunk the client should send next
            String chunkCurr = redisUtils.get(chunkCurrKey);
            if (StringUtils.isEmpty(chunkCurr)) {
                return RespMsgBean.failure("Cannot read chunkCurr for this file");
            }
            checkFileDto.setChunkCurr(Convert.toInt(chunkCurr));
            return RespMsgBean.success("", null);
        }
        return RespMsgBean.failure("Someone else is uploading this file; you cannot upload it right now");
    }
    // First uploader: initialize the lock owner and the chunk counter
    redisUtils.set(lockOwner, userIdStr);
    // The first chunk index is 0, matching the front end
    redisUtils.set(chunkCurrKey, "0");
    checkFileDto.setChunkCurr(0);
    return RespMsgBean.success("Check passed", null);
}
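The `UpLoadConstant` class referenced throughout is not shown in the original. A plausible sketch, inferred purely from how the keys are used; only the field names are dictated by the code, and the concrete prefix strings are illustrative:

```java
// Redis key prefixes used by the chunked-upload flow. Each per-file key is
// built by appending the file's MD5 to one of these prefixes.
public class UpLoadConstant {
    // FastDFS group that receives the appender files
    public static final String DEFAULT_GROUP = "group1";
    // List of completed uploads (JSON records with md5, length, url)
    public static final String completedList = "fdfs:completedList";
    // Per-file upload lock counter: currLocks + fileMd5
    public static final String currLocks = "fdfs:currLocks:";
    // User id that owns the upload lock: lockOwner + fileMd5
    public static final String lockOwner = "fdfs:lockOwner:";
    // Index of the next expected chunk: chunkCurr + fileMd5
    public static final String chunkCurr = "fdfs:chunkCurr:";
    // Per-chunk mutex guarding a single upload request: chunkLock + fileMd5
    public static final String chunkLock = "fdfs:chunkLock:";
    // FastDFS path (without group) of the appender file: fastDfsPath + fileMd5
    public static final String fastDfsPath = "fdfs:fastDfsPath:";
    // Number of bytes already written to FastDFS: fastDfsSize + fileMd5
    public static final String fastDfsSize = "fdfs:fastDfsSize:";
}
```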

3. Uploading the file chunks

@PostMapping("/upload_big_file_chunk")
@ApiOperation("Chunked upload of a large file")
public RespMsgBean uploadBigFileChunk(@RequestParam("file") @ApiParam(value = "file chunk", required = true) MultipartFile file,
                                      @RequestParam("userId") @ApiParam(value = "user id", required = true) Long userId,
                                      @RequestParam("fileMd5") @ApiParam(value = "MD5 of the whole file", required = true) String fileMd5,
                                      @RequestParam("fileName") @ApiParam(value = "file name", required = true) String fileName,
                                      @RequestParam("totalChunks") @ApiParam(value = "total number of chunks", required = true) Integer totalChunks,
                                      @RequestParam("chunkNumber") @ApiParam(value = "current chunk index", required = true) Integer chunkNumber,
                                      @RequestParam("currentChunkSize") @ApiParam(value = "size of the current chunk", required = true) Integer currentChunkSize,
                                      @RequestParam("bizId") @ApiParam(value = "business id", required = true) String bizId,
                                      @RequestParam("bizCode") @ApiParam(value = "business code", required = true) String bizCode) throws RedisConnectException {
    return fileService.uploadBigFileChunk(file, userId, fileMd5, fileName, totalChunks, chunkNumber, currentChunkSize, bizId, bizCode);
}
public RespMsgBean uploadBigFileChunk(MultipartFile file, Long userId, String fileMd5, String fileName, Integer chunks, Integer chunk, Integer chunkSize, String bizId, String bizCode) throws RedisConnectException {
    // Parameter checks, e.g. via a service-layer assertion helper:
    // ServiceAssert.isTrue(!file.isEmpty(), 0, "file must not be empty");
    // ServiceAssert.notNull(userId, 0, "userId must not be null");
    // ServiceAssert.isTrue(StringUtils.isNotBlank(fileMd5), 0, "fileMd5 must not be blank");
    // ServiceAssert.isTrue(!"undefined".equals(fileMd5), 0, "fileMd5 must not be 'undefined'");
    // ServiceAssert.isTrue(StringUtils.isNotBlank(fileName), 0, "fileName must not be blank");
    // ServiceAssert.isTrue(chunks != null && chunk != null && chunkSize != null, 0, "invalid chunk parameters");

    // Path inside FastDFS, without the group prefix
    String noGroupPath = "";
    logger.info("MD5 of the current file: {}", fileMd5);
    String chunkLockName = UpLoadConstant.chunkLock + fileMd5;

    // Whether this request actually holds the chunk lock
    boolean currOwner = false;
    Integer currentChunkInFront = 0;
    try {
        if (chunk == null) {
            chunk = 0;
        }
        if (chunks == null) {
            chunks = 1;
        }

        long lock = redisUtils.incrBy(chunkLockName, 1);
        if (lock > 1) {
            logger.info("Failed to acquire the chunk lock");
            return RespMsgBean.failure("Failed to acquire the chunk lock");
        }
        // This request now owns the chunk lock
        currOwner = true;

        // Redis records which chunk should be uploaded next (starting at 0)
        String currentChunkKey = UpLoadConstant.chunkCurr + fileMd5;
        String currentChunkInRedisStr = redisUtils.get(currentChunkKey);
        logger.info("Size of the current chunk: {}", chunkSize);
        if (StringUtils.isEmpty(currentChunkInRedisStr)) {
            logger.info("Cannot read chunkCurr for this file");
            return RespMsgBean.failure("Cannot read chunkCurr for this file");
        }
        Integer currentChunkInRedis = Convert.toInt(currentChunkInRedisStr);
        currentChunkInFront = chunk;

        if (currentChunkInFront < currentChunkInRedis) {
            logger.info("This chunk has already been uploaded");
            return RespMsgBean.failure("This chunk has already been uploaded", "001");
        } else if (currentChunkInFront > currentChunkInRedis) {
            logger.info("This chunk is not due yet; please retry later");
            return RespMsgBean.failure("This chunk is not due yet; please retry later");
        }

        logger.info("*********** uploading chunk {} **********", currentChunkInRedis);
        StorePath path = null;
        if (!file.isEmpty()) {
            try {
                if (currentChunkInFront == 0) {
                    redisUtils.set(currentChunkKey, Convert.toStr(currentChunkInRedis + 1));
                    logger.info("{}: chunk counter incremented in Redis", currentChunkInFront);
                    try {
                        // First chunk: create an appender file in FastDFS
                        path = appendFileStorageClient.uploadAppenderFile(UpLoadConstant.DEFAULT_GROUP, file.getInputStream(),
                                file.getSize(), FileUtil.extName(fileName));
                        // Record the size of the first chunk
                        redisUtils.set(UpLoadConstant.fastDfsSize + fileMd5, String.valueOf(chunkSize));
                        logger.info("{}: FastDFS updated", currentChunkInFront);
                        if (path == null) {
                            redisUtils.set(currentChunkKey, Convert.toStr(currentChunkInRedis));
                            logger.info("Failed to obtain the remote file path");
                            return RespMsgBean.failure("Failed to obtain the remote file path");
                        }
                    } catch (Exception e) {
                        // Roll back the chunk counter so the client can retry
                        redisUtils.set(currentChunkKey, Convert.toStr(currentChunkInRedis));
                        logger.error("Error uploading the first chunk", e);
                        return RespMsgBean.failure("Error uploading the file to the remote server");
                    }
                    noGroupPath = path.getPath();
                    redisUtils.set(UpLoadConstant.fastDfsPath + fileMd5, path.getPath());
                    logger.info("Upload result = {}", path);
                } else {
                    redisUtils.set(currentChunkKey, Convert.toStr(currentChunkInRedis + 1));
                    logger.info("{}: chunk counter incremented in Redis", currentChunkInFront);
                    noGroupPath = redisUtils.get(UpLoadConstant.fastDfsPath + fileMd5);
                    if (noGroupPath == null) {
                        logger.info("Cannot find the path of the partially uploaded file");
                        return RespMsgBean.failure("Cannot find the path of the partially uploaded file");
                    }
                    try {
                        String alreadySize = redisUtils.get(UpLoadConstant.fastDfsSize + fileMd5);
                        // With plain append mode, retries after a mid-transfer failure could
                        // append the same chunk twice. Modify mode writes at an explicit
                        // offset, so even if a chunk arrives more than once the assembled
                        // file stays correct.
                        assert alreadySize != null;
                        appendFileStorageClient.modifyFile(UpLoadConstant.DEFAULT_GROUP, noGroupPath, file.getInputStream(),
                                file.getSize(), Long.parseLong(alreadySize));
                        // Record the accumulated uploaded size
                        redisUtils.set(UpLoadConstant.fastDfsSize + fileMd5, String.valueOf(Long.parseLong(alreadySize) + chunkSize));
                        logger.info("{}: FastDFS updated", currentChunkInFront);
                    } catch (Exception e) {
                        redisUtils.set(currentChunkKey, Convert.toStr(currentChunkInRedis));
                        logger.error("Error updating the remote file", e);
                        return RespMsgBean.failure("Error updating the remote file");
                    }
                }
                if (currentChunkInFront + 1 == chunks) {
                    // Last chunk: clear the upload state and persist the file record
                    long size = Long.parseLong(Objects.requireNonNull(redisUtils.get(UpLoadConstant.fastDfsSize + fileMd5)));
                    // The completed file record could also be stored in MySQL
                    noGroupPath = redisUtils.get(UpLoadConstant.fastDfsPath + fileMd5);
                    String url = UpLoadConstant.DEFAULT_GROUP + "/" + noGroupPath;
                    FileDo fileDo = new FileDo(fileName, url, "", size, bizId, bizCode);
                    fileDo.setCreateUser(userId);
                    fileDo.setUpdateUser(userId);
                    // FileVo fileVo = saveFileDo4BigFile(fileDo, fileMd5);
                    redisUtils.rpush(UpLoadConstant.completedList, JSONUtil.toJsonStr(fileDo));
                    redisUtils.delete(UpLoadConstant.chunkCurr + fileMd5,
                            UpLoadConstant.fastDfsPath + fileMd5,
                            UpLoadConstant.currLocks + fileMd5,
                            UpLoadConstant.lockOwner + fileMd5,
                            UpLoadConstant.fastDfsSize + fileMd5);
                    logger.info("*********** upload finished **********");
                    return RespMsgBean.success(fileDo, "success");
                }
                if (currentChunkInFront + 1 > chunks) {
                    return RespMsgBean.failure("The file upload has already finished");
                }
            } catch (Exception e) {
                logger.error("Error uploading the file", e);
                return RespMsgBean.failure("Upload error: " + e.getMessage());
            }
        }
    } finally {
        // Only the lock owner may release the chunk lock
        if (currOwner) {
            redisUtils.set(chunkLockName, "0");
        }
    }
    logger.info("*********** chunk {} uploaded successfully **********", currentChunkInFront);
    return RespMsgBean.success("Chunk " + currentChunkInFront + " uploaded successfully");
}
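On the client side, the parameters sent to `/upload_big_file_chunk` follow from the file size and a fixed chunk size. A small sketch of that arithmetic (the `ChunkPlan` helper is illustrative; the chunk size itself is whatever the front end chooses):

```java
public class ChunkPlan {

    // Total number of chunks for a file of the given size (last chunk may be short)
    public static int totalChunks(long fileSize, int chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }

    // Size of chunk `chunkNumber` (0-based), i.e. the currentChunkSize parameter
    public static int currentChunkSize(long fileSize, int chunkSize, int chunkNumber) {
        long start = (long) chunkNumber * chunkSize;
        return (int) Math.min(chunkSize, fileSize - start);
    }
}
```

Chunk i covers bytes [i * chunkSize, i * chunkSize + currentChunkSize). Since the server advances a single chunkCurr counter and writes each chunk at the accumulated offset, chunks must be sent strictly in order, one at a time.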