
FastDFS V6.06: Installing and Configuring a Cluster with Dual IPs on Alibaba Cloud (Pitfalls)


 

Before You Install

If you have found this tutorial, please read this "Before You Install" section first to make sure the result is what you expect. FastDFS can be used to build a simple object-storage service.

  • What you will end up with

1. Use three student-tier ECS hosts on Alibaba Cloud and Tencent Cloud to build a distributed cluster: one machine as the tracker node and two as storage nodes, both in the same group.

2. Storage servers in the same group replicate to each other automatically, over the public network rather than the private network, using the dual-IP feature available from FastDFS V6.x onwards. (This has security implications and is only intended for learning, or for cross-datacenter and hybrid-cloud deployments.)

3. Use nginx as a reverse proxy so that a resource file can be fetched in the browser by IP plus file ID (e.g. oYYBAF6tCQ6ABANFAAAAHkJn4NE610.txt); see the example request after this list.

4. A Java client program that can be integrated into Spring Boot to test file upload and deletion.
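
As a concrete sketch of item 3 (my own example, not from the original article): once nginx is running on a storage node, a file is fetched by its full file ID, which the upload call returns. The group1/M00/00/00/ prefix below is only illustrative; use the path your own upload returned.

    # substitute your storage node's public IP and the file ID returned by your upload
    curl http://123.xxx.xxx.xxx:8888/group1/M00/00/00/oYYBAF6tCQ6ABANFAAAAHkJn4NE610.txt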

  • The dual-IP feature

The new feature in v6.0 is described as follows (excerpted from the official FastDFS WeChat account):

Dual IPs are supported: one intranet IP and one extranet IP, including NAT-style intranet/extranet pairs, which solves cross-datacenter and hybrid-cloud deployment problems.

Dual-IP feature and rules:

Both tracker and storage servers support dual IPs: one intranet IP and one extranet IP.

With dual-IP support, FastDFS remains fully compatible with the previous single-IP design and logic. For a storage server to use dual IPs, you must use the storage server id feature introduced in FastDFS V4.0, i.e. configure the IP pair in storage_ids.conf.

Connection rules

When a storage server connects to another storage server, it tries the intranet IP first and falls back to the extranet IP if that fails.

When a client asks the tracker server for a storage server IP, the tracker applies the following rules:

- requests coming from the extranet get the extranet IP;

- requests coming from the intranet get the intranet IP.

Intranet IP ranges are: addresses starting with 10., 192.168., and 172.[16-32). Note: [16-32) is range notation for integers greater than or equal to 16 and less than 32, so 172.20.x.x counts as intranet while 172.40.x.x does not.

  • A digression

Setting up this environment took about four or five days in total. I am writing this down not only as a record, but also to build better installation and documentation habits. I went through practically every related blog post on the web and finally found the root cause through the project's issues list: the version in the docker image I had used was too old (V5.11) and does not support dual IPs, so the address storage registered with the tracker was the intranet address. I wanted synchronization to go over the public network, but most cluster tutorials are set up on local virtual machines in the same subnet, so synchronization works for them out of the box, while on Alibaba Cloud it never synced.

  • Lessons learned the hard way

1. Pay attention to the software version and its features, especially when installing by following blog posts, and check whether it actually produces the result you want.

2. When setting up an environment, prefer the official site or the GitHub wiki, and when filling in configuration files, read the sample files the author ships (XXX.conf.sample) first; they are more reliable than most tutorials.

https://github.com/happyfish100/fastdfs/issues

https://github.com/happyfish100/fastdfs/wiki

3. When something goes wrong, check the error messages in the logs first, then look through the issues list to see whether others have hit the same problem; the official replies usually pinpoint the cause or give a solution.

Environment

Name                 Version          Notes
Alibaba Cloud ECS    Ubuntu 18.04.4   VPC
Tencent Cloud ECS    CentOS 7.x       VPC

Note! Note! Note! The security groups on Alibaba Cloud and Tencent Cloud must open the relevant ports (22122 for the tracker, 23000 for storage, 8888 for nginx HTTP access)!

Installation

The GitHub wiki provides an installation guide; you can follow it or the steps below, which are essentially the same. The main difference here is the dual-IP setup, which I will point out. The following uses Ubuntu as the example:

  • Build environment

    # gcc, make and other build tools are preinstalled on the ECS images; install them yourself if missing
    # git, for cloning the FastDFS repositories
    apt-get install git
    # nginx dependencies
    # PCRE library for regular expression support
    apt-get install libpcre3 libpcre3-dev
    # zlib for gzip compression of HTTP responses
    apt-get install zlib1g-dev
    # OpenSSL for serving HTTP over SSL
    apt-get install openssl libssl-dev

  • Install FastDFS

    # Directory layout
    # all source packages: /usr/local/src
    # data and logs: /home/dfs/
    # create the data directory
    mkdir /home/dfs
    # switch to the source directory to download the packages
    cd /usr/local/src
    # 1. install libfastcommon
    # clone the source repository
    git clone https://github.com/happyfish100/libfastcommon.git --depth 1
    cd libfastcommon/
    # build and install
    ./make.sh && ./make.sh install
    # 2. install FastDFS
    # go back up one level
    cd ../
    git clone https://github.com/happyfish100/fastdfs.git --depth 1
    cd fastdfs/
    # build and install
    ./make.sh && ./make.sh install
    # prepare the tracker & storage config files; client.conf is used for testing
    cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
    cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
    cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
    cp /usr/local/src/fastdfs/conf/http.conf /etc/fdfs/
    cp /usr/local/src/fastdfs/conf/mime.types /etc/fdfs/
    # 3. install fastdfs-nginx-module
    # go back up one level
    cd ../
    git clone https://github.com/happyfish100/fastdfs-nginx-module.git --depth 1
    cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs
    # 4. install nginx
    # download the nginx tarball
    wget http://nginx.org/download/nginx-1.15.4.tar.gz
    tar -zxvf nginx-1.15.4.tar.gz  # unpack
    cd nginx-1.15.4/
    # add the fastdfs-nginx-module module
    ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src/
    # build and install
    make && make install

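If everything built cleanly, a quick sanity check can be run at this point (my own addition, not part of the original steps), assuming the default install prefixes used by make.sh and the nginx configure line above:

    # the FastDFS daemons and command-line tools land under /usr/bin
    ls /usr/bin/fdfs_*
    # nginx -V prints the configure arguments, which should include the fastdfs-nginx-module path
    /usr/local/nginx/sbin/nginx -V
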
  • Configure the tracker

    vim /etc/fdfs/tracker.conf
    # the settings to change are listed below
    # note: the server sits behind NAT, so bind the intranet IP, which you can see in the Alibaba Cloud console
    # you can check with ifconfig -a; the public IP will not appear there
    bind_addr = 172.xxx.xxx.xxx
    # tracker server port (default 22122, usually left unchanged)
    port=22122
    # root directory for data and logs
    base_path=/home/dfs
    # after setting use_storage_id to true you must configure the dual IPs in storage_ids.conf
    # original comment from the sample file:
    # if use storage server ID instead of IP address
    # if you want to use dual IPs for storage server, you MUST set
    # this parameter to true, and configure the dual IPs in the file
    # configured by following item "storage_ids_filename", such as storage_ids.conf
    # default value is false
    # since V4.00
    use_storage_id = true

Configure storage_ids.conf (the file referenced by storage_ids_filename in tracker.conf)

    # <id> <group_name> <ip_or_hostname[:port]>
    #
    # id is a natural number (1, 2, 3 etc.),
    # 6 bits of the id length is enough, such as 100001
    #
    # storage ip or hostname can be dual IPs seperated by comma,
    # one is an inner (intranet) IP and another is an outer (extranet) IP,
    # or two different types of inner (intranet) IPs
    # for example: 192.168.2.100,122.244.141.46
    # another eg.: 192.168.1.10,172.17.4.21
    #
    # the port is optional. if you run more than one storaged instances
    # in a server, you must specified the port to distinguish different instances.
    # one intranet IP and one public IP per storage server
    100001 group1 172.XXX.XXX.XXX,123.XXX.XXX.XXX
    100002 group1 172.XXX.XXX.XXX,123.XXX.XXX.XXX

  • Configure storage

    vim /etc/fdfs/storage.conf
    # the settings to change are listed below
    # storage service port (default 23000, usually left unchanged)
    port=23000
    # root directory for data and log files
    base_path=/home/dfs
    # first store path
    store_path0=/home/dfs
    # THIS is the key setting: note the format exactly — intranet IP,extranet IP:port
    # (the value below is the sample; put your own tracker server's IP pair here)
    # original comments from the sample file:
    # tracker_server can ocur more than once for multi tracker servers.
    # the value format of tracker_server is "HOST:PORT",
    # the HOST can be hostname or ip address,
    # and the HOST can be dual IPs or hostnames seperated by comma,
    # the dual IPS must be an inner (intranet) IP and an outer (extranet) IP,
    # or two different types of inner (intranet) IPs.
    # for example: 192.168.2.100,122.244.141.46:22122
    # another eg.: 192.168.1.10,172.17.4.21:22122
    tracker_server = 192.168.2.100,122.244.141.46:22122
    # HTTP port for file access (default 8888; change as needed, must match nginx)
    http.server_port=8888

  • Configure nginx

    vim /etc/fdfs/mod_fastdfs.conf
    # the settings to change are listed below
    tracker_server=123.XXX.XXX.XXX:22122  # public IP
    url_have_group_name=true
    store_path0=/home/dfs
    # edit nginx.conf
    vim /usr/local/nginx/conf/nginx.conf
    # add the following server block
    server {
        listen       8888;  # must be the same as http.server_port in storage.conf
        server_name  localhost;
        location ~/group[0-9]/ {
            ngx_fastdfs_module;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

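Before moving on it is worth validating and starting nginx; a small sketch of my own, using only the binary built above:

    # validate the configuration that now includes ngx_fastdfs_module
    /usr/local/nginx/sbin/nginx -t
    # start nginx (use -s reload instead if it is already running)
    /usr/local/nginx/sbin/nginx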

If all went well, the installation is basically complete. If you want to test right away, configure client.conf following the wiki (a minimal sketch follows below); you can also skip it and test later with the Java code.
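
Here is a minimal sketch of client.conf plus a command-line upload test, assuming the tracker's public-IP placeholder and the /home/dfs base path used throughout this article (my addition; the wiki has the authoritative example):

    # /etc/fdfs/client.conf — only the settings that matter here
    base_path=/home/dfs
    tracker_server=123.xxx.xxx.xxx:22122

    # upload any local file through the tracker; the command prints the file ID on success
    echo "hello fastdfs" > /tmp/hello.txt
    fdfs_upload_file /etc/fdfs/client.conf /tmp/hello.txt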

Common FastDFS commands

I suggest copying these into a text file; you will be pasting them often.

    # tracker
    # start the tracker service
    /etc/init.d/fdfs_trackerd start
    # restart the tracker service
    /etc/init.d/fdfs_trackerd restart
    # stop the tracker service
    /etc/init.d/fdfs_trackerd stop
    # start the tracker service on boot
    chkconfig fdfs_trackerd on
    # storage
    # start the storage service
    /etc/init.d/fdfs_storaged start
    # restart the storage service
    /etc/init.d/fdfs_storaged restart
    # stop the storage service
    /etc/init.d/fdfs_storaged stop
    # start the storage service on boot
    chkconfig fdfs_storaged on
    # nginx
    # start nginx
    /usr/local/nginx/sbin/nginx
    # reload nginx
    /usr/local/nginx/sbin/nginx -s reload
    # stop nginx
    /usr/local/nginx/sbin/nginx -s stop
    # check the cluster
    # shows how many servers there are and their details
    /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
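
For the cluster layout in this article (one tracker node, two storage nodes), a sketch of the order I would run the commands above in; nothing here is new, it only sequences the commands already listed:

    # on the tracker node (Alibaba Cloud ECS), after tracker.conf and storage_ids.conf are in place
    /etc/init.d/fdfs_trackerd start
    # on each storage node
    /etc/init.d/fdfs_storaged start
    /usr/local/nginx/sbin/nginx
    # then, from any node, confirm both storage servers show up as ACTIVE
    /usr/bin/fdfs_monitor /etc/fdfs/storage.conf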

Example of monitoring the cluster

A node can serve requests only when its status is ACTIVE; otherwise check the tracker and storage logs.

When newly added files are synchronizing normally, pay attention to the following timestamps:

        last_heart_beat_time = 2020-05-02 23:06:00
        last_source_update = 2020-05-02 13:50:26
        last_sync_update = 2020-05-02 13:50:25
        last_synced_timestamp = 2020-05-02 13:50:24 (0s delay)

If the sync timestamp stays at 1970-xxx or shows never synced, synchronization over the public network has failed; check the relevant logs (see the commands below).
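
Where to look for those logs, assuming base_path=/home/dfs as configured above (trackerd.log and storaged.log are the default file names under base_path/logs):

    # on the tracker node
    tail -n 100 /home/dfs/logs/trackerd.log
    # on each storage node
    tail -n 100 /home/dfs/logs/storaged.log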

    [2020-05-02 23:06:09] DEBUG - base_path=/home/dfs, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=1, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
    server_count=1, server_index=0
    tracker server is xxx.xxx.xxx.xxx:22122
    group count: 1
    Group 1:
    group name = group1
    disk total space = 40,188 MB
    disk free space = 33,438 MB
    trunk free space = 0 MB
    storage server count = 2
    active server count = 2
    storage server port = 23000
    storage HTTP port = 8888
    store path count = 1
    subdir count per path = 256
    current write server index = 1
    current trunk file id = 0
        Storage 1:
            id = 100001
            ip_addr = xxx.xxx.xxx.xxx ACTIVE
            http domain =
            version = 6.06
            join time = 2020-05-02 13:28:17
            up time = 2020-05-02 13:29:51
            total storage = 40,188 MB
            free storage = 33,438 MB
            upload priority = 10
            store_path_count = 1
            subdir_count_per_path = 256
            storage_port = 23000
            storage_http_port = 8888
            current_write_path = 0
            source storage id =
            if_trunk_server = 0
            connection.alloc_count = 256
            connection.current_count = 1
            connection.max_count = 2
            total_upload_count = 3
            success_upload_count = 3
            total_append_count = 0
            success_append_count = 0
            total_modify_count = 0
            success_modify_count = 0
            total_truncate_count = 0
            success_truncate_count = 0
            total_set_meta_count = 2
            success_set_meta_count = 2
            total_delete_count = 2
            success_delete_count = 2
            total_download_count = 0
            success_download_count = 0
            total_get_meta_count = 2
            success_get_meta_count = 2
            total_create_link_count = 0
            success_create_link_count = 0
            total_delete_link_count = 0
            success_delete_link_count = 0
            total_upload_bytes = 90
            success_upload_bytes = 90
            total_append_bytes = 0
            success_append_bytes = 0
            total_modify_bytes = 0
            success_modify_bytes = 0
            stotal_download_bytes = 0
            success_download_bytes = 0
            total_sync_in_bytes = 60
            success_sync_in_bytes = 60
            total_sync_out_bytes = 0
            success_sync_out_bytes = 0
            total_file_open_count = 7
            success_file_open_count = 7
            total_file_read_count = 2
            success_file_read_count = 2
            total_file_write_count = 5
            success_file_write_count = 5
            last_heart_beat_time = 2020-05-02 23:06:00
            last_source_update = 2020-05-02 13:50:26
            last_sync_update = 2020-05-02 13:50:25
            last_synced_timestamp = 2020-05-02 13:50:24 (0s delay)
        Storage 2:
            id = 100002
            ip_addr = xxx.xxx.xxx.xxx ACTIVE
            http domain =
            version = 6.06
            join time = 2020-05-02 13:29:34
            up time = 2020-05-02 13:44:34
            total storage = 50,267 MB
            free storage = 38,433 MB
            upload priority = 10
            store_path_count = 1
            subdir_count_per_path = 256
            storage_port = 23000
            storage_http_port = 8888
            current_write_path = 0
            source storage id = 100001
            if_trunk_server = 0
            connection.alloc_count = 256
            connection.current_count = 1
            connection.max_count = 2
            total_upload_count = 2
            success_upload_count = 2
            total_append_count = 0
            success_append_count = 0
            total_modify_count = 0
            success_modify_count = 0
            total_truncate_count = 0
            success_truncate_count = 0
            total_set_meta_count = 0
            success_set_meta_count = 0
            total_delete_count = 2
            success_delete_count = 2
            total_download_count = 0
            success_download_count = 0
            total_get_meta_count = 0
            success_get_meta_count = 0
            total_create_link_count = 0
            success_create_link_count = 0
            total_delete_link_count = 0
            success_delete_link_count = 0
            total_upload_bytes = 60
            success_upload_bytes = 60
            total_append_bytes = 0
            success_append_bytes = 0
            total_modify_bytes = 0
            success_modify_bytes = 0
            stotal_download_bytes = 0
            success_download_bytes = 0
            total_sync_in_bytes = 154
            success_sync_in_bytes = 154
            total_sync_out_bytes = 0
            success_sync_out_bytes = 0
            total_file_open_count = 7
            success_file_open_count = 7
            total_file_read_count = 0
            success_file_read_count = 0
            total_file_write_count = 7
            success_file_write_count = 7
            last_heart_beat_time = 2020-05-02 23:06:04
            last_source_update = 2020-05-02 13:50:24
            last_sync_update = 2020-05-02 13:50:26
            last_synced_timestamp = 2020-05-02 13:50:26 (0s delay)

 

Java client

For the Java client I used an implementation from GitHub that integrates nicely with Spring Boot (the Maven artifact is com.github.tobato:fastdfs-client); after adding the dependency, just follow its documentation for usage. The project link is:

https://github.com/tobato/FastDFS_Client

For specifics, refer to the test code inside that project. For convenience I copied some of it into my own project as unit tests. The code below is for illustration only and will not run as-is; write something similar based on the project's test code, and if upload and delete succeed, everything is basically working.

    import com.github.tobato.fastdfs.domain.fdfs.MetaData;
    import com.github.tobato.fastdfs.domain.fdfs.StorePath;
    import com.github.tobato.fastdfs.service.FastFileStorageClient;
    import com.wechatapp.promise.fm.fastdfs.FastdfsTestApplication;
    import com.wechatapp.promise.fm.fastdfs.service.domain.RandomTextFile;
    import lombok.extern.slf4j.Slf4j;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    import java.util.HashSet;
    import java.util.Set;

    import static org.junit.Assert.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertNotNull;

    // upload and then delete a randomly generated txt file
    @RunWith(SpringJUnit4ClassRunner.class)
    @SpringBootTest(classes = FastdfsTestApplication.class)
    @Slf4j
    public class FastFileStorageClientTest {

        @Autowired
        protected FastFileStorageClient storageClient;

        /**
         * Upload a file together with MetaData; after a successful upload the storage
         * node holds two files: the source file and one recording the MetaData.
         */
        @Test
        public void testUploadFileAndMetaData() {
            log.info("##################uploading file..##");
            RandomTextFile file = new RandomTextFile();
            // metadata
            Set<MetaData> metaDataSet = createMetaData();
            // upload the file and its metadata
            StorePath path = storageClient.uploadFile(file.getInputStream(), file.getFileSize(), file.getFileExtName(),
                    metaDataSet);
            assertNotNull(path);
            log.info("##################uploaded file path {}", path);
            // verify that the metadata can be fetched back
            log.info("##################fetching metadata##");
            Set<MetaData> fetchMetaData = storageClient.getMetadata(path.getGroup(), path.getPath());
            assertEquals(fetchMetaData, metaDataSet);
            log.info("##################deleting file..##");
            storageClient.deleteFile(path.getGroup(), path.getPath());
        }

        /**
         * Uploading without MetaData should also succeed.
         */
        @Test
        public void testUploadFileWithoutMetaData() {
            log.info("##################uploading file..##");
            RandomTextFile file = new RandomTextFile();
            // upload the file without metadata
            StorePath path = storageClient.uploadFile(file.getInputStream(), file.getFileSize(), file.getFileExtName(),
                    null);
            assertNotNull(path);
            log.info("##################path:{}", path.getFullPath());
            log.info("##################deleting file..##");
            storageClient.deleteFile(path.getFullPath());
        }

        private Set<MetaData> createMetaData() {
            Set<MetaData> metaDataSet = new HashSet<>();
            metaDataSet.add(new MetaData("Author", "lhh"));
            metaDataSet.add(new MetaData("CreateDate", "2020-04-28"));
            return metaDataSet;
        }
    }

Configuration file

    # ===================================================================
    # FastDFS distributed file system configuration
    # ===================================================================
    fdfs:
      so-timeout: 1501
      connect-timeout: 1601
      thumb-image:
        width: 150
        height: 150
      tracker-list:
        - 123.xxx.xxx.xxx:22122
        # more than one tracker can be listed
        # - 123.xxx.xxx.xxx:22122
        # - 123.xxx.xxx.xxx:22122
    ---
    spring:
      profiles: customized_pool
    fdfs:
      so-timeout: 1501
      connect-timeout: 601
      thumb-image:
        width: 150
        height: 150
      tracker-list:
        - 123.xxx.xxx.xxx:22122
        # more than one tracker can be listed
        # - 123.xxx.xxx.xxx:22122
        # - 123.xxx.xxx.xxx:22122
      pool:
        # maximum number of objects that can be borrowed from the pool
        max-total: 153
        max-wait-millis: 102
        jmx-name-base: 1
        jmx-name-prefix: 1

 
