If you have found this tutorial, please read the "before installation" notes first to make sure this is the result you expect. FastDFS can be used to implement a simple object-storage service.
1. Use three student ECS instances from Alibaba Cloud and Tencent Cloud to build a distributed cluster: one as the tracker node and two as storage nodes, both in the same group.
2. Storage nodes in the same group replicate to each other automatically, over the public network rather than the intranet, using the dual-IP feature of FastDFS V6.x and later (this has security risks and is meant only for learning, or for some cross-region cluster and hybrid-cloud deployments).
3. Use nginx as a reverse proxy so that files can be accessed in a browser via IP + file ID (oYYBAF6tCQ6ABANFAAAAHkJn4NE610.txt).
4. A Java client program that can be integrated into Spring Boot to test file upload and deletion.
The new features of v6.0 are described as follows (excerpted from the official FastDFS WeChat account):
Dual-IP support: one intranet IP plus one extranet IP, including NAT-style intranet/extranet dual IPs, solving cross-datacenter and hybrid-cloud deployment problems.
Dual-IP feature and rules:
Both the tracker and the storage servers support dual IPs: one intranet IP and one extranet IP.
With dual-IP support, FastDFS remains fully compatible with the earlier single-IP design and logic. For a storage server to use dual IPs, you must use the storage server id feature introduced in FastDFS V4.0, i.e. configure the dual IPs in storage_ids.conf.
Connection rules
When a storage connects to another storage server, it tries the intranet IP first and falls back to the extranet IP on failure.
When a client asks the tracker server for a storage server IP, the tracker applies the following rules:
- requests arriving from the extranet get the extranet IP;
- requests arriving from the intranet get the intranet IP.
Intranet IP ranges are those starting with 10., 192.168., and 172.[16-32). Note: [16-32) denotes the integers greater than or equal to 16 and less than 32, i.e. 172.16.x.x through 172.31.x.x.
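To make the range notation concrete, here is a small sketch (my own, not an official FastDFS tool) that classifies an address the way the rule above describes; plain POSIX shell, with no assumptions beyond the ranges just listed:
- # classify an IPv4 address as intranet/extranet per the FastDFS rule
- is_intranet() {
-   case "$1" in
-     10.*|192.168.*)                         echo intranet ;;
-     172.1[6-9].*|172.2[0-9].*|172.3[01].*)  echo intranet ;;
-     *)                                      echo extranet ;;
-   esac
- }
- is_intranet 172.31.255.255   # intranet (31 is inside [16-32))
- is_intranet 172.32.0.1       # extranet (32 is outside)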
Setting up this environment took me about four or five days. I am writing this down not only as a record, but also to build good installation and documentation habits. I went through almost every related blog post on the web; in the end the project's issues list revealed the root cause: the Docker image ships an old version (V5.11) that does not support dual IPs, so the storage registered its intranet address with the tracker. I wanted replication over the public network, but most tutorials build the cluster on local VMs in a single subnet where replication just works, whereas on Alibaba Cloud it never synchronized.
1. Pay attention to the software version and its features, especially when following blog posts, and check whether it actually gives the result you want.
2. When setting up the environment, prefer the official site or the GitHub wiki; when filling in config files, read the author's sample (XXX.conf.sample) first, which is more reliable than many tutorials:
https://github.com/happyfish100/fastdfs/issues
https://github.com/happyfish100/fastdfs/wiki
3. When something goes wrong, check the error messages in the logs first, then search the issues list to see whether others have hit the same problem; pay attention to the official replies, which can pinpoint the problem or give a solution fairly accurately.
Name | Version | Notes
--- | --- | ---
Alibaba Cloud ECS | Ubuntu 18.04.4 | VPC
Tencent Cloud ECS | CentOS 7.x | VPC
Note! Note! Note! Open the required ports in the Alibaba Cloud and Tencent Cloud security groups: at minimum 22122 (tracker), 23000 (storage), and 8888 (HTTP access via nginx)!
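A quick way to confirm the security groups are really open (my own habit, not from the official docs) is to probe the ports from a machine outside the cloud network once the services below are running; nc may need to be installed first:
- # replace with your public IPs
- nc -zv 123.XXX.XXX.XXX 22122   # tracker
- nc -zv 123.XXX.XXX.XXX 23000   # storage
- nc -zv 123.XXX.XXX.XXX 8888    # nginx HTTP access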
The GitHub wiki provides an installation tutorial; you can follow it or the steps below, which are essentially the same. The main difference is the dual-IP setup, which I will point out. The following uses Ubuntu as an example:
- # build tools such as gcc and make are preinstalled on ECS servers; install them yourself if missing
-
- # git, for cloning the fastdfs repositories
- apt-get install git
-
- # nginx dependencies
-
- # PCRE library for regular-expression support
- apt-get install libpcre3 libpcre3-dev
-
- # zlib, used by nginx to gzip-compress HTTP content
- apt-get install zlib1g-dev
-
- # OpenSSL, for serving HTTP over the more secure SSL/TLS
- apt-get install openssl libssl-dev
- # Directory layout
- # all source packages: /usr/local/src
- # data and logs: /home/dfs/
-
- # create the data directory
- mkdir /home/dfs
-
- # switch to the install directory and download the packages
- cd /usr/local/src
-
- # 1. install libfastcommon
- # clone the source repository
- git clone https://github.com/happyfish100/libfastcommon.git --depth 1
- cd libfastcommon/
- # compile and install
- ./make.sh && ./make.sh install
-
- # 2. install FastDFS
- # go back up one level
- cd ../
- git clone https://github.com/happyfish100/fastdfs.git --depth 1
- cd fastdfs/
- # compile and install
- ./make.sh && ./make.sh install
-
- # prepare the tracker & storage config files; client.conf is for testing
- cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
- cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
- cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
- cp /usr/local/src/fastdfs/conf/http.conf /etc/fdfs/
- cp /usr/local/src/fastdfs/conf/mime.types /etc/fdfs/
-
- # 3. install fastdfs-nginx-module
- # go back up one level
- cd ../
- git clone https://github.com/happyfish100/fastdfs-nginx-module.git --depth 1
- cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs
-
- # 4. install nginx
- # download the nginx tarball
- wget http://nginx.org/download/nginx-1.15.4.tar.gz
- tar -zxvf nginx-1.15.4.tar.gz # unpack
- cd nginx-1.15.4/
- # add the fastdfs-nginx-module module
- ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src/
- # compile and install
- make && make install
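As an optional sanity check (standard nginx usage, nothing FastDFS-specific), nginx -V prints the configure arguments, so you can confirm the module was compiled in:
- /usr/local/nginx/sbin/nginx -V
- # the output should contain: --add-module=/usr/local/src/fastdfs-nginx-module/src/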
Configure tracker.conf
- vim /etc/fdfs/tracker.conf
-
- # the items to modify are as follows
-
- # the server is behind NAT, so bind the intranet IP here; you can see it in the Alibaba Cloud console
- # you can check with ifconfig -a (the public IP does not appear on any interface)
- bind_addr = 172.xxx.xxx.xxx
-
- # tracker server port (default 22122, usually left unchanged)
- port=22122
-
- # root directory for logs and data
- base_path=/home/dfs
-
- # after setting use_storage_id = true, you must configure the dual IPs in storage_ids.conf
- # the original comment reads:
- # if use storage server ID instead of IP address
- # if you want to use dual IPs for storage server, you MUST set
- # this parameter to true, and configure the dual IPs in the file
- # configured by following item "storage_ids_filename", such as storage_ids.conf
- # default value is false
- # since V4.00
- use_storage_id = true
Configure storage_ids.conf
- # <id> <group_name> <ip_or_hostname[:port]>
- #
- # id is a natural number (1, 2, 3 etc.),
- # 6 bits of the id length is enough, such as 100001
- #
- # storage ip or hostname can be dual IPs seperated by comma,
- # one is an inner (intranet) IP and another is an outer (extranet) IP,
- # or two different types of inner (intranet) IPs
- # for example: 192.168.2.100,122.244.141.46
- # another eg.: 192.168.1.10,172.17.4.21
- #
- # the port is optional. if you run more than one storaged instances
- # in a server, you must specified the port to distinguish different instances.
- # one intranet IP and one public IP
- 100001 group1 172.XXX.XXX.XXX,123.XXX.XXX.XXX
- 100002 group1 172.XXX.XXX.XXX,123.XXX.XXX.XXX
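Once both storage nodes are running, you can confirm the tracker picked up these ids with the same fdfs_monitor tool used in the verification section below; the grep is just my shortcut:
- /usr/bin/fdfs_monitor /etc/fdfs/storage.conf | grep "id = "
- # expect lines containing id = 100001 and id = 100002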
Configure storage.conf
- vim /etc/fdfs/storage.conf
- # the items to modify are as follows
-
- # storage service port (default 23000, usually left unchanged)
- port=23000
-
- # root directory for data and log files
- base_path=/home/dfs
-
- # first storage path
- store_path0=/home/dfs
-
- # THE key setting; pay close attention to the format: intranet,extranet:port
- # tracker_server can ocur more than once for multi tracker servers.
- # the value format of tracker_server is "HOST:PORT",
- # the HOST can be hostname or ip address,
- # and the HOST can be dual IPs or hostnames seperated by comma,
- # the dual IPS must be an inner (intranet) IP and an outer (extranet) IP,
- # or two different types of inner (intranet) IPs.
- # for example: 192.168.2.100,122.244.141.46:22122
- # another eg.: 192.168.1.10,172.17.4.21:22122
- tracker_server = 192.168.2.100,122.244.141.46:22122
-
- # port for accessing files over HTTP (default 8888; change as needed and keep it consistent with nginx)
- http.server_port=8888
Configure mod_fastdfs.conf
- vim /etc/fdfs/mod_fastdfs.conf
-
- # the items to modify are as follows
- tracker_server=123.XXX.XXX.XXX:22122 # public IP
- url_have_group_name=true
- store_path0=/home/dfs
-
Configure nginx.conf
- vim /usr/local/nginx/conf/nginx.conf
- # add the following server block
- server {
- listen 8888; # must match http.server_port in storage.conf
- server_name localhost;
- location ~/group[0-9]/ {
- ngx_fastdfs_module;
- }
- error_page 500 502 503 504 /50x.html;
- location = /50x.html {
- root html;
- }
- }
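Before relying on the proxy, it is worth validating the edited file; this is standard nginx usage rather than anything from the original wiki:
- # check the config syntax; start or reload only if the test passes
- /usr/local/nginx/sbin/nginx -t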
If everything went smoothly, the installation is basically done. If you want to test right away, configure client.conf by following the wiki (see the sketch below); you can also skip this and test later with the Java code.
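A minimal sketch of such an early test, assuming the same public-IP placeholders used elsewhere in this post; fdfs_upload_file is one of the command-line tools installed by ./make.sh install, and the services from the next section must already be running:
- # the two client.conf items that matter
- # base_path=/home/dfs
- # tracker_server=123.XXX.XXX.XXX:22122
-
- # upload a throwaway file through the tracker; on success it prints a file ID
- echo hello > test.txt
- fdfs_upload_file /etc/fdfs/client.conf test.txt
- # e.g. group1/M00/00/00/xxxxxxxxxx.txt  (placeholder, yours will differ)
-
- # fetch it back through nginx using that ID
- curl http://123.XXX.XXX.XXX:8888/group1/M00/00/00/xxxxxxxxxx.txt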
I suggest copying the commands below into a text file, since you will be pasting them frequently.
- # tracker
- # start the tracker service
- /etc/init.d/fdfs_trackerd start
-
- # restart the tracker service
- /etc/init.d/fdfs_trackerd restart
-
- # stop the tracker service
- /etc/init.d/fdfs_trackerd stop
- chkconfig fdfs_trackerd on # start the tracker at boot (chkconfig is CentOS; use systemctl/update-rc.d on Ubuntu)
-
- # storage
- # start the storage service
- /etc/init.d/fdfs_storaged start
-
- # restart the storage service
- /etc/init.d/fdfs_storaged restart
-
- # stop the storage service
- /etc/init.d/fdfs_storaged stop
-
- # start the storage service at boot
- chkconfig fdfs_storaged on
-
- # nginx
- # start nginx
- /usr/local/nginx/sbin/nginx
-
- # reload nginx
- /usr/local/nginx/sbin/nginx -s reload
-
- # stop nginx
- /usr/local/nginx/sbin/nginx -s stop
-
- # check the cluster
- # shows how many servers there are, with details
- /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
Example fdfs_monitor output
A node can serve requests only when its status is ACTIVE; otherwise check tracker.log and storage.log.
When replication is working normally, pay attention to the following timestamps:
last_heart_beat_time = 2020-05-02 23:06:00
last_source_update = 2020-05-02 13:50:26
last_sync_update = 2020-05-02 13:50:25
last_synced_timestamp = 2020-05-02 13:50:24 (0s delay)
If the sync timestamps stay at 1970-xxx or show "never synced", replication over the public network has failed; check the logs.
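With base_path=/home/dfs as configured above, FastDFS writes its logs to /home/dfs/logs/; looking there is my usual first step, not an official procedure. The full fdfs_monitor output from my cluster follows.
- tail -n 100 /home/dfs/logs/trackerd.log
- tail -n 100 /home/dfs/logs/storaged.log
- grep -i error /home/dfs/logs/storaged.log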
- [2020-05-02 23:06:09] DEBUG - base_path=/home/dfs, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=1, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
-
- server_count=1, server_index=0
-
- tracker server is xxx.xxx.xxx.xxx:22122
-
- group count: 1
-
- Group 1:
- group name = group1
- disk total space = 40,188 MB
- disk free space = 33,438 MB
- trunk free space = 0 MB
- storage server count = 2
- active server count = 2
- storage server port = 23000
- storage HTTP port = 8888
- store path count = 1
- subdir count per path = 256
- current write server index = 1
- current trunk file id = 0
-
- Storage 1:
- id = 100001
- ip_addr = xxx.xxx.xxx.xxx ACTIVE
- http domain =
- version = 6.06
- join time = 2020-05-02 13:28:17
- up time = 2020-05-02 13:29:51
- total storage = 40,188 MB
- free storage = 33,438 MB
- upload priority = 10
- store_path_count = 1
- subdir_count_per_path = 256
- storage_port = 23000
- storage_http_port = 8888
- current_write_path = 0
- source storage id =
- if_trunk_server = 0
- connection.alloc_count = 256
- connection.current_count = 1
- connection.max_count = 2
- total_upload_count = 3
- success_upload_count = 3
- total_append_count = 0
- success_append_count = 0
- total_modify_count = 0
- success_modify_count = 0
- total_truncate_count = 0
- success_truncate_count = 0
- total_set_meta_count = 2
- success_set_meta_count = 2
- total_delete_count = 2
- success_delete_count = 2
- total_download_count = 0
- success_download_count = 0
- total_get_meta_count = 2
- success_get_meta_count = 2
- total_create_link_count = 0
- success_create_link_count = 0
- total_delete_link_count = 0
- success_delete_link_count = 0
- total_upload_bytes = 90
- success_upload_bytes = 90
- total_append_bytes = 0
- success_append_bytes = 0
- total_modify_bytes = 0
- success_modify_bytes = 0
- total_download_bytes = 0
- success_download_bytes = 0
- total_sync_in_bytes = 60
- success_sync_in_bytes = 60
- total_sync_out_bytes = 0
- success_sync_out_bytes = 0
- total_file_open_count = 7
- success_file_open_count = 7
- total_file_read_count = 2
- success_file_read_count = 2
- total_file_write_count = 5
- success_file_write_count = 5
- last_heart_beat_time = 2020-05-02 23:06:00
- last_source_update = 2020-05-02 13:50:26
- last_sync_update = 2020-05-02 13:50:25
- last_synced_timestamp = 2020-05-02 13:50:24 (0s delay)
- Storage 2:
- id = 100002
- ip_addr = xxx.xxx.xxx.xxx ACTIVE
- http domain =
- version = 6.06
- join time = 2020-05-02 13:29:34
- up time = 2020-05-02 13:44:34
- total storage = 50,267 MB
- free storage = 38,433 MB
- upload priority = 10
- store_path_count = 1
- subdir_count_per_path = 256
- storage_port = 23000
- storage_http_port = 8888
- current_write_path = 0
- source storage id = 100001
- if_trunk_server = 0
- connection.alloc_count = 256
- connection.current_count = 1
- connection.max_count = 2
- total_upload_count = 2
- success_upload_count = 2
- total_append_count = 0
- success_append_count = 0
- total_modify_count = 0
- success_modify_count = 0
- total_truncate_count = 0
- success_truncate_count = 0
- total_set_meta_count = 0
- success_set_meta_count = 0
- total_delete_count = 2
- success_delete_count = 2
- total_download_count = 0
- success_download_count = 0
- total_get_meta_count = 0
- success_get_meta_count = 0
- total_create_link_count = 0
- success_create_link_count = 0
- total_delete_link_count = 0
- success_delete_link_count = 0
- total_upload_bytes = 60
- success_upload_bytes = 60
- total_append_bytes = 0
- success_append_bytes = 0
- total_modify_bytes = 0
- success_modify_bytes = 0
- total_download_bytes = 0
- success_download_bytes = 0
- total_sync_in_bytes = 154
- success_sync_in_bytes = 154
- total_sync_out_bytes = 0
- success_sync_out_bytes = 0
- total_file_open_count = 7
- success_file_open_count = 7
- total_file_read_count = 0
- success_file_read_count = 0
- total_file_write_count = 7
- success_file_write_count = 7
- last_heart_beat_time = 2020-05-02 23:06:04
- last_source_update = 2020-05-02 13:50:24
- last_sync_update = 2020-05-02 13:50:26
- last_synced_timestamp = 2020-05-02 13:50:26 (0s delay)
For the Java client I used an implementation from GitHub that integrates nicely with Spring Boot (the Maven artifact is com.github.tobato:fastdfs-client); after adding the dependency, just follow its documentation. The project link:
https://github.com/tobato/FastDFS_Client
See the test code in that repository for details. For convenience I copied some of it into my own project as unit tests. The code below cannot be run as-is and is only illustrative; write something similar based on the project's test code. Note that, per the project's README, the Spring Boot application class needs @Import(FdfsClientConfig.class) so the client beans get registered. If upload and deletion succeed, the setup is basically fine.
- import com.github.tobato.fastdfs.domain.fdfs.MetaData;
- import com.github.tobato.fastdfs.domain.fdfs.StorePath;
- import com.github.tobato.fastdfs.service.FastFileStorageClient;
- import com.wechatapp.promise.fm.fastdfs.FastdfsTestApplication;
- import com.wechatapp.promise.fm.fastdfs.service.domain.RandomTextFile;
- import lombok.extern.slf4j.Slf4j;
- import org.junit.Test;
- import org.junit.runner.RunWith;
- import org.springframework.beans.factory.annotation.Autowired;
- import org.springframework.boot.test.context.SpringBootTest;
- import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
-
- import java.util.HashSet;
- import java.util.Set;
-
- import static org.junit.Assert.assertEquals;
- import static org.junit.Assert.assertNotNull;
-
- // upload and then delete a random txt file
- @RunWith(SpringJUnit4ClassRunner.class)
- @SpringBootTest(classes = FastdfsTestApplication.class)
- @Slf4j
- public class FastFileStorageClientTest {
-
- @Autowired
- protected FastFileStorageClient storageClient;
- /**
- * Upload a file and set its MetaData. After a successful upload the storage
- * holds two files: the source file and one recording the MetaData.
- */
- @Test
- public void testUploadFileAndMetaData() {
-
- log.info("##################上传文件..##");
- RandomTextFile file = new RandomTextFile();
- // Metadata
- Set<MetaData> metaDataSet = createMetaData();
- // upload the file together with its metadata
- StorePath path = storageClient.uploadFile(file.getInputStream(), file.getFileSize(), file.getFileExtName(),
- metaDataSet);
- assertNotNull(path);
- log.info("##################上传文件路径{}", path);
-
- // verify the MetaData can be fetched back
- log.info("################## fetching metadata ##");
- Set<MetaData> fetchMetaData = storageClient.getMetadata(path.getGroup(), path.getPath());
- assertEquals(metaDataSet, fetchMetaData);
-
- log.info("##################删除文件..##");
- storageClient.deleteFile(path.getGroup(), path.getPath());
- }
-
- /**
- * Upload should also succeed without MetaData.
- */
- @Test
- public void testUploadFileWithoutMetaData() {
-
- log.info("##################上传文件..##");
- RandomTextFile file = new RandomTextFile();
- // upload the file without metadata
- StorePath path = storageClient.uploadFile(file.getInputStream(), file.getFileSize(), file.getFileExtName(),
- null);
-
- assertNotNull(path);
- log.info("##################path:{}",path.getFullPath());
-
- log.info("##################删除文件..##");
- storageClient.deleteFile(path.getFullPath());
- }
-
- private Set<MetaData> createMetaData() {
- Set<MetaData> metaDataSet = new HashSet<>();
- metaDataSet.add(new MetaData("Author", "lhh"));
- metaDataSet.add(new MetaData("CreateDate", "2020-04-28"));
- return metaDataSet;
- }
- }
Configuration file (application.yml)
- # ===================================================================
- # distributed file system (FastDFS) configuration
- # ===================================================================
- fdfs:
- so-timeout: 1501
- connect-timeout: 1601
- thumb-image:
- width: 150
- height: 150
- tracker-list:
- - 123.xxx.xxx.xxx:22122
- # you can list more than one
- # - 123.xxx.xxx.xxx:22122
- # - 123.xxx.xxx.xxx:22122
-
- ---
- spring:
- profiles: customized_pool
- fdfs:
- so-timeout: 1501
- connect-timeout: 601
- thumb-image:
- width: 150
- height: 150
- tracker-list:
- - 123.xxx.xxx.xxx:22122
- # you can list more than one
- # - 123.xxx.xxx.xxx:22122
- # - 123.xxx.xxx.xxx:22122
-
- pool:
- # maximum number of objects that can be borrowed from the pool
- max-total: 153
- max-wait-millis: 102
- jmx-name-base: 1
- jmx-name-prefix: 1
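For reference (my reading of the library, not from the original post): the underlying connection pool is Apache Commons Pool 2, so max-total caps the number of pooled connections, max-wait-millis is how long a borrow blocks before failing, and the jmx-name values only matter when several pools are registered under JMX.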