
Ceph Learning Notes 1: Multi-Node Deployment of the Mimic Release


Special notes:

  1. This procedure also works for a single-node deployment with only one Monitor (it simply becomes a single point of failure); the minimum requirement is two partitions for creating 2 OSDs (because the default minimum replica count is 2). If you do not need CephFS, the MDS service can be skipped; if you do not use object storage, the RGW service can be skipped.
  2. The Manager service was introduced in Ceph 11.x (Kraken) as an optional component and has been mandatory since 12.x (Luminous).

System Environment

  • Host DNS names and IP addresses of the 3 nodes (the hostnames match the DNS names):
  1. $ cat /etc/hosts
  2. ...
  3. 172.29.101.166 osdev01
  4. 172.29.101.167 osdev02
  5. 172.29.101.168 osdev03
  6. ...
  • Kernel and distribution versions:
  1. $ uname -r
  2. 3.10.0-862.11.6.el7.x86_64
  3. $ cat /etc/redhat-release
  4. CentOS Linux release 7.5.1804 (Core)
  • All 3 nodes use sdb as the OSD disk; clear any existing partition information with dd (this destroys data on the disk, so be careful). An alternative wipe is sketched after this listing:
  1. $ lsblk
  2. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
  3. sda 8:0 0 222.6G 0 disk
  4. ├─sda1 8:1 0 1G 0 part /boot
  5. └─sda2 8:2 0 221.6G 0 part /
  6. sdb 8:16 0 7.3T 0 disk
  7. $ dd if=/dev/zero of=/dev/sdb bs=512 count=1024
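  • Alternatively (a sketch, not part of the original steps; it assumes the util-linux wipefs tool is installed), the on-disk signatures can be wiped instead of zeroing the first sectors:
  1. # remove all filesystem, LVM and partition-table signatures from the disk
  2. $ sudo wipefs --all /dev/sdb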

System Configuration

Yum Configuration

  • Install the EPEL repository:
$ yum install -y epel-release
  • Install the yum priorities plugin:
$ yum install -y yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms

System Settings

  • Install and enable the NTP service (a quick sync check is sketched after these commands):
  1. $ yum install -y ntp ntpdate ntp-doc
  2. $ systemctl enable ntpd.service && systemctl start ntpd.service && systemctl status ntpd.service
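  • To verify that each node is actually synchronizing its clock (a minimal check, not part of the original notes):
  1. $ ntpq -p
  2. # the peer marked with '*' is the currently selected time source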
  • Add the osdev user and grant it passwordless sudo (you can also simply use root; this step is only a safety measure):
  1. $ useradd -d /home/osdev -m osdev
  2. $ passwd osdev
  3. $ echo "osdev ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/osdev
  4. $ chmod 0440 /etc/sudoers.d/osdev
  • Disable the firewall:
$ systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
  • Disable SELinux:
  1. $ sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config && cat /etc/selinux/config
  2. # setenforce 0 && sestatus
  3. $ reboot
  4. $ sestatus
  5. SELinux status: disabled

SSH Configuration

  • Install the SSH server package:
$ yum install -y openssh-server
  • Set up passwordless SSH login:
  1. $ ssh-keygen
  2. $ ssh-copy-id osdev@osdev01
  3. $ ssh-copy-id osdev@osdev02
  4. $ ssh-copy-id osdev@osdev03
  • Configure the default SSH user, or pass --username to ceph-deploy when running it (note that this configuration also makes Kolla-Ansible use this user by default, which can lead to permission errors; do the configuration below under the osdev user and run Kolla-Ansible as root):
  1. $ vi ~/.ssh/config
  2. Host osdev01
  3. Hostname osdev01
  4. User osdev
  5. Host osdev02
  6. Hostname osdev02
  7. User osdev
  8. Host osdev03
  9. Hostname osdev03
  10. User osdev
  • Verify that passwordless login works:
  1. [root@osdev01 ~]# ssh osdev01
  2. Last login: Wed Aug 22 16:53:56 2018 from osdev01
  3. [osdev@osdev01 ~]$ exit
  4. logout
  5. Connection to osdev01 closed.
  6. [root@osdev01 ~]# ssh osdev02
  7. Last login: Wed Aug 22 16:55:06 2018 from osdev01
  8. [osdev@osdev02 ~]$ exit
  9. logout
  10. Connection to osdev02 closed.
  11. [root@osdev01 ~]# ssh osdev03
  12. Last login: Wed Aug 22 16:55:35 2018 from osdev01
  13. [osdev@osdev03 ~]$ exit
  14. logout
  15. Connection to osdev03 closed.

Starting the Deployment

Initialize the System

  • Install ceph-deploy:
$ yum install -y ceph-deploy
  • Create the ceph-deploy working directory:
  1. $ su - osdev
  2. $ mkdir -pv /opt/ceph/deploy && cd /opt/ceph/deploy
  • Create a Ceph cluster with osdev01, osdev02 and osdev03 as Monitor nodes:
$ ceph-deploy new osdev01 osdev02 osdev03
  • Inspect the generated configuration files:
  1. $ ls
  2. ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
  3. $ cat ceph.conf
  4. [global]
  5. fsid = 42ded78e-211b-4095-b795-a33f116727fc
  6. mon_initial_members = osdev01, osdev02, osdev03
  7. mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
  8. auth_cluster_required = cephx
  9. auth_service_required = cephx
  10. auth_client_required = cephx
  • Edit the cluster configuration (a sketch for pushing the edited file to all nodes follows this listing):
  1. $ vi ceph.conf
  2. public_network = 172.29.101.0/24
  3. cluster_network = 172.29.101.0/24
  4. osd_pool_default_size = 3
  5. osd_pool_default_min_size = 1
  6. osd_pool_default_pg_num = 8
  7. osd_pool_default_pgp_num = 8
  8. osd_crush_chooseleaf_type = 1
  9. [mon]
  10. mon_clock_drift_allowed = 0.5
  11. [osd]
  12. osd_mkfs_type = xfs
  13. osd_mkfs_options_xfs = -f
  14. filestore_max_sync_interval = 5
  15. filestore_min_sync_interval = 0.1
  16. filestore_fd_cache_size = 655350
  17. filestore_omap_header_cache_size = 655350
  18. filestore_fd_cache_random = true
  19. osd op threads = 8
  20. osd disk threads = 4
  21. filestore op threads = 8
  22. max_open_files = 655350
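  • After editing, the updated ceph.conf can be distributed to every node (a minimal sketch; the same effect is also achieved implicitly by the --overwrite-conf steps used later):
  1. $ ceph-deploy --overwrite-conf config push osdev01 osdev02 osdev03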

Install Packages

  • 3个节点上安装Ceph软件包(如果出现错误,则先到3个节点上分别先删除软件包):
  1. # sudo yum remove -y ceph-release
  2. $ ceph-deploy install osdev01 osdev02 osdev03

Deploy the Monitors

  • Deploy the initial Monitors:
$ ceph-deploy mon create-initial
  • Inspect the generated configuration and keyring files:
  1. $ ls
  2. ceph.bootstrap-mds.keyring ceph.bootstrap-mgr.keyring ceph.bootstrap-osd.keyring ceph.bootstrap-rgw.keyring ceph.client.admin.keyring ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
  3. $ sudo chmod a+r /etc/ceph/ceph.client.admin.keyring
  • Push the configuration and admin keyring to the specified nodes:
$ ceph-deploy --overwrite-conf admin osdev01 osdev02 osdev03
  • Configure the available-space warning threshold for the Monitor on osdev01:
  1. $ ceph -s
  2. cluster:
  3. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  4. health: HEALTH_WARN
  5. mon osdev01 is low on available space
  6. services:
  7. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  8. mgr: osdev03(active), standbys: osdev02, osdev01
  9. osd: 3 osds: 3 up, 3 in
  10. rgw: 3 daemons active
  11. data:
  12. pools: 10 pools, 176 pgs
  13. objects: 578 objects, 477 MiB
  14. usage: 4.0 GiB used, 22 TiB / 22 TiB avail
  15. pgs: 176 active+clean
  16. $ ceph daemon mon.osdev01 config get mon_data_avail_warn
  17. {
  18. "mon_data_avail_warn": "30"
  19. }
  20. $ ceph daemon mon.osdev01 config set mon_data_avail_warn 10
  21. {
  22. "success": "mon_data_avail_warn = '10' (not observed, change may require restart) "
  23. }
  24. $ vi /etc/ceph/ceph.conf
  25. [mon]
  26. mon_clock_drift_allowed = 0.5
  27. mon allow pool delete = true
  28. mon_data_avail_warn = 10
  29. $ systemctl restart ceph-mon@osdev01.service
  30. $ ceph daemon mon.osdev01 config get mon_data_avail_warn
  31. {
  32. "mon_data_avail_warn": "10"
  33. }
  34. $ ceph -s
  35. cluster:
  36. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  37. health: HEALTH_OK
  38. services:
  39. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  40. mgr: osdev03(active), standbys: osdev02, osdev01
  41. osd: 3 osds: 3 up, 3 in
  42. rgw: 3 daemons active
  43. data:
  44. pools: 10 pools, 176 pgs
  45. objects: 578 objects, 477 MiB
  46. usage: 4.0 GiB used, 22 TiB / 22 TiB avail
  47. pgs: 176 active+clean

Remove a Monitor

  • Remove the Monitor service from osdev01:
  1. $ ceph-deploy mon destroy osdev01
  2. [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  3. [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon destroy osdev01
  4. [ceph_deploy.cli][INFO ] ceph-deploy options:
  5. [ceph_deploy.cli][INFO ] username : None
  6. [ceph_deploy.cli][INFO ] verbose : False
  7. [ceph_deploy.cli][INFO ] overwrite_conf : False
  8. [ceph_deploy.cli][INFO ] subcommand : destroy
  9. [ceph_deploy.cli][INFO ] quiet : False
  10. [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2a70e3db00>
  11. [ceph_deploy.cli][INFO ] cluster : ceph
  12. [ceph_deploy.cli][INFO ] mon : ['osdev01']
  13. [ceph_deploy.cli][INFO ] func : <function mon at 0x7f2a7129c848>
  14. [ceph_deploy.cli][INFO ] ceph_conf : None
  15. [ceph_deploy.cli][INFO ] default_release : False
  16. [ceph_deploy.mon][DEBUG ] Removing mon from osdev01
  17. [osdev01][DEBUG ] connected to host: osdev01
  18. [osdev01][DEBUG ] detect platform information from remote host
  19. [osdev01][DEBUG ] detect machine type
  20. [osdev01][DEBUG ] find the location of an executable
  21. [osdev01][DEBUG ] get remote short hostname
  22. [osdev01][INFO ] Running command: ceph --cluster=ceph -n mon. -k /var/lib/ceph/mon/ceph-osdev01/keyring mon remove osdev01
  23. [osdev01][WARNIN] removing mon.osdev01 at 172.29.101.166:6789/0, there will be 2 monitors
  24. [osdev01][INFO ] polling the daemon to verify it stopped
  25. [osdev01][INFO ] Running command: systemctl stop ceph-mon@osdev01.service
  26. [osdev01][INFO ] Running command: mkdir -p /var/lib/ceph/mon-removed
  27. [osdev01][DEBUG ] move old monitor data
  • Re-add the Monitor service on osdev01:
  1. $ ceph-deploy mon add osdev01
  2. [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  3. [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add osdev01
  4. [ceph_deploy.cli][INFO ] ceph-deploy options:
  5. [ceph_deploy.cli][INFO ] username : None
  6. [ceph_deploy.cli][INFO ] verbose : False
  7. [ceph_deploy.cli][INFO ] overwrite_conf : False
  8. [ceph_deploy.cli][INFO ] subcommand : add
  9. [ceph_deploy.cli][INFO ] quiet : False
  10. [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f791d413878>
  11. [ceph_deploy.cli][INFO ] cluster : ceph
  12. [ceph_deploy.cli][INFO ] mon : ['osdev01']
  13. [ceph_deploy.cli][INFO ] func : <function mon at 0x7f791d870848>
  14. [ceph_deploy.cli][INFO ] address : None
  15. [ceph_deploy.cli][INFO ] ceph_conf : None
  16. [ceph_deploy.cli][INFO ] default_release : False
  17. [ceph_deploy.mon][INFO ] ensuring configuration of new mon host: osdev01
  18. [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osdev01
  19. [osdev01][DEBUG ] connected to host: osdev01
  20. [osdev01][DEBUG ] detect platform information from remote host
  21. [osdev01][DEBUG ] detect machine type
  22. [osdev01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  23. [ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host osdev01
  24. [ceph_deploy.mon][DEBUG ] using mon address by resolving host: 172.29.101.166
  25. [ceph_deploy.mon][DEBUG ] detecting platform for host osdev01 ...
  26. [osdev01][DEBUG ] connected to host: osdev01
  27. [osdev01][DEBUG ] detect platform information from remote host
  28. [osdev01][DEBUG ] detect machine type
  29. [osdev01][DEBUG ] find the location of an executable
  30. [ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
  31. [osdev01][DEBUG ] determining if provided host has same hostname in remote
  32. [osdev01][DEBUG ] get remote short hostname
  33. [osdev01][DEBUG ] adding mon to osdev01
  34. [osdev01][DEBUG ] get remote short hostname
  35. [osdev01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  36. [osdev01][DEBUG ] create the mon path if it does not exist
  37. [osdev01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-osdev01/done
  38. [osdev01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-osdev01/done
  39. [osdev01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-osdev01.mon.keyring
  40. [osdev01][DEBUG ] create the monitor keyring file
  41. [osdev01][INFO ] Running command: ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.osdev01.monmap
  42. [osdev01][WARNIN] got monmap epoch 3
  43. [osdev01][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i osdev01 --monmap /var/lib/ceph/tmp/ceph.osdev01.monmap --keyring /var/lib/ceph/tmp/ceph-osdev01.mon.keyring --setuser 167 --setgroup 167
  44. [osdev01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-osdev01.mon.keyring
  45. [osdev01][DEBUG ] create a done file to avoid re-doing the mon deployment
  46. [osdev01][DEBUG ] create the init path if it does not exist
  47. [osdev01][INFO ] Running command: systemctl enable ceph.target
  48. [osdev01][INFO ] Running command: systemctl enable ceph-mon@osdev01
  49. [osdev01][INFO ] Running command: systemctl start ceph-mon@osdev01
  50. [osdev01][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.osdev01.asok mon_status
  51. [osdev01][WARNIN] monitor osdev01 does not exist in monmap
  52. [osdev01][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.osdev01.asok mon_status
  53. [osdev01][DEBUG ] ********************************************************************************
  54. [osdev01][DEBUG ] status for monitor: mon.osdev01
  55. [osdev01][DEBUG ] {
  56. [osdev01][DEBUG ] "election_epoch": 0,
  57. [osdev01][DEBUG ] "extra_probe_peers": [],
  58. [osdev01][DEBUG ] "feature_map": {
  59. [osdev01][DEBUG ] "client": [
  60. [osdev01][DEBUG ] {
  61. [osdev01][DEBUG ] "features": "0x1ffddff8eea4fffb",
  62. [osdev01][DEBUG ] "num": 1,
  63. [osdev01][DEBUG ] "release": "luminous"
  64. [osdev01][DEBUG ] },
  65. [osdev01][DEBUG ] {
  66. [osdev01][DEBUG ] "features": "0x3ffddff8ffa4fffb",
  67. [osdev01][DEBUG ] "num": 1,
  68. [osdev01][DEBUG ] "release": "luminous"
  69. [osdev01][DEBUG ] }
  70. [osdev01][DEBUG ] ],
  71. [osdev01][DEBUG ] "mds": [
  72. [osdev01][DEBUG ] {
  73. [osdev01][DEBUG ] "features": "0x3ffddff8ffa4fffb",
  74. [osdev01][DEBUG ] "num": 2,
  75. [osdev01][DEBUG ] "release": "luminous"
  76. [osdev01][DEBUG ] }
  77. [osdev01][DEBUG ] ],
  78. [osdev01][DEBUG ] "mgr": [
  79. [osdev01][DEBUG ] {
  80. [osdev01][DEBUG ] "features": "0x3ffddff8ffa4fffb",
  81. [osdev01][DEBUG ] "num": 3,
  82. [osdev01][DEBUG ] "release": "luminous"
  83. [osdev01][DEBUG ] }
  84. [osdev01][DEBUG ] ],
  85. [osdev01][DEBUG ] "mon": [
  86. [osdev01][DEBUG ] {
  87. [osdev01][DEBUG ] "features": "0x3ffddff8ffa4fffb",
  88. [osdev01][DEBUG ] "num": 1,
  89. [osdev01][DEBUG ] "release": "luminous"
  90. [osdev01][DEBUG ] }
  91. [osdev01][DEBUG ] ],
  92. [osdev01][DEBUG ] "osd": [
  93. [osdev01][DEBUG ] {
  94. [osdev01][DEBUG ] "features": "0x3ffddff8ffa4fffb",
  95. [osdev01][DEBUG ] "num": 2,
  96. [osdev01][DEBUG ] "release": "luminous"
  97. [osdev01][DEBUG ] }
  98. [osdev01][DEBUG ] ]
  99. [osdev01][DEBUG ] },
  100. [osdev01][DEBUG ] "features": {
  101. [osdev01][DEBUG ] "quorum_con": "0",
  102. [osdev01][DEBUG ] "quorum_mon": [],
  103. [osdev01][DEBUG ] "required_con": "144115188346404864",
  104. [osdev01][DEBUG ] "required_mon": [
  105. [osdev01][DEBUG ] "kraken",
  106. [osdev01][DEBUG ] "luminous",
  107. [osdev01][DEBUG ] "mimic",
  108. [osdev01][DEBUG ] "osdmap-prune"
  109. [osdev01][DEBUG ] ]
  110. [osdev01][DEBUG ] },
  111. [osdev01][DEBUG ] "monmap": {
  112. [osdev01][DEBUG ] "created": "2018-08-23 10:55:27.755434",
  113. [osdev01][DEBUG ] "epoch": 3,
  114. [osdev01][DEBUG ] "features": {
  115. [osdev01][DEBUG ] "optional": [],
  116. [osdev01][DEBUG ] "persistent": [
  117. [osdev01][DEBUG ] "kraken",
  118. [osdev01][DEBUG ] "luminous",
  119. [osdev01][DEBUG ] "mimic",
  120. [osdev01][DEBUG ] "osdmap-prune"
  121. [osdev01][DEBUG ] ]
  122. [osdev01][DEBUG ] },
  123. [osdev01][DEBUG ] "fsid": "383237bd-becf-49d5-9bd6-deb0bc35ab2a",
  124. [osdev01][DEBUG ] "modified": "2018-09-19 14:57:08.984472",
  125. [osdev01][DEBUG ] "mons": [
  126. [osdev01][DEBUG ] {
  127. [osdev01][DEBUG ] "addr": "172.29.101.167:6789/0",
  128. [osdev01][DEBUG ] "name": "osdev02",
  129. [osdev01][DEBUG ] "public_addr": "172.29.101.167:6789/0",
  130. [osdev01][DEBUG ] "rank": 0
  131. [osdev01][DEBUG ] },
  132. [osdev01][DEBUG ] {
  133. [osdev01][DEBUG ] "addr": "172.29.101.168:6789/0",
  134. [osdev01][DEBUG ] "name": "osdev03",
  135. [osdev01][DEBUG ] "public_addr": "172.29.101.168:6789/0",
  136. [osdev01][DEBUG ] "rank": 1
  137. [osdev01][DEBUG ] }
  138. [osdev01][DEBUG ] ]
  139. [osdev01][DEBUG ] },
  140. [osdev01][DEBUG ] "name": "osdev01",
  141. [osdev01][DEBUG ] "outside_quorum": [],
  142. [osdev01][DEBUG ] "quorum": [],
  143. [osdev01][DEBUG ] "rank": -1,
  144. [osdev01][DEBUG ] "state": "probing",
  145. [osdev01][DEBUG ] "sync_provider": []
  146. [osdev01][DEBUG ] }
  147. [osdev01][DEBUG ] ********************************************************************************
  148. [osdev01][INFO ] monitor: mon.osdev01 is currently at the state of probing

Deploy the Managers

  • 3个节点上安装Manager服务(从kraken版本开始增加该服务,从luminous版本开始是必选):
$ ceph-deploy mgr create osdev01 osdev02 osdev03
  • Check the cluster status; only one of the 3 Managers is active:
  1. $ sudo ceph -s
  2. cluster:
  3. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  4. health: HEALTH_WARN
  5. mon osdev01 is low on available space
  6. services:
  7. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  8. mgr: osdev01(active), standbys: osdev03, osdev02
  9. osd: 0 osds: 0 up, 0 in
  10. data:
  11. pools: 0 pools, 0 pgs
  12. objects: 0 objects, 0 B
  13. usage: 0 B used, 0 B / 0 B avail
  14. pgs:
  • Check the current quorum status (an optional dashboard sketch follows this listing):
  1. $ sudo ceph quorum_status --format json-pretty
  2. {
  3. "election_epoch": 8,
  4. "quorum": [
  5. 0,
  6. 1,
  7. 2
  8. ],
  9. "quorum_names": [
  10. "osdev01",
  11. "osdev02",
  12. "osdev03"
  13. ],
  14. "quorum_leader_name": "osdev01",
  15. "monmap": {
  16. "epoch": 2,
  17. "fsid": "383237bd-becf-49d5-9bd6-deb0bc35ab2a",
  18. "modified": "2018-08-23 10:55:53.598952",
  19. "created": "2018-08-23 10:55:27.755434",
  20. "features": {
  21. "persistent": [
  22. "kraken",
  23. "luminous",
  24. "mimic",
  25. "osdmap-prune"
  26. ],
  27. "optional": []
  28. },
  29. "mons": [
  30. {
  31. "rank": 0,
  32. "name": "osdev01",
  33. "addr": "172.29.101.166:6789/0",
  34. "public_addr": "172.29.101.166:6789/0"
  35. },
  36. {
  37. "rank": 1,
  38. "name": "osdev02",
  39. "addr": "172.29.101.167:6789/0",
  40. "public_addr": "172.29.101.167:6789/0"
  41. },
  42. {
  43. "rank": 2,
  44. "name": "osdev03",
  45. "addr": "172.29.101.168:6789/0",
  46. "public_addr": "172.29.101.168:6789/0"
  47. }
  48. ]
  49. }
  50. }
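  • Optionally (a sketch, not part of the original notes), the Mimic Manager ships a web dashboard module that can be enabled on the active Manager; the user name and password below are hypothetical placeholders:
  1. $ sudo ceph mgr module enable dashboard
  2. # generate a self-signed certificate and set login credentials
  3. $ sudo ceph dashboard create-self-signed-cert
  4. $ sudo ceph dashboard set-login-credentials admin secret
  5. $ sudo ceph mgr services
  6. # the "dashboard" entry shows the HTTPS URL being served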

Deploy the OSDs

  • If OSDs were deployed here before, remove their leftover LVM volumes first:
$ sudo lvs | awk 'NR!=1 {if($1~"osd-block-") print $2 "/" $1}' | xargs -I {} sudo lvremove -y {}
  • Wipe the disks (can be skipped if they were already cleared with dd and carry no LVM volumes):
  1. $ ceph-deploy disk zap osdev01 /dev/sdb
  2. $ ceph-deploy disk zap osdev02 /dev/sdb
  3. $ ceph-deploy disk zap osdev03 /dev/sdb
  • Deploy the OSD service on the 3 nodes; BlueStore is used by default, with no separate journal or block_db (a sketch with a separate block_db device follows this listing):
  1. $ ceph-deploy osd create --data /dev/sdb osdev01
  2. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/osdev/.cephdeploy.conf
  3. [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb osdev01
  4. [ceph_deploy.cli][INFO ] ceph-deploy options:
  5. [ceph_deploy.cli][INFO ] verbose : False
  6. [ceph_deploy.cli][INFO ] bluestore : None
  7. [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff06aa94d88>
  8. [ceph_deploy.cli][INFO ] cluster : ceph
  9. [ceph_deploy.cli][INFO ] fs_type : xfs
  10. [ceph_deploy.cli][INFO ] block_wal : None
  11. [ceph_deploy.cli][INFO ] default_release : False
  12. [ceph_deploy.cli][INFO ] username : None
  13. [ceph_deploy.cli][INFO ] journal : None
  14. [ceph_deploy.cli][INFO ] subcommand : create
  15. [ceph_deploy.cli][INFO ] host : osdev01
  16. [ceph_deploy.cli][INFO ] filestore : None
  17. [ceph_deploy.cli][INFO ] func : <function osd at 0x7ff06b2efb90>
  18. [ceph_deploy.cli][INFO ] ceph_conf : None
  19. [ceph_deploy.cli][INFO ] zap_disk : False
  20. [ceph_deploy.cli][INFO ] data : /dev/sdb
  21. [ceph_deploy.cli][INFO ] block_db : None
  22. [ceph_deploy.cli][INFO ] dmcrypt : False
  23. [ceph_deploy.cli][INFO ] overwrite_conf : False
  24. [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
  25. [ceph_deploy.cli][INFO ] quiet : False
  26. [ceph_deploy.cli][INFO ] debug : False
  27. [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
  28. [osdev01][DEBUG ] connection detected need for sudo
  29. [osdev01][DEBUG ] connected to host: osdev01
  30. [osdev01][DEBUG ] detect platform information from remote host
  31. [osdev01][DEBUG ] detect machine type
  32. [osdev01][DEBUG ] find the location of an executable
  33. [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
  34. [ceph_deploy.osd][DEBUG ] Deploying osd to osdev01
  35. [osdev01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  36. [osdev01][DEBUG ] find the location of an executable
  37. [osdev01][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
  38. [osdev01][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
  39. [osdev01][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3c3d6c5a-c82e-4318-a8fb-134de5444ca7
  40. [osdev01][DEBUG ] Running command: /usr/sbin/vgcreate --force --yes ceph-95b94aa4-22df-401c-822b-dd62f82f6b08 /dev/sdb
  41. [osdev01][DEBUG ] stdout: Physical volume "/dev/sdb" successfully created.
  42. [osdev01][DEBUG ] stdout: Volume group "ceph-95b94aa4-22df-401c-822b-dd62f82f6b08" successfully created
  43. [osdev01][DEBUG ] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-3c3d6c5a-c82e-4318-a8fb-134de5444ca7 ceph-95b94aa4-22df-401c-822b-dd62f82f6b08
  44. [osdev01][DEBUG ] stdout: Logical volume "osd-block-3c3d6c5a-c82e-4318-a8fb-134de5444ca7" created.
  45. [osdev01][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
  46. [osdev01][DEBUG ] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
  47. [osdev01][DEBUG ] Running command: /bin/chown -h ceph:ceph /dev/ceph-95b94aa4-22df-401c-822b-dd62f82f6b08/osd-block-3c3d6c5a-c82e-4318-a8fb-134de5444ca7
  48. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-0
  49. [osdev01][DEBUG ] Running command: /bin/ln -s /dev/ceph-95b94aa4-22df-401c-822b-dd62f82f6b08/osd-block-3c3d6c5a-c82e-4318-a8fb-134de5444ca7 /var/lib/ceph/osd/ceph-1/block
  50. [osdev01][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
  51. [osdev01][DEBUG ] stderr: got monmap epoch 1
  52. [osdev01][DEBUG ] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQDxF35bOAdNHBAAelXgl7laeMnVsGAlHl0dxQ==
  53. [osdev01][DEBUG ] stdout: creating /var/lib/ceph/osd/ceph-1/keyring
  54. [osdev01][DEBUG ] added entity osd.1 auth auth(auid = 18446744073709551615 key=AQDxF35bOAdNHBAAelXgl7laeMnVsGAlHl0dxQ== with 0 caps)
  55. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
  56. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
  57. [osdev01][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 3c3d6c5a-c82e-4318-a8fb-134de5444ca7 --setuser ceph --setgroup ceph
  58. [osdev01][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb
  59. [osdev01][DEBUG ] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-95b94aa4-22df-401c-822b-dd62f82f6b08/osd-block-3c3d6c5a-c82e-4318-a8fb-134de5444ca7 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
  60. [osdev01][DEBUG ] Running command: /bin/ln -snf /dev/ceph-95b94aa4-22df-401c-822b-dd62f82f6b08/osd-block-3c3d6c5a-c82e-4318-a8fb-134de5444ca7 /var/lib/ceph/osd/ceph-1/block
  61. [osdev01][DEBUG ] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
  62. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-0
  63. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
  64. [osdev01][DEBUG ] Running command: /bin/systemctl enable ceph-volume@lvm-1-3c3d6c5a-c82e-4318-a8fb-134de5444ca7
  65. [osdev01][DEBUG ] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-3c3d6c5a-c82e-4318-a8fb-134de5444ca7.service to /usr/lib/systemd/system/ceph-volume@.service.
  66. [osdev01][DEBUG ] Running command: /bin/systemctl start ceph-osd@1
  67. [osdev01][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 1
  68. [osdev01][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb
  69. [osdev01][INFO ] checking OSD status...
  70. [osdev01][DEBUG ] find the location of an executable
  71. [osdev01][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
  72. [osdev01][WARNIN] there is 1 OSD down
  73. [osdev01][WARNIN] there is 1 OSD out
  74. [ceph_deploy.osd][DEBUG ] Host osdev01 is now ready for osd use.
  75. $ ceph-deploy osd create --data /dev/sdb osdev02
  76. $ ceph-deploy osd create --data /dev/sdb osdev03
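  • If a faster device were available for the RocksDB metadata (a sketch only, not done in this deployment; /dev/nvme0n1p1 is a hypothetical partition), ceph-deploy can place the BlueStore DB on it:
  1. $ ceph-deploy osd create --data /dev/sdb --block-db /dev/nvme0n1p1 osdev01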
  • Inspect how the OSD occupies the disk; newer Ceph releases use BlueStore by default:
  1. $ ceph-deploy osd list osdev01
  2. [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  3. [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd list osdev01
  4. [ceph_deploy.cli][INFO ] ceph-deploy options:
  5. [ceph_deploy.cli][INFO ] username : None
  6. [ceph_deploy.cli][INFO ] verbose : False
  7. [ceph_deploy.cli][INFO ] debug : False
  8. [ceph_deploy.cli][INFO ] overwrite_conf : False
  9. [ceph_deploy.cli][INFO ] subcommand : list
  10. [ceph_deploy.cli][INFO ] quiet : False
  11. [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f54d13d9ef0>
  12. [ceph_deploy.cli][INFO ] cluster : ceph
  13. [ceph_deploy.cli][INFO ] host : ['osdev01']
  14. [ceph_deploy.cli][INFO ] func : <function osd at 0x7f54d1c34b90>
  15. [ceph_deploy.cli][INFO ] ceph_conf : None
  16. [ceph_deploy.cli][INFO ] default_release : False
  17. [osdev01][DEBUG ] connected to host: osdev01
  18. [osdev01][DEBUG ] detect platform information from remote host
  19. [osdev01][DEBUG ] detect machine type
  20. [osdev01][DEBUG ] find the location of an executable
  21. [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
  22. [ceph_deploy.osd][DEBUG ] Listing disks on osdev01...
  23. [osdev01][DEBUG ] find the location of an executable
  24. [osdev01][INFO ] Running command: /usr/sbin/ceph-volume lvm list
  25. [osdev01][DEBUG ]
  26. [osdev01][DEBUG ]
  27. [osdev01][DEBUG ] ====== osd.0 =======
  28. [osdev01][DEBUG ]
  29. [osdev01][DEBUG ] [block] /dev/ceph-a2130090-fb78-4b65-838f-7496c63fa025/osd-block-2cb30e7c-7b98-4a6c-816a-2de7201a7669
  30. [osdev01][DEBUG ]
  31. [osdev01][DEBUG ] type block
  32. [osdev01][DEBUG ] osd id 0
  33. [osdev01][DEBUG ] cluster fsid 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  34. [osdev01][DEBUG ] cluster name ceph
  35. [osdev01][DEBUG ] osd fsid 2cb30e7c-7b98-4a6c-816a-2de7201a7669
  36. [osdev01][DEBUG ] encrypted 0
  37. [osdev01][DEBUG ] cephx lockbox secret
  38. [osdev01][DEBUG ] block uuid AL5bfk-acAQ-9guP-tl61-A4Jf-RQOF-nFnE9o
  39. [osdev01][DEBUG ] block device /dev/ceph-a2130090-fb78-4b65-838f-7496c63fa025/osd-block-2cb30e7c-7b98-4a6c-816a-2de7201a7669
  40. [osdev01][DEBUG ] vdo 0
  41. [osdev01][DEBUG ] crush device class None
  42. [osdev01][DEBUG ] devices /dev/sdb
  43. $ lvs
  44. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  45. osd-block-2cb30e7c-7b98-4a6c-816a-2de7201a7669 ceph-a2130090-fb78-4b65-838f-7496c63fa025 -wi-ao---- <7.28t
  46. $ pvs
  47. PV VG Fmt Attr PSize PFree
  48. /dev/sdb ceph-a2130090-fb78-4b65-838f-7496c63fa025 lvm2 a-- <7.28t 0
  49. # osdev01
  50. $ df -h | grep ceph
  51. tmpfs 189G 24K 189G 1% /var/lib/ceph/osd/ceph-0
  52. $ ll /var/lib/ceph/osd/ceph-0
  53. total 24
  54. lrwxrwxrwx 1 ceph ceph 93 Aug 29 15:15 block -> /dev/ceph-a2130090-fb78-4b65-838f-7496c63fa025/osd-block-2cb30e7c-7b98-4a6c-816a-2de7201a7669
  55. -rw------- 1 ceph ceph 37 Aug 29 15:15 ceph_fsid
  56. -rw------- 1 ceph ceph 37 Aug 29 15:15 fsid
  57. -rw------- 1 ceph ceph 55 Aug 29 15:15 keyring
  58. -rw------- 1 ceph ceph 6 Aug 29 15:15 ready
  59. -rw------- 1 ceph ceph 10 Aug 29 15:15 type
  60. -rw------- 1 ceph ceph 2 Aug 29 15:15 whoami
  61. $ cat /var/lib/ceph/osd/ceph-0/whoami
  62. 0
  63. $ cat /var/lib/ceph/osd/ceph-0/type
  64. bluestore
  65. $ cat /var/lib/ceph/osd/ceph-0/ready
  66. ready
  67. $ cat /var/lib/ceph/osd/ceph-0/fsid
  68. 2cb30e7c-7b98-4a6c-816a-2de7201a7669
  69. # osdev02
  70. $ df -h | grep ceph
  71. tmpfs 189G 48K 189G 1% /var/lib/ceph/osd/ceph-1
  • Check the cluster status:
  1. $ sudo ceph health
  2. HEALTH_WARN mon osdev01 is low on available space
  3. $ sudo ceph -s
  4. cluster:
  5. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  6. health: HEALTH_WARN
  7. mon osdev01 is low on available space
  8. services:
  9. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  10. mgr: osdev01(active), standbys: osdev03, osdev02
  11. osd: 3 osds: 3 up, 3 in
  12. data:
  13. pools: 0 pools, 0 pgs
  14. objects: 0 objects, 0 B
  15. usage: 3.0 GiB used, 22 TiB / 22 TiB avail
  16. pgs:
  • Check the OSD status (a per-OSD utilization check is sketched after the tree output):
  1. $ sudo ceph osd tree
  2. ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
  3. -1 21.83066 root default
  4. -3 7.27689 host osdev01
  5. 0 hdd 7.27689 osd.0 up 1.00000 1.00000
  6. -5 7.27689 host osdev02
  7. 1 hdd 7.27689 osd.1 up 1.00000 1.00000
  8. -7 7.27689 host osdev03
  9. 2 hdd 7.27689 osd.2 up 1.00000 1.00000
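  • For a quick look at per-OSD utilization and weight distribution (a minimal check, not part of the original notes):
  1. $ sudo ceph osd df
  2. # lists size, used and available space, and PG count for every OSD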

Remove an OSD

  • Mark the OSD as out:
  1. $ ceph osd out 0
  2. marked out osd.0.
  • Watch the data migration:
$ ceph -w
  • Stop the OSD service on the corresponding node:
$ systemctl stop ceph-osd@0
  • Remove the OSD from the CRUSH map:
  1. $ ceph osd crush remove osd.0
  2. removed item id 0 name 'osd.0' from crush map
  • Delete the OSD's authentication key (a note on removing the stale OSD-map entry follows these commands):
  1. $ ceph auth del osd.0
  2. updated
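  • Note that osd.0 still has an entry in the OSD map after these steps (it shows up as a down osd.0 in the ceph osd tree output further below). A sketch of the missing step, or of the shortcut available since Luminous that combines crush remove, auth del and osd rm:
  1. $ ceph osd rm 0
  2. # or, as a single command:
  3. $ ceph osd purge 0 --yes-i-really-mean-it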
  • Clean the OSD's disk:
  1. $ sudo lvs | awk 'NR!=1 {if($1~"osd-block-") print $2 "/" $1}' | xargs -I {} sudo lvremove -y {}
  2. Logical volume "osd-block-2cb30e7c-7b98-4a6c-816a-2de7201a7669" successfully removed
  3. $ ceph-deploy disk zap osdev01 /dev/sdb
  4. [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  5. [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap osdev01 /dev/sdb
  6. [ceph_deploy.cli][INFO ] ceph-deploy options:
  7. [ceph_deploy.cli][INFO ] username : None
  8. [ceph_deploy.cli][INFO ] verbose : False
  9. [ceph_deploy.cli][INFO ] debug : False
  10. [ceph_deploy.cli][INFO ] overwrite_conf : False
  11. [ceph_deploy.cli][INFO ] subcommand : zap
  12. [ceph_deploy.cli][INFO ] quiet : False
  13. [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3029a04c20>
  14. [ceph_deploy.cli][INFO ] cluster : ceph
  15. [ceph_deploy.cli][INFO ] host : osdev01
  16. [ceph_deploy.cli][INFO ] func : <function disk at 0x7f3029e50d70>
  17. [ceph_deploy.cli][INFO ] ceph_conf : None
  18. [ceph_deploy.cli][INFO ] default_release : False
  19. [ceph_deploy.cli][INFO ] disk : ['/dev/sdb']
  20. [ceph_deploy.osd][DEBUG ] zapping /dev/sdb on osdev01
  21. [osdev01][DEBUG ] connected to host: osdev01
  22. [osdev01][DEBUG ] detect platform information from remote host
  23. [osdev01][DEBUG ] detect machine type
  24. [osdev01][DEBUG ] find the location of an executable
  25. [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
  26. [osdev01][DEBUG ] zeroing last few blocks of device
  27. [osdev01][DEBUG ] find the location of an executable
  28. [osdev01][INFO ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
  29. [osdev01][DEBUG ] --> Zapping: /dev/sdb
  30. [osdev01][DEBUG ] Running command: /usr/sbin/cryptsetup status /dev/mapper/
  31. [osdev01][DEBUG ] stdout: /dev/mapper/ is inactive.
  32. [osdev01][DEBUG ] Running command: /usr/sbin/wipefs --all /dev/sdb
  33. [osdev01][DEBUG ] stdout: /dev/sdb: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
  34. [osdev01][DEBUG ] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10
  35. [osdev01][DEBUG ] stderr: 10+0 records in
  36. [osdev01][DEBUG ] 10+0 records out
  37. [osdev01][DEBUG ] 10485760 bytes (10 MB) copied
  38. [osdev01][DEBUG ] stderr: , 0.0131341 s, 798 MB/s
  39. [osdev01][DEBUG ] --> Zapping successful for: /dev/sdb
  • Re-add the OSD:
  1. $ ceph-deploy osd create --data /dev/sdb osdev01
  2. [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  3. [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb osdev01
  4. [ceph_deploy.cli][INFO ] ceph-deploy options:
  5. [ceph_deploy.cli][INFO ] verbose : False
  6. [ceph_deploy.cli][INFO ] bluestore : None
  7. [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6594673d40>
  8. [ceph_deploy.cli][INFO ] cluster : ceph
  9. [ceph_deploy.cli][INFO ] fs_type : xfs
  10. [ceph_deploy.cli][INFO ] block_wal : None
  11. [ceph_deploy.cli][INFO ] default_release : False
  12. [ceph_deploy.cli][INFO ] username : None
  13. [ceph_deploy.cli][INFO ] journal : None
  14. [ceph_deploy.cli][INFO ] subcommand : create
  15. [ceph_deploy.cli][INFO ] host : osdev01
  16. [ceph_deploy.cli][INFO ] filestore : None
  17. [ceph_deploy.cli][INFO ] func : <function osd at 0x7f6594abacf8>
  18. [ceph_deploy.cli][INFO ] ceph_conf : None
  19. [ceph_deploy.cli][INFO ] zap_disk : False
  20. [ceph_deploy.cli][INFO ] data : /dev/sdb
  21. [ceph_deploy.cli][INFO ] block_db : None
  22. [ceph_deploy.cli][INFO ] dmcrypt : False
  23. [ceph_deploy.cli][INFO ] overwrite_conf : False
  24. [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
  25. [ceph_deploy.cli][INFO ] quiet : False
  26. [ceph_deploy.cli][INFO ] debug : False
  27. [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
  28. [osdev01][DEBUG ] connected to host: osdev01
  29. [osdev01][DEBUG ] detect platform information from remote host
  30. [osdev01][DEBUG ] detect machine type
  31. [osdev01][DEBUG ] find the location of an executable
  32. [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
  33. [ceph_deploy.osd][DEBUG ] Deploying osd to osdev01
  34. [osdev01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  35. [osdev01][DEBUG ] find the location of an executable
  36. [osdev01][INFO ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
  37. [osdev01][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
  38. [osdev01][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new df124d5a-122a-48b4-9173-87088c6e6aac
  39. [osdev01][DEBUG ] Running command: /usr/sbin/vgcreate --force --yes ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320 /dev/sdb
  40. [osdev01][DEBUG ] stdout: Physical volume "/dev/sdb" successfully created.
  41. [osdev01][DEBUG ] stdout: Volume group "ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320" successfully created
  42. [osdev01][DEBUG ] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-df124d5a-122a-48b4-9173-87088c6e6aac ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320
  43. [osdev01][DEBUG ] stdout: Logical volume "osd-block-df124d5a-122a-48b4-9173-87088c6e6aac" created.
  44. [osdev01][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
  45. [osdev01][DEBUG ] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
  46. [osdev01][DEBUG ] Running command: /bin/chown -h ceph:ceph /dev/ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320/osd-block-df124d5a-122a-48b4-9173-87088c6e6aac
  47. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-0
  48. [osdev01][DEBUG ] Running command: /bin/ln -s /dev/ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320/osd-block-df124d5a-122a-48b4-9173-87088c6e6aac /var/lib/ceph/osd/ceph-3/block
  49. [osdev01][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
  50. [osdev01][DEBUG ] stderr: got monmap epoch 4
  51. [osdev01][DEBUG ] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQDP9qFbXoYRERAAMMz5EHjYAdlveVdDe1uAYg==
  52. [osdev01][DEBUG ] stdout: creating /var/lib/ceph/osd/ceph-3/keyring
  53. [osdev01][DEBUG ] stdout: added entity osd.3 auth auth(auid = 18446744073709551615 key=AQDP9qFbXoYRERAAMMz5EHjYAdlveVdDe1uAYg== with 0 caps)
  54. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
  55. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
  56. [osdev01][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid df124d5a-122a-48b4-9173-87088c6e6aac --setuser ceph --setgroup ceph
  57. [osdev01][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb
  58. [osdev01][DEBUG ] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320/osd-block-df124d5a-122a-48b4-9173-87088c6e6aac --path /var/lib/ceph/osd/ceph-3 --no-mon-config
  59. [osdev01][DEBUG ] Running command: /bin/ln -snf /dev/ceph-5cddc4d4-2b62-452a-8ba1-61df276d5320/osd-block-df124d5a-122a-48b4-9173-87088c6e6aac /var/lib/ceph/osd/ceph-3/block
  60. [osdev01][DEBUG ] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
  61. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-0
  62. [osdev01][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
  63. [osdev01][DEBUG ] Running command: /bin/systemctl enable ceph-volume@lvm-3-df124d5a-122a-48b4-9173-87088c6e6aac
  64. [osdev01][DEBUG ] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-df124d5a-122a-48b4-9173-87088c6e6aac.service to /usr/lib/systemd/system/ceph-volume@.service.
  65. [osdev01][DEBUG ] Running command: /bin/systemctl start ceph-osd@3
  66. [osdev01][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 3
  67. [osdev01][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb
  68. [osdev01][INFO ] checking OSD status...
  69. [osdev01][DEBUG ] find the location of an executable
  70. [osdev01][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
  71. [osdev01][WARNIN] there is 1 OSD down
  72. [osdev01][WARNIN] there is 1 OSD out
  73. [ceph_deploy.osd][DEBUG ] Host osdev01 is now ready for osd use.
  • Check the OSD status again:
  1. $ ceph osd tree
  2. ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
  3. -1 21.83066 root default
  4. -3 7.27689 host osdev01
  5. 3 hdd 7.27689 osd.3 up 1.00000 1.00000
  6. -5 7.27689 host osdev02
  7. 1 hdd 7.27689 osd.1 up 1.00000 1.00000
  8. -7 7.27689 host osdev03
  9. 2 hdd 7.27689 osd.2 up 1.00000 1.00000
  10. 0 0 osd.0 down 0 1.00000
  11. $ ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-3/block
  12. {
  13. "/var/lib/ceph/osd/ceph-3/block": {
  14. "osd_uuid": "df124d5a-122a-48b4-9173-87088c6e6aac",
  15. "size": 8000995590144,
  16. "btime": "2018-09-19 15:12:17.376253",
  17. "description": "main",
  18. "bluefs": "1",
  19. "ceph_fsid": "383237bd-becf-49d5-9bd6-deb0bc35ab2a",
  20. "kv_backend": "rocksdb",
  21. "magic": "ceph osd volume v026",
  22. "mkfs_done": "yes",
  23. "osd_key": "AQDP9qFbXoYRERAAMMz5EHjYAdlveVdDe1uAYg==",
  24. "ready": "ready",
  25. "whoami": "3"
  26. }
  27. }
  • Check the data-migration status:
  1. $ ceph -w
  2. cluster:
  3. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  4. health: HEALTH_WARN
  5. Degraded data redundancy: 4825/16156 objects degraded (29.865%), 83 pgs degraded, 63 pgs undersized
  6. clock skew detected on mon.osdev02
  7. mon osdev01 is low on available space
  8. services:
  9. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  10. mgr: osdev03(active), standbys: osdev02, osdev01
  11. osd: 4 osds: 3 up, 3 in; 63 remapped pgs
  12. rgw: 3 daemons active
  13. data:
  14. pools: 10 pools, 176 pgs
  15. objects: 5.39 k objects, 19 GiB
  16. usage: 43 GiB used, 22 TiB / 22 TiB avail
  17. pgs: 4825/16156 objects degraded (29.865%)
  18. 88 active+clean
  19. 48 active+undersized+degraded+remapped+backfill_wait
  20. 19 active+recovery_wait+degraded
  21. 15 active+recovery_wait+undersized+degraded+remapped
  22. 5 active+recovery_wait
  23. 1 active+recovering+degraded
  24. io:
  25. recovery: 15 MiB/s, 3 objects/s
  26. 2018-09-19 15:14:35.149958 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4825/16156 objects degraded (29.865%), 83 pgs degraded, 63 pgs undersized (PG_DEGRADED)
  27. 2018-09-19 15:14:40.154936 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4802/16156 objects degraded (29.723%), 83 pgs degraded, 63 pgs undersized (PG_DEGRADED)
  28. 2018-09-19 15:14:45.155511 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4785/16156 objects degraded (29.617%), 72 pgs degraded, 63 pgs undersized (PG_DEGRADED)
  29. 2018-09-19 15:14:50.156258 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4761/16156 objects degraded (29.469%), 70 pgs degraded, 63 pgs undersized (PG_DEGRADED)
  30. 2018-09-19 15:14:55.157259 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4736/16156 objects degraded (29.314%), 66 pgs degraded, 63 pgs undersized (PG_DEGRADED)
  31. 2018-09-19 15:15:00.157805 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4715/16156 objects degraded (29.184%), 66 pgs degraded, 63 pgs undersized (PG_DEGRADED)
  32. 2018-09-19 15:15:05.159788 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4700/16156 objects degraded (29.091%), 65 pgs degraded, 62 pgs undersized (PG_DEGRADED)
  33. 2018-09-19 15:15:10.160347 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4687/16156 objects degraded (29.011%), 65 pgs degraded, 62 pgs undersized (PG_DEGRADED)
  34. 2018-09-19 15:15:15.161346 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4663/16156 objects degraded (28.862%), 65 pgs degraded, 62 pgs undersized (PG_DEGRADED)
  35. 2018-09-19 15:15:20.163878 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4639/16156 objects degraded (28.714%), 64 pgs degraded, 62 pgs undersized (PG_DEGRADED)
  36. 2018-09-19 15:15:25.166626 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4634/16156 objects degraded (28.683%), 64 pgs degraded, 62 pgs undersized (PG_DEGRADED)
  37. 2018-09-19 15:15:30.168933 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4612/16156 objects degraded (28.547%), 62 pgs degraded, 61 pgs undersized (PG_DEGRADED)
  38. 2018-09-19 15:15:35.170116 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4590/16156 objects degraded (28.410%), 62 pgs degraded, 61 pgs undersized (PG_DEGRADED)
  39. 2018-09-19 15:15:35.310448 mon.osdev01 [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)
  40. 2018-09-19 15:15:40.170608 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4578/16156 objects degraded (28.336%), 60 pgs degraded, 60 pgs undersized (PG_DEGRADED)
  41. 2018-09-19 15:15:41.314443 mon.osdev01 [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering)
  42. 2018-09-19 15:15:45.171537 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4564/16156 objects degraded (28.250%), 60 pgs degraded, 60 pgs undersized (PG_DEGRADED)
  43. 2018-09-19 15:15:50.172340 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4546/16156 objects degraded (28.138%), 59 pgs degraded, 59 pgs undersized (PG_DEGRADED)
  44. 2018-09-19 15:15:55.173243 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4536/16156 objects degraded (28.076%), 59 pgs degraded, 59 pgs undersized (PG_DEGRADED)
  45. 2018-09-19 15:16:00.174125 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4514/16156 objects degraded (27.940%), 59 pgs degraded, 59 pgs undersized (PG_DEGRADED)
  46. 2018-09-19 15:16:05.176502 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4496/16156 objects degraded (27.829%), 58 pgs degraded, 58 pgs undersized (PG_DEGRADED)
  47. 2018-09-19 15:16:10.177113 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4486/16156 objects degraded (27.767%), 58 pgs degraded, 58 pgs undersized (PG_DEGRADED)
  48. 2018-09-19 15:16:15.178024 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4464/16156 objects degraded (27.631%), 58 pgs degraded, 58 pgs undersized (PG_DEGRADED)
  49. 2018-09-19 15:16:20.178774 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4457/16156 objects degraded (27.587%), 57 pgs degraded, 57 pgs undersized (PG_DEGRADED)
  50. 2018-09-19 15:16:25.179609 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4436/16156 objects degraded (27.457%), 57 pgs degraded, 57 pgs undersized (PG_DEGRADED)
  51. 2018-09-19 15:16:30.180333 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4426/16156 objects degraded (27.395%), 56 pgs degraded, 56 pgs undersized (PG_DEGRADED)
  52. 2018-09-19 15:16:35.180850 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4404/16156 objects degraded (27.259%), 56 pgs degraded, 56 pgs undersized (PG_DEGRADED)
  53. 2018-09-19 15:16:37.760009 mon.osdev01 [WRN] mon.1 172.29.101.167:6789/0 clock skew 1.47964s > max 0.5s
  54. 2018-09-19 15:16:40.181520 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4383/16156 objects degraded (27.129%), 55 pgs degraded, 55 pgs undersized (PG_DEGRADED)
  55. 2018-09-19 15:16:45.183101 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4373/16156 objects degraded (27.067%), 55 pgs degraded, 55 pgs undersized (PG_DEGRADED)
  56. 2018-09-19 15:16:50.184008 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4351/16156 objects degraded (26.931%), 55 pgs degraded, 55 pgs undersized (PG_DEGRADED)
  57. 2018-09-19 15:16:51.434708 mon.osdev01 [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)
  58. 2018-09-19 15:16:55.184869 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4336/16156 objects degraded (26.838%), 54 pgs degraded, 54 pgs undersized (PG_DEGRADED)
  59. 2018-09-19 15:16:56.238863 mon.osdev01 [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering)
  60. 2018-09-19 15:17:00.185629 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4318/16156 objects degraded (26.727%), 54 pgs degraded, 54 pgs undersized (PG_DEGRADED)
  61. 2018-09-19 15:17:05.186503 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4296/16156 objects degraded (26.591%), 54 pgs degraded, 54 pgs undersized (PG_DEGRADED)
  62. 2018-09-19 15:17:10.187331 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4283/16156 objects degraded (26.510%), 52 pgs degraded, 52 pgs undersized (PG_DEGRADED)
  63. 2018-09-19 15:17:15.188170 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4261/16156 objects degraded (26.374%), 52 pgs degraded, 52 pgs undersized (PG_DEGRADED)
  64. 2018-09-19 15:17:20.189922 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4243/16156 objects degraded (26.263%), 51 pgs degraded, 51 pgs undersized (PG_DEGRADED)
  65. 2018-09-19 15:17:25.190843 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4227/16156 objects degraded (26.164%), 51 pgs degraded, 51 pgs undersized (PG_DEGRADED)
  66. 2018-09-19 15:17:30.191813 mon.osdev01 [WRN] Health check update: Degraded data redundancy: 4205/16156 objects degraded (26.027%), 51 pgs degraded, 51 pgs undersized (PG_DEGRADED)
  67. 2018-09-19 15:17:32.348305 mon.osdev01 [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)
  68. ...
  69. $ watch -n1 ceph -s
  70. Every 1.0s: ceph -s Wed Sep 19 15:21:12 2018
  71. cluster:
  72. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  73. health: HEALTH_WARN
  74. Degraded data redundancy: 3372/16156 objects degraded (20.872%), 36 pgs degraded, 36 pgs undersized
  75. clock skew detected on mon.osdev02
  76. mon osdev01 is low on available space
  77. services:
  78. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  79. mgr: osdev03(active), standbys: osdev02, osdev01
  80. osd: 4 osds: 3 up, 3 in; 36 remapped pgs
  81. rgw: 3 daemons active
  82. data:
  83. pools: 10 pools, 176 pgs
  84. objects: 5.39 k objects, 19 GiB
  85. usage: 48 GiB used, 22 TiB / 22 TiB avail
  86. pgs: 3372/16156 objects degraded (20.872%)
  87. 140 active+clean
  88. 35 active+undersized+degraded+remapped+backfill_wait
  89. 1 active+undersized+degraded+remapped+backfilling
  90. io:
  91. recovery: 17 MiB/s, 4 objects/s

Deploy the MDS

  • Deploy the MDS service on the 3 nodes (a CephFS creation sketch follows):
$ ceph-deploy mds create osdev01 osdev02 osdev03
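  • The MDS daemons stay standby until a filesystem exists. A minimal sketch for creating one (the pool names cephfs_data / cephfs_metadata and the PG counts are illustrative choices, not from the original notes):
  1. $ ceph osd pool create cephfs_data 32
  2. $ ceph osd pool create cephfs_metadata 32
  3. $ ceph fs new cephfs cephfs_metadata cephfs_data
  4. $ ceph fs ls
  5. # one MDS should now become active, the others remain standby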

Deploy the RGW

  • Deploy the RGW service on the 3 nodes:
$ ceph-deploy rgw create osdev01 osdev02 osdev03
  • Check the cluster status (a quick RGW connectivity check is sketched after this output):
  1. $ sudo ceph -s
  2. cluster:
  3. id: 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  4. health: HEALTH_WARN
  5. too few PGs per OSD (22 < min 30)
  6. services:
  7. mon: 3 daemons, quorum osdev01,osdev02,osdev03
  8. mgr: osdev01(active), standbys: osdev03, osdev02
  9. osd: 3 osds: 3 up, 3 in
  10. rgw: 1 daemon active
  11. data:
  12. pools: 4 pools, 32 pgs
  13. objects: 16 objects, 3.2 KiB
  14. usage: 3.0 GiB used, 22 TiB / 22 TiB avail
  15. pgs: 31.250% pgs unknown
  16. 3.125% pgs not active
  17. 21 active+clean
  18. 10 unknown
  19. 1 creating+peering
  20. io:
  21. client: 2.4 KiB/s rd, 731 B/s wr, 3 op/s rd, 0 op/s wr
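  • As a basic sanity check (a sketch, not part of the original notes), the default civetweb frontend listens on port 7480, so an anonymous request to any RGW node should succeed:
  1. $ curl http://osdev01:7480
  2. # expect an XML ListAllMyBucketsResult document with an anonymous owner and no buckets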

Uninstall Ceph

  • Tear down the deployed Ceph cluster, including packages and configuration:
  1. # destroy and uninstall all packages
  2. $ ceph-deploy purge osdev01 osdev02 osdev03
  3. # destroy data
  4. $ ceph-deploy purgedata osdev01 osdev02 osdev03
  5. $ ceph-deploy forgetkeys
  6. # remove all keys
  7. $ rm -rfv ceph.*

Testing

Create a Pool

  • List the current pools; a few default pools created by the RGW gateway are already there:
  1. $ rados lspools
  2. .rgw.root
  3. default.rgw.control
  4. default.rgw.meta
  5. default.rgw.log
  6. $ rados -p .rgw.root ls
  7. zone_info.4741b9cf-cc27-43d8-9bbc-59eee875b4db
  8. zone_info.c775c6a6-036a-43ab-b558-ab0df40c3ad2
  9. zonegroup_info.df77b60a-8423-4570-b9ae-ae4ef06a13a2
  10. zone_info.0e5daa99-3863-4411-8d75-7d14a3f9a014
  11. zonegroup_info.f652f53f-94bb-4599-a1c1-737f792a9510
  12. zonegroup_info.5a4fb515-ef63-4ddc-85e0-5cf8339d9472
  13. zone_names.default
  14. zonegroups_names.default
  15. $ ceph osd pool get .rgw.root pg_num
  16. pg_num: 8
  17. $ ceph osd dump
  18. epoch 25
  19. fsid 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  20. created 2018-08-23 10:55:49.409542
  21. modified 2018-08-23 16:23:00.574710
  22. flags sortbitwise,recovery_deletes,purged_snapdirs
  23. crush_version 7
  24. full_ratio 0.95
  25. backfillfull_ratio 0.9
  26. nearfull_ratio 0.85
  27. require_min_compat_client jewel
  28. min_compat_client jewel
  29. require_osd_release mimic
  30. pool 1 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 17 flags hashpspool stripe_width 0 application rgw
  31. pool 2 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 flags hashpspool stripe_width 0 application rgw
  32. pool 3 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 22 flags hashpspool stripe_width 0 application rgw
  33. pool 4 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 24 flags hashpspool stripe_width 0 application rgw
  34. max_osd 3
  35. osd.0 up in weight 1 up_from 5 up_thru 23 down_at 0 last_clean_interval [0,0) 172.29.101.166:6801/719880 172.29.101.166:6802/719880 172.29.101.166:6803/719880 172.29.101.166:6804/719880 exists,up 2cb30e7c-7b98-4a6c-816a-2de7201a7669
  36. osd.1 up in weight 1 up_from 15 up_thru 23 down_at 14 last_clean_interval [9,14) 172.29.101.167:6800/189449 172.29.101.167:6804/1189449 172.29.101.167:6805/1189449 172.29.101.167:6806/1189449 exists,up 9d3bafa9-9ea0-401c-ad67-a08ef7c2d9f7
  37. osd.2 up in weight 1 up_from 13 up_thru 23 down_at 0 last_clean_interval [0,0) 172.29.101.168:6800/188591 172.29.101.168:6801/188591 172.29.101.168:6802/188591 172.29.101.168:6803/188591 exists,up a41fa4e0-c80b-4091-95cc-b58af291f387
  • Create a pool:
  1. $ ceph osd pool create glance 32 32
  2. pool 'glance' created
  • Try to delete the pool; it turns out deletion is not allowed by default:
  1. $ ceph osd pool delete glance
  2. Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool glance. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
  3. $ ceph osd pool delete glance glance --yes-i-really-really-mean-it
  4. Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
  • Allow pool deletion in the configuration:
  1. $ vi /etc/ceph/ceph.conf
  2. [mon]
  3. mon allow pool delete = true
  4. $ systemctl restart ceph-mon.target
  • Delete the pool again:
  1. $ ceph osd pool delete glance glance --yes-i-really-really-mean-it
  2. pool 'glance' removed

Create an Object

  • Create a test pool and set its replica count to 3:
  1. $ ceph osd pool create test-pool 128 128
  2. $ ceph osd lspools
  3. 1 .rgw.root
  4. 2 default.rgw.control
  5. 3 default.rgw.meta
  6. 4 default.rgw.log
  7. 5 test-pool
  8. $ ceph osd dump | grep pool
  9. pool 1 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 17 flags hashpspool stripe_width 0 application rgw
  10. pool 2 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 flags hashpspool stripe_width 0 application rgw
  11. pool 3 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 22 flags hashpspool stripe_width 0 application rgw
  12. pool 4 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 24 flags hashpspool stripe_width 0 application rgw
  13. pool 5 'test-pool' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 26 flags hashpspool stripe_width 0
  14. $ rados lspools
  15. .rgw.root
  16. default.rgw.control
  17. default.rgw.meta
  18. default.rgw.log
  19. test-pool
  20. # set replicated size
  21. $ ceph osd pool set test-pool size 3
  22. set pool 5 size to 3
  23. $ rados -p test-pool ls
  • Create a test file:
$ echo "He110 Ceph, You are Awesome 1ike MJ" > hello_ceph
  • Create an object:
$ rados -p test-pool put object1 hello_ceph
  • Look up the object's OSD map; it shows the object name, the PG it maps to, the OSDs involved, and their status (a read-back sketch follows this listing):
  1. $ ceph osd map test-pool object1
  2. osdmap e29 pool 'test-pool' (5) object 'object1' -> pg 5.bac5debc (5.3c) -> up ([0,1,2], p0) acting ([0,1,2], p0)
  3. $ rados -p test-pool ls
  4. object1
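  • To read the object back and clean it up (a minimal sketch, not in the original notes; /tmp/hello_ceph.out is an arbitrary output path):
  1. $ rados -p test-pool get object1 /tmp/hello_ceph.out
  2. $ cat /tmp/hello_ceph.out
  3. $ rados -p test-pool rm object1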

Create an RBD Image

  • Create an RBD pool:
  1. $ ceph osd pool create rbd 8 8
  2. $ rbd pool init rbd
  • Create an RBD image:
$ rbd create rbd_test --size 10240
  • Check how the RADOS pool changes; the new RBD image adds a few metadata objects (header, object map and id):
  1. $ rbd ls
  2. rbd_test
  3. $ rados -p rbd ls
  4. rbd_directory
  5. rbd_header.11856b8b4567
  6. rbd_info
  7. rbd_object_map.11856b8b4567
  8. rbd_id.rbd_test
  9. $ ceph osd dump | grep pool
  10. pool 1 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 17 flags hashpspool stripe_width 0 application rgw
  11. pool 2 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 flags hashpspool stripe_width 0 application rgw
  12. pool 3 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 22 flags hashpspool stripe_width 0 application rgw
  13. pool 4 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 24 flags hashpspool stripe_width 0 application rgw
  14. pool 5 'test-pool' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 29 flags hashpspool stripe_width 0
  15. pool 6 'rbd' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 35 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd

Map the RBD Image

  • Load the RBD kernel module:
  1. $ uname -r
  2. 3.10.0-862.11.6.el7.x86_64
  3. $ modprobe rbd
  4. $ lsmod | grep rbd
  5. rbd 83728 0
  6. libceph 301687 1 rbd
  • Map the RBD block device; because the kernel version is old, the mapping fails:
  1. $ rbd map rbd_test
  2. rbd: sysfs write failed
  3. RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_test object-map fast-diff deep-flatten".
  4. In some cases useful info is found in syslog - try "dmesg | tail".
  5. rbd: map failed: (6) No such device or address
  6. $ dmesg | tail
  7. [150078.190941] Key type dns_resolver registered
  8. [150078.231155] Key type ceph registered
  9. [150078.231538] libceph: loaded (mon/osd proto 15/24)
  10. [150078.239110] rbd: loaded
  11. [152620.392095] libceph: mon1 172.29.101.167:6789 session established
  12. [152620.392821] libceph: client4522 fsid 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  13. [152620.646943] rbd: image rbd_test: image uses unsupported features: 0x38
  14. [152648.322295] libceph: mon0 172.29.101.166:6789 session established
  15. [152648.322845] libceph: client4530 fsid 383237bd-becf-49d5-9bd6-deb0bc35ab2a
  16. [152648.357522] rbd: image rbd_test: image uses unsupported features: 0x38
  • Inspect the image's features:
  1. $ rbd info rbd_test
  2. rbd image 'rbd_test':
  3. size 10 GiB in 2560 objects
  4. order 22 (4 MiB objects)
  5. id: 11856b8b4567
  6. block_name_prefix: rbd_data.11856b8b4567
  7. format: 2
  8. features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
  9. op_features:
  10. flags:
  11. create_timestamp: Fri Aug 24 10:21:11 2018
  12. layering: supports layering (cloning from snapshots)
  13. striping: supports striping v2
  14. exclusive-lock: supports exclusive locking
  15. object-map: supports object maps (depends on exclusive-lock)
  16. fast-diff: fast diff calculation (depends on object-map)
  17. deep-flatten: supports flattening of images that have snapshots
  18. journaling: supports journaling of I/O operations (depends on exclusive-lock)
  • One fix is to change the default RBD features in the Ceph configuration (1 means layering only):
  1. $ vi /etc/ceph/ceph.conf
  2. rbd_default_features = 1
  3. $ ceph --show-config | grep rbd | grep features
  4. rbd_default_features = 1
  • Or specify the image format and features explicitly when creating the RBD:
$ rbd create rbd_test --size 10G --image-format 2 --image-feature layering
  • Or disable the features the kernel does not support on the existing image:
  1. $ rbd feature disable rbd_test object-map fast-diff deep-flatten
  2. $ rbd info rbd_test
  3. rbd image 'rbd_test':
  4. size 10 GiB in 2560 objects
  5. order 22 (4 MiB objects)
  6. id: 11856b8b4567
  7. block_name_prefix: rbd_data.11856b8b4567
  8. format: 2
  9. features: layering, exclusive-lock
  10. op_features:
  11. flags:
  12. create_timestamp: Fri Aug 24 10:21:11 2018
  • Map the RBD again:
  1. # rbd map rbd/rbd_test
  2. $ rbd map rbd_test
  3. /dev/rbd0
  4. $ rbd showmapped
  5. id pool image snap device
  6. 0 rbd rbd_test - /dev/rbd0
  7. $ lsblk | grep rbd0
  8. rbd0 252:0 0 10.2G 0 disk
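  • To make the mapping persistent across reboots, the rbdmap service shipped with Ceph can be used; a sketch assuming the default admin keyring path:
$ cat >> /etc/ceph/rbdmap << EOF
rbd/rbd_test id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
EOF
$ systemctl enable rbdmap.service && systemctl start rbdmap.service
$ rbd showmapped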

Using the RBD

  • Create a filesystem on it:
  1. $ mkfs.xfs /dev/rbd0
  2. meta-data=/dev/rbd0 isize=512 agcount=16, agsize=167936 blks
  3. = sectsz=512 attr=2, projid32bit=1
  4. = crc=1 finobt=0, sparse=0
  5. data = bsize=4096 blocks=2682880, imaxpct=25
  6. = sunit=1024 swidth=1024 blks
  7. naming =version 2 bsize=4096 ascii-ci=0 ftype=1
  8. log =internal log bsize=4096 blocks=2560, version=2
  9. = sectsz=512 sunit=8 blks, lazy-count=1
  10. realtime =none extsz=4096 blocks=0, rtextents=0
  • Mount the RBD and write some data:
  1. $ mkdir -pv /mnt/rbd_test
  2. mkdir: created directory '/mnt/rbd_test'
  3. $ mount /dev/rbd0 /mnt/rbd_test
  4. $ dd if=/dev/zero of=/mnt/rbd_test/file1 count=100 bs=1M
  • Check RADOS again; a single RBD image is split into many small objects:
  1. $ ll -h /mnt/rbd_test/
  2. total 100M
  3. -rw-r--r-- 1 root root 100M Aug 24 11:35 file1
  4. $ rados -p rbd ls | grep 1185
  5. rbd_data.11856b8b4567.0000000000000003
  6. rbd_data.11856b8b4567.00000000000003d8
  7. rbd_data.11856b8b4567.0000000000000d74
  8. rbd_data.11856b8b4567.0000000000001294
  9. rbd_data.11856b8b4567.0000000000000522
  10. rbd_data.11856b8b4567.0000000000000007
  11. rbd_data.11856b8b4567.0000000000001338
  12. rbd_data.11856b8b4567.0000000000000018
  13. rbd_data.11856b8b4567.000000000000000d
  14. rbd_data.11856b8b4567.0000000000000148
  15. rbd_data.11856b8b4567.00000000000000a4
  16. rbd_data.11856b8b4567.00000000000013dc
  17. rbd_data.11856b8b4567.0000000000000013
  18. rbd_header.11856b8b4567
  19. rbd_data.11856b8b4567.0000000000000000
  20. rbd_data.11856b8b4567.0000000000000a40
  21. rbd_data.11856b8b4567.000000000000114c
  22. rbd_data.11856b8b4567.0000000000000008
  23. rbd_data.11856b8b4567.0000000000000b88
  24. rbd_data.11856b8b4567.0000000000000009
  25. rbd_data.11856b8b4567.0000000000000521
  26. rbd_data.11856b8b4567.0000000000000010
  27. rbd_data.11856b8b4567.00000000000008f8
  28. rbd_data.11856b8b4567.0000000000000012
  29. rbd_data.11856b8b4567.0000000000000016
  30. rbd_data.11856b8b4567.0000000000000014
  31. rbd_data.11856b8b4567.000000000000001a
  32. rbd_data.11856b8b4567.0000000000000854
  33. rbd_data.11856b8b4567.000000000000000c
  34. rbd_data.11856b8b4567.0000000000000ae4
  35. rbd_data.11856b8b4567.000000000000047c
  36. rbd_data.11856b8b4567.0000000000000005
  37. rbd_data.11856b8b4567.0000000000000e18
  38. rbd_data.11856b8b4567.000000000000000f
  39. rbd_data.11856b8b4567.0000000000000cd0
  40. rbd_data.11856b8b4567.00000000000001ec
  41. rbd_data.11856b8b4567.0000000000000017
  42. rbd_data.11856b8b4567.0000000000000a3b
  43. rbd_data.11856b8b4567.0000000000000011
  44. rbd_data.11856b8b4567.000000000000070c
  45. rbd_data.11856b8b4567.0000000000000520
  46. rbd_data.11856b8b4567.00000000000010a8
  47. rbd_data.11856b8b4567.0000000000000015
  48. rbd_data.11856b8b4567.0000000000000004
  49. rbd_data.11856b8b4567.000000000000099c
  50. rbd_data.11856b8b4567.0000000000000001
  51. rbd_data.11856b8b4567.000000000000000b
  52. rbd_data.11856b8b4567.0000000000000c2c
  53. rbd_data.11856b8b4567.0000000000000334
  54. rbd_data.11856b8b4567.00000000000005c4
  55. rbd_data.11856b8b4567.000000000000000a
  56. rbd_data.11856b8b4567.0000000000000006
  57. rbd_data.11856b8b4567.0000000000000668
  58. rbd_data.11856b8b4567.0000000000001004
  59. rbd_data.11856b8b4567.0000000000000019
  60. rbd_data.11856b8b4567.00000000000011f0
  61. rbd_data.11856b8b4567.000000000000000e
  62. rbd_data.11856b8b4567.0000000000000f60
  63. rbd_data.11856b8b4567.00000000000007b0
  64. rbd_data.11856b8b4567.0000000000000290
  65. rbd_data.11856b8b4567.0000000000000ebc
  66. rbd_data.11856b8b4567.0000000000000002
  67. $ rados -p rbd ls | grep 1185 | wc -l
  68. 62
  • Write more data and check again; as more data is written, more objects appear:
  1. $ dd if=/dev/zero of=/mnt/rbd_test/file1 count=200 bs=1M
  2. 200+0 records in
  3. 200+0 records out
  4. 209715200 bytes (210 MB) copied, 0.441176 s, 475 MB/s
  5. $ rados -p rbd ls | grep 1185 | wc -l
  6. 87
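  • Cluster-side usage can be checked as well; a sketch using standard commands (rbd du has to scan the whole image here because fast-diff was disabled above, so it is slow but still works):
$ ceph df          # per-pool usage; the rbd pool grows as file1 grows
$ rbd du rbd_test  # provisioned vs. actually used size of the image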

Resizing the RBD

  • Grow the RBD image:
  1. $ rbd resize rbd_test --size 20480
  2. Resizing image: 100% complete...done.
  • Grow the filesystem to match:
  1. $ xfs_growfs -d /mnt/rbd_test/
  2. meta-data=/dev/rbd0 isize=512 agcount=16, agsize=167936 blks
  3. = sectsz=512 attr=2, projid32bit=1
  4. = crc=1 finobt=0 spinodes=0
  5. data = bsize=4096 blocks=2682880, imaxpct=25
  6. = sunit=1024 swidth=1024 blks
  7. naming =version 2 bsize=4096 ascii-ci=0 ftype=1
  8. log =internal bsize=4096 blocks=2560, version=2
  9. = sectsz=512 sunit=8 blks, lazy-count=1
  10. realtime =none extsz=4096 blocks=0, rtextents=0
  11. data blocks changed from 2682880 to 5242880
  • Check the RBD and the mount:
  1. $ rbd info rbd_test
  2. rbd image 'rbd_test':
  3. size 20 GiB in 5120 objects
  4. order 22 (4 MiB objects)
  5. id: 11856b8b4567
  6. block_name_prefix: rbd_data.11856b8b4567
  7. format: 2
  8. features: layering, exclusive-lock
  9. op_features:
  10. flags:
  11. create_timestamp: Fri Aug 24 10:21:11 2018
  12. $ lsblk | grep rbd0
  13. rbd0 252:0 0 20G 0 disk /mnt/rbd_test
  14. $ df -h | grep rbd
  15. /dev/rbd0 20G 234M 20G 2% /mnt/rbd_test
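  • Shrinking is also possible with --allow-shrink, but XFS cannot be shrunk, so this is only safe on an image whose data beyond the new size is disposable; a hedged sketch:
$ rbd resize rbd_test --size 10240 --allow-shrink   # DANGER: anything past 10 GiB is discarded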

Snapshotting the RBD

  • Create a test file:
  1. $ echo "Hello Ceph This is snapshot test" > /mnt/rbd_test/file2
  2. $ ls -lh /mnt/rbd_test/
  3. total 201M
  4. -rw-r--r-- 1 root root 200M Aug 24 15:46 file1
  5. -rw-r--r-- 1 root root 33 Aug 24 15:51 file2
  6. $ cat /mnt/rbd_test/file2
  7. Hello Ceph This is snapshot test
  • Create an RBD snapshot:
  1. $ rbd snap create rbd_test@snap1
  2. $ rbd snap ls rbd_test
  3. SNAPID NAME SIZE TIMESTAMP
  4. 4 snap1 20 GiB Fri Aug 24 15:52:49 2018
  • Delete the file:
  1. $ rm -rfv /mnt/rbd_test/file2
  2. removed '/mnt/rbd_test/file2'
  3. $ ls -lh /mnt/rbd_test/
  4. total 200M
  5. -rw-r--r-- 1 root root 200M Aug 24 15:46 file1
  • Unmount the filesystem and unmap the RBD:
  1. $ umount /mnt/rbd_test
  2. $ rbd unmap rbd_test
  • Roll back the RBD to the snapshot:
  1. $ rbd snap rollback rbd_test@snap1
  2. Rolling back to snapshot: 100% complete...done.
  • Map and mount the RBD again, then check the files; the deleted file2 is back:
  1. $ rbd map rbd_test
  2. /dev/rbd0
  3. $ mount /dev/rbd0 /mnt/rbd_test
  4. $ ls -lh /mnt/rbd_test/
  5. total 201M
  6. -rw-r--r-- 1 root root 200M Aug 24 15:46 file1
  7. -rw-r--r-- 1 root root 33 Aug 24 15:51 file2
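  • Because the image keeps the layering feature, a snapshot can also be protected and cloned; a short sketch (the clone name rbd_clone is arbitrary):
$ rbd snap protect rbd_test@snap1
$ rbd clone rbd_test@snap1 rbd/rbd_clone
$ rbd ls
$ rbd flatten rbd/rbd_clone            # detach the clone from its parent snapshot
$ rbd snap unprotect rbd_test@snap1
$ rbd snap rm rbd_test@snap1           # or remove all snapshots with: rbd snap purge rbd_test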

Observing PGs

  • Look up the OSD maps of a few objects in the rbd pool. The OSD ordering differs between PGs, while every object in the same pool shares the same number before the dot in its PG ID (the pool id):
  1. $ ceph osd map rbd rbd_info
  2. osdmap e74 pool 'rbd' (6) object 'rbd_info' -> pg 6.ac0e573a (6.2) -> up ([1,0,2], p1) acting ([1,0,2], p1)
  3. $ ceph osd map rbd rbd_directory
  4. osdmap e74 pool 'rbd' (6) object 'rbd_directory' -> pg 6.30a98c1c (6.4) -> up ([0,1,2], p0) acting ([0,1,2], p0)
  5. $ ceph osd map rbd rbd_id.rbd_test
  6. osdmap e74 pool 'rbd' (6) object 'rbd_id.rbd_test' -> pg 6.818788b3 (6.3) -> up ([1,2,0], p1) acting ([1,2,0], p1)
  7. $ ceph osd map rbd rbd_data.11856b8b4567.0000000000000022
  8. osdmap e74 pool 'rbd' (6) object 'rbd_data.11856b8b4567.0000000000000022' -> pg 6.deee7c73 (6.3) -> up ([1,2,0], p1) acting ([1,2,0], p1)
  9. $ ceph osd map rbd rbd_data.11856b8b4567.000000000000000a
  10. osdmap e74 pool 'rbd' (6) object 'rbd_data.11856b8b4567.000000000000000a' -> pg 6.561c344b (6.3) -> up ([1,2,0], p1) acting ([1,2,0], p1)
  11. $ ceph osd map rbd rbd_data.11856b8b4567.00000000000007b0
  12. osdmap e74 pool 'rbd' (6) object 'rbd_data.11856b8b4567.00000000000007b0' -> pg 6.a603e1f (6.7) -> up ([1,0,2], p1) acting ([1,0,2], p1)
  • Create a two-replica pool; different PGs in the same pool can be served by different sets of OSDs:
  1. $ ceph osd pool create pg_test 8 8
  2. pool 'pg_test' created
  3. $ ceph osd dump | grep pg_test
  4. pool 12 'pg_test' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 75 flags hashpspool stripe_width 0
  5. $ ceph osd pool set pg_test size 2
  6. set pool 12 size to 2
  7. $ ceph osd dump | grep pg_test
  8. pool 12 'pg_test' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 78 flags hashpspool stripe_width 0
  9. $ rados -p pg_test put object1 /etc/hosts
  10. $ rados -p pg_test put object2 /etc/hosts
  11. $ rados -p pg_test put object3 /etc/hosts
  12. $ rados -p pg_test put object4 /etc/hosts
  13. $ rados -p pg_test put object5 /etc/hosts
  14. $ rados -p pg_test ls
  15. object1
  16. object2
  17. object3
  18. object4
  19. object5
  20. $ ceph osd map pg_test object1
  21. osdmap e79 pool 'pg_test' (12) object 'object1' -> pg 12.bac5debc (12.4) -> up ([2,0], p2) acting ([2,0], p2)
  22. $ ceph osd map pg_test object2
  23. osdmap e79 pool 'pg_test' (12) object 'object2' -> pg 12.f85a416a (12.2) -> up ([2,0], p2) acting ([2,0], p2)
  24. $ ceph osd map pg_test object3
  25. osdmap e79 pool 'pg_test' (12) object 'object3' -> pg 12.f877ac20 (12.0) -> up ([1,0], p1) acting ([1,0], p1)
  26. $ ceph osd map pg_test object4
  27. osdmap e79 pool 'pg_test' (12) object 'object4' -> pg 12.9d9216ab (12.3) -> up ([2,1], p2) acting ([2,1], p2)
  28. $ ceph osd map pg_test object5
  29. osdmap e79 pool 'pg_test' (12) object 'object5' -> pg 12.e1acd6d (12.5) -> up ([1,2], p1) acting ([1,2], p1)
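  • The same placement can be examined from the PG side; a sketch (pool id 12 matches the pg_test pool created above):
$ ceph pg ls-by-pool pg_test           # every PG in the pool with its acting set
$ ceph pg dump pgs_brief | grep '^12\.'
$ ceph pg 12.4 query | head -n 20      # detailed state of the PG that holds object1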

Performance Testing

  • Write benchmark; --no-cleanup keeps the objects so the read benchmarks below have something to read (a cleanup sketch follows at the end of this section):
  1. $ rados bench -p test-pool 10 write --no-cleanup
  2. hints = 1
  3. Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
  4. Object prefix: benchmark_data_osdev01_1827771
  5. sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
  6. 0 0 0 0 0 0 - 0
  7. 1 16 31 15 59.8716 60 0.388146 0.666288
  8. 2 16 49 33 65.9176 72 0.62486 0.824162
  9. 3 16 65 49 65.2595 64 1.18038 0.834558
  10. 4 16 86 70 69.8978 84 0.657194 0.834779
  11. 5 16 107 91 72.7115 84 0.594541 0.829814
  12. 6 16 125 109 72.5838 72 0.371435 0.796664
  13. 7 16 149 133 75.8989 96 1.17764 0.803259
  14. 8 16 165 149 74.4101 64 0.568129 0.797091
  15. 9 16 185 169 75.01 80 0.813372 0.81463
  16. 10 16 203 187 74.7085 72 0.728715 0.812529
  17. Total time run: 10.3161
  18. Total writes made: 203
  19. Write size: 4194304
  20. Object size: 4194304
  21. Bandwidth (MB/sec): 78.7122
  22. Stddev Bandwidth: 11.1634
  23. Max bandwidth (MB/sec): 96
  24. Min bandwidth (MB/sec): 60
  25. Average IOPS: 19
  26. Stddev IOPS: 2
  27. Max IOPS: 24
  28. Min IOPS: 15
  29. Average Latency(s): 0.80954
  30. Stddev Latency(s): 0.293645
  31. Max latency(s): 1.77366
  32. Min latency(s): 0.240024
  • Sequential read benchmark:
  1. $ rados bench -p test-pool 10 seq
  2. hints = 1
  3. sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
  4. 0 0 0 0 0 0 - 0
  5. 1 16 72 56 223.808 224 0.0519066 0.217292
  6. 2 16 111 95 189.736 156 0.658876 0.289657
  7. 3 16 160 144 191.663 196 0.0658452 0.301259
  8. 4 16 203 187 186.745 172 0.210803 0.297584
  9. Total time run: 4.43386
  10. Total reads made: 203
  11. Read size: 4194304
  12. Object size: 4194304
  13. Bandwidth (MB/sec): 183.136
  14. Average IOPS: 45
  15. Stddev IOPS: 7
  16. Max IOPS: 56
  17. Min IOPS: 39
  18. Average Latency(s): 0.346754
  19. Max latency(s): 1.37891
  20. Min latency(s): 0.0249563
  • Random read benchmark:
  1. $ rados bench -p test-pool 10 rand
  2. hints = 1
  3. sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
  4. 0 0 0 0 0 0 - 0
  5. 1 16 59 43 171.94 172 0.271225 0.222279
  6. 2 16 108 92 183.95 196 1.06429 0.275433
  7. 3 16 153 137 182.618 180 0.00350975 0.304582
  8. 4 16 224 208 207.951 284 0.0678476 0.278888
  9. 5 16 267 251 200.757 172 0.00386545 0.289519
  10. 6 16 319 303 201.955 208 0.866646 0.294983
  11. 7 16 360 344 196.529 164 0.00428517 0.30615
  12. 8 16 405 389 194.458 180 0.903073 0.311316
  13. 9 16 455 439 195.071 200 0.00368576 0.316057
  14. 10 16 517 501 200.36 248 0.621325 0.309242
  15. Total time run: 10.5614
  16. Total reads made: 518
  17. Read size: 4194304
  18. Object size: 4194304
  19. Bandwidth (MB/sec): 196.187
  20. Average IOPS: 49
  21. Stddev IOPS: 9
  22. Max IOPS: 71
  23. Min IOPS: 41
  24. Average Latency(s): 0.321834
  25. Max latency(s): 1.16304
  26. Min latency(s): 0.0026629
  • Benchmark with fio:
  1. $ yum install -y fio "*librbd*"
  2. $ rbd create fio_test --size 20480
  3. $ vi write.fio
  4. [global]
  5. description="write test with block size of 4M"
  6. ioengine=rbd
  7. clustername=ceph
  8. clientname=admin
  9. pool=rbd
  10. rbdname=fio_test
  11. iodepth=32
  12. runtime=120
  13. rw=write
  14. bs=4M
  15. [logging]
  16. write_iops_log=write_iops_log
  17. write_bw_log=write_bw_log
  18. write_lat_log=write_lat_log
  19. $ fio write.fio
  20. logging: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=rbd, iodepth=32
  21. fio-3.1
  22. Starting 1 process
  23. Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]
  24. logging: (groupid=0, jobs=1): err= 0: pid=161962: Wed Aug 29 19:17:17 2018
  25. Description : ["write test with block size of 4M"]
  26. write: IOPS=15, BW=60.4MiB/s (63.3MB/s)(7252MiB/120085msec)
  27. slat (usec): min=665, max=14535, avg=1584.29, stdev=860.28
  28. clat (msec): min=1828, max=4353, avg=2092.28, stdev=180.12
  29. lat (msec): min=1829, max=4354, avg=2093.87, stdev=180.15
  30. clat percentiles (msec):
  31. | 1.00th=[ 1838], 5.00th=[ 1938], 10.00th=[ 1989], 20.00th=[ 2022],
  32. | 30.00th=[ 2039], 40.00th=[ 2056], 50.00th=[ 2072], 60.00th=[ 2106],
  33. | 70.00th=[ 2123], 80.00th=[ 2165], 90.00th=[ 2198], 95.00th=[ 2232],
  34. | 99.00th=[ 2333], 99.50th=[ 3977], 99.90th=[ 4111], 99.95th=[ 4329],
  35. | 99.99th=[ 4329]
  36. bw ( KiB/s): min= 963, max= 2294, per=3.26%, avg=2013.72, stdev=117.50, samples=1813
  37. iops : min= 1, max= 1, avg= 1.00, stdev= 0.00, samples=1813
  38. lat (msec) : 2000=13.40%, >=2000=86.60%
  39. cpu : usr=1.94%, sys=0.40%, ctx=157, majf=0, minf=157364
  40. IO depths : 1=2.3%, 2=6.0%, 4=12.6%, 8=25.2%, 16=50.3%, 32=3.6%, >=64=0.0%
  41. submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  42. complete : 0=0.0%, 4=97.0%, 8=0.0%, 16=0.0%, 32=3.0%, 64=0.0%, >=64=0.0%
  43. issued rwt: total=0,1813,0, short=0,0,0, dropped=0,0,0
  44. latency : target=0, window=0, percentile=100.00%, depth=32
  45. Run status group 0 (all jobs):
  46. WRITE: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=7252MiB (7604MB), run=120085-120085msec
  47. Disk stats (read/write):
  48. sda: ios=5/653, merge=0/6, ticks=6/2818, in_queue=2824, util=0.17%
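  • Because the write benchmark ran with --no-cleanup, its benchmark_data_* objects remain in test-pool and should be removed afterwards; rbd bench can also exercise the image directly. A sketch:
$ rados -p test-pool cleanup                                      # removes the benchmark objects
$ rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 1G fio_test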

References

  1. INSTALLATION (CEPH-DEPLOY)

Reprinted from: https://my.oschina.net/LastRitter/blog/2250877
