ceph (luminous)
Journal disk failure
Resolving a journal disk failure
For the environment setup, see the manual ceph deployment guide (luminous).
The ceph environment is as follows:
[root@cephsvr-128040 ~]# ceph -s
  cluster:
    id:     c45b752d-5d4d-4d3a-a3b2-04e73eff4ccd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephsvr-128040,cephsvr-128214,cephsvr-128215
    mgr: openstack(active)
    osd: 36 osds: 36 up, 36 in

  data:
    pools:   1 pools, 2048 pgs
    objects: 7 objects, 725 bytes
    usage:   50607 MB used, 196 TB / 196 TB avail
    pgs:     2048 active+clean
The osd tree for reference:
[root@cephsvr-128040 ~]# ceph osd tree
ID  CLASS WEIGHT    TYPE NAME                   STATUS REWEIGHT PRI-AFF
 -1       216.00000 root default
-10        72.00000     rack racka07
 -3        72.00000         host cephsvr-128214
 12   hdd   6.00000             osd.12              up  1.00000 1.00000
 13   hdd   6.00000             osd.13              up  1.00000 1.00000
 14   hdd   6.00000             osd.14              up  1.00000 1.00000
 15   hdd   6.00000             osd.15              up  1.00000 1.00000
 16   hdd   6.00000             osd.16              up  1.00000 1.00000
 17   hdd   6.00000             osd.17              up  1.00000 1.00000
 18   hdd   6.00000             osd.18              up  1.00000 1.00000
 19   hdd   6.00000             osd.19              up  1.00000 1.00000
 20   hdd   6.00000             osd.20              up  1.00000 1.00000
 21   hdd   6.00000             osd.21              up  1.00000 1.00000
 22   hdd   6.00000             osd.22              up  1.00000 1.00000
 23   hdd   6.00000             osd.23              up  1.00000 1.00000
 -9        72.00000     rack racka12
 -2        72.00000         host cephsvr-128040
  0   hdd   6.00000             osd.0               up  1.00000 0.50000
  1   hdd   6.00000             osd.1               up  1.00000 1.00000
  2   hdd   6.00000             osd.2               up  1.00000 1.00000
  3   hdd   6.00000             osd.3               up  1.00000 1.00000
  4   hdd   6.00000             osd.4               up  1.00000 1.00000
  5   hdd   6.00000             osd.5               up  1.00000 1.00000
  6   hdd   6.00000             osd.6               up  1.00000 1.00000
  7   hdd   6.00000             osd.7