
Connecting OpenStack to a Ceph Cluster: Hands-On, Easily Scalable Distributed Storage (by 黑夜青儿)

Is rbd_secret_uuid the UUID of the controller node or of the compute node?

Overview

libvirt exposes Ceph RBD to QEMU through librbd, which is what lets OpenStack use Ceph block storage. Ceph block devices are striped over objects across the cluster, so they perform better than a single standalone server.
To use Ceph block devices in OpenStack, QEMU, libvirt, and OpenStack must already be installed. The following diagram shows how OpenStack and Ceph are layered:

(Figure: the OpenStack / Ceph technology stack)
Ceph can back three OpenStack services, i.e. the three kinds of storage it provides to OpenStack:
1. The image service (Glance)
2. The block storage service (Cinder)
3. Instance (ephemeral) disk storage for VMs (Nova)

Prerequisites for connecting OpenStack to Ceph

1. The OpenStack cluster is already deployed.
Detailed deployment walkthrough: https://blog.csdn.net/weixin_41711331/article/details/83992040

2. The Ceph cluster is already deployed.
Detailed deployment walkthrough: https://blog.csdn.net/weixin_41711331/article/details/84023279

Starting the integration

1. Using Ceph as the OpenStack storage backend

1.1. Ceph-side configuration

Create the pools on the Ceph cluster:
# ceph osd pool create volumes 64
# ceph osd pool create images 64
# ceph osd pool create vms 64
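As a quick optional sanity check (not part of the original steps), list the pools from the Ceph admin node; on Luminous or newer you can also tag each pool for RBD use:
# ceph osd lspools
# rbd pool init volumes      # Luminous and later only; skip on older releases
# rbd pool init images
# rbd pool init vms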

On the glance-api node (controller node):
# yum install python-rbd -y

On the nova-compute and cinder-volume nodes (compute nodes):
# yum install ceph-common -y
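To confirm the client packages are usable before going further (a minimal check, assuming the default system python on CentOS 7):
# python -c "import rbd, rados"      # on the glance-api node; exits silently if the bindings are installed
# ceph --version                     # on the nova-compute / cinder-volume nodes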

1.2. Setting up Ceph client authentication for OpenStack

Run on the Ceph cluster (storage side), pushing ceph.conf to every OpenStack node:
[root@ceph]# ssh 172.26.128.126 tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
[root@ceph]# ssh 172.26.128.166 tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
[root@ceph]# ssh 172.26.128.167 tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
[root@ceph]# ssh 172.26.128.168 tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
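The four copies above are identical apart from the target address, so a small loop does the same job (just a convenience sketch, assuming the same node list and passwordless SSH as above):
for node in 172.26.128.126 172.26.128.166 172.26.128.167 172.26.128.168; do
    ssh $node tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
done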
Create the Ceph users and grant them capabilities:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
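Note that the backups pool referenced by client.cinder-backup was not created above; it is only needed if you actually run cinder-backup. To review what was granted, you can read the capabilities back on the Ceph node:
ceph auth get client.glance
ceph auth get client.cinder
ceph auth get client.cinder-backup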

Copy the keyrings for client.glance, client.cinder, and client.cinder-backup to the appropriate nodes and change their ownership:
ceph auth get-or-create client.glance | ssh {your-glance-api-server} tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
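An easy way to verify a copied keyring from the OpenStack side is to query the cluster with that identity; this relies on the ceph.conf distributed earlier and on the default keyring path /etc/ceph/ceph.client.<name>.keyring:
ceph -s --id glance      # on the glance-api node
ceph -s --id cinder      # on the cinder-volume node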

Nodes running nova-compute also need the client.cinder keyring for the nova-compute process.
(172.26.128.167 is skipped here because cinder-volume and nova-compute share that machine, so it already received the keyring in the previous step.)
ceph auth get-or-create client.cinder | ssh 172.26.128.166 tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh 172.26.128.168 tee /etc/ceph/ceph.client.cinder.keyring

On every compute node, add the client.cinder key to libvirt so QEMU can access Ceph. First copy the raw key to the node:

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

Then define a libvirt secret for it; run this on each compute node (each node may generate a different UUID):

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created

sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
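To confirm libvirt stored the secret on a compute node, and to read back the UUID that belongs to that node (this is the value that goes into rbd_secret_uuid later):
virsh secret-list
virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337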

1.3. Glance (image service) integration

On the node running the Glance service, edit /etc/glance/glance-api.conf:
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart the image service:
systemctl restart openstack-glance-api

Upload an image and check whether it shows up in Ceph; a raw-format image works best with RBD:
openstack image create "centos7.4-ceph" --file centos7.4-cloud.raw --disk-format raw --container-format bare --public
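To confirm the image really landed in the images pool, list it through RBD with the glance identity on the glance-api node (the image appears under its Glance image ID; <image-id> is a placeholder):
rbd ls images --id glance
rbd info images/<image-id> --id glance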

1.4. Nova integration

On each compute node, edit /etc/nova/nova.conf:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 192ff8f8-2e80-4b5f-abcf-9792ccc5a91f    # the libvirt secret UUID created on this compute node
disk_cachemodes = "network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap

Restart the compute service on each compute node:
systemctl restart openstack-nova-compute.service
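This also answers the question posed at the top of the post: rbd_secret_uuid is the UUID of the libvirt secret defined on each compute node (read it back with virsh secret-list on that node), not a value taken from the controller. As an end-to-end check, boot an instance and look for its disk in the vms pool; the flavor and network ID below are placeholders:
openstack server create --flavor m1.small --image centos7.4-ceph --nic net-id=<net-id> test-vm
rbd ls vms --id cinder      # the instance disk should appear as <instance-uuid>_disk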

1.5. Cinder configuration

On each storage node (cinder-volume), edit /etc/cinder/cinder.conf. In [DEFAULT]:
[DEFAULT]
enabled_backends = ceph

Then add a [ceph] section. It does not exist by default and must be added!
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 192ff8f8-2e80-4b5f-abcf-9792ccc5a91f
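The original post stops here, but cinder-volume has to be restarted before the new backend is used, and creating a throw-away volume is a quick end-to-end check (the service name assumes a CentOS/RDO-style installation):
systemctl restart openstack-cinder-volume.service
openstack volume create --size 1 test-vol
rbd ls volumes --id cinder      # the new volume should appear as volume-<volume-uuid>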
