
Deploying OpenStack on CentOS 7

Introduction

OpenStack is an open-source cloud computing management platform made up of a collection of open-source software projects. It was initiated and jointly developed by NASA (the US National Aeronautics and Space Administration) and Rackspace, and is released under the Apache License.
OpenStack provides scalable, elastic cloud computing services for private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standards-based.
OpenStack covers networking, virtualization, operating systems, servers, and more. It is an actively developed platform whose projects are grouped, according to maturity and importance, into core projects, incubated projects, supporting projects, and related projects. Each project has its own committee and project technical lead, and no project is fixed in place: an incubated project can be promoted to a core project as it matures and gains importance.
Core components
1. Compute (Nova): a set of controllers that manage the entire lifecycle of virtual machine instances for individual users or groups, providing virtual services on demand. It handles creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying instances, and configures specifications such as CPU and memory.
2. Object Storage (Swift): a system for object storage in large, scalable deployments, with built-in redundancy and fault tolerance. It stores and retrieves files, can serve as the image store for Glance, and provides volume backups for Cinder.
3. Image Service (Glance): a lookup and retrieval system for virtual machine images. It supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and provides functions for creating and uploading images, deleting images, and editing basic image metadata.
4. Identity Service (Keystone): provides authentication, service rules, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles.
5. Networking (Neutron): provides network virtualization for the cloud and network connectivity for the other OpenStack services. It gives users an interface to define Networks, Subnets, and Routers and to configure DHCP, DNS, load balancing, and L3 services. Networks support GRE and VLAN, and the plugin architecture supports many mainstream network vendors and technologies, such as Open vSwitch.
6. Block Storage (Cinder): provides persistent block storage for running instances. Its plugin driver architecture simplifies creating and managing block devices, such as creating and deleting volumes and attaching and detaching volumes from instances.
7. Dashboard (Horizon): the web management portal for the various OpenStack services, which simplifies operations such as launching instances, assigning IP addresses, and configuring access control.
8. Metering (Ceilometer): collects almost every event that occurs inside OpenStack and provides the data needed for billing, monitoring, and other services.
9. Orchestration (Heat): provides template-driven orchestration so that the software runtime environment of the cloud infrastructure (compute, storage, and network resources) can be deployed automatically.
10. Database Service (Trove): provides scalable and reliable relational and non-relational database engine services in the OpenStack environment.

Prerequisites

Prepare three CentOS 7 virtual machines. Two of them have two network interfaces (NAT and host-only) and two of them have extra disks. On every node, configure the IP address and hostname, synchronize the system time, disable the firewall and SELinux, and add IP-to-hostname mappings; a minimal example of this preparation follows the table below.

hostname: network interfaces
controller: ens33 (NAT) 192.168.29.145, ens37 (host-only) 192.168.31.135
computer: ens33 (NAT) 192.168.29.146, ens37 (host-only) 192.168.31.136
storager: ens33 (NAT) 192.168.29.147
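
A minimal sketch of this preparation on the controller node (repeat on computer and storager with the matching hostname and addresses from the table above; configuring the interfaces themselves is assumed to be done already):

[root@controller ~]# hostnamectl set-hostname controller
[root@controller ~]# systemctl disable --now firewalld
[root@controller ~]# setenforce 0
[root@controller ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#hostname mappings used throughout this article
[root@controller ~]# cat >> /etc/hosts <<EOF
192.168.29.145 controller
192.168.29.146 computer
192.168.29.147 storager
EOF
#time synchronization
[root@controller ~]# yum install -y chrony && systemctl enable --now chronyd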

Installing the services

Install the EPEL repository

[root@controller ~]# yum install epel-release -y
[root@computer ~]# yum install epel-release -y

Install the OpenStack repository

[root@controller ~]# yum install -y centos-release-openstack-queens

[root@computer ~]# yum install -y centos-release-openstack-queens

[root@storager ~]# yum install -y centos-release-openstack-queens

Install the OpenStack client and openstack-selinux

[root@controller ~]# yum install python-openstackclient openstack-selinux -y

[root@computer ~]# yum install python-openstackclient openstack-selinux -y

[root@storager ~]# yum install python-openstackclient openstack-selinux -y

Install the MySQL database and memcached

[root@controller ~]# yum install mysql-server mysql memcached python2-PyMySQL -y

Install the message queue service

[root@controller ~]# yum install -y rabbitmq-server

Install the Keystone packages

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y

Install the Glance package

[root@controller ~]# yum install openstack-glance -y

Install the Nova packages on controller

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

Install the Nova packages on computer

[root@computer ~]# yum install openstack-nova-compute qemu* libvirt* -y

Install the Neutron packages on controller

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge openstack-neutron-openvswitch ebtables iproute -y

Install the Neutron packages on computer

[root@computer ~]#  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge openstack-neutron-openvswitch ebtables iproute -y

Install the Dashboard package

[root@controller ~]# yum install openstack-dashboard -y

Install the Swift packages on controller

[root@controller ~]# yum install openstack-swift openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware

Install the Swift packages on computer and storager

[root@computer ~]# yum install openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object -y
[root@storager ~]# yum install openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object -y

Install the Cinder package on controller

[root@controller ~]# yum install openstack-cinder -y

Install the Cinder and LVM packages on storager

[root@storager ~]# yum install lvm2 openstack-cinder targetcli python-keystone -y

Enable hardware acceleration

[root@controller ~]# modprobe kvm-intel
[root@computer ~]# modprobe kvm-intel
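
Before relying on KVM it is worth checking that hardware virtualization is really available; a quick sanity check (assuming Intel CPUs, matching the kvm-intel module above; AMD hosts use kvm-amd instead):

[root@computer ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   #0 means no hardware virtualization, so fall back to virt_type = qemu later
[root@computer ~]# lsmod | grep kvm                     #kvm_intel and kvm should both be loaded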

Install dependencies

[root@controller ~]# yum -y install libibverbs
[root@computer ~]# yum -y install libibverbs
[root@storager ~]# yum -y install libibverbs

Configuring the message queue service

Start the service

[root@controller ~]# systemctl enable --now rabbitmq-server.service 

Add a user

[root@controller ~]# rabbitmqctl add_user openstack openstack

Grant permissions

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
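
To confirm the account and its permissions took effect (output formatting varies slightly between RabbitMQ versions):

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /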

Configuring the memcached service

Edit the configuration file

[root@controller ~]# vi /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"

Start the service

[root@controller ~]# systemctl enable --now memcached.service

Configuring the database service

Edit the configuration file

[root@controller ~]# vi /etc/my.cnf
#the options below belong in the [mysqld] section
[mysqld]
default-time_zone='+8:00'
bind-address = 192.168.29.145
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service

[root@controller ~]# systemctl enable --now  mysqld
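
A quick check that the service listens on the management address and picked up the settings (the root password is whatever was set during the MySQL installation):

[root@controller ~]# ss -tnlp | grep 3306
[root@controller ~]# mysql -uroot -p -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'max_connections';"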

Create the databases

mysql> create database keystone;
mysql> create database glance;
mysql> create database nova;
mysql> create database nova_api;
mysql> create database nova_cell0;
mysql> create database neutron;
mysql> create database cinder;

Grant privileges to the service users

mysql> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'your_password';
mysql> grant all privileges on keystone.* to 'keystone'@'%' identified by 'your_password';

mysql> grant all privileges on glance.* to 'glance'@'localhost' identified by 'your_password';
mysql> grant all privileges on glance.* to 'glance'@'%' identified by 'your_password';

mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova.* to 'nova'@'%' identified by 'your_password';

mysql> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova_api.* to 'nova'@'%' identified by 'your_password';

mysql> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'your_password';

mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'your_password';
mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by 'your_password';

mysql> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'your_password';
mysql> grant all privileges on cinder.* to 'cinder'@'%' identified by 'your_password';

mysql> flush privileges;
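
Each grant can be verified from the controller before the corresponding db_sync step; for example, with the keystone account (your_password is the placeholder used above):

[root@controller ~]# mysql -ukeystone -pyour_password -h controller -e "SHOW DATABASES;"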

Configuring the Keystone service

Edit the configuration file

[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:your_password@controller/keystone
[token]
provider = fernet

Sync the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the httpd service

#Edit the configuration file
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller

#Create a symbolic link
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

#Start the service
[root@controller ~]# systemctl enable  --now httpd

Configure the admin environment script

[root@controller ~]# vi admin-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2

Verify the environment variables

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack token issue

Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

Create the demo user

[root@controller ~]# openstack user create --domain default --password-prompt demo

Create the user role

[root@controller ~]# openstack role create user

Add the user role to the demo project and user

[root@controller ~]# openstack role add --project demo --user demo user

Configure the demo environment script

[root@controller ~]# vi demo-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Configuring the Glance service

Create and configure the glance user

[root@controller ~]# openstack user create --domain default --password-prompt glance
[root@controller ~]# openstack role add --project service --user glance admin

Create the glance service entity

[root@controller ~]# openstack service create --name glance  --description "OpenStack Image" image

Create the glance service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne  image admin http://controller:9292

Edit the configuration files

[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[root@controller ~]# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

Sync the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Start the services

[root@controller ~]# systemctl enable --now  openstack-glance-api.service openstack-glance-registry.service

Upload an image

[root@controller ~]#  openstack image create "Centos7" --file CentOS-7-x86_64-GenericCloud-1907.qcow2 --disk-format qcow2 --container-format bare --public

#List the images
[root@controller ~]# openstack image list
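
If the CentOS cloud image is not available locally, a small test image such as CirrOS works just as well; a sketch (the download URL is an assumption, use whatever image source your environment allows):

[root@controller ~]# curl -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
[root@controller ~]# openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public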

Configuring the Nova service on controller

Create and configure the nova user

[root@controller ~]# openstack user create --domain default --password-prompt nova
[root@controller ~]# openstack role add --project service --user nova admin

Create the nova service entity

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

Create the nova service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create and configure the placement user

[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin

Create the placement service entity

[root@controller ~]# openstack service create --name placement --description "Placement API" placement

Create the placement service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

Edit the configuration files

[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.29.145
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller
allow_resize_to_same_host = True
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
quota_instances=10000
quota_cores=128
quota_ram=260710
cores=128
ram=260710


[api_database]
connection = mysql+pymysql://nova:your_password@controller/nova_api

[database]
connection = mysql+pymysql://nova:your_password@controller/nova

[api]
auth_strategy = keystone 

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova


[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

Restart the httpd service

[root@controller ~]# systemctl restart httpd

Sync the databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Verify

[root@controller ~]# nova-manage cell_v2 list_cells

Start the services

[root@controller ~]# systemctl enable --now openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Configuring the Nova service on the compute node

Edit the configuration file

[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.29.146
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
allow_resize_to_same_host = True
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
cores=128
ram=260710

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[libvirt]
virt_type = kvm
#use qemu instead when the compute node itself runs inside a virtual machine
#virt_type = qemu

Start the services

[root@compute ~]# systemctl enable --now  libvirtd.service openstack-nova-compute.service

Add the compute node to the cell database from controller

#List the nova-compute nodes
[root@controller ~]# openstack compute service list --service nova-compute

#Discover the hosts and add them to the cell database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
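
Rather than re-running discover_hosts every time a compute node is added, Nova can also discover hosts periodically; a hedged sketch of the relevant nova.conf option on the controller (300 seconds is only an example interval):

[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300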

Configure the availability zone

[root@controller ~]# nova aggregate-create clusterA cluster1
[root@controller ~]# nova aggregate-list
+----+----------+-------------------+--------------------------------------+
| Id | Name     | Availability Zone | UUID                                 |
+----+----------+-------------------+--------------------------------------+
| 1  | clusterA | cluster1          | 820e585b-5d05-4ce3-8439-a265f56bc95e |
+----+----------+-------------------+--------------------------------------+
[root@controller ~]# nova aggregate-add-host 1 computer1

Configuring the Neutron service on controller

Create and configure the neutron user

[root@controller ~]# openstack user create --domain default --password-prompt neutron
[root@controller ~]# openstack role add --project service --user neutron admin

Create the neutron service entity

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network admin http://controller:9696

Edit the configuration files (Linux bridge network architecture)

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37

[vxlan]
enable_vxlan = true
local_ip = 192.168.31.135
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000

Edit the configuration files (Open vSwitch network architecture)

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vlan] 
network_vlan_ranges = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq 
enable_isolated_metadata = true 
interface_driver = openvswitch 
force_metadata = true
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
[root@controller ~]# vi /etc/sysctl.conf
net.ipv4.ip_forward=1 
net.ipv4.conf.all.rp_filter=0 
net.ipv4.conf.default.rp_filter=0
[root@controller ~]# sysctl -p

Create a symbolic link

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start the services

#Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service

#Linux bridge architecture
[root@controller ~]# systemctl enable  --now neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

#Open vSwitch architecture
[root@controller ~]# systemctl enable  --now neutron-server.service 

Configuring the Neutron service on computer

Edit the configuration files (Linux bridge architecture)

[root@computer ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@computer ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37

[vxlan]
enable_vxlan = true
local_ip = 192.168.31.136
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@computer ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Edit the configuration files (Open vSwitch architecture)

[root@computer ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@computer ~]# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent] 
tunnel_types = vxlan 
l2_population = true 
[ovs] 
bridge_mappings = provider:br-provider 
local_ip = 192.168.31.136
[securitygroup] 
enable_security_group = true 
firewall_driver = iptables_hybrid
[root@computer ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
[root@computer ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq 
enable_isolated_metadata = true 
interface_driver = openvswitch 
force_metadata = true
[root@computer ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[root@computer ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
[root@computer ~]# vi /etc/sysctl.conf
net.ipv4.ip_forward=1 
net.ipv4.conf.all.rp_filter=0 
net.ipv4.conf.default.rp_filter=0
[root@computer ~]# sysctl -p

Start the services

#Restart the nova-compute service
[root@compute ~]# systemctl restart openstack-nova-compute.service

#Linux bridge architecture
[root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

#Open vSwitch architecture
[root@compute ~]# systemctl enable --now   neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Add the external bridge (Open vSwitch architecture)

[root@computer ~]# ovs-vsctl add-br br-provider
[root@computer ~]# ovs-vsctl add-port br-provider ens37
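
The resulting bridge and its uplink port can be confirmed with:

[root@computer ~]# ovs-vsctl show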

Verify

[root@controller ~]# openstack network agent list

#Check the compute node's log
[root@computer ~]# tail /var/log/nova/nova-compute.log

Configuring the Dashboard component

Edit the configuration files

[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}

Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

Access the web interface
Open http://ip/dashboard in a browser (where ip is the controller's address).

Configuring the Swift service on computer and storager

Add and format the disks

#computer node
[root@computer ~]# parted -a optimal --script /dev/sdc -- mktable gpt
[root@computer ~]# parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%
[root@computer ~]# mkfs.xfs -f /dev/sdc1
#storager node
[root@storager ~]# parted -a optimal --script /dev/sdc -- mktable gpt
[root@storager ~]# parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%
[root@storager ~]# mkfs.xfs -f /dev/sdc1

Mount the disks

#computer node
[root@computer ~]# mkdir -p /srv/node/sdc1
[root@computer ~]# vi /etc/fstab
/dev/sdc1		/srv/node/sdc1		xfs	noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@computer ~]# mount /srv/node/sdc1/
#storager node
[root@storager ~]# mkdir -p /srv/node/sdc1
[root@storager ~]# vi /etc/fstab
/dev/sdc1		/srv/node/sdc1		xfs	noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@storager ~]# mount /srv/node/sdc1/

Configure the rsyncd service

[root@computer ~]# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.29.146

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@storager ~]# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.29.147

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
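
The configuration above only takes effect once the rsync daemon is running; on CentOS 7 this is the rsyncd systemd unit (a hedged note, since this step is not shown above):

[root@computer ~]# systemctl enable --now rsyncd.service
[root@storager ~]# systemctl enable --now rsyncd.service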

Edit the configuration files

[root@computer ~]# vi /etc/swift/account-server.conf
[root@storager ~]# vi /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6202
workers = 2
user = swift
swift_dir = /etc/swift
devices = /srv/node

[pipeline:main]
pipeline = account-server
[root@computer ~]# vi /etc/swift/container-server.conf
[root@storager ~]# vi /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6201
workers = 2
user = swift
swift_dir = /etc/swift
devices = /srv/node

[pipeline:main]
pipeline = container-server
[root@computer ~]# vi /etc/swift/object-server.conf
[root@storager ~]# vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6200
workers = 3
user = swift
swift_dir = /etc/swift
devices = /srv/node

[pipeline:main]
pipeline = object-server

Set ownership

[root@computer ~]# chown -R swift:swift /srv/node/
[root@storager ~]# chown -R swift:swift /srv/node/

Start the services

[root@computer ~]# systemctl enable  --now  openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service openstack-swift-container.service openstack-swift-container-replicator.service openstack-swift-container-auditor.service openstack-swift-container-sync.service openstack-swift-container-updater.service openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@storager ~]# systemctl enable  --now  openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service openstack-swift-container.service openstack-swift-container-replicator.service openstack-swift-container-auditor.service openstack-swift-container-sync.service openstack-swift-container-updater.service openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

Configuring the Swift service on controller

Create and configure the swift user

[root@controller ~]# openstack user create --password-prompt swift
[root@controller ~]# openstack role add --project service --user swift admin

Create the swift service entity

[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store

Create the swift service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne object-store  public http://controller:8080/v1/AUTH_%\(tenant_id\)s 
[root@controller ~]# openstack endpoint create --region RegionOne object-store internal  http://controller:8080/v1/AUTH_%\(tenant_id\)s 
[root@controller ~]# openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 

Edit the configuration files

[root@controller ~]# vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 8080
workers = 2
user = swift
swift_dir = /etc/swift

[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin,SwiftOperator,user
cache = swift.cache

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = swift
delay_auth_decision = true
[root@controller ~]# vi /etc/swift/swift.conf 
[swift-hash]
#use a literal random string here; backticks are not expanded in this file (see the example below)
swift_hash_path_suffix = your_random_suffix

[storage-policy:0]
name = Policy-0
default = yes
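
A suitable random value can be generated once and then pasted into swift.conf, for example:

[root@controller ~]# od -t x8 -N 8 -A n < /dev/random
 a31bd723e2515e3f    #example output only; keep whatever value your system produces and never change it afterwards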

Create the rings

[root@controller ~]# cd /etc/swift
#Account ring
#Create the ring
[root@controller ~]# swift-ring-builder account.builder create 18 2 1
Parameter meanings:
	18: each ring is split into 2^18 partitions
	2: each object is stored as 2 replicas
	1: a partition can be moved again only after 1 hour

#Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder account.builder add r1z1-192.168.29.146:6202/sdc1 100
[root@controller ~]# swift-ring-builder account.builder add r1z1-192.168.29.147:6202/sdc1 100

#Rebalance the ring
[root@controller ~]# swift-ring-builder account.builder rebalance
#Container ring
#Create the ring
[root@controller ~]# swift-ring-builder container.builder create 18 2 1

#Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder container.builder add r1z1-192.168.29.146:6201/sdc1 100
[root@controller ~]# swift-ring-builder container.builder add r1z1-192.168.29.147:6201/sdc1 100

#Rebalance the ring
[root@controller ~]# swift-ring-builder container.builder rebalance
#Object ring
#Create the ring
[root@controller ~]# swift-ring-builder object.builder create 18 2 1

#Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder object.builder add r1z1-192.168.29.146:6200/sdc1 100
[root@controller ~]# swift-ring-builder object.builder add r1z1-192.168.29.147:6200/sdc1 100

#Rebalance the ring
[root@controller ~]# swift-ring-builder object.builder rebalance
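
Each builder can be inspected to confirm its devices and weights before the rings are distributed:

[root@controller ~]# swift-ring-builder account.builder
[root@controller ~]# swift-ring-builder container.builder
[root@controller ~]# swift-ring-builder object.builder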

Distribute the configuration files

[root@controller ~]# scp account.ring.gz container.ring.gz object.ring.gz swift.conf 192.168.29.146:/etc/swift/
[root@controller ~]# scp account.ring.gz container.ring.gz object.ring.gz swift.conf 192.168.29.147:/etc/swift

Set directory ownership

[root@controller ~]# chown -R swift:swift /etc/swift/

Restart the services

[root@controller ~]# systemctl restart memcached.service 
[root@controller ~]# systemctl enable  --now  openstack-swift-proxy.service 

Restart services on the storage nodes

[root@computer ~]# swift-init all start
[root@storager ~]# swift-init all start

Verify the status

[root@controller ~]# swift stat
                        Account: AUTH_97e5c629da9944c5ad960e5c171dac68
                     Containers: 1
                        Objects: 1
                          Bytes: 13287936
Containers in policy "policy-0": 1
   Objects in policy "policy-0": 1
     Bytes in policy "policy-0": 13287936
         X-Openstack-Request-Id: tx19a4687c644645708525e-005f30ef14
                    X-Timestamp: 1597039034.68479
                     X-Trans-Id: tx19a4687c644645708525e-005f30ef14
                   Content-Type: application/json; charset=utf-8
                  Accept-Ranges: bytes

Upload a file

[root@controller ~]# swift upload demo_container cirros-0.3.4-x86_64-disk.img 

List the containers

[root@controller ~]# swift list
demo_container

View containers and files from the storage nodes

[root@computer ~]# source admin-openrc
[root@computer ~]# openstack container list
+----------------+
| Name           |
+----------------+
| demo_container |
+----------------+
[root@storager ~]# source admin-openrc
[root@storager ~]# openstack container list
+----------------+
| Name           |
+----------------+
| demo_container |
+----------------+

Configuring the Cinder service on storager

Prepare the Cinder disk

[root@storager~]# parted -a optimal --script /dev/sdb -- mktable gpt
[root@storager~]# parted -a optimal --script /dev/sdb -- mkpart primary xfs  0% 100%
[root@storager~]# mkfs.xfs -f /dev/sdb1

Configure the logical volume

[root@storager~]# pvcreate /dev/sdb1
[root@storager~]# vgcreate cinder-volumes /dev/sdb1

Edit the configuration files

[root@storager~]# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.29.147
enabled_backends = lvm
glance_api_servers = http://controller:9292
iscsi_ip_address = controller

[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

#add the [lvm] section yourself if it is not present
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm                       
[root@storager~]# vi /etc/lvm/lvm.conf
devices {
global_filter = [ "a|.*/|","a|sdb1|"]
}
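
The physical volume and the volume group backing Cinder can be confirmed with the usual LVM queries:

[root@storager ~]# pvs /dev/sdb1
[root@storager ~]# vgs cinder-volumes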

Start the services

[root@storager~]# systemctl enable --now  openstack-cinder-volume.service  target.service 

Configuring the Cinder service on controller

Create and configure the cinder user

[root@controller ~]# openstack user create --domain default --password-prompt cinder
[root@controller ~]# openstack role add --project service --user cinder admin

Create the cinder service entities

[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the cinder service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 public http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 internal  http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 admin  http://controller:8776/v2/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 internal  http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 admin http://controller:8776/v3/%\(project_id\)s

Edit the configuration file

[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.29.145
enabled_backends = lvm

[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack

[lvm]
volume_group = centos

Sync the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services

[root@controller ~]# systemctl enable  --now openstack-cinder-api.service openstack-cinder-scheduler.service 

Check the status

[root@controller ~]# openstack volume service list

Create a volume

#1 GB volume
[root@controller ~]# cinder create --name demo_volume 1

Attach a volume

#Look up the volume ID
[root@controller ~]# cinder list
#Attach the volume to an instance
[root@controller ~]# nova volume-attach mycentos e9804810-9dce-47f6-84f7-25a8da672800

Deploying instances

For the steps to deploy instances, see: https://blog.csdn.net/xixixilalalahaha/article/details/107759415
