1. First, create two Ubuntu 20.04 virtual machines in VMware, named Ubuntu001 and Ubuntu002. After installing the OS and switching to a local package mirror, shut them down.
2. In VMware, click Edit on the menu bar, then Virtual Network Editor, then Change Settings in the lower right to enter administrator mode, and click Add Network.
Select an unused network (VMnet2 in the author's setup) and set its subnet address to 10.0.0.0.
Also note down the subnet address of the NAT-mode network (VMnet8 in the author's setup).
3. Open Edit virtual machine settings for Ubuntu001, remove the printer device, then click the Add button at the bottom and add a Network Adapter. A new Network Adapter 2 appears in the device list; select it, choose "Custom: Specific virtual network" on the right, and pick the newly created VMnet2.
4. Open Edit virtual machine settings for Ubuntu002 and configure it the same way.
5. Set Ubuntu001's memory to 4 GB, with at least 3 processors recommended; for Ubuntu002, 2 GB of memory and 2 processors are enough.
6. Start both virtual machines. In each, click the nine-dot launcher in the lower left, then Settings -> Network, click the gear icon next to ens34 (the Network Adapter 2 created earlier) to open its settings page, select the IPv4 tab, and choose Manual.
Ubuntu001: address 10.0.0.11, netmask 255.255.255.0 (screenshot omitted).
Ubuntu002: address 10.0.0.31, netmask 255.255.255.0 (screenshot omitted).
7. Edit the interfaces file.
NOTE!!!: every command in this guide must be entered as the root user.
On both virtual machines run:
apt-get install vim
vim /etc/network/interfaces
Use the following commands to toggle promiscuous mode on the physical-network NIC:
ifconfig ens33 promisc     # put ens33 into promiscuous mode
ifconfig ens33 -promisc    # (to take it out of promiscuous mode later)
After enabling it, restart the network service:
service network-manager restart
On virtual machines this setting may not achieve much: the VM's bridge is virtual rather than a physical bridge, and after the Neutron service is installed it creates an additional virtual bridge that can conflict with the VM's own bridge, leaving the VM unable to reach the internet.
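Note that the promiscuous flag set by ifconfig does not survive a reboot. As a minimal sketch of one way to persist it, assuming the classic ifupdown /etc/network/interfaces syntax is in use (on a stock Ubuntu 20.04 desktop, NetworkManager manages the NICs instead, so adapt accordingly):
# hypothetical /etc/network/interfaces stanza; the ens33 name and DHCP are assumptions
auto ens33
iface ens33 inet dhcp
    up ip link set ens33 promisc on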
8. Change the hostname.
On both virtual machines run:
vim /etc/hostname
Change Ubuntu001's hostname to controller and Ubuntu002's to compute1.
9. Edit hosts.
On both virtual machines run:
vim /etc/hosts
Apart from "127.0.0.1 localhost", comment out or delete the other mappings, then add the following mappings on both machines:
10.0.0.11 controller
10.0.0.31 compute1
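As a quick optional check that the mappings took effect, test name resolution on either node:
# both commands should print the 10.0.0.x addresses added above
getent hosts controller
getent hosts compute1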
10. Reboot both virtual machines.
11. Verify connectivity.
On the controller node (Ubuntu001, hereafter the "controller node") run:
# install the network tools
apt-get install -y net-tools
# verify connectivity between the nodes
ping 10.0.0.31
# verify internet access
ping www.baidu.com
On the compute1 node (Ubuntu002, hereafter the "compute1 node") run:
# install the network tools
apt-get install -y net-tools
# verify connectivity between the nodes
ping 10.0.0.11
# verify internet access
ping www.baidu.com
A physical server needs two network ports, plugged into two different network segments (routers); the wiring is shown in the diagram below (omitted), where slave6 and slave9 are the names of the two servers.
The 192.168.1.0 segment is the provider network, which the servers use to reach the internet; the 10.0.0.0 segment is the management network, which carries communication between the OpenStack nodes. The instances OpenStack creates end up on the 192.168.1.0 segment so that they can reach the internet.
Configure the LAN ports of the two routers: one for the 10.0.0.0/24 segment and one for the 192.168.1.0/24 segment (router settings screenshots omitted).
After the LAN ports are configured, reboot the routers and plug the servers' two network ports into the two routers.
On the Ubuntu servers, set a static IP on the management-network (10.0.0.0/24) interface: Ubuntu001 and Ubuntu002 settings as in the VM section above (screenshots omitted).
The provider-network (192.168.1.0/24) NIC must be put into promiscuous mode (without it, after OpenStack is installed you may find that created instances can reach the internet while the server itself cannot).
First use the ifconfig command to find which NIC is on the 192.168.1.0/24 segment; in the author's setup it is eno1.
Use the following commands to toggle promiscuous mode on eno1:
ifconfig eno1 promisc     # put eno1 into promiscuous mode
ifconfig eno1 -promisc    # (to take it out of promiscuous mode later)
After enabling it, restart the network service:
service network-manager restart
The remaining steps match the virtual-machine setup.
On both machines run:
vim /etc/hostname
Change Ubuntu001's hostname to controller and Ubuntu002's to compute1.
On both machines run:
vim /etc/hosts
Apart from "127.0.0.1 localhost", comment out or delete the other mappings, then add the following mappings on both machines:
10.0.0.11 controller
10.0.0.31 compute1
Reboot both machines.
Verify connectivity.
On the controller node (Ubuntu001, hereafter the "controller node") run:
# install the network tools
apt-get install -y net-tools
# verify connectivity between the nodes
ping 10.0.0.31
# verify internet access
ping www.baidu.com
On the compute1 node (Ubuntu002, hereafter the "compute1 node") run:
# install the network tools
apt-get install -y net-tools
# verify connectivity between the nodes
ping 10.0.0.11
# verify internet access
ping www.baidu.com
Run on both the controller and compute1 nodes:
# install the OpenStack client
apt-get install python3-openstackclient
# install the Apache web server
apt-get install apache2
Run on the controller node.
# check whether pip is installed
pip3 --version
# if it is not installed, run
sudo apt-get install python3-pip
# upgrade pip
sudo python3 -m pip install --upgrade pip
# switch pip to a domestic mirror
1. Create the folder:
mkdir ~/.pip
2. Edit the pip.conf file:
vim ~/.pip/pip.conf
3. Enter the following:
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = pypi.tuna.tsinghua.edu.cn
4. Save and quit.
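Optionally, a recent pip can confirm the mirror is being picked up (assuming the installed pip is new enough to have the config subcommand):
# should list global.index-url pointing at the Tsinghua mirror
pip3 config list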
Run on the controller node.
Ubuntu 20.04 already ships the OpenStack Ussuri packages, so no extra package repository needs to be installed. And since the virtual machines can reach the internet, installing an NTP time-synchronization service is also unnecessary.
So proceed directly to installing the SQL database.
1. Install the packages:
apt install mariadb-server python3-pymysql
2. Create and edit the 99-openstack.cnf file and complete the following actions:
# open 99-openstack.cnf with vim
vim /etc/mysql/mariadb.conf.d/99-openstack.cnf
Enter the following in the file:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
3. Restart the database service:
service mysql restart
4. Secure the database service by running the mysql_secure_installation script:
mysql_secure_installation
Running mysql_secure_installation walks through several settings:
a) set a password for the root user (y, enter the password, repeat it)
b) remove anonymous accounts (y)
c) disallow remote root login (n if you need remote debugging, otherwise y)
d) remove the test database and access to it (y)
e) reload the privilege tables so the changes take effect (y)
Run on the controller node.
1. Install the package:
apt install rabbitmq-server
2. Add the openstack user:
rabbitmqctl add_user openstack 123456
3. Permit configuration, write, and read access for the openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
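As an optional sanity check, rabbitmqctl can list the user and its permissions:
# the openstack user should appear with ".*" for configure/write/read
rabbitmqctl list_users
rabbitmqctl list_permissions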
Run on the controller node.
1. Install the packages:
apt install memcached python3-memcache
2. Edit the memcached.conf file:
vim /etc/memcached.conf
Change "-l 127.0.0.1" to "-l 10.0.0.11".
3. Restart Memcached:
service memcached restart
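To confirm memcached is now listening on the management address (net-tools was installed earlier):
# expect a LISTEN entry on 10.0.0.11:11211
netstat -plnt | grep 11211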
Run on the controller node.
1. Install the etcd package:
apt install etcd
2. Edit the /etc/default/etcd file:
cp /etc/default/etcd /etc/default/etcd.bak
grep -Ev '^$|#' /etc/default/etcd.bak > /etc/default/etcd
vim /etc/default/etcd
# add the following to the opened file
ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://10.0.0.11:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379"
3. Enable and restart the etcd service:
# enable etcd to start at boot
systemctl enable etcd
# restart the etcd service
systemctl restart etcd
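Optionally, if the etcd client tools are installed alongside the server, cluster membership can be checked (the ETCDCTL_API variable and endpoint flag are assumptions based on common etcdctl usage):
# should list the single "controller" member
ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 member list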
Run on the controller node.
1. Enter the MySQL interactive prompt:
# type mysql and press Enter
mysql
2. Create the keystone database:
MariaDB [(none)]> CREATE DATABASE keystone;
3. Grant proper access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
4. Exit the database access client:
MariaDB [(none)]> exit
5. Install the keystone package:
apt install keystone
6. Edit the keystone.conf file:
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
vim /etc/keystone/keystone.conf
In the [database] section, configure database access; comment out or remove any other connection options in the [database] section:
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
In the [token] section, configure the Fernet token provider:
[token]
provider = fernet
7. Populate the Identity service database (make sure the previous steps completed without errors!!):
su -s /bin/sh -c "keystone-manage db_sync" keystone
8. Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
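A quick way to confirm the key repositories were created is to list the directories keystone-manage just set up:
# each directory should contain key files named 0 and 1
ls /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/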
9. Bootstrap the Identity service:
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
10. Edit the apache2.conf file:
# open the file
vim /etc/apache2/apache2.conf
# add the following, configuring the ServerName option to reference the controller node
ServerName controller
If the ServerName entry does not already exist, add it.
11. Restart the Apache service:
service apache2 restart
12. Configure the administrative account by setting the appropriate environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Run on the controller node.
openstack domain create --description "An Example Domain" example
The output is:
+---------------+--------------------------------------------------+
| Field | Value |
+---------------+--------------------------------------------------+
| description | An Example Domain |
| enabled | True |
| id | ac4eb2a0f59743e5a51601e8c9b77168 |
| name | example |
| options | {} |
| tags | [] |
+---------------+--------------------------------------------------+
View the domains that have been created:
openstack domain list
Run openstack domain -h to explore further options on your own.
Create the service project:
openstack project create --domain default --description "Service Project" service
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | db0e0a70896a4676b8277c6c0edfa08e |
| is_domain | False |
| name | service |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
View the created projects:
openstack project list
openstack project create --domain default --description "Demo Project" myproject
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 231ad6e7ebba47d6a1e57e1cc07ae446 |
| is_domain | False |
| name | myproject |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
openstack user create --domain default --password-prompt myuser
The output is:
root@controller:~# openstack user create --domain default --password-prompt myuser
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 43091cf498db40508e5c03758dd212c1 |
| name | myuser |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
openstack role create --description "Example Role" myrole
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Example Role |
| domain_id | None |
| id | e19c6338df0347b583df2d2cea416126 |
| name | myrole |
| options | {} |
+-------------+----------------------------------+
Add the myrole role to the myuser user in the myproject project:
openstack role add --project myproject --user myuser myrole
(figure showing the domain/project/user/role relationship omitted)
Run on the controller node.
1. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
unset OS_AUTH_URL OS_PASSWORD
2. As the admin user, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Enter the admin user's password when prompted; the output looks like:
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2021-11-22T15:54:23+0000 |
| id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
| | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
| | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
3. As the myuser user created earlier, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
The output looks like:
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2021-11-22T15:55:23+0000 |
| id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U |
| project_id | ed0b60bf607743088218b0a533d5943f |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+-----------------------------------------------------------------+
Run on the controller node.
1. Create the admin environment script.
In a terminal run:
vim admin-openrc
Enter the following in the opened admin-openrc file:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Create the myuser environment script.
In a terminal run:
vim demo-openrc
Enter the following in the opened demo-openrc file:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
3. Test the scripts.
In a terminal run:
. admin-openrc
Request an authentication token:
openstack token issue
The output is:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2021-11-22T16:30:31+0000 |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
Run on the controller node.
1. Create the database.
Enter the database interactive prompt:
mysql
Create the glance database:
MariaDB [(none)]> CREATE DATABASE glance;
Grant proper access to the glance database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
Exit the database access client:
exit
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. Create the service credentials.
Create the glance user:
openstack user create --domain default --password-prompt glance
The output is:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 3f4e777c4062483ab8d9edd7dff829df |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the glance user and service project:
openstack role add --project service --user glance admin
Create the glance service entity:
openstack service create --name glance --description "OpenStack Image" image
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| name | glance |
| type | image |
+-------------+----------------------------------+
4. Create the Image service API endpoints.
Run the following in turn:
openstack endpoint create --region RegionOne image public http://controller:9292
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 340be3625e9b4239a6415d034e98aace |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne image internal http://controller:9292
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne image admin http://controller:9292
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0c37ed58103f4300a84ff125a539032d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
5. Install the package:
apt install glance
6. Edit the glance-api.conf file:
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
vi /etc/glance/glance-api.conf
In the [database] section, configure database access:
[database]
connection = mysql+pymysql://glance:123456@controller/glance
In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access; comment out or remove any other options in the [keystone_authtoken] section:
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
In the [glance_store] section, configure the local file system store and the location of image files:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
7. Populate the glance service database (make sure the previous steps completed without errors!!):
su -s /bin/sh -c "glance-manage db_sync" glance
8. Restart the Image service:
service glance-api restart
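An optional check that the Image service came back up is to confirm it is listening on its API port:
# expect a LISTEN entry for port 9292 (glance-api)
netstat -plnt | grep 9292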
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. Download the source image (a reasonably fast network connection helps):
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
3. Upload the image to the Image service with the QCOW2 disk format, the bare container format, and public visibility so that all projects can access it:
glance image-create --name "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility=public
--container-format <container-format>: the image container format. Supported options: ami, ari, aki, bare, docker, ova, ovf. Default: bare.
--disk-format <disk-format>: the image disk format. Supported options: ami, ari, aki, vhd, vmdk, raw, qcow2, vhdx, vdi, iso, ploop. Default: raw.
The output is:
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2015-03-26T16:52:10Z |
| disk_format | qcow2 |
| file | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file |
| id | cc5c6982-4910-471e-b864-1098015901b5 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | ae7a98326b9c455588edd2656d723b9d |
| protected | False |
| schema | /v2/schemas/image |
| size | 13200896 |
| status | active |
| tags | |
| updated_at | 2015-03-26T16:52:10Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
4. Confirm the upload and verify the image attributes:
glance image-list
The output is:
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
1. Create the placement database:
mysql
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';
exit
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. Create the placement user:
openstack user create --domain default --password-prompt placement
The output is:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
4. Add the placement user to the service project with the admin role:
openstack role add --project service --user placement admin
5. Create the Placement API entry in the service catalog:
openstack service create --name placement --description "Placement API" placement
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+-------------+----------------------------------+
6. Create the Placement API service endpoints.
Step 1:
openstack endpoint create --region RegionOne placement public http://controller:8778
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Step 2:
openstack endpoint create --region RegionOne placement internal http://controller:8778
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Step 3:
openstack endpoint create --region RegionOne placement admin http://controller:8778
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
7. Install the placement package:
apt install placement-api
8. Edit the placement.conf file:
cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
vim /etc/placement/placement.conf
In the [placement_database] section, configure database access:
[placement_database]
connection = mysql+pymysql://placement:123456@controller/placement
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123456
9. Populate the placement database (!!! make sure the steps above were carried out correctly):
su -s /bin/sh -c "placement-manage db sync" placement
10. Restart the web server:
service apache2 restart
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. Perform a status check to make sure everything is in order:
placement-status upgrade check
The output is:
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
The output of this command varies by release.
3. Install the placement CLI plugin:
pip install osc-placement
4. List available resource classes and traits:
openstack --os-placement-api-version 1.2 resource class list --sort-column name
The output is:
+----------------------------+
| name |
+----------------------------+
| DISK_GB |
| IPV4_ADDRESS |
| ... |
openstack --os-placement-api-version 1.6 trait list --sort-column name
The output is:
+---------------------------------------+
| name |
+---------------------------------------+
| COMPUTE_DEVICE_TAGGING |
| COMPUTE_NET_ATTACH_INTERFACE |
| ... |
1. Create the nova databases:
mysql
Create the nova_api, nova, and nova_cell0 databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
exit
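If in doubt about the grants, they can be inspected from the mysql prompt at any time:
MariaDB [(none)]> SHOW GRANTS FOR 'nova'@'localhost';
MariaDB [(none)]> SHOW GRANTS FOR 'nova'@'%';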
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. Create the nova user:
openstack user create --domain default --password-prompt nova
The output is:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
4. Add the nova user to the service project with the admin role:
openstack role add --project service --user nova admin
5. Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
6. Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
The output is:
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
The output is:
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
The output is:
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
7. Install the nova packages:
apt install nova-api nova-conductor nova-novncproxy nova-scheduler
This installs the nova-api, nova-conductor, nova-novncproxy, and nova-scheduler services.
8. Edit the nova.conf file:
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
vim /etc/nova/nova.conf
In the [api_database] and [database] sections, configure database access:
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
connection = mysql+pymysql://nova:123456@controller/nova
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller:5672/
In the [api] and [keystone_authtoken] sections, configure Identity service access; comment out or remove any other options in the [keystone_authtoken] section:
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
[DEFAULT]
my_ip = 10.0.0.11
In the [DEFAULT] section, remove the log_dir option. Then, in the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service API:
[glance]
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure access to the Placement service:
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
9. Populate the nova-api database (make sure all the previous steps were correct!):
su -s /bin/sh -c "nova-manage api_db sync" nova
10. Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
11. Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
12. Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
13. Verify that nova, cell0, and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
The output is:
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False |
| cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False |
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
14. Restart the Compute services:
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
There can be many compute nodes, but this guide's example has only compute1 at 10.0.0.31. If you need additional compute nodes, install them following these same steps, taking care to adjust the IP addresses.
1. Install the nova package:
apt install nova-compute
2. Edit the nova.conf file:
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
vim /etc/nova/nova.conf
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
In the [DEFAULT] section, configure the my_ip option; replace MANAGEMENT_INTERFACE_IP_ADDRESS below with compute1's management network IP, 10.0.0.31:
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
In the [vnc] section, enable and configure remote console access. If the host you browse from cannot resolve the controller hostname, replace controller with the management network IP address of the control node (usually this can be ignored):
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
In the [glance] section, configure the location of the Image service API:
[glance]
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the Placement API:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
3. Determine whether your compute node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value other than 0, your compute node supports hardware acceleration, which usually needs no extra configuration.
If it returns 0, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
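On Ubuntu there is also a small helper that answers the same question; the cpu-checker package is an assumption here, and the egrep test above is sufficient on its own:
# kvm-ok reports whether KVM hardware acceleration can be used
apt install cpu-checker
kvm-ok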
Edit the [libvirt] section of the nova-compute.conf file:
vim /etc/nova/nova-compute.conf
Make the following change:
[libvirt]
virt_type = qemu
4. Restart the Compute service:
service nova-compute restart
If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. An error message like "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is blocking access to port 5672; configure the firewall to open port 5672 on the controller node and restart the Compute service.
5. Add the compute node to the cell database.
Run the following on the controller node!!!!
① Source the admin credentials to enable admin-only CLI commands:
. admin-openrc
② Confirm that the compute host is in the database:
openstack compute service list --service nova-compute
The output is:
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host | Binary | Zone | State | Status | Updated At |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
③ Discover the compute host:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
The output is:
Found 1 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
④ Adjust the configuration (when multiple compute nodes will be registered with the controller, this step only needs to be done once). Instead of running nova-manage cell_v2 discover_hosts manually each time a compute node is added, set a discovery interval in /etc/nova/nova.conf by editing the [scheduler] section:
[scheduler]
discover_hosts_in_cells_interval = 300
Run on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. List the service components to verify the successful launch and registration of each process:
openstack compute service list
The output is as follows (if all three services show State up, everything is working):
+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 2 | nova-conductor | controller | internal | enabled | up | 2016-02-09T23:11:16.000000 |
| 3 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+
3. List the API endpoints in the Identity service to verify connectivity with the Identity service:
openstack catalog list
The output is:
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| keystone | identity | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| nova | compute | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| placement | placement | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | |
+-----------+-----------+-----------------------------------------+
4. List the images in the Image service to verify connectivity with the Image service:
openstack image list
The output is:
+--------------------------------------+-------------+-------------+
| ID | Name | Status |
+--------------------------------------+-------------+-------------+
| 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros | active |
+--------------------------------------+-------------+-------------+
5. Check that the cells and Placement API are working successfully and that the other necessary prerequisites are in place:
nova-status upgrade check
The output is:
+--------------------------------------------------------------------+
| Upgrade Check Results |
+--------------------------------------------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Policy Scope-based Defaults |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Older than N-1 computes |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
1. Create the database:
mysql -u root -p    # just press Enter at the password prompt
Create the neutron database:
MariaDB [(none)]> CREATE DATABASE neutron;
Grant proper access to the neutron database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
exit
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. Create the neutron user:
openstack user create --domain default --password-prompt neutron
The output is:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
4. Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
5. Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
The output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
6. Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne network internal http://controller:9696
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne network admin http://controller:9696
The output is:
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
7. Configure networking options (provider networks).
If you want self-service networks you can configure them yourself; self-service networks require four virtual NICs. Self-service network guide: https://docs.openstack.org/neutron/train/install/controller-install-option2-ubuntu.html
Here we configure provider networks, which are sufficient for basic use.
① Install the components:
apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
② Edit the neutron.conf file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
vim /etc/neutron/neutron.conf
In the [database] section, configure database access:
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins:
[DEFAULT]
core_plugin = ml2
service_plugins =
In the [DEFAULT] section, configure RabbitMQ message queue access; 123456 is the password that was set for the openstack Message queue account:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access; 123456 is the password that was set for the neutron user:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes; 123456 is the password that was set for the nova user:
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
③ Edit the ml2_conf.ini file:
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
vim /etc/neutron/plugins/ml2/ml2_conf.ini
In the [ml2] section, enable flat and VLAN networks:
[ml2]
type_drivers = flat,vlan
In the [ml2] section, disable self-service networks:
[ml2]
tenant_network_types =
In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
mechanism_drivers = linuxbridge
In the [ml2] section, enable the port security extension driver:
[ml2]
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
flat_networks = provider
In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
enable_ipset = true
④ Edit the linuxbridge_agent.ini file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface. Replace PROVIDER_INTERFACE_NAME with the name of the physical network interface; in this guide it is the interface on the 192.168.222.0 network, which you can find by running ifconfig in a terminal (ens33 in the author's setup):
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux kernel supports network bridge filters by verifying that the following sysctl values are set to 1:
vim /etc/sysctl.conf
# append the following to the opened file
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# save and return to the terminal
# then run the following in the terminal, in order
modprobe br_netfilter
sysctl -p
# the added entries are printed back, confirming the sysctl values are now set to 1
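Note that modprobe only loads br_netfilter for the current boot. A minimal way to have it load automatically at every boot (using Ubuntu's standard /etc/modules mechanism) is:
# load br_netfilter at boot so the sysctl values above can be applied
echo br_netfilter >> /etc/modules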
⑤ Edit the dhcp_agent.ini file:
vim /etc/neutron/dhcp_agent.ini
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
8. Configure the metadata agent.
Edit the /etc/neutron/metadata_agent.ini file:
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
vi /etc/neutron/metadata_agent.ini
In the [DEFAULT] section, configure the metadata host and the shared secret; replace METADATA_SECRET with a suitable secret of your own, and use the same value again in nova.conf below:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
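Any hard-to-guess string works as the secret; one convenient way to generate one (openssl ships with Ubuntu) is:
# print a random 32-character hex string to use as METADATA_SECRET
openssl rand -hex 16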
9. Configure the Compute service to use the Networking service.
Edit the /etc/nova/nova.conf file.
In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the shared secret:
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
10. Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
11. Restart the Compute API service:
service nova-api restart
12. Restart the Networking services:
service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
If you use self-service networks, the following is also required (skip it for provider networks):
service neutron-l3-agent restart
1. Install the components:
apt install neutron-linuxbridge-agent
2. Edit the /etc/neutron/neutron.conf file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
vim /etc/neutron/neutron.conf
In the [database] section, comment out or delete any connection options, because compute nodes do not directly access the database.
In the [DEFAULT] section, configure RabbitMQ message queue access; 123456 is the password that was set for the openstack Message queue account:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access; 123456 is the password that was set for the neutron user. Comment out or remove any other options in the [keystone_authtoken] section:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. Configure networking options (provider networks).
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface. Replace PROVIDER_INTERFACE_NAME with the name of the physical network interface; in this guide it is the interface on the 192.168.222.0 network, which you can find by running ifconfig in a terminal (ens33 in the author's setup):
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux kernel supports network bridge filters by verifying that the following sysctl values are set to 1:
vim /etc/sysctl.conf
# append the following to the opened file
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# save and return to the terminal
# then run the following in the terminal, in order
modprobe br_netfilter
sysctl -p
# the added entries are printed back, confirming the sysctl values are now set to 1
4. Configure the Compute service to use the Networking service.
Edit the /etc/nova/nova.conf file:
vim /etc/nova/nova.conf
In the [neutron] section, configure access parameters:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
5. Restart the Compute service:
service nova-compute restart
6. Restart the Linux bridge agent:
service neutron-linuxbridge-agent restart
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. List the loaded extensions to verify successful launch of the neutron-server process:
openstack extension list --network
The output is:
+---------------------------+---------------------------+----------------------------+
| Name | Alias | Description |
+---------------------------+---------------------------+----------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark |
| | | and use a subnetpool as |
| | | the default |
| Availability Zone | availability_zone | The availability zone |
| | | extension. |
| Network Availability Zone | network_availability_zone | Availability zone support |
| | | for network. |
| Port Binding | binding | Expose port bindings of a |
| | | virtual port to external |
| | | application |
| agent | agent | The agent management |
| | | extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of |
| | | subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among |
| | | dhcp agents |
| Neutron external network | external-net | Adds external network |
| | | attribute to network |
| | | resource. |
| Neutron Service Flavors | flavors | Flavor specification for |
| | | Neutron advanced services |
| Network MTU | net-mtu | Provides MTU attribute for |
| | | a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability |
| | | data for each network and |
| | | subnet. |
| Quota management support | quotas | Expose functions for |
| | | quotas management per |
| | | tenant |
| Provider Network | provider | Expose mapping of virtual |
| | | networks to physical |
| | | networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual |
| | | networks to multiple |
| | | physical networks |
| Address scope | address-scope | Address scopes extension. |
| Subnet service types | subnet-service-types | Provides ability to set |
| | | the subnet service_types |
| | | field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and |
| | | updated_at fields to all |
| | | Neutron resources that |
| | | have Neutron standard |
| | | attributes. |
| Neutron Service Type | service-type | API for retrieving service |
| Management | | providers for Neutron |
| | | advanced services |
| resources: subnet, | | more L2 and L3 resources. |
| subnetpool, port, router | | |
| Neutron Extra DHCP opts | extra_dhcp_opt | Extra options |
| | | configuration for DHCP. |
| | | For example PXE boot |
| | | options to DHCP clients |
| | | can be specified (e.g. |
| | | tftp-server, server-ip- |
| | | address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will |
| | | display the revision |
| | | number of neutron |
| | | resources. |
| Pagination support | pagination | Extension that indicates |
| | | that pagination is |
| | | enabled. |
| Sorting support | sorting | Extension that indicates |
| | | that sorting is enabled. |
| security-group | security-group | The security groups |
| | | extension. |
| RBAC Policies | rbac-policies | Allows creation and |
| | | modification of policies |
| | | that control tenant access |
| | | to resources. |
| standard-attr-description | standard-attr-description | Extension to add |
| | | descriptions to standard |
| | | attributes |
| Port Security | port-security | Provides port security |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address |
| | | pairs |
| project_id field enabled | project-id | Extension that indicates |
| | | that project_id field is |
| | | enabled. |
+---------------------------+---------------------------+----------------------------+
3. List the agents to verify successful launch of the neutron agents:
openstack network agent list
The output is:
+----------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+----------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0400c2f6-4d3b-44bc-89fa-99 | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
| 83cf853d-a2f2-450a-99d7-e9 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
| ec302e51-6101-43cf-9f19-88 | Linux bridge agent | compute | None | True | UP | neutron-linuxbridge-agent |
| fcb9bc6e-22b1-43bc-9054-27 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
+----------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
Make sure the State of every agent is UP!!!
1. Install the package:
apt install openstack-dashboard
2. Edit the /etc/openstack-dashboard/local_settings.py file (do not use the three cp/grep lines from the earlier steps to strip this file!!!):
vim /etc/openstack-dashboard/local_settings.py
# configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "controller"
# configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
# enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
# configure Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# configure user as the default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# for provider networks, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
# configure the time zone
TIME_ZONE = "Asia/Shanghai"
3. Edit /etc/apache2/conf-available/openstack-dashboard.conf:
# add the following line if it is not already present; if it is, leave it as-is
WSGIApplicationGroup %{GLOBAL}
4. Reload the web server configuration:
systemctl reload apache2.service
5. Verify the installation.
Verify operation of the dashboard: point a web browser at http://controller/horizon and authenticate as the admin or myuser user in the default domain.
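If no graphical browser is handy on the nodes, a rough reachability check can also be done from the controller itself (assuming curl is installed):
# any HTTP response, e.g. a redirect to the login page, means horizon is being served
curl -sI http://controller/horizon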
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. Create the network:
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
--share: allow all projects to use the virtual network.
--external: define the virtual network as external; to create an internal network instead, use --internal (the default is internal).
The output is:
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-03-14T14:37:39Z |
| description | |
| dns_domain | None |
| id | 54adb94a-4dce-437f-a33b-e7e2e7648173 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 4c7f48f1da5b494faaa66713686a7707 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 3 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| updated_at | 2017-03-14T14:37:39Z |
+---------------------------+--------------------------------------+
3. Create a subnet on the network:
openstack subnet create --network provider \
--allocation-pool start=192.168.222.101,end=192.168.222.250 \
--dns-nameserver 114.114.114.114 --gateway 192.168.222.1 \
--subnet-range 192.168.222.0/24 provider
--allocation-pool: set an allocation pool of IP addresses for the subnet, used together with start= and end=; repeat the option to set multiple pools.
start=: the first IP address of the allocation pool.
end=: the last IP address of the allocation pool.
--dns-nameserver: set the DNS server; within China, 114.114.114.114 is a common choice.
--gateway: set the gateway. Three forms are accepted: a specific IP address to use as the gateway (e.g. --gateway 192.168.9.1), auto to select a gateway address from the subnet itself (the default), or none for a subnet with no gateway (e.g. --gateway none).
--subnet-range: subnet range in CIDR notation (required if --subnet-pool is not specified, otherwise optional).
The output is:
Created a new subnet:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.222.101-192.168.222.250 |
| cidr | 192.168.222.0/24 |
| created_at | 2021-11-24T05:48:29Z |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 192.168.222.1 |
| host_routes | |
| id | e84b4972-c7fc-4ce9-9742-fdc845196ac5 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 1f816a46-7c3f-4ccf-8bf3-fe0807ddff8d |
| project_id | 496efd248b0c46d3b80de60a309177b5 |
| revision_number | 2 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| updated_at | 2021-11-24T05:48:29Z |
+-------------------+--------------------------------------+
The smallest default flavor consumes 512 MB of memory per instance. For environments whose compute nodes have less than 4 GB of memory, we recommend creating an m1.nano flavor that requires only 64 MB per instance. Use this flavor only with the CirrOS image, for testing purposes.
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
The output is:
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
1. Source the myuser credentials:
. demo-openrc
2. Generate a key pair and add the public key:
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
The output is:
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
| name | mykey |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+-------------+-------------------------------------------------+
3. Verify that the key pair was added:
openstack keypair list
The output is:
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
+-------+-------------------------------------------------+
Add rules to the default security group.
Permit ICMP (ping):
openstack security group rule create --proto icmp default
The output is:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-03-30T00:46:43Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 1946be19-54ab-4056-90fb-4ba606f19e66 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 3f714c72aed7442681cbfa895f4a68d3 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 89ff5c84-e3d1-46bb-b149-e621689f0696 |
| updated_at | 2017-03-30T00:46:43Z |
+-------------------+--------------------------------------+
Permit secure shell (SSH) access:
openstack security group rule create --proto tcp --dst-port 22 default
The output is:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-03-30T00:43:35Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 42bc2388-ae1a-4208-919b-10cf0f92bc1c |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 3f714c72aed7442681cbfa895f4a68d3 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 89ff5c84-e3d1-46bb-b149-e621689f0696 |
| updated_at | 2017-03-30T00:43:35Z |
+-------------------+--------------------------------------+
1. Confirm the prerequisites.
To launch an instance, you must specify at least a flavor, an image name, a network, a security group, a key, and an instance name.
Source the myuser credentials:
. demo-openrc
openstack flavor list
The output is:
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
openstack image list
The output is:
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 390eb5f7-8d49-41ec-95b7-68c0d5d54b34 | cirros | active |
+--------------------------------------+--------+--------+
openstack network list
The output is as follows **(note the ID of the provider network in this output)**:
+--------------------------------------+--------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------+--------------------------------------+
| 4716ddfe-6e60-40e7-b2a8-42e57bf3c31c | selfservice | 2112d5eb-f9d6-45fd-906e-7cabd38b7c7c |
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+--------------+--------------------------------------+
openstack security group list
The output is as follows:
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| dd2b614c-3dad-48ed-958b-b155a3b38515 | default | Default security group | a516b957032844328896baa01e0f906c |
+--------------------------------------+---------+------------------------+----------------------------------+
# This guide uses the default security group
2. Launch the instance and check it
Replace PROVIDER_NET_ID with the ID of the provider network recorded above:
openstack server create --flavor m1.nano --image cirros \
--nic net-id=PROVIDER_NET_ID --security-group default \
--key-name mykey provider-instance
# If you chose the provider network when creating networks and your environment contains only one network, you can omit the --nic option: OpenStack automatically selects the only available network.
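For example, in such a single-network environment the launch command reduces to the sketch below (same flavor, image, security group, key pair, and instance name as above; this variant is not shown in the original guide):
openstack server create --flavor m1.nano --image cirros \
  --security-group default --key-name mykey provider-instance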
openstack server list
The output is as follows:
+--------------------------------------+-------------------+--------+--------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------------------+--------+--------------------------+------------+
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=192.168.222.156 | cirros |
+--------------------------------------+-------------------+--------+--------------------------+------------+
# Note the IP address of this instance
3. Access the instance using the virtual console
openstack console url show provider-instance
The output is as follows:
+-------+---------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493 |
+-------+---------------------------------------------------------------------------------+
Open the URL in a web browser. If your web browser runs on a host that cannot resolve the controller hostname, replace controller with the IP address of the management interface on the controller node.
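For example, with the management IP used throughout this guide (10.0.0.11), the console URL above would become:
http://10.0.0.11:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493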
Default credentials for the cirros image:
Username: cirros
Password: gocubsgo
4. Verify connectivity
# From inside the instance, ping the gateway
ping 192.168.222.1
# From inside the instance, ping an external site
ping www.baidu.com
# From the controller node, ping the instance
ping 192.168.222.156
# From the controller node, connect to the instance over SSH
ssh cirros@192.168.222.156
The output of the last command is as follows:
The authenticity of host '192.168.222.156 (192.168.222.156)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.222.156' (RSA) to the list of known hosts.
If SSH cannot connect, try adding rules on the dashboard. Log in with the administrator account, click 项目 (Project) —> 网络 (Network) —> 安全组 (Security Groups) —> 管理规则 (Manage Rules) in turn, then add the three rules (an equivalent CLI sketch follows below).
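Since the original screenshot of the three rules is not reproduced here, the sketch below is only an assumption mirroring the ICMP and SSH rules created earlier, this time from the CLI:
# Allow ICMP (ping) and SSH ingress on the default security group
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default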
1. Create the database
Connect to the database server as the root user:
mysql
Create the cinder database:
MariaDB [(none)]> CREATE DATABASE cinder;
Grant proper access to the cinder database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
exit
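As an optional sanity check (a hypothetical step, not in the original guide), you can confirm the new account works before moving on:
# Log in as the cinder user with the password set above (123456);
# the output should include the cinder database
mysql -u cinder -p123456 -e "SHOW DATABASES;"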
2. Obtain the admin credentials to gain access to admin-only CLI commands
. admin-openrc
3. Create the service credentials
Create a cinder user:
openstack user create --domain default --password-prompt cinder
The output is as follows:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the cinder user:
openstack role add --project service --user cinder admin
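This command produces no output. If you want to confirm the assignment (an optional check, not part of the original guide), the role assignments can be listed:
# Should show the admin role for cinder on the service project
openstack role assignment list --user cinder --project service --names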
Create the cinderv2 and cinderv3 service entities:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
# The output is as follows
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# The output is as follows
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+
Create the Block Storage service API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
The output is as follows:
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
The output is as follows:
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
The output is as follows:
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
The output is as follows:
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
The output is as follows:
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
The output is as follows:
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
4. Install the software packages
apt install cinder-api cinder-scheduler
5. Edit the /etc/cinder/cinder.conf file
In the [database] section, configure database access:
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access. Check whether the auth_strategy option already exists in the [DEFAULT] section and whether there is already a [keystone_authtoken] section; add them yourself if not:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
[DEFAULT]
my_ip = 10.0.0.11
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
6. Populate the cinder database
su -s /bin/sh -c "cinder-manage db sync" cinder
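As an optional, hypothetical check (not in the original guide) that the sync actually created the schema:
# List the tables now present in the cinder database (password 123456 as set above)
mysql -u cinder -p123456 cinder -e "SHOW TABLES;"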
7. Edit the /etc/nova/nova.conf file
vim /etc/nova/nova.conf
In the opened file, find the [cinder] entry and set:
[cinder]
os_region_name = RegionOne
8. Restart the Compute API service
service nova-api restart
9. Restart the Block Storage services
service cinder-scheduler restart
service apache2 restart
Must read before installing the cinder volume service!!: This guide reuses the compute1 node as the storage node. If you would rather use a separate server as the storage node (which is also recommended), repeat Step 1 (or Step 2) of this guide on that machine, then carry out this step on the storage node.
1. Install the packages
apt install lvm2 thin-provisioning-tools
2. Create the LVM physical volume /dev/sdb
Here /dev/sdb is the device path of the disk. The partition must be smaller than 2 TB and use a DOS partition table (partitions smaller than 2 TB are generally DOS type). The partition can be carved from a whole disk: for example, the author used a 6 TB disk split into three partitions, sdb1, sdb2, and sdb3, of about 1.9 TB each, and created the LVM physical volume on sdb3, so every sdb in this step would actually be sdb3. For clarity, however, the step is written using sdb.
pvcreate /dev/sdb
# The output is
Physical volume "/dev/sdb" successfully created
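If the disk has not been partitioned yet, a minimal sketch is shown below (assuming a blank disk at /dev/sdb; the sizes and device names are illustrative, adjust them to your hardware):
# Create a DOS (MBR) partition table and one primary partition under 2 TB,
# then initialize that partition as the LVM physical volume instead of the whole disk
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 1900GiB
pvcreate /dev/sdb1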
3. Create the LVM volume group cinder-volumes
vgcreate cinder-volumes /dev/sdb
# The output is
Volume group "cinder-volumes" successfully created
4. Edit the lvm.conf file
vim /etc/lvm/lvm.conf
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
filter = [ "a/sdb/", "r/.*/"]
If your storage node uses LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:
filter = [ "a/sda/", "r/.*/"]
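After editing the filter, a quick optional check (hypothetical, not in the original guide) that LVM still sees the devices you expect:
# Both commands should list /dev/sdb and the cinder-volumes group without warnings
pvs
vgs cinder-volumes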
5. Install the packages
apt install cinder-volume
6. Edit the /etc/cinder/cinder.conf file
In the [database] section, configure database access:
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
In the [DEFAULT] section, configure the my_ip option. Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node (10.0.0.31 in this guide, since compute1 doubles as the storage node):
[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
In the [lvm] section, configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = tgtadm
In the [DEFAULT] section, enable the LVM back end:
[DEFAULT]
enabled_backends = lvm
In the [DEFAULT] section, configure the location of the Image service API:
[DEFAULT]
glance_api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
7. Restart the Block Storage volume service and its dependencies
service tgt restart
service cinder-volume restart
1. Obtain the admin credentials to gain access to admin-only CLI commands
. admin-openrc
2. List the service components to verify the successful launch of each process
openstack volume service list
The output is as follows:
+------------------+------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated_at |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2016-09-30T02:27:41.000000 |
| cinder-volume | block@lvm | nova | enabled | up | 2016-09-30T02:27:46.000000 |
| cinder-backup | controller | nova | enabled | up | 2016-09-30T02:27:41.000000 |
+------------------+------------+------+---------+-------+----------------------------+
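As a final smoke test (a typical next step, not covered in the original guide), you can create and then delete a small volume; if both commands succeed, the LVM back end is working end to end:
# Create a 1 GB test volume, check its status, then remove it
openstack volume create --size 1 test-volume
openstack volume list
openstack volume delete test-volume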