
Deploying an OpenStack Juno (ML2+VLAN) ALL-IN-ONE Environment on CentOS 7


Version=Juno-2014.2.1 CentOS 7 minimal


Author: Yi Ming (oyym)   QQ: 517053733

Email: oyym.mv@gmail.com


This walkthrough deploys an OpenStack Juno ALL-IN-ONE environment on a minimal CentOS 7 installation. A misconfiguration partway through can be hard to track down and may force a full rebuild of the test environment, so the tests are run inside a virtual machine, which has several advantages:

  1. The VM can be copied and backed up at any time: back up once the base environment is configured, and again after each component is deployed successfully. If a later step fails, you can roll back to the most recent backup instead of starting over.
  2. Physical NICs are a limited resource and some deployment scenarios are awkward to simulate on real hardware, whereas a VM can be given as many virtual NICs as needed.
First we need a physical host with Internet access on which to build the VM test environment, so the VM can install the required tools and cloud platform components. Installing the host OS and building the VM environment are not covered here. The environment is summarized in the figure below:


The VM (CentOS 7) in the figure above is the machine used below to deploy and test the OpenStack Juno-2014.2.1 platform. The VM needs a spare partition reserved for the LVM backend of the block storage service. If the VM image has no spare partition, use qemu-img resize to grow the vda disk and create a new partition in the added space.
OpenStack Juno has 11 core components. This deployment skips the largely standalone Object Storage (Swift), the Database service (Trove), and the Data Processing service (Sahara), which has only recently joined the core set. The remaining 8 core components are deployed: Identity (Keystone), Image (Glance, file backend), Networking (Neutron, ML2+VLAN), Compute (Nova), Block Storage (Cinder, LVM backend), the web dashboard (Horizon), Telemetry (Ceilometer), and Orchestration (Heat). With the deployment goals stated, the VM configuration steps follow.

Section 1: Base environment deployment and configuration

The VM runs a minimal CentOS 7 x86_64 installation. Perform the basic system and network configuration as follows.

1. Enable package caching; the cached packages can later be used to build a custom YUM repository

vi /etc/yum.conf 
[main]
...
cachedir=/var/cache/yum/packages
keepcache=1
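With keepcache=1, every RPM that yum installs is kept under the cachedir set above. As a minimal sketch of how those cached packages can later feed a custom repository (the repo id local-cache and the file name local.repo are arbitrary choices of mine; createrepo must still be run against the directory on the real host):

```shell
# generate a .repo file pointing at the cached-package directory;
# /var/cache/yum/packages matches the cachedir configured above
repo_dir=/var/cache/yum/packages
cat > local.repo <<EOF
[local-cache]
name=Local cached packages
baseurl=file://$repo_dir
enabled=1
gpgcheck=0
EOF
# on the target host: run createrepo "$repo_dir", then copy
# local.repo into /etc/yum.repos.d/
grep -c "baseurl=file://$repo_dir" local.repo
```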
2. Configure the network (three NICs, for the management, external, and data networks)
vi /etc/sysconfig/network-scripts/ifcfg-eth0  # management network; other parameters unchanged
DEVICE=eth0
NAME=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.100.101
PREFIX=24

vi /etc/sysconfig/network-scripts/ifcfg-eth1  # external network; other parameters unchanged
DEVICE=eth1
NAME=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

vi /etc/sysconfig/network-scripts/ifcfg-eth2  # data (tenant) network; other parameters unchanged; this NIC is optional in an ALL-IN-ONE environment
DEVICE=eth2  
NAME=eth2  
TYPE=Ethernet  
ONBOOT=yes  
BOOTPROTO=none
3. Configure the hostname and local name resolution
vi /etc/hosts
...
172.16.100.101 controller controller.domain.com

vi /etc/hostname
controller.domain.com

# log in again over ssh; the hostname has changed
[root@controller ~]# 
4. Configure DNS; installing packages requires working name resolution
vi /etc/NetworkManager/NetworkManager.conf
[main]
...
dns=none

systemctl restart NetworkManager.service

vi /etc/resolv.conf
...
nameserver 114.114.114.114
nameserver 8.8.8.8

# verify that name resolution works
ping www.baidu.com
PING www.a.shifen.com (115.239.211.112) 56(84) bytes of data.
64 bytes from 115.239.211.112: icmp_seq=1 ttl=53 time=110 ms
64 bytes from 115.239.211.112: icmp_seq=3 ttl=53 time=114 ms
64 bytes from 115.239.211.112: icmp_seq=4 ttl=53 time=10.8 ms
5. Set SELinux to permissive mode (setenforce 0 applies it immediately without a reboot)
vi /etc/selinux/config
...
SELINUX=permissive
6. Back up the VM image
a. Shut down the VM
shutdown -h now #or, on the host: virsh destroy juno
b. Back up the VM (on the host)
cp juno.img juno_pure.img
c. Start the VM again, ready for the next section
virsh start juno

Section 2: Deploying and configuring the time service (NTP), the Juno package repository, the database (MySQL/MariaDB), and the message queue (RabbitMQ)

1. Install and configure the NTP service

1) Install the NTP package
yum install ntp -y

2) Edit the configuration file /etc/ntp.conf
vi /etc/ntp.conf
...
# replace 172.16.100.1 with your NTP server address and comment out the default servers you do not need
server 172.16.100.1 iburst
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify

3) Enable the service at boot, start it, and check its status
systemctl enable ntpd.service
systemctl start ntpd.service
systemctl status ntpd.service
2. Configure the OpenStack Juno package repository
1) Install the yum priorities plugin (provides the priority repo option)
yum install yum-plugin-priorities -y

2) Configure the OpenStack Juno repositories
yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y
yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm -y

3) Inspect the repo file to confirm the configuration
cat /etc/yum.repos.d/rdo-release.repo 
[openstack-juno]
name=OpenStack Juno Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno

4) Upgrade the system; if multiple kernels remain installed after the upgrade, remove the old ones
yum provides '*/applydeltarpm'
yum upgrade -y

5) Install the package that manages SELinux policy for OpenStack services automatically; this can take a while
yum install openstack-selinux -y
3. Database (MySQL/MariaDB) deployment and configuration

1) Install the database server packages and the Python MySQL driver
yum install mariadb mariadb-server MySQL-python -y

2) Edit the configuration file /etc/my.cnf as follows
a. In the [mysqld] section, set bind-address to the management network address 172.16.100.101 so that other cloud nodes can reach the database service
[mysqld]
...
bind-address = 172.16.100.101
b. In the [mysqld] section, set tuning options and the UTF-8 character set
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

3) Finish the database setup
a. Enable the service at boot and start it
systemctl enable mariadb.service
systemctl start mariadb.service
systemctl status mariadb.service

b. Secure the database installation
mysql_secure_installation
Enter current password for root (enter for none):[Enter]
Set root password? [Y/n] Y
New password: openstack
Re-enter new password:openstack
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
4. Message queue (RabbitMQ) deployment and configuration

1) Install the message queue package
yum install rabbitmq-server -y

2) Enable at boot and start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service

3) [Optional] Change the guest user's password
rabbitmqctl change_password guest guest
# the password is left unchanged in this test, so this step is skipped
5. Back up the VM image
a. Shut down the VM
shutdown -h now #or, on the host: virsh destroy juno
b. Back up the VM (on the host; use a new name so the section 1 backup juno_pure.img is not overwritten)
cp juno.img juno_base.img
c. Start the VM again, ready for the next section
virsh start juno

Section 3: Identity service (Keystone) deployment, configuration, and verification

1. Preparation (database setup for the Keystone service)
1) Connect to the database with the mysql client
mysql -u root -popenstack
MariaDB [(none)]> 

2) Create the keystone database
CREATE DATABASE keystone;

3) Grant privileges on the keystone database
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystoneDB';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystoneDB';

4) Exit the database session
MariaDB [(none)]> exit
2. Deploy and configure the Keystone service components
1) Install the identity service packages
yum install openstack-keystone python-keystoneclient -y

# back up the original configuration files before editing
mkdir /etc/keystone/.ori
cp -r /etc/keystone/* /etc/keystone/.ori/
rm -f /etc/keystone/default_catalog.templates

2) Edit the configuration file /etc/keystone/keystone.conf as follows
a. In the [DEFAULT] section, set the bootstrap admin token (any string will do)
[DEFAULT]
...
admin_token = openstack_admin_token
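Any opaque string works here, but openstack_admin_token is easy to guess; a random value is safer. A minimal sketch (the token variable name is mine):

```shell
# generate a random 20-hex-character value suitable for admin_token
token=$(openssl rand -hex 10)
echo "admin_token = $token"
```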
b. In the [database] section, configure database access
[database]
...
connection = mysql://keystone:keystoneDB@controller/keystone
c. In the [token] section, set the UUID token provider and the SQL persistence driver
[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token

3) Create the certificates and keys and restrict access to the related files
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

4) Populate the identity service database tables
su -s /bin/sh -c "keystone-manage db_sync" keystone
3. Finish the identity service setup
1) Enable at boot and start the service
systemctl enable openstack-keystone.service
systemctl start openstack-keystone.service
systemctl status openstack-keystone.service

2) Add a crontab job that periodically purges expired tokens
(crontab -l -u keystone 2>&1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone 
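The one-liner above is idempotent: the grep guard appends the job only when it is not already present, so re-running it never duplicates the entry. A minimal sketch of the pattern, using a temporary file to stand in for the keystone user's crontab:

```shell
# append a token_flush job only if it is not already present
crontab_file=$(mktemp)
add_job() {
  grep -q token_flush "$crontab_file" || \
    echo '@hourly /usr/bin/keystone-manage token_flush' >> "$crontab_file"
}
add_job
add_job                       # second call is a no-op thanks to the grep guard
lines=$(wc -l < "$crontab_file")
echo "$lines"                 # the job was appended exactly once
rm -f "$crontab_file"
```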
4. Create tenants, users, and roles (with verification)
1) Export the bootstrap environment variables
export OS_SERVICE_TOKEN=openstack_admin_token
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

2) Create the admin tenant, user, and role
a. Create the admin tenant
keystone tenant-create --name admin --description "Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 7bf30749e195436c899d42dd879505bb |
|     name    |              admin               |
+-------------+----------------------------------+
b. Create the admin user
keystone user-create --name admin --pass admin --email admin@ostack.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         admin@ostack.com         |
| enabled  |               True               |
|    id    | aad928d305c44810b215f059168b44f3 |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+
c. Create the admin role
keystone role-create --name admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | a6a3766bf320473095ef4509fbf5e06a |
|   name   |              admin               |
+----------+----------------------------------+
d. Grant the admin role to the admin user in the admin tenant
keystone user-role-add --user admin --tenant admin --role admin

3) Create the demo tenant and user
a. Create the demo tenant
keystone tenant-create --name demo --description "Demo Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Demo Tenant            |
|   enabled   |               True               |
|      id     | 9f77ec788b91475b869abba7f1091017 |
|     name    |               demo               |
+-------------+----------------------------------+
b. Create the demo user in the demo tenant; the _member_ role is created and granted automatically
keystone user-create --name demo --tenant demo --pass demo --email demo@ostack.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         demo@ostack.com          |
| enabled  |               True               |
|    id    | f6895f5951fe4cdd81878195e15ef3f2 |
|   name   |               demo               |
| tenantId | 9f77ec788b91475b869abba7f1091017 |
| username |               demo               |
+----------+----------------------------------+

5. Create the service tenant, service catalog entries, and API endpoints

1) Create the tenant that holds the service accounts
keystone tenant-create --name service --description "Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | ec2111f15290479f85567a93db505613 |
|     name    |             service              |
+-------------+----------------------------------+

2) Create the identity service catalog entry
keystone service-create --name keystone --type identity --description "OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | fd5d6decc3e04f798cb782c9a1d62deb |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+

3) Create the Keystone API endpoint
keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') \
                         --publicurl http://controller:5000/v2.0 \
                         --internalurl http://controller:5000/v2.0 \
                         --adminurl http://controller:35357/v2.0 \
                         --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | c7f0c924de324b6ebbd9fd80ccf8f79f |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | fd5d6decc3e04f798cb782c9a1d62deb |
+-------------+----------------------------------+
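The `--service-id $(keystone service-list | awk '/ identity / {print $2}')` substitution above just pulls the ID column out of the service table. A minimal sketch of what the awk filter does, fed a canned table instead of live keystone output (the sample row is hypothetical):

```shell
# extract the id of the row whose type column is "identity";
# in the table layout, $1 is "|" and $2 is the id column
sample='+----------------------------------+----------+----------+
|                id                | name     | type     |
+----------------------------------+----------+----------+
| fd5d6decc3e04f798cb782c9a1d62deb | keystone | identity |
+----------------------------------+----------+----------+'
svc_id=$(echo "$sample" | awk '/ identity / {print $2}')
echo "$svc_id"
```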

6. Unset the bootstrap variables and create environment files holding the admin and demo user credentials

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

vi adminrc.sh
#!/bin/bash
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v2.0

vi demorc.sh
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:35357/v2.0
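Loading one of these files with `source` exports the credentials into the current shell, where the keystone client picks them up. A minimal sketch (recreating adminrc.sh with the same contents as above, then sourcing it inside a command substitution so the variables do not leak into the outer shell):

```shell
# write the admin credentials file and confirm what it exports
cat > adminrc.sh <<'EOF'
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF
loaded=$(. ./adminrc.sh && echo "$OS_USERNAME @ $OS_AUTH_URL")
echo "$loaded"
```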

7. Back up the VM image

a. Shut down the VM
shutdown -h now #or, on the host: virsh destroy juno
b. Back up the VM (on the host)
cp juno.img juno_keystone.img
c. Start the VM again, ready for the next section
virsh start juno

Section 4: Image service (Glance) deployment, configuration, and verification

The OpenStack Image service lets users discover, register, and retrieve virtual machine image templates. It provides a REST API for querying image metadata and retrieving the images themselves.

1. Preparation

1) Create the image service database
a. Connect to the MySQL server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]> 
b. Create the glance database
CREATE DATABASE glance;
c. Grant privileges on the glance database
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glanceDB';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glanceDB';
d. Exit the database session
MariaDB [(none)]> exit

2) Configure and load the admin environment variables
a. Create the admin environment file adminrc.sh (this file is also used in later sections)
vi adminrc.sh
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v2.0

chmod +x adminrc.sh
b. Load the admin environment variables
source adminrc.sh

3) Create the Glance service catalog entry and API endpoints in the identity service
a. Create the glance user
keystone user-create --name glance --pass glance
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | fa7cd8bfd95e46878e147fa3f7715a20 |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+
b. Add the glance user to the service tenant and grant it the admin role
keystone user-role-add --user glance --tenant service --role admin
c. Create the glance service catalog entry
keystone service-create --name glance --type image --description "OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | d1b7057820824320b8d44d4e5f7d1495 |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
d. Create the glance API endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ image / {print $2}') \
--publicurl http://controller:9292 \
--internalurl http://controller:9292 \
--adminurl http://controller:9292 \
--region regionOne 
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9292      |
|      id     | f839ae2a4d8e44149f5800004fba50cf |
| internalurl |      http://controller:9292      |
|  publicurl  |      http://controller:9292      |
|    region   |            regionOne             |
|  service_id | d1b7057820824320b8d44d4e5f7d1495 |
+-------------+----------------------------------+

2. Deploy and configure the Glance service components

1) Install the packages
yum install openstack-glance python-glanceclient -y

# back up the original configuration files
mkdir /etc/glance/.ori
cp -r /etc/glance/* /etc/glance/.ori/

2) Edit the configuration file /etc/glance/glance-api.conf as follows
a. In the [database] section, configure database access
[database] 
...
connection = mysql://glance:glanceDB@controller/glance
b. In the [keystone_authtoken] and [paste_deploy] sections, configure identity service access
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance


[paste_deploy]
...
flavor = keystone
c. In the [DEFAULT] and [glance_store] sections, configure the storage backend and image directory
[DEFAULT]
...
default_store = file

[glance_store]
...
filesystem_store_datadir = /var/lib/glance/images/
d. [Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

3) Edit the configuration file /etc/glance/glance-registry.conf as follows
a. In the [database] section, configure database access
[database]
...
connection = mysql://glance:glanceDB@controller/glance
b. In the [keystone_authtoken] and [paste_deploy] sections, configure identity service access
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance

[paste_deploy]
...
flavor = keystone
c. [Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

4) Populate the glance database tables
su -s /bin/sh -c "glance-manage db_sync" glance
3. Finish the setup
1) Enable at boot and start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
4. Verify the service
1) Upload an image
wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

source adminrc.sh

glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare  --is-public True --progress 
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | 2015-01-09T07:59:19                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 162a09c8-3d83-4d8c-9e7d-e8410123ac22 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.3-x86_64                  |
| owner            | 7bf30749e195436c899d42dd879505bb     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| updated_at       | 2015-01-09T07:59:19                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
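The checksum field above is the MD5 digest of the uploaded image file, so a download can be verified locally by re-hashing it and comparing against the value glance reports. A minimal sketch on a stand-in file (image.img is a placeholder for cirros-0.3.3-x86_64-disk.img):

```shell
# record a file's md5, then re-hash and compare, as you would after
# re-downloading an image whose glance checksum is known
printf 'stand-in image bytes' > image.img
expected=$(md5sum image.img | awk '{print $1}')
actual=$(md5sum image.img | awk '{print $1}')
if [ "$actual" = "$expected" ]; then result="image OK"; else result="checksum mismatch"; fi
echo "$result"
rm -f image.img
```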

2) List the images
glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 162a09c8-3d83-4d8c-9e7d-e8410123ac22 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
5. Back up the VM image
a. Shut down the VM
shutdown -h now #or, on the host: virsh destroy juno
b. Back up the VM (on the host)
cp juno.img juno_glance.img
c. Start the VM again, ready for the next section
virsh start juno

Section 5: Compute service (Nova) deployment, configuration, and verification

The OpenStack Compute service hosts and manages cloud computing systems; it is one of the most important OpenStack components.

1. Preparation

1) Create the Nova database
a. Connect to the MySQL server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]>
b. Create the nova database
CREATE DATABASE nova;
c. Grant privileges on the nova database
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'novaDB';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novaDB';
d. Exit the database session
MariaDB [(none)]> exit

2) Create the identity service credentials
a. Load the admin environment variables
source adminrc.sh
b. Create the nova service user
keystone user-create --name nova --pass nova
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | e2ca6e3f2a834ec3b4617bc78a751757 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+
c. Add it to the service tenant and grant it the admin role
keystone user-role-add --user nova --tenant service --role admin
d. Create the nova service catalog entry
keystone service-create --name nova --type compute --description "OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | 6525556c0ae34617813e20279abdad7c |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
e. Create the nova API endpoint
keystone endpoint-create \
                         --service-id $(keystone service-list | awk '/ compute / {print $2}') \
                         --publicurl http://controller:8774/v2/%\(tenant_id\)s \
                         --internalurl http://controller:8774/v2/%\(tenant_id\)s \
                         --adminurl http://controller:8774/v2/%\(tenant_id\)s \
                         --region regionOne 
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8774/v2/%(tenant_id)s |
|      id     |     0b755ab6cb31443a9a1ed49fd45f07d2    |
| internalurl | http://controller:8774/v2/%(tenant_id)s |
|  publicurl  | http://controller:8774/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     6525556c0ae34617813e20279abdad7c    |
+-------------+-----------------------------------------+
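The backslashes in `%\(tenant_id\)s` matter: unquoted parentheses are shell metacharacters, so without escaping (or quoting) the command would be a syntax error. The literal string `%(tenant_id)s` is what nova substitutes the tenant ID into at request time. A minimal sketch:

```shell
# the escaped parentheses reach the command as literal characters
url=$(printf '%s\n' http://controller:8774/v2/%\(tenant_id\)s)
echo "$url"
```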

2. Deploy and configure the Nova service components

1) Install the controller-side packages
yum install openstack-nova-api openstack-nova-cert \
openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y

2) Install the compute packages (the ones normally installed on compute nodes)
yum install openstack-nova-compute sysfsutils -y

3) Edit the configuration file /etc/nova/nova.conf as follows
a. In the [database] section, configure database access; add the section if it is not present
[database]
...
connection = mysql://nova:novaDB@controller/nova
b. In the [DEFAULT] section, configure message queue access
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
c. In the [DEFAULT] and [keystone_authtoken] sections, configure identity service access
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova
d. In the [DEFAULT] section, set my_ip to the controller's management IP address
[DEFAULT]
...
my_ip = 172.16.100.101
e. In the [DEFAULT] section, configure the VNC proxy
[DEFAULT]
...
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.16.100.101
novncproxy_base_url = http://172.16.100.101:6080/vnc_auto.html
f. In the [glance] section, configure the image service host
[glance]
...
host = controller
g. [Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

4) Populate the nova database tables
su -s /bin/sh -c "nova-manage db sync" nova

3. Finish the setup

1) Check whether hardware virtualization acceleration is supported
egrep -c '(vmx|svm)' /proc/cpuinfo
0
Because this experiment runs inside a VM, the command returns 0, meaning acceleration is not available, so the following setting is required (it is unnecessary on physical hardware with acceleration support):
vi /etc/nova/nova.conf
[libvirt]
...
virt_type = qemu

2) Enable at boot and start the services
systemctl enable openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service

3) Configure the firewall
a. Open the novncproxy service port
touch /etc/firewalld/services/novncproxy.xml
vi /etc/firewalld/services/novncproxy.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Virtual Network Computing Proxy Server (NOVNC Proxy)</short>
  <description>provide  NOVNC proxy server with direct access. </description>
  <port protocol="tcp" port="6080"/>
</service>

firewall-cmd --permanent --add-service=novncproxy
firewall-cmd --reload

b. Open the VNC server ports 5900-5999
vi /usr/lib/firewalld/services/vnc-server.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Virtual Network Computing Server (VNC)</short>
  <description>A VNC server provides an external accessible X session. Enable this option if you plan to provide a VNC server with direct access. The access will be possible for displays :0 to :3. If you plan to provide access with SSH, do not open this option and use the via option of the VNC viewer.</description>
  <port protocol="tcp" port="5900-5999"/>
</service>

firewall-cmd --permanent --add-service=vnc-server
firewall-cmd --reload

4. Verify the service

1) Check each nova log file for errors
tail -f /var/log/nova/nova-api.log
tail -f /var/log/nova/nova-scheduler.log
tail -f /var/log/nova/nova-cert.log
tail -f /var/log/nova/nova-conductor.log
tail -f /var/log/nova/nova-manage.log
tail -f /var/log/nova/nova-consoleauth.log
tail -f /var/log/nova/nova-novncproxy.log
tail -f /var/log/nova/nova-compute.log

2) Load the admin environment variables
source adminrc.sh

3) List the nova service components and confirm that each one started successfully
nova service-list
+----+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 2  | nova-consoleauth | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 3  | nova-cert        | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 4  | nova-scheduler   | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 5  | nova-compute     | controller.mydomain | nova     | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
+----+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+

4) Verify that the identity and image service connections are correct
nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 162a09c8-3d83-4d8c-9e7d-e8410123ac22 | cirros-0.3.3-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
5. Back up the VM image
a. Shut down the VM
shutdown -h now #or, on the host: virsh destroy juno
b. Back up the VM (on the host)
cp juno.img juno_nova.img
c. Start the VM again, ready for the next section
virsh start juno

Section 6: Networking service (Neutron, ML2+VLAN) deployment, configuration, and verification

The OpenStack Networking service lets you create networks and attach devices managed by other OpenStack services to them. Its plugin mechanism provides flexibility in both architecture and deployment.

1. Preparation

1) Create the networking service database
a. Connect to the MySQL server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]> 
b. Create the neutron database
CREATE DATABASE neutron;
c. Grant privileges on the neutron database
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutronDB';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutronDB';
d. Exit the database session
MariaDB [(none)]> exit

2) Create the Neutron service credentials
a. Load the admin environment variables
source adminrc.sh
b. Create the neutron user
keystone user-create --name neutron --pass neutron
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 765b558a83b84d2da07905444bb629b2 |
|   name   |             neutron              |
| username |             neutron              |
+----------+----------------------------------+
c. Add it to the service tenant and grant it the admin role
keystone user-role-add --user neutron --tenant service --role admin
d. Create the neutron service catalog entry
keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       OpenStack Networking       |
|   enabled   |               True               |
|      id     | 581b0cb476d74666957d9af50a264497 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+
e. Create the neutron API endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9696      |
|      id     | c7b4e643fa844e6bbf98af6134330c7f |
| internalurl |      http://controller:9696      |
|  publicurl  |      http://controller:9696      |
|    region   |            regionOne             |
|  service_id | 581b0cb476d74666957d9af50a264497 |
+-------------+----------------------------------+

2. Deploy and configure the networking service components

1) Set kernel networking parameters
a. Edit /etc/sysctl.conf and set the following
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
b. Apply the settings
sysctl -p
#output
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0

2) Install the component packages
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which openstack-neutron-openvswitch -y

# back up the original configuration files
mkdir /etc/neutron/.ori
cp -r /etc/neutron/* /etc/neutron/.ori/

3) Edit the configuration file /etc/neutron/neutron.conf as follows
a. In the [database] section, configure database access
[database]
...
connection = mysql://neutron:neutronDB@controller/neutron
b. In the [DEFAULT] section, configure message queue access
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
c. In the [DEFAULT] and [keystone_authtoken] sections, configure identity service access
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken] 
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
d. In the [DEFAULT] section, enable the ML2 plugin, the router service, and overlapping IPs
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
e. In the [DEFAULT] section, configure neutron to notify nova of network topology changes
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = ec2111f15290479f85567a93db505613
nova_admin_password = nova
f. [Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

4) Edit the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini as follows
a. In the [ml2] section, enable the flat and VLAN type drivers, VLAN tenant networks, and the OVS mechanism driver
[ml2]
...
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
b. In the [ml2_type_flat] section, configure the flat network mapping
[ml2_type_flat]
...
flat_networks = external
c. In the [ml2_type_vlan] section, configure the VLAN ID range
[ml2_type_vlan]
...
network_vlan_ranges = physnet1:10:1000
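The value `physnet1:10:1000` means: on the physical network labeled physnet1, tenant networks may use VLAN IDs 10 through 1000 inclusive. A minimal sketch of how the entry breaks apart (the variable names are mine):

```shell
# split a network_vlan_ranges entry into its three fields
entry='physnet1:10:1000'
IFS=: read -r physnet vlan_min vlan_max <<EOF
$entry
EOF
echo "physical network $physnet, VLAN IDs $vlan_min-$vlan_max"
```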
d. In the [securitygroup] section, enable security groups, enable ipset, and set the OVS hybrid iptables firewall driver
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
e. In the [ovs] section, set the VLAN ranges, the integration bridge, and the bridge mappings (physnet1 to br-srv for tenant VLAN traffic, external to br-ex for the flat external network)
[ovs]
...
network_vlan_ranges = physnet1:10:1000
# tunnel_id_ranges =
integration_bridge = br-int
bridge_mappings = physnet1:br-srv,external:br-ex

5) Edit the configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini as follows
a. In the [ovs] section, configure the following
tenant_network_type = vlan
network_vlan_ranges = physnet1:10:1000
integration_bridge = br-int
bridge_mappings = physnet1:br-srv,external:br-ex

6) Edit the configuration file /etc/neutron/l3_agent.ini as follows
a. In the [DEFAULT] section, set the interface driver, enable network namespaces, and set the external network bridge
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex

b. [Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

7) Edit the configuration file /etc/neutron/dhcp_agent.ini as follows
a. In the [DEFAULT] section, set the interface and DHCP drivers and enable namespaces
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
b. [Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

8)	Edit the configuration file /etc/neutron/metadata_agent.ini and complete the following steps
a.	In the [DEFAULT] section, configure Identity service access
[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
b.	In the [DEFAULT] section, configure the metadata host address
[DEFAULT]
...
nova_metadata_ip = controller
c.	In the [DEFAULT] section, configure the metadata proxy shared secret
[DEFAULT]
...
metadata_proxy_shared_secret = lkl_metadata_secret
d.	In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

9)	Edit the configuration file /etc/nova/nova.conf to use the Networking service, completing the following steps
a.	In the [DEFAULT] section, configure the APIs and drivers
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
b.	In the [neutron] section, configure authentication for the Compute networking API
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = neutron
c.	In the [neutron] section, enable the metadata proxy and configure the shared secret
[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = lkl_metadata_secret

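The metadata proxy only works if the neutron metadata agent and nova.conf share the same secret; a sketch that compares the two values, using scratch copies standing in for the live /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf:

```shell
# Sketch: confirm metadata_proxy_shared_secret matches in both files.
# Scratch copies are used here; substitute the real paths on a live system.
NEUTRON_MD=$(mktemp); NOVA_MD=$(mktemp)
echo 'metadata_proxy_shared_secret = lkl_metadata_secret' > "$NEUTRON_MD"
echo 'metadata_proxy_shared_secret = lkl_metadata_secret' > "$NOVA_MD"
sa=$(awk -F' *= *' '/^metadata_proxy_shared_secret/ {print $2}' "$NEUTRON_MD")
sb=$(awk -F' *= *' '/^metadata_proxy_shared_secret/ {print $2}' "$NOVA_MD")
[ "$sa" = "$sb" ] && echo "shared secret matches"
```

A mismatch here is a common cause of instances failing to reach the metadata service at 169.254.169.254.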
10)	Configure the Open vSwitch (OVS) service
a.	Enable the OVS service to start at boot, then start it
systemctl enable openvswitch.service
systemctl start openvswitch.service
systemctl status openvswitch.service
b.	Add the bridges
ovs-vsctl add-br br-ex
ovs-vsctl add-br br-srv
c.	Add ports to the bridges, attaching the external and data physical network interfaces
ovs-vsctl add-port br-ex eth1
ovs-vsctl add-port br-srv eth2

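For reference, the bridge_mappings value set earlier is a comma-separated list of physnet:bridge pairs; a sketch of how it decomposes (illustration only, no OVS commands involved):

```shell
# Sketch: split the bridge_mappings value into its physnet/bridge pairs,
# mirroring how the OVS agent interprets the option.
mappings="physnet1:br-srv,external:br-ex"
parsed=$(echo "$mappings" | tr ',' '\n' | awk -F: '{print "physnet=" $1 " bridge=" $2}')
echo "$parsed"
```

Each physical network name on the left must also appear in network_vlan_ranges (for VLAN networks) or flat_networks (for flat networks), and each bridge on the right must exist in OVS.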
3.  Complete the installation and configuration

1)	Create the plugin symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2)	Work around a packaging bug: the OVS agent init script looks explicitly for the Open vSwitch plugin configuration file rather than the /etc/neutron/plugin.ini symlink that points to the ML2 plugin configuration file. Run the following commands to fix this:
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service

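The sed fix can be rehearsed on a scratch copy before editing the real /usr/lib/systemd/system/neutron-openvswitch-agent.service (the ExecStart line below is illustrative):

```shell
# Sketch: apply the documented sed substitution to a scratch copy of the
# unit file and confirm the path now points at the plugin.ini symlink.
UNIT=$(mktemp)
echo 'ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' > "$UNIT"
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' "$UNIT"
grep -q '/etc/neutron/plugin.ini' "$UNIT" && echo "unit file patched"
```

After editing the real unit file, run `systemctl daemon-reload` so systemd picks up the change.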
3)	Populate the neutron database tables
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

4)	Enable the services to start at boot, then start them
systemctl enable neutron-server.service
systemctl start neutron-server.service
systemctl status neutron-server.service

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service
systemctl start openvswitch.service
systemctl start neutron-openvswitch-agent.service

systemctl status neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

5)	Restart the related Compute services
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

4.  Verify the service

1)	Load the admin user credentials
source adminrc.sh

2)	List the loaded extensions to verify that the neutron-server service started correctly
neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

3)	List the agents to verify that the neutron agents started correctly
neutron agent-list
+--------------------------------------+--------------------+---------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------+-------+----------------+---------------------------+
| 3b71393a-f9e6-40d0-bed3-3998c07356de | Metadata agent     | controller.mydomain | :-)   | True           | neutron-metadata-agent    |
| c4d6fb91-73ce-416f-8eaa-9d74362ada48 | L3 agent           | controller.mydomain | :-)   | True           | neutron-l3-agent          |
| d41eca3c-cdb2-4105-99aa-b30599af3cf7 | Open vSwitch agent | controller.mydomain | :-)   | True           | neutron-openvswitch-agent |
| ef9c4470-c878-4424-9bf0-362c990a5397 | DHCP agent         | controller.mydomain | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+---------------------+-------+----------------+---------------------------+

5.  Initialize Networking service data

1)  Load the admin user credentials
source adminrc.sh

2)	Create the external network
neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 89d5b94f-05f9-4b36-b561-67164f1560ea |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 7bf30749e195436c899d42dd879505bb     |
+---------------------------+--------------------------------------+

3)	Create a subnet on the external network
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=172.16.100.200,end=172.16.100.250 \
--disable-dhcp --gateway 172.16.100.10 172.16.100.0/24
Created a new subnet:
+-------------------+-----------------------------------------------------+
| Field             | Value                                                |
+-------------------+-----------------------------------------------------+
| allocation_pools  | {"start": "172.16.100.200", "end": "172.16.100.250"} |
| cidr              | 172.16.100.0/24                                      |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | 172.16.100.10                                        |
| host_routes       |                                                      |
| id                | 2f05c3c1-918a-4933-8fa7-61f2cfad85db                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | ext-subnet                                           |
| network_id        | 89d5b94f-05f9-4b36-b561-67164f1560ea                 |
| tenant_id         | 7bf30749e195436c899d42dd879505bb                     |
+-------------------+-----------------------------------------------------+

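As a quick sanity check on the allocation pool above, the number of usable floating IPs is just the inclusive size of the start..end range:

```shell
# Sketch: count the addresses in the allocation pool
# 172.16.100.200 through 172.16.100.250 (inclusive).
pool_start=200
pool_end=250
pool_size=$(( pool_end - pool_start + 1 ))
echo "$pool_size floating IPs available"   # 51 floating IPs available
```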
4)	Create the tenant network
a.	Load the demo user credentials
source demorc.sh
b.	Create the tenant network
neutron net-create demo-net
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 8319e8cb-acd9-4674-b71b-f966df667b03 |
| name            | demo-net                             |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | 9f77ec788b91475b869abba7f1091017     |
+-----------------+--------------------------------------+
c.	Create a subnet on the tenant network
neutron subnet-create demo-net --name demo-subnet --gateway 192.168.100.1 192.168.100.0/24
Created a new subnet:
+-------------------+-----------------------------------------------------+
| Field             | Value                                                |
+-------------------+-----------------------------------------------------+
| allocation_pools  | {"start": "192.168.100.2", "end": "192.168.100.254"} |
| cidr              | 192.168.100.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.100.1                                        |
| host_routes       |                                                      |
| id                | 713e676b-3cda-4b0e-9aae-81af5a544d3d                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | demo-subnet                                          |
| network_id        | 8319e8cb-acd9-4674-b71b-f966df667b03                 |
| tenant_id         | 9f77ec788b91475b869abba7f1091017                     |
+-------------------+-----------------------------------------------------+

5)	Create a router and attach it to the external and tenant networks (run as the demo user)
source demorc.sh
a.	Create the router
neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 490f50cb-0663-4bbc-bb0a-cecf8e27916b |
| name                  | demo-router                          |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 9f77ec788b91475b869abba7f1091017     |
+-----------------------+--------------------------------------+
b.	Attach the demo tenant subnet to the router
neutron router-interface-add demo-router demo-subnet
<output>
Added interface d8d52434-e016-4771-b46c-99f0a7adc6b2 to router demo-router.
c.	Attach the router to the external network by setting it as the gateway
neutron router-gateway-set demo-router ext-net
<output>
Set gateway for router demo-router

6)	Verify network connectivity
Setting the router gateway allocated the external address 172.16.100.200; ping it to verify connectivity
ping 172.16.100.200
6. Back up the virtual machine image
a. Shut down the VM
shutdown -h now # or: virsh destroy juno
b. Back up the VM image
cp juno.img juno_neutron.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 7: Block Storage service (Cinder) deployment, configuration, and verification

The OpenStack Block Storage service can use a variety of backends to provide block devices for instances. In the usual deployment model, the Block Storage API and scheduler components run on the controller node, while the volume service runs on one or more storage nodes, which use an appropriate driver to provide volumes to instances from local block devices or SAN/NAS backends. This test uses the all-in-one single-node deployment model.

1.  Preparation

1)	Create the Block Storage database
a.	Connect to the MySQL server with the mysql client
mysql -uroot -popenstack
MariaDB [(none)]>
b.	Create the cinder database
CREATE DATABASE cinder;
c.	Grant access privileges on the cinder database
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinderDB';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinderDB';
d.	Exit the database session
MariaDB [(none)]> exit

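The credentials granted above combine into the SQLAlchemy URL used later in the [database] section of cinder.conf; a minimal sketch:

```shell
# Sketch: assemble the database connection URL from the pieces created above.
db_user=cinder; db_pass=cinderDB; db_host=controller; db_name=cinder
conn="mysql://${db_user}:${db_pass}@${db_host}/${db_name}"
echo "$conn"   # mysql://cinder:cinderDB@controller/cinder
```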
2)	Create the cinder service credentials
a.	Load the admin user credentials
source adminrc.sh
b.	Create the cinder user
keystone user-create --name cinder --pass cinder
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 6dc8206a69bf44cf8ac17c777c6b968c |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+
c.	Add the admin role to the cinder user in the service tenant
keystone user-role-add --user cinder --tenant service --role admin
d.	Create the cinder service catalog entries
keystone service-create --name cinder --type volume \
--description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 8468edcc54654fe4ba9467afd46dbea7 |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+
keystone service-create --name cinderv2 --type volumev2 \
--description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 9f59cfbd82014106b10040ed2f86b697 |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+
e.	Create the cinder API endpoints
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://controller:8776/v1/%\(tenant_id\)s \
--internalurl http://controller:8776/v1/%\(tenant_id\)s \
--adminurl http://controller:8776/v1/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8776/v1/%(tenant_id)s |
|      id     |     e5845fd20ec646779f6708cc4bc8d26d    |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
|  publicurl  | http://controller:8776/v1/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     8468edcc54654fe4ba9467afd46dbea7    |
+-------------+-----------------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8776/v2/%(tenant_id)s |
|      id     |     242544375fbf4d808407f7bde17ecd0e    |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
|  publicurl  | http://controller:8776/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     9f59cfbd82014106b10040ed2f86b697    |
+-------------+-----------------------------------------+

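The --service-id substitution used in both endpoint commands relies on awk matching the type column surrounded by spaces; a sketch with a sample table row standing in for real keystone output (the id value is illustrative):

```shell
# Sketch: how $(keystone service-list | awk '/ volume / {print $2}')
# extracts the service id from the output table.
sample='| 8468edcc54654fe4ba9467afd46dbea7 | cinder | volume | OpenStack Block Storage |'
svc_id=$(printf '%s\n' "$sample" | awk '/ volume / {print $2}')
echo "$svc_id"   # 8468edcc54654fe4ba9467afd46dbea7
```

Because the pattern is ' volume ' with surrounding spaces, a 'volumev2' row does not match, which is why the two endpoint commands can safely share the idiom.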
2.  Create the LVM volume group (PV/VG)

1)	Install the LVM packages
yum install lvm2 # installed by default

2)	Enable the LVM metadata service to start at boot, then start it
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

3)	Create the physical volume and volume group
a.	Create the physical volume
pvcreate /dev/vda3
b.	Create the VG (LVM volume group); the Block Storage service creates logical volumes in this VG
vgcreate cinder-volumes /dev/vda3
c.	Edit /etc/lvm/lvm.conf and add a filter in the devices section that accepts the /dev/vda device
devices {
...
filter = [ "a/vda/", "r/.*/" ]

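To double-check the filter before restarting services, a sketch against a scratch copy of the configuration (the live file is /etc/lvm/lvm.conf); the filter must accept the device backing the cinder-volumes VG, /dev/vda here, and reject everything else:

```shell
# Sketch: write the expected devices/filter stanza to a scratch file and
# verify it accepts vda.
LVM_CONF=$(mktemp)
printf 'devices {\n    filter = [ "a/vda/", "r/.*/" ]\n}\n' > "$LVM_CONF"
grep -q 'a/vda/' "$LVM_CONF" && echo "filter accepts vda"
```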
3.  Install and configure the Block Storage components

1)  Install the packages
yum install openstack-cinder python-cinderclient python-oslo-db -y
yum install targetcli MySQL-python -y

2)	Edit the configuration file /etc/cinder/cinder.conf and complete the following steps
a.	In the [database] section, configure database access
[database]
...
connection = mysql://cinder:cinderDB@controller/cinder
b.	In the [DEFAULT] section, configure RabbitMQ message queue access
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
c.	In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = cinder
d.	In the [DEFAULT] section, set my_ip to the management IP address of the controller node
[DEFAULT]
...
my_ip = 172.16.100.101
e.	In the [DEFAULT] section, configure the Image service location
[DEFAULT]
...
glance_host = controller
f.	In the [DEFAULT] section, configure Block Storage to use the lioadm iSCSI helper
[DEFAULT]
...
iscsi_helper = lioadm
g.	[Optional] In the [DEFAULT] section, enable verbose logging
[DEFAULT]
...
verbose = True

4.  Complete the deployment

1)	Populate the Block Storage database tables
su -s /bin/sh -c "cinder-manage db sync" cinder

2)	Enable the services to start at boot, then start them
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

3)	Edit the configuration file /etc/cinder/cinder.conf to enable multi-backend support
a.	In the [DEFAULT] section, enable the LVM backend (separate multiple backends with commas)
[DEFAULT]
...
enabled_backends=LVMISCSI
b.	Append the corresponding backend configuration at the end of the file
[LVMISCSI]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
# volume_backend_name can be any value; it is referenced again in the steps below
volume_backend_name=LVM
c.	Start the openstack-cinder-volume service
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
d.	Create the corresponding volume type
source adminrc.sh
# Create the volume type; the name is arbitrary (pick something descriptive)
cinder type-create lvmiscsi
+--------------------------------------+----------+
|                  ID                  |   Name   |
+--------------------------------------+----------+
| f2d3f0bc-d9c7-4576-b75a-975ca1c993b6 | lvmiscsi |
+--------------------------------------+----------+
# Associate the volume type with a storage backend; the value must match volume_backend_name in the configuration file above
cinder type-key f2d3f0bc-d9c7-4576-b75a-975ca1c993b6 set volume_backend_name=LVM
(no output)
cinder extra-specs-list
+--------------------------------------+----------+----------------------------------+
|                  ID                  |   Name   |           extra_specs            |
+--------------------------------------+----------+----------------------------------+
| f2d3f0bc-d9c7-4576-b75a-975ca1c993b6 | lvmiscsi | {u'volume_backend_name': u'LVM'} |
+--------------------------------------+----------+----------------------------------+

5.  Verify the service

1)	Verify the service status as the admin user
a.	Load the admin user credentials
source adminrc.sh
b.	List the service components to verify that each process started successfully
cinder service-list
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |             Host             | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |     controller.mydomain      | nova | enabled |   up  | 2015-01-13T07:44:39.000000 |       None      |
|  cinder-volume   | controller.mydomain@LVMISCSI | nova | enabled |   up  | 2015-01-13T07:44:46.000000 |       None      |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+

2)	Perform block storage operations as the demo user
a.	Load the demo user credentials
source demorc.sh
b.	Create a 1 GB volume
cinder create --display-name demo-volume1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-01-13T07:47:27.827415      |
| display_description |                 None                 |
|     display_name    |             demo-volume1             |
|      encrypted      |                False                 |
|          id         | cd31322b-e221-4690-b5bc-48e02be90314 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
c.	Check the status of the new volume
cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| cd31322b-e221-4690-b5bc-48e02be90314 | available | demo-volume1 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
6. Back up the virtual machine image
a. Shut down the VM
shutdown -h now # or: virsh destroy juno
b. Back up the VM image
cp juno.img juno_cinder.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 8: Web UI service (Horizon) deployment, configuration, and verification

1.  Install and configure the Horizon components

1)	Install the packages
yum install openstack-dashboard httpd mod_wsgi memcached python-memcached -y

2)	Edit the configuration file /etc/openstack-dashboard/local_settings and complete the following steps
a.	Configure the dashboard to use the OpenStack services on the controller node
. . .
OPENSTACK_HOST = "controller"
b.	Allow all hosts to access the dashboard
. . .
ALLOWED_HOSTS = ['*']
c.	Configure the memcached session store
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
# Comment out any other session storage configuration
d.	[Optional] Configure the time zone
TIME_ZONE = "Asia/Shanghai"
2. Complete the deployment
1)	On CentOS/RHEL, configure SELinux to allow the web server to connect to OpenStack services
setsebool -P httpd_can_network_connect on

2)	Work around a packaging bug that causes the dashboard CSS files to fail to load
chown -R apache:apache /usr/share/openstack-dashboard/static

3)	Enable the services to start at boot, then start them
systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service
3. Configure the firewall
1)	Open port 80 for the HTTP service
a.	Apply immediately
firewall-cmd --add-service=http
b.	Write the change to the permanent configuration
firewall-cmd --permanent --add-service=http
c.	Inspect /etc/firewalld/zones/public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="dhcpv6-client"/>
  <service name="http"/>
  <service name="vnc-server"/>
  <service name="novnc-proxy"/>
  <service name="ssh"/>
  <service name="https"/>
</zone>

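The enabled services can be extracted from the saved zone file with plain text tools; a sketch using a scratch copy standing in for /etc/firewalld/zones/public.xml:

```shell
# Sketch: list the service names enabled in a firewalld zone file.
ZONE=$(mktemp)
cat > "$ZONE" <<'EOF'
<zone>
  <service name="dhcpv6-client"/>
  <service name="http"/>
  <service name="ssh"/>
</zone>
EOF
services=$(grep -o 'service name="[^"]*"' "$ZONE" | cut -d'"' -f2)
echo "$services"
```

On a live system, `firewall-cmd --list-services` reports the same information directly.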
4.  Verify the service

1)	Browse to Horizon and create an instance
http://172.16.100.101/dashboard
Create an instance:
(screenshot omitted)

Check the instance's IP address and open the noVNC console, also verifying that noVNC login works:
(screenshot omitted)

Other verification steps omitted.
5. Back up the virtual machine image

a. Shut down the VM
shutdown -h now # or: virsh destroy juno
b. Back up the VM image
cp juno.img juno_horizon.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 9: Telemetry service (Ceilometer) deployment, configuration, and verification

1.  Preparation

Before installing the Telemetry module, install MongoDB and create the MongoDB database, the service credentials, and the API endpoints

1) Install the MongoDB packages
yum install mongodb-server mongodb -y

2) Edit the configuration file /etc/mongodb.conf and complete the following steps
a. Set bind_ip to the node's management IP address
bind_ip = 172.16.100.101
b. Configure the MongoDB journal file size; for test purposes use small files (journaling can even be disabled)
smallfiles = true
c. Enable the service to start at boot, then start it
systemctl enable mongod.service
systemctl start mongod.service
systemctl status mongod.service

3) Create the ceilometer database
mongo --host controller --eval '
db = db.getSiblingDB("ceilometer");
db.createUser({user: "ceilometer",
pwd: "ceilometerDB",
roles: [ "readWrite", "dbAdmin" ]})'

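The account created above combines into the MongoDB connection string used later in the [database] section of ceilometer.conf:

```shell
# Sketch: assemble the MongoDB connection URL from the account created above.
m_user=ceilometer; m_pass=ceilometerDB; m_host=controller; m_db=ceilometer
m_conn="mongodb://${m_user}:${m_pass}@${m_host}:27017/${m_db}"
echo "$m_conn"   # mongodb://ceilometer:ceilometerDB@controller:27017/ceilometer
```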
4) Create the ceilometer service credentials
a. Load the admin user credentials
source adminrc.sh
b. Create the ceilometer user
keystone user-create --name ceilometer --pass ceilometer
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 4499d16eda084811848f7709c0d990f1 |
|   name   |            ceilometer            |
| username |            ceilometer            |
+----------+----------------------------------+
c. Add the admin role to the ceilometer user in the service tenant
keystone user-role-add --user ceilometer --tenant service --role admin
(no output)
d. Create the ceilometer service catalog entry
keystone service-create --name ceilometer --type metering --description "Telemetry"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |            Telemetry             |
|   enabled   |               True               |
|      id     | 98e23755704a4565a29d7a3058e6d811 |
|     name    |            ceilometer            |
|     type    |             metering             |
+-------------+----------------------------------+
e. Create the API endpoints for the service catalog entry
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ metering / {print $2}') \
--publicurl http://controller:8777 \
--internalurl http://controller:8777 \
--adminurl http://controller:8777 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:8777      |
|      id     | 272ad78761454d72b09fc2f32b0b85bd |
| internalurl |      http://controller:8777      |
|  publicurl  |      http://controller:8777      |
|    region   |            regionOne             |
|  service_id | 98e23755704a4565a29d7a3058e6d811 |
+-------------+----------------------------------+
2. Install and configure the Telemetry components
1) Install the packages
yum install openstack-ceilometer-api openstack-ceilometer-collector \
openstack-ceilometer-notification openstack-ceilometer-central \
openstack-ceilometer-alarm  python-ceilometerclient openstack-ceilometer-compute python-pecan 

Back up the original configuration files <omitted>

2) Edit the configuration file /etc/ceilometer/ceilometer.conf and complete the following steps
Prepare a metering secret; it can be any random string. This test uses the descriptive string lkl_metering_key
a. In the [database] section, configure database access
[database]
...
connection = mongodb://ceilometer:ceilometerDB@controller:27017/ceilometer
b. In the [DEFAULT] section, configure message queue access
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
c. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = ceilometer
d. In the [service_credentials] section, configure the service credentials
[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = ceilometer
os_endpoint_type = internalURL
e. In the [publisher] section, configure the metering secret
[publisher]
...
metering_secret = lkl_metering_key
3. Complete the deployment
1) Enable the services to start at boot, then start them
systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

systemctl status openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

4. Deploy and configure the Telemetry agent for the Compute service

1) Edit the configuration file /etc/nova/nova.conf and complete the following steps
a. Add the following options to the [DEFAULT] section
[DEFAULT]
...
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
b. Restart the Compute service on the compute node
systemctl restart openstack-nova-compute.service 
systemctl status openstack-nova-compute.service
c. Enable the agent to start at boot, then start it
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl status openstack-ceilometer-compute.service
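Note that the two notification_driver lines added to nova.conf in step 1 are both intentional: the option is multi-valued, so neither line replaces the other. A sketch that checks a scratch copy carries both entries:

```shell
# Sketch: count the notification_driver entries in a scratch copy of the
# nova.conf fragment; both must be present for Telemetry to receive events.
NOVA_CONF=$(mktemp)
printf '%s\n' \
    'notification_driver = nova.openstack.common.notifier.rpc_notifier' \
    'notification_driver = ceilometer.compute.nova_notifier' > "$NOVA_CONF"
drivers=$(grep -c '^notification_driver' "$NOVA_CONF")
echo "$drivers notification drivers configured"   # 2 notification drivers configured
```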
5. Configure the Image service for Telemetry
1) Edit the configuration file /etc/glance/glance-api.conf and complete the following steps
a. In the [DEFAULT] section, configure notifications to be sent to the message queue
[DEFAULT]
...
notification_driver = messaging
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
b. Restart the related services to apply the changes
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service

6.  Configure the Block Storage service for Telemetry

1) Edit the configuration file /etc/cinder/cinder.conf and complete the following steps
a. In the [DEFAULT] section, configure the notification mechanism
[DEFAULT]
...
control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier
b. Restart the related services to apply the changes
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service
7. Verify the service
1) Use the Image service to verify that Telemetry is deployed correctly
a. Run the ceilometer meter-list command
ceilometer meter-list
b. Download an image from the Image service
glance image-download "cirros-0.3.3-x86_64" > cirros.img
c. Run ceilometer meter-list again to verify that the download was detected and recorded
ceilometer meter-list
(output omitted)
d. Usage statistics can be retrieved for any meter
ceilometer statistics -m image.download -p 60
8. Back up the virtual machine image
a. Shut down the VM
shutdown -h now # or: virsh destroy juno
b. Back up the VM image
cp juno.img juno_ceilometer.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 10: Orchestration service (Heat) deployment, configuration, and verification

1. Preparation
1) Create the heat database
a. Connect to the MySQL server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]>
b. Create the heat database
CREATE DATABASE heat;
c. Grant access privileges on the heat database
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'heatDB';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'heatDB';
d. Exit the database session
MariaDB [(none)]> exit

2) Create the heat service credentials and API endpoints
a. Load the admin user credentials
source adminrc.sh
b. Create the heat user
keystone user-create --name heat --pass heat
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 9c49258dbb834f83aa9c471abceaf215 |
|   name   |               heat               |
| username |               heat               |
+----------+----------------------------------+
c. Add the admin role to the heat user in the service tenant
keystone user-role-add --user heat --tenant service --role admin
(no output)
d. Create the heat_stack_owner role
keystone role-create --name heat_stack_owner
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 5cd2f0ba44a84816a3bf0c6d5f1a207e |
|   name   |         heat_stack_owner         |
+----------+----------------------------------+
e. Grant the heat_stack_owner role to the demo user and tenant
keystone user-role-add --user demo --tenant demo --role heat_stack_owner
(no output)
# This role must be granted to any user that manages stacks
f. Create the heat_stack_user role
keystone role-create --name heat_stack_user
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | a00a1ae74d3345b7bb4e77086b520c74 |
|   name   |         heat_stack_user          |
+----------+----------------------------------+
g. Create the heat and heat-cfn service catalog entries
keystone service-create --name heat --type orchestration --description "Orchestration"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Orchestration           |
|   enabled   |               True               |
|      id     | cc84a959575447d89cb7e1a00624ab9a |
|     name    |               heat               |
|     type    |          orchestration           |
+-------------+----------------------------------+
keystone service-create --name heat-cfn --type cloudformation \
--description "Orchestration"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Orchestration           |
|   enabled   |               True               |
|      id     | 359a38bba2884de2be77c6bc78329ec6 |
|     name    |             heat-cfn             |
|     type    |          cloudformation          |
+-------------+----------------------------------+
h.创建heat服务API endpoints
keystone endpoint-create \
                        --service-id $(keystone service-list | awk '/ orchestration / {print $2}') \
                        --publicurl http://controller:8004/v1/%\(tenant_id\)s \
                        --internalurl http://controller:8004/v1/%\(tenant_id\)s \
                        --adminurl http://controller:8004/v1/%\(tenant_id\)s \
                        --region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8004/v1/%(tenant_id)s |
|      id     |     f7071efad7f54911b242a0565895ebe8    |
| internalurl | http://controller:8004/v1/%(tenant_id)s |
|  publicurl  | http://controller:8004/v1/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     cc84a959575447d89cb7e1a00624ab9a    |
+-------------+-----------------------------------------+
keystone endpoint-create \
                        --service-id $(keystone service-list | awk '/ cloudformation / {print $2}') \
                        --publicurl http://controller:8000/v1 \
                        --internalurl http://controller:8000/v1 \
                        --adminurl http://controller:8000/v1 \
                        --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |    http://controller:8000/v1     |
|      id     | 8c8ade0a1dde411cbdfcefbb3ce935c5 |
| internalurl |    http://controller:8000/v1     |
|  publicurl  |    http://controller:8000/v1     |
|    region   |            regionOne             |
|  service_id | 359a38bba2884de2be77c6bc78329ec6 |
+-------------+----------------------------------+
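endpoint URL中的%\(tenant_id\)s是一个占位符(命令行里的反斜杠只是为了防止shell解析圆括号),客户端请求时会把它替换成实际的租户ID。可以用sed做一个最小示意(其中的租户ID仅为示例假设):

```shell
# publicurl 中的 %(tenant_id)s 占位符在请求时被替换成实际租户ID
publicurl='http://controller:8004/v1/%(tenant_id)s'
tenant_id='3a1a8ae835b24f0c8f1a4a95e61eb0c5'   # 示例ID,并非真实环境中的值
url=$(echo "$publicurl" | sed "s/%(tenant_id)s/$tenant_id/")
echo "$url"
# 输出:http://controller:8004/v1/3a1a8ae835b24f0c8f1a4a95e61eb0c5
```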
2. 部署配置Heat服务组件
1)安装相关组件软件包
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \
python-heatclient -y

备份原始配置文件
cp /etc/heat/heat.conf /etc/heat/heat.conf.bak

2)编辑配置文件/etc/heat/heat.conf,完成如下操作
a.编辑[database]部分,配置数据库访问权限
[database]
...
connection = mysql://heat:heatDB@controller/heat
b.编辑[DEFAULT]部分,配置消息队列访问权限
[DEFAULT]
...
rpc_backend = heat.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = guest
c.编辑[keystone_authtoken]和 [ec2authtoken]部分,配置认证服务访问权限
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = heat
admin_password = heat

[ec2authtoken]
...
auth_uri = http://controller:5000/v2.0
d.编辑[DEFAULT]部分,配置metadata、wait condition URLs
[DEFAULT]
...
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
e.[Optional]编辑[DEFAULT]部分,配置通知机制
notification_driver=heat.openstack.common.notifier.rpc_notifier
f.[Optional]编辑[DEFAULT]部分,启用详细日志参数
[DEFAULT]
...
verbose = True
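把上面a~f各步的修改汇总起来,/etc/heat/heat.conf中新增或变更的内容大致如下。可以先用heredoc在本地生成片段核对,再合并进真实配置文件(文件名heat.conf.demo仅为本地示例,不要直接覆盖/etc/heat/heat.conf):

```shell
# 在本地生成配置片段用于核对(heat.conf.demo 仅为示例文件名)
cat > heat.conf.demo <<'EOF'
[DEFAULT]
rpc_backend = heat.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = guest
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
notification_driver = heat.openstack.common.notifier.rpc_notifier
verbose = True

[database]
connection = mysql://heat:heatDB@controller/heat

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = heat
admin_password = heat

[ec2authtoken]
auth_uri = http://controller:5000/v2.0
EOF
```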
3. 完成安装部署配置
1)执行生成heat数据库数据表
su -s /bin/sh -c "heat-manage db_sync" heat

2)配置自启动模式,启动服务
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service
systemctl status openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service
4. 验证服务
1)加载demo用户执行权限环境变量
source demorc.sh
2)创建测试模板test-stack.yml
heat_template_version: 2013-05-23

description: Test Template

parameters:
  ImageID:
    type: string
    description: Image use to boot a server
  NetID:
    type: string
    description: Network ID for the server
 
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Test server"
      image: { get_param: ImageID }
      flavor: "m1.tiny"
      networks:
      - network: { get_param: NetID }
 
outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }
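模板内容较长,可以用heredoc一次性写入文件,避免手工粘贴时缩进出错(YAML对缩进敏感;文件名即上文的test-stack.yml):

```shell
# 用heredoc写入模板文件,单引号包裹的EOF可防止shell展开其中的特殊字符
cat > test-stack.yml <<'EOF'
heat_template_version: 2013-05-23

description: Test Template

parameters:
  ImageID:
    type: string
    description: Image use to boot a server
  NetID:
    type: string
    description: Network ID for the server

resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Test server"
      image: { get_param: ImageID }
      flavor: "m1.tiny"
      networks:
      - network: { get_param: NetID }

outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }
EOF
```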

3)使用模板创建stack(NetID参数需要demo-net的网络ID,先提取存入NET_ID变量)
NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')
heat stack-create -f test-stack.yml \
-P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| ea158cfe-2d70-4f75-9600-8a46f6b2b2ee | testStack  | CREATE_IN_PROGRESS | 2015-01-14T03:06:35Z |
+--------------------------------------+------------+--------------------+----------------------+

4)验证stack创建是否成功
heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| ea158cfe-2d70-4f75-9600-8a46f6b2b2ee | testStack  | CREATE_COMPLETE | 2015-01-14T03:06:35Z |
+--------------------------------------+------------+-----------------+----------------------+
nova list
+--------------------------------------+-------------+--------+------------+-------------+------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks               |
+--------------------------------------+-------------+--------+------------+-------------+------------------------+
| 96ea3757-4d8c-4606-8759-f07f27980c94 | Test server | ACTIVE | -          | Running     | demo-net=192.168.100.9 |
+--------------------------------------+-------------+--------+------------+-------------+------------------------+
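在脚本中做自动验证时,可以用awk从这类表格输出里提取stack_status字段再做判断。下面对示例输出演示(实际使用时把echo换成heat stack-list即可):

```shell
# 从 heat stack-list 风格的表格中提取 stack_status(按 | 分隔后为第4列)
output='| ea158cfe-2d70-4f75-9600-8a46f6b2b2ee | testStack  | CREATE_COMPLETE | 2015-01-14T03:06:35Z |'
status=$(echo "$output" | awk -F'|' '/testStack/ {gsub(/ /, "", $4); print $4}')
echo "$status"
# 输出:CREATE_COMPLETE
if [ "$status" = "CREATE_COMPLETE" ]; then
    echo "stack创建成功"
fi
```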
5. 备份虚拟机镜像
a. 关闭虚拟机
shutdown -h now #或者在宿主机上执行virsh destroy juno
b. 备份虚拟机
cp juno.img juno_heat.img
c. 启动虚拟机,为下一节部署做准备
virsh start juno

第十一节:总结

以上即为本次测试安装部署配置的全部过程,希望能对关注者有所帮助。大家都知道OpenStack Juno版本有11个核心组件之多,本次测试部署了其中的8个核心组件,没有测试的组件包括对象存储服务组件Swift,数据库服务组件Trove,以及新加入的大数据服务组件Sahara。这三个组件的部署测试将在后续的文章中补充。

本次测试的8个核心组件运行良好,但是也有不足之处:

  1. 本次测试没有配置服务HA模式,包括OpenStack组件服务本身具有的HA模式,如dhcp-agent的active-active模式、l3-agent的active-passive模式等;
  2. glance的后端采用的是file模式,且提供给glance存储的设备为系统磁盘下的一个目录,不具有生产性,可以考虑采用其他存储模式或安全方案;
  3. cinder的存储后端采用的是LVM模式,可以考虑多种存储后端,将在随后的文章中集成Ceph分布式存储系统作为cinder的其中一个后端;
  4. nova的存储后端也是采用的本地磁盘目录,可以考虑分布式文件系统如GlusterFS、NFS,或者采用统一存储系统Ceph。

在文档的开始部分提到开启yum缓存软件包的模式,采用缓存下来的软件包制作自定义OpenStack Juno版YUM安装源,以便在没有Internet的环境下安装配置OpenStack环境,节省流量,提高安装效率,固定安装版本。下一篇博文将描述如何在CentOS 7上为CentOS 7系统制作OpenStack Juno版YUM安装源。
