
OpenStack Train Deployment Tutorial Based on CentOS 7.6 (1810)

OpenStack Environment Deployment

I. Lab Environment

Minimal installation of CentOS 7.6 (1810); dual-core, dual-thread CPU with CPU virtualization enabled.

Hostname   Memory   Disk                           NAT NIC      VM1 NIC      OS
CT         8 GB     300+1024 (Ceph block storage)  20.0.0.61    10.0.0.61    CentOS 7.6
C1         8 GB     300+1024 (Ceph block storage)  20.0.0.62    10.0.0.62    CentOS 7.6
C2         8 GB     300+1024 (Ceph block storage)  20.0.0.63    10.0.0.63    CentOS 7.6

Note: 6 GB is the minimum memory.

II. Base Environment Configuration (ct, c1, c2 — ct shown as the example)

1. Hostname

[root@localhost ~]# hostnamectl set-hostname ct
[root@localhost ~]# su
[root@ct ~]#

2. Disable the Firewall and SELinux

[root@ct ~]# systemctl stop firewalld
[root@ct ~]# systemctl disable firewalld
[root@ct ~]# setenforce 0
[root@ct ~]# vim /etc/sysconfig/selinux 
SELINUX=disabled

3. Modify the NIC Addresses

① Edit the NAT NIC configuration

vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
IPV4_ROUTE_METRIC=90         ###route priority; the NAT NIC takes precedence
ONBOOT=yes
IPADDR=20.0.0.61
NETMASK=255.0.0.0
GATEWAY=20.0.0.2

② Configure the VM1 NIC on the control node

[root@ct ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens37
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.61
NETMASK=255.0.0.0
#GATEWAY=10.0.0.2
[root@ct ~]# systemctl restart network     #restart networking

③ Add the VM1 addresses to the hosts file

[root@ct ~]# vi /etc/hosts
10.0.0.61 ct
10.0.0.62 c1
10.0.0.63 c2

④ Configure DNS

[root@ct ~]# vim /etc/resolv.conf
nameserver 114.114.114.114

4. Passwordless SSH

Generate an asymmetric key pair and distribute it:

[root@ct ~]# ssh-keygen -t rsa  
[root@ct ~]# ssh-copy-id ct
[root@ct ~]# ssh-copy-id c1
[root@ct ~]# ssh-copy-id c2
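
A quick hedged check that key-based login works from ct (each command should print the remote hostname without asking for a password):

[root@ct ~]# for h in ct c1 c2; do ssh $h hostname; done
ct
c1
c2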

5. Base Dependency Packages

##Install the OpenStack Train release repository, along with the OpenStack client and the openstack-selinux package
[root@ct ~]# yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre pcre-devel expat-devel cmake bzip2 
[root@ct ~]# yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils

6. Time Synchronization and a Periodic Cron Job

① Time-sync configuration on control node ct

[root@ct ~]# yum install chrony -y
[root@ct ~]# vim /etc/chrony.conf 
server ntp.aliyun.com iburst
allow 20.0.0.0/24

② Time-sync configuration on c1 and c2

[root@c1 ~]# yum install chrony -y
[root@c1 ~]# vim /etc/chrony.conf 
server ct iburst

③ Verify time synchronization

[root@ct ~]# systemctl enable chronyd
[root@ct ~]# systemctl restart chronyd 
[root@ct ~]# chronyc sources   ##query time-sync status with the chronyc sources command
210 Number of sources = 1
MS Name/IP address     Stratum Poll Reach LastRx Last sample        
===============================================================================
^* 203.107.6.88         2  6  17   0 +3559us[+3429us] +/-  25ms
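
Beyond chronyc sources, the overall sync state can be confirmed (a hedged extra check; "Leap status : Normal" indicates a synchronized clock):

[root@ct ~]# chronyc tracking   ##shows the reference server, offset, and leap status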

④ Set up a periodic cron job

[root@ct ~]# crontab -e
*/2 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
[root@ct ~]# crontab -l
*/2 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log

III. System Environment Configuration (control node ct)

1. Install and Configure MariaDB

[root@ct ~]# yum -y install mariadb mariadb-server python2-PyMySQL  ##python2-PyMySQL is the module the OpenStack control node needs to connect to MySQL; without it the database cannot be reached. Install it on the control node only.

[root@ct ~]# yum -y install libibverbs 

2. Add a MySQL Drop-in Configuration File with the Following Content

[root@ct ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld] 
bind-address = 10.0.0.61     #the control node's internal LAN address
default-storage-engine = innodb    #default storage engine 
innodb_file_per_table = on         #one tablespace file per table
max_connections = 4096           #maximum number of connections 
collation-server = utf8_general_ci    #default collation 
character-set-server = utf8
[root@ct my.cnf.d]# systemctl enable mariadb   ##enable at boot
[root@ct my.cnf.d]# systemctl start mariadb    ##start the service
[root@ct my.cnf.d]# mysql_secure_installation   ##run the MariaDB hardening script
Enter current password for root (enter for none):       #press Enter
OK, successfully used password, moving on...
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
 ... Success!
Disallow root login remotely? [Y/n] N
 ... skipping.
Remove test database and access to it? [Y/n] Y 
Reload privilege tables now? [Y/n] Y  

Note: do not leave any comments in this configuration file, or database connections will fail.
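
To confirm the drop-in file took effect, query the live server variables (a hedged check using the root password set above):

[root@ct ~]# mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('max_connections','character_set_server');"   ##should report 4096 and utf8, matching openstack.cnf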

3. Install RabbitMQ

[root@ct ~]# yum -y install rabbitmq-server  ##every VM-creation instruction is sent by the control node to RabbitMQ; compute nodes listen on RabbitMQ

[root@ct ~]# systemctl enable rabbitmq-server.service  ##enable at boot
[root@ct ~]# systemctl start rabbitmq-server.service  ##start the RabbitMQ service
[root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS ##create the message-queue user that the controller and compute nodes use to authenticate to RabbitMQ
Creating user "openstack"

[root@ct ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"  ##grant the openstack user configure/write/read permissions (regex patterns)
Setting permissions for user "openstack" in vhost "/"  ##ports 25672 and 5672 should now be listening (5672 is RabbitMQ's default port; 25672 is used by RabbitMQ's CLI tooling)

[root@ct ~]# rabbitmq-plugins list  ##list the RabbitMQ plugins
[root@ct ~]# rabbitmq-plugins enable rabbitmq_management  ##enable the web management plugin, which listens on port 15672

[root@ct my.cnf.d]# ss -natp | grep 5672
LISTEN   0   128     *:25672          *:*          users:(("beam.smp",pid=24596,fd=46))
LISTEN   0   128     :::5672          :::*          users:(("beam.smp",pid=24596,fd=55))

##the web UI is reachable at 20.0.0.61:15672; the default username and password are both guest
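
A quick hedged check that the user and its permissions were created as intended:

[root@ct ~]# rabbitmqctl list_users          ##openstack should be listed alongside guest
[root@ct ~]# rabbitmqctl list_permissions    ##openstack should show ".*" ".*" ".*" on vhost /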


4. Install Memcached

Purpose:

Memcached stores session information; the identity service's authentication mechanism uses Memcached to cache tokens. Logging in to the OpenStack dashboard generates session data, which is stored in Memcached.

Session management is provided on behalf of services such as Keystone.

##Install Memcached
[root@ct ~]# yum install -y memcached python-memcached
##the python-* modules are what OpenStack components use to connect to the service

##Edit the Memcached configuration file
[root@ct ~]# vi /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,ct"
[root@ct ~]# systemctl enable memcached
[root@ct ~]# systemctl start memcached
[root@ct ~]# netstat -nautp | grep 11211
 
##Install etcd
[root@ct ~]# yum -y install etcd

##Edit the etcd configuration file
[root@ct ~]# vim /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  ##data directory
ETCD_LISTEN_PEER_URLS="http://10.0.0.61:2380"  ##URL on which to listen for other etcd members (port 2380, inter-cluster traffic; hostnames are not valid here)
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.61:2379"  ##client-facing service address (port 2379, the client communication port inside the cluster)
ETCD_NAME="ct"  ##this node's name within the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.61:2380"  ##this member's advertised peer URL; port 2380 is used for cluster-to-cluster traffic
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.61:2379"
ETCD_INITIAL_CLUSTER="ct=http://10.0.0.61:2380"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"   ##unique cluster token
ETCD_INITIAL_CLUSTER_STATE="new"  ##initial cluster state; new bootstraps a static cluster, existing means this etcd will try to join an already-running cluster,
##and DNS means the cluster will be discovered via DNS

##Enable at boot, start the service, check the ports
[root@ct ~]# systemctl enable etcd.service
[root@ct ~]# systemctl start etcd.service
[root@ct ~]# netstat -anutp |grep 2379
tcp        0      0 10.0.0.61:2379          0.0.0.0:*               LISTEN      45053/etcd          
tcp        0      0 10.0.0.61:2379          10.0.0.61:56938         ESTABLISHED 45053/etcd          
tcp        0      0 10.0.0.61:56938         10.0.0.61:2379          ESTABLISHED 45053/etcd    
[root@ct ~]# netstat -anutp |grep 2380
 tcp        0      0 10.0.0.61:2380          0.0.0.0:*               LISTEN      45053/etcd      
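
Two quick health checks, one per service (a hedged sketch: nc is assumed to come from the nmap-ncat package, and etcdctl on CentOS 7 speaks the v2 API by default):

[root@ct ~]# echo stats | nc 127.0.0.1 11211 | head -3                  ##memcached answers with its STAT counters
[root@ct ~]# etcdctl --endpoints=http://10.0.0.61:2379 cluster-health   ##should end with "cluster is healthy"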

IV. Install Keystone

Before deploying the other OpenStack components, the identity service (Keystone) must be installed first. The OpenStack control node orchestrates the VM-creation process, and the data generated while creating VMs through the control node is ultimately recorded in MySQL (MariaDB). Compute nodes have no permission to write to the database; only the control node does. Compute nodes talk to the control node indirectly through RabbitMQ: both sides listen on RabbitMQ, the control node publishes VM-creation instructions to RabbitMQ, and the compute node listening on the designated queue receives the message and creates the VM.

1. Create the Database Instance and Database User

[root@ct ~]# mysql -u root -p
MariaDB [(none)]> create database keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

2. Install and Configure Keystone

The mod_wsgi package lets Apache proxy Python applications. OpenStack's components, including the APIs, are written in Python, but requests arrive at Apache, which forwards them to Python for processing. These packages are installed on the controller node only.

##Install keystone, httpd, and mod_wsgi
[root@ct ~]# yum -y install openstack-keystone httpd mod_wsgi
[root@ct ~]# cp -a /etc/keystone/keystone.conf{,.bak}
[root@ct ~]# grep -Ev "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf

##Access MySQL via the pymysql module, specifying the username, password, database host, and database name
[root@ct ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@ct/keystone

##Specify the token provider; the provider is Keystone itself
[root@ct ~]# openstack-config --set /etc/keystone/keystone.conf token provider fernet

#Fernet: a secure message format

3. Initialize the Identity Service Database

Initialize the Fernet key repositories (the commands below generate two key sets, placed under the /etc/keystone/ directory and used to encrypt data).

[root@ct ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@ct ]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@ct ]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
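
As a sanity check (the default repository paths, per the setup commands above), both key repositories should now exist and contain keys named 0 and 1:

[root@ct ~]# ls /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/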

4. Configure the Bootstrap Identity Service

This step initializes OpenStack: it writes the admin user's information into MySQL's user table, and the URLs and other data into the related tables.

The admin-url is the management network (e.g. the internal OpenStack management network of a public cloud), used for managing VM scale-out and deletion. If the public network and the management network shared one network, heavy traffic would make it impossible to scale VMs from the OpenStack control node, hence the separate management network.

The internal-url is the internal network for data traffic, such as VMs accessing storage, databases, and middleware like ZooKeeper; it must not be reachable from the outside and is for internal enterprise use only.

The public-url is the public network, which users can access (as in a public cloud). # This environment has none of these separate networks, so all three share a single network.

Port 5000 is the port on which Keystone provides authentication.

In a high-availability setup, a listen entry would be added on the HAProxy server.

The URLs should point at the controller node's hostname, typically the hostname of HAProxy's VIP (HA mode).

[root@ct ~]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://ct:5000/v3/ \
--bootstrap-internal-url http://ct:5000/v3/ \
--bootstrap-public-url http://ct:5000/v3/ \
--bootstrap-region-id RegionOne   #specify a region name
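
Once the admin environment variables from step 6 below are loaded, the three bootstrapped endpoints can be verified (a hedged sketch; the IDs will differ):

[root@ct ~]# openstack endpoint list --service identity   ##should show admin, internal, and public at http://ct:5000/v3/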

5. Configure the Apache HTTP Server

After the mod_wsgi package is installed, the wsgi-keystone.conf file becomes available; it configures a virtual host listening on port 5000. mod_wsgi is the gateway from Apache into Python.

[root@ct ~]# echo "ServerName controller" >> /etc/httpd/conf/httpd.conf

##Link the configuration file into place
[root@ct ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

##Start the service

[root@ct conf.d]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@ct conf.d]# systemctl start httpd
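
Keystone should now answer over WSGI on port 5000 (a quick hedged check; a JSON version document is returned on success):

[root@ct ~]# curl http://ct:5000/v3   ##returns {"version": {...}} if the identity API is up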

6. Configure the Admin Account's Environment Variables

These environment variables are used for creating roles and projects. Since creating roles and projects requires authentication, the username, password, and other credentials are declared via environment variables, which makes OpenStack treat you as already logged in and authenticated, so projects and roles can be created. In other words, the admin user's credentials are passed to OpenStack through environment variables, enabling non-interactive operation against OpenStack; this is commonly used when calling the API.

[root@ct ~]# cat >> ~/.bashrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://ct:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
[root@ct ~]# source ~/.bashrc

Note: this file must not contain any comments, or the service will not start.

7. With the environment variables configured, openstack commands can be run; for example:

[root@ct ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| c9148d6350f540feab1a03dc783bbd6f | admin |
+----------------------------------+-------+

V. Create OpenStack Domains, Projects, Users, and Roles

##Create a project in the specified domain, with a description; the project name is service (query with openstack domain list)
[root@ct ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 1298ea1e73dd47aa95f79b19a111343b |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

##Create a role (view with openstack role list)
[root@ct ~]# openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | 8cd77d6021ca4bc4982cb68ec6db46d3 |
| name        | user                             |
| options     | {}                               |
+-------------+----------------------------------+

##List the OpenStack roles
[root@ct ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 43dc6162c5af46e583b22fa57a669dc7 | admin  |
| 7517eac6f6ad4d3f956dc6be2bd9167c | reader |
| 8cd77d6021ca4bc4982cb68ec6db46d3 | user   |
| a2583ff2c6ff45a890035dbfda99dc39 | member |
+----------------------------------+--------+
# admin: administrator
# member: tenant
# user: user


##Check that a token can be obtained without entering a password (verifies the identity service)
[root@ct ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-12-21T15:09:36+0000                                                                                                                                                                |
| id         | gAAAAABf4KygKayEaERQ8Td4H-qUT9yeWSdKi9fjeyLORPdDhcEP75jss_w9YsfnE-JQXB3S0du56CRYcyCx8LatBjbhfqmhIGvEHSxJTLwB42Foel3aqS-MKKuQ4GknqWqNHHz8d8LJTfXffNMgyqEMFjWQIMlCa6cnH2hrEg3uGcL-9-cZyhM |
| project_id | 2aeba0d868064833b8eb7b725836f85f                                                                                                                                                        |
| user_id    | c9148d6350f540feab1a03dc783bbd6f                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Additional notes:

The Keystone component provides unified authentication and authorization for the OpenStack cluster. Its core function is controlling Users, Tenants, Roles, and Tokens (the manual deployment above revolves around exactly this functionality).

User: a user of OpenStack.

Tenant: a tenant; think of it as the collection of resources owned by a person, project, or organization. A tenant can contain many users, who use the tenant's resources according to their assigned permissions.

Role: a role, used to grant operation permissions. A role can be assigned to a user, giving that user the permissions the role carries.

Token: a string of bits or characters used as a ticket for accessing resources. A token carries the scope of accessible resources and a validity period; it is a user credential obtained by presenting the correct username and password to the Keystone service.

VI. Glance Component Deployment

Creating virtual machines on OpenStack requires image support, so Glance is deployed first.

Deployment outline:

1. Create the database and grant privileges
2. Create the OpenStack user, grant permissions, and manage it
3. Edit the configuration files (glance-api.conf, glance-registry.conf)
4. Initialize the database and upload an instance image

1. Create the Database Instance and Database User

[root@ct ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

2. Create the OpenStack Glance User

##Before creating users, the admin environment variables must be loaded (already defined in ~/.bashrc here)

##Create the glance user
[root@ct ~]# openstack user create --domain default --password GLANCE_PASS glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 55cf9f286e924e5fab460ee697a2eb18 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

##Add the glance user to the service project with admin rights on that project; registering the Glance API requires admin rights on the service project
[root@ct ~]# openstack role add --project service --user glance admin

##Create a service entry named glance, of type image; once created it can be viewed with openstack service list
[root@ct ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | c984a5d16ff44f49af12f7f139aab847 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

3. Create the Image Service API Endpoints

##OpenStack uses three kinds of API endpoints per service: admin, internal, and public

[root@ct ~]# openstack endpoint create --region RegionOne image admin http://ct:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4e68aa0ce1f1420ea5c1126f6a5e9157 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c984a5d16ff44f49af12f7f139aab847 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://ct:9292                   |
+--------------+----------------------------------+
[root@ct ~]# openstack endpoint create --region RegionOne image internal http://ct:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f92d6994ecdd463598e525525d97c426 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c984a5d16ff44f49af12f7f139aab847 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://ct:9292                   |
+--------------+----------------------------------+
[root@ct ~]# openstack endpoint create --region RegionOne image public http://ct:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4644eb3add31464cbb71ff403f864034 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c984a5d16ff44f49af12f7f139aab847 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://ct:9292                   |
+--------------+----------------------------------+
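
The three image endpoints can be confirmed in one command (a hedged sketch; the IDs will differ):

[root@ct ~]# openstack endpoint list --service image   ##should list admin, internal, and public at http://ct:9292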

4. Edit the Configuration Files

##Install the openstack-glance package
[root@ct ~]# yum -y install openstack-glance 

##Glance has two configuration files to edit:
/etc/glance/glance-api.conf 
/etc/glance/glance-registry.conf

##Back up and strip comments
[root@ct ~]# cp -a /etc/glance/glance-api.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

##Back up and strip comments
[root@ct ~]# cp -a /etc/glance/glance-registry.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
[root@ct ~]# vim glance-api.conf.sh
#!/bin/bash
#Apply the settings
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@ct/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

[root@ct ~]# vim glance-registry.conf.sh
#!/bin/bash
#Apply the settings to glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@ct/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
##Run the scripts
[root@ct ~]# sh glance-api.conf.sh
[root@ct ~]# sh glance-registry.conf.sh
##Inspect glance-api.conf
[root@ct ct]# cat /etc/glance/glance-api.conf
[DEFAULT]
[cinder]
[cors]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@ct/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.sheepdog.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
stores = file,http					#storage backends; file: local files, http: API-based, placing images on other storage
default_store = file					#default backend
filesystem_store_datadir = /var/lib/glance/images/	##local directory where images are stored
[image_format]
[keystone_authtoken]
www_authenticate_uri = http://ct:5000			##URL of the Keystone service used for authentication
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service				#the glance user has admin rights on the service project
username = glance
password = GLANCE_PASS
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone					#keystone provides authentication
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

#The settings mirror glance-api.conf
[root@ct ct]# vim /etc/glance/glance-registry.conf
[DEFAULT]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@ct/glance
[keystone_authtoken]
www_authenticate_uri = http://ct:5000
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]

5. Initialize the Glance Database and Generate the Table Structures

[root@ct ~]# su -s /bin/sh -c "glance-manage db_sync" glance
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1280, u"Name 'alembic_version_pkc' ignored for PRIMARY key.")
  result = self._query(query)
...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
###No matter how many controllers there are, this initialization only needs to run once

6. Start the Glance Service

##Enable at boot
[root@ct ~]# systemctl enable openstack-glance-api.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.

##Starting the service creates the image directory /var/lib/glance/images
[root@ct ~]# systemctl start openstack-glance-api.service

##Check the port (lsof -i:9292 also works)
[root@ct ~]# netstat -natp | grep 9292
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      24370/python2  

##Give the openstack-glance-api.service write access to the image store (-h: operate on symlinks themselves)
[root@ct ~]# chown -hR glance:glance /var/lib/glance/

7. Import an Image

##Import the cirros-0.3.5-x86_64-disk.img image
[root@ct ~]# ls
anaconda-ks.cfg               glance-api.conf.sh       original-ks.cfg
cirros-0.3.5-x86_64-disk.img  glance-registry.conf.sh
[root@ct ~]# openstack image create --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2020-12-21T14:02:29Z                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                      |
| file             | /v2/images/4aa148c2-2abf-4826-9ab9-4108c5483841/file                                                                                                                                       |
| id               | 4aa148c2-2abf-4826-9ab9-4108c5483841                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                          |
| name             | cirros                                                                                                                                                                                     |
| owner            | 2aeba0d868064833b8eb7b725836f85f                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed2d6f919ca35d9046d437c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 13267968                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2020-12-21T14:02:30Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


8. Two Ways to List Images

[root@ct ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 4aa148c2-2abf-4826-9ab9-4108c5483841 | cirros | active |
+--------------------------------------+--------+--------+
[root@ct ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 4aa148c2-2abf-4826-9ab9-4108c5483841 | cirros |
+--------------------------------------+--------+
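
Because default_store is file, the uploaded image should also be visible on disk under the datadir configured earlier (a hedged check; the file is named after the image ID):

[root@ct ~]# ls /var/lib/glance/images/
4aa148c2-2abf-4826-9ab9-4108c5483841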

VII. Placement Component Deployment

1. Create the Database Instance and Database User

[root@ct ~]# mysql -uroot -p
Enter password: 
Welcome to the MariaDB monitor.  
...

MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit
Bye

2. Create the Placement Service User and API Endpoints

1) Create the placement user
[root@ct ~]# openstack user create --domain default --password PLACEMENT_PASS placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 21516a4203fa4292898da784c976df6f |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
2) Grant the placement user admin rights on the service project
[root@ct ~]# openstack role add --project service --user placement admin
3) Create a placement service, of service type placement
[root@ct ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 9074912228884fa1802b474a2a6d53fd |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
4) Register the API endpoints with the placement service; the registration info is written to MySQL
[root@ct ~]# openstack endpoint create --region RegionOne placement public http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 34a05fb7bde0470aa02db4b19a5e5d1e |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9074912228884fa1802b474a2a6d53fd |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne placement internal http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3c84e9d1a3d344daaa51c9750b53df91 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9074912228884fa1802b474a2a6d53fd |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne placement admin http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4bdec54eb9a84611a9e344515e886b19 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9074912228884fa1802b474a2a6d53fd |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
5) Install the placement service
[root@ct ~]# yum -y install openstack-placement-api
6) Edit the placement configuration file
#Back up the configuration file, strip comments, then apply the settings
cp -a /etc/placement/placement.conf{,.bak}
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://ct:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password PLACEMENT_PASS

#Inspect the placement configuration file
[root@ct ~]# cat /etc/placement/placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://ct:5000/v3				#Keystone address
memcached_servers = ct:11211			#session information is cached in memcached
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[profiler]
7) Sync the database
[root@ct ~]# su -s /bin/sh -c "placement-manage db sync" placement
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1280, u"Name 'alembic_version_pkc' ignored for PRIMARY key.")
  result = self._query(query)

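A quick hedged check that the sync created the placement tables, using the credentials granted above:

[root@ct ~]# mysql -u placement -pPLACEMENT_DBPASS -e 'USE placement; SHOW TABLES;' | head   ##allocations, resource_providers, etc.
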
8) Edit the Apache configuration file

00-placement-api.conf (this virtual-host configuration file is created automatically when the placement service is installed)

#Virtual-host configuration file
[root@ct ~]# vim /etc/httpd/conf.d/00-placement-api.conf		#created automatically when placement is installed

Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
<Directory /usr/bin>			#this works around a bug: the configuration below must be added to allow access to the placement API,
<IfVersion >= 2.4>				#otherwise Apache returns 403 on API requests; append it at the end of the file
	Require all granted
</IfVersion>
<IfVersion < 2.4>				#for older Apache versions: allow Apache to access /usr/bin, otherwise /usr/bin/placement-api cannot be reached
	Order allow,deny				
	Allow from all			#allow Apache access
</IfVersion>
</Directory>
9) Restart Apache
[root@ct ~]# systemctl restart httpd
10) Test

① Test access with curl

[root@ct ~]# curl ct:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}

② Check the listening port (netstat, lsof)

[root@ct ~]# netstat -natp | grep 8778              
tcp6       0      0 :::8778                 :::*                    LISTEN      12629/httpd 

③ Check placement status

[root@ct ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

VIII. Nova Component Deployment

Control node ct:

nova-api (the main Nova API service)

nova-scheduler (the Nova scheduler)

nova-conductor (the Nova database service, providing database access)

nova-novncproxy (Nova's VNC proxy, providing the instance console)

Compute nodes c1 and c2:

nova-compute (the Nova compute service)

1. Nova Service Configuration on the ct Node

1) Create the Nova databases and grant privileges
[root@ct ~]# mysql -uroot -p
Enter password: 
Welcome to the MariaDB monitor.  

MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit
Bye
2) Manage the Nova user and service

Create the nova user

[root@ct ~]# openstack user create --domain default --password NOVA_PASS nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 71168f5a12774ef781a7eceb17bbe7ac |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the nova user to the service project, with admin rights

[root@ct ~]# openstack role add --project service --user nova admin

Create the nova service

[root@ct ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 59873f448a9e4f43af804d3df4b0e0c6 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Associate endpoints with the Nova service

[root@ct ~]# openstack endpoint create --region RegionOne compute public http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ca6575467c9349efa297699b6ada2ed5 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 59873f448a9e4f43af804d3df4b0e0c6 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute internal http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a70baacad73348f9ae598f3f8aa50815 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 59873f448a9e4f43af804d3df4b0e0c6 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute admin http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d0f728e693574be4994e661b42074f2e |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 59873f448a9e4f43af804d3df4b0e0c6 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

Install the Nova components (nova-api, nova-conductor, nova-novncproxy, nova-scheduler)

[root@ct ~]# yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

Edit the Nova configuration file (nova.conf)

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
#Edit nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.61			####set to ct's IP (internal IP)
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen ' $my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address ' $my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS


#Inspect nova.conf

[root@ct ~]# cat /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata		#supported API types
my_ip = 10.0.0.61				#this host's local IP
use_neutron = true					#obtain IP addresses via Neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:RABBIT_PASS@ct	#RabbitMQ connection URL

[api]
auth_strategy = keystone				#use Keystone authentication

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api

[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova

[devices]
[ephemeral_storage_encryption]
[filter_scheduler]

[glance]
api_servers = http://ct:9292

[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]

[keystone_authtoken]				#Keystone authentication settings
auth_url = http://ct:5000/v3				#authenticate against this URL
memcached_servers = ct:11211			#memcached address:port
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]

[oslo_concurrency]					#lock path
lock_path = /var/lib/nova/tmp			#while creating a VM, the lock makes each step wait for the previous one to finish; steps cannot run in parallel, guaranteeing sequential execution

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]

[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = PLACEMENT_PASS

[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]						#if this is misconfigured, the VM console will be unreachable
enabled = true		
server_listen =  $my_ip				#VNC listen address
server_proxyclient_address =  $my_ip			#the proxy client address is this host's address on the management network

[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement

Initialize the nova_api database

[root@ct ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database. Internally, Nova partitions resources into different cells and assigns compute nodes to different cells; OpenStack uses cells to group compute nodes logically.

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
9b5c500b-af45-4903-a928-bad04078fa3a

Initialize the nova database; whether initialization succeeded can be checked in the /var/log/nova/nova-manage.log log

[root@ct ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release')
  result = self._query(query)

Verify that the cell0 and cell1 components registered successfully

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
|  Name |                 UUID                 |       Transport URL        |           Database Connection           | Disabled |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |           none:/           | mysql+pymysql://nova:****@ct/nova_cell0 |  False   |
| cell1 | 9b5c500b-af45-4903-a928-bad04078fa3a | rabbit://openstack:****@ct |    mysql+pymysql://nova:****@ct/nova    |  False   |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+

Start the Nova services

[root@ct ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@ct ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Check the Nova service ports

[root@ct ~]# netstat -tnlup|egrep '8774|8775'
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      16940/python2       
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      16940/python2  

[root@ct ~]# curl http://ct:8774
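If the API is up, this curl returns a small JSON document listing the supported compute API versions. A quick readability check (a minimal sketch, assuming the python2 interpreter already installed on these nodes):

[root@ct ~]# curl -s http://ct:8774 | python -m json.tool    #pretty-print the version document returned by the API root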

2、Configure the Nova service on node c1

1) Install the nova-compute component
[root@c1 ~]# yum -y install openstack-nova-compute
2) Modify the configuration file
#Edit the Nova configuration file on the compute nodes (c1 and c2 differ only in IP)
cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 20.0.0.62				#change to the node's own IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://20.0.0.61:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu


#The resulting configuration file:
[root@c1 ~]# cat /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@ct
my_ip = 20.0.0.62
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]

[glance]
api_servers = http://ct:9292

[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]

[keystone_authtoken]
auth_url = http://ct:5000/v3
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[libvirt]
virt_type = qemu

[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = PLACEMENT_PASS

[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://20.0.0.61:6080/vnc_auto.html			#special case: the controller IP must be written out here by hand, otherwise the UI console cannot reach the instances after the deployment succeeds

[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
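To spot-check that the commands above actually landed in the file, openstack-config can read values back; for example:

[root@c1 ~]# openstack-config --get /etc/nova/nova.conf DEFAULT my_ip    #should print this node's IP, 20.0.0.62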
3) Start the services
[root@c1 ~]#  systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

[root@c1 ~]#  systemctl start libvirtd.service openstack-nova-compute.service
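Before moving on, it is worth confirming that the compute service actually came up; a quick check:

[root@c1 ~]# systemctl is-active libvirtd openstack-nova-compute    #both should print "active"
[root@c1 ~]# tail -n 20 /var/log/nova/nova-compute.log              #watch for errors connecting to RabbitMQ or Placement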

3、Configure the Nova service on node c2

Identical to c1 except for the IP address; see the sketch below.
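Concretely, only the node IP changes; a minimal sketch of the c2-specific steps:

# on c2: same commands as c1, with the node's own IP
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 20.0.0.63
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service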

4、Verify on ct that c1 and c2 registered successfully

1) Check on ct whether the compute nodes registered with the controller

Registration happens over the message queue; run this on the controller node.

[root@ct ~]# openstack compute service list --service nova-compute
+----+--------------+------+------+---------+-------+----------------------------+
| ID | Binary       | Host | Zone | Status  | State | Updated At                 |
+----+--------------+------+------+---------+-------+----------------------------+
|  8 | nova-compute | c1   | nova | enabled | up    | 2021-02-02T02:40:49.000000 |
|  9 | nova-compute | c2   | nova | enabled | up    | 2021-02-02T02:40:48.000000 |
+----+--------------+------+------+---------+-------+----------------------------+
2) Scan for the compute nodes currently available in OpenStack

Discovered compute nodes are mapped into a cell, and VMs can then be created in that cell; in effect OpenStack groups compute nodes internally by assigning them to different cells.

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 9b5c500b-af45-4903-a928-bad04078fa3a
Checking host mapping for compute host 'c2': 690c36b3-38fc-4a76-8b54-077194ba04d9
Creating host mapping for compute host 'c2': 690c36b3-38fc-4a76-8b54-077194ba04d9
Found 1 unmapped computes in cell: 9b5c500b-af45-4903-a928-bad04078fa3a
3) By default, every newly added compute node requires a host scan

Running the scan on the controller after each addition is tedious, so modify the controller's main Nova configuration file instead:

[root@ct ~]# vim /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300     #scan every 300 seconds

[root@ct ~]# systemctl restart openstack-nova-api.service
4) Verify the compute node services

Check that all Nova services are healthy and that the compute services registered successfully

[root@ct ~]# openstack compute service list
+----+----------------+------+----------+---------+-------+----------------------------+
| ID | Binary         | Host | Zone     | Status  | State | Updated At                 |
+----+----------------+------+----------+---------+-------+----------------------------+
|  4 | nova-conductor | ct   | internal | enabled | up    | 2021-02-02T02:46:10.000000 |
|  5 | nova-scheduler | ct   | internal | enabled | up    | 2021-02-02T02:46:11.000000 |
|  8 | nova-compute   | c1   | nova     | enabled | up    | 2021-02-02T02:46:09.000000 |
|  9 | nova-compute   | c2   | nova     | enabled | up    | 2021-02-02T02:46:08.000000 |
+----+----------------+------+----------+---------+-------+----------------------------+

Check that each component's API is reachable

[root@ct ~]# openstack catalog list
+-----------+-----------+---------------------------------+
| Name      | Type      | Endpoints                       |
+-----------+-----------+---------------------------------+
| nova      | compute   | RegionOne                       |
|           |           |   internal: http://ct:8774/v2.1 |
|           |           | RegionOne                       |
|           |           |   public: http://ct:8774/v2.1   |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8774/v2.1    |
|           |           |                                 |
| placement | placement | RegionOne                       |
|           |           |   public: http://ct:8778        |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:8778      |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8778         |
|           |           |                                 |
| glance    | image     | RegionOne                       |
|           |           |   public: http://ct:9292        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:9292         |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:9292      |
|           |           |                                 |
| keystone  | identity  | RegionOne                       |
|           |           |   internal: http://ct:5000/v3/  |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:5000/v3/     |
|           |           | RegionOne                       |
|           |           |   public: http://ct:5000/v3/    |
|           |           |                                 |
+-----------+-----------+---------------------------------+

Check that the image list can be retrieved

[root@ct ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 4aa148c2-2abf-4826-9ab9-4108c5483841 | cirros | active |
+--------------------------------------+--------+--------+

Check that the cell and Placement APIs are healthy; if either one is broken, instances cannot be created later

[root@ct ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

九、Neutron component deployment

1、Create the neutron database and grant privileges

[root@ct ~]# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit
Bye
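To confirm the grants took effect, log in to MariaDB as the neutron user; a quick check:

[root@ct ~]# mysql -uneutron -pNEUTRON_DBPASS -e 'SHOW DATABASES;' | grep neutron    #should print: neutron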

2、Create the neutron user for Keystone authentication

[root@ct ~]# openstack user create --domain default --password NEUTRON_PASS neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | a160737237234196be3e4830f6657f42 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

3、Grant the neutron user the admin role in the service project

[root@ct ~]# openstack role add --project service --user neutron admin

4、Create the network service entity, with service type network

[root@ct ~]# openstack service create --name neutron --description "OpenStack Networking" network 
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 146efcacedce4502bf89d05b554c8181 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+


5、Register the API endpoints for the neutron service, i.e. add its endpoints

[root@ct ~]# openstack endpoint create --region RegionOne network public http://ct:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d95ce5bca03e4b348e6d3004ec227466 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 146efcacedce4502bf89d05b554c8181 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://ct:9696                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne network internal http://ct:9696   
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4b69faee9be945f0b272658233fc126f |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 146efcacedce4502bf89d05b554c8181 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://ct:9696                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne network admin http://ct:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7690ac1a9f2245b886b74f212a5b2d60 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 146efcacedce4502bf89d05b554c8181 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://ct:9696                   |
+--------------+----------------------------------+
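The three endpoints can be confirmed in a single listing; for example:

[root@ct ~]# openstack endpoint list --service network    #should show the public, internal, and admin interfaces, all on http://ct:9696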


6、Install the provider network components (bridging)

The ebtables package manages filtering rules for Ethernet bridges (the link-layer counterpart of iptables)

[root@ct ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables conntrack-tools

7、Modify the configuration files

Modify the main configuration file

#Modify the main configuration file neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@ct/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set  /etc/neutron/neutron.conf nova  auth_url http://ct:5000
openstack-config --set  /etc/neutron/neutron.conf nova  auth_type password
openstack-config --set  /etc/neutron/neutron.conf nova  project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  region_name RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova  project_name service
openstack-config --set  /etc/neutron/neutron.conf nova  username nova
openstack-config --set  /etc/neutron/neutron.conf nova  password NOVA_PASS

#View the configuration file
[root@ct ~]# cat /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2						#enable the layer-2 (ML2) plugin
service_plugins = router					#enable the layer-3 (router) plugin
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@ct		#RabbitMQ connection
auth_strategy = keystone					#authentication strategy
notify_nova_on_port_status_changes = true			#notify the compute service when a port's status changes	
notify_nova_on_port_data_changes = true			#notify the compute service when a port's data changes
[cors]
[database]						#database connection
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@ct/neutron
[keystone_authtoken]					#Keystone authentication settings
www_authenticate_uri = http://ct:5000
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]						#lock path
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
[nova]							#neutron needs to return data to nova
auth_url = http://ct:5000					#authenticate nova against keystone
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova						#nova's username and password are used to validate nova's token with keystone
password = NOVA_PASS

Modify the ML2 plugin configuration file ml2_conf.ini

#Set the parameters
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true


#View the configuration file
[root@ct ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]

[ml2]
type_drivers = flat,vlan,vxlan				#type drivers: flat (bridged), vlan, vxlan; lets layer 2 support bridging and vlan-based subnet division
tenant_network_types = vxlan				#tenant network type (vxlan)
mechanism_drivers = linuxbridge,l2population		#enable the linuxbridge and l2population mechanisms (l2population simplifies the communication topology and reduces broadcast traffic)
extension_drivers = port_security			#enable the port-security extension driver, which implements access control with iptables; note that the port restrictions added by extended security groups can keep some services from starting

[ml2_type_flat]
flat_networks = provider				#name the public (flat) virtual network "provider"

[ml2_type_vxlan]
vni_ranges = 1:1000				#VXLAN network identifier range for private networks

[securitygroup]
enable_ipset = true					#enable ipset to make security groups more convenient
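Note that provider in flat_networks is a label, not a device name: it must match the left-hand side of physical_interface_mappings in the Linux bridge agent file configured next. A minimal sketch of the pairing:

# ml2_conf.ini
[ml2_type_flat]
flat_networks = provider                        # network label

# linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1     # label:physical_NIC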

Modify the Linux bridge network provider configuration file

#Linux bridge
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:eth1		###substitute the node's actual provider NIC name
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.0.0.61	##controller node management (VM1) IP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

#View the configuration file
[root@ct ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]

[linux_bridge]
physical_interface_mappings = provider:eth1			#the provider name from the previous file, associated with a physical NIC; when instances are later attached to the external network they reach the internet through this NIC (which may also be a bond0 or br0)

[vxlan]							#enable the VXLAN overlay network, set the overlay endpoint to this node's management IP, and enable layer-2 population
enable_vxlan = true						#allows users to create self-service (layer-3) networks
local_ip = 10.0.0.61
l2_population = true

[securitygroup]						#enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Modify kernel parameters

[root@ct ~]# echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf    #allow instance traffic to pass through the physical host's bridge filters

[root@ct ~]# echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

[root@ct ~]# modprobe br_netfilter    #modprobe loads a module into (or, with -r, removes it from) the kernel

[root@ct ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
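modprobe only loads the module for the current boot; to load br_netfilter automatically after a reboot, a minimal sketch using systemd's modules-load mechanism:

[root@ct ~]# echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf    #load the module at every boot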

Configure the Linux bridge interface driver and the external network bridge

[root@ct ~]# cp -a /etc/neutron/l3_agent.ini{,.bak}

[root@ct ~]# grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini

[root@ct ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

[root@ct ~]# cat /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

Modify the dhcp_agent configuration file

cp -a /etc/neutron/dhcp_agent.ini{,.bak}

grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini 

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

[root@ct ~]# cat /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = linuxbridge #default interface driver: Linux bridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq #DHCP driver: Dnsmasq

enable_isolated_metadata = true      #enable isolated metadata, so instances on isolated networks can reach the metadata service

Configure the metadata agent, the shared configuration for bridged and self-service networks

cp -a /etc/neutron/metadata_agent.ini{,.bak}

grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host ct

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET

[root@ct ~]# cat /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = ct
metadata_proxy_shared_secret = METADATA_SECRET
[cache]
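METADATA_SECRET here must be identical to the metadata_proxy_shared_secret written into nova.conf in the next step; once both files are in place, a quick consistency check:

[root@ct ~]# grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini /etc/nova/nova.conf    #the two values must match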

Modify the Nova configuration file so Nova can interact with Neutron

#Modify the configuration on CT
openstack-config --set /etc/nova/nova.conf neutron url http://ct:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://ct:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET



[root@ct ~]# cat /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 10.0.0.61
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:RABBIT_PASS@ct
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://ct:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://ct:5000/v3
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[metrics]
[mks]
[neutron]
url = http://ct:9696
auth_url = http://ct:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = PLACEMENT_PASS
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen =  $my_ip
server_proxyclient_address =  $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]


[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement

8、Create the ML2 plugin symbolic link

The network service initialization scripts expect /etc/neutron/plugin.ini to be a symlink to the ML2 plugin configuration file

[root@ct ~]#  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

9、Initialize the database

[root@ct ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
·
·
·
  OK
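To confirm the schema was actually created, list a few of the new neutron tables; for example:

[root@ct ~]# mysql -uneutron -pNEUTRON_DBPASS neutron -e 'SHOW TABLES;' | head -5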

10、Restart the nova-api service on the controller node

[root@ct ~]# systemctl restart openstack-nova-api.service

11、Start the Neutron services and enable them at boot

[root@ct ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

[root@ct ~]# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

[root@ct ~]# netstat -anutp |grep 9696
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      21409/server.log  

12、Since the layer-3 (L3) network service is configured, start it as well

[root@ct ~]# systemctl enable neutron-l3-agent.service

[root@ct ~]# systemctl restart neutron-l3-agent.service

13、Install Neutron on c1

ipset is an iptables extension that allows a rule to match a set of addresses rather than a single IP

[root@c1 ~]# yum -y install openstack-neutron-linuxbridge ebtables ipset conntrack-tools

Modify the neutron.conf file

#Modify the configuration file (on C1 and C2)
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp


#View the configuration file

[root@c1 neutron]# cat neutron.conf

[DEFAULT]					#the neutron server and its agents also communicate through RabbitMQ
transport_url = rabbit://openstack:RABBIT_PASS@ct
auth_strategy = keystone				#authentication strategy: keystone
[cors]
[database]

[keystone_authtoken]				#Keystone authentication settings
www_authenticate_uri = http://ct:5000
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]					#lock path (concurrency management)
lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]

Configure the Linux bridge agent

#Modify the Linuxbridge agent configuration file on C1 and C2
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:eth1		###substitute the node's actual provider NIC name
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.0.0.62		##this node's management (VM1) IP: 10.0.0.62 on c1, 10.0.0.63 on c2
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver


[root@c1 ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:eth1
#This binds the node's external network directly to the specified physical NIC; the node itself needs no network-name configuration and simply follows the controller's instructions. The external network name configured on the controller applies to the whole OpenStack environment, so here it is bound to this node's physical NIC (which may also be a bond0 or br0).

[vxlan]
enable_vxlan = true							#enable the VXLAN overlay network
local_ip = 10.0.0.62
l2_population = true						#L2 population improves the scalability of VXLAN networks

[securitygroup]
enable_security_group = true						#enable security groups
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver	#security-group firewall driver
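The only per-node difference is local_ip; a minimal sketch of the values used in this environment:

# vxlan local_ip per node (VM1/management network)
#   ct: 10.0.0.61    c1: 10.0.0.62    c2: 10.0.0.63
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.0.0.63    # on c2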

Modify kernel parameters

[root@c1 ~]# echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf   #allow instance traffic to pass through the physical host's bridge filters

[root@c1 ~]# echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

[root@c1 ~]# modprobe br_netfilter     #modprobe loads a module into (or, with -r, removes it from) the kernel

[root@c1 ~]# sysctl -p

Modify the nova.conf configuration file

#Modify the neutron section of nova.conf on the C1 and C2 nodes
openstack-config --set /etc/nova/nova.conf neutron auth_url http://ct:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS

#The resulting section:
[neutron]
auth_url = http://ct:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
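For the new [neutron] section to take effect and for the bridge agent to register with the controller, restart nova-compute and start the agent on each compute node:

[root@c1 ~]# systemctl restart openstack-nova-compute.service
[root@c1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@c1 ~]# systemctl start neutron-linuxbridge-agent.service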

14、Install Neutron on c2

Deploy the Neutron service on the C2 node (same as C1, apart from local_ip)

15、Verify the service components (on the ct node)

[root@ct ~]# openstack extension list --network

[root@ct ~]# openstack network agent list

十、OpenStack-Dashboard component deployment

1、Install the dashboard and httpd on the c1 node

The CT controller node already runs an httpd service, and the Dashboard console needs httpd support of its own, so httpd is installed on the C1 node here

[root@c1 ~]# yum -y install openstack-dashboard httpd

2、Modify local_settings, the dashboard's configuration file

[root@c1 ~]# vim /etc/openstack-dashboard/local_settings 
#The changes are as follows:
import os								#Python module import
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False							#debugging disabled	
ALLOWED_HOSTS = ['*']						#restricts which hostnames or IP addresses may access the dashboard; ['*'] allows all
LOCAL_PATH = '/tmp'
SECRET_KEY='f8ac039815265a99b64f'
SESSION_ENGINE = 'django.contrib.sessions.backends.file'		#session engine
CACHES = {							#uncomment lines 95-100
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'ct:11211',	#memcached address and port
    }
}
#the block above stores session data in memcached; sessions can also be stored elsewhere
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'		#line 108
OPENSTACK_HOST = "ct"						#lines 118-127
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST	
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True			#enable multi-domain support in the dashboard
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
#OpenStack API versions
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"


OPENSTACK_NEUTRON_NETWORK = {					#lines 132-152
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_ipv6': True,
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
#network feature flags; ['*'] means all types are supported

TIME_ZONE = "Asia/Shanghai"					#line 156

3、Restart the services

Regenerate openstack-dashboard.conf and restart the Apache service

(restarting Apache is slow because the dashboard re-copies its code files)

[root@c1 ~]# cd /usr/share/openstack-dashboard

[root@c1 openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

[root@c1 ~]# systemctl enable httpd.service

[root@c1 ~]# systemctl restart httpd.service

Restart the memcached service on the ct node

[root@ct ~]# systemctl restart memcached.service
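Before opening a browser, the dashboard can be checked from the command line; for example:

[root@ct ~]# curl -sI http://20.0.0.62 | head -1    #expect an HTTP response (200, or a 302 redirect to the login page)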

Verification

Open a browser and enter "http://20.0.0.62" in the address bar to reach the Dashboard login page.

On the login page fill in: domain: default, username: admin, password: ADMIN_PASS (as defined in ~/.bashrc).

Then log in.

(Screenshot: Dashboard login page)
