
Master-Slave Database Management (6)

Using the two virtual machines provided, install the MariaDB database on both and configure them as a master-slave pair so that the two databases stay synchronized. After the configuration is complete, run the "show slave status \G" command in the database on the slave node to query its replication status, and submit the query result as text in the answer box.

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql1
                  Master_User: user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 245
               Relay_Log_File: mariadb-relay-bin.000005
                Relay_Log_Pos: 529
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 245
              Relay_Log_Space: 1256
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 30
1 row in set (0.00 sec)
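
For reference, a minimal configuration sketch that leads to the state above; the hostnames, replication user, and password are assumptions (server_id 30 matches the Master_Server_Id shown):

# On the master (mysql1), add to the [mysqld] section of /etc/my.cnf, then restart mariadb:
#   log_bin = mysql-bin
#   server_id = 30
# In mysql on the master, create the replication account:
#   grant replication slave on *.* to 'user'@'%' identified by 'password';
# On the slave (mysql2), set a different server_id (e.g. 40), restart mariadb, then run in mysql:
#   change master to master_host='mysql1',master_user='user',master_password='password';
#   start slave;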

Use the xnode2 and xnode3 virtual machines for this exercise. Install the MariaDB database service on both and configure them as a master-slave pair (xnode2 as the master node, xnode3 as the slave node; install the database using the CentOS-7-x86_64-DVD-1511.iso and gpmall-repo repositories) so that the two databases stay synchronized. After the configuration is complete, run the "show slave status \G" command in the database on xnode3 to query the slave's replication status, and submit the query result as text in the answer box. (30 points)

Mall Application System (6)

Using the software packages and the virtual machine provided, complete a single-node deployment of the application system. After deployment, log in (in the order form, use your school's address as the shipping address and your real contact details for the recipient), then use the curl command to fetch the mall home page, and submit the result of curl http://<your-mall-IP>/#/home as text in the answer box.

[root@mall gpmall-xiangmubao-danji]# curl http://192.168.1.111/#/home
<!DOCTYPE html><html><head><meta charset=utf-8><title>应用商城系统</title><meta name=keywords content=""><meta name=description content=""><meta http-equiv=X-UA-Compatible content="IE=Edge"><meta name=wap-font-scale content=no><link rel="shortcut icon " type=images/x-icon href=/static/images/favicon.ico><link href=/static/css/app.8d4edd335a61c46bf5b6a63444cd855a.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.2d17a82764acff8145be.js></script><script type=text/javascript src=/static/js/vendor.4f07d3a235c8a7cd4efe.js></script><script type=text/javascript src=/static/js/app.81180cbb92541cdf912f.js></script></body></html><style>body{
min-width:1250px;}</style>

YUM Repository Management (5) [answer verified working]

Given an image file CentOS-7-x86_64-DVD-1511.iso, use it to configure a local yum repository, with the image file mounted at the /opt/centos directory. How should you write your local.repo file so that packages from this image can be used to install software? Submit the contents of the local.repo file as text in the answer box.

[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
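
The repository only resolves once the ISO is actually mounted at the path that baseurl points to; a minimal sketch, assuming the ISO sits in the current directory:

[root@localhost ~]# mkdir -p /opt/centos
[root@localhost ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /opt/centos
[root@localhost ~]# yum clean all && yum repolist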

YUM Repository Management (40 points)
Suppose there is an image file centos7.2-1511.iso; use it to configure a yum repository, with the image mounted at the /opt/centos directory. There is also an FTP source at IP address 192.168.100.200 whose FTP configuration sets anon_root=/opt, and the /opt directory contains an iaas directory (which in turn contains a repodata directory). How should you write your local.repo file so that packages from both locations can be used to install software? Submit the contents of the local.repo file as text in the answer box.

[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=ftp://192.168.100.200/iaas
gpgcheck=0
enabled=1

Docker Installation (5) [docker info]

Using the virtual machine and software packages provided, configure a YUM repository yourself and install the docker-ce service. After installation, submit the output of the docker info command as text in the answer box.

[root@xiandian ~]# tar -zvxf Docker.tar.gz
[root@xiandian ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[docker]
name=docker
baseurl=file:///root/Docker
enabled=1
gpgcheck=0
[root@xiandian ~]# iptables -F
[root@xiandian ~]# iptables -X
[root@xiandian ~]# iptables -Z
[root@xiandian ~]# iptables-save 
# Generated by iptables-save v1.4.21 on Fri May 15 02:00:29 2020
*filter
:INPUT ACCEPT [20:1320]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [11:1092]
COMMIT
# Completed on Fri May 15 02:00:29 2020
[root@xiandian ~]# vim /etc/selinux/config 
SELINUX=disabled
# Note: disable the swap partition:
[root@xiandian ~]# vim /etc/fstab 
#/dev/mapper/centos-swap swap            swap    defaults        0 0
[root@xiandian ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1824          95        1591           8         138        1589
Swap:             0           0           0
# Note: upgrade the system and reboot before configuring IP forwarding; otherwise two of the sysctl rules may fail:
[root@xiandian ~]# yum upgrade -y
[root@xiandian ~]# reboot
[root@xiandian ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@xiandian ~]# modprobe br_netfilter
[root@xiandian ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@xiandian ~]# yum install -y yum-utils device-mapper-persistent-data
[root@xiandian ~]# yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
[root@xiandian ~]# systemctl daemon-reload
[root@xiandian ~]# systemctl restart docker
[root@xiandian ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@xiandian ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.09.6
Storage Driver: devicemapper
 Pool Name: docker-253:0-100765090-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 11.73MB
 Data Space Total: 107.4GB
 Data Space Available: 24.34GB
 Metadata Space Used: 17.36MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.13GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.164-RHEL7 (2019-08-27)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-1127.8.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.777GiB
Name: xiandian
ID: OUR6:6ERV:3UCH:WJCM:TDLL:5ATV:E7IQ:HLAR:JKQB:OBK2:HZ7G:JC3Q
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

RAID Storage Management (4)

Using the virtual machine provided, which has a 20 GB disk /dev/vdb, use the fdisk command to partition the disk into three 5 GB partitions. Using these three partitions, create a RAID 5 array named /dev/md5. After creating it, format it with the xfs file system and mount it on the /mnt directory. Submit the mdadm -D /dev/md5 command and its output, and the df -h command and its output, as text in the answer box.

[root@xiandian ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Thu Dec  7 10:31:07 2023
        Raid Level : raid5
        Array Size : 10477568 (9.99 GiB 10.73 GB)
     Used Dev Size : 5238784 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Thu Dec  7 10:34:37 2023
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0
              Name : xiandian:5  (local to host xiandian)
              UUID : 71123d35:b354bc98:2e36589d:f0ed3491
            Events : 17
    Number   Major   Minor   RaidDevice State
       0     253       17        0      active sync   /dev/vdb1
       1     253       18        1      active sync   /dev/vdb2
       2     253       19        2      active sync   /dev/vdb3
[root@xiandian ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        41G  2.4G   39G   6% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G  4.0K  3.9G   1% /dev/shm
tmpfs           3.9G   17M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0      2.8G   33M  2.8G   2% /swift/node
tmpfs           799M     0  799M   0% /run/user/0
/dev/md5       10.0G   33M 10.0G   1% /mnt
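
A sketch of the commands that produce the array above, assuming the three fdisk partitions are /dev/vdb1 through /dev/vdb3:

[root@xiandian ~]# mdadm -Cv /dev/md5 -l5 -n3 /dev/vdb1 /dev/vdb2 /dev/vdb3
[root@xiandian ~]# mkfs.xfs /dev/md5
[root@xiandian ~]# mount /dev/md5 /mnt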

RAID Storage Management (40 points)
Log in to the cloud host, which has a 20 GB disk /dev/vdb. Use the fdisk command to partition the disk into two 5 GB partitions. Using these two partitions, create a RAID 1 array named /dev/md0. After creating it, format it with the xfs file system and mount it on the /mnt directory. Submit the output of the mdadm -D /dev/md0 and df -h commands as text in the answer box.

[root@xserver1 ~]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Thu Dec  7 10:31:07 2023
        Raid Level : raid1
        Array Size : 5237760 (5.00 GiB 5.36 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Thu Dec  7 10:37:11 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : unknown
              Name : xserver1:0  (local to host xserver1)
              UUID : 8440d04c:3cf2e84a:4d524020:1072f7b4
            Events : 17
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
[root@xserver1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   32G  5.2G   27G  17% /
devtmpfs                 903M     0  903M   0% /dev
tmpfs                    913M     0  913M   0% /dev/shm
tmpfs                    913M  8.6M  904M   1% /run
tmpfs                    913M     0  913M   0% /sys/fs/cgroup
/dev/sda1                509M  125M  384M  25% /boot
/dev/mapper/centos-home  4.0G   33M  4.0G   1% /home
tmpfs                    183M     0  183M   0% /run/user/0
/dev/md0                 5.0G   33M  5.0G   1% /mnt
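
The RAID 1 variant differs only in level and member count; a sketch assuming the two partitions are /dev/vdb1 and /dev/vdb2 (the transcript above shows sdb member names, so the exact names depend on the environment):

[root@xserver1 ~]# mdadm -Cv /dev/md0 -l1 -n2 /dev/vdb1 /dev/vdb2
[root@xserver1 ~]# mkfs.xfs /dev/md0
[root@xserver1 ~]# mount /dev/md0 /mnt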

RAID Management (40 points)
Using the virtual machine and software packages provided, complete the creation of a RAID disk array. The virtual machine has a 20 GB disk vdb; partition it into four partitions of 5 GB each. Use three of the 5 GB partitions plus one hot-spare disk to build an array named md5 at RAID level 5. After creation, submit the output of mdadm -D /dev/md5 as text in the answer box.

[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Thu Dec  7 10:31:07 2023
        Raid Level : raid5
        Array Size : 10473472 (9.99 GiB 10.72 GB)
     Used Dev Size : 5236736 (4.99 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Dec  7 10:37:11 2023
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : unknown
              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 52a85acc:77f25bda:9af98a9f:c85aae38
            Events : 18
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
       3       8       20        -      spare   /dev/sdb4
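
For the hot-spare variant, the extra partition is passed with -x; a sketch assuming the four partitions are /dev/vdb1 through /dev/vdb4:

[root@localhost ~]# mdadm -Cv /dev/md5 -l5 -n3 -x1 /dev/vdb1 /dev/vdb2 /dev/vdb3 /dev/vdb4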

Zookeeper Cluster (4)

Using the three virtual machines and the software packages provided, complete the installation and configuration of a Zookeeper cluster. Once configured, run ./zkServer.sh status in the appropriate directory to check the status of the three Zookeeper nodes, and submit the status of all three nodes as text in the answer box.

[root@zookeeper1 bin]# ./zkServer.sh status
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@zookeeper2 bin]# ./zkServer.sh status
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
[root@zookeeper3 bin]# ./zkServer.sh status
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
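
For reference, a minimal sketch of the cluster section of conf/zoo.cfg that yields one leader and two followers; the IPs are assumptions, and each node must also write its own id (1, 2, or 3) to the myid file under dataDir:

server.1=172.16.51.6:2888:3888
server.2=172.16.51.18:2888:3888
server.3=172.16.51.30:2888:3888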

Kafka Cluster (4)

Using the three virtual machines and the software packages provided, complete the installation and configuration of a Kafka cluster. Once configured, run ./kafka-topics.sh --create --zookeeper <your-IP>:2181 --replication-factor 1 --partitions 1 --topic test in the appropriate directory to create a topic, and submit the command's output as text in the answer box.
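
No transcript is given for this task; a sketch of the expected invocation, with an assumed IP, and the success line printed by older Kafka releases (the exact wording and quoting vary by version):

[root@zookeeper1 bin]# ./kafka-topics.sh --create --zookeeper 172.16.51.6:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".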

Object Storage Management (4) [procedure verified]

Using the "all-in-one" virtual machine provided, use openstack commands to create a container named examtest and query it, then upload an aaa.txt file (which you may create yourself) into the container and query it. Submit the commands and their output, in order, as text in the answer box.

[root@controller ~]# openstack container create examtest
+---------------------------------------+-----------+------------------------------------+
| account                               | container | x-trans-id                         |
+---------------------------------------+-----------+------------------------------------+
| AUTH_f9ff39ba9daa4e5a8fee1fc50e2d2b34 | examtest  | txc92b7d64042c46468ec82-006570b64f |
+---------------------------------------+-----------+------------------------------------+
[root@controller ~]# openstack container list
+----------+
| Name     |
+----------+
| examtest |
+----------+
[root@controller ~]# openstack object create examtest aaa.txt
[Errno 2] No such file or directory: 'aaa.txt'
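# the upload failed because aaa.txt did not exist yet; create it first (step implied by the retry below):
[root@controller ~]# touch aaa.txt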
[root@controller ~]# openstack object create examtest aaa.txt
+---------+-----------+----------------------------------+
| object  | container | etag                             |
+---------+-----------+----------------------------------+
| aaa.txt | examtest  | d41d8cd98f00b204e9800998ecf8427e |
+---------+-----------+----------------------------------+
[root@controller ~]# openstack object list examtest
+---------+
| Name    |
+---------+
| aaa.txt |
+---------+

Keystone Management (3) [procedure verified]

Using the "all-in-one" image provided, check the status of each OpenStack service yourself and troubleshoot any problems. In keystone, create a user testuser with the password password; once created, view testuser's details. Submit the output of the openstack user show testuser command as text in the answer box.

[root@xiandian~]# source /etc/keystone/admin-openrc.sh
[root@xiandian~]# openstack user create --domain xiandian  --password password testuser
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 9321f21a94ef4f85993e92a228892418 |
| enabled   | True                             |
| id        | 5ad593ca940d427886187a4a666a816d |
| name      | testuser                         |
+-----------+----------------------------------+
[root@xiandian~]# openstack user show testuser
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 9321f21a94ef4f85993e92a228892418 |
| enabled   | True                             |
| id        | 5ad593ca940d427886187a4a666a816d |
| name      | testuser                         |
+-----------+----------------------------------+

Using the "all-in-one" virtual machine provided, use openstack commands to create an account named "alice" with the password "mypassword123" and the email "alice@example.com". Also create a project named "acme" and a role named "compute-user", then assign user "alice" the "compute-user" role in the "acme" project. Submit the commands and their results as text in the answer box.

[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack user create --domain demo --password mypassword123 --email alice@example.com alice
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | ac0be8c125cb40dc83acf0ccd74bc008 |
| email     | alice@example.com                |
| enabled   | True                             |
| id        | 7c78906b6dd5426fac1f72e331f6dee2 |
| name      | alice                            |
+-----------+----------------------------------+
[root@controller ~]# openstack project create --domain demo acme
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | ac0be8c125cb40dc83acf0ccd74bc008 |
| enabled     | True                             |
| id          | bf977ceb37f44317b07130b63f6d3fb3 |
| is_domain   | False                            |
| name        | acme                             |
| parent_id   | ac0be8c125cb40dc83acf0ccd74bc008 |
+-------------+----------------------------------+
[root@controller ~]# openstack role create compute-user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 48e2c5a1764f40a5a330513bcbc0befb |
| name      | compute-user                     |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project acme --user alice compute-user

Glance Management (3) [procedure verified; requires pulling the image]

Using the "all-in-one" image provided, check the status of each OpenStack service yourself and troubleshoot any problems. Using the provided cirros-0.3.4-x86_64-disk.img image, upload it with the glance command under the name mycirros, then submit the output of the glance image-show <id> command as text in the answer box.

[root@xiandian~]# glance image-create --name mycirros --disk-format qcow2 --container-format bare --file /root/cirros-0.3.4-x86_64-disk.img 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2020-05-15T17:18:21Z                 |
| disk_format      | qcow2                                |
| id               | 935a7ac5-a6dd-4cb4-94b3-941d1b2ddd23 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2020-05-15T17:18:22Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
[root@xiandian~]# glance image-show 935a7ac5-a6dd-4cb4-94b3-941d1b2ddd23 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2020-05-15T17:18:21Z                 |
| disk_format      | qcow2                                |
| id               | 935a7ac5-a6dd-4cb4-94b3-941d1b2ddd23 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2020-05-15T17:18:22Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

Glance Service Operations (40 points)
Using the "all-in-one" virtual machine provided, use Glance commands to create an image named "cirros" from the provided image file "cirros-0.3.4-x86_64-disk.img". View the details of the "cirros" image with a glance command, then use a glance command to update the image's min-disk (whose default unit is GB) to 1 GB. Submit the commands and their results as text in the answer box.

[root@controller ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img  
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2020-12-11T17:04:24Z                 |
| disk_format      | qcow2                                |
| id               | 9ceee9fb-f1ee-4441-b88d-3349aed9f7d3 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 0b3ef282c4e64838a39a99c01e6dc964     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2020-12-11T17:04:25Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
[root@controller ~]# glance image-update --min-disk=1 9ceee9fb-f1ee-4441-b88d-3349aed9f7d3
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2020-12-11T17:04:24Z                 |
| disk_format      | qcow2                                |
| id               | 9ceee9fb-f1ee-4441-b88d-3349aed9f7d3 |
| min_disk         | 1                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 0b3ef282c4e64838a39a99c01e6dc964     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2020-12-11T17:09:20Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

Deploying a Swarm Cluster (3)

Using the virtual machines and software packages provided, install docker-ce, deploy a Swarm cluster, and install the Portainer graphical management tool. After deployment, log in to the ip:9000 page in a browser to reach the Swarm console. Submit the result of curl <swarm-ip>:9000 as text in the answer box.

[root@master ~]# curl 192.168.1.111:9000
<!DOCTYPE html><html lang="en" ng-app="portainer">
<head>
  <meta charset="utf-8">
  <title>Portainer</title>
  <meta name="description" content="">
  <meta name="author" content="Portainer.io">
  <div class="row" style="text-align:center">
  Loading Portainer...
  <i class="fa fa-cog fa-spin" style="margin-left:5px"></i>
  </div>
  <!--!pannel-->
  </div>
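
For reference, a minimal sketch of the Swarm and Portainer bring-up behind this page; the IP and the use of the classic portainer/portainer image are assumptions:

[root@master ~]# docker swarm init --advertise-addr 192.168.1.111
[root@master ~]# docker run -d -p 9000:9000 --restart always --name portainer \
    -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer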

Shell Script Completion (3) [answer works; your mileage may vary]

The following script automatically configures the redis service. Due to an engineer's mistake, some code was deleted from the script, but the comments remain. Fill in the code according to the comments, and submit the filled-in code, in order, as text in the answer box.

redis(){
cd
# Modify the redis configuration file: comment out "bind 127.0.0.1"
sed -i (fill in here) /etc/redis.conf
# Modify the redis configuration file: change "protected-mode yes" to "protected-mode no"
sed -i (fill in here) /etc/redis.conf
# Start the redis service
systemctl start redis
# Enable redis at boot
systemctl enable redis
if [ $? -eq 0 ]
then
    sleep 3
    echo -e "\033[36mredis started successfully\033[0m"
else
    echo -e "\033[31mredis failed to start, please check\033[0m"
    exit 1
fi
sleep 2
}

sed -i 's/bind 127.0.0.1/#bind 127.0.0.1/g' /etc/redis.conf
sed -i 's/protected-mode yes/protected-mode no/g' /etc/redis.conf

Shell Script Completion (40 points)
The following script automatically configures the nginx service. Due to an engineer's mistake, some code was deleted from the script, but the comments remain. Fill in the code according to the comments, and submit the filled-in code, in order, as text in the answer box.

nginx(){
cd
# Delete the files under the default project path
rm -rf /usr/share/nginx/html/*
# Copy the provided dist static files into the nginx project directory
cp -rvf /root/dist/* /usr/share/nginx/html
# Modify the nginx configuration file as follows
cat > /etc/nginx/conf.d/default.conf << EOF
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /user {
        (fill in here)
    }
    location /shopping {
        proxy_pass http://127.0.0.1:8081;
    }
    location /cashier {
        proxy_pass http://127.0.0.1:8083;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
EOF
# Start the nginx service
systemctl start nginx
# Enable nginx at boot
(fill in here)
# Check whether nginx started successfully
if [ $? -eq 0 ]
then
    sleep 3
    echo -e "\033[36mnginx started successfully\033[0m"
else
    echo -e "\033[31mnginx failed to start, please check\033[0m"
    exit 1
fi
sleep 2
}

proxy_pass http://127.0.0.1:8082;
systemctl enable nginx

Shell Script Completion (40 points)
The following script automatically configures the mysql service. Due to an engineer's mistake, some code was deleted from the script, but the comments remain. Fill in the code according to the comments, and submit the filled-in code, in order, as text in the answer box. The script is as follows:

mariadb(){
cd
# Append the following to the database's my.cnf file
(fill in here)
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
EOF
# Start the database
systemctl start mariadb
# Set the database root password to 123456
mysqladmin -uroot password 123456
# Create the gpmall database and import the gpmall.sql file
mysql -uroot -p123456 << EOF
(fill in here)
use gpmall
source /root/gpmall.sql
EOF
# Enable the database at boot
systemctl enable mariadb
# Restart the database service
systemctl restart mariadb
# Check whether the database restarted successfully
if [ $? -eq 0 ]
then
    sleep 3
    echo -e "\033[36m==mariadb started successfully=\033[0m"
else
    echo -e "\033[31mmariadb failed to start, please check\033[0m"
    exit 1
fi
sleep 2
}

cat >> /etc/my.cnf << EOF
create database gpmall;

Read-Write-Splitting Database Management (3)

Using the virtual machines and software packages provided, and building on the master-slave databases from the previous task, complete the configuration and installation of a Mycat read-write-splitting database. The schema.xml configuration file needed is as shown (the server.xml file is not given): select user(). After configuring the read-write-splitting database, use the netstat -ntpl command to check which ports are listening, and submit the output of netstat -ntpl as text in the answer box.

[root@mycat ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1114/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1992/master         
tcp        0      0 127.0.0.1:32000         0.0.0.0:*               LISTEN      3988/java           
tcp6       0      0 :::45929                :::*                    LISTEN      3988/java           
tcp6       0      0 :::9066                 :::*                    LISTEN      3988/java           
tcp6       0      0 :::40619                :::*                    LISTEN      3988/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1114/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1992/master         
tcp6       0      0 :::1984                 :::*                    LISTEN      3988/java           
tcp6       0      0 :::8066                 :::*                    LISTEN      3988/java   

Using the xnode1, xnode2, and xnode3 nodes, complete the database read-write-splitting exercise (install the database using the CentOS-7-x86_64-DVD-1511.iso and gpmall-repo repositories). Use Mycat-server-1.6-RELEASE-20161028204710-linux.tar as the database middleware, and complete the configuration and installation of the Mycat read-write-splitting database. The schema.xml configuration file needed is as shown (the server.xml file is not given): select user(). After configuring the read-write-splitting database: 1. use the netstat -ntpl command to check which ports are listening; 2. log in to Mycat and query the databases. Submit the commands and output of both operations as text in the answer box. (30 points)
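
No transcript is given for this variant; a sketch of step 2, logging in to Mycat's service port (8066, shown listening in the netstat output above) and listing the logical databases. The credentials come from server.xml and are assumptions here:

[root@xnode1 ~]# mysql -h127.0.0.1 -P8066 -uroot -p123456
MySQL [(none)]> show databases;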

Cinder Management (3) [procedure verified]

Log in to the "all-in-one" cloud host. Use a command to view the current volume group information, then use the lvcreate command to create a 2 GB LVM logical volume named BlockVolume and query its details. Submit the commands and their output, in order, as text in the answer box.

[root@controller ~]# vgs
  VG             #PV #LV #SN Attr   VSize  VFree
  centos           1   2   0 wz--n- 39.00g 4.00m
  cinder-volumes   1   0   0 wz--n-  5.00g 5.00g
[root@controller ~]# lvcreate -L +2G -n BlockVolume cinder-volumes
  Logical volume "BlockVolume" created.
[root@controller ~]# lvs
  LV          VG             Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos         -wi-ao---- 35.00g                                                    
  swap        centos         -wi-ao----  4.00g                                                    
  BlockVolume cinder-volumes -wi-a-----  2.00g                                                    
[root@controller ~]# lvdisplay /dev/mapper/cinder--volumes-BlockVolume 
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/BlockVolume
  LV Name                BlockVolume
  VG Name                cinder-volumes
  LV UUID                Ifbpgh-QOe5-3EjU-YHcH-avNW-KLYd-D8AD9M
  LV Write Access        read/write
  LV Creation host, time controller, 2020-05-15 13:34:51 -0400
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

[root@xiandian ~]# cinder create --name cinder-volume-demo 2
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2019-09-28T18:59:13.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 5df3295d-3c92-41f5-95af-c371a3e8b47f |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |          cinder-volume-demo          |
|     os-vol-host-attr:host      |           xiandian@lvm#LVM           |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   0ab2dbde4f754b699e22461426cd0774   |
|       replication_status       |               disabled               |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |      2019-09-28T18:59:14.000000      |
|            user_id             |   53a1cf0ad2924532aa4b7b0750dec282   |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
[root@xiandian ~]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |        Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 5df3295d-3c92-41f5-95af-c371a3e8b47f | available | cinder-volume-demo |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
[root@xiandian ~]# cinder type-create lvm
+--------------------------------------+------+-------------+-----------+
|                  ID                  | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| b247520f-84dd-41cb-a706-4437e7320fa8 | lvm  |      -      |    True   |
+--------------------------------------+------+-------------+-----------+
[root@xiandian ~]# cinder type-list
+--------------------------------------+------+-------------+-----------+
|                  ID                  | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| b247520f-84dd-41cb-a706-4437e7320fa8 | lvm  |      -      |    True   |
+--------------------------------------+------+-------------+-----------+
[root@xiandian ~]# cinder create --name type_test_demo --volume-type lvm 1
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2019-09-28T19:15:14.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 12d09316-1c9f-43e1-93bd-24e54cbf7ef6 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |            type_test_demo            |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   0ab2dbde4f754b699e22461426cd0774   |
|       replication_status       |               disabled               |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   53a1cf0ad2924532aa4b7b0750dec282   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+
[root@xiandian ~]# cinder show type_test_demo
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2019-09-28T19:15:14.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 12d09316-1c9f-43e1-93bd-24e54cbf7ef6 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |            type_test_demo            |
|     os-vol-host-attr:host      |           xiandian@lvm#LVM           |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   0ab2dbde4f754b699e22461426cd0774   |
|       replication_status       |               disabled               |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |
|           updated_at           |      2019-09-28T19:15:15.000000      |
|            user_id             |   53a1cf0ad2924532aa4b7b0750dec282   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+

Cinder Service Operations (40 points)
Using the "all-in-one" virtual machine provided, use Cinder commands to create a 2 GB volume named extend-demo and view the volume information, then create a volume type named "lvm" and view the existing volume types with a cinder command. Create a volume named type_test_demo carrying the "lvm" type, and finally use a command to view the volume you created. Submit the commands and their results as text in the answer box.

[root@controller ~]# cinder create --name cinder-volume-demo 2
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2023-12-06T17:45:11.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 149e700a-e2da-4157-8269-b47335855ef4 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |          cinder-volume-demo          |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |        Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 149e700a-e2da-4157-8269-b47335855ef4 | available | cinder-volume-demo |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
[root@controller ~]# cinder type-create lvm
+--------------------------------------+------+-------------+-----------+
|                  ID                  | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| a052a6d5-52f2-45df-b635-8f235f2aa245 | lvm  |      -      |    True   |
+--------------------------------------+------+-------------+-----------+
[root@controller ~]# cinder type-list
+--------------------------------------+------+-------------+-----------+
|                  ID                  | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| a052a6d5-52f2-45df-b635-8f235f2aa245 | lvm  |      -      |    True   |
+--------------------------------------+------+-------------+-----------+
[root@controller ~]# cinder create --name type_test_demo --volume-type lvm 1
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2023-12-06T17:47:14.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 313455cd-d882-4cc4-985e-2bc343a3bad9 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |            type_test_demo            |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+
[root@controller ~]# cinder show type_test_demo
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2023-12-06T17:47:14.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 313455cd-d882-4cc4-985e-2bc343a3bad9 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |            type_test_demo            |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |                error                 |
|           updated_at           |      2023-12-06T17:47:16.000000      |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+

KVM Management (3) [unsolved]

Using the virtual machine and software packages provided, complete the installation of the KVM service and start a KVM virtual machine. Start the virtual machine using the provided cirros image and the qemu-ifup-NAT script. Once it has booted, log in and run the ip addr list command, and submit its output as text in the answer box.

$ ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.89/24 brd 192.168.122.255 scope global eth0
    inet6 fe80::5054:ff:fe12:3456/64 scope link 
       valid_lft forever preferred_lft forever
$ route  -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0

KVM Management

Using the xnode1 node, complete the installation of the KVM service and start a KVM virtual machine (install the KVM components from the CentOS-7-x86_64-DVD-1511.iso image repository). After installation, start the virtual machine using the provided cirros image and the qemu-ifup-NAT script (the command is qemu-kvm -m 1024 -drive file=xxx.img,if=virtio -net nic,model=virtio -net tap,script=xxx -nographic -vnc :1). Once it has booted, submit the login screen contents, from the line #####debug end###### to the end, as text in the answer box. (40 points)

Firewall Management (3) [unsolved]

Add the firewall's g0/0/2 port to the trust zone and the g0/0/1 port to the untrust zone. Configure a trust-to-untrust rule permitting the internal subnet 172.16.105.0/24. Configure a NAT rule matching the internal subnet 172.16.105.0/24 that translates using the address of the g0/0/1 port. (Use full, unabbreviated commands throughout.) Submit all of the above commands and their output as text in the answer box.
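
Marked unsolved in the original; a heavily hedged sketch for a Huawei USG-series firewall. The rule names are assumptions, and the exact syntax differs across USG versions:

<USG> system-view
[USG] firewall zone trust
[USG-zone-trust] add interface GigabitEthernet 0/0/2
[USG-zone-trust] quit
[USG] firewall zone untrust
[USG-zone-untrust] add interface GigabitEthernet 0/0/1
[USG-zone-untrust] quit
[USG] security-policy
[USG-policy-security] rule name trust_to_untrust
[USG-policy-security-rule-trust_to_untrust] source-zone trust
[USG-policy-security-rule-trust_to_untrust] destination-zone untrust
[USG-policy-security-rule-trust_to_untrust] source-address 172.16.105.0 mask 255.255.255.0
[USG-policy-security-rule-trust_to_untrust] action permit
[USG-policy-security-rule-trust_to_untrust] quit
[USG-policy-security] quit
[USG] nat-policy
[USG-policy-nat] rule name trust_nat
[USG-policy-nat-rule-trust_nat] source-zone trust
[USG-policy-nat-rule-trust_nat] destination-zone untrust
[USG-policy-nat-rule-trust_nat] source-address 172.16.105.0 mask 255.255.255.0
[USG-policy-nat-rule-trust_nat] action source-nat easy-ip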

Firewall Management (40 points)
Configure the firewall's g0/0/2 in the trust zone and g0/0/1 in the untrust zone. Set the g0/0/2 address to 10.10.5.1/24 and the g0/0/1 address to 192.168.10.254/24, and configure a default route with next hop 192.168.10.1. Configure a trust-to-untrust policy permitting the internal subnet 172.16.0.0/16, and a trust-to-untrust NAT policy matching the internal subnet 172.16.0.0/16 that uses the g0/0/1 port address. Submit the commands and their output as text in the answer box.

Network Management (3) [unsolved]

On switch S1, create vlan100 and vlan101 with a single command. Configure the vlan100 gateway as 172.16.100.254/24 and the vlan101 gateway as 172.16.101.254/24. Configure port g0/0/1 in trunk mode, permitting vlan100, and port g0/0/2 in access mode, belonging to vlan101. Submit the commands and their output as text in the answer box.
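
Also marked unsolved; a hedged sketch in Huawei VRP syntax, following the interface numbering in the task:

<S1> system-view
[S1] vlan batch 100 101
[S1] interface Vlanif 100
[S1-Vlanif100] ip address 172.16.100.254 255.255.255.0
[S1-Vlanif100] quit
[S1] interface Vlanif 101
[S1-Vlanif101] ip address 172.16.101.254 255.255.255.0
[S1-Vlanif101] quit
[S1] interface GigabitEthernet 0/0/1
[S1-GigabitEthernet0/0/1] port link-type trunk
[S1-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[S1-GigabitEthernet0/0/1] quit
[S1] interface GigabitEthernet 0/0/2
[S1-GigabitEthernet0/0/2] port link-type access
[S1-GigabitEthernet0/0/2] port default vlan 101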

Network Management (40 points)
On switch SW1, set the vlan20 address to 172.17.20.253/24 and configure a VRRP virtual gateway of 172.17.20.254 with vrid 1 and priority 120. Set the vlan17 address to 172.17.17.253/24 and configure a VRRP virtual gateway of 172.17.17.254 with vrid 2. Configure MSTP so that vlan20 maps to instance 1 and vlan17 to instance 2, with SW1 as primary for vlan20 and backup for vlan17. Submit the commands as text in the answer box.

Network Management

Using the eNSP environment on your local PC, configure an S5700 switch in eNSP: create vlan 2, vlan 3, and vlan 1004 with a single command; using a port group, configure ports 1-5 in access mode and add them to vlan2; configure port 10 in trunk mode, permitting vlan3. Create layer-3 vlan 2 with IP address 172.16.2.1/24 and layer-3 vlan1004 with IP address 192.168.4.2/30. Add a default route with next hop 192.168.4.1 by command. Submit the commands as text in the answer box. (Use full commands, completed with the Tab key.) (30 points)

Switch Management (3) [unsolved]

In eNSP, configure an S5700 switch: create vlan 2, vlan 3, and vlan 1004 with a single command; using a port group, configure ports 1-5 in access mode and add them to vlan2; configure port 10 in trunk mode, permitting vlan3. Create layer-3 vlan 2 with IP address 172.16.2.1/24 and layer-3 vlan1004 with IP address 192.168.4.2/30. Add a default route with next hop 192.168.4.1 by command. (Use full commands.) Submit the commands and their output as text in the answer box.

Switch configuration: the switch's g0/0/1 port connects to router R1, belongs to vlan1001, and is configured with address 192.168.1.2/30 to communicate with the router. Configure g0/0/2 to connect to PC1, belonging to vlan101, with PC1's gateway address set to 172.16.101.254/24. Configure a default route with the router's address as next hop. Router configuration: R1's g0/0/1 port is configured with address 12.12.12.1/30 and port-multiplexing PAT. R1's g0/0/2 port is configured with address 192.168.1.1/30 and connects to the switch. On the router, configure a default route for access to the external network and a static route to the PC network. (Use full commands throughout.) Submit the commands and their output as text in the answer box.

Nova Management (2) [procedure verified]

Using the "all-in-one" image provided, check the status of each OpenStack service yourself and troubleshoot any problems. Using the relevant nova commands, create a flavor named exam with ID 1234, 1024 MB of memory, 20 GB of disk, and 2 virtual CPUs, then view exam's details. Submit the output of the nova flavor-show <id> command as text in the answer box.

[root@xiandian~]# nova flavor-create exam 1234 1024 20 2
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
| 1234 | exam | 1024      | 20   | 0         |      | 2     | 1.0         | True      |
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@xiandian~]# nova flavor-show  1234
+----------------------------+-------+
| Property                   | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 20    |
| extra_specs                | {}    |
| id                         | 1234  |
| name                       | exam  |
| os-flavor-access:is_public | True  |
| ram                        | 1024  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 2     |
+----------------------------+-------+

Nova Service Operations (40 points)
Using the "all-in-one" virtual machine provided, use Nova commands to create a security group named test with the description 'test the nova command about the rules'. Also create a flavor named test with ID 6, 2048 MB of memory, 20 GB of disk, and 2 vcpus, then view the details of the test flavor. Submit the commands and their results as text in the answer box.

[root@controller ~]# nova secgroup-create test 'test the nova command about the rules'
+--------------------------------------+------+---------------------------------------+
| Id                                   | Name | Description                           |
+--------------------------------------+------+---------------------------------------+
| a9e13b0e-0b09-4979-96c4-cc774b6b2ae2 | test | test the nova command about the rules |
+--------------------------------------+------+---------------------------------------+
[root@controller ~]# nova flavor-create test 6 2048 20 2
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | test | 2048      | 20   | 0         |      | 2     | 1.0         | True      |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# nova flavor-show test
+----------------------------+-------+
| Property                   | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 20    |
| extra_specs                | {}    |
| id                         | 6     |
| name                       | test  |
| os-flavor-access:is_public | True  |
| ram                        | 2048  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 2     |
+----------------------------+-------+

Neutron Service Operations (2) [net-list shows no output]

Using the "all-in-one" virtual machine provided, use Neutron commands to query the "binary" column of the network service list, query the details of the sharednet1 network, and then query the details of the DHCP agent network service. Submit the commands and their results as text in the answer box.

[root@xiandian ~]# neutron agent-list -c binary
+---------------------------+
| binary                    |
+---------------------------+
| neutron-l3-agent          |
| neutron-openvswitch-agent |
| neutron-dhcp-agent        |
| neutron-metadata-agent    |
+---------------------------+
[root@xiandian ~]# neutron net-list
+--------------------------------------+------------+---------+
| id                                   | name       | subnets |
+--------------------------------------+------------+---------+
| bd923693-d9b1-4094-bd5b-22a038c44827 | sharednet1 |         |
+--------------------------------------+------------+---------+
[root@xiandian ~]# neutron net-show bd923693-d9b1-4094-bd5b-22a038c44827
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2023-12-07T11:02:17                  |
| description               |                                      |
| id                        | bd923693-d9b1-4094-bd5b-22a038c44827 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | sharednet1                           |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 20b1ab08ea644670addb52f6d2f2ed61     |
| updated_at                | 2023-12-07T11:07:21                  |
+---------------------------+--------------------------------------+
[root@xiandian ~]# neutron agent-list
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| 7dd3ea38-c6fc-4a73-a530-8b007afeb778 | L3 agent           | xiandian | nova              | :-)   | True           | neutron-l3-agent          |
| 8c0781e7-8b3e-4c9f-a8da-0d4cdc570afb | Open vSwitch agent | xiandian |                   | :-)   | True           | neutron-openvswitch-agent |
| a3504292-e108-4ad1-ae86-42ca9ccfde78 | DHCP agent         | xiandian | nova              | :-)   | True           | neutron-dhcp-agent        |
| be17aa73-deba-411a-ac10-fd523079085d | Metadata agent     | xiandian |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
[root@xiandian ~]# neutron agent-show a3504292-e108-4ad1-ae86-42ca9ccfde78
+---------------------+----------------------------------------------------------+
| Field               | Value                                                    |
+---------------------+----------------------------------------------------------+
| admin_state_up      | True                                                     |
| agent_type          | DHCP agent                                               |
| alive               | True                                                     |
| availability_zone   | nova                                                     |
| binary              | neutron-dhcp-agent                                       |
| configurations      | {                                                        |
|                     |      "subnets": 1,                                       |
|                     |      "dhcp_lease_duration": 86400,                       |
|                     |      "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",  |
|                     |      "networks": 1,                                      |
|                     |      "log_agent_heartbeats": false,                      |
|                     |      "ports": 2                                          |
|                     | }                                                        |
| created_at          | 2023-12-07 11:11:05                                      |
| description         |                                                          |
| heartbeat_timestamp | 2023-12-07 11:14:25                                      |
| host                | xiandian                                                 |
| id                  | a3504292-e108-4ad1-ae86-42ca9ccfde78                     |
| started_at          | 2023-12-07 11:17:05                                      |
| topic               | dhcp_agent                                               |
+---------------------+----------------------------------------------------------+

Docker Compose Installation (2)

Using the virtual machine and software packages provided, install the docker compose service. After installation, use a command to check the docker compose version, and submit the command and its output as text in the answer box.

[root@zookeeper1 ~]# docker-compose --version
docker-compose version 1.26.0-rc4, build d279b7a8

Docker Compose

Using VMWare, start the provided k8sallinone image and complete the docker compose case study. All files used by the compose case are in the /root/compose directory, and the required images are in the /root/images directory. Understand the principles and workflow of docker compose, create a composetest directory under /root as the working directory, understand the provided configuration and deployment files, and run the docker compose case. Once it is running, use the curl command to access http://IP:5000. Submit the result of running docker-compose up and the curl result as text in the answer box. (30 points)

Kubernetes Platform Setup (2)

Using the virtual machines and software packages provided, set up a Kubernetes platform with two nodes, master and node. After joining the node to the cluster, log in to the master node and query the node status with kubectl get nodes. Submit the query result as text in the answer box.

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   159m    v1.14.1
node     Ready    <none>   2m26s   v1.14.1

K8S Platform Installation

Start the provided k8sallinone image with VMWare, confirm the IP address, and run the install.sh script in the /root directory to deploy the K8S platform in one step. After the installation finishes, use kubectl to check the status of the nodes, cs (componentstatuses), and pods. Submit the commands and their output as text in the answer box; a command sketch follows. (30 points)
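
No transcript is recorded here. A minimal command sketch, assuming install.sh completes without intervention (the prompt hostname is illustrative; actual node names and pod lists depend on the image):

[root@k8sallinone ~]# ./install.sh
[root@k8sallinone ~]# kubectl get nodes
[root@k8sallinone ~]# kubectl get cs
[root@k8sallinone ~]# kubectl get pods -n kube-system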

K8S Platform Usage

Using the K8S platform installed in the previous task, practice platform operation and maintenance with the following exercise. Exercise: run an nginx application with the kubectl command, using the nginx_latest.tar image, pulling only from the local image store (no external pulls), with 4 replicas. After it starts, check the pod status with a command, expose port 80, and finally view the nginx home page with curl. Submit the commands and their output as text in the answer box; a command sketch follows. (40 points)
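
A hedged sketch of one way to do this on the kubectl v1.14 shown earlier, where kubectl run still creates a Deployment and accepts --replicas (both behaviors were removed in later versions); the NodePort exposure and the prompt hostname are assumptions:

[root@master ~]# docker load -i /root/images/nginx_latest.tar
[root@master ~]# kubectl run nginx --image=nginx:latest --image-pull-policy=IfNotPresent --replicas=4
[root@master ~]# kubectl get pods
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@master ~]# curl http://127.0.0.1:$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')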

Ansible Script

Use the xnode1 node, which already has the Ansible service installed, as the Ansible control machine. First configure Ansible's hosts file, adding xnode2 and xnode3, and ping the hosts in it with an Ansible command. Then use an Ansible command to copy the local /etc/hosts file to the /root directory of xnode2 and xnode3. Submit the two Ansible commands and their output as text in the answer box; a sketch follows. (40 points)
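
A minimal sketch, assuming password-free SSH from xnode1 to the other two nodes is already in place and that the names xnode2 and xnode3 resolve to the right IPs:

[root@xnode1 ~]# cat >> /etc/ansible/hosts <<EOF
xnode2
xnode3
EOF
[root@xnode1 ~]# ansible all -m ping
[root@xnode1 ~]# ansible all -m copy -a "src=/etc/hosts dest=/root/"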

Swift Management (40 points) [verified working]

Log in to the "all-in-one" cloud host. Use the relevant swift command to query the maximum size of a single file that the swift object storage service can store. Submit the command and its output as text in the answer box.

[root@xiandian ~]# swift capabilities |grep max_file_size
max_file_size: 5368709122

Ceilometer Management (40 points) [verified working]

Log in to the "all-in-one" cloud host. Use the relevant ceilometer command to query the list of meters. Submit the command and its output as text in the answer box.

[root@controller ~]# ceilometer meter-list
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| Name                            | Type       | Unit      | Resource ID                                                           | User ID                          | Project ID                       |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| compute.instance.booting.time   | gauge      | sec       | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| cpu                             | cumulative | ns        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| cpu.delta                       | delta      | ns        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| cpu_util                        | gauge      | %         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.allocation                 | gauge      | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.capacity                   | gauge      | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.allocation          | gauge      | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.capacity            | gauge      | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.read.bytes          | cumulative | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.read.bytes.rate     | gauge      | B/s       | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.read.requests       | cumulative | request   | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.read.requests.rate  | gauge      | request/s | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.usage               | gauge      | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.write.bytes         | cumulative | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.write.bytes.rate    | gauge      | B/s       | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.write.requests      | cumulative | request   | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.device.write.requests.rate | gauge      | request/s | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-vda                              | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.ephemeral.size             | gauge      | GB        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.read.bytes                 | cumulative | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.read.bytes.rate            | gauge      | B/s       | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.read.requests              | cumulative | request   | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.read.requests.rate         | gauge      | request/s | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.root.size                  | gauge      | GB        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.usage                      | gauge      | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.write.bytes                | cumulative | B         | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.write.bytes.rate           | gauge      | B/s       | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.write.requests             | cumulative | request   | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| disk.write.requests.rate        | gauge      | request/s | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| image                           | gauge      | image     | 90707f15-f87c-44a4-b916-5a06306a5c88                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| image                           | gauge      | image     | 935a7ac5-a6dd-4cb4-94b3-941d1b2ddd23                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| image.download                  | delta      | B         | 90707f15-f87c-44a4-b916-5a06306a5c88                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| image.serve                     | delta      | B         | 90707f15-f87c-44a4-b916-5a06306a5c88                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| image.size                      | gauge      | B         | 90707f15-f87c-44a4-b916-5a06306a5c88                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| image.size                      | gauge      | B         | 935a7ac5-a6dd-4cb4-94b3-941d1b2ddd23                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| instance                        | gauge      | instance  | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| ip.floating                     | gauge      | ip        | 00a17375-f37c-4cf4-a183-e8bbc39d8075                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| ip.floating                     | gauge      | ip        | 9d2da4df-e2a9-4cc2-837c-eb0196b65fe1                                  | None                             | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| memory                          | gauge      | MB        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| memory.resident                 | gauge      | MB        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| memory.usage                    | gauge      | MB        | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.incoming.bytes          | cumulative | B         | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.incoming.bytes.rate     | gauge      | B/s       | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.incoming.packets        | cumulative | packet    | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.incoming.packets.rate   | gauge      | packet/s  | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.outgoing.bytes          | cumulative | B         | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.outgoing.bytes.rate     | gauge      | B/s       | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.outgoing.packets        | cumulative | packet    | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| network.outgoing.packets.rate   | gauge      | packet/s  | instance-00000001-e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c-tap92e5f34b-5a | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| storage.objects                 | gauge      | object    | c88f5a1b7619420dadb4309743e53f1a                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects                 | gauge      | object    | e14b3dabf5594684913f3868669f35af                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects                 | gauge      | object    | f9ff39ba9daa4e5a8fee1fc50e2d2b34                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.containers      | gauge      | container | c88f5a1b7619420dadb4309743e53f1a                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.containers      | gauge      | container | e14b3dabf5594684913f3868669f35af                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.containers      | gauge      | container | f9ff39ba9daa4e5a8fee1fc50e2d2b34                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.outgoing.bytes  | delta      | B         | c88f5a1b7619420dadb4309743e53f1a                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.outgoing.bytes  | delta      | B         | e14b3dabf5594684913f3868669f35af                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.outgoing.bytes  | delta      | B         | f9ff39ba9daa4e5a8fee1fc50e2d2b34                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.size            | gauge      | B         | c88f5a1b7619420dadb4309743e53f1a                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.size            | gauge      | B         | e14b3dabf5594684913f3868669f35af                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| storage.objects.size            | gauge      | B         | f9ff39ba9daa4e5a8fee1fc50e2d2b34                                      | 0f980d5fefa6448a9c52f5c0ae5813a5 | c88f5a1b7619420dadb4309743e53f1a |
| vcpus                           | gauge      | vcpu      | e1e52ae3-1f3e-4d62-bc21-d8b19055ab5c                                  | 0befa70f767848e39df8224107b71858 | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+

Dockerfile Authoring (40 points)

Write a custom Dockerfile that does the following: ① builds an image based on Python 3.5; ② adds the current directory to the /code path inside the image; ③ sets the working directory to /code; ④ sets the container's default command to python app.py. Submit the Dockerfile as text in the answer box; a reconstruction follows.
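
The Dockerfile itself is not reproduced in the original answer, but it can be read back from the four build steps in the output below; this reconstruction adds nothing beyond what those steps show (the 3.5-alpine tag comes from Step 1/4):

FROM python:3.5-alpine
ADD . /code
WORKDIR /code
CMD ["python", "app.py"]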

[root@localhost jr]# docker build -t test .          
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM python:3.5-alpine
3.5-alpine: Pulling from library/python
cbdbe7a5bc2a: Pull complete 
26ebcd19a4e3: Pull complete 
5040824fb16c: Pull complete 
ee2aa99e2e5f: Pull complete 
09d2ba251239: Pull complete 
Digest: sha256:add585692cb3ed53427cd423b6e87c976c3a760e00165026f038a5822c1e22fd
Status: Downloaded newer image for python:3.5-alpine
 ---> 5f618d8888ec
Step 2/4 : ADD . /code
 ---> 60a93c53612e
Step 3/4 : WORKDIR	/code
 ---> Running in 67ee14645e5c
Removing intermediate container 67ee14645e5c
 ---> 6fc0e4a0d1d1
Step 4/4 : CMD ["python", "app.py"]
 ---> Running in 797dfc62af22
Removing intermediate container 797dfc62af22
 ---> 3a92c182aca3
Successfully built 3a92c182aca3
Successfully tagged test:latest

Docker Usage (30 points)

Start the provided k8sallinone image with VMWare; the docker service is already installed on it. Create the /opt/xiandian directory, then start a container named xiandian-dir from the nginx_latest.tar image (located in /root/images), specifying /opt/xiandian as the container's data volume and mapping nginx's port 80 to external port 81. After creation, check the data volume with the inspect command. Submit the commands and their output as text in the answer box; a sketch follows. (30 points)
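
No transcript is recorded for this task. A minimal sketch; the in-container mount point /usr/share/nginx/html (nginx's default web root) and the prompt hostname are assumptions:

[root@k8sallinone ~]# mkdir /opt/xiandian
[root@k8sallinone ~]# docker load -i /root/images/nginx_latest.tar
[root@k8sallinone ~]# docker run -d --name xiandian-dir -v /opt/xiandian:/usr/share/nginx/html -p 81:80 nginx:latest
[root@k8sallinone ~]# docker inspect -f '{{json .Mounts}}' xiandian-dir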

Docker Harbor Installation (40 points)

Using the provided virtual machine and software packages, deploy the Docker Harbor image registry service. After installation, submit the [Step 4] portion of the output of ./install.sh --with-notary --with-clair as text in the answer box.

[Step 4]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating network "harbor_harbor-clair" with the default driver
Creating network "harbor_harbor-notary" with the default driver
Creating network "harbor_notary-mdb" with the default driver
Creating network "harbor_notary-sig" with the default driver
Creating harbor-log ... done
Creating redis              ... done
Creating clair-db           ... done
Creating notary-db          ... done
Creating harbor-db          ... done
Creating registry           ... done
Creating harbor-adminserver ... done
Creating notary-signer      ... done
Creating clair              ... done
Creating harbor-ui          ... done
Creating notary-server      ... done
Creating nginx              ... done
Creating harbor-jobservice  ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at https://192.168.1.111. 
For more details, please visit https://github.com/vmware/harbor .

Database Operations (40 points) [not attempted]

Using the database installed in the previous task, perform a backup: use the mysqldump command to export the gpmall database into a backup named gpmall_bak.sql stored in the /opt directory (use an absolute path). Submit all commands and their output as text in the answer box; a sketch follows.
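
No transcript is recorded. A minimal sketch, assuming the root password used elsewhere in this lab environment (123456 here is an assumption; substitute the real one):

[root@mall ~]# mysqldump -uroot -p123456 gpmall > /opt/gpmall_bak.sql
[root@mall ~]# ls -l /opt/gpmall_bak.sql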

Zabbix Server Node Setup (40 points)

Using the provided virtual machine and software packages, set up the server side of the Zabbix monitoring system. After setup, start the service and check the listening ports with netstat -ntpl. Submit the output of netstat -ntpl as text in the answer box.

[root@zabbix-server ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State   PID/Program name
tcp        0      0 0.0.0.0:10051      0.0.0.0:*          LISTEN  10611/zabbix_server
tcp        0      0 0.0.0.0:3306       0.0.0.0:*          LISTEN  10510/mysqld
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN  975/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN  886/master
tcp6       0      0 :::10051           :::*               LISTEN  10611/zabbix_server
tcp6       0      0 :::80              :::*               LISTEN  10579/httpd
tcp6       0      0 :::21              :::*               LISTEN  10015/vsftpd
tcp6       0      0 :::22              :::*               LISTEN  975/sshd
tcp6       0      0 ::1:25             :::*               LISTEN  886/master

Router Management (40 points) [not attempted]

Configure routers R1 and R2 (model R2220). On R1, configure port g0/0/1 with address 192.168.1.1/30, connected to R2, and port g0/0/2 with address 192.168.2.1/24 as the gateway for internal host PC1. On R2, configure port g0/0/1 with address 192.168.1.2/30, connected to R1, and port g0/0/2 with address 192.168.3.1/24 as the gateway for internal host PC2. Enable the OSPF dynamic routing protocol on R1 and R2 so that routes are learned automatically and PC1 and PC2 can reach each other. (Use full-form commands throughout.) Submit all commands and their output as text in the answer box; a configuration sketch follows.
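
No transcript is recorded. A hedged sketch in Huawei VRP syntax, assuming the R2220 follows standard VRP command forms (addresses and wildcard masks are taken from the task text):

<Huawei> system-view
[Huawei] sysname R1
[R1] interface GigabitEthernet 0/0/1
[R1-GigabitEthernet0/0/1] ip address 192.168.1.1 30
[R1-GigabitEthernet0/0/1] quit
[R1] interface GigabitEthernet 0/0/2
[R1-GigabitEthernet0/0/2] ip address 192.168.2.1 24
[R1-GigabitEthernet0/0/2] quit
[R1] ospf 1
[R1-ospf-1] area 0.0.0.0
[R1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.3
[R1-ospf-1-area-0.0.0.0] network 192.168.2.0 0.0.0.255

R2 mirrors this: g0/0/1 gets 192.168.1.2 30, g0/0/2 gets 192.168.3.1 24, and its OSPF area advertises 192.168.1.0 0.0.0.3 and 192.168.3.0 0.0.0.255. Once the OSPF adjacency forms, PC1 (gateway 192.168.2.1) and PC2 (gateway 192.168.3.1) should be able to ping each other.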

Wireless AC Management (40 points) [not attempted]

配置无线AC控制器(型号使用AC6005),开启dhcp功能,设置vlan20网关地址为172.16.20.1/24,并配置vlan20接口服务器池,设置dhcp分发dns为114.114.114.114、223.5.5.5。将上述所有操作命令及返回结果以文本形式提交到答题框。

Wireless AC Network Management (40 points) [not attempted]

Configure the wireless access controller: create a security policy named Internet with WPA-WPA2 authentication and password a1234567; set the wireless SSID to Internet; create a VAP template named Internet bound to service vlan 101, the security policy, and the SSID template; create an AP group named ap-group1 and bind the VAP template to radios 0 and 1. Submit all commands and their output as text in the answer box; a configuration sketch follows.
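
No transcript is recorded. A hedged sketch of the WLAN profile commands as they exist on recent AC6005 firmware (exact prompts and sub-options vary by version; treat this as an outline, not verified output):

[AC6005] wlan
[AC6005-wlan-view] security-profile name Internet
[AC6005-wlan-sec-prof-Internet] security wpa-wpa2 psk pass-phrase a1234567 aes
[AC6005-wlan-sec-prof-Internet] quit
[AC6005-wlan-view] ssid-profile name Internet
[AC6005-wlan-ssid-prof-Internet] ssid Internet
[AC6005-wlan-ssid-prof-Internet] quit
[AC6005-wlan-view] vap-profile name Internet
[AC6005-wlan-vap-prof-Internet] service-vlan vlan-id 101
[AC6005-wlan-vap-prof-Internet] security-profile Internet
[AC6005-wlan-vap-prof-Internet] ssid-profile Internet
[AC6005-wlan-vap-prof-Internet] quit
[AC6005-wlan-view] ap-group name ap-group1
[AC6005-wlan-ap-group-ap-group1] vap-profile Internet wlan 1 radio 0
[AC6005-wlan-ap-group-ap-group1] vap-profile Internet wlan 1 radio 1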

Python Script Completion (40 points) [answer works, results may vary]

Below is a Python script from which an engineer accidentally deleted some lines of code; the comments remain. Fill in the missing code according to the comments, then submit the filled-in lines in order as text in the answer box. The Python script is as follows:

import requests

def get_html(url, encoding='utf-8'):
    # define headers
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }
    # call the get method of the requests package to request the URL and get a response (to be filled in)
    # set the response encoding to utf-8 (to be filled in)
    # return the text of the response
    return response.text

response = requests.get(url, headers=headers)
response.encoding = encoding