[Practical Questions]
41. Network Management
Using the eNSP environment on your local PC, configure an S5700 switch in eNSP. Create VLAN 2, VLAN 3, and VLAN 1004 with a single command. Using a port group, configure ports 1-5 in access mode and add them to VLAN 2. Configure port 10 in trunk mode and permit VLAN 3 on it. Create Layer 3 VLANIF 2 with IP address 172.16.2.1/24 and Layer 3 VLANIF 1004 with IP address 192.168.4.2/30. Add a default route with next hop 192.168.4.1 using a command. Submit the commands used as text in the answer box. (Use complete commands; use the Tab key for completion.) (30 points)
Reference answer:
[Huawei]vlan batch 2 3 1004
[Huawei]port-group 1
[Huawei-port-group-1]group-member GigabitEthernet 0/0/1 to GigabitEthernet 0/0/5
[Huawei-port-group-1]port link-type access
[Huawei-port-group-1]port default vlan 2
[Huawei]interface GigabitEthernet 0/0/10
[Huawei-GigabitEthernet0/0/10]port link-type trunk
[Huawei-GigabitEthernet0/0/10]port trunk allow-pass vlan 3
[Huawei]interface Vlanif 2
[Huawei-Vlanif2]ip address 172.16.2.1 24
[Huawei]interface Vlanif 1004
[Huawei-Vlanif1004]ip address 192.168.4.2 30
[Huawei]ip route-static 0.0.0.0 0 192.168.4.1
42. Firewall Management
Using the eNSP environment on your local PC with a USG5500 firewall, add interface g0/0/2 to the trust zone and interface g0/0/1 to the untrust zone. Configure a trust-to-untrust policy that permits the internal network 172.16.105.0/24. Configure a NAT rule that matches the internal network 172.16.105.0/24 and translates it using the address of interface g0/0/1. (Use complete commands; use the Tab key for completion.) (30 points)
Reference answer:
[SRG]firewall zone trust
[SRG-zone-trust]add interface GigabitEthernet 0/0/2
[SRG-zone-trust]quit
[SRG]firewall zone untrust
[SRG-zone-untrust]add interface GigabitEthernet 0/0/1
[SRG-zone-untrust]quit
[SRG]policy interzone trust untrust outbound
[SRG-policy-interzone-trust-untrust-outbound]policy 0
[SRG-policy-interzone-trust-untrust-outbound-0]action permit
[SRG-policy-interzone-trust-untrust-outbound-0]policy source 172.16.105.0 0.255.255.255
[SRG-policy-interzone-trust-untrust-outbound-0]quit
[SRG-policy-interzone-trust-untrust-outbound]quit
[SRG]nat-policy interzone trust untrust outbound
[SRG-nat-policy-interzone-trust-untrust-outbound]policy 1
[SRG-nat-policy-interzone-trust-untrust-outbound-1]action source-nat
[SRG-nat-policy-interzone-trust-untrust-outbound-1]policy source 172.16.105.0 0.255.255.255
[SRG-nat-policy-interzone-trust-untrust-outbound-1]easy-ip GigabitEthernet 0/0/1
43. KVM Management
On node xnode1, install the KVM service and start a KVM virtual machine (install the KVM components using the CentOS-7-x86_64-DVD-1511.iso image as the package source). After installation, start the virtual machine using the provided cirros image and the qemu-ifup-NAT script (the command is qemu-kvm -m 1024 -drive file=xxx.img,if=virtio -net nic,model=virtio -net tap,script=xxx -nographic -vnc :1). Once it has booted, submit the login screen contents, from the #####debug end###### line to the end, as text in the answer box. (40 points)
Reference answer:
############ debug end ##############
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/  \____/___/
   http://cirros-cloud.net
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login:
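The answer above records only the boot banner; the installation and launch steps are not given. A minimal sketch of those steps (assuming a yum repository built from the CentOS ISO is already configured, and that the cirros image and the qemu-ifup-NAT script sit in /root — the file paths here are illustrative):

```shell
# Install the KVM components from the configured yum source
yum install -y qemu-kvm libvirt

# qemu-kvm installs under /usr/libexec; put it on the PATH
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm

# Load the KVM module and confirm hardware virtualization support
modprobe kvm
lsmod | grep kvm

# Boot the cirros VM with the provided NAT helper script
qemu-kvm -m 1024 -drive file=/root/cirros-0.3.4-x86_64-disk.img,if=virtio \
  -net nic,model=virtio -net tap,script=/root/qemu-ifup-NAT \
  -nographic -vnc :1
```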
44. RAID Storage Management
On node xnode1, use VMware to add a 20 GB disk to the node. Partition the disk with the fdisk command, creating two 5 GB partitions. Using these two partitions, create a RAID level 1 array named /dev/md0. After creating it, format it with the xfs filesystem and mount it on the /mnt directory. Submit the output of the mdadm -D /dev/md0 and df -h commands as text in the answer box. (40 points)
Reference answer:
[root@xiandian ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Oct 23 17:08:07 2019
     Raid Level : raid1
     Array Size : 5238784 (5.00 GiB 5.36 GB)
  Used Dev Size : 5238784 (5.00 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Oct 23 17:13:37 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : xiandian:0  (local to host xiandian)
           UUID : 71123d35:b354bc98:2e36589d:f0ed3491
         Events : 17

    Number   Major   Minor   RaidDevice State
       0      253       17       0      active sync   /dev/vdb1
       1      253       18       1      active sync   /dev/vdb2
[root@xiandian ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        41G  2.4G   39G   6% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G  4.0K  3.9G   1% /dev/shm
tmpfs           3.9G   17M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0      2.8G   33M  2.8G   2% /swift/node
tmpfs           799M     0  799M   0% /run/user/0
/dev/md0        5.0G   33M  5.0G   1% /mnt
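The answer shows only the verification output; the creation steps are not given. A sketch of them (assuming the new disk appears as /dev/vdb, which the active-device names in the output above suggest):

```shell
# Partition the new disk interactively: n, p, <Enter>, +5G for each
# of the two partitions, then w to write the table
fdisk /dev/vdb

# Assemble the two partitions into a RAID 1 array
mdadm -C /dev/md0 -l 1 -n 2 /dev/vdb1 /dev/vdb2

# Format with xfs and mount on /mnt
mkfs.xfs /dev/md0
mount /dev/md0 /mnt

# Verify, as the question requires
mdadm -D /dev/md0
df -h
```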
45. Master-Slave Database Management
Using the xnode2 and xnode3 virtual machines, install the mariadb database service on both and configure them as a master-slave pair (xnode2 as the master node, xnode3 as the slave node; install the database using the CentOS-7-x86_64-DVD-1511.iso and gpmall-repo sources) so that the two databases replicate from master to slave. After configuration, run the "show slave status \G" command in the database on xnode3 to query the slave's replication state, and submit the result as text in the answer box. (30 points)
Reference answer:
MariaDB [(none)]> start slave;
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql1
                  Master_User: user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 245
               Relay_Log_File: mariadb-relay-bin.000005
                Relay_Log_Pos: 529
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 245
              Relay_Log_Space: 1256
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 30
1 row in set (0.00 sec)
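The configuration steps that produce this state are not shown. A sketch of them (the root password 123456, the replication account user/123456, and the mysql1 hostname are illustrative assumptions that match the Master_Host, Master_User, and Master_Server_Id fields above):

```shell
# On the master (xnode2): enable binary logging and set a server id,
# then restart mariadb
cat >> /etc/my.cnf <<'EOF'
[mysqld]
log_bin = mysql-bin
server_id = 30
EOF
systemctl restart mariadb

# On the master: grant a replication account the slave can use
mysql -uroot -p123456 \
  -e "grant replication slave on *.* to 'user'@'%' identified by '123456';"

# On the slave (xnode3): set a different server_id in /etc/my.cnf,
# restart mariadb, then point the slave at the master and start it
mysql -uroot -p123456 \
  -e "change master to master_host='mysql1', master_user='user', master_password='123456'; start slave;"
```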
46. Read/Write-Splitting Database Management
Using the three nodes xnode1, xnode2, and xnode3, complete the database read/write-splitting experiment (install the database using the CentOS-7-x86_64-DVD-1511.iso and gpmall-repo sources). Use Mycat-server-1.6-RELEASE-20161028204710-linux.tar as the database middleware, and complete the installation and configuration of Mycat read/write splitting. The schema.xml configuration file to be used is shown below (the server.xml file is not given): select user(). After configuring the read/write-splitting database: 1. use the netstat -ntpl command to check which ports are listening; 2. log in to mycat and query the databases. Submit the commands and results of both operations as text in the answer box. (30 points)
Reference answer:
[root@mycat ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1400/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   2353/master
tcp        0      0 127.0.0.1:32000    0.0.0.0:*          LISTEN   10427/java
tcp6       0      0 :::9066            :::*               LISTEN   10427/java
tcp6       0      0 :::35056           :::*               LISTEN   10427/java
tcp6       0      0 :::22              :::*               LISTEN   1400/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   2353/master
tcp6       0      0 :::37982           :::*               LISTEN   10427/java
tcp6       0      0 :::1984            :::*               LISTEN   10427/java
tcp6       0      0 :::8066            :::*               LISTEN   10427/java
[root@mycat ~]# mysql -h127.0.0.1 -P8066 -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (OpenCloundDB)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+----------+
| DATABASE |
+----------+
| USERDB   |
+----------+
1 row in set (0.001 sec)
47. Zookeeper Cluster
Continuing with the three virtual machines from the previous question, use the provided packages to install and configure a Zookeeper cluster. After configuration, run the ./zkServer.sh status command in the appropriate directory to check the state of the three Zookeeper nodes, and submit the three nodes' states as text in the answer box. (30 points)
Reference answer:
[root@zookeeper1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@zookeeper2 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
[root@zookeeper3 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
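The ensemble configuration behind these three statuses is not shown. A sketch of it (zookeeper-3.4.14 matches the path above; the IP addresses are placeholders for the three nodes):

```shell
# On every node: unpack Zookeeper and create the base config
tar -zxf zookeeper-3.4.14.tar.gz -C /root
cp /root/zookeeper-3.4.14/conf/zoo_sample.cfg /root/zookeeper-3.4.14/conf/zoo.cfg

# Append the three ensemble members (replace the IPs with your own)
cat >> /root/zookeeper-3.4.14/conf/zoo.cfg <<'EOF'
server.1=192.168.200.11:2888:3888
server.2=192.168.200.12:2888:3888
server.3=192.168.200.13:2888:3888
EOF

# Give each node its id in the dataDir (zoo_sample.cfg defaults to
# /tmp/zookeeper); use 2 and 3 on the other two nodes, then start
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid
/root/zookeeper-3.4.14/bin/zkServer.sh start
```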
48. Kafka Cluster
Continuing with the three virtual machines from the previous question, use the provided packages to install and configure a Kafka cluster. After configuration, check whether the Kafka port is listening on the three nodes, and in the appropriate directory run ./kafka-topics.sh --create --zookeeper <your IP>:2181 --replication-factor 1 --partitions 1 --topic test to create a topic. Submit the Kafka port check and the output of the topic-creation command as text in the answer box. (30 points)
Reference answer:
# netstat -ntpl |grep 9092
tcp6 0 0 192.168.200.11:9092 :::* LISTEN 4975/java
[root@zookeeper1 bin]# ./kafka-topics.sh --create --zookeeper 172.16.51.23:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
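The broker configuration is not shown. A sketch of the per-node steps (the kafka_2.11-1.1.1 package name is an assumption; the 192.168.200.x addresses match the netstat output above):

```shell
# On every node: unpack the provided Kafka package
tar -zxf kafka_2.11-1.1.1.tgz -C /root

# Each broker needs a unique broker.id, its own listener address,
# and the shared Zookeeper connection string (adjust per node)
sed -i 's/^broker.id=.*/broker.id=1/' /root/kafka_2.11-1.1.1/config/server.properties
cat >> /root/kafka_2.11-1.1.1/config/server.properties <<'EOF'
listeners=PLAINTEXT://192.168.200.11:9092
zookeeper.connect=192.168.200.11:2181,192.168.200.12:2181,192.168.200.13:2181
EOF

# Start the broker in the background, then confirm port 9092 is listening
/root/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon \
  /root/kafka_2.11-1.1.1/config/server.properties
netstat -ntpl | grep 9092
```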
49. Mall Application System
Continuing with the three virtual machines from the previous question, use the provided packages to deploy the clustered application system. After deployment, log in (for the order, fill in your school's address as the shipping address and your real contact details as the recipient). Finally, use the curl command to fetch the mall home page, and submit the result of curl http://<your mall IP>/#/home as text in the answer box. (40 points)
Reference answer:
[root@server ~]# curl http://172.30.11.27/#/home
<!DOCTYPE html><html><head><meta charset=utf-8><title>1+x-示例项目</title><meta name=keywords content=""><meta name=description content=""><meta http-equiv=X-UA-Compatible content="IE=Edge"><meta name=wap-font-scale content=no><link rel="shortcut icon " type=images/x-icon href=/static/images/favicon.ico><link href=/static/css/app.8d4edd335a61c46bf5b6a63444cd855a.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.2d17a82764acff8145be.js></script><script type=text/javascript src=/static/js/vendor.4f07d3a235c8a7cd4efe.js></script><script type=text/javascript src=/static/js/app.81180cbb92541cdf912f.js></script></body></html><style>body{min-width:1250px;
50. Glance Service Operations
Use VMWare to start the provided opensatckallinone image, check the status of the openstack services yourself, and troubleshoot any problems you find. Using Glance commands, create an image named "cirros" from the "cirros-0.3.4-x86_64-disk.img" image file provided in the xnode1 image. Then view the detailed information of the "cirros" image with a glance command. Submit all of the above commands and their results as text in the answer box. (20 points)
Reference answer:
[root@xiandian images]# glance image-create --name "cirros" --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2019-09-28T04:57:59Z                 |
| disk_format      | qcow2                                |
| id               | db715025-a795-4519-9947-c5acbe2d5788 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 0ab2dbde4f754b699e22461426cd0774     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2019-09-28T04:58:00Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
# glance image-show db715025-a795-4519-9947-c5acbe2d5788
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2019-09-28T04:57:59Z                 |
| disk_format      | qcow2                                |
| id               | db715025-a795-4519-9947-c5acbe2d5788 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 0ab2dbde4f754b699e22461426cd0774     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2019-09-28T04:58:00Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
51. Neutron Service Operations
Use VMWare to start the provided opensatckallinone image, check the status of the openstack services yourself, and troubleshoot any problems. Using Neutron commands, query the "binary" column of the network agent list, and query the detailed information of the network sharednet1. Then query the detailed information of the network service's DHCP agent. Submit the above commands and their results as text in the answer box. (40 points)
Reference answer:
[root@xiandian ~]# neutron agent-list -c binary
+---------------------------+
| binary                    |
+---------------------------+
| neutron-l3-agent          |
| neutron-openvswitch-agent |
| neutron-dhcp-agent        |
| neutron-metadata-agent    |
+---------------------------+
[root@xiandian ~]# neutron net-list
+--------------------------------------+------------+---------+
| id                                   | name       | subnets |
+--------------------------------------+------------+---------+
| bd923693-d9b1-4094-bd5b-22a038c44827 | sharednet1 |         |
+--------------------------------------+------------+---------+
# neutron net-show bd923693-d9b1-4094-bd5b-22a038c44827
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-02-23T04:58:17                  |
| description               |                                      |
| id                        | bd923693-d9b1-4094-bd5b-22a038c44827 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | sharednet1                           |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 20b1ab08ea644670addb52f6d2f2ed61     |
| updated_at                | 2017-02-23T04:58:17                  |
+---------------------------+--------------------------------------+
[root@xiandian ~]# neutron agent-list
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| 7dd3ea38-c6fc-4a73-a530-8b007afeb778 | L3 agent           | xiandian | nova              | :-)   | True           | neutron-l3-agent          |
| 8c0781e7-8b3e-4c9f-a8da-0d4cdc570afb | Open vSwitch agent | xiandian |                   | :-)   | True           | neutron-openvswitch-agent |
| a3504292-e108-4ad1-ae86-42ca9ccfde78 | DHCP agent         | xiandian | nova              | :-)   | True           | neutron-dhcp-agent        |
| be17aa73-deba-411a-ac10-fd523079085d | Metadata agent     | xiandian |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
[root@xiandian ~]# neutron agent-show a3504292-e108-4ad1-ae86-42ca9ccfde78
+---------------------+----------------------------------------------------------+
| Field               | Value                                                    |
+---------------------+----------------------------------------------------------+
| admin_state_up      | True                                                     |
| agent_type          | DHCP agent                                               |
| alive               | True                                                     |
| availability_zone   | nova                                                     |
| binary              | neutron-dhcp-agent                                       |
| configurations      | {                                                        |
|                     |      "subnets": 1,                                       |
|                     |      "dhcp_lease_duration": 86400,                       |
|                     |      "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",  |
|                     |      "networks": 1,                                      |
|                     |      "log_agent_heartbeats": false,                      |
|                     |      "ports": 2                                          |
|                     | }                                                        |
| created_at          | 2017-02-23 04:57:05                                      |
| description         |                                                          |
| heartbeat_timestamp | 2019-09-28 21:33:06                                      |
| host                | xiandian                                                 |
| id                  | a3504292-e108-4ad1-ae86-42ca9ccfde78                     |
| started_at          | 2017-02-23 04:57:05                                      |
| topic               | dhcp_agent                                               |
+---------------------+----------------------------------------------------------+
52. Cinder Service Operations
Use VMWare to start the provided opensatckallinone image, check the status of the openstack services yourself, and troubleshoot any problems. Using Cinder commands, create a 2 GB volume named extend-demo and view the volume information; create a volume type named "lvm"; view the existing volume types with a cinder command; create a volume named type_test_demo carrying the "lvm" type; finally, view the created volume with a command. Submit the above commands and their results as text in the answer box. (40 points)
Reference answer:
# cinder create --name cinder-volume-demo 2
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-09-28T18:59:13.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 5df3295d-3c92-41f5-95af-c371a3e8b47f |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | cinder-volume-demo                   |
| os-vol-host-attr:host          | xiandian@lvm#LVM                     |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 0ab2dbde4f754b699e22461426cd0774     |
| replication_status             | disabled                             |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | 2019-09-28T18:59:14.000000           |
| user_id                        | 53a1cf0ad2924532aa4b7b0750dec282     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+
# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name               | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 5df3295d-3c92-41f5-95af-c371a3e8b47f | available | cinder-volume-demo | 2    | -           | false    |             |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
# cinder type-create lvm
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| b247520f-84dd-41cb-a706-4437e7320fa8 | lvm  | -           | True      |
+--------------------------------------+------+-------------+-----------+
# cinder type-list
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| b247520f-84dd-41cb-a706-4437e7320fa8 | lvm  | -           | True      |
+--------------------------------------+------+-------------+-----------+
# cinder create --name type_test_demo --volume-type lvm 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-09-28T19:15:14.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 12d09316-1c9f-43e1-93bd-24e54cbf7ef6 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | type_test_demo                       |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 0ab2dbde4f754b699e22461426cd0774     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 53a1cf0ad2924532aa4b7b0750dec282     |
| volume_type                    | lvm                                  |
+--------------------------------+--------------------------------------+
# cinder show type_test_demo
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-09-28T19:15:14.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 12d09316-1c9f-43e1-93bd-24e54cbf7ef6 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | type_test_demo                       |
| os-vol-host-attr:host          | xiandian@lvm#LVM                     |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 0ab2dbde4f754b699e22461426cd0774     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2019-09-28T19:15:15.000000           |
| user_id                        | 53a1cf0ad2924532aa4b7b0750dec282     |
| volume_type                    | lvm                                  |
+--------------------------------+--------------------------------------+
53. Object Storage Management
Use VMWare to start the provided opensatckallinone image, check the status of the openstack services yourself, and troubleshoot any problems. Using openstack commands, create a container named examtest and query it, then upload an aaa.txt file (which you may create yourself) to the container and query it. Submit the commands and their results, in order, as text in the answer box. (30 points)
Reference answer:
[root@xiandian ~]# openstack container create examtest
+---------------------------------------+-----------+------------------------------------+
| account                               | container | x-trans-id                         |
+---------------------------------------+-----------+------------------------------------+
| AUTH_0ab2dbde4f754b699e22461426cd0774 | examtest  | tx9e7b54f8042d4a6ca5ccf-005a93daf3 |
+---------------------------------------+-----------+------------------------------------+
[root@xiandian ~]# openstack container list
+----------+
| Name     |
+----------+
| examtest |
+----------+
[root@xiandian ~]# openstack object create examtest aaa.txt
+---------+-----------+----------------------------------+
| object  | container | etag                             |
+---------+-----------+----------------------------------+
| aaa.txt | examtest  | 45226aa24b72ce0ccc4ff73eefe2e26f |
+---------+-----------+----------------------------------+
[root@xiandian ~]# openstack object list examtest
+---------+
| Name    |
+---------+
| aaa.txt |
+---------+
54. Docker Usage
Use VMWare to start the provided k8sallinone image, which already has the docker service installed. Create the /opt/xiandian directory, then start a container named xiandian-dir from the nginx_latest.tar image (the images are in the /root/images directory), specifying /opt/xiandian as a data volume for the container and mapping nginx's port 80 to external port 81. Afterwards, use the inspect command to view the data volume configuration. Submit the above commands and their results as text in the answer box. (30 points)
Reference answer:
[root@master ~]# mkdir /opt/xiandian
[root@master ~]# docker run -itd --name xiandian-dir -v /opt/xiandian -p 81:80
nginx:latest /bin/bash
e103676b5199ff766cb06b71fcb4c438fc083b4d4e044863db0944370c0fb914
[root@master ~]# docker inspect -f '{{.Config.Volumes}}' xiandian-dir
map[/opt/xiandian:{}]
55. Docker Compose
Use VMWare to start the provided k8sallinone image and complete the docker compose example. All files used by the example are in the /root/compose directory, and all required images are in the /root/images directory. To understand the principles and workflow of docker compose, create a composetest directory under /root as the working directory, study the provided configuration and deployment files, and run the docker compose example. Once it is running, use the curl command to access http://IP:5000. Finally, submit the result of running docker-compose up and the result of curl as text in the answer box. (30 points)
Reference answer:
[root@master composetest]# docker-compose up
Building web
Step 1/4 : FROM python:3.5-alpine
 ---> 96b4e8050dda
Step 2/4 : ADD . /code
 ---> 61ddeca19fc4
Step 3/4 : WORKDIR /code
 ---> Running in bc3d9a0a6e52
Removing intermediate container bc3d9a0a6e52
 ---> a1fed72666b9
Step 4/4 : CMD ["python", "app.py"]
 ---> Running in 63d2ddf47472
Removing intermediate container 63d2ddf47472
 ---> 08bab6135480
Successfully built 08bab6135480
Successfully tagged composetest_web:latest
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating composetest_web_1   ... done
Creating composetest_redis_1 ... done
Attaching to composetest_redis_1, composetest_web_1
redis_1  | 1:C 11 May 2020 12:41:24.126 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1  | 1:C 11 May 2020 12:41:24.126 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1  | 1:C 11 May 2020 12:41:24.126 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1  | 1:M 11 May 2020 12:41:24.128 * Running mode=standalone, port=6379.
redis_1  | 1:M 11 May 2020 12:41:24.128 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1  | 1:M 11 May 2020 12:41:24.128 # Server initialized
redis_1  | 1:M 11 May 2020 12:41:24.128 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1  | 1:M 11 May 2020 12:41:24.128 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1  | 1:M 11 May 2020 12:41:24.128 * Ready to accept connections
web_1    |  * Serving Flask app "app" (lazy loading)
web_1    |  * Environment: production
web_1    |    WARNING: This is a development server. Do not use it in a production deployment.
web_1    |    Use a production WSGI server instead.
web_1    |  * Debug mode: on
web_1    |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1    |  * Restarting with stat
web_1    |  * Debugger is active!
web_1    |  * Debugger PIN: 124-577-743
[root@master ~]# curl 192.168.100.20:5000
Hello World! I have been seen 1 times.
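The provided configuration files are not reproduced in the answer. For this classic Flask-plus-Redis compose demo, the working directory typically contains something like the following (a sketch only; the files actually provided in /root/compose may differ):

```shell
# Hypothetical reconstruction of the compose file in the working directory
cd /root/composetest
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    build: .          # builds from the Dockerfile seen in the log above
    ports:
      - "5000:5000"
  redis:
    image: redis:latest
EOF

# Bring the two services up
docker-compose up
```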
56. K8S Platform Installation
Use VMWare to start the provided k8sallinone image, confirm the IP address, and run the install.sh script in the /root directory to deploy the K8S platform in one step. After the installation finishes, use kubectl commands to view the status of nodes, cs, and pods, and submit the commands and their results as text in the answer box. (30 points)
Reference answer:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 56m v1.14.1
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@master ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-mhsw5                1/1     Running   0          60s
coredns-8686dcc4fd-sgmgc                1/1     Running   0          60s
etcd-master                             1/1     Running   0          15s
kube-apiserver-master                   1/1     Running   0          17s
kube-flannel-ds-amd64-dqg24             1/1     Running   0          54s
kube-proxy-q62n6                        1/1     Running   0          60s
kubernetes-dashboard-5f7b999d65-dnrnz   1/1     Running   0          44s
57. K8S Platform Usage
Using the K8S platform installed in the previous question, carry out the following experiment in platform usage and operations. Experiment: run an nginx application with a kubectl command, using the nginx_latest.tar image, using only the local image without pulling from outside, with 4 replicas. Once it is running, use a command to view the pods' status, expose port 80, and finally view the nginx home page with the curl command. Submit the above commands and their results as text in the answer box. (40 points)
Reference answer:
[root@master ~]# kubectl run nginx --image=nginx:latest --image-pull-policy=Never --replicas=4
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
[root@master ~]# kubectl expose deploy/nginx --port 80
service/nginx exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   3h18m
nginx        ClusterIP   10.100.220.6   <none>        80/TCP    9s
[root@master ~]# curl 10.100.220.6:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
58. Ansible Script
Use node xnode1, which already has the Ansible service installed, as the Ansible control machine. First configure Ansible's hosts file, adding xnode2 and xnode3 to it, and ping the hosts in it with an Ansible command. Then use an Ansible command to copy the local /etc/hosts file to the /root directory of xnode2 and xnode3. Submit the two Ansible commands you ran and their results as text in the answer box. (40 points)
Reference answer:
[root@xnode1 ~]# ansible all -m ping
192.168.200.13 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.200.12 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[root@xnode1 ~]# ansible host -m copy -a "src=/etc/hosts dest=/root"
192.168.200.13 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "7335999eb54c15c67566186bdfc46f64e0d5a1aa",
    "dest": "/root/hosts",
    "gid": 0,
    "group": "root",
    "md5sum": "54fb6627dbaa37721048e4549db3224d",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:admin_home_t:s0",
    "size": 158,
    "src": "/root/.ansible/tmp/ansible-tmp-1589216530.26-2930-194839602799100/source",
    "state": "file",
    "uid": 0
}
192.168.200.12 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "7335999eb54c15c67566186bdfc46f64e0d5a1aa",
    "dest": "/root/hosts",
    "gid": 0,
    "group": "root",
    "md5sum": "54fb6627dbaa37721048e4549db3224d",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:admin_home_t:s0",
    "size": 158,
    "src": "/root/.ansible/tmp/ansible-tmp-1589216530.25-2928-166732987871246/source",
    "state": "file",
    "uid": 0
}
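The inventory configuration itself is not shown in the answer. It amounts to a small addition to /etc/ansible/hosts (the group name host matches the copy command above, and the two addresses match the output):

```shell
# Add xnode2 and xnode3 to the Ansible inventory under a group named "host"
cat >> /etc/ansible/hosts <<'EOF'
[host]
192.168.200.12
192.168.200.13
EOF
```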