
1+X Cloud Computing Platform Operations and Development Certification (Intermediate), Sample Paper B: Practical Walkthrough


Some of the multiple-choice questions may contain minor issues; use them with discretion. The focus here is on the practical tasks.

Single-choice questions (200 points):

1. Which of the following statements about the software project development process is incorrect? (10 points)
A. In agile development, the whole cycle keeps iterating from requirements, planning, development, and testing until the project ends
B. In agile development, the development, testing, and release phases can each be iterated multiple times on their own
C. The waterfall model is divided into planning, analysis, design, programming, testing, revision, and integration; one phase finishes before the next begins
D. The waterfall model emphasizes iterative development: analysis, design, development, testing, and release (correct answer)

 

2. Agile development keeps iterating from requirements, planning, development, and testing until the project ends. Which of the following is NOT a phase that can be iterated separately? (10 points)
A. Development
B. Testing
C. Planning (correct answer)
D. Release

 

3. Which of the following statements about the STP protocol is correct? (10 points)
A. STP runs on switches and bridge devices
B. STP is a Layer 2 link-management protocol
C. After the root bridge is elected, STP puts some ports into the blocking state; these blocked ports are inactive (correct answer)
D. The main function of STP is to allow redundant paths on Layer 2 links while ensuring the network contains no loops

 

4. Which of the following is not an RSTP port state? (10 points)
A. Listening (correct answer)
B. Discarding
C. Forwarding
D. Learning

 

5. Among the common database table-management commands, what is the purpose of "use <database>;"? (10 points)
A. Describe a table
B. Select the database to use (correct answer)
C. List the existing databases
D. Update the data in a table

 

6. In the mysqldump backup command, what does the -u option specify? (10 points)
A. The database user name (correct answer)
B. The password prompt
C. Export the table structure only
D. Disallow data modification after the backup completes

 

7. In the Basic Paxos flow, what is the main function of the server role? (10 points)
A. Tally the voting results (correct answer)
B. Cast votes
C. Filter out useless votes
D. No function at all

 

8. After the Leader is elected, ZooKeeper enters the state-synchronization phase, which works as follows: ① the Leader waits for Servers to connect; ② each Follower connects to the Leader and sends it its largest zxid; ③ the Leader determines the synchronization point from the Follower's zxid; ④ once synchronization completes, the Leader notifies the Follower that it is now in the uptodate state; ⑤ after receiving the uptodate message, the Follower can resume serving client requests. What is the correct order? (10 points)
A. 12345 (correct answer)
B. 13245
C. 12435
D. 21345

 

9. What service does Keystone provide for the OpenStack platform? (10 points)
A. Identity (authentication) service (correct answer)
B. Storage service
C. Image service
D. Compute service

 

10. What is the cornerstone of private cloud infrastructure? (10 points)
A. Virtualization (correct answer)
B. Distributed computing
C. Parallel computing
D. Centralized computing

 

11. Swift stores objects on nodes, each of which consists of multiple disks, and guarantees that objects are replicated on multiple nodes. By default, how many replicas does Swift keep of all data? (10 points)
A. 1
B. 2
C. 3 (correct answer)
D. 4

 

12. Which of the following statements about industry clouds is incorrect? (10 points)
A. They can be specially optimized for the industry's business
B. They offer further convenience to users
C. They further reduce costs
D. They can support a wide range of sectors at the same time, e.g. finance clouds, government clouds, medical clouds, and health clouds (correct answer)

 

13. Which of the following is NOT a benefit cloud computing brings to small and micro businesses? (10 points)
A. Saving on infrastructure investment
B. Saving on low-level technical staff
C. Access to the latest resources at any time
D. Gaining extensive machine-room management experience (correct answer)

 

14. Which statement about the Auto Scaling service is correct? (10 points)
A. Servers in the Auto Scaling service are made of a special flexible material
B. Auto Scaling is billed in two forms: pay-as-you-go and monthly/yearly subscription
C. Auto Scaling is a service that automatically adds or removes instances according to server load (correct answer)
D. All of the above are wrong

 

15. Which statement about networks of different tenants is correct? (10 points)
A. In Tencent Cloud, different tenants may configure overlapping IP addresses (correct answer)
B. In Tencent Cloud, different tenants may not configure overlapping IP addresses
C. Networks of different tenants can communicate with each other by default
D. Different tenants cannot communicate at all

 

16. Which of the following namespaces does Docker use? (10 points)
A. The PID namespace
B. The NET namespace
C. The IPC namespace
D. All of the above (correct answer)

 

17. In the Docker build file (Dockerfile), what does RUN mean? (10 points)
A. Define the base image
B. The author or maintainer
C. A Linux command to run (correct answer)
D. Add a file or directory

 

18. A program written according to the Shell language specification is saved as a? (10 points)
A. File (correct answer)
B. Directory
C. Archive
D. Image

 

19. In which location is Ansible's inventory file found by default? (10 points)
A. /etc/ansible.cfg
B. /etc/ansible
C. /etc/ansible/hosts (correct answer)
D. /var/log/ansible

 

20. A Python module is a Python file whose name ends with which suffix? (10 points)
A. .yml
B. .py (correct answer)
C. .cpp
D. .dll

 

Multiple-choice questions (200 points):

1. Which statements about the project requirements phase are correct? (10 points)
A. In the requirements phase, analyze the customer's business activities and determine the system's purpose, scope, definition, and functions (correct answer)
B. Requirements research, elicitation, and consolidation must be led by the project manager, with the product manager responsible for collecting customer requirements (correct answer)
C. Testers should also take part in requirements analysis, review, and summarization (correct answer)
D. Requirements are the soul of a project; only with requirements is there any possibility of carrying it out (correct answer)

 

2. Which statements about the change phase are incorrect? (10 points)
A. Throughout software development, requirement changes bring uncertainty, but they can be avoided (correct answer)
B. By impact and customer investment, changes can be classified into critical, follow-up critical, follow-up important, improvement, and optional requirements, which are then managed and controlled by time priority.
C. For a project whose requirements analysis was done well, the more detailed and clear the scope defined in the requirements specification, the smaller the chance that the user will ask the project manager for changes.
D. When signing the agreement, the two parties stipulate in writing that change requests and their execution process need not be documented. (correct answer)

 

3. Commonly used WLAN encryption methods include? (10 points)
A. wpa (correct answer)
B. wep (correct answer)
C. wep2 (correct answer)
D. tkip

 

4. Redundant links in a switched network can cause? (10 points)
A. Broadcast storms (correct answer)
B. MAC address table instability (correct answer)
C. Duplicate frame copies (correct answer)
D. Switch failure

 

5. Which statements about the Nginx configuration file nginx.conf are correct? (10 points)
A. It is best to set the number of nginx worker processes equal to the total number of CPU cores (correct answer)
B. When a virtual host is configured with multiple domain names, the names should be separated by commas
C. sendfile on; enables efficient file transfer mode and should be set to on for applications such as downloads
D. When setting the working mode and connection limit, consider the per-process maximum connection count (total connections = connections × processes) (correct answer)

 

6. On a Linux system, which of the following are user-management configuration files? (10 points)
A. /etc/passwd (correct answer)
B. /etc/shadow (correct answer)
C. /etc/group (correct answer)
D. /etc/password

 

7. Which of the following are characteristics of Swift object storage? (10 points)
A. Elastic scalability (correct answer)
B. High availability (correct answer)
C. Distributed (correct answer)
D. Clustered

 

8. Which of the following are NOT Glance commands for listing images? (10 points)
A. glance iamges-show (correct answer)
B. glance image-list
C. glance images-list (correct answer)
D. glance image-show (correct answer)

 

9. Which of the following statements are correct? (10 points)
A. nova start creates a cloud instance
B. nova restart restarts a cloud instance (correct answer)
C. nova boot starts a cloud instance
D. nova reset rebuilds a cloud instance (correct answer)

 

10. Possible reasons for an AP failing to register? (10 points)
A. The AP is not powered on (correct answer)
B. There is a problem with the network cable connected to the AP (correct answer)
C. The information supplied by the AP device does not match (correct answer)
D. The switch is unreachable (correct answer)

 

11. The Block Storage service (Cinder) provides block storage for instances. Storage allocation and consumption are determined by the block storage drivers, or by the drivers in a multi-backend configuration. Which of the following are available drivers? (10 points)
A. NAS/SAN (correct answer)
B. NFS (correct answer)
C. NTFS
D. Ceph (correct answer)

 

12. Which of the following are NOT Glance commands for listing images? (10 points)
A. glance iamges-show (correct answer)
B. glance image-list
C. glance images-list (correct answer)
D. glance image-show (correct answer)

 

13. Which of the following statements about OpenStack components are correct? (10 points)
A. Heat is a service that orchestrates composite cloud applications based on templates. (correct answer)
B. Cinder's core function is volume management: it can handle volumes, volume types, and snapshots. (correct answer)
C. The Neutron networking service is how OpenStack manages all networking: the physical network infrastructure and the access-layer virtual network infrastructure
D. The network also supports security groups, which allow administrators to define firewall rules within a group. (correct answer)

 

14. Tencent Cloud servers are divided into previous-generation, current-generation, and latest-generation instances. Unless there are special needs, it is generally recommended to use a current-generation instance type when creating new instances. Which of the following are current-generation instances? (10 points)
A. High IO I2 (correct answer)
B. Compute C2 (correct answer)
C. Standard S1
D. Memory-optimized M3

 

15. Which statements about Dedicated Hosts and Blackstone physical servers are correct? (10 points)
A. A Dedicated Host is a cloud server based on virtualization, while a Blackstone physical server is a bare-metal architecture (correct answer)
B. Blackstone physical servers offer an on-demand, pay-as-you-go physical server rental service (correct answer)
C. A Dedicated Host is a service for purchasing and creating cloud instances on exclusively owned host resources (correct answer)
D. Dedicated Hosts cannot communicate with cloud servers

 

16. Compared with a traditional IDC, what advantages does cloud computing have? (10 points)
A. No hardware purchase or maintenance costs (correct answer)
B. No physical hardware to deploy and configure, so resources are delivered quickly (correct answer)
C. Resources can be elastically allocated on demand within a short time, reducing idle and wasted resources (correct answer)
D. No need to deploy and maintain software developed by the user

 

17. Which scenarios are suitable for the prepaid (monthly/yearly subscription) billing mode? (10 points)
A. Relatively stable business workloads (correct answer)
B. Business with large fluctuations that cannot be predicted accurately
C. Long-term use of cloud resources with a focus on low cost (correct answer)
D. Temporary or bursty resource usage

 

18. Which statements about Docker containers are correct? (10 points)
A. A container is a running instance of an image (correct answer)
B. Containers can be started, stopped, and deleted by running user-specified commands (correct answer)
C. You can enter a container for interactive work by allocating a pseudo-terminal with a command (correct answer)
D. Containers are all visible to one another

 

19. Common Python web-page parsing techniques include _. (10 points)
A. Regular expressions (correct answer)
B. html.parser (correct answer)
C. lxml (correct answer)
D. Beautiful Soup (correct answer)

 

20. Which of the following are common types of Shell? (10 points)
A. Bourne Shell (correct answer)
B. Bourne-Again Shell (correct answer)
C. Korn Shell (correct answer)
D. Z Shell (correct answer)

 

Practical tasks (600 points):

 

1. Switch management (40 points)

In eNSP, configure an S5700 switch: create vlan 2, vlan 3, and vlan 1004 with a single command; using a port group, configure ports 1-5 in access mode and add them to vlan 2; configure port 10 in trunk mode and permit vlan 3; create Layer 3 interface vlan 2 with IP address 172.16.2.1/24 and Layer 3 interface vlan 1004 with IP address 192.168.4.2/30; add a default route with next hop 192.168.4.1. (Use complete commands.) Submit the commands and their output as text in the answer box.

SW1 configuration:

  <Huawei>system-view
  [Huawei]sysname SW1
  [SW1]vlan batch 2 3 1004
  [SW1]port-group 1
  [SW1-port-group-1]group-member GigabitEthernet 0/0/1 to GigabitEthernet 0/0/5
  [SW1-port-group-1]port link-type access
  [SW1-port-group-1]port default vlan 2
  [SW1]interface GigabitEthernet 0/0/10
  [SW1-GigabitEthernet0/0/10]port link-type trunk
  [SW1-GigabitEthernet0/0/10]port trunk allow-pass vlan 3
  [SW1-GigabitEthernet0/0/10]quit
  [SW1]interface Vlanif 2
  [SW1-Vlanif2]ip address 172.16.2.1 24
  [SW1-Vlanif2]quit
  [SW1]interface Vlanif 1004
  [SW1-Vlanif1004]ip address 192.168.4.2 30
  [SW1-Vlanif1004]quit
  [SW1]ip route-static 0.0.0.0 0 192.168.4.1

 

2. Switch management (40 points)

Switch configuration: port g0/0/1 connects to router R1, belongs to vlan 1001, and is addressed 192.168.1.2/30 to communicate with the router. Port g0/0/2 connects to PC1 and belongs to vlan 101; configure PC1's gateway address as 172.16.101.254/24. Configure a default route whose next hop is the router's address. Router configuration: R1's port g0/0/1 is addressed 12.12.12.1/30 and configured for port multiplexing (PAT). R1's port g0/0/2 is addressed 192.168.1.1/30 and connects to the switch. On the router, configure a default route for access to external networks and a static route to the PC network. (Use complete commands for all configuration.) Submit the commands and their output as text in the answer box.

 

Topology diagram (in the exam, getting the commands right is enough, and one SW1 plus one R1 will do; the full deployment is used in practice here so you learn more)

SW1 configuration:

  <Huawei>system-view
  [Huawei]sysname SW1
  [SW1]vlan batch 101 1001
  [SW1]interface GigabitEthernet 0/0/1
  [SW1-GigabitEthernet0/0/1]port link-type access
  [SW1-GigabitEthernet0/0/1]port default vlan 1001
  [SW1-GigabitEthernet0/0/1]quit
  [SW1]interface Vlanif 1001
  [SW1-Vlanif1001]ip address 192.168.1.2 30
  [SW1-Vlanif1001]quit
  [SW1]interface GigabitEthernet 0/0/2
  [SW1-GigabitEthernet0/0/2]port link-type access
  [SW1-GigabitEthernet0/0/2]port default vlan 101
  [SW1-GigabitEthernet0/0/2]quit
  [SW1]interface Vlanif 101
  [SW1-Vlanif101]ip address 172.16.101.254 24
  [SW1-Vlanif101]quit
  [SW1]ip route-static 0.0.0.0 0 192.168.1.1

 

R1 configuration:

  <Huawei>system-view
  [Huawei]sysname R1
  [R1]acl number 2000
  [R1-acl-basic-2000]rule 1 permit
  [R1-acl-basic-2000]quit
  [R1]interface GigabitEthernet 0/0/1
  [R1-GigabitEthernet0/0/1]ip address 12.12.12.1 30
  [R1-GigabitEthernet0/0/1]nat outbound 2000
  [R1-GigabitEthernet0/0/1]quit
  [R1]interface GigabitEthernet 0/0/2
  [R1-GigabitEthernet0/0/2]ip address 192.168.1.1 30
  [R1-GigabitEthernet0/0/2]quit
  [R1]ip route-static 0.0.0.0 0 GigabitEthernet 0/0/1
  [R1]ip route-static 172.16.101.0 255.255.255.0 192.168.1.2

 

Result (connectivity check):

First check whether PC1 can ping the switch and the router.

Then check from router R1 whether the external network can be pinged.

 

3. YUM repository management (40 points)

Suppose there is an image file centos7.2-1511.iso. Use it to configure a yum repository, mounting the image file at the /opt/centos directory. There is also an FTP source at IP address 192.168.100.200 whose ftp configuration file sets anon_root=/opt; the /opt directory contains an iaas directory (which in turn contains a repodata directory). How do you write your own local.repo file so that packages from both locations can be used to install software? Submit the contents of local.repo as text in the answer box.

Xserver1:

  [root@xserver1 ~]# yum install -y vsftpd
  [root@xserver1 ~]# vim /etc/vsftpd/vsftpd.conf
  anon_root=/opt
  [root@xserver1 ~]# systemctl restart vsftpd
  [root@xserver1 ~]# systemctl enable vsftpd
  Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.
  [root@xserver1 ~]# systemctl stop firewalld
  [root@xserver1 ~]# systemctl disable firewalld
  # SELinux: set the enforcement mode persistently (takes effect after a reboot):
  [root@xserver1 ~]# vim /etc/selinux/config
  SELINUX=Permissive
  # Set the mode temporarily (no reboot needed):
  [root@xserver1 ~]# setenforce 0
  [root@xserver1 ~]# getenforce
  Permissive

 

Xserver2:

  [root@xserver2 ~]# systemctl stop firewalld
  [root@xserver2 ~]# systemctl disable firewalld
  # SELinux: set the enforcement mode persistently (takes effect after a reboot):
  [root@xserver2 ~]# vim /etc/selinux/config
  SELINUX=Permissive
  # Set the mode temporarily (no reboot needed):
  [root@xserver2 ~]# setenforce 0
  [root@xserver2 ~]# getenforce
  Permissive
  [root@xserver2 ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /opt/centos/
  mount: /dev/loop0 is write-protected, mounting read-only
  [root@xserver2 ~]# cat /etc/yum.repos.d/local.repo
  [centos]
  name=centos
  baseurl=file:///opt/centos
  enabled=1
  gpgcheck=0
  [iaas]
  name=iaas
  baseurl=ftp://192.168.100.200/iaas
  enabled=1
  gpgcheck=0
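The finished local.repo can be sanity-checked before running yum. Below is a minimal sketch that writes the same repo file and verifies the fields yum needs; it writes to a temporary file instead of /etc/yum.repos.d/local.repo so the check has no side effects (the paths and repo IDs are the ones used in the answer above).

```shell
#!/bin/sh
# Write the repo file the same way the answer does, then verify that the
# fields yum relies on are present. A temp file stands in for
# /etc/yum.repos.d/local.repo so this can run anywhere.
REPO_FILE=$(mktemp)
cat > "$REPO_FILE" <<'EOF'
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[iaas]
name=iaas
baseurl=ftp://192.168.100.200/iaas
enabled=1
gpgcheck=0
EOF
# Both repo sections must declare a baseurl, and gpgcheck must be off,
# since the local media is unsigned.
grep -q '^\[centos\]' "$REPO_FILE" || { echo "missing [centos] section"; exit 1; }
grep -q '^baseurl=ftp://192.168.100.200/iaas$' "$REPO_FILE" || { echo "missing ftp baseurl"; exit 1; }
echo "repo file OK"
```

On the real host, the same content would go to /etc/yum.repos.d/local.repo, followed by `yum clean all && yum repolist` to confirm both repos resolve.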

 

4. RAID storage management (40 points)

Log in to the cloud host. It has a 20 GB disk, /dev/vdb. Use the fdisk command to partition the disk, creating two 5 GB partitions. Use those two partitions to create a RAID 1 array named /dev/md0. After creation, format it with the xfs filesystem and mount it on the /mnt directory. Submit the output of the mdadm -D /dev/md0 and df -h commands as text in the answer box.

  • Because I did this lab on a CentOS 7 system, the disk is sdb; on CentOS 6 the disk is vdb:
  [root@xiandian ~]# fdisk /dev/sdb
  Welcome to fdisk (util-linux 2.23.2).
  Changes will remain in memory only, until you decide to write them.
  Be careful before using the write command.
  Device does not contain a recognized partition table
  Building a new DOS disklabel with disk identifier 0xb7634785.
  Command (m for help): n
  Partition type:
  p primary (0 primary, 0 extended, 4 free)
  e extended
  Select (default p): p
  Partition number (1-4, default 1): 1
  First sector (2048-41943039, default 2048):
  Using default value 2048
  Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +5G
  Partition 1 of type Linux and of size 5 GiB is set
  Command (m for help): n
  Partition type:
  p primary (1 primary, 0 extended, 3 free)
  e extended
  Select (default p): p
  Partition number (2-4, default 2): 2
  First sector (10487808-41943039, default 10487808):
  Using default value 10487808
  Last sector, +sectors or +size{K,M,G} (10487808-41943039, default 41943039): +5G
  Partition 2 of type Linux and of size 5 GiB is set
  Command (m for help): n
  Partition type:
  p primary (2 primary, 0 extended, 2 free)
  e extended
  Select (default p): p
  Partition number (3,4, default 3): 3
  First sector (20973568-41943039, default 20973568):
  Using default value 20973568
  Last sector, +sectors or +size{K,M,G} (20973568-41943039, default 41943039): +5G
  Partition 3 of type Linux and of size 5 GiB is set
  Command (m for help): w
  The partition table has been altered!
  Calling ioctl() to re-read partition table.
  Syncing disks.
  [root@xiandian ~]# yum install -y mdadm
  [root@xserver1 ~]# mdadm -Cv /dev/md0 -l1 -n2 /dev/sdb[1-2]
  mdadm: Note: this array has metadata at the start and
  may not be suitable as a boot device. If you plan to
  store '/boot' on this device please ensure that
  your boot-loader understands md/v1.x metadata, or use
  --metadata=0.90
  mdadm: size set to 5237760K
  Continue creating array? y
  mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
  mdadm: Defaulting to version 1.2 metadata
  mdadm: array /dev/md0 started.
  # You can check the resync progress:
  [root@xserver1 ~]# cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sdb2[1] sdb1[0]
  5237760 blocks super 1.2 [2/2] [UU]
  [======>..............] resync = 30.5% (1601792/5237760) finish=0.3min speed=200224K/sec
  unused devices: <none>
  [root@xserver1 ~]# mkfs.xfs /dev/md0
  meta-data=/dev/md0 isize=256 agcount=4, agsize=327360 blks
  = sectsz=512 attr=2, projid32bit=1
  = crc=0 finobt=0
  data = bsize=4096 blocks=1309440, imaxpct=25
  = sunit=0 swidth=0 blks
  naming =version 2 bsize=4096 ascii-ci=0 ftype=0
  log =internal log bsize=4096 blocks=2560, version=2
  = sectsz=512 sunit=0 blks, lazy-count=1
  realtime =none extsz=4096 blocks=0, rtextents=0
  [root@xserver1 ~]# mount /dev/md0 /mnt/
  [root@xserver1 ~]# mdadm -D /dev/md0
  /dev/md0:
  Version : 1.2
  Creation Time : Thu May 14 09:15:04 2020
  Raid Level : raid1
  Array Size : 5237760 (5.00 GiB 5.36 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
  Raid Devices : 2
  Total Devices : 2
  Persistence : Superblock is persistent
  Update Time : Thu May 14 09:20:57 2020
  State : clean
  Active Devices : 2
  Working Devices : 2
  Failed Devices : 0
  Spare Devices : 0
  Consistency Policy : unknown
  Name : xserver1:0 (local to host xserver1)
  UUID : 8440d04c:3cf2e84a:4d524020:1072f7b4
  Events : 17
  Number Major Minor RaidDevice State
  0 8 17 0 active sync /dev/sdb1
  1 8 18 1 active sync /dev/sdb2
  [root@xserver1 ~]# df -h
  Filesystem Size Used Avail Use% Mounted on
  /dev/mapper/centos-root 32G 5.2G 27G 17% /
  devtmpfs 903M 0 903M 0% /dev
  tmpfs 913M 0 913M 0% /dev/shm
  tmpfs 913M 8.6M 904M 1% /run
  tmpfs 913M 0 913M 0% /sys/fs/cgroup
  /dev/sda1 509M 125M 384M 25% /boot
  /dev/mapper/centos-home 4.0G 33M 4.0G 1% /home
  tmpfs 183M 0 183M 0% /run/user/0
  /dev/md0 5.0G 33M 5.0G 1% /mnt
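When grading your own run, the two things to confirm in the `mdadm -D` output are the RAID level and the clean state. A small sketch that extracts both from a captured transcript (the heredoc is an abridged copy of the output above; on a live host you would pipe `mdadm -D /dev/md0` in directly):

```shell
#!/bin/sh
# Parse a saved `mdadm -D /dev/md0` transcript and report the level and state.
MDADM_OUT=$(cat <<'EOF'
/dev/md0:
Version : 1.2
Raid Level : raid1
State : clean
Active Devices : 2
EOF
)
# Fields are "Key : Value"; split on " : " and pick the value.
level=$(printf '%s\n' "$MDADM_OUT" | awk -F' : ' '/Raid Level/ {print $2}')
state=$(printf '%s\n' "$MDADM_OUT" | awk -F' : ' '/^State/ {print $2}')
echo "level=$level state=$state"
```

A result other than `raid1`/`clean` (e.g. `resyncing` or `degraded`) means the array is still building or has lost a member.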

 

5. Master-slave database management (40 points)

Using the two virtual machines provided, install the mariadb database on both and configure them as a master-slave pair so that the two databases stay synchronized. When done, run "show slave status \G" in the database on the slave node to query the replication state, and submit the result as text in the answer box.

  • Upload the gpmall-repo directory (the one containing the mariadb packages) to /root:

Mysql1:

  [root@xiandian ~]# hostnamectl set-hostname mysql1
  [root@mysql1 ~]# login
  [root@mysql1 ~]# vim /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.1.111 mysql1
  192.168.1.112 mysql2
  [root@mysql1 ~]# systemctl stop firewalld
  [root@mysql1 ~]# systemctl disable firewalld
  [root@mysql1 ~]# setenforce 0
  [root@mysql1 ~]# vim /etc/selinux/config
  SELINUX=Permissive
  [root@mysql1~]# vim /etc/yum.repos.d/local.repo
  [centos]
  name=centos
  baseurl=file:///opt/centos
  enabled=1
  gpgcheck=0
  [mariadb]
  name=mariadb
  baseurl=file:///root/gpmall-repo
  enabled=1
  gpgcheck=0
  [root@mysql1 ~]# yum install -y mariadb mariadb-server
  [root@mysql1 ~]# systemctl restart mariadb
  [root@mysql1 ~]# mysql_secure_installation
  [root@mysql1 ~]# vim /etc/my.cnf
  # Add under [mysqld]:
  log_bin = mysql-bin
  binlog_ignore_db = mysql
  server_id = 10
  [root@mysql1 ~]# mysql -uroot -p000000
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MariaDB connection id is 3
  Server version: 5.5.65-MariaDB MariaDB Server
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  MariaDB [(none)]> grant all privileges on *.* to 'root'@'%' identified by '000000';
  Query OK, 0 rows affected (0.00 sec)
  # If you skipped the hosts file above, use the IP address instead of the hostname mysql2; the user name can be anything - it is only used for the replication connection
  MariaDB [(none)]> grant replication slave on *.* to 'user'@'mysql2' identified by '000000';
  Query OK, 0 rows affected (0.00 sec)

 

Mysql2:

  [root@xiandian ~]# hostnamectl set-hostname mysql2
  [root@mysql2 ~]# login
  [root@mysql2 ~]# vim /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.1.111 mysql1
  192.168.1.112 mysql2
  [root@mysql2 ~]# systemctl stop firewalld
  [root@mysql2 ~]# systemctl disable firewalld
  [root@mysql2 ~]# setenforce 0
  [root@mysql2 ~]# vim /etc/selinux/config
  SELINUX=Permissive
  [root@mysql2~]# vim /etc/yum.repos.d/local.repo
  [centos]
  name=centos
  baseurl=file:///opt/centos
  enabled=1
  gpgcheck=0
  [mariadb]
  name=mariadb
  baseurl=file:///root/gpmall-repo
  enabled=1
  gpgcheck=0
  [root@mysql2 ~]# yum install -y mariadb mariadb-server
  [root@mysql2 ~]# systemctl restart mariadb
  [root@mysql2 ~]# mysql_secure_installation
  [root@mysql2 ~]# vim /etc/my.cnf
  # Add under [mysqld]:
  log_bin = mysql-bin
  binlog_ignore_db = mysql
  server_id = 20
  [root@mysql2 ~]# mysql -uroot -p000000
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MariaDB connection id is 3
  Server version: 5.5.65-MariaDB MariaDB Server
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  # If you skipped the hosts file above, use the IP address instead of the hostname mysql1; the user and password here must match the user configured on mysql1
  MariaDB [(none)]> change master to master_host='mysql1',master_user='user',master_password='000000';
  Query OK, 0 rows affected (0.02 sec)
  MariaDB [(none)]> start slave;
  MariaDB [(none)]> show slave status\G
  *************************** 1. row ***************************
  Slave_IO_State: Waiting for master to send event
  Master_Host: mysql1
  Master_User: user
  Master_Port: 3306
  Connect_Retry: 60
  Master_Log_File: mysql-bin.000003
  Read_Master_Log_Pos: 245
  Relay_Log_File: mariadb-relay-bin.000005
  Relay_Log_Pos: 529
  Relay_Master_Log_File: mysql-bin.000003
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 245
  Relay_Log_Space: 1256
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: 0
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 30
  1 row in set (0.00 sec)
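The two lines that decide whether replication is healthy are Slave_IO_Running and Slave_SQL_Running: both must be Yes. A sketch that checks this in a captured `show slave status\G` transcript (the heredoc is abridged from the output above; on a live slave you would feed it `mysql -uroot -p000000 -e 'show slave status\G'` instead):

```shell
#!/bin/sh
# Verify that both replication threads are running in a saved transcript.
STATUS=$(cat <<'EOF'
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0
EOF
)
io=$(printf '%s\n' "$STATUS" | awk -F': ' '/Slave_IO_Running/ {print $2}')
sql=$(printf '%s\n' "$STATUS" | awk -F': ' '/Slave_SQL_Running/ {print $2}')
if [ "$io" = Yes ] && [ "$sql" = Yes ]; then
    echo "replication OK"
else
    echo "replication broken: IO=$io SQL=$sql"
fi
```

If Slave_IO_Running is No, look at Last_IO_Error (typically a connectivity or grant problem); if Slave_SQL_Running is No, Last_SQL_Error shows the failing statement.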

 

Verifying the result (is the master-slave pair in sync?):

Mysql1:

  [root@mysql1 ~]# mysql -uroot -p000000
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MariaDB connection id is 26
  Server version: 5.5.65-MariaDB MariaDB Server
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  MariaDB [(none)]> create database test;
  Query OK, 0 rows affected (0.00 sec)
  MariaDB [(none)]> use test;
  Database changed
  MariaDB [test]> create table demotables(id int not null primary key,name varchar(10),addr varchar(20));
  Query OK, 0 rows affected (0.01 sec)
  MariaDB [test]> insert into demotables values(1,'zhangsan','lztd');
  Query OK, 0 rows affected (0.00 sec)
  MariaDB [test]> select * from demotables;
  +----+----------+------+
  | id | name | addr |
  +----+----------+------+
  | 1 | zhangsan | lztd |
  +----+----------+------+
  1 row in set (0.00 sec)

 

Mysql2:

  [root@mysql2 ~]# mysql -uroot -p000000
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MariaDB connection id is 24
  Server version: 5.5.65-MariaDB MariaDB Server
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  MariaDB [(none)]> show databases;
  +--------------------+
  | Database |
  +--------------------+
  | information_schema |
  | mysql |
  | performance_schema |
  | test |
  +--------------------+
  4 rows in set (0.00 sec)
  MariaDB [(none)]> use test;
  Reading table information for completion of table and column names
  You can turn off this feature to get a quicker startup with -A
  Database changed
  MariaDB [test]> show tables;
  +----------------+
  | Tables_in_test |
  +----------------+
  | demotables |
  +----------------+
  1 row in set (0.00 sec)
  MariaDB [test]> select * from demotables;
  +----+----------+------+
  | id | name | addr |
  +----+----------+------+
  | 1 | zhangsan | lztd |
  +----+----------+------+
  1 row in set (0.00 sec)

 

6. Read/write-splitting database management (40 points)

Using the virtual machines and packages provided, build on the master-slave databases from the previous task and complete the installation and configuration of Mycat for read/write splitting. The required schema.xml configuration file is provided with the question (server.xml is not given). After configuring the read/write-splitting database, run netstat -ntpl to check which ports are listening, and submit the output of netstat -ntpl as text in the answer box.

Run the following on Mycat, Mysql1, and Mysql2:

  # This step is optional - it depends on whether you prefer hostnames or IP addresses
  [root@mycat ~]# vim /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.1.111 mysql1
  192.168.1.112 mysql2
  192.168.1.113 mycat

 

Mycat:

  • Upload the gpmall-repo directory (containing the mariadb packages) and Mycat-server-1.6-RELEASE-20161028204710-linux.gz to /root, and configure the yum repository:
  [root@mycat ~]# cat /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.1.111 mysql1
  192.168.1.112 mysql2
  192.168.1.113 mycat
  [root@mycat~]# vim /etc/yum.repos.d/local.repo
  [centos]
  name=centos
  baseurl=file:///opt/centos
  enabled=1
  gpgcheck=0
  [mariadb]
  name=mariadb
  baseurl=file:///root/gpmall-repo
  enabled=1
  gpgcheck=0
  [root@mycat ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
  [root@mycat ~]# tar -zvxf Mycat-server-1.6-RELEASE-20161028204710-linux.gz -C /usr/local/
  [root@mycat ~]# chmod -R 777 /usr/local/mycat/
  [root@mycat ~]# vim /etc/profile
  export MYCAT_HOME=/usr/local/mycat/
  [root@mycat ~]# source /etc/profile
  [root@mycat ~]# vim /usr/local/mycat/conf/schema.xml
  <?xml version='1.0'?>
  <!DOCTYPE mycat:schema SYSTEM "schema.dtd">
  <mycat:schema xmlns:mycat="http://io.mycat/">
  <!-- name="USERDB" is the logical database; the added dataNode="dn1" binds it to the real database -->
  <schema name="USERDB" checkSQLschema="true" sqlMaxLimit="100"
  dataNode="dn1"></schema>
  <!-- name="dn1" is the name referenced by the logical schema above; database="test" is the real database name -->
  <dataNode name="dn1" dataHost="localhost1" database="test" />
  <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql"
  dbDriver="native" writeType="0" switchType="1" slaveThreshold="100">
  <heartbeat>select user()</heartbeat>
  <writeHost host="hostM1" url="192.168.1.111:3306" user="root" password="000000">
  <readHost host="hostS1" url="192.168.1.112:3306" user="root" password="000000" />
  </writeHost>
  </dataHost>
  </mycat:schema>
  [root@mycat ~]# chown root:root /usr/local/mycat/conf/schema.xml
  # Set the root user's password and accessible schema
  [root@mycat ~]# vim /usr/local/mycat/conf/server.xml
  <user name="root">
  <property name="password">000000</property>
  <property name="schemas">USERDB</property>
  <!-- table-level DML privilege settings -->
  <!--
  <privileges check="false">
  <schema name="TESTDB" dml="0110" >
  <table name="tb01" dml="0000"></table>
  <table name="tb02" dml="1111"></table>
  </schema>
  </privileges>
  -->
  </user>
  # Delete the <user name="user"></user> tag that follows, together with its contents
  [root@mycat ~]# /bin/bash /usr/local/mycat/bin/mycat start
  Starting Mycat-server...
  [root@mycat ~]# netstat -ntlp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
  tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1114/sshd
  tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1992/master
  tcp 0 0 127.0.0.1:32000 0.0.0.0:* LISTEN 3988/java
  tcp6 0 0 :::45929 :::* LISTEN 3988/java
  tcp6 0 0 :::9066 :::* LISTEN 3988/java
  tcp6 0 0 :::40619 :::* LISTEN 3988/java
  tcp6 0 0 :::22 :::* LISTEN 1114/sshd
  tcp6 0 0 ::1:25 :::* LISTEN 1992/master
  tcp6 0 0 :::1984 :::* LISTEN 3988/java
  tcp6 0 0 :::8066 :::* LISTEN 3988/java
  # Verifying the result (is read/write splitting working?):
  [root@mycat ~]# yum install -y MariaDB-client
  # Inspect the logical database
  [root@mycat ~]# mysql -h 127.0.0.1 -P8066 -uroot -p000000
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MySQL connection id is 2
  Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (OpenCloundDB)
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  MySQL [(none)]> show databases;
  +----------+
  | DATABASE |
  +----------+
  | USERDB |
  +----------+
  1 row in set (0.003 sec)
  MySQL [(none)]> use USERDB
  Reading table information for completion of table and column names
  You can turn off this feature to get a quicker startup with -A
  Database changed
  MySQL [USERDB]> show tables;
  +----------------+
  | Tables_in_test |
  +----------------+
  | demotables |
  +----------------+
  1 row in set (0.007 sec)
  MySQL [USERDB]> select * from demotables;
  +----+----------+------+
  | id | name | addr |
  +----+----------+------+
  | 1 | zhangsan | lztd |
  | 2 | xiaohong | lztd |
  | 3 | xiaoli | lztd |
  | 4 | lihua | nnzy |
  +----+----------+------+
  4 rows in set (0.060 sec)
  MySQL [USERDB]> insert into demotables values(5,'tomo','hfdx');
  Query OK, 1 row affected (0.013 sec)
  MySQL [USERDB]> select * from demotables;
  +----+----------+------+
  | id | name | addr |
  +----+----------+------+
  | 1 | zhangsan | lztd |
  | 2 | xiaohong | lztd |
  | 3 | xiaoli | lztd |
  | 4 | lihua | nnzy |
  | 5 | tomo | hfdx |
  +----+----------+------+
  5 rows in set (0.004 sec)
  MySQL [USERDB]> exit;
  Bye
  # Query how read and write operations are split between the backends
  [root@mycat ~]# mysql -h 127.0.0.1 -P9066 -uroot -p000000 -e 'show @@datasource;'
  +----------+--------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
  | DATANODE | NAME | TYPE | HOST | PORT | W/R | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
  +----------+--------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
  | dn1 | hostM1 | mysql | 192.168.1.111 | 3306 | W | 0 | 10 | 1000 | 45 | 0 | 1 |
  | dn1 | hostS1 | mysql | 192.168.1.112 | 3306 | R | 0 | 6 | 1000 | 43 | 4 | 0 |
  +----------+--------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+

 

Parameter notes:

sqlMaxLimit sets the default query row limit
database is the real database name
balance="0": read/write splitting is disabled; all reads go to the currently available writeHost
balance="1": all readHosts and standby writeHosts take part in load-balancing select statements. In short, in a dual-master dual-slave setup (M1->S1, M2->S2, with M1 and M2 as mutual backups), M2, S1, and S2 normally share the select load
balance="2": all reads are distributed randomly across writeHosts and readHosts
balance="3": all read requests are distributed randomly to the readHosts attached to a writeHost; the writeHost takes no read load. Note that balance=3 exists only in version 1.4 and later, not in 1.3
writeType="0": all writes go to the first configured writeHost; if it fails, switch to the surviving second writeHost. After a restart, the host switched to remains in effect; the switch is recorded in the configuration file dnindex.properties
writeType="1": all writes are distributed randomly across the configured writeHosts
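With balance="3" in effect, the `show @@datasource` table above should show all READ_LOAD on the R node (hostS1) and all WRITE_LOAD on the W node (hostM1). A sketch that checks this from a captured table (the heredoc is an abridged, whitespace-normalized copy of the two data rows above; column 11 is READ_LOAD, column 12 is WRITE_LOAD):

```shell
#!/bin/sh
# Confirm that reads landed on the R node and writes on the W node,
# using an abridged copy of the `show @@datasource` output.
DS=$(cat <<'EOF'
dn1 hostM1 mysql 192.168.1.111 3306 W 0 10 1000 45 0 1
dn1 hostS1 mysql 192.168.1.112 3306 R 0 6 1000 43 4 0
EOF
)
# Field 6 is W/R, field 11 is READ_LOAD, field 12 is WRITE_LOAD.
writer_reads=$(printf '%s\n' "$DS" | awk '$6=="W" {print $11}')
reader_writes=$(printf '%s\n' "$DS" | awk '$6=="R" {print $12}')
echo "writer READ_LOAD=$writer_reads reader WRITE_LOAD=$reader_writes"
```

Both values should be 0: any read load on the writer or write load on the reader would mean the splitting rules are not being applied.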

 

7. ZooKeeper cluster (40 points)

Using the three virtual machines and packages provided, complete the installation and configuration of a ZooKeeper cluster. When done, run ./zkServer.sh status in the appropriate directory to check the state of the three ZooKeeper nodes, and submit the three nodes' states as text in the answer box.

Zookeeper1:

  [root@xiandian ~]# hostnamectl set-hostname zookeeper1
  [root@xiandian ~]# bash
  [root@zookeeper1 ~]# vim /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.1.10 zookeeper1
  192.168.1.20 zookeeper2
  192.168.1.30 zookeeper3
  # Upload gpmall-repo to the zookeeper1 node (I put it under /opt) and share it via vsftpd
  [root@zookeeper1 ~]# vim /etc/yum.repos.d/local.repo
  [centos]
  name=centos
  baseurl=file:///opt/centos
  enabled=1
  gpgcheck=0
  [gpmall]
  name=gpmall
  baseurl=file:///opt/gpmall-repo
  enabled=1
  gpgcheck=0
  [root@zookeeper1 ~]# yum repolist
  [root@zookeeper1 ~]# yum install -y vsftpd
  [root@zookeeper1 ~]# vim /etc/vsftpd/vsftpd.conf
  # Add:
  anon_root=/opt
  [root@zookeeper1 ~]# systemctl restart vsftpd
  [root@zookeeper1 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
  [root@zookeeper1 ~]# java -version
  openjdk version "1.8.0_252"
  OpenJDK Runtime Environment (build 1.8.0_252-b09)
  OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
  # Upload zookeeper-3.4.14.tar.gz to all three nodes, or share it via NFS
  [root@zookeeper1 ~]# tar -zvxf zookeeper-3.4.14.tar.gz
  [root@zookeeper1 ~]# cd zookeeper-3.4.14/conf/
  [root@zookeeper1 conf]# mv zoo_sample.cfg zoo.cfg
  [root@zookeeper1 conf]# vim zoo.cfg
  # The number of milliseconds of each tick
  tickTime=2000
  # The number of ticks that the initial
  # synchronization phase can take
  initLimit=10
  # The number of ticks that can pass between
  # sending a request and getting an acknowledgement
  syncLimit=5
  # the directory where the snapshot is stored.
  # do not use /tmp for storage, /tmp here is just
  # example sakes.
  dataDir=/tmp/zookeeper
  # the port at which the clients will connect
  clientPort=2181
  # the maximum number of client connections.
  # increase this if you need to handle more clients
  #maxClientCnxns=60
  #
  # Be sure to read the maintenance section of the
  # administrator guide before turning on autopurge.
  #
  # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
  #
  # The number of snapshots to retain in dataDir
  #autopurge.snapRetainCount=3
  # Purge task interval in hours
  # Set to "0" to disable auto purge feature
  #autopurge.purgeInterval=1
  server.1=192.168.1.10:2888:3888
  server.2=192.168.1.20:2888:3888
  server.3=192.168.1.30:2888:3888
  [root@zookeeper1 conf]# grep -n '^'[a-Z] zoo.cfg
  2:tickTime=2000
  5:initLimit=10
  8:syncLimit=5
  12:dataDir=/tmp/zookeeper
  14:clientPort=2181
  29:server.1=192.168.1.10:2888:3888
  30:server.2=192.168.1.20:2888:3888
  31:server.3=192.168.1.30:2888:3888
  [root@zookeeper1 conf]# cd
  [root@zookeeper1 ~]# mkdir /tmp/zookeeper
  [root@zookeeper1 ~]# vim /tmp/zookeeper/myid
  1
  [root@zookeeper1 ~]# cat /tmp/zookeeper/myid
  1
  [root@zookeeper1 ~]# cd zookeeper-3.4.14/bin/
  [root@zookeeper1 bin]# ./zkServer.sh start
  zookeeper JMX enabled by default
  Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  Starting zookeeper ... STARTED
  [root@zookeeper1 bin]# ./zkServer.sh status
  zookeeper JMX enabled by default
  Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  Mode: follower

 

Zookeeper2:

  1. [root@xiandian ~]# hostnamectl set-hostname zookeeper2
  2. [root@xiandian ~]# bash
  3. [root@zookeeper2 ~]# vim /etc/hosts
  4. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  5. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  6. 192.168.1.10 zookeeper1
  7. 192.168.1.20 zookeeper2
  8. 192.168.1.30 zookeeper3
  9. [root@zookeeper2 ~]# vim /etc/yum.repos.d/local.repo
  10. [centos]
  11. name=centos
  12. baseurl=file:///opt/centos
  13. enabled=1
  14. gpgcheck=0
  15. [gpmall]
  16. name=gpmall
  17. baseurl=ftp://zookeeper1/gpmall-repo
  18. enabled=1
  19. gpgcheck=0
  20. [root@zookeeper2 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
  21. [root@zookeeper2 ~]# java -version
  22. openjdk version "1.8.0_252"
  23. OpenJDK Runtime Environment (build 1.8.0_252-b09)
  24. OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
  25. # Note: upload zookeeper-3.4.14.tar.gz to all three nodes, or share it over NFS
  26. [root@zookeeper2 ~]# tar -zvxf zookeeper-3.4.14.tar.gz
  27. [root@zookeeper2 ~]# cd zookeeper-3.4.14/conf/
  28. [root@zookeeper2 conf]# mv zoo_sample.cfg zoo.cfg
  29. [root@zookeeper2 conf]# vim zoo.cfg
  30. # The number of milliseconds of each tick
  31. tickTime=2000
  32. # The number of ticks that the initial
  33. # synchronization phase can take
  34. initLimit=10
  35. # The number of ticks that can pass between
  36. # sending a request and getting an acknowledgement
  37. syncLimit=5
  38. # the directory where the snapshot is stored.
  39. # do not use /tmp for storage, /tmp here is just
  40. # example sakes.
  41. dataDir=/tmp/zookeeper
  42. # the port at which the clients will connect
  43. clientPort=2181
  44. # the maximum number of client connections.
  45. # increase this if you need to handle more clients
  46. #maxClientCnxns=60
  47. #
  48. # Be sure to read the maintenance section of the
  49. # administrator guide before turning on autopurge.
  50. #
  51. # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
  52. #
  53. # The number of snapshots to retain in dataDir
  54. #autopurge.snapRetainCount=3
  55. # Purge task interval in hours
  56. # Set to "0" to disable auto purge feature
  57. #autopurge.purgeInterval=1
  58. server.1=192.168.1.10:2888:3888
  59. server.2=192.168.1.20:2888:3888
  60. server.3=192.168.1.30:2888:3888
  61. [root@zookeeper2 conf]# grep -n '^'[a-Z] zoo.cfg
  62. 2:tickTime=2000
  63. 5:initLimit=10
  64. 8:syncLimit=5
  65. 12:dataDir=/tmp/zookeeper
  66. 14:clientPort=2181
  67. 29:server.1=192.168.1.10:2888:3888
  68. 30:server.2=192.168.1.20:2888:3888
  69. 31:server.3=192.168.1.30:2888:3888
  70. [root@zookeeper2 conf]# cd
  71. [root@zookeeper2 ~]# mkdir /tmp/zookeeper
  72. [root@zookeeper2 ~]# vim /tmp/zookeeper/myid
  73. 2
  74. [root@zookeeper2 ~]# cat /tmp/zookeeper/myid
  75. 2
  76. [root@zookeeper2 ~]# cd zookeeper-3.4.14/bin/
  77. [root@zookeeper2 bin]# ./zkServer.sh start
  78. zookeeper JMX enabled by default
  79. Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  80. Starting zookeeper ... STARTED
  81. [root@zookeeper2 bin]# ./zkServer.sh status
  82. zookeeper JMX enabled by default
  83. Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  84. Mode: leader

 

Zookeeper3:

  1. [root@xiandian ~]# hostnamectl set-hostname zookeeper3
  2. [root@xiandian ~]# bash
  3. [root@zookeeper3 ~]# vim /etc/hosts
  4. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  5. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  6. 192.168.1.10 zookeeper1
  7. 192.168.1.20 zookeeper2
  8. 192.168.1.30 zookeeper3
  9. [root@zookeeper3 ~]# vim /etc/yum.repos.d/local.repo
  10. [centos]
  11. name=centos
  12. baseurl=file:///opt/centos
  13. enabled=1
  14. gpgcheck=0
  15. [gpmall]
  16. name=gpmall
  17. baseurl=ftp://zookeeper1/gpmall-repo
  18. enabled=1
  19. gpgcheck=0
  20. [root@zookeeper3 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
  21. [root@zookeeper3 ~]# java -version
  22. openjdk version "1.8.0_252"
  23. OpenJDK Runtime Environment (build 1.8.0_252-b09)
  24. OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
  25. # Note: upload zookeeper-3.4.14.tar.gz to all three nodes, or share it over NFS
  26. [root@zookeeper3 ~]# tar -zvxf zookeeper-3.4.14.tar.gz
  27. [root@zookeeper3 ~]# cd zookeeper-3.4.14/conf/
  28. [root@zookeeper3 conf]# mv zoo_sample.cfg zoo.cfg
  29. [root@zookeeper3 conf]# vim zoo.cfg
  30. # The number of milliseconds of each tick
  31. tickTime=2000
  32. # The number of ticks that the initial
  33. # synchronization phase can take
  34. initLimit=10
  35. # The number of ticks that can pass between
  36. # sending a request and getting an acknowledgement
  37. syncLimit=5
  38. # the directory where the snapshot is stored.
  39. # do not use /tmp for storage, /tmp here is just
  40. # example sakes.
  41. dataDir=/tmp/zookeeper
  42. # the port at which the clients will connect
  43. clientPort=2181
  44. # the maximum number of client connections.
  45. # increase this if you need to handle more clients
  46. #maxClientCnxns=60
  47. #
  48. # Be sure to read the maintenance section of the
  49. # administrator guide before turning on autopurge.
  50. #
  51. # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
  52. #
  53. # The number of snapshots to retain in dataDir
  54. #autopurge.snapRetainCount=3
  55. # Purge task interval in hours
  56. # Set to "0" to disable auto purge feature
  57. #autopurge.purgeInterval=1
  58. server.1=192.168.1.10:2888:3888
  59. server.2=192.168.1.20:2888:3888
  60. server.3=192.168.1.30:2888:3888
  61. [root@zookeeper3 conf]# grep -n '^'[a-Z] zoo.cfg
  62. 2:tickTime=2000
  63. 5:initLimit=10
  64. 8:syncLimit=5
  65. 12:dataDir=/tmp/zookeeper
  66. 14:clientPort=2181
  67. 29:server.1=192.168.1.10:2888:3888
  68. 30:server.2=192.168.1.20:2888:3888
  69. 31:server.3=192.168.1.30:2888:3888
  70. [root@zookeeper3 conf]# cd
  71. [root@zookeeper3 ~]# mkdir /tmp/zookeeper
  72. [root@zookeeper3 ~]# vim /tmp/zookeeper/myid
  73. 3
  74. [root@zookeeper3 ~]# cat /tmp/zookeeper/myid
  75. 3
  76. [root@zookeeper3 ~]# cd zookeeper-3.4.14/bin/
  77. [root@zookeeper3 bin]# ./zkServer.sh start
  78. zookeeper JMX enabled by default
  79. Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  80. Starting zookeeper ... STARTED
  81. [root@zookeeper3 bin]# ./zkServer.sh status
  82. zookeeper JMX enabled by default
  83. Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  84. Mode: follower
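The three nodes come up as one leader and two followers only if every zoo.cfg carries the same server.N lines and each node's myid matches one of those N values. Below is a minimal consistency check, sketched with the config inlined as a heredoc; on a real node you would read conf/zoo.cfg and /tmp/zookeeper/myid instead:

```shell
# Sketch: check that a node's myid matches a server.N entry in zoo.cfg.
# The config is inlined here for illustration; on a real node read the files.
cfg=$(cat <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=192.168.1.10:2888:3888
server.2=192.168.1.20:2888:3888
server.3=192.168.1.30:2888:3888
EOF
)
myid=2   # on a real node: myid=$(cat /tmp/zookeeper/myid)

# Collect the declared server IDs (the N in server.N=host:peer:election).
ids=$(printf '%s\n' "$cfg" | sed -n 's/^server\.\([0-9][0-9]*\)=.*/\1/p')
n=$(printf '%s\n' "$ids" | wc -l)

# An ensemble of n servers stays available while a majority survives.
quorum=$(( n / 2 + 1 ))
echo "servers: $n, quorum: $quorum"

case " $(echo $ids) " in
  *" $myid "*) echo "myid $myid is declared in zoo.cfg" ;;
  *)           echo "myid $myid missing from zoo.cfg" >&2; exit 1 ;;
esac
```

Run against the real files on each node before `zkServer.sh start`, this catches a mistyped myid early, before the ensemble fails to elect a leader.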

 

8. Kafka Cluster (40 points)

Using the three provided virtual machines and software packages, install and configure a Kafka cluster. After configuration, in the appropriate directory run ./kafka-topics.sh --create --zookeeper your-IP:2181 --replication-factor 1 --partitions 1 --topic test to create a topic, and submit the command's output as text to the answer box.

  • Upload kafka_2.11-1.1.1.tgz to all three nodes (you can build Kafka on top of the previous task's setup, since Kafka depends on ZooKeeper):

Zookeeper1:

  1. [root@zookeeper1 ~]# tar -zvxf kafka_2.11-1.1.1.tgz
  2. [root@zookeeper1 ~]# cd kafka_2.11-1.1.1/config/
  3. [root@zookeeper1 config]# vim server.properties
  4. # Note: comment out broker.id=0 and zookeeper.connect=localhost:2181 with # (in vim you can search with / followed by the text), then add three new lines:
  5. #broker.id=0
  6. #zookeeper.connect=localhost:2181
  7. broker.id = 1
  8. zookeeper.connect = 192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181
  9. listeners = PLAINTEXT://192.168.1.10:9092
  10. [root@zookeeper1 config]# cd /root/kafka_2.11-1.1.1/bin/
  11. [root@zookeeper1 bin]# ./kafka-server-start.sh -daemon ../config/server.properties
  12. [root@zookeeper1 bin]# jps
  13. 17645 QuorumPeerMain
  14. 18029 Kafka
  15. 18093 Jps
  16. # Note: create the topic (replace the IP below with your own node's IP):
  17. [root@zookeeper1 bin]# ./kafka-topics.sh --create --zookeeper 192.168.1.10:2181 --replication-factor 1 --partitions 1 --topic test
  18. Created topic "test".
  19. # Note: test result:
  20. [root@zookeeper1 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.10:2181
  21. test

 

Zookeeper2:

  1. [root@zookeeper2 ~]# tar -zvxf kafka_2.11-1.1.1.tgz
  2. [root@zookeeper2 ~]# cd kafka_2.11-1.1.1/config/
  3. [root@zookeeper2 config]# vim server.properties
  4. # Note: comment out broker.id=0 and zookeeper.connect=localhost:2181 with # (in vim you can search with / followed by the text), then add three new lines:
  5. #broker.id=0
  6. #zookeeper.connect=localhost:2181
  7. broker.id = 2
  8. zookeeper.connect = 192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181
  9. listeners = PLAINTEXT://192.168.1.20:9092
  10. [root@zookeeper2 config]# cd /root/kafka_2.11-1.1.1/bin/
  11. [root@zookeeper2 bin]# ./kafka-server-start.sh -daemon ../config/server.properties
  12. [root@zookeeper2 bin]# jps
  13. 3573 Kafka
  14. 3605 Jps
  15. 3178 QuorumPeerMain
  16. # Note: test result:
  17. [root@zookeeper2 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.20:2181
  18. test

 

Zookeeper3:

  1. [root@zookeeper3 ~]# tar -zvxf kafka_2.11-1.1.1.tgz
  2. [root@zookeeper3 ~]# cd kafka_2.11-1.1.1/config/
  3. [root@zookeeper3 config]# vim server.properties
  4. # Note: comment out broker.id=0 and zookeeper.connect=localhost:2181 with # (in vim you can search with / followed by the text), then add three new lines:
  5. #broker.id=0
  6. #zookeeper.connect=localhost:2181
  7. broker.id = 3
  8. zookeeper.connect = 192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181
  9. listeners = PLAINTEXT://192.168.1.30:9092
  10. [root@zookeeper3 config]# cd /root/kafka_2.11-1.1.1/bin/
  11. [root@zookeeper3 bin]# ./kafka-server-start.sh -daemon ../config/server.properties
  12. [root@zookeeper3 bin]# jps
  13. 3904 QuorumPeerMain
  14. 4257 Kafka
  15. 4300 Jps
  16. # Note: test result:
  17. [root@zookeeper3 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.30:2181
  18. test
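Across the three brokers, only broker.id and listeners change; zookeeper.connect is the same comma-separated list everywhere. A small sketch that assembles that string from the host list (join_zk_connect is a made-up helper, not a Kafka tool):

```shell
# Sketch: assemble a zookeeper.connect value from a list of hosts.
# join_zk_connect is a hypothetical helper, not part of Kafka.
join_zk_connect() {
    port=$1; shift
    out=""
    for host in "$@"; do
        out="${out:+$out,}${host}:${port}"
    done
    printf '%s\n' "$out"
}

zk=$(join_zk_connect 2181 192.168.1.10 192.168.1.20 192.168.1.30)
echo "zookeeper.connect=$zk"
```

Pointing every broker at all three ZooKeeper addresses means topic operations keep working if a single ZooKeeper node is down.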

 

9. Shopping Mall Application System (40 points)

Continuing with the three virtual machines from the previous task, deploy the clustered application system using the provided software packages. After deployment, log in (in the order, use your school's address as the shipping address and your real contact details as the recipient), and finally use curl to fetch the mall home page. Submit the result of curl http://your-mall-IP/#/home as text to the answer box.

This walkthrough uses the open-source gpmall project; to dig deeper, visit https://gitee.com/mic112/gpmall

  • Upload the required zookeeper, kafka, and gpmall-repo packages to the mall virtual machine (a single node satisfies what the task asks for, so single-node deployment is shown here; a clustered deployment follows the same steps spread across multiple nodes):
  1. [root@mall ~]# vim /etc/hosts
  2. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.1.111 mall
  5. 192.168.1.111 kafka.mall
  6. 192.168.1.111 redis.mall
  7. 192.168.1.111 mysql.mall
  8. 192.168.1.111 zookeeper.mall
  9. [root@mall ~]# vim /etc/yum.repos.d/local.repo
  10. [centos]
  11. name=centos
  12. baseurl=file:///opt/centos
  13. enabled=1
  14. gpgcheck=0
  15. [gpmall]
  16. name=gpmall
  17. baseurl=file:///root/gpmall-repo
  18. enabled=1
  19. gpgcheck=0
  20. [root@mall ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
  21. [root@mall ~]# java -version
  22. openjdk version "1.8.0_252"
  23. OpenJDK Runtime Environment (build 1.8.0_252-b09)
  24. OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
  25. [root@mall ~]# yum install -y redis
  26. [root@mall ~]# yum install -y nginx
  27. [root@mall ~]# yum install -y mariadb mariadb-server
  28. [root@mall ~]# tar -zvxf zookeeper-3.4.14.tar.gz
  29. [root@mall ~]# cd zookeeper-3.4.14/conf
  30. [root@mall conf]# mv zoo_sample.cfg zoo.cfg
  31. [root@mall conf]# cd /root/zookeeper-3.4.14/bin/
  32. [root@mall bin]# ./zkServer.sh start
  33. [root@mall bin]# ./zkServer.sh status
  34. ZooKeeper JMX enabled by default
  35. Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
  36. Mode: standalone
  37. [root@mall bin]# cd
  38. [root@mall ~]# tar -zvxf kafka_2.11-1.1.1.tgz
  39. [root@mall ~]# cd kafka_2.11-1.1.1/bin/
  40. [root@mall bin]# ./kafka-server-start.sh -daemon ../config/server.properties
  41. [root@mall bin]# jps
  42. 7249 Kafka
  43. 17347 Jps
  44. 6927 QuorumPeerMain
  45. [root@mall bin]# cd
  46. [root@mall ~]# vim /etc/my.cnf
  47. # This group is read both both by the client and the server
  48. # use it for options that affect everything
  49. #
  50. [client-server]
  51. #
  52. # include all files from the config directory
  53. #
  54. !includedir /etc/my.cnf.d
  55. [mysqld]
  56. init_connect='SET collation_connection = utf8_unicode_ci'
  57. init_connect='SET NAMES utf8'
  58. character-set-server=utf8
  59. collation-server=utf8_unicode_ci
  60. skip-character-set-client-handshake
  61. [root@mall ~]# systemctl restart mariadb
  62. [root@mall ~]# systemctl enable mariadb
  63. [root@mall ~]# mysqladmin -uroot password 123456
  64. [root@mall ~]# mysql -uroot -p123456
  65. Welcome to the MariaDB monitor. Commands end with ; or \g.
  66. Your MariaDB connection id is 9
  67. Server version: 10.3.18-MariaDB MariaDB Server
  68. Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  69. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  70. MariaDB [(none)]> create database gpmall;
  71. Query OK, 1 row affected (0.002 sec)
  72. MariaDB [(none)]> grant all privileges on *.* to root@localhost identified by '123456' with grant option;
  73. Query OK, 0 rows affected (0.001 sec)
  74. MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '123456' with grant option;
  75. Query OK, 0 rows affected (0.001 sec)
  76. MariaDB [(none)]> use gpmall;
  77. Database changed
  78. MariaDB [gpmall]> source /root/gpmall-xiangmubao-danji/gpmall.sql
  79. MariaDB [gpmall]> Ctrl-C -- exit!
  80. [root@mall ~]# vim /etc/redis.conf
  81. # Note: comment out the bind 127.0.0.1 line, and change protected-mode yes to protected-mode no
  82. #bind 127.0.0.1
  83. protected-mode no
  84. [root@mall ~]# systemctl restart redis
  85. [root@mall ~]# systemctl enable redis
  86. Created symlink from /etc/systemd/system/multi-user.target.wants/redis.service to /usr/lib/systemd/system/redis.service.
  87. [root@mall ~]# rm -rf /usr/share/nginx/html/*
  88. [root@mall ~]# cp -rf gpmall-xiangmubao-danji/dist/* /usr/share/nginx/html/
  89. [root@mall ~]# vim /etc/nginx/conf.d/default.conf
  90. # Note: add three location blocks inside the server block
  91. server {
  92. ...
  93. location /user {
  94. proxy_pass http://127.0.0.1:8082;
  95. }
  96. location /shopping {
  97. proxy_pass http://127.0.0.1:8081;
  98. }
  99. location /cashier {
  100. proxy_pass http://127.0.0.1:8083;
  101. }
  102. ...
  103. }
  104. [root@mall ~]# systemctl restart nginx
  105. [root@mall ~]# systemctl enable nginx
  106. Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
  107. [root@mall ~]# cd gpmall-xiangmubao-danji/
  108. [root@mall gpmall-xiangmubao-danji]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
  109. [1] 3531
  110. [root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’
  111. [root@mall gpmall-xiangmubao-danji]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
  112. [2] 3571
  113. [root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’
  114. [root@mall gpmall-xiangmubao-danji]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
  115. [3] 3639
  116. [root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’
  117. [root@mall gpmall-xiangmubao-danji]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
  118. [4] 3676
  119. [root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’
  120. [root@mall gpmall-xiangmubao-danji]# jobs
  121. [1] Running nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
  122. [2] Running nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
  123. [3]- Running nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
  124. [4]+ Running nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
  125. [root@mall gpmall-xiangmubao-danji]# curl http://192.168.1.111/#/home
  126. <!DOCTYPE html><html><head><meta charset=utf-8><title>1+x-示例项目</title><meta name=keywords content=""><meta name=description content=""><meta http-equiv=X-UA-Compatible content="IE=Edge"><meta name=wap-font-scale content=no><link rel="shortcut icon " type=images/x-icon href=/static/images/favicon.ico><link href=/static/css/app.8d4edd335a61c46bf5b6a63444cd855a.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.2d17a82764acff8145be.js></script><script type=text/javascript src=/static/js/vendor.4f07d3a235c8a7cd4efe.js></script><script type=text/javascript src=/static/js/app.81180cbb92541cdf912f.js></script></body></html><style>body{
  127. min-width:1250px;}</style>
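The four jars register with ZooKeeper asynchronously, so a curl run immediately after the nohup commands can fail even though the deployment is fine. A hedged sketch of a retry wrapper you could put around the check (wait_for is a made-up helper; on the mall host the command would be the curl line from the task):

```shell
# Sketch: retry a command until it succeeds or the attempts run out.
# wait_for is a hypothetical helper, not part of gpmall.
wait_for() {
    attempts=$1; delay=$2; shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then
            echo "ok after $i attempt(s)"
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "gave up after $attempts attempts" >&2
    return 1
}

# Demo: a stand-in command that fails twice, then succeeds, in place of
# something like: curl -fs http://192.168.1.111/ >/dev/null
counter=$(mktemp)
echo 0 > "$counter"
flaky() {
    c=$(cat "$counter"); c=$((c + 1)); echo "$c" > "$counter"
    [ "$c" -ge 3 ]
}
wait_for 5 0 flaky
rc=$?
rm -f "$counter"
```

With a real delay (say, 5 seconds) this gives the Spring Boot services time to come up before declaring the deployment broken.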

 

10. Keystone Management (40 points)

Using the provided all-in-one virtual machine, create a user testuser in Keystone with the password password; once created, view testuser's details. Submit the above commands as text to the answer box.

  1. [root@xiandian~]# source /etc/keystone/admin-openrc.sh
  2. [root@xiandian~]# openstack user create --domain xiandian --password password testuser
  3. +-----------+----------------------------------+
  4. | Field | Value |
  5. +-----------+----------------------------------+
  6. | domain_id | 5a486c51bc8e4dffa4a181f6c54e0938 |
  7. | enabled | True |
  8. | id | ec6d67cdb3ac4b3ca827587c14be0a3e |
  9. | name | testuser |
  10. +-----------+----------------------------------+
  11. [root@xiandian~]# openstack user show testuser
  12. +-----------+----------------------------------+
  13. | Field | Value |
  14. +-----------+----------------------------------+
  15. | domain_id | 639e7d52170d4759b5438e3b29bbf339 |
  16. | enabled | True |
  17. | id | df8ca15f17a8435d8889987b4b78c7a2 |
  18. | name | testuser |
  19. +-----------+----------------------------------+
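The admin-openrc.sh sourced above simply exports the credentials the openstack client reads from the environment. Roughly, it looks like the fragment below; every value here is a placeholder (the real file on the all-in-one node has its own password and controller address, and the domain should match the xiandian domain used in the create command):

```shell
# Placeholder admin-openrc.sh; substitute your own environment's values.
export OS_PROJECT_DOMAIN_NAME=xiandian
export OS_USER_DOMAIN_NAME=xiandian
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://192.168.100.10:5000/v3
export OS_IDENTITY_API_VERSION=3
```

If `openstack user create` fails with an authentication error, checking these variables with `env | grep OS_` is usually the first step.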

 

11. Object Storage Management (40 points)

Using the provided all-in-one virtual machine and openstack commands, create a container named examtest and query it, then upload an aaa.txt file (which you can create yourself) to the container and query it. Submit the commands and their output, in order, as text to the answer box.

  1. [root@xiandian ~]# openstack container create examtest
  2. +---------------------------------------+-----------+------------------------------------+
  3. | account | container | x-trans-id |
  4. +---------------------------------------+-----------+------------------------------------+
  5. | AUTH_0ab2dbde4f754b699e22461426cd0774 | examtest | tx9e7b54f8042d4a6ca5ccf-005a93daf3 |
  6. +---------------------------------------+-----------+------------------------------------+
  7. [root@xiandian ~]# openstack container list
  8. +----------+
  9. | Name |
  10. +----------+
  11. | examtest |
  12. +----------+
  13. [root@xiandian ~]# openstack object create examtest aaa.txt
  14. +---------+-----------+----------------------------------+
  15. | object | container | etag |
  16. +---------+-----------+----------------------------------+
  17. | aaa.txt | examtest | 45226aa24b72ce0ccc4ff73eefe2e26f |
  18. +---------+-----------+----------------------------------+
  19. [root@xiandian ~]# openstack object list examtest
  20. +---------+
  21. | Name |
  22. +---------+
  23. | aaa.txt |
  24. +---------+
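The etag column Swift reports is the MD5 digest of the uploaded object, which gives a quick integrity check. A sketch (the file content is arbitrary here; on the real node you would hash the aaa.txt you actually uploaded and compare it with the etag shown above):

```shell
# Sketch: verify an upload by comparing a local MD5 with Swift's etag.
printf 'hello swift\n' > aaa.txt
local_md5=$(md5sum aaa.txt | awk '{print $1}')
echo "local md5: $local_md5"
# On the real node, fetch the stored etag with:
#   openstack object show examtest aaa.txt -f value -c etag
rm -f aaa.txt
```

If the two digests differ, the upload was corrupted and should be repeated.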

 

12. Glance Management (40 points)

Log in to the all-in-one node and use CRT's file-transfer tool to upload the provided cirros-0.3.4-x86_64-disk.img image to the node's /root directory; use the glance command to upload the image, name it mycirros, and finally view the image's details with glance. Submit all of the above commands and output as text to the answer box.

  1. [root@xiandian ~]# source /etc/keystone/admin-openrc.sh
  2. [root@xiandian ~]# glance image-create --name mycirros --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
  3. [=============================>] 100%
  4. +------------------+--------------------------------------+
  5. | Property | Value |
  6. +------------------+--------------------------------------+
  7. | checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
  8. | container_format | bare |
  9. | created_at | 2019-10-24T10:16:52Z |
  10. | disk_format | qcow2 |
  11. | id | d3663be2-3ebf-443a-b3fc-b3e39bda8783 |
  12. | min_disk | 0 |
  13. | min_ram | 0 |
  14. | name | mycirros |
  15. | owner | 0ab2dbde4f754b699e22461426cd0774 |
  16. | protected | False |
  17. | size | 13287936 |
  18. | status | active |
  19. | tags | [] |
  20. | updated_at | 2019-10-24T10:16:52Z |
  21. | virtual_size | None |
  22. | visibility | private |
  23. +------------------+--------------------------------------+
  24. [root@xiandian ~]# glance image-show d3663be2-3ebf-443a-b3fc-b3e39bda8783
  25. +------------------+--------------------------------------+
  26. | Property | Value |
  27. +------------------+--------------------------------------+
  28. | checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
  29. | container_format | bare |
  30. | created_at | 2019-10-24T10:16:52Z |
  31. | disk_format | qcow2 |
  32. | id | d3663be2-3ebf-443a-b3fc-b3e39bda8783 |
  33. | min_disk | 0 |
  34. | min_ram | 0 |
  35. | name | mycirros |
  36. | owner | 0ab2dbde4f754b699e22461426cd0774 |
  37. | protected | False |
  38. | size | 13287936 |
  39. | status | active |
  40. | tags | [] |
  41. | updated_at | 2019-10-24T10:16:52Z |
  42. | virtual_size | None |
  43. | visibility | private |
  44. +------------------+--------------------------------------+
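Scripts usually need just the image id, not the whole table. One way is to parse the ASCII table with awk; the sketch below runs on a copy of the output above (on the real node you would pipe glance image-show straight into the awk):

```shell
# Sketch: pull the "id" property out of glance's table output.
table=$(cat <<'EOF'
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| id               | d3663be2-3ebf-443a-b3fc-b3e39bda8783 |
| name             | mycirros                             |
| status           | active                               |
+------------------+--------------------------------------+
EOF
)
image_id=$(printf '%s\n' "$table" |
    awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}')
echo "image id: $image_id"
```

Splitting on `|` and matching the property column exactly avoids false hits on properties like min_disk that merely contain "id".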

 

13. Docker Installation (40 points)

Using the provided virtual machine and software packages, configure a YUM repository yourself and install the docker-ce service. After installation, submit the output of docker info as text to the answer box.

  • First upload Docker.tar.gz to the /root directory and unpack it:
  1. [root@xiandian ~]# tar -zvxf Docker.tar.gz
  2. [root@xiandian ~]# vim /etc/yum.repos.d/local.repo
  3. [centos]
  4. name=centos
  5. baseurl=file:///opt/centos
  6. enabled=1
  7. gpgcheck=0
  8. [docker]
  9. name=docker
  10. baseurl=file:///root/Docker
  11. enabled=1
  12. gpgcheck=0
  13. [root@xiandian ~]# iptables -F
  14. [root@xiandian ~]# iptables -X
  15. [root@xiandian ~]# iptables -Z
  16. [root@xiandian ~]# iptables-save
  17. # Generated by iptables-save v1.4.21 on Fri May 15 02:00:29 2020
  18. *filter
  19. :INPUT ACCEPT [20:1320]
  20. :FORWARD ACCEPT [0:0]
  21. :OUTPUT ACCEPT [11:1092]
  22. COMMIT
  23. # Completed on Fri May 15 02:00:29 2020
  24. [root@xiandian ~]# vim /etc/selinux/config
  25. SELINUX=disabled
  26. # Note: disable the swap partition:
  27. [root@xiandian ~]# vim /etc/fstab
  28. #/dev/mapper/centos-swap swap swap defaults 0 0
  29. [root@xiandian ~]# free -m
  30. total used free shared buff/cache available
  31. Mem: 1824 95 1591 8 138 1589
  32. Swap: 0 0 0
  33. # Note: before configuring IP forwarding, upgrade the system and reboot, otherwise two of the sysctl rules may fail:
  34. [root@xiandian ~]# yum upgrade -y
  35. [root@xiandian ~]# reboot
  36. [root@xiandian ~]# vim /etc/sysctl.conf
  37. net.ipv4.ip_forward = 1
  38. net.bridge.bridge-nf-call-ip6tables = 1
  39. net.bridge.bridge-nf-call-iptables = 1
  40. [root@xiandian ~]# modprobe br_netfilter
  41. [root@xiandian ~]# sysctl -p
  42. net.ipv4.ip_forward = 1
  43. net.bridge.bridge-nf-call-ip6tables = 1
  44. net.bridge.bridge-nf-call-iptables = 1
  45. [root@xiandian ~]# yum install -y yum-utils device-mapper-persistent-data
  46. [root@xiandian ~]# yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
  47. [root@xiandian ~]# systemctl daemon-reload
  48. [root@xiandian ~]# systemctl restart docker
  49. [root@xiandian ~]# systemctl enable docker
  50. Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
  51. [root@xiandian ~]# docker info
  52. Containers: 0
  53. Running: 0
  54. Paused: 0
  55. Stopped: 0
  56. Images: 0
  57. Server Version: 18.09.6
  58. Storage Driver: devicemapper
  59. Pool Name: docker-253:0-100765090-pool
  60. Pool Blocksize: 65.54kB
  61. Base Device Size: 10.74GB
  62. Backing Filesystem: xfs
  63. Udev Sync Supported: true
  64. Data file: /dev/loop0
  65. Metadata file: /dev/loop1
  66. Data loop file: /var/lib/docker/devicemapper/devicemapper/data
  67. Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
  68. Data Space Used: 11.73MB
  69. Data Space Total: 107.4GB
  70. Data Space Available: 24.34GB
  71. Metadata Space Used: 17.36MB
  72. Metadata Space Total: 2.147GB
  73. Metadata Space Available: 2.13GB
  74. Thin Pool Minimum Free Space: 10.74GB
  75. Deferred Removal Enabled: true
  76. Deferred Deletion Enabled: true
  77. Deferred Deleted Device Count: 0
  78. Library Version: 1.02.164-RHEL7 (2019-08-27)
  79. Logging Driver: json-file
  80. Cgroup Driver: cgroupfs
  81. Plugins:
  82. Volume: local
  83. Network: bridge host macvlan null overlay
  84. Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
  85. Swarm: inactive
  86. Runtimes: runc
  87. Default Runtime: runc
  88. Init Binary: docker-init
  89. containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
  90. runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
  91. init version: fec3683
  92. Security Options:
  93. seccomp
  94. Profile: default
  95. Kernel Version: 3.10.0-1127.8.2.el7.x86_64
  96. Operating System: CentOS Linux 7 (Core)
  97. OSType: linux
  98. Architecture: x86_64
  99. CPUs: 2
  100. Total Memory: 1.777GiB
  101. Name: xiandian
  102. ID: OUR6:6ERV:3UCH:WJCM:TDLL:5ATV:E7IQ:HLAR:JKQB:OBK2:HZ7G:JC3Q
  103. Docker Root Dir: /var/lib/docker
  104. Debug Mode (client): false
  105. Debug Mode (server): false
  106. Registry: https://index.docker.io/v1/
  107. Labels:
  108. Experimental: false
  109. Insecure Registries:
  110. 127.0.0.0/8
  111. Live Restore Enabled: false
  112. Product License: Community Engine
  113. WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
  114. WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
  115. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
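If this daemon will later pull from or push to a private registry with a self-signed certificate (as the Harbor task that follows sets up), /etc/docker/daemon.json must list it. A hedged sketch; the registry address is a placeholder, and the sketch writes to a temporary copy so it has no side effects — on a real host you would write /etc/docker/daemon.json itself and restart docker:

```shell
# Sketch: a daemon.json trusting a (placeholder) private registry.
# Written to a temp file here; the real path is /etc/docker/daemon.json.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "insecure-registries": ["192.168.1.111"]
}
EOF
daemon_json=$(cat "$conf")
echo "$daemon_json"
rm -f "$conf"
# On a real host, apply with: systemctl restart docker
```

Without this (or without importing the registry's CA certificate), docker login and docker push to the registry fail with an x509 certificate error.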

 

14. Docker Harbor Installation (40 points)

Using the provided virtual machine and software packages, deploy the Docker Harbor image registry service. After installation, submit the [Step 4] section of the output of ./install.sh --with-notary --with-clair as text to the answer box.
Unpacking Docker.tar.gz during the earlier docker-ce installation produced an image.sh script (a script that automatically loads the images and pushes them to the local registry):

  1. [root@zookeeper1 ~]# ./image.sh
  2. [root@zookeeper1 ~]# mkdir -p /data/ssl
  3. [root@zookeeper1 ~]# cd /data/ssl/
  4. [root@zookeeper1 ssl]# which openssl
  5. /usr/bin/openssl
  6. [root@zookeeper1 ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -days 2235 -out ca.crt
  7. Generating a 4096 bit RSA private key
  8. ...................................................................................................................++
  9. ............................................................................................................................................++
  10. writing new private key to 'ca.key'
  11. -----
  12. You are about to be asked to enter information that will be incorporated
  13. into your certificate request.
  14. What you are about to enter is what is called a Distinguished Name or a DN.
  15. There are quite a few fields but you can leave some blank
  16. For some fields there will be a default value,
  17. If you enter '.', the field will be left blank.
  18. -----
  19. Country Name (2 letter code) [XX]:CN # country
  20. State or Province Name (full name) []:Guangxi # state or province
  21. Locality Name (eg, city) [Default City]:Liuzhou # city
  22. Organization Name (eg, company) [Default Company Ltd]:lztd # company name
  23. Organizational Unit Name (eg, section) []:xxjsxy # organizational unit
  24. Common Name (eg, your name or your server's hostname) []: # server hostname / domain (left blank here)
  25. Email Address []: # email address (left blank here)
  26. [root@zookeeper1 ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout www.yidaoyun.com.key -out www.yidaoyun.com.csr
  27. Generating a 4096 bit RSA private key
  28. .........................................................++
  29. ......++
  30. writing new private key to 'www.yidaoyun.com.key'
  31. -----
  32. You are about to be asked to enter information that will be incorporated
  33. into your certificate request.
  34. What you are about to enter is what is called a Distinguished Name or a DN.
  35. There are quite a few fields but you can leave some blank
  36. For some fields there will be a default value,
  37. If you enter '.', the field will be left blank.
  38. -----
  39. Country Name (2 letter code) [XX]:CN # country
  40. State or Province Name (full name) []:Guangxi # state or province
  41. Locality Name (eg, city) [Default City]:Liuzhou # city
  42. Organization Name (eg, company) [Default Company Ltd]:lztd # company name
  43. Organizational Unit Name (eg, section) []:xxjsxy # organizational unit
  44. Common Name (eg, your name or your server's hostname) []:www.yidaoyun.com # server hostname / domain
  45. Email Address []: # email address (left blank here)
  46. Please enter the following 'extra' attributes
  47. to be sent with your certificate request
  48. A challenge password []:
  49. An optional company name []:
  50. [root@zookeeper1 ssl]# openssl x509 -req -days 2235 -in www.yidaoyun.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out www.yidaoyun.com.crt
  51. Signature ok
  52. subject=/C=CN/ST=Guangxi/L=Liuzhou/O=lztd/OU=xxjsxy/CN=www.yidaoyun.com
  53. Getting CA Private Key
  54. [root@zookeeper1 ssl]# cp -rvf www.yidaoyun.com.crt /etc/pki/ca-trust/source/anchors/
  55. ‘www.yidaoyun.com.crt’ -> ‘/etc/pki/ca-trust/source/anchors/www.yidaoyun.com.crt’
  56. [root@zookeeper1 ssl]# update-ca-trust enable
  57. [root@zookeeper1 ssl]# update-ca-trust extract
  58. # Note: upload docker-compose-Linux-x86_64.64 and rename it to docker-compose:
  59. [root@zookeeper1 ~]# mv docker-compose-Linux-x86_64.64 /usr/local/bin/
  60. [root@zookeeper1 ~]# mv /usr/local/bin/docker-compose-Linux-x86_64.64 /usr/local/bin/docker-compose
  61. [root@zookeeper1 ~]# chmod +x /usr/local/bin/docker-compose
  62. [root@zookeeper1 ~]# docker-compose --version
  63. docker-compose version 1.26.0-rc4, build d279b7a8
  64. [root@zookeeper1 opt]# tar -zvxf harbor-offline-installer-v1.5.3.tgz -C /opt/
  65. [root@zookeeper1 opt]# ll
  66. total 1097260
  67. drwxr-xr-x. 8 root root 4096 May 14 08:03 centos
  68. drwx--x--x 4 root root 26 May 19 23:16 containerd
  69. -rw-r--r--. 1 root root 1123583789 May 15 04:26 Docker.tar.gz
  70. drwxr-xr-x 4 root root 4096 May 20 03:55 harbor
  71. [root@zookeeper1 opt]# cd harbor/
  72. [root@zookeeper1 harbor]# ll
  73. total 895708
  74. drwxr-xr-x 3 root root 22 May 20 03:55 common
  75. -rw-r--r-- 1 root root 1185 Sep 12 2018 docker-compose.clair.yml
  76. -rw-r--r-- 1 root root 1725 Sep 12 2018 docker-compose.notary.yml
  77. -rw-r--r-- 1 root root 3596 Sep 12 2018 docker-compose.yml
  78. drwxr-xr-x 3 root root 150 Sep 12 2018 ha
  79. -rw-r--r-- 1 root root 6956 Sep 12 2018 harbor.cfg
  80. -rw-r--r-- 1 root root 915878468 Sep 12 2018 harbor.v1.5.3.tar.gz
  81. -rwxr-xr-x 1 root root 5773 Sep 12 2018 install.sh
  82. -rw-r--r-- 1 root root 10764 Sep 12 2018 LICENSE
  83. -rw-r--r-- 1 root root 482 Sep 12 2018 NOTICE
  84. -rw-r--r-- 1 root root 1247461 Sep 12 2018 open_source_license
  85. -rwxr-xr-x 1 root root 27840 Sep 12 2018 prepare
  86. [root@zookeeper1 harbor]# vim harbor.cfg
  87. # Note: edit the configuration file as follows:
  88. hostname = 192.168.1.111
  89. ui_url_protocol = https
  90. ssl_cert = /data/ssl/www.yidaoyun.com.crt
  91. ssl_cert_key = /data/ssl/www.yidaoyun.com.key
  92. harbor_admin_password = 000000
  93. [root@zookeeper1 harbor]# ./prepare
  94. Generated and saved secret to file: /data/secretkey
  95. Generated configuration file: ./common/config/nginx/nginx.conf
  96. Generated configuration file: ./common/config/adminserver/env
  97. Generated configuration file: ./common/config/ui/env
  98. Generated configuration file: ./common/config/registry/config.yml
  99. Generated configuration file: ./common/config/db/env
  100. Generated configuration file: ./common/config/jobservice/env
  101. Generated configuration file: ./common/config/jobservice/config.yml
  102. Generated configuration file: ./common/config/log/logrotate.conf
  103. Generated configuration file: ./common/config/jobservice/config.yml
  104. Generated configuration file: ./common/config/ui/app.conf
  105. Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
  106. The configuration files are ready, please use docker-compose to start the service.
  107. [root@zookeeper1 harbor]# ./install.sh --with-notary --with-clair
  108. [Step 0]: checking installation environment ...
  109. Note: docker version: 18.09.6
  110. Note: docker-compose version: 1.26.0
  111. [Step 1]: loading Harbor images ...
  112. dba693fc2701: Loading layer 133.4MB/133.4MB
  113. 5773887c4c41: Loading layer 30.09MB/30.09MB
  114. 6fc2abbcae42: Loading layer 15.37MB/15.37MB
  115. d85f176a11ec: Loading layer 15.37MB/15.37MB
  116. Loaded image: vmware/harbor-adminserver:v1.5.3
  117. 462b14c85230: Loading layer 410.1MB/410.1MB
  118. c2e0c8cb2903: Loading layer 9.216kB/9.216kB
  119. 11bdb24cded2: Loading layer 9.216kB/9.216kB
  120. 5d8f974b49ef: Loading layer 7.68kB/7.68kB
  121. ee04f13f4147: Loading layer 1.536kB/1.536kB
  122. 799db4dfe41a: Loading layer 11.78kB/11.78kB
  123. f7d813585bdd: Loading layer 2.56kB/2.56kB
  124. 6300bbdbd7ab: Loading layer 3.072kB/3.072kB
  125. Loaded image: vmware/harbor-db:v1.5.3
  126. 1d7516778a05: Loading layer 30.09MB/30.09MB
  127. f7ec8d1b47d0: Loading layer 20.91MB/20.91MB
  128. 22b0ad749c21: Loading layer 20.91MB/20.91MB
  129. Loaded image: vmware/harbor-jobservice:v1.5.3
  130. 2d449d67c05a: Loading layer 89.58MB/89.58MB
  131. 0bfd4e706575: Loading layer 3.072kB/3.072kB
  132. 6100e173c230: Loading layer 59.9kB/59.9kB
  133. 86fe093d1358: Loading layer 61.95kB/61.95kB
  134. Loaded image: vmware/redis-photon:v1.5.3
  135. Loaded image: photon:1.0
  136. 3bf3086a6569: Loading layer 30.09MB/30.09MB
  137. 641d0f77d675: Loading layer 10.95MB/10.95MB
  138. 89efbaabea87: Loading layer 17.3MB/17.3MB
  139. 1276e51f4dc2: Loading layer 15.87kB/15.87kB
  140. 49e187d04e78: Loading layer 3.072kB/3.072kB
  141. e62fbfea411d: Loading layer 28.24MB/28.24MB
  142. Loaded image: vmware/notary-signer-photon:v0.5.1-v1.5.3
  143. Loaded image: vmware/mariadb-photon:v1.5.3
  144. 201f6ade61d8: Loading layer 102.5MB/102.5MB
  145. 81221fbb5879: Loading layer 6.656kB/6.656kB
  146. 2268e3c9e521: Loading layer 2.048kB/2.048kB
  147. 9fca06f4b193: Loading layer 7.68kB/7.68kB
  148. Loaded image: vmware/postgresql-photon:v1.5.3
  149. 11d6e8a232c9: Loading layer 30.09MB/30.09MB
  150. 42650b04d53d: Loading layer 24.41MB/24.41MB
  151. a1cd8af19e29: Loading layer 7.168kB/7.168kB
  152. 4b1cda90ba19: Loading layer 10.56MB/10.56MB
  153. 1351f0f3006a: Loading layer 24.4MB/24.4MB
  154. Loaded image: vmware/harbor-ui:v1.5.3
  155. e335f4c3af7d: Loading layer 79.93MB/79.93MB
  156. 2aea487bc2c4: Loading layer 3.584kB/3.584kB
  157. d2efec3de68b: Loading layer 3.072kB/3.072kB
  158. d0d71a5ce1dd: Loading layer 4.096kB/4.096kB
  159. 19930367abf0: Loading layer 3.584kB/3.584kB
  160. 03e5b7640db5: Loading layer 9.728kB/9.728kB
  161. Loaded image: vmware/harbor-log:v1.5.3
  162. 5aebe8cc938c: Loading layer 11.97MB/11.97MB
  163. Loaded image: vmware/nginx-photon:v1.5.3
  164. ede6a57cbd7e: Loading layer 30.09MB/30.09MB
  165. 4d6dd4fc1d87: Loading layer 2.56kB/2.56kB
  166. c86a69f49f60: Loading layer 2.56kB/2.56kB
  167. 0cf6e04c5927: Loading layer 2.048kB/2.048kB
  168. 6fbff4fe9739: Loading layer 22.8MB/22.8MB
  169. 6f527a618092: Loading layer 22.8MB/22.8MB
  170. Loaded image: vmware/registry-photon:v2.6.2-v1.5.3
  171. e29a8834501b: Loading layer 12.16MB/12.16MB
  172. aaf67f1da2c7: Loading layer 17.3MB/17.3MB
  173. 8d5718232133: Loading layer 15.87kB/15.87kB
  174. fc89aca1dd12: Loading layer 3.072kB/3.072kB
  175. 076eb5a76f6d: Loading layer 29.46MB/29.46MB
  176. Loaded image: vmware/notary-server-photon:v0.5.1-v1.5.3
  177. 454c81edbd3b: Loading layer 135.2MB/135.2MB
  178. e99db1275091: Loading layer 395.4MB/395.4MB
  179. 051e4ee23882: Loading layer 9.216kB/9.216kB
  180. 6cca4437b6f6: Loading layer 9.216kB/9.216kB
  181. 1d48fc08c8bc: Loading layer 7.68kB/7.68kB
  182. 0419724fd942: Loading layer 1.536kB/1.536kB
  183. 543c0c1ee18d: Loading layer 655.2MB/655.2MB
  184. 4190aa7e89b8: Loading layer 103.9kB/103.9kB
  185. Loaded image: vmware/harbor-migrator:v1.5.0
  186. 45878c64fc3c: Loading layer 165.3MB/165.3MB
  187. fc3d407ce98f: Loading layer 10.93MB/10.93MB
  188. d7a0785bb902: Loading layer 2.048kB/2.048kB
  189. a17e0f23bc84: Loading layer 48.13kB/48.13kB
  190. 57c7181f2336: Loading layer 10.97MB/10.97MB
  191. Loaded image: vmware/clair-photon:v2.0.5-v1.5.3
  192. [Step 2]: preparing environment ...
  193. Clearing the configuration file: ./common/config/adminserver/env
  194. Clearing the configuration file: ./common/config/ui/env
  195. Clearing the configuration file: ./common/config/ui/app.conf
  196. Clearing the configuration file: ./common/config/ui/private_key.pem
  197. Clearing the configuration file: ./common/config/db/env
  198. Clearing the configuration file: ./common/config/jobservice/env
  199. Clearing the configuration file: ./common/config/jobservice/config.yml
  200. Clearing the configuration file: ./common/config/registry/config.yml
  201. Clearing the configuration file: ./common/config/registry/root.crt
  202. Clearing the configuration file: ./common/config/nginx/cert/www.yidaoyun.com.crt
  203. Clearing the configuration file: ./common/config/nginx/cert/www.yidaoyun.com.key
  204. Clearing the configuration file: ./common/config/nginx/nginx.conf
  205. Clearing the configuration file: ./common/config/log/logrotate.conf
  206. loaded secret from file: /data/secretkey
  207. Generated configuration file: ./common/config/nginx/nginx.conf
  208. Generated configuration file: ./common/config/adminserver/env
  209. Generated configuration file: ./common/config/ui/env
  210. Generated configuration file: ./common/config/registry/config.yml
  211. Generated configuration file: ./common/config/db/env
  212. Generated configuration file: ./common/config/jobservice/env
  213. Generated configuration file: ./common/config/jobservice/config.yml
  214. Generated configuration file: ./common/config/log/logrotate.conf
  215. Generated configuration file: ./common/config/jobservice/config.yml
  216. Generated configuration file: ./common/config/ui/app.conf
  217. Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
  218. Copying sql file for notary DB
  219. Generated certificate, key file: ./cert_tmp/notary-signer-ca.key, cert file: ./cert_tmp/notary-signer-ca.crt
  220. Generated certificate, key file: ./cert_tmp/notary-signer.key, cert file: ./cert_tmp/notary-signer.crt
  221. Copying certs for notary signer
  222. Copying notary signer configuration file
  223. Generated configuration file: ./common/config/notary/server-config.json
  224. Copying nginx configuration file for notary
  225. Generated configuration file: ./common/config/nginx/conf.d/notary.server.conf
  226. Generated and saved secret to file: /data/defaultalias
  227. Generated configuration file: ./common/config/notary/signer_env
  228. Generated configuration file: ./common/config/clair/postgres_env
  229. Generated configuration file: ./common/config/clair/config.yaml
  230. Generated configuration file: ./common/config/clair/clair_env
  231. The configuration files are ready, please use docker-compose to start the service.
  232. [Step 3]: checking existing instance of Harbor ...
  233. [Step 4]: starting Harbor ...
  234. Creating network "harbor_harbor" with the default driver
  235. Creating network "harbor_harbor-clair" with the default driver
  236. Creating network "harbor_harbor-notary" with the default driver
  237. Creating network "harbor_notary-mdb" with the default driver
  238. Creating network "harbor_notary-sig" with the default driver
  239. Creating harbor-log ... done
  240. Creating redis ... done
  241. Creating clair-db ... done
  242. Creating notary-db ... done
  243. Creating harbor-db ... done
  244. Creating registry ... done
  245. Creating harbor-adminserver ... done
  246. Creating notary-signer ... done
  247. Creating clair ... done
  248. Creating harbor-ui ... done
  249. Creating notary-server ... done
  250. Creating nginx ... done
  251. Creating harbor-jobservice ... done
  252. ✔ ----Harbor has been installed and started successfully.----
  253. Now you should be able to visit the admin portal at https://192.168.1.111.
  254. For more details, please visit https://github.com/vmware/harbor .
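The transcript above edits five keys in harbor.cfg (hostname, ui_url_protocol, ssl_cert, ssl_cert_key, harbor_admin_password) before running ./prepare. A quick sanity check like the following can catch a missing key before the installer runs. This is a minimal sketch: the helper name `check_harbor_cfg` and the assumption that exactly these five keys must be present are mine, not part of the exam material.

```shell
#!/bin/sh
# check_harbor_cfg: verify that a harbor.cfg file defines every key the
# transcript edits. Helper name and key list are illustrative assumptions.
check_harbor_cfg() {
    cfg="$1"
    for key in hostname ui_url_protocol ssl_cert ssl_cert_key harbor_admin_password; do
        # each key must start a line, e.g. "hostname = 192.168.1.111"
        if ! grep -q "^${key}[[:space:]]*=" "$cfg"; then
            echo "missing key: ${key}"
            return 1
        fi
    done
    echo "harbor.cfg looks complete"
}
```

Run it as `check_harbor_cfg /path/to/harbor/harbor.cfg` before invoking `./prepare`; a non-zero return names the first missing key.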

 

15. Shell Script Completion (40 points)

Below is a script whose purpose is to configure the nginx service automatically. Due to an engineer's mistake, some of the code in the script was deleted, but the comments remain. Fill in the code according to the comments, then submit the filled-in code, in order, as text in the answer box.

nginx(){
cd
# Remove the files under the default project path
rm -rf /usr/share/nginx/html/*
# Copy the provided dist static files to the nginx project directory
cp -rvf /root/dist/* /usr/share/nginx/html
# Modify the nginx configuration file as follows
cat > /etc/nginx/conf.d/default.conf << EOF
server {
    listen 80;
    server_name localhost;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /user {
        (fill in here)
    }
    location /shopping {
        proxy_pass http://127.0.0.1:8081;
    }
    location /cashier {
        proxy_pass http://127.0.0.1:8083;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
EOF
# Start the nginx service
systemctl start nginx
# Enable nginx to start on boot
(fill in here)
# Check whether the nginx service started successfully
if [ $? -eq 0 ]
then
    sleep 3
    echo -e "\033[36m==========nginx started successfully==========\033[0m"
else
    echo -e "\033[31mnginx failed to start, please check\033[0m"
    exit 1
fi
sleep 2
}

  1. proxy_pass http://127.0.0.1:8082;
  2. systemctl enable nginx
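The first answer follows the same pattern as the /shopping and /cashier blocks already in the script: each front-end path is reverse-proxied to a different backend port on 127.0.0.1. The sketch below generates such a location block; the helper name `gen_location` and the wrapper function are illustrative assumptions, not part of the exam script.

```shell
#!/bin/sh
# gen_location: print an nginx location block that reverse-proxies a URL
# path to a backend port on 127.0.0.1. Helper name is an assumption.
gen_location() {
    path="$1"
    port="$2"
    cat << EOF
location ${path} {
    proxy_pass http://127.0.0.1:${port};
}
EOF
}

# The missing /user block from the exam script:
gen_location /user 8082
```

The second answer, `systemctl enable nginx`, makes the unit start at boot, complementing the one-shot `systemctl start nginx` earlier in the script.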
