
[Cloud-Native Series 18]: Containers - Docker Swarm, Docker's built-in cluster management tool: a detailed walkthrough of building a cluster service by hand

Author's homepage (文火冰糖的硅基工坊): 文火冰糖(王文兵)的博客_文火冰糖的硅基工坊_CSDN博客

Article URL: https://blog.csdn.net/HiWangWenBing/article/details/122743643


Contents

Preparation: cluster planning

Step 1: Provision the cloud servers

1.1 Creating the servers

1.2 Post-install checks

Step 2: Set up the Docker environment (manual steps on the cloud platform)

2.1 Install Docker on each virtual server

2.2 Start Docker on each virtual server

Step 3: Assign cluster roles (swarm)

3.1 Set up the leader manager (swarm init)

3.2 Set up an ordinary manager (swarm join as manager)

3.3 Set up the workers (swarm join as worker)

3.4 Inspect the cluster nodes

Step 4: Deploy a cluster service (docker service), run the commands on a manager node

4.1 The docker service command

4.2 Create a service

4.3 Inspect the service

4.4 Verify the service

4.5 Scale the service up

4.6 Scale the service down

4.7 Rolling upgrade of the service image: service update

4.8 Stop a node from accepting service tasks


Preparation: cluster planning

Step 1: Provision the cloud servers

1.1 Creating the servers

(1) Servers

  • Create 5 servers in one batch, i.e. instance count = 5
  • All 5 servers on the same default network
  • 1 core, 2 GB RAM
  • Standard shared instance type
  • CentOS as the operating system

(2) Network

  • Assign public IP addresses
  • Default VPC (private network)
  • Pay-by-traffic billing, 1 Mbps bandwidth

(3) Default security group

  • Set the username and password for remote login over the public network
  • Open port 80 for the later nginx service test.

 

1.2 Post-install checks

(1) Log in to each server via its public IP

(2) Assign a role to each server

leader manager: 172.24.130.164

normal manager: 172.24.130.165

worker1: 172.24.130.166

worker2: 172.24.130.167

worker3: 172.24.130.168

(3) Install the ifconfig and ping tools

$ yum install -y yum-utils
$ yum install -y iputils
$ yum install -y net-tools.x86_64

(4) Confirm that the servers can reach one another over their private IPs
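A quick way to check this is a small loop run from any one of the nodes. This is a minimal sketch; the addresses are the private IPs from the role plan in (2) above, so adjust them if your plan differs:

```shell
# Ping every node in the plan twice; -W 2 caps each wait at 2 seconds.
for ip in 172.24.130.164 172.24.130.165 172.24.130.166 172.24.130.167 172.24.130.168; do
    if ping -c 2 -W 2 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip UNREACHABLE"
    fi
done
```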

Step 2: Set up the Docker environment (manual steps on the cloud platform)

CentOS Docker 安装 | 菜鸟教程

How nodes work | Docker Documentation

2.1 Install Docker on each virtual server

(1) SSH into each virtual server.

(2) One-step Docker install:

# Add the Aliyun mirror repo
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

2.2 Start Docker on each virtual server

$ systemctl start docker
$ docker version
$ docker ps
$ docker images

At this point docker info confirms the engine is running, and that swarm mode is still inactive:

[root@Test ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-348.2.1.el8_5.x86_64
 Operating System: CentOS Linux 8
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 759.7MiB
 Name: Test
 ID: QOVF:RC73:VI4W:TAUR:3NIE:GSAW:6HMY:L2SM:LRWF:DYIZ:5BJT:SQTB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Step 3: Assign cluster roles (swarm)

3.1 Set up the leader manager (swarm init)

(1) Command

$ docker swarm init --advertise-addr 172.24.130.164   # the private IP assigned to this machine at creation

(2) Output

[root@Test ~]# docker swarm init --advertise-addr 172.24.130.164
Swarm initialized: current node (tkox6q8o48l7b2aofzygzvjow) is now a manager.

How to add a worker:

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-23znzigz1ycuuy1hmp77a4nxp2w8ffvri3zwelc8fx4aeo2xr6-bpda5otyc47k1iz064353adwj 172.24.130.164:2377

How to add a manager:

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

This output confirms that initialization succeeded. Copy the docker swarm join line above; it is needed when adding the worker nodes.

The --token value identifies and authenticates the cluster; a complete join command combines that token with the leader manager's IP address and port (2377).

(3) docker swarm usage

[root@Test ~]# docker swarm

Usage:  docker swarm COMMAND

Manage Swarm

Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm
  join        Join a swarm as a node and/or manager
  join-token  Manage join tokens
  leave       Leave the swarm
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm

Run 'docker swarm COMMAND --help' for more information on a command.
 

3.2 Set up an ordinary manager (swarm join as manager)

(1) On the leader manager, run the following command to get the join command for adding a node as a manager:

$ docker swarm join-token manager

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-23znzigz1ycuuy1hmp77a4nxp2w8ffvri3zwelc8fx4aeo2xr6-262vije87ut842ejp8cmfdwqn 172.24.130.164:2377

(2) Log in to manager-2 and run the command from the output above:

docker swarm join --token SWMTKN-1-23znzigz1ycuuy1hmp77a4nxp2w8ffvri3zwelc8fx4aeo2xr6-262vije87ut842ejp8cmfdwqn 172.24.130.164:2377

This node joined a swarm as a manager.
 

3.3 Set up the workers (swarm join as worker)

(1) On the leader manager node, run:

$ docker swarm join-token worker

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-23znzigz1ycuuy1hmp77a4nxp2w8ffvri3zwelc8fx4aeo2xr6-bpda5otyc47k1iz064353adwj 172.24.130.164:2377
 

(2) On each worker machine, run:

[root@Test ~]# docker swarm join --token SWMTKN-1-23znzigz1ycuuy1hmp77a4nxp2w8ffvri3zwelc8fx4aeo2xr6-bpda5otyc47k1iz064353adwj 172.24.130.164:2377

This node joined a swarm as a worker.

3.4 Inspect the cluster nodes

(1) The docker node command

[root@Test ~]# docker node

Usage:  docker node COMMAND

Manage Swarm nodes

Commands:
  demote      Demote one or more nodes from manager in the swarm
  inspect     Display detailed information on one or more nodes
  ls          List nodes in the swarm
  promote     Promote one or more nodes to manager in the swarm
  ps          List tasks running on one or more nodes, defaults to current node
  rm          Remove one or more nodes from the swarm
  update      Update a node

As the command list shows, rm removes nodes from the swarm cluster, and ls lists all the nodes currently in it.

(2) List all the nodes in the cluster

[root@Test ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
notb2otq9cd6ie5tcs99ozoam     Test       Ready    Active                          20.10.12
s59g2ygqm8lbrngpxhyt3zzg3     Test       Ready    Active                          20.10.12
stea3vvu3hj9x619yb64e5kjo     Test       Ready    Active                          20.10.12
tkox6q8o48l7b2aofzygzvjow     Test       Ready    Active         Leader           20.10.12
zclgojyadj75g4qtx5bmyj9kk *   Test       Ready    Active         Reachable        20.10.12

The output shows that all five nodes have joined the swarm in their planned roles; services can now be deployed to the cluster with docker service.
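For a closer look at any single node in the listing, docker node inspect prints its role, availability, and resources. A small sketch, using the leader's ID from the listing above (substitute your own IDs):

```shell
# Human-readable summary of one node (role, availability, resources).
docker node inspect --pretty tkox6q8o48l7b2aofzygzvjow

# Roles can also be changed after the fact, e.g. turn a worker into a manager:
# docker node promote <NODE-ID>
```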

Step 4: Deploy a cluster service (docker service), run the commands on a manager node

4.1 The docker service command

[root@Test ~]# docker service

Usage:  docker service COMMAND

Manage services

Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service

4.2 Create a service

Create a service named my_nginx with two replicas; the swarm scheduler assigns its tasks to worker nodes automatically:

# List the services that already exist
$ docker service ls
# Remove a previously deployed service, if any
$ docker service rm <SERVICE>
# Deploy the new service
$ docker service create --replicas 2 --name my_nginx -p 80:80 nginx

[root@Test ~]# docker service create --replicas 2 --name my_nginx -p 80:80 nginx
d326pd2c5mmiosc9m5lwbdp2f
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged

4.3 Inspect the service

(1) List all services

[root@Test ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
d326pd2c5mmi   my_nginx   replicated   2/2        nginx:latest   *:80->80/tcp

There is one service, my_nginx, with two replicas, published on port 80.

(2) On a manager node, inspect the tasks of a specific service

[root@Test ~]# docker service ps my_nginx
ID             NAME         IMAGE          NODE   DESIRED STATE   CURRENT STATE           ERROR   PORTS
k9n4h3u0fprx   my_nginx.1   nginx:latest   Test   Running         Running 6 minutes ago
7jainolzyb20   my_nginx.2   nginx:latest   Test   Running         Running 6 minutes ago

(3) On each server, check which tasks were placed on it

Pick one of the servers:

[root@Test ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS    NAMES
3b7a1a984b42   nginx:latest   "/docker-entrypoint.…"   5 minutes ago   Up 5 minutes   80/tcp   my_nginx.7.lctwil7sohn4l0h62vbvgagju

4.4 Verify the service

Access the public address of each of the five servers over HTTP:

http://116.62.229.233/
http://120.26.75.192/
http://120.27.251.144/
http://121.199.64.187/
http://120.27.193.74/

Something remarkable happens: although only two nginx replicas were deployed, the same nginx service is reachable through every node's address in the cluster.

This is the advantage of a cluster service: no matter how many replicas are deployed or where they land, the service is reachable through any node's address on the published port, requests are automatically load-balanced across the worker tasks, and the nodes' addresses act as a single pool of entry points.

In other words, the five public IPs are shared by the cluster, and so is every service deployed on it. Docker swarm implements all of this.
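To confirm this routing behavior from the command line rather than a browser, curl each public IP and check for HTTP 200. A sketch, using the five public addresses listed in 4.4:

```shell
# Every node should answer 200, even the ones not running an nginx task.
for ip in 116.62.229.233 120.26.75.192 120.27.251.144 121.199.64.187 120.27.193.74; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "http://$ip/")
    echo "$ip -> HTTP $code"
done
```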

4.5 Scale the service up

Replicas can be added elastically, with a single quick command, to match actual user traffic:

[root@Test ~]# docker service scale my_nginx=8
my_nginx scaled to 8
overall progress: 8 out of 8 tasks
1/8: running   [==================================================>]
2/8: running   [==================================================>]
3/8: running   [==================================================>]
4/8: running   [==================================================>]
5/8: running   [==================================================>]
6/8: running   [==================================================>]
7/8: running   [==================================================>]
8/8: running   [==================================================>]
verify: Service converged

[root@Test ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
d326pd2c5mmi   my_nginx   replicated   8/8        nginx:latest   *:80->80/tcp

[root@Test ~]# docker service ps my_nginx
ID             NAME         IMAGE          NODE   DESIRED STATE   CURRENT STATE            ERROR   PORTS
k9n4h3u0fprx   my_nginx.1   nginx:latest   Test   Running         Running 25 minutes ago
7jainolzyb20   my_nginx.2   nginx:latest   Test   Running         Running 25 minutes ago
5ib5pvokzul3   my_nginx.3   nginx:latest   Test   Running         Running 53 seconds ago
h0l58xuot3b1   my_nginx.4   nginx:latest   Test   Running         Running 53 seconds ago
fj6nfnh4iwhc   my_nginx.5   nginx:latest   Test   Running         Running 52 seconds ago
m4qhmh2ng8ff   my_nginx.6   nginx:latest   Test   Running         Running 52 seconds ago
lctwil7sohn4   my_nginx.7   nginx:latest   Test   Running         Running 53 seconds ago
9mk5v48d6m9x   my_nginx.8   nginx:latest   Test   Running         Running 53 seconds ago

Log in to each server and check how many of the eight tasks were placed on it:

[root@Test ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS    NAMES
3b7a1a984b42   nginx:latest   "/docker-entrypoint.…"   5 minutes ago    Up 5 minutes    80/tcp   my_nginx.7.lctwil7sohn4l0h62vbvgagju
174f62943bf2   nginx:latest   "/docker-entrypoint.…"   29 minutes ago   Up 29 minutes   80/tcp   my_nginx.2.7jainolzyb2012xuzswo1nfpr
[root@Test ~]#

4.6 Scale the service down

[root@Test ~]# docker service scale my_nginx=4
my_nginx scaled to 4
overall progress: 4 out of 4 tasks
1/4: running   [==================================================>]
2/4: running   [==================================================>]
3/4: running   [==================================================>]
4/4: running   [==================================================>]
verify: Service converged

[root@Test ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
d326pd2c5mmi   my_nginx   replicated   4/4        nginx:latest   *:80->80/tcp
[root@Test ~]#

4.7 Rolling upgrade of the service image: service update

$ docker service update --image XXX my_nginx
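A sketch of what a controlled rollout might look like. The nginx:1.21 tag is only an illustrative target version, not from the original setup; the --update-* flags throttle how fast swarm replaces tasks:

```shell
# Upgrade my_nginx two tasks at a time, pausing 10 seconds between batches.
# nginx:1.21 is a hypothetical target version for illustration.
docker service update \
    --update-parallelism 2 \
    --update-delay 10s \
    --image nginx:1.21 \
    my_nginx

# If the new image misbehaves, revert to the previous service spec:
# docker service rollback my_nginx
```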

4.8 Stop a node from accepting service tasks

[root@Test ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
notb2otq9cd6ie5tcs99ozoam     Test       Ready    Active                          20.10.12
s59g2ygqm8lbrngpxhyt3zzg3     Test       Ready    Active                          20.10.12
stea3vvu3hj9x619yb64e5kjo     Test       Ready    Active                          20.10.12
tkox6q8o48l7b2aofzygzvjow *   Test       Ready    Active         Leader           20.10.12
zclgojyadj75g4qtx5bmyj9kk     Test       Ready    Active         Reachable        20.10.12

[root@Test ~]# docker node update --availability drain notb2otq9cd6ie5tcs99ozoam
notb2otq9cd6ie5tcs99ozoam
[root@Test ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
notb2otq9cd6ie5tcs99ozoam     Test       Ready    Drain                           20.10.12
s59g2ygqm8lbrngpxhyt3zzg3     Test       Ready    Active                          20.10.12
stea3vvu3hj9x619yb64e5kjo     Test       Ready    Active                          20.10.12
tkox6q8o48l7b2aofzygzvjow *   Test       Ready    Active         Leader           20.10.12
zclgojyadj75g4qtx5bmyj9kk     Test       Ready    Active         Reachable        20.10.12
[root@Test ~]#

A single, simple command takes a node out of service.

Note: draining a node does not make the server's IP address unreachable. It only removes the node from swarm's load-balancing rotation: the tasks running on it are rescheduled elsewhere, and requests arriving at it are routed to tasks on other nodes.
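To bring a drained node back into rotation, the same command flips its availability back to active. A sketch, using the node ID drained above:

```shell
# Return the drained node to the scheduling pool.
docker node update --availability active notb2otq9cd6ie5tcs99ozoam

# Note: swarm does not automatically rebalance existing tasks onto the
# reactivated node; a later scale or update will place new tasks there.
docker node ls
```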

