
Docker Networking Explained: Theory + Practice


About this post

The references and examples in this post all come from the official Docker documentation. I wrote it to record my own learning process, including the pitfalls I hit while following along. Since my translation skills are limited, a few passages from the official docs that I couldn't render well are quoted verbatim in English.

Docker's built-in network drivers

  • bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate. See bridge networks.
  • host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. See use the host network.
  • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
  • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack. See Macvlan networks.
  • none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services. See disable container networking.
  • Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.

When to use each network driver

  • User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.
  • Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
  • Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
  • Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
  • Third-party network plugins allow you to integrate Docker with specialized network stacks.

Docker networking in practice

Container communication over the default bridge network

Prerequisites
  • A physical or virtual host with Docker installed
  • The host can reach the internet
List Docker networks
# A fresh Docker installation ships with the following networks by default
helloworld@tt ~ % docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0e88248fd126   bridge    bridge    local
94a83c47d62c   host      host      local
6f9273cd8147   none      null      local
Start two containers
# Before starting, clear out all existing containers with the command below. This is optional; it just keeps the demo tidy.
# The command errors when there are no containers to remove; that's safe to ignore.
docker rm -f $(docker ps -aq)

docker run -dit --name alpine1 alpine ash # container name alpine1, image alpine
docker run -dit --name alpine2 alpine ash 
Check basic container info
helloworld@tt ~ % docker ps 
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
62027b92bf28   alpine    "ash"     13 minutes ago   Up 13 minutes             alpine2
b67e7fa7891a   alpine    "ash"     14 minutes ago   Up 14 minutes             alpine1

Inspect the network in detail
# Containers started without an explicit network land on the default bridge; docker network inspect bridge shows the details below
# In general, docker network inspect <network name or ID> prints a network's full details
helloworld@tt ~ % docker network inspect bridge
[
    {
        "Name": "bridge", # network 名称
        "Id": "0e88248fd126f19c62a7b0253cbe0ca1b40ba2d75913ee153b5835cb36000c98",
        "Created": "2021-04-10T01:45:03.160054289Z",
        "Scope": "local",
        "Driver": "bridge", # Driver bridge docker默认的Driver有bridge、host、none
        "EnableIPv6": false, # 是否启用ipv6
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16", # 子网掩码16位 255.255.0.0
                    "Gateway": "172.17.0.1" # 改bridge与容器宿主机进行通信的网关地址
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": { # Containers key 记录了指定该bridge运行的容器ip
           # 小面列出了我们刚刚启动的两个容器ip信息
           # 加入到同一个bridge的容器间可以互相通信
            "62027b92bf28f95f743ad1562876ad5277f043bd674bfebc45b83c599b14c54a": {
                "Name": "alpine2",
                "EndpointID": "952ba0e002f444acf2057b45bc1036c87996f6a7897b23444af78ec992a9e766",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "b67e7fa7891a0d1a2e4574c1bf180a82000d6b92f8cd3b39b7949d7863b1fb7e": {
                "Name": "alpine1",
                "EndpointID": "18adfd9babd1a0037190b92ee29cb8d8611b3b9e6f4d55f7d773c5d8ccaf898d",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Test communication between the containers
# Check the IPs of alpine1 and alpine2, then test container-to-container and container-to-host connectivity

# Attach to alpine1
helloworld@tt rocketmqlogs % docker attach alpine1 
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1000
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
# Verify that the two containers on the bridge network can reach each other
# From inside alpine1, ping alpine2's IP; it works
# Pinging alpine1 from inside alpine2 works the same way, so it's not shown here
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.446 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.263 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.576 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.191 ms

# Pinging a public IP from alpine1 or alpine2 also works. What does that tell us?
# 1. alpine1 and alpine2 were started on the bridge network
# 2. The bridge reaches the Docker host through the 172.17.0.1 gateway
# 3. So as long as the Docker host can reach the internet, the containers can too

# 114.114.114.114 is a public DNS server in China
/ # ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114): 56 data bytes
64 bytes from 114.114.114.114: seq=0 ttl=37 time=48.319 ms
64 bytes from 114.114.114.114: seq=1 ttl=37 time=41.837 ms
64 bytes from 114.114.114.114: seq=2 ttl=37 time=33.327 ms
64 bytes from 114.114.114.114: seq=3 ttl=37 time=46.445 ms
64 bytes from 114.114.114.114: seq=4 ttl=37 time=40.795 ms
64 bytes from 114.114.114.114: seq=5 ttl=37 time=40.631 ms

Container communication over a user-defined bridge network

Prerequisites
  • A physical or virtual host with Docker installed

  • The host can reach the internet

Create a user-defined bridge network
# Create a user-defined bridge network named alpine-net
helloworld@tt ~ % docker network create --driver bridge alpine-net 
f0a8b81b6a900fc4e05a738d201c7a25ce0d688a73c2c423d363bf55b2a5214e
# List the networks
helloworld@tt ~ % docker network ls
NETWORK ID     NAME         DRIVER    SCOPE
f0a8b81b6a90   alpine-net   bridge    local # the bridge network we just created
0e88248fd126   bridge       bridge    local
94a83c47d62c   host         host      local
6f9273cd8147   none         null      local
# Inspect the newly created alpine-net
helloworld@tt ~ % docker network inspect alpine-net
[
    {
        "Name": "alpine-net",
        "Id": "f0a8b81b6a900fc4e05a738d201c7a25ce0d688a73c2c423d363bf55b2a5214e",
        "Created": "2021-04-10T08:06:03.5909169Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1" # 当前自定义的bridge与宿主机进行通信的网关地址
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {}, # 由于还没有任何容器加入此处暂时为空
        "Options": {},
        "Labels": {}
    }
]

Start four containers
# network set to the alpine-net we just created
docker run -dit --name alpine1 --network alpine-net alpine ash

# network set to the alpine-net we just created
docker run -dit --name alpine2 --network alpine-net alpine ash

# no network specified, so it defaults to bridge
docker run -dit --name alpine3 alpine ash

# network set to the alpine-net we just created
docker run -dit --name alpine4 --network alpine-net alpine ash

# Also connect alpine4 to the default bridge; in effect alpine4 now has a second IP (a workable mental model for now)
docker network connect bridge alpine4

# Attach to alpine4 and check its IPs: clearly 172.18.0.4 and 172.17.0.3 talk to different networks
helloworld@tt ~ % docker attach alpine4
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1000
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
    
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.4/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
       
33: eth1@if34: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    
# Inspect the user-defined network alpine-net; the commented-out commands below mark when each container joined alpine-net
helloworld@tt ~ % docker network inspect alpine-net
[
    {
        "Name": "alpine-net",
        "Id": "f0a8b81b6a900fc4e05a738d201c7a25ce0d688a73c2c423d363bf55b2a5214e",
        "Created": "2021-04-10T08:06:03.5909169Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "023017cef677cedda5bd4837bfde2e83fcb682707b59a2c81a76491e1f400f47": {
            # docker run -dit --name alpine1 --network alpine-net alpine ash
                "Name": "alpine1",
                "EndpointID": "a70c03f4179b3ac263e831bfa92594aa2694d610f066571eedac7f6c2174009a",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "15cc57278c5fb4163a74534b510ad56796753a46dba63f69362fa23cd3461973": {
            # docker run -dit --name alpine4 --network alpine-net alpine ash
                "Name": "alpine4",
                "EndpointID": "9949cb29b514653bf38e01ab81bc123c030c793284f4d3d76310641d1f09a466",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "600f5e8bbeee65af1b4468224cfb2d9e26593aa3e05dda681a87dbc599a2dc34": {
            # docker run -dit --name alpine2 --network alpine-net alpine ash
                "Name": "alpine2",
                "EndpointID": "4c6da34fd084b206ad40bd292c9264c3aaa7ab0571c198d2f78758f68d193dc6",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]


# Inspect the default network bridge; the commented-out commands below mark when each container joined the default bridge
helloworld@tt ~ % docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "0e88248fd126f19c62a7b0253cbe0ca1b40ba2d75913ee153b5835cb36000c98",
        "Created": "2021-04-10T01:45:03.160054289Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "15cc57278c5fb4163a74534b510ad56796753a46dba63f69362fa23cd3461973": {
            # docker network connect bridge alpine4
                "Name": "alpine4",
                "EndpointID": "e3b3040b388f3a1968c4f86ba5be7fcd90105ae8c3b26ec44440cfe05323a883",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "f1632921673fbcaec7fd3f98747cbf79c3404ff344ff8bf68a17b8e3ec5367f5": {
            # docker run -dit --name alpine3 alpine ash
                "Name": "alpine3",
                "EndpointID": "2855d82dfb2b718c3e599bcbf6162542f1a9f94abeb6fe0df743b1042ff90420",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]


Test communication between the containers
# Test connectivity (on a user-defined network, containers can reach each other by container name as well as by IP)
# From the inspection above, the default bridge holds alpine3 and alpine4
# The user-defined network alpine-net holds alpine1, alpine2, and alpine4

# Attach to alpine1 and ping alpine2 and alpine4; all three containers share alpine-net, so they can reach each other
helloworld@tt ~ % docker container attach alpine1
/ # ping alpine2
PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.105 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.249 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.197 ms
64 bytes from 172.18.0.3: seq=3 ttl=64 time=0.638 ms
64 bytes from 172.18.0.3: seq=4 ttl=64 time=0.094 ms
/ # ping alpine4
PING alpine4 (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.224 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.708 ms
64 bytes from 172.18.0.4: seq=2 ttl=64 time=0.324 ms
64 bytes from 172.18.0.4: seq=3 ttl=64 time=0.111 ms
/ # ping alpine3
ping: bad address 'alpine3' # alpine3 and alpine1 don't share a network, so alpine3 is unreachable

# alpine3 and alpine4 can also reach each other (over the default bridge); not demonstrated here


Host network mode in practice

Prerequisites (quoted from the official docs)
  • This procedure requires port 80 to be available on the Docker host. To make Nginx listen on a different port, see the documentation for the nginx image
  • The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server. Pay particular attention to this; I'm on a Mac and hit exactly this pitfall
Steps
  • Host mode is simple enough that I won't walk through a full demo: run an Nginx image with the network set to host, and 127.0.0.1:80 on the Docker host will reach Nginx (if it doesn't, check your firewall). A minimal sketch follows.
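A sketch of that, assuming a Linux host with port 80 free; the container name my-nginx-host is mine, not from the docs:

# Run Nginx directly on the host's network stack (Linux only)
docker run -d --name my-nginx-host --network host nginx
# No -p/--publish is needed: the container binds the host's port 80 directly
curl http://127.0.0.1:80
# Clean up
docker rm -f my-nginx-host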

Overlay networks in practice

Prerequisites
  • Three physical or virtual hosts with Docker installed
  • All three hosts can reach the internet
  • All three hosts can reach one another
  • Firewalls on all three hosts must be stopped (the relevant CentOS 7 commands:)
    • systemctl status firewalld - check the firewall's status
    • systemctl stop firewalld - stop the firewall
    • systemctl start firewalld - start the firewall
    • systemctl restart firewalld - restart the firewall
    • systemctl disable firewalld - keep the firewall from starting at boot
Setup
  • We'll build one manager and two workers (worker-1 and worker-2); the manager acts as both manager and worker, while the workers act only as workers (in short: one primary, two secondaries)
  • My three hosts are named docker01, docker02, and docker03
  • docker01 is the manager; docker02 and docker03 serve as worker-1 and worker-2
Set up the Docker swarm
  • Run docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER> on the manager node to initialize the swarm

    # --advertise-addr=<IP-ADDRESS-OF-MANAGER> is optional if the host has only one network interface
    [root@docker01 ~]# docker swarm init
    Swarm initialized: current node (m62vner4g6tmyyw03ir66ww5l) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
    # Note down this token; don't lose it
        docker swarm join --token SWMTKN-1-3ussah8hplv6oud6uafdr4z2kxtlboy75tr0e7mqkjvbrzfgdk-cugznzf4new0nabra2jpil3wg 192.168.203.10:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    
    
  • On worker-1 (docker02 in my setup), join the swarm just initialized on the manager (docker01): docker swarm join --token <TOKEN> --advertise-addr <IP-ADDRESS-OF-WORKER-1> <IP-ADDRESS-OF-MANAGER>:2377

    # --advertise-addr is optional if the host has only one network interface
    [root@docker02 ~]# docker swarm join --token SWMTKN-1-3ussah8hplv6oud6uafdr4z2kxtlboy75tr0e7mqkjvbrzfgdk-cugznzf4new0nabra2jpil3wg 192.168.203.10:2377
    This node joined a swarm as a worker.
    
  • On worker-2 (docker03), join the swarm initialized on the manager (docker01)

    [root@docker03 ~]# docker swarm join --token SWMTKN-1-3ussah8hplv6oud6uafdr4z2kxtlboy75tr0e7mqkjvbrzfgdk-cugznzf4new0nabra2jpil3wg 192.168.203.10:2377
    This node joined a swarm as a worker.
    [root@docker03 ~]# 
    
    # Switch back to the manager (docker01)
    # We can now see every node in the swarm
    [root@docker01 ~]# docker node ls
    ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
    m62vner4g6tmyyw03ir66ww5l *   docker01   Ready     Active         Leader           20.10.5
    io445n31l4nqh5pwiimqrhphv     docker02   Ready     Active                          20.10.5
    fmy21l7ks0z33m81nunz58qqd     docker03   Ready     Active                          20.10.5
    
    # Filter the node list by role
    docker node ls --filter role=worker
    
  • Check the networks on each node

    docker network ls
    NETWORK ID     NAME              DRIVER    SCOPE
    a8c869e5ea5a   bridge            bridge    local
    7c9836779c3e   docker_gwbridge   bridge    local
    1f19e7efc9b5   host              host      local
    j04dx4c7y871   ingress           overlay   swarm
    0475d33ca6e8   none              null      local
    
    # After the swarm is created, all three nodes gain two networks: docker_gwbridge (bridge) and ingress (overlay)
    # docker_gwbridge connects the ingress network to the Docker host so that swarm traffic can flow freely
    # The official docs describe the roles of docker_gwbridge and ingress as follows:
    # The docker_gwbridge connects the ingress network to the Docker host's network interface so that traffic can flow to and from swarm managers and workers
    
Build an Nginx cluster with Docker swarm
  • On the manager (docker01), create a new overlay network

    docker network create -d overlay nginx-net
    # Note: there's no need to create this overlay network on the other nodes; they create it automatically when they need it
    
  • On the manager (docker01), create a 5-replica Nginx service and attach it to the overlay network nginx-net we just created

    docker service create \
      --name my-nginx \
      --publish target=80,published=80 \
      --replicas=5 \
      --network nginx-net \
      nginx 
      # Now go to worker-1 (docker02) and worker-2 (docker03) and run:
      # docker network ls
      # docker ps
      # You'll find that nginx-net has appeared in each node's network list,
      # and the replicas of the 5-replica Nginx service started on the manager (docker01) are spread across the nodes
      
      # Try cutting a node's network and then watch how each node's networks and running containers change; not demonstrated here (you'll see the manager take on the whole swarm's workload)
    
  • Inspect worker-1 (docker02); the other nodes can be analyzed the same way, so they're not shown here

    # Listing the containers shows that replicas 3 and 5 of the 5-replica Nginx service started on the manager (docker01) landed on worker-1 (docker02)
    [root@docker02 ~]# docker ps
    CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
    b4f36e87df2a   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    my-nginx.3.ubmzvpogexuuba86a7jp9tvdt
    70c0ec2ec5d2   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    my-nginx.5.2i8410g63wqgh52yqqlv2fj5i
    [root@docker02 ~]# docker network inspect nginx-net
    [
        {
            "Name": "nginx-net",
            "Id": "yjj2rpqb2mg7fc5ni8aoe7utr",
            "Created": "2021-04-11T15:14:54.277840294+08:00",
            "Scope": "swarm",
            "Driver": "overlay",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "10.0.1.0/24",
                        "Gateway": "10.0.1.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {
                "70c0ec2ec5d26911be6a863908afc7118b57d38c61d72e3c7593836aeeeeb6d1": {
                    "Name": "my-nginx.5.2i8410g63wqgh52yqqlv2fj5i",
                    "EndpointID": "f857738e252dde36598c57836407d833c4a1f4a1c0bc4e4f6d74f7c07b5a17ec",
                    "MacAddress": "02:42:0a:00:01:1b",
                    "IPv4Address": "10.0.1.27/24",
                    "IPv6Address": ""
                },
                "b4f36e87df2a9d5a9880a35d9c12e04371c6358b7c23a8c4f6ef3634b9b2f680": {
                    "Name": "my-nginx.3.ubmzvpogexuuba86a7jp9tvdt",
                    "EndpointID": "766e90cc18314d0bbe761a1a9b4206aaffa62f039de793faf3f5b7373288fe3e",
                    "MacAddress": "02:42:0a:00:01:19",
                    "IPv4Address": "10.0.1.25/24",
                    "IPv6Address": ""
                },
                "lb-nginx-net": {
                    "Name": "nginx-net-endpoint",
                    "EndpointID": "806382f5a731561ab4f9b3efabf83c5fbab2fc46547980383b7527ce1b0913ab",
                    "MacAddress": "02:42:0a:00:01:1e",
                    "IPv4Address": "10.0.1.30/24",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.driver.overlay.vxlanid_list": "4097"
            },
            "Labels": {},
            "Peers": [
                {
                    "Name": "8ddf5a3b917c",
                    "IP": "192.168.203.11"
                },
                {
                    "Name": "201f7ac25467",
                    "IP": "192.168.203.10"
                },
                {
                    "Name": "b888156e78a2",
                    "IP": "192.168.203.12"
                }
            ]
        }
    ]
    # That completes the swarm-based Nginx cluster
    
Extras

# ===================== How to swap a swarm service's network =====================
[root@docker01 ~]# docker network create -d overlay nginx-net-2
pzdxmir25r2fsfxnggxo5e4lp

[root@docker01 ~]# docker service update \
  --network-add nginx-net-2 \
  --network-rm nginx-net \
  my-nginx
  
overall progress: 5 out of 5 tasks 
1/5: running   [==================================================>] 
2/5: running   [==================================================>] 
3/5: running   [==================================================>] 
4/5: running   [==================================================>] 
5/5: running   [==================================================>] 
verify: Service converged 
[root@docker01 ~]# 

# ===================== Extra 2: teardown =====================
# Run the following commands on the manager (docker01) to tear down the Nginx cluster
docker service rm my-nginx
docker network rm nginx-net nginx-net-2

Using an overlay network with standalone containers

Prerequisites
  • A working Docker swarm (see the swarm setup steps above)
Description (quoted from the official docs)

This example demonstrates DNS container discovery – specifically, how to communicate between standalone containers on different Docker daemons using an overlay network. Steps are:

  • On host1, initialize the node as a swarm (manager).
  • On host2, join the node to the swarm (worker).
  • On host1, create an attachable overlay network (test-net).
  • On host1, run an interactive alpine container (alpine1) on test-net.
  • On host2, run an interactive, and detached, alpine container (alpine2) on test-net.
  • On host1, from within a session of alpine1, ping alpine2.


For this test, you need two different Docker hosts that can communicate with each other. Each host must have the following ports open between the two Docker hosts:

  • TCP port 2377
  • TCP and UDP port 7946
Set up the swarm
#===== On host1 =====
[root@host1 ~]# docker swarm init
# If you see the error below, leave the previous swarm first with docker swarm leave (or docker swarm leave --force)
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
[root@host1 ~]# docker swarm leave
Node left the swarm.
[root@host1 ~]# docker swarm init
Swarm initialized: current node (x4rsfeqjq8h60771uwx68y5in) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4btdccwy4ulfl0rgdc6laksq8kphfbrm2mzbyg2ii6fb02duti-39kb0ouwoc21itn0k753rmg43 192.168.203.13:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

#===== On host2 =====
[root@host2 ~]# docker swarm join --token SWMTKN-1-4btdccwy4ulfl0rgdc6laksq8kphfbrm2mzbyg2ii6fb02duti-39kb0ouwoc21itn0k753rmg43 192.168.203.13:2377
# If you see the error below, likewise leave the previous swarm with docker swarm leave (or docker swarm leave --force)
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
[root@host2 ~]# docker swarm leave
Node left the swarm.
[root@host2 ~]# docker swarm join --token SWMTKN-1-4btdccwy4ulfl0rgdc6laksq8kphfbrm2mzbyg2ii6fb02duti-39kb0ouwoc21itn0k753rmg43 192.168.203.13:2377
This node joined a swarm as a worker.
[root@host2 ~]# 

# The swarm is now up
Test communication
# 1. On host1, create an attachable overlay network named test-net
[root@host1 ~]# docker network create --driver=overlay --attachable test-net
tk2b96t1l2ck8wlz5x2vsa4zw
[root@host1 ~]#

# 2. On host1, start container alpine1 interactively on test-net
[root@host1 ~]# docker run -it --name alpine1 --network test-net alpine
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ca3cd42a7c95: Pull complete 
Digest: sha256:ec14c7992a97fc11425907e908340c6c3d6ff602f5f13d899e6b7027c9b4133a
Status: Downloaded newer image for alpine:latest
/ # 
# 3. On host2, list the networks; test-net doesn't exist there yet
[root@host2 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
8e05854e071d   bridge            bridge    local
82b1da75d873   docker_gwbridge   bridge    local
1f19e7efc9b5   host              host      local
leuzg5jbud0t   ingress           overlay   swarm
0475d33ca6e8   none              null      local
[root@host2 ~]# 
# 4. On host2, start container alpine2 in interactive, detached mode on test-net
[root@host2 ~]# docker run -dit --name alpine2 --network test-net alpine
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ca3cd42a7c95: Pull complete 
Digest: sha256:ec14c7992a97fc11425907e908340c6c3d6ff602f5f13d899e6b7027c9b4133a
Status: Downloaded newer image for alpine:latest
feaf4ba3202a0582ca80034c810d3ac4a503e24d6778a0c607a7c474e55f2d4d
[root@host2 ~]# 
# 5. List the networks on host2 again; test-net now exists there too, with the same NETWORK ID as on host1
[root@host2 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
8e05854e071d   bridge            bridge    local
82b1da75d873   docker_gwbridge   bridge    local
1f19e7efc9b5   host              host      local
leuzg5jbud0t   ingress           overlay   swarm
0475d33ca6e8   none              null      local
tk2b96t1l2ck   test-net          overlay   swarm
# 6. On host1, attach to alpine1 and ping alpine2
/ # ping alpine2
PING alpine2 (10.0.1.4): 56 data bytes
64 bytes from 10.0.1.4: seq=0 ttl=64 time=1.024 ms
64 bytes from 10.0.1.4: seq=1 ttl=64 time=1.591 ms
64 bytes from 10.0.1.4: seq=2 ttl=64 time=0.686 ms

# Containers on different hosts are now communicating over a swarm-scoped overlay network



# Cleanup (optional)
docker container stop alpine2
docker network ls
docker container rm alpine2

docker container rm alpine1
docker network rm test-net

docker swarm leave          # on the worker (host2)

docker swarm leave --force  # on the manager (host1)

macvlan network

Notes (quoted from the official docs)
  • Most cloud providers block macvlan networking. You may need physical access to your networking equipment.
  • The macvlan networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
  • You need at least version 3.9 of the Linux kernel, and version 4.0 or higher is recommended.
  • The examples assume your ethernet interface is eth0. If your device has a different name, use that instead.

I don't currently have several physical Linux machines on hand, so I can't run through this one myself for now; for the concrete steps, see the official docs. A reference sketch follows the link below.

macvlan network: the official hands-on tutorial
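For reference, a minimal sketch of the bridge-mode example from that tutorial, assuming your physical interface is eth0; the 172.16.86.0/24 addressing and the names my-macvlan-net / my-macvlan-alpine are illustrative:

# Create a macvlan network bound to the physical interface eth0
docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 \
  my-macvlan-net
# Run a container on it; it gets its own MAC address and appears directly on the physical network
docker run --rm -dit --network my-macvlan-net --name my-macvlan-alpine alpine ash
docker network inspect my-macvlan-net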

A deeper look at the theory

Bridge networks

What is a bridge network?

In networking terms, a bridge is a link-layer device that forwards traffic between network segments. A bridge can be a hardware device or a software device running within a host machine's kernel.

In Docker terms, containers attached to the same bridge network can communicate with each other.

Where bridge networks apply

A bridge network applies to containers running on the same Docker daemon. For communication between containers on different Docker daemons, use an overlay network, or handle the routing at the OS level.

How do user-defined bridge networks differ from the default bridge?
User-defined bridges provide automatic DNS resolution between containers

Containers on a user-defined bridge network can reach each other directly by container name (as the alpine-net demo above showed). Containers on the default bridge cannot, unless explicitly linked with the legacy --link flag (not recommended).

User-defined bridges provide better isolation

Containers started without --network all land on the default bridge network, which can let unrelated containers communicate with each other; that's a risk.

Containers can be attached and detached from user-defined networks on the fly

At any point in a container's lifetime, you can connect it to or disconnect it from a user-defined network (see the sketch below). To remove a container from the default bridge network, you have to stop the container and recreate it on a different network.
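A minimal sketch, reusing the alpine3 container and alpine-net network from the demo above:

# Attach the running container alpine3 to alpine-net, without restarting it
docker network connect alpine-net alpine3
# ...and detach it again
docker network disconnect alpine-net alpine3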

Each user-defined network creates a configurable bridge.

If your containers use the default bridge network, you can configure it, but all containers that join the default bridge then share the same settings, such as MTU and iptables rules.

In addition, configuring the default bridge network happens outside of Docker itself, and requires a restart of Docker.

With user-defined bridge networks, when different groups of applications have different network requirements, you can configure each user-defined bridge separately, as you create it (a sketch follows).
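A sketch of per-network configuration; the option keys are the same ones that appear in the docker network inspect bridge output earlier, and the values here are only illustrative:

# A bridge with a smaller MTU and inter-container communication disabled
docker network create --driver bridge \
  -o "com.docker.network.driver.mtu"="1400" \
  -o "com.docker.network.bridge.enable_icc"="false" \
  low-mtu-net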

Linked containers on the default bridge network share environment variables

On a user-defined bridge network, containers can share variables in better ways instead: specify shared environment variables with docker-compose, or mount the files that need sharing through a Docker volume (a sketch follows).
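A sketch of both alternatives; shared.env, shared-data, and the app* container names are hypothetical:

# Option 1: give several containers the same environment via an env file
# (shared.env holds KEY=value lines)
docker run -dit --name app1 --env-file shared.env alpine ash
docker run -dit --name app2 --env-file shared.env alpine ash

# Option 2: share files through a named volume mounted into both containers
docker volume create shared-data
docker run -dit --name app3 -v shared-data:/data alpine ash
docker run -dit --name app4 -v shared-data:/data alpine ash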

What actually happens when you create or delete a user-defined bridge network?

When we create or delete a user-defined bridge network, or connect a container to one or disconnect it, Docker uses platform-specific tools to manage the operating system's network infrastructure (for example, adding or removing bridge devices and configuring iptables rules on Linux). You can watch this happen, as sketched below.
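A sketch of observing this on a Linux host; run these before and after a docker network create to see the new bridge device and NAT rule appear (Docker names the devices br-<network id>):

# List the kernel's bridge devices; each user-defined bridge shows up as br-<id>
ip link show type bridge
# Show the NAT rules Docker maintains for its networks
iptables -t nat -L POSTROUTING -n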

To be continued

Theory sections for overlay and macvlan networks will be filled in later.
