bridge mode: the docker0 virtual NIC on the host is a virtual bridge device that connects the host with its containers. When a container is started without any network configuration, it is attached to this bridge by default.
For each container, Docker creates a pair of virtual interfaces: one inside the container (eth0) and one on the host (vethX); the mapping between them is maintained by the bridge. When another host wants to reach a container, traffic must first arrive at the host, where the docker0 bridge looks up the matching vethX and forwards it on to the container. This is the default network mode.
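As a conceptual sketch of this architecture (plain Python, not Docker's actual implementation), the bridge can be thought of as a table mapping each container to its host-side vethX peer; forwarding is a lookup in that table. The veth names below are made up for illustration:

```python
# Conceptual model of the docker0 bridge: each container's eth0 is
# paired with a host-side vethX, and the bridge holds the mapping.
# (Illustrative only; interface names here are hypothetical.)
bridge_table = {}  # container name -> host-side veth interface

def attach(container, host_veth):
    """Simulate creating a veth pair: eth0 in the container, vethX on the host."""
    bridge_table[container] = host_veth

def forward(container):
    """Simulate the bridge looking up which vethX leads to the container."""
    veth = bridge_table.get(container)
    if veth is None:
        raise LookupError(f"no veth registered for {container}")
    return f"host -> docker0 -> {veth} -> {container}:eth0"

attach("os1", "veth3654615")
print(forward("os1"))  # host -> docker0 -> veth3654615 -> os1:eth0
```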
host mode: the container shares the host's network namespace, so it uses the host's IP and ports directly. The drawback is obvious: it is less secure. External traffic can reach the container directly, with no intermediate layer to filter or forward it.
none mode: the container has only the local lo interface and cannot communicate with other containers or the host. This is the most isolated option: unless the host itself is compromised, the container cannot be reached.
We can run docker network ls to list these three built-in network modes:
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
7b61cc22b3cf   bridge    bridge    local
90abe9832be3   host      host      local
69357e9d4394   none      null      local
Custom networks let you tailor the container network to your needs. Creating one adds a virtual NIC on the host (similar to docker0) that will bridge the vethX interfaces of containers later attached to this network. You can configure, among other things: the subnet, gateway, netmask, and network driver (bridge or otherwise).
$ docker network --help
Usage: docker network COMMAND
Manage networks
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
$ docker network create --help

Usage:  docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
      --config-from string   The network from which to copy the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment
Example:
docker network create --driver bridge --subnet 192.168.2.0/24 --gateway 192.168.2.1 my_net
- Address range: 192.168.2.0 - 192.168.2.255
- Netmask: 255.255.255.0 (/24)
- Gateway: 192.168.2.1
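As a quick sanity check on these values, Python's standard ipaddress module can derive the address range and netmask from the same CIDR notation passed to --subnet (an illustrative sketch, independent of Docker):

```python
import ipaddress

# The subnet given to `docker network create --subnet 192.168.2.0/24`
net = ipaddress.ip_network("192.168.2.0/24")

print(net.network_address)    # first address of the range: 192.168.2.0
print(net.broadcast_address)  # last address of the range: 192.168.2.255
print(net.netmask)            # 255.255.255.0
print(net.prefixlen)          # 24

# The gateway we chose (192.168.2.1) must fall inside the subnet
gateway = ipaddress.ip_address("192.168.2.1")
print(gateway in net)         # True
```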
Check the network devices on the host:
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
...
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:00:6b:b7:ab brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ff:fe6b:b7ab/64 scope link
       valid_lft forever preferred_lft forever
# Here!
115: br-48a9b08a5c44: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:24:0c:55:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global br-48a9b08a5c44
       valid_lft forever preferred_lft forever
Notice that the new bridge interface br-48a9b08a5c44 holds the IP 192.168.2.1/24, which is exactly the gateway address we configured.
$ docker network inspect my_net
[
    {
        "Name": "my_net",
        "Id": "59f36f39b05ae61586aa9c1480edc6e57beab18fb8f8bc68301aef434b496d6b",
        "Created": "2023-06-20T02:11:19.065380096Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "Gateway": "192.168.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
(The other commands, such as ls and rm, need no detailed explanation; just follow the --help output.)
Start two containers attached to the my_net network:
$ docker run -it -d --network my_net --name os1 busybox
$ docker run -it -d --network my_net --name os2 busybox
Checking the host's NICs again, two new virtual interfaces have appeared, veth3654615@if95 and veth8b144b4@if97, each attached to the bridge for bridging:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
...
96: veth3654615@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-76115cefdaad state UP group default
    link/ether 62:69:f9:e9:94:52 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::6069:f9ff:fee9:9452/64 scope link
       valid_lft forever preferred_lft forever
98: veth8b144b4@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-76115cefdaad state UP group default
    link/ether 9e:2f:13:75:8f:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::9c2f:13ff:fe75:8fc2/64 scope link
       valid_lft forever preferred_lft forever
$ docker exec -it os1 sh
$ ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:02
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1102 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ ping 192.168.2.3
PING 192.168.2.3 (192.168.2.3): 56 data bytes
64 bytes from 192.168.2.3: seq=0 ttl=64 time=0.118 ms
64 bytes from 192.168.2.3: seq=1 ttl=64 time=0.070 ms
64 bytes from 192.168.2.3: seq=2 ttl=64 time=0.074 ms
$ docker exec -it os2 sh
$ ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:03
          inet addr:192.168.2.3  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=64 time=0.096 ms
64 bytes from 192.168.2.2: seq=1 ttl=64 time=0.086 ms
64 bytes from 192.168.2.2: seq=2 ttl=64 time=0.078 ms
Besides the container IP, containers on a custom network can also reach each other by container name. This is more flexible than raw IPs: no matter how the IP changes, the name keeps resolving correctly, so it is the recommended approach. (On user-defined networks this name resolution is provided by Docker's embedded DNS server.)
$ docker exec -it os1 ping os2
PING os2 (192.168.2.3): 56 data bytes
64 bytes from 192.168.2.3: seq=0 ttl=64 time=0.083 ms
64 bytes from 192.168.2.3: seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.2.3: seq=2 ttl=64 time=0.074 ms
$ docker exec -it os2 ping os1
PING os1 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=64 time=0.089 ms
64 bytes from 192.168.2.2: seq=1 ttl=64 time=0.082 ms
64 bytes from 192.168.2.2: seq=2 ttl=64 time=0.069 ms
When --network is not specified, the default bridge mode is used. Start two containers:
$ docker run -it -d --rm --name os1 busybox
$ docker run -it -d --rm --name os2 busybox
通过容器 ip 进行访问:
# Check each container's IP
$ docker exec -it os1 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
107: eth0@if108: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

$ docker exec -it os2 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
111: eth0@if112: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:06 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.6/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# Ping each other by IP
$ docker exec -it os1 ping 172.17.0.6
PING 172.17.0.6 (172.17.0.6): 56 data bytes
64 bytes from 172.17.0.6: seq=0 ttl=64 time=0.083 ms
64 bytes from 172.17.0.6: seq=1 ttl=64 time=0.084 ms

$ docker exec -it os2 ping 172.17.0.5
PING 172.17.0.5 (172.17.0.5): 56 data bytes
64 bytes from 172.17.0.5: seq=0 ttl=64 time=0.109 ms
64 bytes from 172.17.0.5: seq=1 ttl=64 time=0.073 ms
Now try accessing by container name:
$ docker exec -it os1 ping os2
ping: bad address 'os2'
$ docker exec -it os2 ping os1
ping: bad address 'os1'
Because no custom network was specified, containers on the default bridge can reach each other only by IP, not by container name. How can name-based access work here too? With the --link flag. Restart os2:
$ docker run -it -d --rm --link os1 --name os2 busybox
$ docker exec -it os2 ping os1
PING os1 (172.17.0.5): 56 data bytes
64 bytes from 172.17.0.5: seq=0 ttl=64 time=0.137 ms
64 bytes from 172.17.0.5: seq=1 ttl=64 time=0.087 ms
Access by name now succeeds. So how is that achieved? A look at the container's /etc/hosts file makes it clear:
$ docker exec -it os2 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.5 os1 c528fc3de106 # Here!
172.17.0.6 d69d0a2dfb0b
In effect, --link simply writes a mapping of IP, container name, and container ID into the hosts file.
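To make the mechanism concrete, here is a minimal sketch in Python of how a resolver could turn a name into an IP by scanning hosts-file entries like the ones above (a hypothetical parser for illustration, not Docker's actual resolver):

```python
# Hosts-file entries that --link wrote into os2's /etc/hosts
# (abbreviated to the relevant lines).
HOSTS = """\
127.0.0.1   localhost
172.17.0.5  os1 c528fc3de106
172.17.0.6  d69d0a2dfb0b
"""

def resolve(name):
    """Return the IP mapped to `name`, scanning entries top to bottom."""
    for line in HOSTS.splitlines():
        fields = line.split()
        # First field is the IP; the rest are names/aliases for it.
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

print(resolve("os1"))           # 172.17.0.5 (by container name)
print(resolve("c528fc3de106"))  # 172.17.0.5 (by container ID)
print(resolve("os2"))           # None: os1 was not linked back into os2's file
```

This also shows why --link is one-directional: only the linked container's entry is written, so os2 can resolve os1 but not vice versa unless os1 is started with its own --link.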
The default bridge mode with --link has an obvious problem: it is very inconvenient, since every container must be started with the right --link flags. Once the number of containers grows, this becomes extremely tedious. Hence the recommendation: use a custom network.
So far we have only covered communication between containers on the same network. How do containers on different networks communicate, e.g. a container on the default docker0 bridge and a container on a custom network?
Let's test it:
$ docker run -it -d --name def_net_os busybox
$ docker run -it -d --name my_net_os --network my_net busybox
# Check def_net_os's IP
$ docker exec def_net_os ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
134: eth0@if135: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
# Ping it from inside my_net_os
$ docker exec my_net_os ping 172.17.0.5
(no response)
Indeed, the containers cannot reach each other. Inspect the containers' network settings and the custom network:
# Container network settings
$ docker inspect def_net_os
[{
    ...
    "NetworkSettings": {
        "Bridge": "",
        "SandboxID": "f4a4fac0a42297c3913ad7326d5d54163f34fe2f01c5088c4361517092fa24b3",
        ...
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.5",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:05",
        "Networks": {          # Here!
            "bridge": {
                "IPAMConfig": null,
                "Links": null,
                "Aliases": null,
                "NetworkID": "19c85157d58fecd939cf4b34aefe8f041b8a2d2b649c3c778cdc3d699df03810",
                "EndpointID": "20b1a057de5de9d9c8a64d72ff215efcc56b50abeccbf208be1ef6473c8b1e19",
                "Gateway": "172.17.0.1",
                "IPAddress": "172.17.0.5",
                "IPPrefixLen": 16,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:ac:11:00:05",
                "DriverOpts": null
            }
        }
    }
}]

$ docker inspect my_net_os
[{
    ...
    "NetworkSettings": {
        "Bridge": "",
        "SandboxID": "34b9a9c4a5172b438d7568be0edeeadb26a56ba0db53b0a43d97c3b5ab2708e8",
        ...
        "Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "",
        "IPPrefixLen": 0,
        "IPv6Gateway": "",
        "MacAddress": "",
        "Networks": {          # Here!
            "my_net": {
                "IPAMConfig": null,
                "Links": null,
                "Aliases": ["8ffa841b7bb2"],
                "NetworkID": "48a9b08a5c44f34a1716885fab264ea6ccf58ceaa07adbc14f3b1ae5ac0a41d8",
                "EndpointID": "8e8b5f37df1217ad6e357a4f3f191ad68a2427357b905dc110df0762e4e885ba",
                "Gateway": "192.168.2.1",
                "IPAddress": "192.168.2.2",
                "IPPrefixLen": 24,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:c0:a8:02:02",
                "DriverOpts": null
            }
        }
    }
}]

$ docker network inspect my_net
[{
    "Name": "my_net",
    "Id": "48a9b08a5c44f34a1716885fab264ea6ccf58ceaa07adbc14f3b1ae5ac0a41d8",
    "Created": "2023-06-20T13:13:52.508531488+08:00",
    ...
    "Containers": {          # Here!
        "8ffa841b7bb24ac29a5765701bfd7025c19fe109556672ddee4ceb2f46e6e56a": {
            "Name": "my_net_os",
            "EndpointID": "8e8b5f37df1217ad6e357a4f3f191ad68a2427357b905dc110df0762e4e885ba",
            "MacAddress": "02:42:c0:a8:02:02",
            "IPv4Address": "192.168.2.2/24",
            "IPv6Address": ""
        }
    },
    "Options": {},
    "Labels": {}
}]
We can see that the two containers sit on different networks, and my_net's description lists every container attached to it: currently only my_net_os.
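The addresses themselves already show why the ping fails: the two containers live in disjoint subnets with no bridge in common. A quick check with Python's standard ipaddress module (illustrative only; Docker is not involved):

```python
import ipaddress

# def_net_os sits on the default bridge, my_net_os on the custom network
default_bridge = ipaddress.ip_network("172.17.0.0/16")
my_net = ipaddress.ip_network("192.168.2.0/24")
def_net_os_ip = ipaddress.ip_address("172.17.0.5")

# The target address is not part of my_net, and the two subnets do not
# overlap, so my_net_os has no interface that can reach 172.17.0.5.
print(def_net_os_ip in my_net)          # False
print(my_net.overlaps(default_bridge))  # False
```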
Now connect the two networks by adding the def_net_os container to the my_net network:
docker network connect <NETWORK> <CONTAINER>
$ docker network connect my_net def_net_os
$ docker exec -it my_net_os ping def_net_os
PING def_net_os (192.168.2.3): 56 data bytes
64 bytes from 192.168.2.3: seq=0 ttl=64 time=0.109 ms
64 bytes from 192.168.2.3: seq=1 ttl=64 time=0.070 ms
Now my_net_os can communicate with def_net_os. Let's inspect def_net_os's network settings and the custom network again:
$ docker inspect def_net_os
[{
    ...
    "NetworkSettings": {
        "Bridge": "",
        "SandboxID": "f4a4fac0a42297c3913ad7326d5d54163f34fe2f01c5088c4361517092fa24b3",
        ...
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.5",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:05",
        "Networks": {
            "bridge": {
                "IPAMConfig": null,
                "Links": null,
                "Aliases": null,
                "NetworkID": "19c85157d58fecd939cf4b34aefe8f041b8a2d2b649c3c778cdc3d699df03810",
                "EndpointID": "20b1a057de5de9d9c8a64d72ff215efcc56b50abeccbf208be1ef6473c8b1e19",
                "Gateway": "172.17.0.1",
                "IPAddress": "172.17.0.5",
                "IPPrefixLen": 16,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:ac:11:00:05",
                "DriverOpts": null
            },
            "my_net": {          # Here!
                "IPAMConfig": {},
                "Links": null,
                "Aliases": ["5d73baeef4e7"],
                "NetworkID": "48a9b08a5c44f34a1716885fab264ea6ccf58ceaa07adbc14f3b1ae5ac0a41d8",
                "EndpointID": "5e92f9f7767684daa035a92a153b65a334baf7f8b561ed8acff06eca41a0765b",
                "Gateway": "192.168.2.1",
                "IPAddress": "192.168.2.3",
                "IPPrefixLen": 24,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:c0:a8:02:03",
                "DriverOpts": {}
            }
        }
    }
}]

$ docker network inspect my_net
[{
    "Name": "my_net",
    "Id": "48a9b08a5c44f34a1716885fab264ea6ccf58ceaa07adbc14f3b1ae5ac0a41d8",
    "Created": "2023-06-20T13:13:52.508531488+08:00",
    ...
    "Containers": {          # Here!
        "5d73baeef4e74a8a5808a16e12c080f0289d6ea2cf12a00aa576c6b0afe5ec93": {
            "Name": "def_net_os",
            "EndpointID": "5e92f9f7767684daa035a92a153b65a334baf7f8b561ed8acff06eca41a0765b",
            "MacAddress": "02:42:c0:a8:02:03",
            "IPv4Address": "192.168.2.3/24",
            "IPv6Address": ""
        },
        "8ffa841b7bb24ac29a5765701bfd7025c19fe109556672ddee4ceb2f46e6e56a": {
            "Name": "my_net_os",
            "EndpointID": "8e8b5f37df1217ad6e357a4f3f191ad68a2427357b905dc110df0762e4e885ba",
            "MacAddress": "02:42:c0:a8:02:02",
            "IPv4Address": "192.168.2.2/24",
            "IPv6Address": ""
        }
    },
    "Options": {},
    "Labels": {}
}]
The container and the custom network now each record the other's information, and inside def_net_os a new interface, eth1, has appeared, mapped to the my_net bridge:
$ docker exec -it def_net_os ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
134: eth0@if135: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
# Here!
138: eth1@if139: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:02:03 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.3/24 brd 192.168.2.255 scope global eth1
valid_lft forever preferred_lft forever
Finally, a simple diagram summarizes how containers on different networks communicate: