Calico is a pure Layer 3 solution that provides multi-host networking for OpenStack VMs and Docker containers. It does not use an overlay network like flannel or the libnetwork overlay driver; instead it takes a pure Layer 3 approach, replacing virtual switching with virtual routing, where every virtual router advertises reachability information (routes) to the rest of the data center over BGP.
Referring to the figure above, let's walk through Calico's core components:
By compressing the scalable IP-networking principles of the whole Internet down to data-center scale, Calico implements an efficient vRouter in the Linux kernel on every compute node to handle data forwarding. Each vRouter advertises the routes of the workloads running on it to the rest of the Calico network over BGP: small deployments can peer directly in a full mesh, while large deployments can use designated BGP route reflectors. The result is that all traffic between workloads is carried as plain IP packets.
Calico treats each host's operating-system network stack as a router, and treats all containers as endpoints attached to that router. The routers run a standard routing protocol, BGP, among themselves and learn on their own how to forward across the resulting topology. Calico is therefore a pure Layer 3 scheme: Layer 3 of each host's network stack provides the cross-host connectivity between containers.
On the control plane, each node runs two main programs. The first is Felix: it watches the central etcd store for events, for example a new IP being added on this machine or a container being allocated here. It then wires up the container created on this machine, configuring its NIC, IP, and MAC, and writes an entry into the kernel routing table stating that this IP is reachable via that NIC. The green component (in the figure) is a standard routing daemon, BIRD: it picks up from the kernel which IP routes have changed and spreads them over standard BGP to all the other hosts, so everyone knows this IP lives here and routes traffic accordingly.
Because Calico is a pure Layer 3 implementation, it avoids the packet-encapsulation work tied to Layer 2 solutions; there is no NAT and no overlay anywhere in the path, so its forwarding efficiency is probably the highest of all these schemes: packets traverse the native TCP/IP stack directly. Isolation also becomes easier for the same reason, since the TCP/IP stack ships with a complete set of firewall facilities, so fairly complex isolation logic can be expressed as iptables rules.
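As a concrete illustration of that last point, here is a hedged sketch of the kind of rules one could write by hand; Felix generates its own (differently named) chains, so the chain name below is hypothetical and the subnets are just the workload blocks that appear later in this lab:

# Illustrative only: permit one workload block to reach another, drop the rest.
# "cali+" matches any interface whose name starts with "cali" (Calico-style veths).
iptables -N DEMO-ISOLATION                                        # hypothetical chain name
iptables -A FORWARD -i cali+ -j DEMO-ISOLATION
iptables -A DEMO-ISOLATION -s 192.168.36.192/26 -d 192.168.57.64/26 -j ACCEPT
iptables -A DEMO-ISOLATION -j DROP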
IPIP
Taken literally, IPIP wraps one IP packet inside another IP packet: a tunnel that encapsulates the IP layer inside the IP layer. That may look wasteful, but it is not; it essentially acts as a bridge at the IP layer. An ordinary bridge works at the MAC layer and needs no IP at all, whereas ipip uses routes at both ends to build a tunnel, joining two otherwise unreachable networks point-to-point. The ipip source lives in the kernel at net/ipv4/ipip.c.
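To make this concrete, here is a minimal hand-built ipip tunnel between the two lab hosts used later in this post; the tunnel name tun0 and the 10.10.10.0/24 addresses are arbitrary choices for the demo:

# On 192.168.59.156:
modprobe ipip                                               # loads the module built from net/ipv4/ipip.c
ip tunnel add tun0 mode ipip local 192.168.59.156 remote 192.168.59.157
ip addr add 10.10.10.1/24 dev tun0
ip link set tun0 up

# On 192.168.59.157, the mirror image:
modprobe ipip
ip tunnel add tun0 mode ipip local 192.168.59.157 remote 192.168.59.156
ip addr add 10.10.10.2/24 dev tun0
ip link set tun0 up

# ping 10.10.10.2 from the first host now travels IP-in-IP over the physical network.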
BGP
Border Gateway Protocol (BGP) is the core decentralized, autonomous routing protocol of the Internet. It achieves reachability between autonomous systems (AS) by maintaining tables of IP routes, or "prefixes", and it is a path-vector protocol. BGP does not use the metrics of a traditional interior gateway protocol (IGP); instead it makes decisions based on paths, network policies, and rule sets, which is why it is more aptly described as a reachability protocol than a routing protocol. In plain terms, BGP blends the multiple carrier lines entering a data center (China Telecom, China Unicom, China Mobile, and so on) into one, achieving multi-line access behind a single IP. The advantages of a BGP facility: the server needs only one IP address, the best route is chosen by backbone routers from hop counts and other metrics, and none of this consumes any server resources.
Calico is a pure Layer 3 SDN implementation. It builds on the BGP protocol and Linux's own route-forwarding machinery, depends on no special hardware, and container traffic needs neither iptables NAT nor tunneling.
It deploys easily on physical servers, virtual machines (e.g., OpenStack), or containers. At the same time, Calico's built-in iptables-based ACL management is very flexible and can satisfy fairly complex security-isolation requirements.
In how it organizes the host network topology, Calico's idea resembles weave's: start a virtual router on each host, use every host as a router, and join them into an interconnected topology. Once hosts with Calico installed form a cluster, the topology looks like the figure below:
Every host runs calico/node as its virtual router, and Calico can organize the hosts into an arbitrary cluster topology. When containers in the cluster need to talk to the outside world, the physical gateway router can be joined to the cluster over BGP, so the outside world reaches container IPs directly, without NAT or any similarly complex translation.
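Under the calicoctl v1 resource API that this post's Calico v2.6 uses, peering such a gateway router would look roughly like the sketch below; the peer IP 192.168.59.1 and AS number 64512 are placeholders to adapt to your own router:

cat > bgppeer.yaml <<EOF
apiVersion: v1
kind: bgpPeer
metadata:
  peerIP: 192.168.59.1
  scope: global
spec:
  asNumber: 64512
EOF
calicoctl create -f bgppeer.yaml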
When containers communicate across hosts through Calico, the network model looks like the figure below:
As the figure shows, when a container is created, Calico makes a veth pair for it: one end joins the container's network namespace as its NIC and gets an IP and netmask; the other end is left exposed on the host, and routing rules are installed so that the container's IP appears in the host's routing table. At the same time, Calico allocates each host a subnet as its pool of assignable container IPs, so fairly stable per-host routes can be generated from that subnet's CIDR; a hand-rolled sketch of this wiring follows.
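The hedged sketch below reproduces the same wiring by hand, with a plain network namespace standing in for a container; every name in it (demo-ns, veth-host, 192.168.36.200) is invented for the demonstration:

ip netns add demo-ns                                   # stand-in for a container's network namespace
ip link add veth-host type veth peer name cali0        # veth pair: host end plus "container" end
ip link set cali0 netns demo-ns

ip netns exec demo-ns ip addr add 192.168.36.200/32 dev cali0   # a /32, as Calico assigns
ip netns exec demo-ns ip link set lo up
ip netns exec demo-ns ip link set cali0 up
ip netns exec demo-ns ip route add default dev cali0            # everything leaves via cali0

ip link set veth-host up
ip route add 192.168.36.200/32 dev veth-host           # host route: this IP lives behind the veth
echo 1 > /proc/sys/net/ipv4/conf/veth-host/proxy_arp   # host answers ARP on the namespace's behalf
echo 1 > /proc/sys/net/ipv4/ip_forward                 # let the host forward between interfaces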
When a container needs to communicate across hosts, traffic takes a few simple steps (readable directly off the routing tables shown later in this post): the packet leaves the container via cali0 and crosses the veth pair into the host; the host matches the destination against the per-host subnet route and forwards it out ens32 to the peer host; the peer host matches its local /32 route and delivers the packet into the target container's veth.
Judging from that flow, cross-host communication uses no NAT and no UDP encapsulation at all, so the performance overhead really is low. But precisely because Calico's mechanism is entirely Layer 3, it also brings some drawbacks, for example:
Here are two ordinary hosts; their IPs and hostnames are as follows:
#etcd1
[root@etcd1 ~]# ip a | grep 59
inet 192.168.59.156/24 brd 192.168.59.255 scope global ens32
[root@etcd1 ~]#
# etcd2
[root@etcd2 ~]# ip a | grep 59
inet 192.168.59.157/24 brd 192.168.59.255 scope global ens32
[root@etcd2 ~]#
[root@etcd1 ~]# yum -y install etcd
[root@etcd2 ~]# yum -y install etcd
#etcd1
[root@etcd1 ~]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.59.156:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.59.156:2379,http://localhost:2379"
ETCD_NAME="etcd-156"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.59.156:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.59.156:2379"
ETCD_INITIAL_CLUSTER="etcd-156=http://192.168.59.156:2380,etcd-157=http://192.168.59.157:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@etcd1 ~]#

#etcd2
[root@etcd2 ~]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.59.157:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.59.157:2379,http://localhost:2379"
ETCD_NAME="etcd-157"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.59.157:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.59.157:2379"
ETCD_INITIAL_CLUSTER="etcd-156=http://192.168.59.156:2380,etcd-157=http://192.168.59.157:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@etcd2 ~]#
[root@etcd1 ~]# systemctl restart etcd
[root@etcd1 ~]#
[root@etcd2 ~]# systemctl restart etcd
[root@etcd2 ~]#
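Before moving on, it is worth a quick sanity check that the two members really formed one cluster; with the etcd v2 client this release ships, something like the following (run on either host) should list both members and report the cluster healthy:

etcdctl --endpoints http://192.168.59.156:2379 member list
etcdctl --endpoints http://192.168.59.156:2379 cluster-health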
[root@etcd1 ~]# yum -y install docker-ce
[root@etcd2 ~]# yum -y install docker-ce
[root@etcd1 docker]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://docs.docker.com
[root@etcd1 docker]#
#The path in parentheses after "Loaded:" in the status output above is the systemd unit file:
/usr/lib/systemd/system/docker.service

Add the following option to the ExecStart line:
--cluster-store=etcd://192.168.59.156:2379
[Be sure to change the IP to the corresponding host.] To make the change easy to see, I added a new line and kept the previous one commented out.

#etcd1
[root@etcd1 docker]# cat -n /usr/lib/systemd/system/docker.service | egrep ExecStart=
13 #ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
14 ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.59.156:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd1 docker]#
#etcd2
[root@etcd2 docker]# cat -n /usr/lib/systemd/system/docker.service | egrep ExecStart=
13 #ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
14 ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.59.157:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd2 docker]#
[root@etcd1 docker]# systemctl daemon-reload ; systemctl restart docker
[root@etcd1 docker]#
[root@etcd2 docker]# systemctl daemon-reload ; systemctl restart docker
[root@etcd2 docker]#
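To confirm the daemon actually picked up the flag, you can grep docker info (the older releases that still support --cluster-store report it as "Cluster Store"):

docker info | grep -i cluster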
[root@etcd1 ~]# mkdir /etc/calico
[root@etcd2 ~]# mkdir /etc/calico
# The template is below [change the IP to the current host's IP]
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.59.157:2379"
EOF

[root@etcd1 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
>   datastoreType: "etcdv2"
>   etcdEndpoints: "http://192.168.59.156:2379"
> EOF
[root@etcd1 ~]#

[root@etcd2 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
>   datastoreType: "etcdv2"
>   etcdEndpoints: "http://192.168.59.157:2379"
> EOF
[root@etcd2 ~]#
# I already uploaded these 2 files to etcd1 via sftp, then scp them to etcd2
[root@etcd1 calico]# ls
calicoctl  calicoctl.cfg  calico-node-v2.tar
[root@etcd1 calico]# scp calico-node-v2.tar calicoctl 192.168.59.157:/etc/calico/
calico-node-v2.tar    100%  269MB  22.4MB/s  00:12
calicoctl             100%   31MB  22.5MB/s  00:01
[root@etcd1 calico]#
[root@etcd1 calico]# chmod +x calicoctl
[root@etcd1 calico]# mv calicoctl /bin/
[root@etcd1 calico]#
[root@etcd1 calico]# docker load -i calico-node-v2.tar
df64d3292fd6: Loading layer  4.672MB/4.672MB
d6f0e85be2d0: Loading layer  8.676MB/8.676MB
c9818c503193: Loading layer  250.9kB/250.9kB
1f748fca5871: Loading layer  4.666MB/4.666MB
714c5990d9e8: Loading layer  263.9MB/263.9MB
Loaded image: quay.io/calico/node:v2.6.12
[root@etcd1 calico]#
[root@etcd1 calico]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED       SIZE
quay.io/calico/node   v2.6.12   401cc3e56a1a   2 years ago   281MB
[root@etcd1 calico]#

[root@etcd2 ~]# cd /etc/calico/
[root@etcd2 calico]# ls
calicoctl  calicoctl.cfg  calico-node-v2.tar
[root@etcd2 calico]# chmod +x calicoctl
[root@etcd2 calico]# mv calicoctl /bin/
[root@etcd2 calico]#
[root@etcd2 calico]# docker load -i calico-node-v2.tar
df64d3292fd6: Loading layer  4.672MB/4.672MB
d6f0e85be2d0: Loading layer  8.676MB/8.676MB
c9818c503193: Loading layer  250.9kB/250.9kB
1f748fca5871: Loading layer  4.666MB/4.666MB
714c5990d9e8: Loading layer  263.9MB/263.9MB
Loaded image: quay.io/calico/node:v2.6.12
[root@etcd2 calico]#
[root@etcd2 calico]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED       SIZE
quay.io/calico/node   v2.6.12   401cc3e56a1a   2 years ago   281MB
[root@etcd2 calico]#
calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
Running this command succeeds (output below) and automatically creates and starts a Docker container.

[root@etcd1 calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:

docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=etcd1 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://192.168.59.156:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.

2021-10-28 08:50:54.131 [INFO][10] startup.go 173: Early log level set to info
2021-10-28 08:50:54.131 [INFO][10] client.go 202: Loading config from environment
2021-10-28 08:50:54.132 [INFO][10] startup.go 83: Skipping datastore connection test
2021-10-28 08:50:54.138 [INFO][10] startup.go 259: Building new node resource Name="etcd1"
2021-10-28 08:50:54.138 [INFO][10] startup.go 273: Initialise BGP data
2021-10-28 08:50:54.139 [INFO][10] startup.go 467: Using autodetected IPv4 address on interface ens32: 192.168.59.156/24
2021-10-28 08:50:54.139 [INFO][10] startup.go 338: Node IPv4 changed, will check for conflicts
2021-10-28 08:50:54.157 [INFO][10] startup.go 530: No AS number configured on node resource, using global value
2021-10-28 08:50:54.160 [INFO][10] etcd.go 111: Ready flag is already set
2021-10-28 08:50:54.162 [INFO][10] client.go 139: Using previously configured cluster GUID
2021-10-28 08:50:54.213 [INFO][10] compat.go 796: Returning configured node to node mesh
2021-10-28 08:50:54.233 [INFO][10] startup.go 131: Using node name: etcd1
2021-10-28 08:50:54.340 [INFO][15] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully
[root@etcd1 calico]#
[root@etcd1 calico]# docker ps
CONTAINER ID   IMAGE                         COMMAND         CREATED         STATUS         PORTS   NAMES
aa6c10de2cc0   quay.io/calico/node:v2.6.12   "start_runit"   3 minutes ago   Up 3 minutes           calico-node
[root@etcd1 calico]#

[root@etcd2 calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:

docker run --net=host --privileged --name=calico-node -d --restart=always -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://192.168.59.157:2379 -e NODENAME=etcd2 -e CALICO_NETWORKING_BACKEND=bird -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
2021-10-28 08:50:19.626 [INFO][10] startup.go 173: Early log level set to info
2021-10-28 08:50:19.626 [INFO][10] client.go 202: Loading config from environment
2021-10-28 08:50:19.627 [INFO][10] startup.go 83: Skipping datastore connection test
2021-10-28 08:50:19.632 [INFO][10] startup.go 259: Building new node resource Name="etcd2"
2021-10-28 08:50:19.632 [INFO][10] startup.go 273: Initialise BGP data
2021-10-28 08:50:19.633 [INFO][10] startup.go 467: Using autodetected IPv4 address on interface ens32: 192.168.59.157/24
2021-10-28 08:50:19.633 [INFO][10] startup.go 338: Node IPv4 changed, will check for conflicts
2021-10-28 08:50:19.635 [INFO][10] etcd.go 430: Error enumerating host directories error=100: Key not found (/calico) [22]
2021-10-28 08:50:19.635 [INFO][10] startup.go 530: No AS number configured on node resource, using global value
2021-10-28 08:50:19.637 [INFO][10] etcd.go 105: Ready flag is now set
2021-10-28 08:50:19.640 [INFO][10] client.go 133: Assigned cluster GUID ClusterGUID="fc9bbe7296ad4159a66cc4561c640a00"
2021-10-28 08:50:19.657 [INFO][10] startup.go 419: CALICO_IPV4POOL_NAT_OUTGOING is true (defaulted) through environment variable
2021-10-28 08:50:19.657 [INFO][10] startup.go 659: Ensure default IPv4 pool is created. IPIP mode: off
2021-10-28 08:50:19.660 [INFO][10] startup.go 670: Created default IPv4 pool (192.168.0.0/16) with NAT outgoing true. IPIP mode: off
2021-10-28 08:50:19.660 [INFO][10] startup.go 419: FELIX_IPV6SUPPORT is true (defaulted) through environment variable
2021-10-28 08:50:19.660 [INFO][10] startup.go 626: IPv6 supported on this platform: true
2021-10-28 08:50:19.660 [INFO][10] startup.go 419: CALICO_IPV6POOL_NAT_OUTGOING is false (defaulted) through environment variable
2021-10-28 08:50:19.660 [INFO][10] startup.go 659: Ensure default IPv6 pool is created. IPIP mode: off
2021-10-28 08:50:19.663 [INFO][10] startup.go 670: Created default IPv6 pool (fd80:24e2:f998:72d6::/64) with NAT outgoing false. IPIP mode: off
2021-10-28 08:50:19.699 [INFO][10] startup.go 131: Using node name: etcd2
2021-10-28 08:50:19.813 [INFO][15] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully
[root@etcd2 calico]#
[root@etcd2 calico]# docker ps
CONTAINER ID   IMAGE                         COMMAND         CREATED         STATUS         PORTS   NAMES
5eb93890158a   quay.io/calico/node:v2.6.12   "start_runit"   3 minutes ago   Up 3 minutes           calico-node
[root@etcd2 calico]#
calicoctl node status
Running this shows the peer's information (note: the peer's, i.e. on etcd1 you see etcd2's details).

[root@etcd1 calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.59.157 | node-to-node mesh | up    | 08:50:58 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@etcd1 calico]#

[root@etcd2 calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.59.156 | node-to-node mesh | up    | 08:50:59 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@etcd2 calico]#
[root@etcd1 calico]# docker network list
NETWORK ID     NAME     DRIVER   SCOPE
5056751d668a   bridge   bridge   local
3a14232a28b1   host     host     local
0a9f37c6164c   none     null     local
[root@etcd1 calico]#
[root@etcd2 calico]# docker network list
NETWORK ID     NAME     DRIVER   SCOPE
160bc7545ec9   bridge   bridge   local
b736353d6e29   host     host     local
4889c48d25ed   none     null     local
[root@etcd2 calico]#
docker network create --driver calico --ipam-driver calico-ipam calnet1
[If the Calico setup above is not complete, running this will fail.]

# What the command means
docker network create --driver calico --ipam-driver calico-ipam calnet1
# --driver calico            use Calico's libnetwork CNM driver.
# --ipam-driver calico-ipam  use Calico's IPAM driver to manage IPs.
# calnet1 is a global network; etcd syncs it to every host.

[root@etcd1 calico]# docker network create --driver calico --ipam-driver calico-ipam calnet1
c2a6cac969ec31f45bd6482f705526db4d0050d05832526120c66b036df6834c
[root@etcd1 calico]#
[root@etcd1 calico]# docker network list
NETWORK ID     NAME      DRIVER   SCOPE
5056751d668a   bridge    bridge   local
c2a6cac969ec   calnet1   calico   global
3a14232a28b1   host      host     local
0a9f37c6164c   none      null     local
[root@etcd1 calico]#

[root@etcd2 calico]# docker network list
NETWORK ID     NAME      DRIVER   SCOPE
160bc7545ec9   bridge    bridge   local
c2a6cac969ec   calnet1   calico   global
b736353d6e29   host      host     local
4889c48d25ed   none      null     local
[root@etcd2 calico]#
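To see which driver, IPAM, and address pool the new network actually ended up with, you can inspect it from either host (output omitted here):

docker network inspect calnet1     # shows driver "calico" and IPAM driver "calico-ipam"
calicoctl get ipPool               # the default Calico v2.x pool, 192.168.0.0/16 per the logs above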
Here the image is loaded from a tarball; docker pull would work too.
[root@etcd1 calico]# docker load -i /busybox.tar
5b8c72934dfc: Loading layer 1.455MB/1.455MB
Loaded image: busybox:latest
[root@etcd1 calico]# docker images | grep bus
busybox latest 69593048aa3a 4 months ago 1.24MB
[root@etcd1 calico]#
[root@etcd2 calico]# docker load -i /busybox.tar
5b8c72934dfc: Loading layer 1.455MB/1.455MB
Loaded image: busybox:latest
[root@etcd2 calico]#
[root@etcd2 calico]# docker images | grep bus
busybox latest 69593048aa3a 4 months ago 1.24MB
[root@etcd2 calico]#
[root@etcd1 calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:87:d6:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.156/24 brd 192.168.59.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe87:d647/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:36:3f:d9:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@etcd1 calico]#
[root@etcd1 calico]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.59.2    0.0.0.0         UG    0      0        0 ens32
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.59.0    0.0.0.0         255.255.255.0   U     0      0        0 ens32
[root@etcd1 calico]#

[root@etcd2 calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:df:06:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.157/24 brd 192.168.59.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fedf:6d7/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:a4:4b:94:5a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@etcd2 calico]#
[root@etcd2 calico]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.59.2    0.0.0.0         UG    0      0        0 ens32
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.59.0    0.0.0.0         255.255.255.0   U     0      0        0 ens32
[root@etcd2 calico]#
[root@etcd1 calico]# docker run --name etcd1 --net calnet1 -itd busybox
d787b70c5f805e058cbbe4735c5f105ef50adad3162942319969ed6e827b57b5
[root@etcd1 calico]#
[root@etcd1 calico]# docker exec -it etcd1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.36.192/32 scope global cali0
       valid_lft forever preferred_lft forever
/ #

[root@etcd2 calico]# docker run --name etcd2 --net calnet1 -itd busybox
499d18627413f69b44fafe0087e2c8fe9e7f230ffc8f2ca58ad49f5f614fb5be
[root@etcd2 calico]#
[root@etcd2 calico]# docker exec -it etcd2 sh
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.64/32 scope global cali0
       valid_lft forever preferred_lft forever
/ #
[root@etcd1 calico]# docker exec -it etcd1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.36.192/32 scope global cali0
       valid_lft forever preferred_lft forever
/ #
/ # ping 192.168.57.64
PING 192.168.57.64 (192.168.57.64): 56 data bytes
64 bytes from 192.168.57.64: seq=0 ttl=62 time=0.936 ms
64 bytes from 192.168.57.64: seq=1 ttl=62 time=0.356 ms
^C
--- 192.168.57.64 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.356/0.646/0.936 ms
/ #

[root@etcd2 calico]# docker exec -it etcd2 sh
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.64/32 scope global cali0
       valid_lft forever preferred_lft forever
/ #
/ # ping 192.168.36.192
PING 192.168.36.192 (192.168.36.192): 56 data bytes
64 bytes from 192.168.36.192: seq=0 ttl=62 time=0.409 ms
^C
--- 192.168.36.192 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
rou
[root@etcd1 ~]# ip a | tail -n4
5: calibbcc148cc85@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether f6:7a:db:ae:ff:77 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::f47a:dbff:feae:ff77/64 scope link
       valid_lft forever preferred_lft forever
[root@etcd1 ~]#
[root@etcd1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
192.168.36.192  0.0.0.0         255.255.255.255  UH    0      0        0 calibbcc148cc85
192.168.36.192  0.0.0.0         255.255.255.192  U     0      0        0 *
192.168.57.64   192.168.59.157  255.255.255.192  UG    0      0        0 ens32
[root@etcd1 ~]#

[root@etcd2 ~]# ip a | tail -n4
5: caliabb8fbfeffb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 46:7a:78:2b:da:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::447a:78ff:fe2b:dacf/64 scope link
       valid_lft forever preferred_lft forever
[root@etcd2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
192.168.36.192  192.168.59.156  255.255.255.192  UG    0      0        0 ens32
192.168.57.64   0.0.0.0         255.255.255.255  UH    0      0        0 caliabb8fbfeffb
192.168.57.64   0.0.0.0         255.255.255.192  U     0      0        0 *
[root@etcd2 ~]#
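One host-side detail worth checking: calibbcc148cc85 carries no IPv4 address, yet the container can still use it as its way out because (as I understand Calico's setup) proxy ARP is enabled on it; using the interface name from the transcript above:

cat /proc/sys/net/ipv4/conf/calibbcc148cc85/proxy_arp   # should print 1 if enabled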
Below I illustrate with a different setup [note the hostnames].
The container-to-host interface model is a veth pair.
Check the IP of container c91:
From here you can see that the container's virtual NIC cali0 and the host's calif6391d136be form a veth pair. On the concept of veth pairs, see:
Linux virtual network devices: veth-pair explained (this one article is all you need)
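A quick, generic way to verify which host interface is a container NIC's peer is to compare interface indexes: inside the container, cali0's iflink is the ifindex of its host-side peer. A sketch, using the c91 container from this example and assuming iflink prints 5:

docker exec c91 cat /sys/class/net/cali0/iflink   # prints the peer's ifindex, e.g. 5
ip -o link | grep '^5:'                           # on the host: the interface with that index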
The overall topology looks like this:
Now look at the routes, inside c91:
No matter the destination, traffic leaves via cali0.
Now the routes on vms91:
Packets destined for 192.168.120.129 go out via calif6391d136be (the virtual NIC newly created on vms91).
Packets destined for the 192.168.223.64/26 subnet are sent out of ens32 to 192.168.26.92.
Every host knows which host each container lives on, so the routes are set up dynamically.
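You can watch this happen: create or delete a container on one host while re-checking the kernel routing table on another, and the corresponding block route appears or disappears as BIRD redistributes it. For example, on a second host:

watch -n1 'route -n | grep 192.168.223'   # the peer's container block route comes and goes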
Finally, delete everything created in the experiments above.
[root@master ~]# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          24h
nginx2   1/1     Running   0          24h
nginx3   1/1     Running   0          24h
[root@master ~]# kubectl get svc
NAME   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
svc1   ClusterIP   10.104.209.169   <none>        80/TCP    24h
svc2   ClusterIP   10.109.14.124    <none>        80/TCP    24h
svc3   ClusterIP   10.111.107.232   <none>        80/TCP    24h
[root@master ~]#
[root@master ~]# kubectl delete pod nginx1
pod "nginx1" deleted
[root@master ~]# kubectl delete pod nginx2
pod "nginx2" deleted
[root@master ~]# kubectl delete pod nginx3
pod "nginx3" deleted
[root@master ~]# kubectl delete svc svc1
service "svc1" deleted
[root@master ~]# kubectl delete svc svc2
service "svc2" deleted
[root@master ~]# kubectl delete svc svc3
service "svc3" deleted
[root@master ~]#
[root@master ~]# kubectl get ingress
NAME        CLASS    HOSTS                       ADDRESS         PORTS   AGE
myingress   <none>   www1.rhce.cc,www2.rhce.cc   10.109.130.87   80      23h
[root@master ~]# kubectl delete ingress myingress
ingress.networking.k8s.io "myingress" deleted
[root@master ~]#