
Cloud Native | kubernetes | Installing and switching between the kubernetes network plugins calico and flannel


Preface:

Kubernetes networking is far more complete than plain Docker networking, which also means it is considerably more complex. More complexity brings more functionality, of course, but also more things that can go wrong.

Below, using a kubernetes cluster installed from binaries, I will go over some basic concepts, show how to install the two mainstream network plugins calico and flannel, and, for those who want both, how to switch from flannel to calico. (The configuration is much the same however the cluster was deployed, kubeadm included; once you have learned it one way you can reuse it elsewhere, so other deployment methods are not covered separately.)

Some basic concepts

1.

cluster-ip and cluster-cidr

A. cluster-cidr

CIDR stands for Classless Inter-Domain Routing, a method of grouping IP addresses that is used to allocate addresses and to route IP packets efficiently on the internet. In plain terms: inside a kubernetes cluster, the cluster CIDR is the range that pod IPs are allocated from. In the extended pod listing below, 10.244.1.29 is exactly such an address:

[root@master cfg]# k get po -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-server-85d885f474-8ddcz 1/1 Running 0 3h32m 10.244.1.29 k8s-node1 <none> <none>
default hello-server-85d885f474-jbklt 1/1 Running 0 3h32m 10.244.0.27 k8s-master <none> <none>
default nginx-demo-76c8bff45f-6nfnl 1/1 Running 0 3h32m 10.244.1.30 k8s-node1 <none> <none>
default nginx-demo-76c8bff45f-qv4w6 1/1 Running 0 3h32m 10.244.2.7 k8s-node2 <none> <none>
default web-5dcb957ccc-xd9hl 1/1 Running 2 25h 10.244.0.26 k8s-master <none> <none>
ingress-nginx ingress-nginx-admission-create-xc2z4 0/1 Completed 0 26h 192.168.169.133 k8s-node2 <none> <none>
ingress-nginx ingress-nginx-admission-patch-7xgst 0/1 Completed 3 26h 192.168.235.197 k8s-master <none> <none>

Many readers will now wonder: why do pods on node1 get 10.244.1.x while pods on node2 get 10.244.2.x? In short, the network plugin (flannel or calico) carves a per-node subnet out of the cluster CIDR; the deeper mechanics are left for another time.
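If you want to see the per-node split directly, it is recorded on the node objects: with --allocate-node-cidrs=true (which this cluster uses, as the kube-controller-manager config below shows), each node carries a podCIDR field. A quick read-only check, as a sketch:

# show the subnet carved out of 10.244.0.0/16 for each node
kubectl get nodes -o custom-columns=NODE:.metadata.name,PODCIDR:.spec.podCIDR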

In a binary installation, this CIDR is normally defined in the configuration files of two core services, kube-proxy and kube-controller-manager.

[root@master cfg]# grep -r -i "10.244" ./
./kube-controller-manager.conf:--cluster-cidr=10.244.0.0/16 \
./kube-proxy-config.yml:clusterCIDR: 10.244.0.0/24

vim kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.244.0.0/24    # this is the cidr
mode: "ipvs"
ipvs:
  minSyncPeriod: 0s
  scheduler: "rr"
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s

vim kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \            # <-- this is the cidr
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

The CIDR defined in these two configuration files must be kept consistent. Pay close attention to this!

In my experience, with the flannel network plugin the two CIDRs can differ and nothing much seems to happen, since that setup relies on iptables; with calico and ipvs, however, a mismatch produces a great many errors and pod scheduling breaks (the concrete symptom is that pods can no longer be deleted or created, and the logs turn into a wall of red; I may demonstrate this another time).
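To see which proxy mode a node is actually running, a quick check is sketched below (the config path is the one used throughout this article, and ipvsadm must be installed for the second command):

# mode kube-proxy was configured with in this binary install
grep -E '^mode' /opt/kubernetes/cfg/kube-proxy-config.yml

# when ipvs mode is active, the IPVS virtual server table is populated
ipvsadm -Ln | head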

B. cluster-ip

These are the cluster IP addresses used by Services. Take a look at the Service IPs:

These addresses are much more uniform: 10.0.0.*. Even the NodePort Service sits in the 10.0.0 range.

[root@master cfg]# k get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hello-server ClusterIP 10.0.0.78 <none> 8000/TCP 3h36m app=hello-server
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 33d <none>
nginx-demo ClusterIP 10.0.0.127 <none> 8000/TCP 3h36m app=nginx-demo
web NodePort 10.0.0.100 <none> 80:31296/TCP 25h app=web

In the config files this shows up as:

[root@master cfg]# grep -r -i "10.0.0" ./
./kube-apiserver.conf:--service-cluster-ip-range=10.0.0.0/24 \
./kube-controller-manager.conf:--service-cluster-ip-range=10.0.0.0/24 \
./kubelet-config.yml: - 10.0.0.2
./kubelet-config.yml:maxOpenFiles: 1000000

That is, in the kube-apiserver and kube-controller-manager config files. Here I defined it as 10.0.0.0/24. Is that a problem?

Yes, and a fairly big one: what I have here is wrong (my cluster is only for testing, so it does not matter much, and there usually are not that many Services; production is another story). Anyone who deals with networks knows that 10.0.0.0/24 offers only 254 usable addresses, so once you exceed 254 Services, creating a new one fails with: Internal error occurred: failed to allocate a serviceIP: range is full. The correct setting is therefore something like 10.0.0.0/16, which gives Services 65536 addresses to draw from; sixty-odd thousand Services should be hard to reach.

With the problem clear, the fix is simple: change the 24 to 16 and restart the relevant services. This effectively expands the network. One word of caution, though: going the other way, from /16 to /24, will affect existing Services. In production it is best to get this capacity planning right at the design stage; otherwise the thing that gets "resolved" may be the person who caused the problem rather than the problem itself.

Restart the services with:

systemctl restart kube-apiserver kube-controller-manager
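A minimal sketch of the change itself, assuming the /opt/kubernetes/cfg paths used in this article (back up the files first; as noted above, shrinking the range later will disturb existing Services):

cd /opt/kubernetes/cfg
# widen the service CIDR from /24 to /16 in both files
sed -i 's#--service-cluster-ip-range=10.0.0.0/24#--service-cluster-ip-range=10.0.0.0/16#' kube-apiserver.conf kube-controller-manager.conf
grep -r "service-cluster-ip-range" .      # verify both files now agree
systemctl restart kube-apiserver kube-controller-manager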

C. clusterDNS

I suspect many readers did not look at the configs above too closely: what is this 10.0.0.2 address about?

This is another pair that has to match: the coredns Service and kubelet must use the same address, taken from within the cluster-ip range. You can change it to 10.0.0.3, .4, .5, .6 or anything else, as long as both sides use the same value and it falls inside the service CIDR; if you set the service CIDR to 10.90.0.0/16, for example, then coredns could use 10.90.0.2. What happens if they differ? The cluster does not fall over immediately, but you will see all sorts of errors.
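A quick way to confirm the pair stays in sync (file path as used in this article):

# the DNS server address kubelet hands out to pods
grep -A1 clusterDNS /opt/kubernetes/cfg/kubelet-config.yml

# the address the coredns Service actually owns -- the two must match
kubectl -n kube-system get svc coredns -o jsonpath='{.spec.clusterIP}{"\n"}'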

The kubelet configuration file (note the IP ranges):

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

The coredns Service file (note the IP ranges):

[root@master cfg]# cat ~/coredns/coredns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
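With both pieces in place, a simple smoke test confirms that pods really resolve through 10.0.0.2; the throwaway busybox pod below is just one convenient way to run nslookup (the image tag is an arbitrary known-good choice):

# should resolve kubernetes.default to 10.0.0.1 via server 10.0.0.2
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default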

2.

Installing the flannel network plugin


There is not much to say here: just apply the manifest (the full file is given under d. below). A few places deserve a quick look first:

a.

The Network value must match kube-proxy's cluster CIDR; if they differ you will of course get errors. Type does not need to be changed. (A quick way to compare the two values is shown right after this snippet.)

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
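A quick way to compare the two values before applying the manifest; kube-flannel.yml is the assumed local filename of the manifest shown under d. below:

# the Network value in the local flannel manifest
grep '"Network"' kube-flannel.yml

# the clusterCIDR kube-proxy is running with
grep clusterCIDR /opt/kubernetes/cfg/kube-proxy-config.yml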

b.

These are the host directories the pod is allowed to mount; /etc/cni/net.d is where the CNI config lands on each node, and it has to be cleaned up later when uninstalling flannel.

allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"

c.

The virtual NICs that should be present after deployment (screenshot omitted):

 
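With the default vxlan backend, the devices to look for are typically the flannel.1 VXLAN interface and the cni0 bridge (cni0 appears once the first pod lands on the node). A quick check on any node:

ip -d link show flannel.1      # VXLAN device created by flanneld
ip addr show cni0              # bridge the pods attach to
ip route | grep 10.244         # other nodes' pod subnets routed via flannel.1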

d. The flannel deployment manifest

After checking the points above (the IP range and the host paths), apply this file, then verify that the virtual NICs mentioned in c. have appeared. If they have, flannel is deployed successfully.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoExecute
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

I have three nodes, so seeing three Running flannel pods is enough; there is one flannel pod per node:

[root@master cfg]# k get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-76648cbfc9-zwjqz 1/1 Running 0 6h51m
kube-flannel-ds-4mx69 1/1 Running 1 7h9m
kube-flannel-ds-gmdph 1/1 Running 3 7h9m
kube-flannel-ds-m8hzz 1/1 Running 1 7h9m

On a freshly built cluster, the nodes will now show Ready, which proves the installation really worked:

[root@master cfg]# k get no
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 33d v1.18.3
k8s-node1 Ready <none> 33d v1.18.3
k8s-node2 Ready <none> 33d v1.18.3

And of course there is also a Service:

[root@master cfg]# k get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 33d



3.

Deploying the calico network plugin

Calico can be installed in roughly four ways:

  • Using the calico.yaml manifest file (recommended, and the method used here)
  • Binary installation (rarely used; not covered)
  • As a plugin (also rarely used these days; not covered)
  • Using the Tigera Calico Operator (the latest official guidance)
    The Tigera Calico Operator is a management tool that handles the installation and upgrade lifecycle of Calico. It has been the officially promoted tool since Calico v3.15.
    Calico installation requirements:
  • x86-64, arm64, ppc64le, or s390x processor
  • 2 CPUs
  • 2 GB of RAM
  • 10 GB of disk space
  • RedHat Enterprise Linux 7.x+, CentOS 7.x+, Ubuntu 16.04+, or Debian 9.x+
  • Make sure Calico can manage the cali and tunl interfaces on the host.

This article installs Calico from the manifest file:

Version compatibility between calico and kubernetes:

Kubernetes version      Calico version    Requirements / manifest
1.18, 1.19, 1.20        3.18              https://projectcalico.docs.tigera.io/archive/v3.18/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.18/manifests/calico.yaml
1.19, 1.20, 1.21        3.19              https://projectcalico.docs.tigera.io/archive/v3.19/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.19/manifests/calico.yaml
1.19, 1.20, 1.21        3.20              https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.20/manifests/calico.yaml
1.20, 1.21, 1.22        3.21              https://projectcalico.docs.tigera.io/archive/v3.21/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml
1.21, 1.22, 1.23        3.22              https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml
1.21, 1.22, 1.23        3.23              https://projectcalico.docs.tigera.io/archive/v3.23/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml
1.22, 1.23, 1.24        3.24              https://projectcalico.docs.tigera.io/archive/v3.24/getting-started/kubernetes/requirements
                                          https://projectcalico.docs.tigera.io/archive/v3.24/manifests/calico.yaml

Download the manifest first; a few places need to be modified before applying it:

wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
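Note that this URL always serves the newest release. For the v1.18.3 cluster used here, the version table above points at the v3.18 manifest, so a safer download would be, for example:

wget https://projectcalico.docs.tigera.io/archive/v3.18/manifests/calico.yaml --no-check-certificate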

Some notes on the manifest's configuration:

The manifest installs the following Kubernetes resources:

  • A DaemonSet that runs the calico/node container on every host;
  • A DaemonSet that installs the Calico CNI binaries and network configuration on every host;
  • A Deployment that runs calico/kube-controllers;
  • Secret/calico-etcd-secrets, optional TLS material for Calico's connection to etcd;
  • ConfigMap/calico-config, the configuration parameters used when installing Calico.

(1)

The "CALICO_IPV4POOL_CIDR" section of the manifest

Set it to the same cidr as in the kube-proxy-config.yml file; in this example that is 10.244.0.0/16.

A reminder: this sets the default IPv4 pool created when Calico is installed, and PodIPs are chosen from this range.
Changing the value after Calico has been installed has no effect.
By default "CALICO_IPV4POOL_CIDR" is commented out in calico.yaml; if kube-controller-manager's "--cluster-cidr" has no value, the pool usually falls back to a default such as "192.168.0.0/16" or one of "172.16.0.0/16" ... "172.31.0.0/16".
When using kubeadm, the PodIP range should match the "podSubnet" field of the kubeadm init manifest or the "--pod-network-cidr" option.

- name: CALICO_IPV4POOL_IPIP
  value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
  valueFrom:
    configMapKeyRef:
      name: calico-config
      key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
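The edit can be scripted before applying the manifest. This is only a sketch and assumes the stock calico.yaml, where the two lines are commented out as '# - name: CALICO_IPV4POOL_CIDR' and '#   value: "192.168.0.0/16"':

# uncomment CALICO_IPV4POOL_CIDR and point it at this cluster's pod CIDR
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml    # verify the result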

(2)

calico_backend: "bird"

This selects the backend mechanism Calico uses. Supported values:
bird: enables BIRD; depending on the Calico-Node configuration, host networking is implemented either as BGP routing or as an IPIP/VXLAN overlay. This is the default.
vxlan: pure VXLAN mode; only the VXLAN overlay network is available.

# Configure the backend to use.
calico_backend: "bird"

Nothing else needs to be changed; the defaults are fine, and there is not much else worth tuning.

4.

Switching from flannel to calico

Run rm -rf /etc/cni/net.d/10-flannel.conflist on every node to remove the flannel CNI configuration, then apply the calico manifest and reboot the nodes. (Alternatively you can restart the relevant services and delete the flannel NIC and routes by hand, but that is more trouble.) The whole sequence is sketched below.
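A minimal sketch of that sequence, assuming the flannel manifest was saved locally as kube-flannel.yml (the filename is an assumption) and that rebooting the nodes is acceptable:

# 1. remove the flannel workloads (run on the master)
kubectl delete -f kube-flannel.yml

# 2. on EVERY node: drop the flannel CNI config
rm -rf /etc/cni/net.d/10-flannel.conflist

# 3. apply calico (on the master)
kubectl apply -f calico.yaml

# 4. reboot each node, or restart the container runtime/kubelet and clean up
#    the flannel.1/cni0 devices and old routes by hand
reboot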

Wait until the relevant pods are running normally:

[root@master ~]# k get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-57546b46d6-hcfg5 1/1 Running 1 32m
calico-node-7x7ln 1/1 Running 2 32m
calico-node-dbsmv 1/1 Running 1 32m
calico-node-vqbqn 1/1 Running 3 32m
coredns-76648cbfc9-zwjqz 1/1 Running 11 17h

Check the NICs:

[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:55:91:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.16/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe55:9106/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:51:da:97:25 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 4e:2f:8c:a7:d3:12 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether 2a:8d:65:11:8f:7a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.12/32 brd 10.0.0.12 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/32 brd 10.0.0.2 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.78/32 brd 10.0.0.78 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.102/32 brd 10.0.0.102 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.127/32 brd 10.0.0.127 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 brd 10.0.0.100 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: cali21d67233fc3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
7: calibbdaeb2fa53@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
8: cali29233485d0f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.235.192/32 brd 10.244.235.192 scope global tunl0
       valid_lft forever preferred_lft forever

Create a few test Services and pods; if they all run normally, the network plugin switch succeeded:

[root@master ~]# k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-server-85d885f474-jbggc 1/1 Running 0 65s
default hello-server-85d885f474-sx562 1/1 Running 0 65s
default nginx-demo-76c8bff45f-pln6h 1/1 Running 0 65s
default nginx-demo-76c8bff45f-tflnz 1/1 Running 0 65s

To wrap up:

A quick way to inspect the cluster's network configuration:

You can see that IPIP mode is in use and VXLAN is not enabled:

[root@master ~]# kubectl get ippools -o yaml
apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"uid":"85bfeb95-da98-4710-aed1-1f3f2ae16159","creationTimestamp":"2022-09-30T03:17:58Z"}'
    creationTimestamp: "2022-09-30T03:17:58Z"
    generation: 1
    managedFields:
    - apiVersion: crd.projectcalico.org/v1
      fieldsType: FieldsV1
      manager: Go-http-client
      operation: Update
      time: "2022-09-30T03:17:58Z"
    name: default-ipv4-ippool
    resourceVersion: "863275"
    selfLink: /apis/crd.projectcalico.org/v1/ippools/default-ipv4-ippool
    uid: 1886cacb-700f-4440-893a-a24ae9b5d2d3
  spec:
    blockSize: 26
    cidr: 10.244.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
