
Deploying k8s 1.18.20

As a k8s beginner, alongside studying the theory you should also work through an actual k8s deployment; the hands-on practice reinforces the theory. The steps are listed below for reference:

1. Environment Preparation

Three virtual machines:

OS: CentOS Linux release 7.9.2009 (Core)

Kernel: 3.10.0-1160.88.1.el7.x86_64

k8s-master01: 192.168.66.200

k8s-master02: 192.168.66.201

k8s-node01:    192.168.66.250

This guide installs k8s 1.18.20.

2. Server Initialization

For the initial server configuration, refer to my earlier 1.15.0 deployment guide:

K8s 1.15.0 版本部署安装_好好学习之乘风破浪的博客-CSDN博客
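
In case that article is unavailable, here is a minimal sketch of the usual kubeadm prerequisites, run on all three nodes; the linked article is the authoritative version and may differ in detail:

# disable the firewall and SELinux (or open the required ports instead in production)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# the kubelet refuses to start with swap enabled unless --fail-swap-on=false is set
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# make bridged traffic visible to iptables
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
sysctl --system

# make the hostnames resolvable on every node
cat >> /etc/hosts <<EOF
192.168.66.200 k8s-master01
192.168.66.201 k8s-master02
192.168.66.250 k8s-node01
EOF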

3. Install Docker

3.1 Install a specific Docker version

yum list docker-ce.x86_64 --showduplicates | sort -r
yum install -y containerd.io-1.6.18 docker-ce-23.0.1 docker-ce-cli-23.0.1

3.2 Create the Docker daemon.json file

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

3.3 Start Docker

systemctl enable docker; systemctl start docker; systemctl status docker

3.4 Check the installed Docker version

docker --version
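
Because the kubelet will be configured with the systemd cgroup driver in the next section, it is worth confirming that Docker actually picked up the daemon.json setting:

docker info --format '{{.CgroupDriver}}'    # should print: systemd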

4. Install the Kubernetes Packages

4.1 Install the software

yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20 ipvsadm

4.2 Configure the kubelet

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF

4.3 Enable and start the kubelet

systemctl enable kubelet && systemctl start kubelet

4.4 Check the version

kubeadm version

Confirm that it reports the version you just installed.
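
For a quick sanity check, the short output form prints just the version string:

kubeadm version -o short
# expected: v1.18.20

Note that the kubelet service will keep restarting until kubeadm init (or kubeadm join) generates its configuration; that is expected at this stage.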

5. Deploy the Cluster

Perform the following steps on the k8s-master01 node.

5.1 Generate the initial configuration

kubeadm config print init-defaults > kubeadm.yaml

Then edit the generated file; the comments below mark the lines that need to be changed or added:

[root@k8s-master01 ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.66.200    #### change to the k8s-master01 IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01                  #### change to the k8s-master01 hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.20           #### change to v1.18.20
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12         #### keep the default
  podSubnet: 10.2.0.0/16              #### added
scheduler: {}
---                                   #### added
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
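
Optionally, pull the control-plane images ahead of time; since imageRepository points at k8s.gcr.io, this surfaces any registry connectivity problems before the actual init:

kubeadm config images pull --config kubeadm.yaml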

5.2 Run the initialization

kubeadm init --config kubeadm.yaml

On success it prints output like the following:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.66.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8ba059662766394bbc081324dfba0bc6a1360687f99e93a1b5c7c3a1e6d53097 

5.3 Run the commands from the init output

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Check component status

kubectl get cs

The scheduler and controller-manager will likely show as Unhealthy. This does not affect cluster operation: the health check probes ports 10251 and 10252, but the default kube-scheduler.yaml and kube-controller-manager.yaml manifests pass --port=0, which disables those insecure ports.
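
Before the fix, kubectl get cs typically reports something like the following (the exact message text varies by patch version):

NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

To restore the health check, comment out the --port=0 line in both manifests and restart the kubelet: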

 

[root@k8s-master01 ~]# sed -i 's/- --port=0/#&/' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-master01 ~]# sed -i 's/- --port=0/#&/' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@k8s-master01 ~]# systemctl restart kubelet
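
Once the static pods have come back up, re-run the check:

kubectl get cs    # scheduler, controller-manager and etcd-0 should all report Healthy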

 

5.5 Deploy the flannel component

mirrors / coreos / flannel · GitCode

Alternatively, find flannel/Documentation/kube-flannel.yml on GitHub, copy its contents into a new file, upload it to the master, and save it with a .yml extension. Then adjust the network configuration:

# vi kube-flannel.yml

  net-conf.json: |
    {
      "Network": "10.2.0.0/16",      ####修改为podsubnet地址段
      "Backend": {
        "Type": "vxlan"
      }
    }

(Note: the image paths do not need to be changed.)
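
If your nodes cannot pull from docker.io directly, one option is to stage the two images as tarballs and load them on every node; the file names below (such as flannel-cni-plugin-v1.1.2.tar.gz) are illustrative:

# on a machine with registry access
docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2
docker pull docker.io/flannel/flannel:v0.21.3
docker save docker.io/flannel/flannel-cni-plugin:v1.1.2 | gzip > flannel-cni-plugin-v1.1.2.tar.gz
docker save docker.io/flannel/flannel:v0.21.3 | gzip > flannel-v0.21.3.tar.gz

# on each cluster node
docker load -i flannel-cni-plugin-v1.1.2.tar.gz
docker load -i flannel-v0.21.3.tar.gz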

[root@k8s-master01 ~]# cat kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.2.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.3
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.3
        #image: quay-mirror.qiniu.com/coreos/flannel:v0.21.3
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Apply the flannel manifest:

kubectl apply -f kube-flannel.yml

Check the flannel pods:

kubectl get pod -n kube-flannel  
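
There should be one kube-flannel-ds pod per node, all Running; the suffix in the pod name is random, so the output will look roughly like this illustration:

NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-xxxxx   1/1     Running   0          2m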

Check the pods and nodes:

kubectl get pod -n kube-system

kubectl get node
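
Once the flannel pods are Running, the master should move from NotReady to Ready; you can watch the transition:

kubectl get node -w    # Ctrl-C once STATUS shows Ready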

6. Add the Worker Node

Join the k8s-node01 node to the cluster:

kubeadm join 192.168.66.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8ba059662766394bbc081324dfba0bc6a1360687f99e93a1b5c7c3a1e6d53097 
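
The bootstrap token from kubeadm init expires after 24 hours (the ttl in kubeadm.yaml). If you join a node after that, generate a fresh join command on the master first:

kubeadm token create --print-join-command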

Check the node status by logging in to the k8s-master01 node and verifying that all components are running:

kubectl get pod -A -owide
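
As a final smoke test, you can confirm that the pod network and cluster DNS work end to end; the pod name and image below are just an illustration:

kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default    # wait for the pod to be Running first
kubectl delete pod busybox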
