
【考点】CKA 03 Kubernetes Version Upgrade: Upgrading Control Plane and Worker Nodes in a kubeadm Cluster

Official documentation: Upgrading kubeadm clusters

1. Before You Begin

  1. The basic upgrade workflow is:

    Upgrade the primary control plane node
    Upgrade any other control plane nodes
    Upgrade the worker nodes

  2. Choose one control plane node to upgrade first. That node must have the /etc/kubernetes/admin.conf file (a quick check is sketched after this list).

  3. Determine which version to upgrade to: use the OS package manager to find the latest patch release of Kubernetes 1.22 (see the command after this list).
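
A minimal sketch for these two checks (the --disableexcludes=kubernetes flag follows the official upgrade docs and assumes the standard Kubernetes yum repo; it may be unnecessary if your repo does not exclude these packages):

[root@k8s1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
[root@k8s1 ~]# ls /etc/kubernetes/admin.conf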

2. Upgrading the Control Plane Node

2.1 Upgrade kubeadm

  1. Install the new version of kubeadm:
[root@k8s1 ~]# yum install -y kubeadm-1.22.2-0
  2. Verify that the download works and that the kubeadm version is correct:
[root@k8s1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

2.2 Verify the Upgrade Plan

  1. This command checks whether your cluster can be upgraded and fetches the versions you can upgrade to. It also shows a table with the version states of the component configurations. (In short, it checks which versions are reachable.)
[root@k8s1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.22.1
[upgrade/versions] kubeadm version: v1.22.2
I0420 18:17:29.418051   14782 version.go:255] remote version is much newer: v1.23.5; falling back to: stable-1.22
[upgrade/versions] Target version: v1.22.8
[upgrade/versions] Latest version in the v1.22 series: v1.22.8

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.22.1   v1.22.8

Upgrade to the latest version in the v1.22 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.22.1   v1.22.8
kube-controller-manager   v1.22.1   v1.22.8
kube-scheduler            v1.22.1   v1.22.8
kube-proxy                v1.22.1   v1.22.8
CoreDNS                   v1.8.4    v1.8.4
etcd                      3.5.0-0   3.5.0-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.22.8

Note: Before you can perform this upgrade, you have to update kubeadm to v1.22.8.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

2.3 Run kubeadm upgrade

  1. Choose the target version and run the appropriate command. Note that the plan above recommends v1.22.8, but since the kubeadm we installed is v1.22.2, we apply v1.22.2 here. For example:
[root@k8s1 ~]# kubeadm upgrade apply v1.22.2
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.2"
[upgrade/versions] Cluster version: v1.22.1
[upgrade/versions] kubeadm version: v1.22.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.2"...
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-controller-manager-k8s1 hash: 38ce0f3c67dad9543cd2b77fe2535391
Static pod: kube-scheduler-k8s1 hash: 059f507ac11831e865a5bbde108257cd
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s1 hash: 0ec5717bc681762beb1d11ba2da3aa34
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests011680339"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-04-20-18-19-39/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-apiserver-k8s1 hash: bffe53f30eb1fe1d65832eb152b1d1aa
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-04-20-18-19-39/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s1 hash: 38ce0f3c67dad9543cd2b77fe2535391
Static pod: kube-controller-manager-k8s1 hash: 38ce0f3c67dad9543cd2b77fe2535391
Static pod: kube-controller-manager-k8s1 hash: ccc9a7c02cb35434c2475dc0600f58f4
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-04-20-18-19-39/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s1 hash: 059f507ac11831e865a5bbde108257cd
Static pod: kube-scheduler-k8s1 hash: 0f92a29a90afeff2dcccce64190f29e6
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  2. Once the command finishes, you should see:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  3. For the other control plane nodes:
  • Proceed the same way as on the first control plane node, but use
kubeadm upgrade node
  • instead of
kubeadm upgrade apply
  • Also, running kubeadm upgrade plan is not needed on those nodes. A sketch of the full per-node sequence is shown below.
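
A minimal sketch of the whole sequence on an additional control plane node (the node name k8s-cp2 is hypothetical; this demo cluster has only one control plane node, so this is for reference only):

# On the additional control plane node, upgrade kubeadm and apply the node upgrade:
[root@k8s-cp2 ~]# yum install -y kubeadm-1.22.2-0
[root@k8s-cp2 ~]# kubeadm upgrade node
# From a node that has admin.conf, drain it:
[root@k8s1 ~]# kubectl drain k8s-cp2 --ignore-daemonsets
# Back on k8s-cp2, upgrade kubelet and kubectl, then restart the kubelet:
[root@k8s-cp2 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
[root@k8s-cp2 ~]# systemctl daemon-reload
[root@k8s-cp2 ~]# systemctl restart kubelet.service
# Finally, make the node schedulable again:
[root@k8s1 ~]# kubectl uncordon k8s-cp2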

  • At this point the nodes are still Ready (the versions still show v1.22.1 because kubelet has not been upgraded yet):

[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE     VERSION
k8s1   Ready    control-plane,master   23m     v1.22.1
k8s2   Ready    <none>                 10m     v1.22.1
k8s3   Ready    <none>                 9m38s   v1.22.1

2.4 Drain the Node

  1. Prepare the node for the upgrade by marking it unschedulable and evicting its workloads:
[root@k8s1 ~]# kubectl drain k8s1 --ignore-daemonsets
node/k8s1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-jjnbp, kube-system/kube-proxy-r2z2t
evicting pod kube-system/coredns-7f6cbbb7b8-z42dt
evicting pod kube-system/coredns-7f6cbbb7b8-44762
pod/coredns-7f6cbbb7b8-44762 evicted
pod/coredns-7f6cbbb7b8-z42dt evicted
node/k8s1 evicted
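
If a drain is blocked by pods that use emptyDir volumes or that are not managed by a controller, extra flags are needed; a hedged variant (not required in this walkthrough):

[root@k8s1 ~]# kubectl drain k8s1 --ignore-daemonsets --delete-emptydir-data --force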
  2. Now the node status shows SchedulingDisabled (no new pods will be scheduled onto this node):
[root@k8s1 ~]# kubectl get node
NAME   STATUS                     ROLES                  AGE   VERSION
k8s1   Ready,SchedulingDisabled   control-plane,master   25m   v1.22.1
k8s2   Ready                      <none>                 12m   v1.22.1
k8s3   Ready                      <none>                 11m   v1.22.1

2.5 Upgrade kubelet and kubectl

  1. Upgrade the packages:
[root@k8s1 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
  2. Reload systemd and restart the kubelet daemon:
[root@k8s1 ~]# systemctl daemon-reload 
[root@k8s1 ~]# systemctl restart kubelet.service 

2.6 Uncordon the Node

  1. Check the node status before uncordoning; k8s1 is still SchedulingDisabled:
[root@k8s1 ~]# kubectl get node
NAME   STATUS                     ROLES                  AGE   VERSION
k8s1   Ready,SchedulingDisabled   control-plane,master   27m   v1.22.1
k8s2   Ready                      <none>                 14m   v1.22.1
k8s3   Ready                      <none>                 13m   v1.22.1
  2. Bring the node back online by marking it schedulable:
[root@k8s1 ~]# kubectl uncordon k8s1
node/k8s1 uncordoned
  3. Check the node status and version after uncordoning; k8s1 has been upgraded successfully:
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
k8s1   Ready    control-plane,master   27m   v1.22.2
k8s2   Ready    <none>                 14m   v1.22.1
k8s3   Ready    <none>                 13m   v1.22.1

3. Upgrading Worker Nodes

  • Worker nodes should be upgraded one at a time, or a few at a time, without compromising the minimum capacity required to run your workloads.

  • Because this cluster has only 2 worker nodes, k8s2 and k8s3 must not be drained at the same time; otherwise the evicted pods would have nowhere to run. A quick way to check what is running on a node before draining it is sketched below.
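
A minimal check (not part of the original walkthrough) to see which pods run on the node you are about to drain; the field selector below is standard kubectl:

[root@k8s1 ~]# kubectl get pods -A -o wide --field-selector spec.nodeName=k8s2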

The upgrade procedure for the first worker node is as follows:

3.1 Upgrade kubeadm

[root@k8s2 ~]# yum install -y kubeadm-1.22.2-0

3.2 Run kubeadm upgrade

  • For worker nodes, the following command upgrades the local kubelet configuration:
[root@k8s2 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

3.3 Drain the Node

  1. Mark the node unschedulable and evict its workloads to prepare it for maintenance.
  2. Drain the first worker node, k8s2 (run this from the control plane node):
[root@k8s1 ~]# kubectl drain k8s2 --ignore-daemonsets
node/k8s2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-q2lkh, kube-system/kube-proxy-9fwlc
evicting pod kube-system/coredns-7f6cbbb7b8-8g8v5
pod/coredns-7f6cbbb7b8-8g8v5 evicted
node/k8s2 evicted

3.4 Upgrade kubelet and kubectl

  1. Upgrade kubelet and kubectl:
[root@k8s2 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
  2. Restart kubelet on worker node k8s2:
[root@k8s2 ~]# systemctl daemon-reload 
[root@k8s2 ~]# systemctl restart kubelet.service 

3.5 Uncordon the Node

  1. Bring the node back online by marking it schedulable:
[root@k8s1 ~]# kubectl uncordon k8s2
node/k8s2 uncordoned
  2. Check the node status; k8s2 has been upgraded successfully:
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
k8s1   Ready    control-plane,master   34m   v1.22.2
k8s2   Ready    <none>                 21m   v1.22.2
k8s3   Ready    <none>                 20m   v1.22.1

The upgrade procedure for the second worker node is as follows:

3.1 Upgrade kubeadm

[root@k8s3 ~]# yum install -y kubeadm-1.22.2-0

3.2 Run kubeadm upgrade

[root@k8s3 ~]# kubeadm upgrade node

3.3 Drain the Node


[root@k8s1 ~]# kubectl drain k8s3 --ignore-daemonsets
node/k8s3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-d9r55, kube-system/kube-proxy-vwp45
evicting pod kube-system/coredns-7f6cbbb7b8-n7m5m
evicting pod kube-system/coredns-7f6cbbb7b8-mx7qr
pod/coredns-7f6cbbb7b8-mx7qr evicted
pod/coredns-7f6cbbb7b8-n7m5m evicted
node/k8s3 evicted

3.4 Upgrade kubelet and kubectl

  1. Upgrade the packages:
[root@k8s3 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
  2. Reload systemd and restart kubelet:
[root@k8s3 ~]# systemctl daemon-reload 
[root@k8s3 ~]# systemctl restart kubelet.service 

3.5 Uncordon the Node

  1. Bring the node back online by marking it schedulable:
[root@k8s1 ~]# kubectl uncordon k8s3
node/k8s3 uncordoned

4. Verify the Cluster Status

  • After kubelet has been upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster.
  • The STATUS column should show Ready for all nodes, and the version numbers should be updated.
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
k8s1   Ready    control-plane,master   36m   v1.22.2
k8s2   Ready    <none>                 23m   v1.22.2
k8s3   Ready    <none>                 22m   v1.22.2
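
As an extra sanity check (not part of the original walkthrough), the kube-system pods and the client/server versions can also be inspected:

[root@k8s1 ~]# kubectl get pods -n kube-system
[root@k8s1 ~]# kubectl version --short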

Note!!!

The exam mainly tests upgrading the control plane node; the worker nodes do not need to be touched!
Do not change anything that is not required!
