
Cloud Native | Kubernetes | Kubernetes Cluster Upgrade + Certificate Renewal (Ubuntu 18.04 + kubeadm)

Preface:

Kubernetes clusters are generally classified by how they were deployed: clusters built with kubeadm and clusters built from binaries. For a binary cluster, upgrading and renewing certificates is an entirely manual job, whereas a kubeadm cluster can be upgraded largely automatically (there are a few manual steps, but not many) and can have its certificates renewed automatically as well.

The purpose and value of upgrading a Kubernetes cluster:

1. Security vulnerabilities in certain versions

For example, the Kubernetes CVE-2022-3172, CVE-2019-11247, CVE-2018-1002105 and CVE-2020-8559 vulnerabilities, among others.

These advisories can be looked up on Alibaba Cloud; see the "CVE vulnerability fix announcements" page of the Container Service for Kubernetes (ACK) documentation in the Alibaba Cloud help center.

In general, whether it is a Kubernetes cluster or any other kind of cluster, or software such as Tomcat, Elasticsearch or SSH, and likewise operating systems such as CentOS 7, Ubuntu and Debian, security vulnerabilities are almost always mitigated by upgrading to a newer version.

As an aside: from a software author's point of view, the motivation for releasing new versions is partly the pressure of security vulnerabilities and partly more and newer features plus a nicer interface. From a user's point of view, the first motivation to upgrade is security, then the new features, and only last the prettier interface.

2. More powerful functionality

For example, Kubernetes has grown from its early 1.x releases to today's 1.26. There is clearly far more functionality, some releases are simply easier to use, and that translates into better productivity. Many of those new features can only be had by upgrading the cluster.

3. A nicer interface

For example, the Kubernetes Dashboard component looks different in every release. The overall style stays similar, but a particular version of the web UI may simply win some people over, and getting it also requires a version upgrade (of the component, in this case).

4. Certificate renewal

For a kubeadm cluster, upgrading also renews the expiry dates of the certificates inside the cluster; at times this rather specific mechanism is itself a small extra motivation to upgrade.




OK, the above is a brief summary of why a Kubernetes cluster gets upgraded and what the upgrade is good for. Below, a kubernetes-1.22.0 cluster deployed on Ubuntu 18.04 is upgraded step by step (the tooling is first moved to 1.22.2 and the control plane is then taken to 1.22.10), showing how to upgrade a kubeadm cluster and renew all of its certificates.

I. Environment

Operating system: Ubuntu 18.04

Cluster architecture: three nodes (one master, two workers), initial Kubernetes version 1.22.0, deployed with kubeadm

IP addresses: 192.168.123.150, 192.168.123.151, 192.168.123.152

Cluster state: all three nodes are down because the cluster certificates have expired

II. Kubernetes clusters and certificates

For security reasons Kubernetes has relied on RBAC (role-based access control) almost from the start; kubeadm clusters have had RBAC enabled by default since roughly v1.6, and the RBAC API went GA in v1.8. As a result, communication between the cluster components is certificate-based. Take a kubelet kubeconfig as an example (an arbitrary kubelet.conf):

  1. cat /etc/kubernetes/kubelet.conf
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  6. server: https://192.168.123.150:6443
  7. name: kubernetes
  8. contexts:
  9. - context:
  10. cluster: kubernetes
  11. user: system:node:k8s-master
  12. name: system:node:k8s-master@kubernetes
  13. current-context: system:node:k8s-master@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: system:node:k8s-master
  18. user:
  19. client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKVENDQWcyZ0F3SUJBZ0lJS0h2czRzK3FaR0l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTURneApGVEFUQmdOVkJBb1RESE41YzNSbGJUcHViMlJsY3pFZk1CMEdBMVVFQXhNV2MzbHpkR1Z0T201dlpHVTZhemh6CkxXMWhjM1JsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHRQVkZIdWJ1KzgKNkJCZ0I4UGJadzV0OGF2S3VxeklQMHRPN2tRbVN0ZkpTbjNxdjFHS0xwL2U4UllLbkpuUFRzaytYKzRPcitxSwoyeDhXQWxKUGtjQUFXNEJxUlZOT3R5WDhaZHRYc3J2d3VVaGRoZEZkU3dFV3ZzaWRVOEJJamRwQktKVkN0dHR0CkJPQ0hobXBSY0VyQ1JqYkt2Zy90WE9YcDE1cmx0aS92ZHJHaXpKRUt2cUJQZElpVU05UUh3WEZqdmFqMXFnT2YKaUpraTRlZDBnZ3AwSmtxM0grSXp6MFZNUG9YdTJXdnBTTU81dmtGbi9Lay92bUxhaHRwaTlJeCtPN1VMYUxTNQpsdEVGaVJXQnEyT1R2N3YzMndEVUJUcFdwN25wWlU0WE9ra3ltaW5VT3E3ZGhVZWxrQTMwWVhucjB0MklwOWI4ClgxRVNuSEtKMHMwQ0F3RUFBYU5XTUZRd0RnWURWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0cKQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVV2eG8wcGdUZHpuLzZQMzJPdjliSwo2bm5XK3Rnd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFORk5MVXA5SzRMM0s0NTRXbmp2OHprNVVhREU0ZGovCnpmRXBRdHRWSHRLV3VreHZTRWRRTG15RGZURDFoNm8veGFqNkc3Wk1YbmFWUTQ1WmkzZ3F5ck1ibTdETHhxWDYKV0pLdkZkNUJNY2F4YW16dWhoN0I4R2xrMkNsNUZsK3Z0QnUxREtya293blpydFBTeGFaVjhsUmo2bmFHU1k4RQo4RVVqUWN5VXF3Z2duZWwwanNoaFVKOGdKMHV0MXN5UVAxWEJJcEpsTEZ5b0dDQmNuWkFvdE9oWnFWTWwxcTdOClQ5aVZEVy9IZ2xPbll0WktTbXREN2JvMk4rSDZxNmhaUVFJWmVzWVJxNUoxV0IwclR5SkkzbHJEV3J2QWRrcDEKTUZrekJJK3d2WHF2MEdtTFNYNzRudU4wZnY2K0VvQkdUakFVbkNIdUxvQ0RQNmQxTVBYL3ZZcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  20. client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdTA5VVVlNXU3N3pvRUdBSHc5dG5EbTN4cThxNnJNZy9TMDd1UkNaSzE4bEtmZXEvClVZb3VuOTd4RmdxY21jOU95VDVmN2c2djZvcmJIeFlDVWsrUndBQmJnR3BGVTA2M0pmeGwyMWV5dS9DNVNGMkYKMFYxTEFSYSt5SjFUd0VpTjJrRW9sVUsyMjIwRTRJZUdhbEZ3U3NKR05zcStEKzFjNWVuWG11VzJMKzkyc2FMTQprUXErb0U5MGlKUXoxQWZCY1dPOXFQV3FBNStJbVNMaDUzU0NDblFtU3JjZjRqUFBSVXcraGU3WmErbEl3N20rClFXZjhxVCsrWXRxRzJtTDBqSDQ3dFF0b3RMbVcwUVdKRllHclk1Ty91L2ZiQU5RRk9sYW51ZWxsVGhjNlNUS2EKS2RRNnJ0MkZSNldRRGZSaGVldlMzWWluMXZ4ZlVSS2Njb25TelFJREFRQUJBb0lCQVFDcHNWUFZxaW9zM1RwcwpZMk9GaDhhVXB2d3p3OFZjOVVtS1EyYk9yTlpQS2hobmZQMTR0TFJLdCtJNE1zTHZBWVlDQVpWTkNWZE1LQ0lkCnhvV3g1azVINE1zRXlzSWxtQUdLMDErLzJIS2ZtNVZ3UHZJVjIrd3dmMWUyVGZuckVKQWFzNzg5Z2lSQkpFSXYKMi9mbGFBUlFaakxRUHRyemVQb1pmTUdNbmlGd3lIVVJVQWZkL0U0QlFGNS8zUWVFeEpEVkNtaEZEOG5YRXlqRwpLSG5CSlI1TGFCeFpPcmh0bDVqckRZRGxtaWcyZGNPY21TOU5xN2xCUVExb3NmR1FhbjlTODQ1VnlFdjQvcWZrCjAwZkpJN0JpSStDcWNtM0lHRmhNMG5lNXlvVE8zSEo4Sy94bTdpRmxIcVg1cXZaYjJnc2hTRzZIR1E4YjZPV2UKT09id0xxZnRBb0dCQU1VNisvaEdsaVJrNGd4S3NmUE12dERsV3hGTFVWaTdlK1BGQlJ6Zm5FRlh3U055dlI4ZgpYTmlWZStUOFQvcTgwUnZrTmQ1QVNnd1NBTkwraUZnN0R4SXIzUC9GSkc5SjYwcURYMDZzZi9tKzM5c2VzZFp0Ck9Fanl6UXBmNnVtRGVKMHhvNDlrVWtiTkNvQUNPamE4QndDeFdRcjJHb0ZyL0ptZGEra0JteWRUQW9HQkFQTWYKbDkzbXo1QUpiVkE3cVRWckRiVDNDWEsxRjQyV0RuQ0FDRU5kcE8vZlhOZWtuWEhWcjZiRUl6UTluelQ4R2hmOApUUlBtb1VJTVNVVGpHZmdUMTlkbTliODdZb2NkMENCd0pZejlwcmtRaUpMdm13QS82cytQalkvVUwvSHdrT05QCllvcm9YclB5WVpIeHNjMW54di9lRWJic0UyWjdVSXpXbGpxK0lIbGZBb0dBYlN1OUZTeGRKMEFBTDdXWTBzNWUKUU5yemtac1RKLzUvRVJDWlIrWXVZNnpqWjIrM1oyYkF5ZEhVaG1kekRlTStEQ1pCK3dleTlRTnlHVmh5dUFQWQp6OElmemlPZGkweHJSUTk2emQyRjZRUFNmVU44Uktpb0l4amlqZitSMURmRnA1MDJYOFMwRmlTZ3owSnNYcWV0CmFLRENIT01rd01hNVIzNXZvTVlXejZrQ2dZRUE1TWdiR2ZhRDVjL3BMUEluaFp3SzF2c015Z045ZVgvMmNJa2EKdllIV250ODZ0N1l4YnBpZDVUbDJ3MGNsbFMrU3duVnFkc3ExZnJpZkRoTURNZjVDUTNHZzJXWmhqakpRMHVXVgpnSHFFdEd2SmlUT3VVV3JVWktONm5Ca1pVUHVHN0ZDY3M0aDg3YXF0aEMvRG1ENEs5bVlibDEzSjE4czgvbnRECi9WMUNvOU1DZ1lFQWlCb2tPTEFBNlFDbHNYOXcwNlMrZ1pXT0FpbDlRRXhHalhVR2o0ZEtielpibmgrWnkwbUoKNW9LRHpydTFKeGdoa1JtQ0ZiZmx6aHpMMktWU2xIbXNPYWZDSU1JSHJSZTdic0gvdjg2SVpkYXlnTWxLckJ2LwpXUVMxdmJoQjVvNGdwRkE5aG0rMFcwTW9ZOUVsaERPUG52ajFxT2lTRVArN0dXdDAxOW5HdWdzPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

As you can see, this configuration file carries three pieces of certificate material (the cluster CA plus the kubelet client certificate and key) used to authenticate to the api-server. If the file has no certificates, or the certificates have expired, the kubelet service will not start.
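A quick way to confirm when the client certificate embedded in kubelet.conf expires is to decode it and hand it to openssl. A minimal sketch, assuming openssl is installed on the node:

# print the notAfter date of the client certificate embedded in kubelet.conf
grep 'client-certificate-data' /etc/kubernetes/kubelet.conf \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -enddate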

Likewise, the controller-manager carries certificates to talk to the api-server. If the certificates in its configuration file are wrong or expired, the controller-manager will not start either, even though it runs as a static pod.

  1. cat /etc/kubernetes/controller-manager.conf
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  6. server: https://192.168.123.150:6443
  7. name: kubernetes
  8. contexts:
  9. - context:
  10. cluster: kubernetes
  11. user: system:kube-controller-manager
  12. name: system:kube-controller-manager@kubernetes
  13. current-context: system:kube-controller-manager@kubernetes
  14. kind: Config
  15. preferences: {}
  16. users:
  17. - name: system:kube-controller-manager
  18. user:
  19. client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lJVXJjNDVZUVF1NE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUtkU2d1d1NDMVhVSmlvRjIvbFhaNHQwWjBQRVEvc0QKWk9ZTzdPdlUwbHVSYngzcW1VaEN6NjlPbFBtY0h0OUllRHl2a09qYzFBZ2FlZWwzR3VnSjlKZ2xFM0RmZm5hegpZVVBrOXVzTVBIQjZtMUhNR2ZjUVloZkhLUG9TNmk5bEVWZUhKOXEvOTVaSUdGKzVMR0F2eXVBUWg1NmZYT1hrCjRZRTRsSmYzRGhzdGRNeEtDNXVZTXpxazR3RmlNVkNxYkRhdlVqc3VmZzhhYlFQQWFVQ3NieWFFMm04RVllOGgKb2VkTkdVdml5WkxrQUM2ckM4bGVIeGNBbjB2ZlVvYXU3UzJFdWpNK01jV3RLZ1o4NmVVdjJaU2dXemFCa1VWZAo4QlRxK0VDOUtlb1pWRHJnUlBpejVub3FsVDBYTTJacUVJK01Ud1lTRWt3aTJ5WlpqNS9yWFQ4Q0F3RUFBYU5XCk1GUXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIKL3dRQ01BQXdId1lEVlIwakJCZ3dGb0FVdnhvMHBnVGR6bi82UDMyT3Y5Yks2bm5XK3Rnd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR0d3NGxaaTZDSzB5YWpsOGNlUEg3a20vY1VLcGFFTld1aWJyVTQyNEVDM1A0UkFZZlJXCmRranZjNm9JbVhzR2V2T2Y0ZWlJU0dxaWNOb254N2RkWUxLY2tDaWxLQmcwS2hyZGFRT3o3N3ZCQitvamczbmgKMHByb05oYW12dkVpc0lUY212cmdzNTZqMk1Id2lUK3ZHeXFHbWxPOG9TRHZmWVFnMUVqTkRxWlVEd0g3OFlHYwowT0h5cXU3SW1hYngvKzdWOGcvMmlBS3NEVVVja3I3UHVMWWI3RlA0ZlZvVjlDWkIzVHI3bXFRQ2FrUmJmMnF1CjUvd3pEMG9lYjFBeHV0aUFSVjBlM2JBZUxXV0tqckEyNW9ISVBCRW1zTEFQSmtlMDVlRk9LK05ZUHBMdjBNU04KVnlzaXEzcVl0RkxZSzRaN0kyaGgrKzc5MXM4Y2g1TDNFeGM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  20. client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcDFLQzdCSUxWZFFtS2dYYitWZG5pM1JuUThSRCt3Tms1ZzdzNjlUU1c1RnZIZXFaClNFTFByMDZVK1p3ZTMwaDRQSytRNk56VUNCcDU2WGNhNkFuMG1DVVRjTjkrZHJOaFErVDI2d3c4Y0hxYlVjd1oKOXhCaUY4Y28raExxTDJVUlY0Y24yci8zbGtnWVg3a3NZQy9LNEJDSG5wOWM1ZVRoZ1RpVWwvY09HeTEwekVvTAptNWd6T3FUakFXSXhVS3BzTnE5U095NStEeHB0QThCcFFLeHZKb1RhYndSaDd5R2g1MDBaUytMSmt1UUFMcXNMCnlWNGZGd0NmUzk5U2hxN3RMWVM2TXo0eHhhMHFCbnpwNVMvWmxLQmJOb0dSUlYzd0ZPcjRRTDBwNmhsVU91QkUKK0xQbWVpcVZQUmN6Wm1vUWo0eFBCaElTVENMYkpsbVBuK3RkUHdJREFRQUJBb0lCQUZ0Vm9QMjRBOVFBRUMwVQpNYlZ6enFQREVMTmZLVFNWNzdmZElkckJ1Mm9jZ3lremJDU1R3OGFRQUtZWVlJbkZoMHlwRVZMcmFCcGNTWHYxCmRneC9rcktTV29CY255MndVVUc4ZEVSdDAzZ2FsVG9iVFhrZHlrM3NleU8ydTNyUGtwM1N1eUNmZFVqbFpkaXEKdmR4cmVqVEJFU2EzR3dDcTVhV2grd3JRNHpSVnc5eERTVGF0MXA1cS82UFRPcHhkWElJRWhHQVd4SEE2RnVnSgpuR2RrMWFxT1ZweFFSZVZ1QVluNVlYemZ0MXByU0wwdWQwOFRNd3FaRUowa0dabUdIODJDV0tTRnQ4cEs1MnJ2CnpURDFnWE5hZzhrNGF3czhrQXpnT1lWek8wMHR2SjdVQjNFcVBSSThKQVJucWpyTGpNaTRqeDVMZGFQZkc2VUQKMERwNFdsRUNnWUVBelRYT2FJdUQ2QUhkaTNPaFlpQ2puOUwwM3ZOUGY1c1JtRmJzclc2UFFxdEVTTit2aVFxMApFMVREU2REM3BpSndqSC9zQTNhRW5ISStRZVYxT1ZFbVR3YkxpaFhOVWtYTWo2OWRmVHdxYTd1MnJEWm0zQnF6CkVrYUI3N0FHb3BvR2hRajBHcGMzTnY5MDZwN3orekVEKzFSajFaeGtsaXVXQ2NDMzViVlZqejBDZ1lFQTBMd2QKa1VYdGxhK0hWb2dicFJsRlE2dTVtbVFzRXpMYWdrQkFoRDFqYkpORWtVL1o0RjJGNkwwVTN0SWtWNWRyRG9XRQowbUo5TmdELzhPWVdXbTZSdVo1bVhPZlRSOVdkc2k5MGpUNU9NRkJHbHVMdlUyNllhTEhUdmMwbHdiQlVPelJECjJvb2ZEZURjU3Y2d1oxSVU3QlQ2YTA4dWtON0FpNVR6ZWhrQ1ppc0NnWUVBdWVNeHRKWWN5TDlYMW9qSitiK2oKT0pXNTUzUHo0WjJ3bEpTNUZHbUFNRjVBSHRzeGdTeEc3dlByYXlSMkVQSkZqYUFiUlEvSkZJYVFTdFQyR1JPZgpaaHE3cWJ3U0g2TEdxS21zUUZPT0FjVXF0bGtaVit4L3BlQmt0NkIyZ2ppUUMxYU8rTDlkN3QzOUpNTVVNOGkwCjJLZ2JQMWJKN3haUWRVa3p6RXMwMCtrQ2dZQlJraUlQNG43dEx4STVpNmthQk4wZmk5MVZhMjRaOXBhVHJoNUkKVDJFcVRnYk9ycURiWUZEeldlanRCcnd6Q3JaSWozOFBaSFBBQmZYL0l6dDdEWmlmTERxZWRlNElOWCtSNFordgpqcmlwZ3NXRE01NEpRY0FIc2U2b1RxSkJwZkhVelNEekoyVHBYSVZhUFZ1Y2xPUWVPamgrZFF3aWl4bzlzZkRRCk56UEx6d0tCZ1FDaVU2QUZ4NjdPdW1jRGtwK21tRXpYNWpsQmtlZk9XakJlNlZ4ZGwzNjA1TjJ6YS9CbDlINzEKZ3pjdlZhQTY5RC9uVG50bE1xaWhnUm1NTGdEMGovOWkrV0VkMzR2Y0JTOHI2M1VZZVRvUWRoakJCZC8yeUM5NAppQ0ZiNlZ0dGRrSmg5SEhkV1pTRkxwQ0FmTnVlMVRxclA5L1RobTNRaTFjM2lZZVJUblh0YkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

III. Error messages

The cluster is clearly not up:

  1. root@k8s-master:~# kubectl get no
  2. The connection to the server 192.168.123.150:6443 was refused - did you specify the right host or port?

The system log reports the following; the certificates had already expired on December 8:

  1. Dec 12 23:11:18 k8s-master kubelet[2750]: I1212 23:11:18.900395 2750 server.go:868] "Client rotation is on, will bootstrap in background"
  2. Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.903330 2750 bootstrap.go:265] part of the existing bootstrap client certificate in /etc/kubernetes/kubelet.conf is expired: 2022-12-08 06:32:35 +0000 UTC
  3. Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.905482 2750 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or
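These kubelet messages come from the systemd journal and can be pulled up at any time, for example:

journalctl -u kubelet --no-pager -n 50    # last 50 kubelet log lines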

Check the certificate expiry times to confirm once more:

  1. root@k8s-master:~# kubeadm certs check-expiration
  2. [check-expiration] Reading configuration from the cluster...
  3. [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
  4. [check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
  5. CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
  6. admin.conf Dec 08, 2022 06:32 UTC <invalid> no
  7. apiserver Dec 08, 2022 06:32 UTC <invalid> ca no
  8. apiserver-etcd-client Dec 08, 2022 06:32 UTC <invalid> etcd-ca no
  9. apiserver-kubelet-client Dec 08, 2022 06:32 UTC <invalid> ca no
  10. controller-manager.conf Dec 08, 2022 06:32 UTC <invalid> no
  11. etcd-healthcheck-client Dec 08, 2022 06:32 UTC <invalid> etcd-ca no
  12. etcd-peer Dec 08, 2022 06:32 UTC <invalid> etcd-ca no
  13. etcd-server Dec 08, 2022 06:32 UTC <invalid> etcd-ca no
  14. front-proxy-client Dec 08, 2022 06:32 UTC <invalid> front-proxy-ca no
  15. scheduler.conf Dec 08, 2022 06:32 UTC <invalid> no
  16. CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
  17. ca Dec 06, 2031 06:32 UTC 8y no
  18. etcd-ca Dec 06, 2031 06:32 UTC 8y no
  19. front-proxy-ca Dec 06, 2031 06:32 UTC 8y no
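Before renewing anything in the next section, it is prudent to back up the existing certificates, keys and kubeconfig files so there is something to roll back to. A minimal sketch, with backup paths that are only an example:

# back up the kubeadm PKI material and kubeconfig files
cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)
# back up the etcd data as well, since etcd runs on this master node
cp -a /var/lib/etcd /var/lib/etcd.bak-$(date +%F)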

IV. Upgrading the Kubernetes cluster

1. Add the Alibaba Cloud apt sources:

cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
# Aliyun mirrors
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
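If the following apt-get update complains about a missing GPG key (NO_PUBKEY) for the Kubernetes repository, the mirror's signing key usually has to be imported first. A hedged sketch, assuming the key sits at the mirror's standard path:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -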

Update the apt package indexes:

sudo apt-get update

The output is shown below. There are warnings, which can be ignored or cleaned up: they mean the same repository target is configured in two files, /etc/apt/sources.list and the kubernetes.list we just created, so removing the duplicated Ubuntu entries from one of the two files (for example, deleting /etc/apt/sources.list) makes the warnings go away.

  1. root@k8s-master:~# sudo apt-get update
  2. Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
  3. Hit:2 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease
  4. Hit:3 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
  5. Hit:4 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease
  6. Hit:5 http://mirrors.aliyun.com/ubuntu bionic-security InRelease
  7. Hit:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease
  8. Get:7 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease [242 kB]
  9. Get:8 http://mirrors.aliyun.com/ubuntu bionic/universe Sources [9,051 kB]
  10. Get:9 http://mirrors.aliyun.com/ubuntu bionic/restricted Sources [5,324 B]
  11. Get:10 http://mirrors.aliyun.com/ubuntu bionic/main Sources [829 kB]
  12. Get:11 http://mirrors.aliyun.com/ubuntu bionic/multiverse Sources [181 kB]
  13. Get:12 http://mirrors.aliyun.com/ubuntu bionic-updates/restricted Sources [33.1 kB]
  14. Get:13 http://mirrors.aliyun.com/ubuntu bionic-updates/universe Sources [486 kB]
  15. Get:14 http://mirrors.aliyun.com/ubuntu bionic-updates/multiverse Sources [17.2 kB]
  16. Get:15 http://mirrors.aliyun.com/ubuntu bionic-updates/main Sources [537 kB]
  17. Get:16 http://mirrors.aliyun.com/ubuntu bionic-backports/universe Sources [6,600 B]
  18. Get:17 http://mirrors.aliyun.com/ubuntu bionic-backports/main Sources [10.5 kB]
  19. Get:18 http://mirrors.aliyun.com/ubuntu bionic-security/restricted Sources [30.2 kB]
  20. Get:19 http://mirrors.aliyun.com/ubuntu bionic-security/multiverse Sources [10.6 kB]
  21. Get:20 http://mirrors.aliyun.com/ubuntu bionic-security/main Sources [288 kB]
  22. Get:21 http://mirrors.aliyun.com/ubuntu bionic-security/universe Sources [309 kB]
  23. Get:22 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Sources [8,164 B]
  24. Get:23 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Sources [9,428 B]
  25. Get:24 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Sources [75.6 kB]
  26. Get:25 http://mirrors.aliyun.com/ubuntu bionic-proposed/main amd64 Packages [145 kB]
  27. Get:26 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Translation-en [32.2 kB]
  28. Get:27 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted amd64 Packages [132 kB]
  29. Get:28 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Translation-en [18.5 kB]
  30. Get:29 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe amd64 Packages [11.0 kB]
  31. Get:30 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Translation-en [6,676 B]
  32. Fetched 12.5 MB in 14s (878 kB/s)
  33. Reading package lists... Done
  34. W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2
  35. W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2

2. Reinstall kubelet, kubeadm and kubectl at version 1.22.2

sudo apt-get install kubelet=1.22.2-00  kubeadm=1.22.2-00 kubectl=1.22.2-00  -y
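To keep a routine apt upgrade from pulling these packages to some unintended version later on, it is common practice to pin them. A small sketch:

# hold the Kubernetes packages at the installed version
sudo apt-mark hold kubelet kubeadm kubectl
# release the hold again right before the next planned upgrade:
# sudo apt-mark unhold kubelet kubeadm kubectl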

3. Start the upgrade

First renew the certificates so that kubelet can be brought back up:

kubeadm  certs renew all

The output is as follows:

  1. [renew] Reading configuration from the cluster...
  2. [renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
  3. [renew] Error reading configuration from the Cluster. Falling back to default configuration
  4. certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
  5. certificate for serving the Kubernetes API renewed
  6. certificate the apiserver uses to access etcd renewed
  7. certificate for the API server to connect to kubelet renewed
  8. certificate embedded in the kubeconfig file for the controller manager to use renewed
  9. certificate for liveness probes to healthcheck etcd renewed
  10. certificate for etcd nodes to communicate with each other renewed
  11. certificate for serving etcd renewed
  12. certificate for the front proxy client renewed
  13. certificate embedded in the kubeconfig file for the scheduler manager to use renewed
  14. Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

The last line of the output says the kube-apiserver, kube-controller-manager, kube-scheduler and etcd must be restarted so that they can use the new certificates. Before that, check whether the certificates really were renewed:

They have been renewed. However, the kubeconfig files used by kubelet and the other components still embed the old certificates, so kubelet and the rest still cannot start.

  1. root@k8s-master:~# kubeadm certs check-expiration
  2. [check-expiration] Reading configuration from the cluster...
  3. [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
  4. [check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
  5. CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
  6. admin.conf Dec 12, 2023 15:29 UTC 364d no
  7. apiserver Dec 12, 2023 15:29 UTC 364d ca no
  8. apiserver-etcd-client Dec 12, 2023 15:29 UTC 364d etcd-ca no
  9. apiserver-kubelet-client Dec 12, 2023 15:29 UTC 364d ca no
  10. controller-manager.conf Dec 12, 2023 15:29 UTC 364d no
  11. etcd-healthcheck-client Dec 12, 2023 15:29 UTC 364d etcd-ca no
  12. etcd-peer Dec 12, 2023 15:29 UTC 364d etcd-ca no
  13. etcd-server Dec 12, 2023 15:29 UTC 364d etcd-ca no
  14. front-proxy-client Dec 12, 2023 15:29 UTC 364d front-proxy-ca no
  15. scheduler.conf Dec 12, 2023 15:29 UTC 364d no
  16. CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
  17. ca Dec 06, 2031 06:32 UTC 8y no
  18. etcd-ca Dec 06, 2031 06:32 UTC 8y no
  19. front-proxy-ca Dec 06, 2031 06:32 UTC 8y no

So at this point these old configuration files need to be deleted and regenerated with kubeadm:

  1. root@k8s-master:~# rm -rf /etc/kubernetes/*.conf
  2. root@k8s-master:~# kubeadm init phase kubeconfig all
  3. I1212 23:35:49.775848 19629 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
  4. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  5. [kubeconfig] Writing "admin.conf" kubeconfig file
  6. [kubeconfig] Writing "kubelet.conf" kubeconfig file
  7. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  8. [kubeconfig] Writing "scheduler.conf" kubeconfig file

As shown above, the regenerated configuration files all use the new certificates, and kubelet can now be restarted.

  1. root@k8s-master:~# systemctl restart kubelet
  2. root@k8s-master:~# systemctl status kubelet
  3. ● kubelet.service - kubelet: The Kubernetes Node Agent
  4. Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  5. Drop-In: /etc/systemd/system/kubelet.service.d
  6. └─10-kubeadm.conf
  7. Active: active (running) since Mon 2022-12-12 23:36:57 CST; 2s ago
  8. Docs: https://kubernetes.io/docs/home/
  9. Main PID: 21307 (kubelet)
  10. Tasks: 20 (limit: 2210)
  11. CGroup: /system.slice/kubelet.service
  12. ├─21307 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.
  13. └─21589 /opt/cni/bin/calico
  14. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316164 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a5
  15. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316204 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1
  16. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316279 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwxm\" (UniqueName: \"kubernetes.io/projected/8ad3a63e-e
  17. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316323 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-
  18. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316364 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6q6\" (UniqueName: \"kubernetes.io/projected/5ef5e743-e
  19. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316407 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1f1b0
  20. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316447 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c8
  21. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316497 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-local-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71
  22. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316540 21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a
  23. Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316562 21307 reconciler.go:157] "Reconciler: start to sync state"
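One note: in this scenario the control-plane containers were not running at all (kubelet itself had been down since the certificates expired), so starting kubelet brought up the api-server, controller-manager, scheduler and etcd with the renewed certificates. On a cluster that is still running, those static pods have to be restarted explicitly. One common trick, a sketch rather than the only way, is to move the manifests out of the manifest directory for a moment and then move them back:

# kubelet stops the control-plane pods once their manifests disappear...
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
sleep 30
# ...and recreates them, with the renewed certificates, once the manifests return
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests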

Checking the node status at this point still fails, because kubectl uses the config under the .kube directory, which still contains the old certificate. Delete it, re-export the KUBECONFIG variable, and overwrite the old config:

  1. root@k8s-master:~# kubectl get no
  2. error: You must be logged in to the server (Unauthorized)
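Concretely (the same commands also appear in the session output further below), point kubectl at the freshly generated admin.conf and copy it over the stale config in the home directory:

export KUBECONFIG=/etc/kubernetes/admin.conf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config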

OK, that completes the certificate renewal on the master node. What about the certificates on the worker nodes?

The solution: since the whole cluster was built with kubeadm and etcd runs as a static pod on the master node, once the master is recovered and etcd is confirmed healthy, the worker nodes can simply rejoin the cluster.

Delete the worker nodes:

  1. root@k8s-master:~# kubectl delete nodes k8s-node1
  2. node "k8s-node1" deleted
  3. root@k8s-master:~# kubectl delete nodes k8s-node2
  4. node "k8s-node2" deleted

Generate the join command:

  1. root@k8s-master:~# kubeadm token create --print-join-command
  2. kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923
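The bootstrap token printed here is valid for 24 hours by default; if the workers join later than that, list the existing tokens or mint a new one first. A small sketch:

kubeadm token list                            # show existing bootstrap tokens and their TTL
kubeadm token create --print-join-command     # create a fresh token if the old one has expired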

Reset and rejoin on the worker nodes (run on 151 and 152):

  1. root@k8s-node1:~# kubeadm reset -f
  2. [preflight] Running pre-flight checks
  3. W1212 23:52:30.989714 84567 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
  4. [reset] No etcd config found. Assuming external etcd
  5. [reset] Please, manually reset etcd to prevent further issues
  6. [reset] Stopping the kubelet service
  7. [reset] Unmounting mounted directories in "/var/lib/kubelet"
  8. [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
  9. [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
  10. [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
  11. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
  12. The reset process does not reset or clean up iptables rules or IPVS tables.
  13. If you wish to reset iptables, you must do so manually by using the "iptables" command.
  14. If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
  15. to reset your system's IPVS tables.
  16. The reset process does not clean your kubeconfig files and you must remove them manually.
  17. Please, check the contents of the $HOME/.kube/config file.
  18. root@k8s-node1:~# kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923
  19. [preflight] Running pre-flight checks
  20. [preflight] Reading configuration from the cluster...
  21. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
  22. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  23. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  24. [kubelet-start] Starting the kubelet
  25. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  26. This node has joined the cluster:
  27. * Certificate signing request was sent to apiserver and a response was received.
  28. * The Kubelet was informed of the new secure connection details.
  29. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Back on the master, point kubectl at the new admin.conf, overwrite the stale ~/.kube/config, and check the nodes:

  1. root@k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf
  2. root@k8s-master:~# kubectl get no
  3. NAME STATUS ROLES AGE VERSION
  4. k8s-master Ready control-plane,master 369d v1.22.2
  5. k8s-node1 NotReady <none> 369d v1.22.0
  6. k8s-node2 NotReady <none> 369d v1.22.0
  7. root@k8s-master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  8. cp: overwrite '/root/.kube/config'? y

A moment later, checking the node status on the master shows that everything has recovered (the workers still report v1.22.0 because their kubelets have not yet been upgraded):

  1. root@k8s-master:~# kubectl get no
  2. NAME STATUS ROLES AGE VERSION
  3. k8s-master Ready control-plane,master 369d v1.22.2
  4. k8s-node1 Ready <none> 2m18s v1.22.0
  5. k8s-node2 Ready <none> 12s v1.22.0

The pods are all in a normal state again as well:

  1. root@k8s-master:~# kubectl get po -A
  2. NAMESPACE NAME READY STATUS RESTARTS AGE
  3. default front-end-6f94965fd9-dq7t8 1/1 Running 0 27m
  4. default guestbook-86bb8f5bc9-mcdvg 1/1 Running 0 27m
  5. default guestbook-86bb8f5bc9-zh7zq 1/1 Running 0 27m
  6. default nfs-client-provisioner-56dd5765dc-gp6mz 1/1 Running 0 27m

V. Upgrading the cluster to 1.22.10

First upgrade kubeadm to 1.22.10:

  1. root@k8s-master:~# apt-get install kubeadm=1.22.10-00
  2. Reading package lists... Done
  3. Building dependency tree
  4. Reading state information... Done
  5. The following additional packages will be installed:
  6. cri-tools
  7. The following packages will be upgraded:
  8. cri-tools kubeadm
  9. 2 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
  10. Need to get 26.7 MB of archives.
  11. After this operation, 19.9 MB of additional disk space will be used.
  12. Do you want to continue? [Y/n] y
  13. Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.25.0-00 [17.9 MB]
  14. Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.10-00 [8,728 kB]
  15. Fetched 26.7 MB in 51s (522 kB/s)
  16. (Reading database ... 67719 files and directories currently installed.)
  17. Preparing to unpack .../cri-tools_1.25.0-00_amd64.deb ...
  18. Unpacking cri-tools (1.25.0-00) over (1.19.0-00) ...
  19. Preparing to unpack .../kubeadm_1.22.10-00_amd64.deb ...
  20. Unpacking kubeadm (1.22.10-00) over (1.22.2-00) ...
  21. Setting up cri-tools (1.25.0-00) ...
  22. Setting up kubeadm (1.22.10-00) ...
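Before applying the upgrade it is worth previewing what kubeadm intends to do:

kubeadm upgrade plan    # shows the current and target versions and the components to be upgraded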

Then upgrade the Kubernetes cluster:

kubeadm upgrade apply v1.22.10

The output is as follows:

  1. root@k8s-master:~# kubeadm upgrade apply v1.22.10
  2. [upgrade/config] Making sure the configuration is correct:
  3. [upgrade/config] Reading configuration from the cluster...
  4. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
  5. [preflight] Running pre-flight checks.
  6. [upgrade] Running cluster health checks
  7. [upgrade/version] You have chosen to change the cluster version to "v1.22.10"
  8. [upgrade/versions] Cluster version: v1.22.0
  9. [upgrade/versions] kubeadm version: v1.22.10
  10. [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
  11. [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
  12. [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
  13. [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
  14. [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.10"...
  15. Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
  16. Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
  17. Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
  18. [upgrade/etcd] Upgrading to TLS for etcd
  19. Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
  20. [upgrade/staticpods] Preparing for "etcd" upgrade
  21. [upgrade/staticpods] Renewing etcd-server certificate
  22. [upgrade/staticpods] Renewing etcd-peer certificate
  23. [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
  24. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/etcd.yaml"
  25. [upgrade/staticpods] Waiting for the kubelet to restart the component
  26. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  27. Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
  28. Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
  29. Static pod: etcd-k8s-master hash: ee7d79d2b2967f03af72732ecda2b44f
  30. [apiclient] Found 1 Pods for label selector component=etcd
  31. [upgrade/staticpods] Component "etcd" upgraded successfully!
  32. [upgrade/etcd] Waiting for etcd to become available
  33. [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests149634972"
  34. [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
  35. [upgrade/staticpods] Renewing apiserver certificate
  36. [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
  37. [upgrade/staticpods] Renewing front-proxy-client certificate
  38. [upgrade/staticpods] Renewing apiserver-etcd-client certificate
  39. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-apiserver.yaml"
  40. [upgrade/staticpods] Waiting for the kubelet to restart the component
  41. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  42. Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
  43. Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
  44. Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
  45. Static pod: kube-apiserver-k8s-master hash: d2601c13ace3af023db083125c56d47b
  46. [apiclient] Found 1 Pods for label selector component=kube-apiserver
  47. [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
  48. [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
  49. [upgrade/staticpods] Renewing controller-manager.conf certificate
  50. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-controller-manager.yaml"
  51. [upgrade/staticpods] Waiting for the kubelet to restart the component
  52. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  53. Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
  54. Static pod: kube-controller-manager-k8s-master hash: 648269e02b16780e315b096eec7eaa5d
  55. [apiclient] Found 1 Pods for label selector component=kube-controller-manager
  56. [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
  57. [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
  58. [upgrade/staticpods] Renewing scheduler.conf certificate
  59. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-scheduler.yaml"
  60. [upgrade/staticpods] Waiting for the kubelet to restart the component
  61. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  62. Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
  63. Static pod: kube-scheduler-k8s-master hash: ec4c9f7722e075d30583bde88d591749
  64. [apiclient] Found 1 Pods for label selector component=kube-scheduler
  65. [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
  66. [upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
  67. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  68. [kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
  69. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  70. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  71. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  72. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  73. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  74. [addons] Applied essential addon: CoreDNS
  75. [addons] Applied essential addon: kube-proxy
  76. [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.10". Enjoy!
  77. [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

As prompted at the end of that output, kubelet should be upgraded as well:

  1. root@k8s-master:~# apt-get install kubelet=1.22.10-00
  2. Reading package lists... Done
  3. Building dependency tree
  4. Reading state information... Done
  5. The following packages will be upgraded:
  6. kubelet
  7. 1 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
  8. Need to get 19.2 MB of archives.
  9. After this operation, 32.1 MB disk space will be freed.
  10. Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.10-00 [19.2 MB]
  11. Fetched 19.2 MB in 42s (453 kB/s)
  12. (Reading database ... 67719 files and directories currently installed.)
  13. Preparing to unpack .../kubelet_1.22.10-00_amd64.deb ...
  14. Unpacking kubelet (1.22.10-00) over (1.22.2-00) ...
  15. Setting up kubelet (1.22.10-00) ...
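After the package upgrade, kubelet should be restarted so the new binary actually runs, and the worker nodes are upgraded in the same spirit with kubeadm upgrade node plus the new packages. A sketch of the usual steps, not taken from the original session:

# on the master, after upgrading the kubelet package
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# on each worker node, one node at a time
sudo apt-get install -y kubeadm=1.22.10-00
sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.22.10-00 kubectl=1.22.10-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet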

The certificate expiry times have been refreshed once more:

(Why not perform the upgrade directly at the very beginning? Because kubeadm upgrade requires a healthy, running cluster, and at that point the certificates had already expired and the cluster was down, so the upgrade simply could not run.)

  1. root@k8s-master:~# kubeadm certs check-expiration
  2. [check-expiration] Reading configuration from the cluster...
  3. [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
  4. CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
  5. admin.conf Dec 12, 2023 16:11 UTC 364d ca no
  6. apiserver Dec 12, 2023 16:11 UTC 364d ca no
  7. apiserver-etcd-client Dec 12, 2023 16:11 UTC 364d etcd-ca no
  8. apiserver-kubelet-client Dec 12, 2023 16:11 UTC 364d ca no
  9. controller-manager.conf Dec 12, 2023 16:11 UTC 364d ca no
  10. etcd-healthcheck-client Dec 12, 2023 16:10 UTC 364d etcd-ca no
  11. etcd-peer Dec 12, 2023 16:10 UTC 364d etcd-ca no
  12. etcd-server Dec 12, 2023 16:10 UTC 364d etcd-ca no
  13. front-proxy-client Dec 12, 2023 16:11 UTC 364d front-proxy-ca no
  14. scheduler.conf Dec 12, 2023 16:11 UTC 364d ca no
  15. CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
  16. ca Dec 06, 2031 06:32 UTC 8y no
  17. etcd-ca Dec 06, 2031 06:32 UTC 8y no
  18. front-proxy-ca Dec 06, 2031 06:32 UTC 8y no
