
k8s 1.18 kubeadm Installation Guide


1. Initialization

Reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

1.1 Requirements

  • Each node has at least 2 CPUs and 4 GB of RAM
  • Full network connectivity between all machines in the cluster
  • hostname and MAC address are unique within the cluster
  • CentOS 7, preferably 7.9; older minor releases should be updated
  • All nodes can reach the internet to pull images
  • Swap is disabled
  • Note: this guide performs a single-master install; a multi-master install works the same way (the sketch after this list shows quick checks for these requirements)
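A few of these requirements can be verified up front with standard Linux tools; a quick sketch (run on every node):

  ip link show                           # MAC addresses must be unique across the cluster
  cat /sys/class/dmi/id/product_uuid     # product_uuid must be unique across the cluster
  swapon --show                          # should print nothing once swap is disabled (see 1.2)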

1.2 Initialization

  1. # 1) Run on every host
  2. systemctl disable firewalld;systemctl stop firewalld
  3. swapoff -a ;sed -i '/swap/s/^/#/g' /etc/fstab
  4. setenforce 0;sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  5. # 2) Set the hostname
  6. hostnamectl set-hostname <hostname> # set a different hostname on each host
  7. [root@master1 $]# vim /etc/hosts
  8. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  9. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  10. 192.168.74.128 master1 registry.mt.com
  11. 192.168.74.129 master2
  12. 192.168.74.130 master3
  13. # 3) Kernel parameters - all hosts
  14. [root@master1 yaml]# cat /etc/sysctl.d/kubernetes.conf
  15. net.bridge.bridge-nf-call-iptables=1 # only takes effect after the br_netfilter module is loaded (modprobe br_netfilter)
  16. net.bridge.bridge-nf-call-ip6tables=1
  17. net.ipv4.ip_forward=1
  18. net.ipv4.tcp_tw_recycle=0
  19. vm.swappiness=0 # avoid using swap; it is only used when the system would otherwise OOM
  20. vm.overcommit_memory=1 # do not check whether enough physical memory is available
  21. vm.panic_on_oom=0
  22. fs.inotify.max_user_instances=8192
  23. fs.inotify.max_user_watches=1048576
  24. fs.file-max=655360000
  25. fs.nr_open=655360000
  26. net.ipv6.conf.all.disable_ipv6=1
  27. net.netfilter.nf_conntrack_max=2310720
  28. [root@master1 yaml]# sysctl --system
  29. # 4) Time synchronization
  30. ntpdate time.windows.com
  31. # 5) Resource limits
  32. echo "* soft nofile 65536" >> /etc/security/limits.conf
  33. echo "* hard nofile 65536" >> /etc/security/limits.conf
  34. echo "* soft nproc 65536" >> /etc/security/limits.conf
  35. echo "* hard nproc 65536" >> /etc/security/limits.conf
  36. echo "* soft memlock unlimited" >> /etc/security/limits.conf
  37. echo "* hard memlock unlimited" >> /etc/security/limits.conf
  38. # 6) Load kernel modules at boot
  39. [root@master1 ~]# cat /etc/modules-load.d/k8s.conf
  40. ip_vs
  41. ip_vs_rr
  42. br_netfilter
  43. ip_vs_wrr
  44. ip_vs_sh
  45. [root@master1 ~]# modprobe ip_vs
  46. [root@master1 ~]# modprobe ip_vs_rr
  47. [root@master1 ~]# modprobe br_netfilter
  48. [root@master1 ~]# modprobe ip_vs_wrr
  49. [root@master1 ~]# modprobe ip_vs_sh
  50. [root@iZ2zef0llgs69lx3vc9rfgZ ~]# lsmod |grep vs
  51. ip_vs_sh 12688 0
  52. ip_vs_wrr 12697 0
  53. ip_vs_rr 12600 0
  54. ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
  55. nf_conntrack 137239 1 ip_vs
  56. libcrc32c 12644 2 ip_vs,nf_conntrack

2. Installation

2.1 Install Docker

  1. # Run on all nodes:
  2. wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  3. yum list docker-ce.x86_64 --showduplicates | sort -r # lists the available docker-ce versions; pick one to install
  4. yum install docker-ce-18.06.1.ce-3.el7
  5. # Edit the /etc/docker/daemon.json configuration file
  6. {
  7. "exec-opts": ["native.cgroupdriver=systemd"],
  8. "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  9. "insecure-registries": ["www.mt.com:9500"]
  10. }
  11. # Notes:
  12. # registry-mirrors is the Aliyun registry mirror (accelerator) address, used to speed up pulling images hosted abroad; without it downloads are very slow
  13. # After editing, verify with docker info
  14. systemctl restart docker;systemctl enable docker
  15. [root@master1 $]# docker info |tail -10
  16. Registry: https://index.docker.io/v1/
  17. Labels:
  18. Experimental: false
  19. Insecure Registries:
  20. www.mt.com:9500
  21. 127.0.0.0/8
  22. Registry Mirrors:
  23. https://b9pmyelo.mirror.aliyuncs.com/
  24. Live Restore Enabled: false

2.2 Install kubectl/kubeadm/kubelet

  1. # Create the repo file /etc/yum.repos.d/kubernetes.repo
  2. [kubernetes]
  3. name=Kubernetes
  4. #baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-$basearch
  5. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  6. enabled=1
  7. gpgcheck=0
  8. # Install
  9. yum install kubeadm-1.18.0 kubectl-1.18.0 kubelet-1.18.0
  10. # Start and enable the service
  11. systemctl start kubelet;systemctl enable kubelet

2.3 Initialization

  1. # Note: for multi-master clusters the initialization differs slightly. // This step is not used in this exercise
  2. kubeadm init --kubernetes-version=<version> --apiserver-advertise-address=<this-master-ip> \
  3. --control-plane-endpoint=<this-master-ip>:6443 \ # this line is required for multi-master
  4. --pod-network-cidr=10.244.0.0/16 \
  5. --image-repository registry.aliyuncs.com/google_containers \
  6. --service-cidr=10.96.0.0/12
  7. # Run the above on the first node; further control-plane nodes join with kubeadm join (see the sketch below)
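A minimal sketch of that multi-master flow, assuming kubeadm 1.18 defaults; the endpoint, token, hash and certificate key are illustrative placeholders taken from the output of the first init:

  # first control-plane node: initialize and upload the control-plane certificates
  kubeadm init --control-plane-endpoint=<master-ip>:6443 --upload-certs \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
  # additional control-plane nodes: join with --control-plane and the printed certificate key
  kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>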
  1. # Run only on the master
  2. [root@master1 ~]# kubeadm init --apiserver-advertise-address=192.168.74.128 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
  3. W0503 16:48:24.338256 38262 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  4. [init] Using Kubernetes version: v1.18.0
  5. [preflight] Running pre-flight checks # 1) pre-flight checks and image pulls
  6. [preflight] Pulling images required for setting up a Kubernetes cluster
  7. [preflight] This might take a minute or two, depending on the speed of your internet connection
  8. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  9. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  10. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" # 2) initialize kubelet configuration
  11. [kubelet-start] Starting the kubelet
  12. [certs] Using certificateDir folder "/etc/kubernetes/pki" # 3) generate certificates
  13. [certs] Generating "ca" certificate and key
  14. [certs] Generating "apiserver" certificate and key
  15. [certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.74.128]
  16. [certs] Generating "apiserver-kubelet-client" certificate and key
  17. [certs] Generating "front-proxy-ca" certificate and key
  18. [certs] Generating "front-proxy-client" certificate and key
  19. [certs] Generating "etcd/ca" certificate and key
  20. [certs] Generating "etcd/server" certificate and key
  21. [certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.74.128 127.0.0.1 ::1]
  22. [certs] Generating "etcd/peer" certificate and key
  23. [certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.74.128 127.0.0.1 ::1]
  24. [certs] Generating "etcd/healthcheck-client" certificate and key
  25. [certs] Generating "apiserver-etcd-client" certificate and key
  26. [certs] Generating "sa" key and public key
  27. [kubeconfig] Using kubeconfig folder "/etc/kubernetes" # 4) write kubeconfig files
  28. [kubeconfig] Writing "admin.conf" kubeconfig file
  29. [kubeconfig] Writing "kubelet.conf" kubeconfig file
  30. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  31. [kubeconfig] Writing "scheduler.conf" kubeconfig file
  32. [control-plane] Using manifest folder "/etc/kubernetes/manifests" # 5) write control-plane static Pod manifests
  33. [control-plane] Creating static Pod manifest for "kube-apiserver"
  34. [control-plane] Creating static Pod manifest for "kube-controller-manager"
  35. W0503 16:49:05.984728 38262 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  36. [control-plane] Creating static Pod manifest for "kube-scheduler"
  37. W0503 16:49:05.985466 38262 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  38. [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  39. [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  40. [apiclient] All control plane components are healthy after 17.002641 seconds
  41. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  42. [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
  43. [upload-certs] Skipping phase. Please see --upload-certs
  44. [mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''" # 6) label and taint the control-plane node
  45. [mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  46. [bootstrap-token] Using token: i91fyq.rp8jv1t086utyxif # 7) configure bootstrap tokens and RBAC
  47. [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  48. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  49. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  50. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  51. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  52. [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  53. [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  54. [addons] Applied essential addon: CoreDNS # 8) install addons
  55. [addons] Applied essential addon: kube-proxy
  56. Your Kubernetes control-plane has initialized successfully!
  57. To start using your cluster, you need to run the following as a regular user:
  58. # Create the kubectl config for accessing the apiserver. Run on the master
  59. mkdir -p $HOME/.kube
  60. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  61. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  62. # Deploy a pod network
  63. You should now deploy a pod network to the cluster.
  64. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  65. https://kubernetes.io/docs/concepts/cluster-administration/addons/
  66. Then you can join any number of worker nodes by running the following on each as root:
  67. # Command for joining nodes to the cluster
  68. kubeadm join 192.168.74.128:6443 --token i91fyq.rp8jv1t086utyxif \
  69. --discovery-token-ca-cert-hash sha256:a31d6b38fec404eda854586a4140069bc6e3154241b59f40a612e24e9b89bf37
  70. [root@master1 ~]# docker images
  71. REPOSITORY TAG IMAGE ID CREATED SIZE
  72. registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 13 months ago 117MB
  73. registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 13 months ago 173MB
  74. registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 13 months ago 162MB
  75. registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 13 months ago 95.3MB
  76. registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 14 months ago 683kB
  77. registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 15 months ago 43.8MB
  78. registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 18 months ago 288MB
  79. k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 years ago 220MB
  80. k8s.gcr.io/coredns 1.2.2 367cdc8433a4 2 years ago 39.2MB
  81. k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 2 years ago 122MB
  82. k8s.gcr.io/flannel v0.10.0-amd64 f0fad859c909 3 years ago 44.6MB
  83. k8s.gcr.io/pause 3.1 da86e6ba6ca1 3 years ago 742kB
  84. [root@master1 ~]#
  1. # Create the kubectl config file:
  2. mkdir -p $HOME/.kube
  3. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  4. sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.4 Join the Cluster

  1. [root@master1 ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master1 NotReady master 36m v1.18.0
  4. # Run on master2 and master3 (if the token has expired, regenerate the join command; see the note after this output)
  5. kubeadm join 192.168.74.128:6443 --token i91fyq.rp8jv1t086utyxif --discovery-token-ca-cert-hash sha256:a31d6b38fec404eda854586a4140069bc6e3154241b59f40a612e24e9b89bf37
  6. [root@master1 ~]# kubectl get nodes
  7. NAME STATUS ROLES AGE VERSION
  8. master1 NotReady master 40m v1.18.0
  9. master2 NotReady <none> 78s v1.18.0
  10. master3 NotReady <none> 31s v1.18.0
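The bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on the master; a one-line sketch:

  kubeadm token create --print-join-command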

2.5 Install the Network Plugin

  1. [root@master1 yaml]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
  2. [root@master1 yaml]# kubectl get pods -n kube-system
  3. NAME READY STATUS RESTARTS AGE
  4. coredns-7ff77c879f-l9h6x 1/1 Running 0 60m
  5. coredns-7ff77c879f-lp9p2 1/1 Running 0 60m
  6. etcd-master1 1/1 Running 0 60m
  7. kube-apiserver-master1 1/1 Running 0 60m
  8. kube-controller-manager-master1 1/1 Running 0 60m
  9. kube-flannel-ds-524n6 1/1 Running 0 45s
  10. kube-flannel-ds-8gqwp 1/1 Running 0 45s
  11. kube-flannel-ds-fgvpf 1/1 Running 0 45s
  12. kube-proxy-2fz26 1/1 Running 0 21m
  13. kube-proxy-46psf 1/1 Running 0 20m
  14. kube-proxy-p7jnd 1/1 Running 0 60m
  15. kube-scheduler-master1 1/1 Running 0 60m
  16. [root@master1 yaml]# kubectl get nodes # before the network plugin is installed, STATUS stays NotReady
  17. NAME STATUS ROLES AGE VERSION
  18. master1 Ready master 60m v1.18.0
  19. master2 Ready <none> 21m v1.18.0
  20. master3 Ready <none> 20m v1.18.0

3. Verification

  1. [root@master1 yaml]# kubectl create deployment nginx --image=nginx
  2. [root@master1 yaml]# kubectl expose deployment nginx --port=80 --type=NodePort
  3. [root@master1 yaml]# kubectl get pods -o wide
  4. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  5. nginx-f89759699-fvlfg 1/1 Running 0 99s 10.244.2.3 master3 <none> <none>
  6. [root@master1 yaml]# kubectl get svc/nginx
  7. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  8. nginx NodePort 10.105.249.76 <none> 80:30326/TCP 85s
  9. [root@master1 yaml]# curl -s -I master2:30326 # port 30326 can be reached on any node in the cluster
  10. HTTP/1.1 200 OK
  11. Server: nginx/1.19.10
  12. Date: Mon, 03 May 2021 09:58:47 GMT
  13. Content-Type: text/html
  14. Content-Length: 612
  15. Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
  16. Connection: keep-alive
  17. ETag: "6075b537-264"
  18. Accept-Ranges: bytes
  19. [root@master1 yaml]# curl -s -I master3:30326
  20. HTTP/1.1 200 OK
  21. Server: nginx/1.19.10
  22. Date: Mon, 03 May 2021 09:58:50 GMT
  23. Content-Type: text/html
  24. Content-Length: 612
  25. Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
  26. Connection: keep-alive
  27. ETag: "6075b537-264"
  28. Accept-Ranges: bytes
  29. [root@master1 yaml]# curl -s -I master1:30326 # master1 is the master node, not a worker (see the note after this output)
  30. ^C
  31. # Images:
  32. [root@master1 yaml]# docker images
  33. REPOSITORY TAG IMAGE ID CREATED SIZE
  34. quay.io/coreos/flannel v0.14.0-rc1 0a1a2818ce59 2 weeks ago 67.9MB
  35. registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 13 months ago 117MB
  36. registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 13 months ago 162MB
  37. registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 13 months ago 173MB
  38. registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 13 months ago 95.3MB
  39. registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 14 months ago 683kB
  40. registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 15 months ago 43.8MB
  41. registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 18 months ago 288MB
  42. [root@master2 ~]# docker images
  43. REPOSITORY TAG IMAGE ID CREATED SIZE
  44. quay.io/coreos/flannel v0.14.0-rc1 0a1a2818ce59 2 weeks ago 67.9MB
  45. registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 13 months ago 117MB
  46. registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 14 months ago 683kB
  47. registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 15 months ago 43.8MB
  48. [root@master3 ~]# docker images
  49. REPOSITORY TAG IMAGE ID CREATED SIZE
  50. quay.io/coreos/flannel v0.14.0-rc1 0a1a2818ce59 2 weeks ago 67.9MB
  51. nginx latest 62d49f9bab67 2 weeks ago 133MB
  52. registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 13 months ago 117MB
  53. registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 14 months ago 683kB
  54. registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 15 months ago 43.8MB
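Note: master1 only runs control-plane components because kubeadm taints it with node-role.kubernetes.io/master:NoSchedule, so ordinary pods such as the nginx test above are never scheduled there. If the master should also accept workloads, the taint can be removed; a minimal sketch using the node name from this cluster:

  kubectl taint nodes master1 node-role.kubernetes.io/master-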

4. kubeadm Usage

4.1 init

Purpose: initialize the master (control-plane) node.

4.1.1 Overview

  1. preflight Run pre-flight checks
  2. certs Certificate generation
  3. /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  4. /apiserver Generate the certificate for serving the Kubernetes API
  5. /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet
  6. /front-proxy-ca Generate the self-signed CA to provision identities for front proxy
  7. /front-proxy-client Generate the certificate for the front proxy client
  8. /etcd-ca Generate the self-signed CA to provision identities for etcd
  9. /etcd-server Generate the certificate for serving etcd
  10. /etcd-peer Generate the certificate for etcd nodes to communicate with each other
  11. /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd
  12. /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd
  13. /sa Generate a private key for signing service account tokens along with its public key
  14. kubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  15. /admin Generate a kubeconfig file for the admin to use and for kubeadm itself
  16. /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  17. /controller-manager Generate a kubeconfig file for the controller manager to use
  18. /scheduler Generate a kubeconfig file for the scheduler to use
  19. kubelet-start Write kubelet settings and (re)start the kubelet
  20. control-plane Generate all static Pod manifest files necessary to establish the control plane
  21. /apiserver Generates the kube-apiserver static Pod manifest
  22. /controller-manager Generates the kube-controller-manager static Pod manifest
  23. /scheduler Generates the kube-scheduler static Pod manifest
  24. etcd Generate static Pod manifest file for local etcd
  25. /local Generate the static Pod manifest file for a local, single-node local etcd instance
  26. upload-config Upload the kubeadm and kubelet configuration to a ConfigMap
  27. /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap
  28. /kubelet Upload the kubelet component config to a ConfigMap
  29. upload-certs Upload certificates to kubeadm-certs
  30. mark-control-plane Mark a node as a control-plane
  31. bootstrap-token Generates bootstrap tokens used to join a node to a cluster
  32. kubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap
  33. /experimental-cert-rotation Enable kubelet client certificate rotation
  34. addon Install required addons for passing conformance tests
  35. /coredns Install the CoreDNS addon to a Kubernetes cluster
  36. /kube-proxy Install the kube-proxy addon to a Kubernetes cluster

4.1.2 kubeadm init Workflow

  • 1) Preflight checks: inspect the system state and either warn or exit on problems, until they are fixed or --ignore-preflight-errors=<list-of-errors> is passed (see the example after this list).

  • 2) Generate a self-signed CA to provision identities for each cluster component. The user can supply their own CA or keys via --cert-dir (default /etc/kubernetes/pki). The API server certificate gets an additional SAN (Subject Alternative Name) entry for every --apiserver-cert-extra-sans value, lowercased where necessary.

  • 3) Write kubeconfig files into /etc/kubernetes/ so that the kubelet, controller-manager and scheduler can reach the apiserver, each with its own identity, plus a standalone kubeconfig file named admin.conf for administrative use.

  • 4) Generate static Pod manifests for the apiserver, controller-manager and scheduler (default /etc/kubernetes/manifests); unless an external etcd service is provided, an extra static Pod manifest is generated for etcd. The kubelet watches /etc/kubernetes/manifests and creates these pods at startup.

  • 5) Label and taint the control-plane node so that no workloads are scheduled on it.

  • 6) Generate the token that additional nodes use to join.

  • 7) Make the configuration needed for nodes to join via Bootstrap Tokens and TLS Bootstrap: create a ConfigMap with the information required to join the cluster, set up the related RBAC access rules, allow bootstrap tokens to access the CSR signing API, and configure automatic approval of new CSRs.

  • 8) Install the CoreDNS and kube-proxy addons via the apiserver. The CoreDNS pods are only scheduled once a CNI plugin has been deployed.
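For example, individual preflight failures can be downgraded to warnings with --ignore-preflight-errors; a small sketch (the check names, such as NumCPU and Swap, are the ones reported by the preflight output itself):

  kubeadm init --ignore-preflight-errors=NumCPU,Swap \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0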

4.1.3 Creating the Control Plane in Phases

  • kubeadm allows the control-plane node to be created stage by stage with the kubeadm init phase command
  1. [root@master1 ~]# kubeadm init phase --help
  2. Use this command to invoke single phase of the init workflow
  3. Usage:
  4. kubeadm init phase [command]
  5. Available Commands:
  6. addon Install required addons for passing Conformance tests
  7. bootstrap-token Generates bootstrap tokens used to join a node to a cluster
  8. certs Certificate generation
  9. control-plane Generate all static Pod manifest files necessary to establish the control plane
  10. etcd Generate static Pod manifest file for local etcd
  11. kubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  12. kubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap
  13. kubelet-start Write kubelet settings and (re)start the kubelet
  14. mark-control-plane Mark a node as a control-plane
  15. preflight Run pre-flight checks
  16. upload-certs Upload certificates to kubeadm-certs
  17. upload-config Upload the kubeadm and kubelet configuration to a ConfigMap
  18. ...
  • kubeadm init also exposes a flag named --skip-phases that can be used to skip certain phases. The flag accepts a list of phase names, which can be taken from the ordered list above, e.g. sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml
  • kube-proxy in kubeadm: see the reference documentation
  • IPVS in kubeadm: see the reference documentation
  • Passing custom command-line flags to control-plane components: see the reference documentation
  • Using custom images:
    • Use a different imageRepository instead of k8s.gcr.io
    • Set useHyperKubeImage to true to use the HyperKube image
    • Provide a specific imageRepository and imageTag for etcd or the DNS addon
  • Set the node name with --hostname-override
  • ...

4.2 join

4.2.1 Phases

When a node joins a kubeadm-initialized cluster, bidirectional trust must be established. The process breaks down into discovery (having the joining node trust the Kubernetes control-plane node) and TLS bootstrap (having the Kubernetes control-plane node trust the joining node).

The discovery phase offers two main schemes (pick one):

  • 1. Use the shared token together with the API server's IP address. Pass --discovery-token-ca-cert-hash to validate the public key of the root certificate authority (CA) presented by the Kubernetes control-plane node. The value has the form "<hash-type>:<hex-value>", where the supported hash type is "sha256". The hash is computed over the bytes of the Subject Public Key Info (SPKI) object (as in RFC 7469). It can be taken from the output of "kubeadm init" or computed with standard tools. --discovery-token-ca-cert-hash may be repeated to allow multiple public keys.
  • 2. Provide a file - a subset of a standard kubeconfig file - which may be a local file or downloaded via an HTTPS URL. The forms are:
    • kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443
    • kubeadm join --discovery-file path/to/file.conf
    • kubeadm join --discovery-file https://url/file.conf # must be an HTTPS URL

TLS bootstrap phase

  • It is also driven by the shared token, which is used to authenticate temporarily with the Kubernetes control-plane node in order to submit a certificate signing request (CSR) for a locally created key pair. By default kubeadm configures the Kubernetes control-plane node to approve these signing requests automatically. The token is passed with --tls-bootstrap-token abcdef.1234567890abcdef. After the cluster is installed, automatic approval can be disabled in favour of manually approving CSRs.

Usually both parts use the same token, in which case the --token flag can be used instead of specifying each token separately.

4.2.2 Command Usage

  1. "join [api-server-endpoint]" 命令执行下列阶段:
  2. preflight Run join pre-flight checks
  3. control-plane-prepare Prepare the machine for serving a control plane
  4. /download-certs [EXPERIMENTAL] Download certificates shared among control-plane nodes from the kubeadm-certs Secret
  5. /certs Generate the certificates for the new control plane components
  6. /kubeconfig Generate the kubeconfig for the new control plane components
  7. /control-plane Generate the manifests for the new control plane components
  8. kubelet-start Write kubelet settings, certificates and (re)start the kubelet
  9. control-plane-join Join a machine as a control plane instance
  10. /etcd Add a new local etcd member
  11. /update-status Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap
  12. /mark-control-plane Mark a node as a control-plane
  13. [flags] # only the most commonly used flags are listed here
  14. --apiserver-advertise-address string # If the node should host a new control-plane instance, the IP address the API server will advertise it is listening on. If not set, the default network interface is used
  15. --apiserver-bind-port int32 Default: 6443 # If the node should host a new control-plane instance, the port for the API server to bind to
  16. --control-plane # Create a new control-plane instance on this node
  17. --discovery-file string # For file-based discovery, a file or URL from which to load cluster information
  18. --discovery-token string # For token-based discovery, the token used to validate the cluster information fetched from the API server
  19. --discovery-token-ca-cert-hash stringSlice # For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>")
  20. --tls-bootstrap-token string # The token used to authenticate temporarily with the Kubernetes control plane when joining the node
  21. --token string # Used for both discovery-token and tls-bootstrap-token when those values are not provided

4.2.3 Join Workflow

  • kubeadm downloads the required cluster information from the apiserver. By default it uses the bootstrap token and the CA key hash to verify the authenticity of that data. The root CA can also be discovered directly from a file or URL.

  • Once the cluster information is known, the kubelet can start the TLS bootstrap process.

    The TLS bootstrap uses the shared token to authenticate temporarily with the Kubernetes API server and submit a certificate signing request (CSR); by default the control plane signs this CSR automatically.

  • Finally, kubeadm configures the local kubelet to connect to the API server with the definitive identity assigned to the node.

For a new node that is to be added as an additional control-plane (master) node, extra steps are required (see the sketch after this list):

  • Download the certificates shared among control-plane nodes from the cluster (if explicitly requested by the user).
  • Generate the control-plane component manifests, certificates and kubeconfig.
  • Add a new local etcd member.
  • Add this node to the ClusterStatus of the kubeadm cluster.
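The shared certificates live in the kubeadm-certs Secret, which (together with its decryption key) expires after two hours. A sketch of regenerating them on an existing control-plane node before joining another master, per the kubeadm 1.18 workflow:

  # re-upload the control-plane certificates and print a new certificate key
  kubeadm init phase upload-certs --upload-certs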

4.2.4 Discovering the Cluster CA to Trust

There are several ways to discover the cluster CA:

Method 1: token-based discovery with CA pinning

This is the default mode in Kubernetes 1.8 and later. In this mode kubeadm downloads the cluster configuration (including the root CA) and validates it with the token; it also verifies that the root CA public key matches the provided hash and that the API server certificate is valid under that root CA.

The CA key hash has the format sha256:<hex_encoded_hash>. By default the hash is printed as part of the kubeadm join command at the end of kubeadm init, and in the output of kubeadm token create --print-join-command. It uses the standard format (see RFC 7469) and can also be computed with third-party tools or provisioning systems: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

  1. # For worker nodes:
  2. kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443
  3. # For control-plane nodes:
  4. kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443

Note: the CA hash is normally not known until the master has been provisioned, which makes it harder to build automated provisioning tools around kubeadm. By generating the CA in advance you can remove this limitation.

Method 2: token-based discovery without CA pinning

This was the default in Kubernetes 1.7 and earlier; use it with some important caveats in mind. This mode relies only on the symmetric token to sign (HMAC-SHA256) the discovery information that establishes the root of trust for the master. The --discovery-token-unsafe-skip-ca-verification flag is still available in Kubernetes 1.8 and later, but you should consider one of the other modes whenever possible.

kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443

Method 3: HTTPS or file-based discovery

This scheme provides an out-of-band way to establish a root of trust between the master and the bootstrapping node. Consider using it if you are building automated provisioning on top of kubeadm. The discovery file uses the format of a regular Kubernetes kubeconfig file.

If the discovery file does not contain credentials, the TLS discovery token is used.

Example kubeadm join commands:

  • kubeadm join --discovery-file path/to/file.conf (local file)
  • kubeadm join --discovery-file https://url/file.conf (remote HTTPS URL)

4.2.5 Hardening

  1. # Disable automatic CSR approval
  2. kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap #
  3. # Disable public access to the cluster-info ConfigMap
  4. kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml
  5. # Use the cluster-info.yaml file as the kubeadm join --discovery-file argument.
  6. kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo

4.3 config

  1. [root@master1 pki]# kubeadm config --help
  2. There is a ConfigMap in the kube-system namespace called "kubeadm-config" that kubeadm uses to store internal configuration about the
  3. cluster. kubeadm CLI v1.8.0+ automatically creates this ConfigMap with the config used with 'kubeadm init', but if you
  4. initialized your cluster using kubeadm v1.7.x or lower, you must use the 'config upload' command to create this
  5. ConfigMap. This is required so that 'kubeadm upgrade' can configure your upgraded cluster correctly.
  6. Usage:
  7. kubeadm config [flags]
  8. kubeadm config [command]
  9. Available Commands:
  10. images Interact with container images used by kubeadm
  11. migrate Read an older version of the kubeadm configuration API types from a file, and output the similar config object for the newer version
  12. print Print configuration
  13. view View the kubeadm configuration stored inside the cluster
  1. # 1) images: list and pull the images kubeadm uses (a pull example using the Aliyun mirror follows this block)
  2. list #Print a list of images kubeadm will use. The configuration file is used in case any images or image repositories are customized
  3. pull #Pull images used by kubeadm
  4. [root@master1 yaml]# kubeadm config images list
  5. I0503 22:55:00.007902 111954 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.18
  6. W0503 22:55:01.118197 111954 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  7. k8s.gcr.io/kube-apiserver:v1.18.18
  8. k8s.gcr.io/kube-controller-manager:v1.18.18
  9. k8s.gcr.io/kube-scheduler:v1.18.18
  10. k8s.gcr.io/kube-proxy:v1.18.18
  11. k8s.gcr.io/pause:3.2
  12. k8s.gcr.io/etcd:3.4.3-0
  13. k8s.gcr.io/coredns:1.6.7
  14. # 2) view: show the kubeadm cluster configuration
  15. [root@master1 yaml]# kubeadm config view
  16. apiServer:
  17. extraArgs:
  18. authorization-mode: Node,RBAC
  19. timeoutForControlPlane: 4m0s
  20. apiVersion: kubeadm.k8s.io/v1beta2
  21. certificatesDir: /etc/kubernetes/pki
  22. clusterName: kubernetes
  23. controllerManager: {}
  24. dns:
  25. type: CoreDNS
  26. etcd:
  27. local:
  28. dataDir: /var/lib/etcd
  29. imageRepository: registry.aliyuncs.com/google_containers
  30. kind: ClusterConfiguration
  31. kubernetesVersion: v1.18.0
  32. networking:
  33. dnsDomain: cluster.local
  34. podSubnet: 10.244.0.0/16
  35. serviceSubnet: 10.96.0.0/12
  36. scheduler: {}
  37. # The output of config view can be edited and then used to initialize a cluster from a file: kubeadm init --config kubeadm.yml
  38. # configuration via a config file is supported
  39. # 3)print init-defaults
  40. # Note: the init-defaults output can be used as the kubeadm.yml configuration file
  41. [root@master1 yaml]# kubeadm config print 'init-defaults'
  42. W0503 23:00:53.662149 113332 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  43. apiVersion: kubeadm.k8s.io/v1beta2
  44. bootstrapTokens:
  45. - groups:
  46. - system:bootstrappers:kubeadm:default-node-token
  47. token: abcdef.0123456789abcdef
  48. ttl: 24h0m0s
  49. usages:
  50. - signing
  51. - authentication
  52. kind: InitConfiguration
  53. localAPIEndpoint:
  54. advertiseAddress: 1.2.3.4
  55. bindPort: 6443
  56. nodeRegistration:
  57. criSocket: /var/run/dockershim.sock
  58. name: master1
  59. taints:
  60. - effect: NoSchedule
  61. key: node-role.kubernetes.io/master
  62. ---
  63. apiServer:
  64. timeoutForControlPlane: 4m0s
  65. apiVersion: kubeadm.k8s.io/v1beta2
  66. certificatesDir: /etc/kubernetes/pki
  67. clusterName: kubernetes
  68. controllerManager: {}
  69. dns:
  70. type: CoreDNS
  71. etcd:
  72. local:
  73. dataDir: /var/lib/etcd
  74. imageRepository: k8s.gcr.io
  75. kind: ClusterConfiguration
  76. kubernetesVersion: v1.18.0
  77. networking:
  78. dnsDomain: cluster.local
  79. serviceSubnet: 10.96.0.0/12
  80. scheduler: {}
  81. [root@master1 yaml]# kubeadm config print init-defaults --component-configs KubeProxyConfiguration
  82. [root@master1 yaml]# kubeadm config print init-defaults --component-configs KubeletConfiguration
  83. # 4)print join-defaults
  84. [root@master1 yaml]# kubeadm config print 'join-defaults'
  85. apiVersion: kubeadm.k8s.io/v1beta2
  86. caCertPath: /etc/kubernetes/pki/ca.crt
  87. discovery:
  88. bootstrapToken:
  89. apiServerEndpoint: kube-apiserver:6443
  90. token: abcdef.0123456789abcdef
  91. unsafeSkipCAVerification: true
  92. timeout: 5m0s
  93. tlsBootstrapToken: abcdef.0123456789abcdef
  94. kind: JoinConfiguration
  95. nodeRegistration:
  96. criSocket: /var/run/dockershim.sock
  97. name: master1
  98. taints: null
  99. [root@master1 yaml]# kubeadm config print join-defaults --component-configs KubeProxyConfiguration
  100. [root@master1 yaml]# kubeadm config print join-defaults --component-configs KubeletConfiguration
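To pre-pull images from the Aliyun mirror used in this guide instead of k8s.gcr.io, the repository and version can be passed directly; a small sketch:

  kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0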

4.4 certs

  1. 1)[root@master1 pki]# kubeadm alpha certs --help
  2. Commands related to handling kubernetes certificates
  3. Usage:
  4. kubeadm alpha certs [command]
  5. Aliases:
  6. certs, certificates
  7. Available Commands:
  8. certificate-key Generate certificate keys # generate certificate keys
  9. check-expiration Check certificates expiration for a Kubernetes cluster # check for expiry
  10. renew Renew certificates for a Kubernetes cluster # renew certificates (see the example after this block)
  11. [root@master1 pki]# kubeadm alpha certs check-expiration
  12. [check-expiration] Reading configuration from the cluster...
  13. [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  14. CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
  15. admin.conf May 03, 2022 08:49 UTC 362d no
  16. apiserver May 03, 2022 08:49 UTC 362d ca no
  17. apiserver-etcd-client May 03, 2022 08:49 UTC 362d etcd-ca no
  18. apiserver-kubelet-client May 03, 2022 08:49 UTC 362d ca no
  19. controller-manager.conf May 03, 2022 08:49 UTC 362d no
  20. etcd-healthcheck-client May 03, 2022 08:49 UTC 362d etcd-ca no
  21. etcd-peer May 03, 2022 08:49 UTC 362d etcd-ca no
  22. etcd-server May 03, 2022 08:49 UTC 362d etcd-ca no
  23. front-proxy-client May 03, 2022 08:49 UTC 362d front-proxy-ca no
  24. scheduler.conf May 03, 2022 08:49 UTC 362d no
  25. CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
  26. ca May 01, 2031 08:49 UTC 9y no
  27. etcd-ca May 01, 2031 08:49 UTC 9y no
  28. front-proxy-ca May 01, 2031 08:49 UTC 9y no
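Certificates can be renewed individually or all at once; a minimal sketch using the 1.18 alpha subcommand shown above (the control-plane static Pods need to be restarted afterwards to pick up the new certificates):

  kubeadm alpha certs renew all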

4.5 kubeconfig

Used to manage kubeconfig files.

[root@master1 pki]# kubeadm alpha kubeconfig user --client-name=kube-apiserver-etcd-client
I0505 23:28:58.999787   49317 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.18
W0505 23:29:01.024319   49317 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.133:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-apiserver-etcd-client
  name: kube-apiserver-etcd-client@kubernetes
current-context: kube-apiserver-etcd-client@kubernetes
kind: Config
preferences: {}
users:
- name: kube-apiserver-etcd-client
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0ekNDQWN1Z0F3SUJBZ0lJTlZmQkNWZFNrYTR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURVeE5USTVNREZhTUNVeApJekFoQmdOVkJBTVRHbXQxWW1VdFlYQnBjMlZ5ZG1WeUxXVjBZMlF0WTJ4cFpXNTBNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFyTGc2cG9EK28zMnZpLytMVkhoV0FSbHZBZHFrWFRFYjAvR0QKY0xDeFlraDdhYUVVVmxKTjBHUFFkTkJhb3R5TGtvak9jL3Y4WEdxcVh2bGRpOUN0eVFMR2EydGNuNWlKZkQweAp1bjFNQXZIT1cxaUVDM2RLR1BXeXIraUZBVFExakk2ZXA2aDNidEVsSHkwSVVabEhQUDJ3WW5JQnV1NCtsLy9xCmsrc1lMQjZ0N0ZoalVQbmhnaHk4T2dMa0o3UmRjMWNBSDN3ejR2L0xoYy9yK0ppc0kvZnlRTXpiYVdqdE5GRTEKYk1MdnlYM0RXbmhlVnlod3EyUTZEbHhMaGVFWUJRWWxhbjNjdVQ3aG5YSm9NTGJKUnRhSVFWbktVOVJzYVlSUgpVVnRvdUZ2QkN5d21qOGlvOUdJakV6M2dMa0JPTk1BMERvVTlFZjhBcFpuZFY4VmN5d0lEQVFBQm95Y3dKVEFPCkJnTlZIUThCQWY4RUJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RRWUpLb1pJaHZjTkFRRUwKQlFBRGdnRUJBTGw5NURPVlFmRTkxejgwQUFCOE5mR2x3d200ZnNHM0hjRXh2cXBudjdhb2I4WTQ3QWRvMkNBaApNY1pUdUxWQ3B2SHRUYk15YXhoRE41VzNmdnU2NXRuUVJKM21RNzhBSllsWjZzVmZWODNTNzgzOGRnYWtKckluCjRtNm5KME5sRjRyeFpZbEVwRU5yWUxBOUlYM25oZndVY0FpbG9uWG03cmlaOVIvV1FJZHZ1WXVQMFdDMUI1RUoKY1ZiTWN4dUJOZGwwTHpKa1dYWUc3Y3ZaYjR5NmR5U2FPZnBranNKdFFudzlzbm9nNHVBUW1DMENnZFZpcUx5ZwpscExuYVExR3BVeVF5bTlDSTVvRlMzSThrS2RaQmV5d1duQURCYXFjS3BKTnFRWkFRWnRQYzhXSjRIczREYlVMCjM3YnlPSEp6VUZkbWxPMU9ubDRhQWRsVXp3Y0IxemM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBckxnNnBvRCtvMzJ2aS8rTFZIaFdBUmx2QWRxa1hURWIwL0dEY0xDeFlraDdhYUVVClZsSk4wR1BRZE5CYW90eUxrb2pPYy92OFhHcXFYdmxkaTlDdHlRTEdhMnRjbjVpSmZEMHh1bjFNQXZIT1cxaUUKQzNkS0dQV3lyK2lGQVRRMWpJNmVwNmgzYnRFbEh5MElVWmxIUFAyd1luSUJ1dTQrbC8vcWsrc1lMQjZ0N0ZoagpVUG5oZ2h5OE9nTGtKN1JkYzFjQUgzd3o0di9MaGMvcitKaXNJL2Z5UU16YmFXanRORkUxYk1MdnlYM0RXbmhlClZ5aHdxMlE2RGx4TGhlRVlCUVlsYW4zY3VUN2huWEpvTUxiSlJ0YUlRVm5LVTlSc2FZUlJVVnRvdUZ2QkN5d20KajhpbzlHSWpFejNnTGtCT05NQTBEb1U5RWY4QXBabmRWOFZjeXdJREFRQUJBb0lCQUJ1R1VISndaQ1FSeDRQNworV3hBc1JRRHhaajZDdTkrLy94S3BMTzB0TkFBMVFvRVRZVmtJRnB4VGFzUCtTR3pHOXNDU2tSWmgrSUNiWnd0CkNTZGEzaGNHaGpCZ0w2YVBYSG1jRnV5dFF3dkZGU21oZFltT1BSUzFNd0N0Z1dTcnVVem8vWWVpWlVZWHRsNjkKZ25IZWgyZkUxZk1hVUFSR0sxdDF3U0JKZXRTczIrdG5wcnhFZ3E5a1BxRkpFUEJ3SG8vRElkQ3gzRWJLZWY2NQpSSzRhMURDcWIzLzNrYzE2dGoweWwrNXFZKzVFQ2xBMjZhbTNuSFJHWVdUQ0QyUEVsNnZoUXVZMHN5bHh3djAwClgwLzFxakNVeU1DWkJVYlo1RkNpM25yOE5HMnBETnFGbEtnZnJXTXVkMkdoTEMxWUp5ekdIU3AyTm91L0U3L2cKRmVzV2ZVRUNnWUVBMVlTWUtJeS9wa3pKZFhCTVhBOUVoYnZaQ3hudk40bkdjNXQxdXZxb2h6WVBVbjhIbDRmNwpxN1Q0d2tpZEtDN29CMXNIUVdCcmg5d3UvRXVyQ0ZRYzRremtHQmZJNkwzSzBHalg3VXVCc3VOODQyelBZbmtvCjFQakJTTXdLQlVpb0ZlMnlJWnZmOU53V0FLczA3VU9VMTNmeVpodHVVcTVyYklrOVE2VXRGeU1DZ1lFQXp4V1oKSjQwa3lPTmliSml5ZFFtYkd4UG9sK0dzYjFka1JNUlVCMjk4K3l4L0JnOE4zOWppVko1bjZkbmVpazc5MklYdQprU0k5VENJdkh4ZlBkUTR0Q1JpdnFWUVcza0xvSk5jVkg3UW8rWnIzd3JOYmZrOWR3MDJ1WGZqWlpNTmx3ZmgwCkR3S0NnSFBIVVBucU4wcW1RUzVva3A0QUM4WmVKOHQ2S3FHR1Vqa0NnWUVBeTM4VStjaXpPNDhCanBFWjViK1QKWWhZWGxQSUJ3Ui9wYVBOb2NHMUhRNTZ0V2NYQitaVGJzdG5ISUh2T2RLYkg4NEs1Vm9ETDIyOXB4SUZsbjRsegpBZWVnbUtuS2pLK2VaYVVXN28xQkxycUxvOEZub2dXeGVkRWZmZjhoS2NvR2tPZTdGemNWYXF4N3QrVjBpeEVYCkFZakxHSy9hSktraHJ3N1p1ZWZxSXBzQ2dZQUpLcWFWNXB5TE8rMStheC96S0ZLeVZ5WkRtdHk4TFAwbVFoNksKR2JoSmtnV3BhZjh1T25hQ1VtUzlLRVMra0pLU0JCTzBYdlNocXgyMDNhUDBSWVZlMHJYcjQrb0RPcWoyQUlOUgozUEszWWRHM3o2S3NLNjAxMlBsdjlYVUNEZGd5UnVJMFMrTWs5bnNMTFpUZGo3TmVUVVNad042MXBybENQN0tQCnNvaTBtUUtCZ0hPRC9ZV2h0NENLNWpXMlA0UWQyeUJmL1M1UUlPakRiMk45dVgvcEk2OFNYcVNYUWQrYXZ3cjkKRjN2MFkxd3gvVXc3U1lqVy8wLytNaE4ycW5ZYWZibDI2UEtyRTM1ZTZNYXhudFMxNTJHTzVaTHFPWVByOHpDVAp6Nk9MMko0a0lONHJ5QUhsdkFqc1htUVAzTG4yZ0JESVFhb3dZWUtTa2phZE5jc1lMUWhhCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
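The generated kubeconfig is written to stdout; a small sketch of saving it to a file for later use (the client name my-user is a hypothetical example, and what it is allowed to do depends on the RBAC bindings that exist for that name):

  kubeadm alpha kubeconfig user --client-name=my-user > /tmp/my-user.conf
  kubectl --kubeconfig=/tmp/my-user.conf get nodes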

4.6 reset

Used to reset the cluster, for example after a failed kubeadm init.

--ignore-preflight-errors stringSlice # A list of checks whose errors will be shown as warnings, e.g. IsPrivilegedUser,Swap. The value 'all' ignores errors from all checks.

  1. [root@master1 pki]# kubeadm reset phase --help
  2. Use this command to invoke single phase of the reset workflow
  3. Usage:
  4. kubeadm reset phase [command]
  5. Available Commands:
  6. cleanup-node Run cleanup node. # clean up this node
  7. preflight Run reset pre-flight checks # pre-checks
  8. remove-etcd-member Remove a local etcd member. # remove the local etcd member
  9. update-cluster-status Remove this node from the ClusterStatus object. # update the cluster status
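kubeadm reset intentionally leaves some state behind; a hedged cleanup sketch that is commonly run afterwards (paths are the defaults used in this install):

  kubeadm reset -f
  # CNI configuration and the kubectl config are not removed by reset
  rm -rf /etc/cni/net.d $HOME/.kube/config
  # flush iptables rules created by kube-proxy and flannel
  iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
  # clear IPVS rules, if ipvsadm is installed
  ipvsadm --clear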


5. Appendix

  1. [root@master1 ~]# tree /etc/kubernetes/
  2. /etc/kubernetes/
  3. ├── admin.conf
  4. ├── controller-manager.conf
  5. ├── kubelet.conf
  6. ├── manifests
  7. │   ├── etcd.yaml
  8. │   ├── kube-apiserver.yaml
  9. │   ├── kube-controller-manager.yaml
  10. │   └── kube-scheduler.yaml
  11. ├── pki
  12. │   ├── apiserver.crt
  13. │   ├── apiserver-etcd-client.crt
  14. │   ├── apiserver-etcd-client.key
  15. │   ├── apiserver.key
  16. │   ├── apiserver-kubelet-client.crt
  17. │   ├── apiserver-kubelet-client.key
  18. │   ├── ca.crt
  19. │   ├── ca.key
  20. │   ├── etcd
  21. │   │   ├── ca.crt
  22. │   │   ├── ca.key
  23. │   │   ├── healthcheck-client.crt
  24. │   │   ├── healthcheck-client.key
  25. │   │   ├── peer.crt
  26. │   │   ├── peer.key
  27. │   │   ├── server.crt
  28. │   │   └── server.key
  29. │   ├── front-proxy-ca.crt
  30. │   ├── front-proxy-ca.key
  31. │   ├── front-proxy-client.crt
  32. │   ├── front-proxy-client.key
  33. │   ├── sa.key
  34. │   └── sa.pub
  35. └── scheduler.conf
  36. 3 directories, 30 files
  37. [root@master2 ~]# tree /etc/kubernetes/
  38. /etc/kubernetes/
  39. ├── kubelet.conf
  40. ├── manifests
  41. └── pki
  42. └── ca.crt
  43. 2 directories, 2 files
  44. # master3 is the same as master2

5.1 conf Files

To inspect a .crt file: openssl x509 -in apiserver.crt -text

5.1.1 admin.conf

  1. 1)[root@master1 kubernetes]# cat admin.conf
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  6. server: https://192.168.74.128:6443
  7. name: kubernetes
  contexts:
  8. - context:
  9. cluster: kubernetes
  10. user: kubernetes-admin
  11. name: kubernetes-admin@kubernetes
  12. current-context: kubernetes-admin@kubernetes
  13. kind: Config
  preferences: {}
  users:
  14. - name: kubernetes-admin
  15. user:
  16. client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJVFVrUFJHZUxsYTB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJnbUFsZmVCZ3FsWG5OeEIKSVZ2V040WnBUTHNOMTRsN0I1c0hnTUQ4UUQzNEsyOUpmOFVaV2ZhUlJPR3IwK0hadXhhdldCM1F4TGQ3SDQ1RwpkTkFMMFNxWmlqRUlPL1JCc1BhS0tQcEYvdGIzaVlrWGk5Y0tzL3UxOVBLMVFHb29kOWwrUzR1Vzh1OG9tTHA2CldJQ3VjUWYwa2sxOTVJNnVHVXBRYmZpY1BRYVdLWC9yK1lYbDFhbUl2YTlGOEVJZlEzVjZQU0Jmb3BBajNpVjkKVU1Ic0dIWU1mLzlKcThscGNxNWxSZGZISkNJaVRPQ21SSjZhekIrRGpVLytES0RiRG5FaEJ2b2ZYQldYczZPWQpJbVdodEhFbENVZ3BnQWZBNjRyd3ZEaVlteERjbitLUWd5dk1GamwzNDMzMy9yWTZDNWZoUmVjSmtranJtNHI1CjFRVmUyd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFOREJBVy9qTVNRYlE4TGd5cktTRDZZYmtZY3ZYWHRaNUVodAovUXBsZnNTS1Frd1IvNzhnQjRjUTFQMHFXMmZrWnhhZjZTODJ3cGxGcEQ0SktId3VkNG1SZy9mTU5oYVdiY0tDCnRJejMydnpsY2dlMm9ydEdMSmU3MVVPcVIxY0l4c25qZHhocFk3SkRWeFNDcE9vbDkzc1ZUT3hNaTRwYjErNXEKL1MxWERLdk1FZmxQTEtQcERpTGVLZFBVVHo4Y21qdlhFUndvS0NLOHYvUVY3YStoN0gvKzlYZHc2bEkyOXpRawp6V01kamxua2RJVzQ2L1Q1RDVUMnNQVkZKak9nZEhLR2loMG9pU1ZBZWZiVzVwdTFlTktrb2h3SWJ1Q0RRM0oyCmk5ZUFsbmFvYWU0QkU0bTBzVGpTR0FLTTZqTUxTZXZiaTY1dnZCUGw3OU9CSXQ2WXZjVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  17. client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMmdtQWxmZUJncWxYbk54QklWdldONFpwVExzTjE0bDdCNXNIZ01EOFFEMzRLMjlKCmY4VVpXZmFSUk9HcjArSFp1eGF2V0IzUXhMZDdINDVHZE5BTDBTcVppakVJTy9SQnNQYUtLUHBGL3RiM2lZa1gKaTljS3MvdTE5UEsxUUdvb2Q5bCtTNHVXOHU4b21McDZXSUN1Y1FmMGtrMTk1STZ1R1VwUWJmaWNQUWFXS1gvcgorWVhsMWFtSXZhOUY4RUlmUTNWNlBTQmZvcEFqM2lWOVVNSHNHSFlNZi85SnE4bHBjcTVsUmRmSEpDSWlUT0NtClJKNmF6QitEalUvK0RLRGJEbkVoQnZvZlhCV1hzNk9ZSW1XaHRIRWxDVWdwZ0FmQTY0cnd2RGlZbXhEY24rS1EKZ3l2TUZqbDM0MzMzL3JZNkM1ZmhSZWNKa2tqcm00cjUxUVZlMndJREFRQUJBb0lCQUhQNjdBQloyUFZVK1ByQwptbzZSR0dFT3lZSjhXYitXTFBCOXdiNzJhUGdQUHF4MEZTZTNBMlk4WjBlNXR6b05BRkdwbm5vRDJpSlo2MDk4CjBmT2ZHem9YSy9jN1g4THNpZWtGSzdiaWNrczl0QXpmOUx0NUZ3Tm9XSURFZmkrV2lKSkFDaE5MWEc4N1VsL3oKaWRMOEdFNmR5Ylh0TEpOZ1pqR2p1eWJVUU4rZ1h1VTFZdUo1NEtEWnFFblp4Uk1PT0pqMm1YU2dZWW9kNTFqMQpyZkZ0Z0xidGVLckhBSFFubzhheDdOQjVudzh4U3lBTDdVbGxnNWl5MDRwNHpzcFBRZnpOckhwdytjMG5FR3lICmdIa3pkdC94RUJKNkg0YWk1bnRMdVJGSU54ZE5oeEFEM2d5YU1SRFBzLzF2N3E5SFNTem91ejR1Ri94WURaYXgKenIrWjlNa0NnWUVBNXhOUXl6SUsvSFAwOUYxdHpDMFgzUDg0eEc5YWNXTUxMYkxzdGdJYWpwSWxzQ1JGMzV4cgpYeUtSeFpOb0JBS1M4RnhlZ1ZkbmJKbUNvL0FwZHVtb2ZJZGovYXEwQ1VHK3FVV3dFcUVlYXRMSWhUdUE0aFFkClg4dEJLSStndFU1alpqdmMzSjlFSGczWnpKSFUwc0UyKzAyTDVjb3BZZWRZMGczaTJsWmIwOTBDZ1lFQThZNG8KeUZtMFkxZjdCamUvdGNnYTVEMWEyaG5POWozUjNic1lMb0xrVE52MWpmMm9PTG0vMTBKQnhVVkFQd1lYbm8zUAo0U3FPallsQlNxUjRKOTZCd1hOaGRmenoxS0MwSXpXNUZ2T1lHRFpXSXBuMFMxZy9KTnRqOVdyS1RpWjNEenU0CjBQbGx3dzZXaE5hWmlpeWs0SDYvT0Z4NFR6MzE5NzBWelRHWVRoY0NnWUFOMGtueTNYdHF2a1RZbVA0SVNHbzAKL2M4WGNOR29GcFNFbHo4eFk4N1MyRXNJemlLZnpXdGV0V0tpdnI1cC92MXJBeHRrQVNaZWlKQVgzaldjdHowcwp0YXgxYjlCMC9VbTZOa0RoM0dGRlluWThBZU1qb3JCZkduazdROXdJL0RkVjFoN1AwM2J2bFVTQngvZEM0K3UxCi9GMXgwVFhJZFY0S3NtbnZSVnNZd1FLQmdRRFpyTlBQaUJib2x6WWM2a3dXVWhiNXF0aWVSamVjNnlTZC9hWFMKOUIwcnJlUGdhcjhYTHp4VGpOK2NGOFhIaFlQdlc3Z0RIc2lMZnk2WlJ4RUlUSmo5YlM1Y2x2QmJvZDN6Qk15ZwpoQytCVWlYWTFJZXpCZmtSQzZ0T1UwZXZtVFlkUWlKUUh3NjI4Z1J0L0wwc0tRTURVdlNhbzZtL0x3VGlsVUI2ClFzRVBUUUtCZ0ZpWUpLZklnZ0NqamltRWZZb1ZxaXp6MnVUMmtPWHhCNlhycUJDbGlXclNCUElFT2lmcjIraU4KTmx6MTBIcTkzRWVUd0VlbUhpQy9DVmR5amh3N2JjY2gvQWZWLzZJSzhoWHRlUmhRcDZvdnpFSUtVa0NYQWx0Mwo2RzA3RUw2Qk9JVFgwbFBIWlYzUDBDRGVvRnFXSUd6RHpnUHA2ak40SVhPdGQwc0RPbFZkCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5.1.2 controller-manager.conf

  1. [root@master1 kubernetes]# cat controller-manager.conf
  2. apiVersion: v1
  clusters:
  3. - cluster:
  4. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  5. server: https://192.168.74.128:6443
  6. name: kubernetes
  contexts:
  7. - context:
  8. cluster: kubernetes
  9. user: system:kube-controller-manager
  10. name: system:kube-controller-manager@kubernetes
  11. current-context: system:kube-controller-manager@kubernetes
  12. kind: Config
  13. preferences: {}
  users:
  14. - name: system:kube-controller-manager
  15. user:
  16. client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lJVWpSRlRjUG9MWUl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxGd3NVRWU3am5nNGQwVHAwclAxSklMVzJ2aFNIMzgKY09hdnhIRnMzOForZHhXNVJ5NDB2TTRyeldmakhtRUhyQXpydmxUWjhHL1FKS0xWSlYrd0R5c2MraEJxSEhYegpyUzF0MHlOTWZERktHQU04dDRSQjQxSjVCaUhIcFFOYWl4cVk4WGhPbnZsLzJaV3dhVHNoMGVEcFdpKy9YbUFpCm9oc2xjL2U0Nk1Rb1hSdmlGK0Uva0o0K05YeEFQdXVCeVBNQVd1Y0VUeTlrRXU0d1ladkp6bVJEcnY4SjdIam0KQVNkL2JoYjlzc0s1QlFITlE5QncyaUMvUWp0YTZZVTRlQ2l3RTFEOGNndUszbzMvcVFsMkVkdGNoSEtHMGxmTgoyY3JvQW1yMWFqZkRwUFpvRERxa0lDU0x0STdtL3dNZ1QybUdyaDgzUFcxTFdjSjVyR1FRWU1FQ0F3RUFBYU1uCk1DVXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBelBjUUNPOElHN0lhSmdqdlN4RWYzU0NER3RXT0JZd0ZOYUtZck9LUDVKZGlJN01ZZQpiTnBEZWZKR2szd1V6bEVMYm5NOTQ1NkpjNExaYXp1WFFwUkF5d0VJelVTUHdKV2JicERsQlpIWnV3Ym5wd1lJCkdhZExyUVRXVXBBV3RaSXhOb1RjbzJlK09menVack9IS0YzYjR2akljWGdUV2VtVmJqazJ4RUd6dHpDMG5UZ04KZFR1UW1GaDNJRnl1VmdjbjNqZytTeTZQb3ZKV1lianFBeW9MWlFkUGJUQnl0YXZQcWcrNjhRNXdZamVpZXhTZwo1bHdlMmVrcHB3VUkxVU1oZlk5a2ZBY1Bma0NWbjdxKzEveGtpdlF0dTJ0UTN6dDR0dzVnaks5ZkR5NTROejlPCkJJd1ZiYTBRdVgySW1OemVCR2EvYzVNUjV2S09tcFpHSnkrZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  17. client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBc1hDeFFSN3VPZURoM1JPblNzL1VrZ3RiYStGSWZmeHc1cS9FY1d6ZnhuNTNGYmxICkxqUzh6aXZOWitNZVlRZXNET3UrVk5ud2I5QWtvdFVsWDdBUEt4ejZFR29jZGZPdExXM1RJMHg4TVVvWUF6eTMKaEVIalVua0dJY2VsQTFxTEdwanhlRTZlK1gvWmxiQnBPeUhSNE9sYUw3OWVZQ0tpR3lWejk3am94Q2hkRytJWAo0VCtRbmo0MWZFQSs2NEhJOHdCYTV3UlBMMlFTN2pCaG04bk9aRU91L3duc2VPWUJKMzl1RnYyeXdya0ZBYzFECjBIRGFJTDlDTzFycGhUaDRLTEFUVVB4eUM0cmVqZitwQ1hZUjIxeUVjb2JTVjgzWnl1Z0NhdlZxTjhPazltZ00KT3FRZ0pJdTBqdWIvQXlCUGFZYXVIemM5YlV0WndubXNaQkJnd1FJREFRQUJBb0lCQVFDVUV5b280UW9Hek85UAowYzNpOWFzOE1UUWF4QWI5OUVPM2oyak5Dd0Yzb1NQNXdnTnZ3TnpxNU16bWJEZDIyN010bVRIZGwzNDVvU1poCnFLUW14VUx6Ukp3K1JINzV3OTk2TU5ObytyUU5ZZnJHQU01WkZhOEJyVE43enlLYXVOMnExWVYxVTQ4QlFUc3YKMnVjR1RNUGNBSUNkcGdLNUVVM2NmNVhXWGI0SnF3MjlzdWJER0ZtL2kzUlpiTzlTejFBZUFEU0tXN1lGS2thMwpQRzFsWklUenlndkRjV20zK2U5TlRYR3VKTVNHK1FXOGlSUWJkZk9HbGdtRDNUa0FzUGpxYkphZ0Z3NGpoVEErCjJwODhDNVRvVVVkakRhS0d3RTBWcmpsWUUxandBRnRoWTY4dXd0T0l1MHlWYlN6R3RnOWxlNUVVMEsvWGdnekcKOGd5TWZPQUJBb0dCQU5oQmRjNm9SdVZ6ZmpsV3NQK0QybExjbTZFVDZwNU15T1A0WlJ4TU0yejJoUTR2UGNRZQorTXd4UHA3YUJWczQvVlJadk5JSWJIcTdHZCtKdXNFOFdyeHQ5Z0xBcDU3QTdTUXdkeWUvMGtuZk5tSDFWdTNuCkxDM3AybWU3SnhETlF6bTdRMVpRWVM5TTl4RHBpak9oc2RHY3BRN3hMODk3cUV2aFBaSUtQL0pCQW9HQkFOSU4KQStWNVNsRTYwWDNtZHhPMHEzV0ZCaEFKMWtMNnhHNThjNnpYdGgvYnBXdFRlNXF2Q1FteUhyZ1I1a2pvSzlsZgpYeUdRTEtIMHhRRVBzRG94cGpjMmZzemhXN1cxNVJSRkpyMlM1Q0kybndZSVdUUG5vMXBEbG8rWXJ5TXAxWnZDCkxrVlpHOFpZeURPeURIY3VueStMSHM2Z3NseUdpMTliK2dhZEo4NkJBb0dBR0x0THhNbWI2Z3ZPU0xKd1paaG4KdEloRVNDU2w5VnFrc3VXcWNwVUlZSkxFM3IxcVcrNksxNWRlS1A2WUZEbXRSeU5JSStFUXZ1eDg1Z0t6Vi93VwpDR3l1OE51bGo5TlNpNHY3WkpGY2RGUlJ2Tnc1QjlZalNGRHhTR0d2OHd6MmZqaTdWN2l6bEp4QnVTNXNQc0ZrCk82dWxlTkwrZThVUmx6UDRQYVpzYjhFQ2dZQWVtOExybDRjYTJ5Vlg0Vk9NelpFR3FRRy9LSS9PWnRoaytVR3AKK0MwVDYxL3BpZHJES2FwNWZUazR2WEwvUU1YVEFURE5wVUs3dnYxT01Fa1AwZGhVeDE0bTROZ0tYSjBySFFDTwpNMitIQk1xYmlHL25QbVB4YlZQdFRPU0lqVG9SWG5SN3FvWi9tc1JodEJwWTY3UktxMDByOHdMS3ROaHVadXJDCk4vaHJBUUtCZ0VHalk3U0ZrbCtudGgvUkh0aWFublpSallicnFaNXdUbk1EYjMwTEJKOFVwd01zQUJiWnYweFgKdk9wVXROdWVMV243QWtEVCtRYWtIanBDbmVDQUNRcE55VVhRZmJtYVgwaEFkbDVKVlRrWUZHaDUzcmhOL1UzRAowc3FZbDRjOWJjYWVua2xZMUdqbUNUcmlwTDFsVjFjZlA1bklXTTRVbUlCSm9GV1RYS2VECi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5.1.3 kubelet.conf

  1. [root@master1 kubernetes]# cat kubelet.conf
  2. apiVersion: v1
  clusters:
  3. - cluster:
  4. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  5. server: https://192.168.74.128:6443
  6. name: kubernetes
  contexts:
  7. - context:
  8. cluster: kubernetes
  9. user: system:node:master1
  10. name: system:node:master1@kubernetes
  11. current-context: system:node:master1@kubernetes
  12. kind: Config
  13. preferences: {}
  users:
  14. - name: system:node:master1
  15. user:
  16. client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
  17. client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
  18. # kubelet.conf on master2
  19. [root@master2 kubernetes]# cat kubelet.conf
  20. apiVersion: v1
  21. clusters:
  22. - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  23. server: https://192.168.74.128:6443
  24. name: default-cluster
  contexts:
  25. - context:
  26. cluster: default-cluster
  27. namespace: default
  28. user: default-auth
  29. name: default-context
  30. current-context: default-context
  31. kind: Config
  32. preferences: {}
  33. users:
  34. - name: default-auth
  35. user:
  36. client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
  37. client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
  38. # kubelet.conf on master3
  39. [root@master3 ~]# cat /etc/kubernetes/kubelet.conf
  40. apiVersion: v1
  41. clusters:
  42. - cluster:
  43. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  44. server: https://192.168.74.128:6443
  45. name: default-cluster
  contexts:
  46. - context:
  47. cluster: default-cluster
  48. namespace: default
  49. user: default-auth
  50. name: default-context
  51. current-context: default-context
  52. kind: Config
  53. preferences: {}
  users:
  54. - name: default-auth
  55. user:
  56. client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
  57. client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

5.1.4、scheduler.conf

  1. [root@master1 kubernetes]# cat scheduler.conf
  2. apiVersion: v1
  3. clusters:
  4. - cluster:
  5. certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  6. server: https://192.168.74.128:6443
  7. name: kubernetes
  contexts:
  8. - context:
  9. cluster: kubernetes
  10. user: system:kube-scheduler
  11. name: system:kube-scheduler@kubernetes
  12. current-context: system:kube-scheduler@kubernetes
  13. kind: Config
  14. preferences: {}
  users:
  15. - name: system:kube-scheduler
  16. user:
      client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMzakNDQWNhZ0F3SUJBZ0lJRnA0V1FFTFp0Ymt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTUNBeApIakFjQmdOVkJBTVRGWE41YzNSbGJUcHJkV0psTFhOamFHVmtkV3hsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUx1ZmJmeG8vQUlORWMrL3ZTNFY5RDdFejdRUU5PazM4Y21tbFBrMitWVTYKQXpxWFZKT2E2VDQ1Tmpib0hIeFZOWEZVNStvRmlmTU11M3N3WEthdllMYSs4RW1lK1gxU05zaGg2RlF4L2FOUApOVlpLRE9XMzJFZnhMSkpKdml2OEZuenE5MkdjTTNOWFd5MjlCdkp0UHBIRmx3SjFFSzc0QXh5NmQ5dm9GN2VsCml0WUNNUk92L3pWV0szNjhlb0xSaUNOd1A0NWtnbW5MeHBvU1VyUmgrWHhHeEdjcTJCdVg0ZTZSTzd5REVtdUsKNjhpUFprRjRlRE5aUWpieEhnUzRKNTE2aGFqR1RKWExNMUovbVEvaFo0TEU2L2JXOWlKZCtkVEEzeGgyOG9SagpNREZISUwzUk9wcHJnRFZodGxGY2VGUmhpamJlRmpDcXNXWEthOXNGZ01NQ0F3RUFBYU1uTUNVd0RnWURWUjBQCkFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUIKQVFBdVBWYmhIblU2MmpkV1l6TmdQM1JjZEtBblNhZWszMjUwQWZqM3dBOWJhL0drdzlZS3Nsb2M5eWhrekYyegptK0lsVWlMczdXM0MxTlFzSDNmUytCS3llWS9NMGtQVWRYVWw2YVVwOGZ2cW1iRDdRZi80Ty94eDhWK2oweG9MCmhZaWpmNGFCc2dYTFN4YlBMZ0FDWDJTYXNHaXgxYkZRSlBtTFUrem5PVWpQUnJzQWdlMlJtY2ZOS0VwUEMwaEoKR1F2ZkdaTDY1TkgvamNDSHpHM3prQlBxeCtQTUZOc2RuK3hnYndUU0haTlFYWk00OE0rWnR0eG5uYm1sL1Rxcwp4Slc2OWJMdU80cVVaTGtiemZVN29oaFhFejBhcHRid2R3QUpXUVdtQy9heEpvbmVHQ2lEb1A4c3hoSnpoUmtWCkVoQlQyYWxBOVdpdFFqNDFyMitMdlhwOQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  17. client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdTU5dC9HajhBZzBSejcrOUxoWDBQc1RQdEJBMDZUZnh5YWFVK1RiNVZUb0RPcGRVCms1cnBQamsyTnVnY2ZGVTFjVlRuNmdXSjh3eTdlekJjcHE5Z3RyN3dTWjc1ZlZJMnlHSG9WREg5bzA4MVZrb00KNWJmWVIvRXNra20rSy93V2ZPcjNZWnd6YzFkYkxiMEc4bTAra2NXWEFuVVFydmdESExwMzIrZ1h0NldLMWdJeApFNi8vTlZZcmZyeDZndEdJSTNBL2ptU0NhY3ZHbWhKU3RHSDVmRWJFWnlyWUc1Zmg3cEU3dklNU2E0cnJ5STltClFYaDRNMWxDTnZFZUJMZ25uWHFGcU1aTWxjc3pVbitaRCtGbmdzVHI5dGIySWwzNTFNRGZHSGJ5aEdNd01VY2cKdmRFNm1tdUFOV0cyVVZ4NFZHR0tOdDRXTUtxeFpjcHIyd1dBd3dJREFRQUJBb0lCQUh5KzlldmJDYU43ZVJvKwpDOVIyZUZ5N2tyWFFDTDMvbWwxT3lzSWdVUXJmZFlJaFYvU0VEUXg0RVpuVUhneDB3d0hGU0NVSzVidVovWlZjCmhGMjNRWUIvMTFlN3dYb1hqYUVScDkxREY3YmJWVVU0R3ZjcGt6M1NGcVoxTFdJbFMvWm1hM0NVNElpUnptZk0KeEsrdS91a0JEUFJ2VFZab1EvbDM2WFZuRFUzbU9NOTBVQ1QwaHA3akVNVitRa2k1K2Vnam5GOExodEpRcmVDTQpTNHZQUE91UGNxTjdSQm9IRkJVSG0zMFBheXlaN3FHZWdBZlVjZFAzM3U5bXd0WUl2VmxzVDNvVkJXTjNub0E5CkFjZGU4QXFmY0dnUlB1YVBTWlI0TW5xK1Bhb2RnSGVwUTR5NEc3Y01oSi9SeUFTRndwWkdYTFhHdnJCcVpKS3MKdDBGOGV0RUNnWUVBeWZ3R250RmM0OWlUSXIvU2NEV1hnS0Z4QVpFSTdBR3h2Yi91bW9IZVp4R0VjbkxVYXNKTApFeHl0aEJtY1VDSEZncVZmWW5mejY2bzVJVGExY1JJcjcxbVFuUnRPdWgxUzAwRCtCQzF2c0ovN2krMHc3Nm9LCmtsbUpSUE5ud2tGUjBvaHpvL1JQWCtvL1Z0Tm1nYUdTNDhJdmh2SzRaRENudnRORHdRbXFTOVVDZ1lFQTdjd3gKZGJMbURuUFJRL3lTcE9Scy8yWklCUGxtQ0wrQWg2RDJZQThiTVBmeVNIRnhGek40TFUxUWprS2RrK2tKbit0UApIcE5WekJiemlDalg1R0UvaDhRRC9RZmdnUDdLcjJId1RZd2NuNW1IRXl2WDdDQ2NLUUhmL05RYnhVQmh4OEdiCjFPekllMUU5NndMV2p1RDhpNzhjU3ZEdm9WZmlaTllJNGUrN1hqY0NnWUFMSmNhend6aE9OdWkvOVRoSEN4NG0KY2tLTFpKYktkN2w0a0h3NXVNc3Vndy85UlFzbUxUejVmQTZ6aUxwUXpkeFp2b2pLSlhhbjNnZ3pKaExUZjc0LwpBb0Z4dWswWkJuOUl1NENKZUh4K2tnWFBEak15TnY5SVhucXQvSVVRZW94cWd5OW1zQmdsWWdkRzRubjQwNU1JCjBQSFFqOXJQWk1RTlN4bWxNTVJlVlFLQmdCdVJwdEpNY1Z1UGxkMVo5TzVsQlRYKzk2Nkw4NFprSFZTY0ZyUkEKVEJpN1JqMmIyVTZsU3ZPRm1TZEZGZHZHRXJXVnBGQ1pLRU5IRGVqbFExSlk2L0tqaVFyVzFQSmZsOFFKaU1DVQowK1MwK2ZJQkRVRjA3bVhhcjhzeUZCNGtQckhZQW1jSEpKOFhaaVJPNmUwYXJHelBOVXFDOEdVMk9Tc1RuV2dFClVTYTFBb0dCQUlXZDZFSzZ4ZC8vdmF3dFhvUDU2anF4WTdQY3dGVnJPUEV3ekF6UGtWYU1wMElvNnR3TkJQVmoKZ0FDQjMzcUwxaWxWQ1NaZ2JBOFhRVWNqcHdTT2o2Wm8wNmRBUWpqM3I3b2ZJeHJKSWhtMndhSEIvbGFNVkdyWgpEQ1hHdlpCNm9HRGIyZCtVQUhxWkNDUzRMRCtwQmdwbG9sZ0hYbVN2MW00RFJiR0FZcW5PCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
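The kubeconfig files above either reference a certificate file on disk (kubelet) or embed the client certificate as base64 (scheduler). To confirm which identity such a file really carries and when that certificate expires, the embedded data can be decoded and inspected with openssl. A minimal sketch, assuming the default kubeadm paths shown above:

    # Decode the embedded client cert from scheduler.conf and show its subject and validity.
    grep 'client-certificate-data' /etc/kubernetes/scheduler.conf \
      | awk '{print $2}' | base64 -d \
      | openssl x509 -noout -subject -dates

The subject should come back as CN=system:kube-scheduler, matching the user name in the file.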

5.2、manifests

注:此时只有master有

5.2.1、etcd.yaml

  1. [root@master1 manifests]# cat etcd.yaml
  2. apiVersion: v1
  3. kind: Pod
  metadata:
  4. annotations:
     kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.74.128:2379
  5. creationTimestamp: null
  6. labels:
  7. component: etcd
  8. tier: control-plane
  9. name: etcd
  10. namespace: kube-system
  11. spec:
  12. containers:
  13. - command:
  14. - etcd
  15. - --advertise-client-urls=https://192.168.74.128:2379
  16. - --cert-file=/etc/kubernetes/pki/etcd/server.crt
  17. - --client-cert-auth=true
  18. - --data-dir=/var/lib/etcd
  19. - --initial-advertise-peer-urls=https://192.168.74.128:2380
  20. - --initial-cluster=master1=https://192.168.74.128:2380
  21. - --key-file=/etc/kubernetes/pki/etcd/server.key
  22. - --listen-client-urls=https://127.0.0.1:2379,https://192.168.74.128:2379
  23. - --listen-metrics-urls=http://127.0.0.1:2381
  24. - --listen-peer-urls=https://192.168.74.128:2380
  25. - --name=master1
  26. - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
  27. - --peer-client-cert-auth=true
  28. - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
  29. - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  30. - --snapshot-count=10000
  31. - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  32. image: registry.aliyuncs.com/google_containers/etcd:3.4.3-0
  33. imagePullPolicy: IfNotPresent
  34. livenessProbe:
  35. failureThreshold: 8
  36. httpGet:
  37. host: 127.0.0.1
  38. path: /health
  39. port: 2381
  40. scheme: HTTP
  41. initialDelaySeconds: 15
  42. timeoutSeconds: 15
  43. name: etcd
  44. resources: {}
  45. volumeMounts:
  46. - mountPath: /var/lib/etcd
  47. name: etcd-data
  48. - mountPath: /etc/kubernetes/pki/etcd
  49. name: etcd-certs
  50. hostNetwork: true
  51. priorityClassName: system-cluster-critical
  52. volumes:
  53. - hostPath:
  54. path: /etc/kubernetes/pki/etcd
  55. type: DirectoryOrCreate
  56. name: etcd-certs
  57. - hostPath:
  58. path: /var/lib/etcd
  59. type: DirectoryOrCreate
  60. name: etcd-data
  status: {}
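The static pod above runs etcd with client-cert auth enabled, so a quick health check has to present the etcd client certificates that are mounted into the pod. A small sketch, assuming the pod name etcd-master1 that kubeadm creates on this node:

    # Ask etcd for its health through the static pod, using the mounted healthcheck client cert.
    kubectl -n kube-system exec etcd-master1 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
      --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
      endpoint health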

5.2.2、kube-apiserver.yaml

  1. [root@master1 manifests]# cat kube-apiserver.yaml
  2. apiVersion: v1
  3. kind: Pod
  metadata:
  4. annotations:
     kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.74.128:6443
  5. creationTimestamp: null
  6. labels:
  7. component: kube-apiserver
  8. tier: control-plane
  9. name: kube-apiserver
  10. namespace: kube-system
  11. spec:
  12. containers:
  13. - command:
  14. - kube-apiserver
  15. - --advertise-address=192.168.74.128
  16. - --allow-privileged=true
  17. - --authorization-mode=Node,RBAC
  18. - --client-ca-file=/etc/kubernetes/pki/ca.crt
  19. - --enable-admission-plugins=NodeRestriction
  20. - --enable-bootstrap-token-auth=true
  21. - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
  22. - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
  23. - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
  24. - --etcd-servers=https://127.0.0.1:2379
  25. - --insecure-port=0
  26. - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
  27. - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
  28. - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  29. - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
  30. - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
  31. - --requestheader-allowed-names=front-proxy-client
  32. - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
  33. - --requestheader-extra-headers-prefix=X-Remote-Extra-
  34. - --requestheader-group-headers=X-Remote-Group
  35. - --requestheader-username-headers=X-Remote-User
  36. - --secure-port=6443
  37. - --service-account-key-file=/etc/kubernetes/pki/sa.pub
  38. - --service-cluster-ip-range=10.96.0.0/12
  39. - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
  40. - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
  41. image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
  42. imagePullPolicy: IfNotPresent
  43. livenessProbe:
  44. failureThreshold: 8
  45. httpGet:
  46. host: 192.168.74.128
  47. path: /healthz
  48. port: 6443
  49. scheme: HTTPS
  50. initialDelaySeconds: 15
  51. timeoutSeconds: 15
  52. name: kube-apiserver
  53. resources:
  54. requests:
  55. cpu: 250m
  56. volumeMounts:
  57. - mountPath: /etc/ssl/certs
  58. name: ca-certs
  59. readOnly: true
  60. - mountPath: /etc/pki
  61. name: etc-pki
  62. readOnly: true
  63. - mountPath: /etc/kubernetes/pki
  64. name: k8s-certs
  65. readOnly: true
  66. hostNetwork: true
  67. priorityClassName: system-cluster-critical
  68. volumes:
  69. - hostPath:
  70. path: /etc/ssl/certs
  71. type: DirectoryOrCreate
  72. name: ca-certs
  73. - hostPath:
  74. path: /etc/pki
  75. type: DirectoryOrCreate
  76. name: etc-pki
  77. - hostPath:
  78. path: /etc/kubernetes/pki
  79. type: DirectoryOrCreate
  80. name: k8s-certs
  status: {}
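Because --insecure-port=0 disables the plain HTTP port, the apiserver is only reachable on 6443. Its /healthz endpoint is readable without credentials, which is also what the liveness probe in this manifest relies on; a quick check from the node (the IP comes from the manifest above) could look like this:

    # Unauthenticated health probe over the secure port.
    curl -k https://192.168.74.128:6443/healthz; echo
    # Or go through the admin kubeconfig.
    kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz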

5.2.3、kube-controller-manager.yaml

  1. [root@master1 manifests]# cat kube-controller-manager.yaml
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5. creationTimestamp: null
  6. labels:
  7. component: kube-controller-manager
  8. tier: control-plane
  9. name: kube-controller-manager
  10. namespace: kube-system
  11. spec:
  12. containers:
  13. - command:
  14. - kube-controller-manager
  15. - --allocate-node-cidrs=true
  16. - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
  17. - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
  18. - --bind-address=127.0.0.1
  19. - --client-ca-file=/etc/kubernetes/pki/ca.crt
  20. - --cluster-cidr=10.244.0.0/16
  21. - --cluster-name=kubernetes
  22. - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
  23. - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
  24. - --controllers=*,bootstrapsigner,tokencleaner
  25. - --kubeconfig=/etc/kubernetes/controller-manager.conf
  26. - --leader-elect=true
  27. - --node-cidr-mask-size=24
  28. - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
  29. - --root-ca-file=/etc/kubernetes/pki/ca.crt
  30. - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
  31. - --service-cluster-ip-range=10.96.0.0/12
  32. - --use-service-account-credentials=true
  33. image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
  34. imagePullPolicy: IfNotPresent
  35. livenessProbe:
  36. failureThreshold: 8
  37. httpGet:
  38. host: 127.0.0.1
  39. path: /healthz
  40. port: 10257
  41. scheme: HTTPS
  42. initialDelaySeconds: 15
  43. timeoutSeconds: 15
  44. name: kube-controller-manager
  45. resources:
  46. requests:
  47. cpu: 200m
  48. volumeMounts:
  49. - mountPath: /etc/ssl/certs
  50. name: ca-certs
  51. readOnly: true
  52. - mountPath: /etc/pki
  53. name: etc-pki
  54. readOnly: true
  55. - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
  56. name: flexvolume-dir
  57. - mountPath: /etc/kubernetes/pki
  58. name: k8s-certs
  59. readOnly: true
  60. - mountPath: /etc/kubernetes/controller-manager.conf
  61. name: kubeconfig
  62. readOnly: true
  63. hostNetwork: true
  64. priorityClassName: system-cluster-critical
  65. volumes:
  66. - hostPath:
  67. path: /etc/ssl/certs
  68. type: DirectoryOrCreate
  69. name: ca-certs
  70. - hostPath:
  71. path: /etc/pki
  72. type: DirectoryOrCreate
  73. name: etc-pki
  74. - hostPath:
  75. path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
  76. type: DirectoryOrCreate
  77. name: flexvolume-dir
  78. - hostPath:
  79. path: /etc/kubernetes/pki
  80. type: DirectoryOrCreate
  81. name: k8s-certs
  82. - hostPath:
  83. path: /etc/kubernetes/controller-manager.conf
  84. type: FileOrCreate
  85. name: kubeconfig
  status: {}

5.2.4、kube-scheduler.yaml

  1. [root@master1 manifests]# cat kube-scheduler.yaml
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5. creationTimestamp: null
  6. labels:
  7. component: kube-scheduler
  8. tier: control-plane
  9. name: kube-scheduler
  10. namespace: kube-system
  11. spec:
  12. containers:
  13. - command:
  14. - kube-scheduler
  15. - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
  16. - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
  17. - --bind-address=127.0.0.1
  18. - --kubeconfig=/etc/kubernetes/scheduler.conf
  19. - --leader-elect=true
  20. image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
  21. imagePullPolicy: IfNotPresent
  22. livenessProbe:
  23. failureThreshold: 8
  24. httpGet:
  25. host: 127.0.0.1
  26. path: /healthz
  27. port: 10259
  28. scheme: HTTPS
  29. initialDelaySeconds: 15
  30. timeoutSeconds: 15
  31. name: kube-scheduler
  32. resources:
  33. requests:
  34. cpu: 100m
  35. volumeMounts:
  36. - mountPath: /etc/kubernetes/scheduler.conf
  37. name: kubeconfig
  38. readOnly: true
  39. hostNetwork: true
  40. priorityClassName: system-cluster-critical
  41. volumes:
  42. - hostPath:
  43. path: /etc/kubernetes/scheduler.conf
  44. type: FileOrCreate
  45. name: kubeconfig
  status: {}
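All four files in this directory are static pod manifests: the kubelet watches /etc/kubernetes/manifests and runs whatever it finds there, independent of the apiserver. That makes edits to these files self-applying, and it also gives a simple way to take one control-plane component down temporarily. A sketch:

    # Moving a manifest out of the directory stops the static pod; moving it back recreates it.
    mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
    sleep 20; docker ps | grep kube-scheduler   # the container should be gone
    mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/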

5.3、pki-apiserver

  1. [root@master1 pki]# cat apiserver.crt
  2. -----BEGIN CERTIFICATE-----
  3. MIIDVzCCAj+gAwIBAgIIKUS+2WsvNC8wDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
  4. AxMKa3ViZXJuZXRlczAeFw0yMTA1MDMwODQ5MDNaFw0yMjA1MDMwODQ5MDNaMBkx
  5. FzAVBgNVBAMTDmt1YmUtYXBpc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
  6. MIIBCgKCAQEA6yhoybfD8bD6+jTDpo+3m/ZtXvQXDCTETC0GOgdJPUyT6EtbPJRV
  7. Hs+t9aWDkKbl4K3lhsxmGsf6yDZot4ImZIgSpU2y6zU2t/lCyR0sM+2umCtdo/FA
  8. o53zFYe/UaepFFKA72sUvkCm4DnRzWMYzWofjHV7AkFOFZzbEyM9PpiGnUESY37o
  9. FLHhqe7c0pei1hypiLZDSHcloXObvO+kiRy4TmD8kdqW/3iR67edhYaUbpD/oZg1
  10. g9Qihir6QCs3iiNokrRbZEhsUeZmcqvAViZDye8FNwpzRe4e9M0ibOf3AD3ikpWw
  11. 8Nrs+BiTmeWYyxig/Od7kGwsqYA6t5d46wIDAQABo4GmMIGjMA4GA1UdDwEB/wQE
  12. AwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATB8BgNVHREEdTBzggdtYXN0ZXIxggpr
  13. dWJlcm5ldGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVs
  14. dC5zdmOCJGt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIcECmAA
  15. AYcEwKhKgDANBgkqhkiG9w0BAQsFAAOCAQEAfaMG4LqEIHhi+Bs0qJHXd+yA1UIU
  16. kM9AjfgbRUcJytF5LSZakH+b+S5+J9ihjrvi5jaGWmEKjagOUkkWtTq+uDXDZk55
  17. kNuAobt16o/jrOobpTCH4Bd9U/7R99Ui4H608LopNfEn3GChA658nEBxYELvCKEq
  18. 9YM/i05qhHbLJnocjd4pErkWbS7JE0H5JxGvA5avLHgwSqbHkAD8xn+nST4/tTFr
  19. 4S9GZd/8kB4QRG0TFFX8zAuZsihd3QpIpDuwgw3PwImHjD/Mpxw58eR71i68oqPd
  20. w2Vsc07Ir+YlSr5ZREdzWmp0xTRJ4DyykWdU0gM9MycfH48Lm5K78XQxfQ==
  21. -----END CERTIFICATE-----
  22. [root@master1 pki]# cat apiserver-etcd-client.crt
  23. -----BEGIN CERTIFICATE-----
  24. MIIC+TCCAeGgAwIBAgIIfOypnw+V1s0wDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
  25. AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDVaMD4xFzAV
  26. BgNVBAoTDnN5c3RlbTptYXN0ZXJzMSMwIQYDVQQDExprdWJlLWFwaXNlcnZlci1l
  27. dGNkLWNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALRhqUvX
  28. x93vtwxuaW5zEEKLTMreUuKmvFggLDyeRN2jSbI2RWX1UT230ZnJOt+C3hD6u65B
  29. 0cIYNyFRxoTIF7jtbc6YPvlHFS1JMh1OjZE42tLuXomdFLXjtuc9fRLtOLXhBofS
  30. hUZTaxEZzqthspFPzetaVmLEIMLOh989TJJI0HP+Go07T09XsaBFLyabyZl1QhfY
  31. Itjbr7NaEBzdRZ8GbETfi5nXgtsfguD1CslDqZTfMNP3scwco2kqVmvuJ/Fli6SA
  32. 5OnB7HdfL9jNW5mswXCQ+9N4jsTqOoHRg80iO7mAFDnR/gjnUob6WxeNipcSKjIN
  33. 8IaIuIM5HSPiibcCAwEAAaMnMCUwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoG
  34. CCsGAQUFBwMCMA0GCSqGSIb3DQEBCwUAA4IBAQAFIgXU0b+vtIXeBPaBVwMb5b//
  35. aLqAMnxbxHxjIee6NDxQHLK13aPUXuGdku3TI6cTFWOu2OYFxJbJ73J6xjtAJ2r4
  36. MU8yiOLddap4eFsHhFQUuXDcOyuiMau2E86IrTkXVQR3vk21k4bJNT51zjOrNg9I
  37. /MSIWA5CRaQop6WDXmnmbiZEqhB+OH6FL+7yn2lGXw7CYNMe9XaQjr37arFSXvlR
  38. zBjaEU+jBZMrQvdatH+LFcLn4Bvrhtao+cfdCP1dl6iJYlBAEnT+7Gmvl1O5BQQn
  39. qVP/zJxIl4VzzmlqLAGHz8UH/RR329jQHjS5ySK94LUFXRvGpZ4CvuanFBuJ
  40. -----END CERTIFICATE-----
  41. [root@master1 pki]# cat apiserver-etcd-client.key
  42. -----BEGIN RSA PRIVATE KEY-----
  43. MIIEowIBAAKCAQEAtGGpS9fH3e+3DG5pbnMQQotMyt5S4qa8WCAsPJ5E3aNJsjZF
  44. ZfVRPbfRmck634LeEPq7rkHRwhg3IVHGhMgXuO1tzpg++UcVLUkyHU6NkTja0u5e
  45. iZ0UteO25z19Eu04teEGh9KFRlNrERnOq2GykU/N61pWYsQgws6H3z1MkkjQc/4a
  46. jTtPT1exoEUvJpvJmXVCF9gi2Nuvs1oQHN1FnwZsRN+LmdeC2x+C4PUKyUOplN8w
  47. 0/exzByjaSpWa+4n8WWLpIDk6cHsd18v2M1bmazBcJD703iOxOo6gdGDzSI7uYAU
  48. OdH+COdShvpbF42KlxIqMg3whoi4gzkdI+KJtwIDAQABAoIBAHxeQZ3bPyDUYL8f
  49. eW3/w5w980qEk11WXNHeDOIWtaCjLvLC3IJ56/PDw65mwkLNNlM6rSBunTNYAtrk
  50. SR3P4BtPCMDC09iHnCBHMVhnitAwBSAd3ey/80GdqcQx7wSXrtwoNJp9GgrtBQsb
  51. YhVkHPx3q6Cz/o/GblgikifnWd4ZUGis14TvG1rEhxKEZneDVKuUIpuhZhCHt33s
  52. LAiHhrA6nPkd7w4LmYIWPQW491oQ9Fc+jRzEP9GhmcbYQKeMMg32etO23k0vVibX
  53. cQnnSL6uQmNIZcuzBi1LFerpw3Xz1xephNPuR1pEjm6Ph3bBq0b3G5NAqa4pHL8m
  54. Rof7dxECgYEAwqISJ69dmvaUuhtt+CmO8AJBBEzDBrFfkpI9h2JfcJPWBl4ts5ko
  55. e1OlxEpG9uPwso3NudzP3RH/iDt7pUanjBTtHRi7XYfR2U77ogr/OdbJKlGrsdwy
  56. x9UTsYx6LoOA82Xkc1+Bx6UHl9FCdj/AvdbEdYcrxE9CDwdpJXM1HtsCgYEA7UFD
  57. hkLGnyiYK7nZjgK3gRdAIGEmQWNb19e5DHVDECLOg7nI+CBCZA84wVecrbrsBWH+
  58. dfAt6ZC2tjebrnYYOOJwVkkLnzj6feIGqdzxBwsJ7ZzEQ7MIcYLWqgK/WV5NNlvJ
  59. EAj9lHfFhzoe4Xb4feJct1buYdcyS5JA0HA0UVUCgYEAkzzcEx186H/lXyzk8jku
  60. Iq7x1HjliKiiLlVnKoXmwVl1LXgNhrI0h6dt3aJ7MMabDdhsa1B6BzlYYAzvqsZa
  61. dYRXJA3ToBvhSk2P2rQLBAxSPitugayc1cOBlG06+PkOkhLg0c7MdOWJavYpGx97
  62. haF1GZvaJjX3OTtX9bbD1sUCgYAgrAkhdxalGlECTICiJsugclQ5YUeEX6tpKOLp
  63. zUgj87ceurnrOX4LC3GUZn1EC2avQxRop1+bN3uB0lyVBNxHER/JMhvwnEcaiMLE
  64. J5Hll2aRmzIH5KK4Bv2KwgAZzXuyjac9lw9cn7XK7n0MLXcA1uhPsx/2x0y8zXIx
  65. ghIiVQKBgALUXZc68Nc2ubVa86nr78g2wnEgxiMeIM7c7tMALyYRtvnmBoHBhIba
  66. VETlxkAdkzhmtDi1gUFrkJ8nLDrmmKA/5A6kObouTOHBbk5HGEJEkn9PnWj7pCRb
  67. rOk4a1nc4Y0IiAjM+WdyzOR5Mj+ENmFm2sKQ6LtKKXj7iVuO/F2k
  68. -----END RSA PRIVATE KEY-----
  69. [root@master1 pki]# cat apiserver.key
  70. -----BEGIN RSA PRIVATE KEY-----
  71. MIIEpAIBAAKCAQEA6yhoybfD8bD6+jTDpo+3m/ZtXvQXDCTETC0GOgdJPUyT6Etb
  72. PJRVHs+t9aWDkKbl4K3lhsxmGsf6yDZot4ImZIgSpU2y6zU2t/lCyR0sM+2umCtd
  73. o/FAo53zFYe/UaepFFKA72sUvkCm4DnRzWMYzWofjHV7AkFOFZzbEyM9PpiGnUES
  74. Y37oFLHhqe7c0pei1hypiLZDSHcloXObvO+kiRy4TmD8kdqW/3iR67edhYaUbpD/
  75. oZg1g9Qihir6QCs3iiNokrRbZEhsUeZmcqvAViZDye8FNwpzRe4e9M0ibOf3AD3i
  76. kpWw8Nrs+BiTmeWYyxig/Od7kGwsqYA6t5d46wIDAQABAoIBACNEozqlofCMr4d5
  77. BGLlqQ7uDYcxKoe6t+oI0qc/Un+sDX7IVn2mbYG6egeedDXsogtpaUQnQaUAmx8N
  78. 8fSbw3BObCV4mr3l9DfxXU/WXTvIiOfvkRK2axBe7wcqncn8UEJpAUdnEuxZu+1j
  79. HpEkLKMaKHMjZ3h2HOTm6oBbR6MsaVboR93Ux9otiGPO2ndb3PMtbg7NBVuD13dV
  80. w+gMaqFlN4aCjN6e5gEIHow3KOA8VTEwud4Uv8T3+aG884rkbnm+D9QkG/e0Hp5L
  81. NOxgn/iyYuPdAS4vGzDICwbYwBOFWmvyLqc4Hc+LAcYTp5IsJl3xUNEG8xKYcyKX
  82. qIx7ufkCgYEA7KKmrMTsnPEaFhk1RVFU/0pEjSDiFsV5JrxYxhJPEgf+AEI8Nazd
  83. Ku5yY1mNGZ0PwZONUpDFbXJKot7Xafj2YWs+yqQKjQe7BPSmuN83W9zr3f94YLxy
  84. VfOwoDpZfU6AMU5zsQrZ/DmE1+coBxLWtwav+VwlQudk/UpPe6nZY3UCgYEA/mbO
  85. NQQNrb926T+JApJqZvAH5emjDFTpp5y+p1zY6rLsOBFMdc+PhTpoM51zi0tuX7H+
  86. udU91PfjmZovMmoo8zwgxFFgfEwkOWImWzSOnG/KVcriBsTrlIirQVedhyjWn3O7
  87. 565dURNOpq8GH6mTVaqvKniTpkwO+sj9+u2lvt8CgYBXIVCjvuKsqu4DAwclXdwh
  88. H/R7zobRAacpRyKc0/L/Xaf96mWHEf5hp2jBAiE9NCKwESdxJlM7iGDI9ap1n7EA
  89. j9+P97TW1uja20ZkPfSBQ6gplr55SAoFcfQwGywGQphbD1rz7l3zTC6I3NlVOW+L
  90. 9s9mzrH9n3wE846upwyfXQKBgQDPJf76hFZvB9xXiPiTM42YTBLiTyAIxouLg8Jq
  91. nNu0IATgkpVjyKLgpPJ8NNUEs2MoYNM9ljlG1KJrTHTp5C97/5XexTR/gbBtWVJK
  92. Kb2F/DERMqZhRK9evvpTtnf6unIoXCDBQeWSQtpkN1gRKA9kThtbxdrUKlJ4Onk0
  93. fZXcmQKBgQCu2uxIW2QcrrT9eaWxs3sCdnliBCHo99SGRHP3ASe+8gzE4JpypI6r
  94. EHWdcb4z4ewc51AP8h4msKXuXx0OBWqZSp0SMv47c2asBTHXYwVP01KcJaHlecGq
  95. rCm2A5xWgkqieYsLoORJ1czKpw2UTyZ0YODCUDiPXQCVJ0Hpg4IxkA==
  96. -----END RSA PRIVATE KEY-----
  97. [root@master1 pki]# cat apiserver-kubelet-client.crt
  98. -----BEGIN CERTIFICATE-----
  99. MIIC/zCCAeegAwIBAgIIO4G9X1sn9vswDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
  100. AxMKa3ViZXJuZXRlczAeFw0yMTA1MDMwODQ5MDNaFw0yMjA1MDMwODQ5MDRaMEEx
  101. FzAVBgNVBAoTDnN5c3RlbTptYXN0ZXJzMSYwJAYDVQQDEx1rdWJlLWFwaXNlcnZl
  102. ci1rdWJlbGV0LWNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
  103. ALf7qf4RayVh5Tz81yjLF3KqNn7bPJWVN3DVTbIWRSLo8mDbQvDm7kCIaoXbVxds
  104. owiGHT67EQ9xzaDA6USMOfGplfenowOyjO1/8C8gbJgzGKOR/rKcvbUE5qGh1bLf
  105. gr/RMNyfT6hXvkLop/PDVgpYyg7OnJKwNzpez+XKnc8c4v7NJo7vKf3habnmCKRL
  106. yEVuQsU1lH71fl6BQlHyBxqJbAV3Hq3g8NyQZbzzPXdSevJB2QVE83IRATddEECH
  107. we1ITul/2/fi2DixFoXdaZxUd9QJ7Z3xIrs88dQ8FGKD5OYNJTo+osf0Z9srZrhs
  108. 2DELV7AGWShGCl+zl0dqcV8CAwEAAaMnMCUwDgYDVR0PAQH/BAQDAgWgMBMGA1Ud
  109. JQQMMAoGCCsGAQUFBwMCMA0GCSqGSIb3DQEBCwUAA4IBAQAoaC4zxVScmaazOT+P
  110. BYmxnkgyXnfghje+4X9OZkSqwJBT4qX4hC3ppI1FQbgb9vbF0AabT7tZmn8pdYKR
  111. eowfLvagwgoQm5jJQ1WvB2CIWPozPYPkulNG29e3DBvMEHkyqlVu3xBxHRtBHDIO
  112. JkzrsT5T7+y190JAWKleV4pZ8HpplTe47cC7E8wiHvHvd8a7tAEUd4KJ+oN/b4Ei
  113. r+78ZIlB8/WjXt79wAlhRtx4BcVtJt6a0hPMnqcdsiX0SsrXw9MtPzLuYvqLUOVp
  114. kq6uz8f2qqPIcWLBXg0/1OWp8voZiHGi2zGOqWKrF9ne48dvClXYHpMtW26iSiev
  115. 05a/
  116. -----END CERTIFICATE-----
  117. [root@master1 pki]# cat apiserver-kubelet-client.key
  118. -----BEGIN RSA PRIVATE KEY-----
  119. MIIEowIBAAKCAQEAt/up/hFrJWHlPPzXKMsXcqo2fts8lZU3cNVNshZFIujyYNtC
  120. 8ObuQIhqhdtXF2yjCIYdPrsRD3HNoMDpRIw58amV96ejA7KM7X/wLyBsmDMYo5H+
  121. spy9tQTmoaHVst+Cv9Ew3J9PqFe+Quin88NWCljKDs6ckrA3Ol7P5cqdzxzi/s0m
  122. ju8p/eFpueYIpEvIRW5CxTWUfvV+XoFCUfIHGolsBXcereDw3JBlvPM9d1J68kHZ
  123. BUTzchEBN10QQIfB7UhO6X/b9+LYOLEWhd1pnFR31AntnfEiuzzx1DwUYoPk5g0l
  124. Oj6ix/Rn2ytmuGzYMQtXsAZZKEYKX7OXR2pxXwIDAQABAoIBAQCEfFQwYaivdaxW
  125. 25ewh3buGkY92W/qI1aWCPP3DvRgLDEFsD6nLRRaIiHbHFS9yHwqUjFTD/A8F+5E
  126. GUahFv1O2ZjlirDno7a5+8wgk4+/lePjPemUAyzU4p+Vuu0g7rS/nks6Q/pftjeL
  127. BPCUp5AYyVFPklbLhttuTAIXbm1vSxwZ/HSKn5fhnWvdwN1Jd6iGU5En7XQ5yteS
  128. +szs+DJykzIotMAt9oybvCmd3pW/od0V+4lvuGD79092o+UdQ7vczpqHx3nX0OlV
  129. ByNhFy8pbv2yw0/e86NAvzXcgykN7YWwgy3KOzY4w+SA64RbCXyN085duqKUPE8v
  130. mGS5z/F5AoGBANC2iGHF4xrBnQJUX8ENEyJM1eMEbJRNzCQaeJo2MOVKvk1yqpVL
  131. B3UhnIbRUekSR1IrD72mfmaMxTr+i0e0tN/2n0mRuP//2dkyWS4oFYu7pQwR1g0s
  132. Xo36tLKxLEiV3XEFnUAMA6NHWFgazW76y79eusnP2XJGaD02XhXHxEeNAoGBAOGq
  133. x9KS9ro+IbCPzgecqZIk/TVD89ZjpbVCAiiZv9gdfxMH0qrBBVqVuEkqSE9J/lkb
  134. KgREBzknwXqarniTrIOmTzzSq2hhoADoTL3le7Rz3ON+47nPBhAWbVv91qPf1s23
  135. t0DaXJsjRkQByW0ehIn8iCeVFWVNPuMuHuf7tFubAoGACw3P1VXUvGMKvMfZNnFJ
  136. 1SQ6o8ZlNcmVCUh5oLlEB7DYuWNcU4HgyDxafO1zKCP2sQxkzgeWZDoKbCB1IfwZ
  137. JE98ijn0kWJsmEtJW991nKv4htYe/x2deGmRznEBxmphiw3gETdRrgEmVaw9uyX/
  138. Sohq3itq+dluxecuPnsREzUCgYA5r79u69SYXWOdT9V6CqkqS7xSjnFZn5VvlVUZ
  139. 7dulsjyWr8xBjCADPPyj72QWqLKVMqV1+7HhAXGrFrl85zsVWEEvKidZAoO1V6yu
  140. amhKA8g2e2xZRjulhyYjeusQbxro8Yqt0GQV4FmI7u//repxn5VqkOisQafOyS5r
  141. XOOI+wKBgDJq1Yuu+tXTR+jKx8MOMO4Mde7EKiAgPpAhKX+5Ot2B+M1vIJ8sSQLI
  142. Qk2rXZh5OJpz3gviQAR7cDVoPhKOj5dyUbqFbJ73I6bzSn5W5w0j3oxA+n4KHXfv
  143. Po8oG4LOiWVhV6l4JTxyOD1HK09ty/bSxfrchMF8mTT12W+90XJ5
  144. -----END RSA PRIVATE KEY-----

5.4、pki-ca

  1. [root@master1 pki]# cat ca.crt ca.key
  2. -----BEGIN CERTIFICATE-----
  3. MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
  4. cm5ldGVzMB4XDTIxMDUwMzA4NDkwM1oXDTMxMDUwMTA4NDkwM1owFTETMBEGA1UE
  5. AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANti
  6. Jd5n5bXH1DHJoaMjqCvzo+c4vGBelbLchXk/zvyJZ6NoAMOtoOEd00/XZiXJKIPA
  7. ZnqmBsfKoWaHtJTX5bK9reWDirO3t6AKLDnm31bDvnf1zsFwvyBStaRANQ4sdNml
  8. BAh5HYivbsn76KJbVMcvR/Hrcrl9SB2XJB2AQPtn8yhk/x1HN4lfkMJUNBNy8n5t
  9. S3RdcgYSTHICAF/1qvIzsMN63DtpHj/K62A5007tWPAx1ZOtap/Zq3U2y97KX1bq
  10. vMRjqr3IWeoY1glZroutcqZhzXvVeJDOCcZU4Q26iE32radEOv4m5odE3H7Db0QZ
  11. X1ST768HdXMkYXBq9bMCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
  12. /wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMgG2yQsoGrbJCDOueuVuS2WheDu
  13. VpdBQHtQqJAFJSRCS8iorC19+h1yh18HI5gxy9cPaAMy0Jg3EYpQFjm91AjIXPNB
  14. wchtGSKkPii7czjhWeiuwIl19xIk3ilcWm50HSaZUjs1uBRjLj3l8PE8BHNSaapC
  15. a96AcrZgVkU8Qi6YOiLC9Zh0LbBCVh6HWjQ0bTVuvhuecLyZRiNZ/PEfvhHkD7dR
  16. +DrVph5uG1wf/vjApaBc/MfvLk02Rxu3YGEMlgvNwO1+h9rwjtInCVuR2gxHb2mf
  17. ooDLDvS094upL6DzmvjvFRSZcXIKjIPglZ7EFvZkYf+r52mnZyQ45UOqIps=
  18. -----END CERTIFICATE-----
  19. -----BEGIN RSA PRIVATE KEY-----
  20. MIIEowIBAAKCAQEA22Il3mfltcfUMcmhoyOoK/Oj5zi8YF6VstyFeT/O/Ilno2gA
  21. w62g4R3TT9dmJckog8BmeqYGx8qhZoe0lNflsr2t5YOKs7e3oAosOebfVsO+d/XO
  22. wXC/IFK1pEA1Dix02aUECHkdiK9uyfvooltUxy9H8etyuX1IHZckHYBA+2fzKGT/
  23. HUc3iV+QwlQ0E3Lyfm1LdF1yBhJMcgIAX/Wq8jOww3rcO2keP8rrYDnTTu1Y8DHV
  24. k61qn9mrdTbL3spfVuq8xGOqvchZ6hjWCVmui61ypmHNe9V4kM4JxlThDbqITfat
  25. p0Q6/ibmh0TcfsNvRBlfVJPvrwd1cyRhcGr1swIDAQABAoIBABVDXghAabNEuvxY
  26. XqJBQnuAEdLHXPq6MCg113n5BUbUyoa7/db5bS5khaanae8foB2k+EnK7b1PlnUp
  27. kgcbJdg9Ki2kojzpAZMxaTfzeJIgRsW5vWBiXSP04EYbMwk8pdayd8Gae5JT7pkF
  28. IXcbAwyLOJ3qBCSWT/cOPyHc3G+BZcNsxiI+Y45/1wND9m7ZhAK1Hi2c3Fvil9zK
  29. N0UqMB25lGA1JfrrhiJR70BLwQ6PuqrdbkriEg3B2mzrcTWAVaRw6AMEN6qp1VtI
  30. eS86XqppfQ/AYr3x5JejXm/NJusWlOYNyL8/1ZMYhMsSv8xVBipZCUQzJY6bP0Bk
  31. 2BchjoECgYEA9/4vnRmsRsWHUC8IQBOlux352Zs6rnbNttHjYzCNx6gHXJngqKGe
  32. 2GWnKhR/+UwVfGESPgDYGJicR8KFkvPhRmf1l85Id2tV3xZtgWqs7NO21emUDGdM
  33. gk14WvoOuR+RaJyHfqgHKqKgEte4uSv6U5xL5YQzCy3V8/ljm6aDWJ0CgYEA4nd8
  34. kw+swxtn1B4jlRhOto/NCpB4IBBHA0XCPSEpIDrzf53vEhEGMz0eiHXXdaVPbd13
  35. kMpGSkwUb1rCEEFBorxlfSVUziC7/YW7RXprtpBMuBfUGiKI3BXgE+jTGrOUz952
  36. +E41IaZnjosh8lpUSsD7wiWkNQndw2yw9GENbo8CgYBpY3lCjyV6UflmJwafjHny
  37. 4hNK2b//YnebyOiUP48RGSQ/wxkJMN37Yn++z0VvYVkEKZCCDwPGuBw6Fr2DLOdA
  38. b2+cWsrLDS9KBhL1W6svXe2mTIRhHQkTmu6Z4wicvYCi71pZhfi9sqzKNSjIcJsK
  39. KzLJz/uNNaZl70bYX9QTtQKBgQCGjIkV8pUpIiow62smlNeHPa6LnUPRgPo/5n09
  40. xmrhvESZSKMWb8joPmLannDRc9LaKl90Rck3MTZe5mQwNiUh457EmJ5nDSnDuWWH
  41. JPHD+L2sDnQ0xtnbMJ/+FDEARzuduMWkRwroIC6ckOstSx+Tfk7VjXmfDWqVRglo
  42. WBUb3wKBgE8Fvn970GoqdiEJF+SIhEOAPAk5UMx5/Dd5kzSoZggL21umDYVa0xqX
  43. MXYvN2yF0SGvLxjqewcsacEhMiD9RuXyqjjs8MvjPdWiNZavuailf4QvcHwp8MJB
  44. UffsCwXBOGOhLDX56wTKzhstNu0Rd8BXZCrcaFNh4vJ3/gNvKNrt
  45. -----END RSA PRIVATE KEY-----

5.5、pki-front-proxy

  1. [root@master1 pki]# cat front-proxy-ca.crt
  2. -----BEGIN CERTIFICATE-----MIIC0DCCAbigAwIBAgIBADANBgkqhkiG9w0BAQsFADAZMRcwFQYDVQQDEw5mcm9udC1wcm94eS1jYTAeFw0yMTA1MDMwODQ5MDRaFw0zMTA1MDEwODQ5MDRaMBkxFzAVBgNVBAMTDmZyb250LXByb3h5LWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA9+uyxi365UT3pKawVI7g0GKYUYce/jdY3ELiLSYRnHCVmEbKSIVeTdB7VbXQFIMB5p+aZWTgfWvdMOW9dI88HKNtszCsu97OOoO0sKB2Ipdso5utuj4aDvB7j6SUztu/jJUAavnH1aKVRNgn+QmAuSMcujzvyaAAQP8fnMFCSRw/jfrKYlV9E4PgawWzmmEnzuGxbvHiBsQjrtalYzAbnRGbk1pX5tuIc5qJvYdel1ReIt9IJ2BELPDNARK9QMSqFLIhBU1iaPtRSz72bxXaT7D21M3oYifhjNmzokTx7DXDlA4b5pS5byMZAF4AYBPmNkRppPGYY1a4cWyrUqRpNQIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAF1IlamjA0+o+eb5Hi45p3qbSNMvbfYAwZg350IKcYVpdndR7wMoGFhWZJ8wOWJTNE94hgZ2qsVBjs2Gbst2cp9neK28m/2uDBJgz6OTzDZ8aSQ8zvQoVWoiYKxxNOP5rQOmjwKadg4kTICqs4RkgQxTS3LrhuqmHmXwVTvlqNb9InyJctN+V7ItDEvzYvLBxuYW6f2UKHtlsV/Qk5drBslA5yveZ+QXlNlgx0rzM4BNgeVQfHgTpjRHSLLhpveO0xMVtebln7i4tXNuUGjTgOd9OWwxUja4iTin3MCb8otQDH/xEwmavDkBCu+bAqmPNaVjfVd5+h0DpRsIfla9uvA==-----END CERTIFICATE-----
  3. [root@master1 pki]# cat front-proxy-ca.key
  4. -----BEGIN RSA PRIVATE KEY-----MIIEpAIBAAKCAQEA9+uyxi365UT3pKawVI7g0GKYUYce/jdY3ELiLSYRnHCVmEbKSIVeTdB7VbXQFIMB5p+aZWTgfWvdMOW9dI88HKNtszCsu97OOoO0sKB2Ipdso5utuj4aDvB7j6SUztu/jJUAavnH1aKVRNgn+QmAuSMcujzvyaAAQP8fnMFCSRw/jfrKYlV9E4PgawWzmmEnzuGxbvHiBsQjrtalYzAbnRGbk1pX5tuIc5qJvYdel1ReIt9IJ2BELPDNARK9QMSqFLIhBU1iaPtRSz72bxXaT7D21M3oYifhjNmzokTx7DXDlA4b5pS5byMZAF4AYBPmNkRppPGYY1a4cWyrUqRpNQIDAQABAoIBAF68R0Upfs0rTIIzXAAD1O5sLo5A1twHpEIOoMTl3ibsco2Mx3Fs3TtY5jg7UHb2FLze0i3anVnv5MbxkzK+JRdAcAPgHrFvk1iSyXIQ7vOK722ZaIpZfrWkuWKLXn2pRQngSheWuQDurqFvA99K/VBBlZGpBWwDYvVzR84rnzu1+ht6Duhhc/V4BP5KmRiIKs5SEtLU/w7nEWbhCrGY9SZ4fmgCxlahloRhCD71PLQxMunraA3L47CqshjPm4ylt1J9uAm71hwrby1Q5VXgpU8rz5n5k4VVLHzgQSx+pOWq51c7o7UU6sY4ZpmpNtwEcGsC3XzMywBq9VmCAQMJ6iUCgYEA+XJt5yyMChu4LuUg89Tj6iHt1JKqBxvWjQvd7FcSQtT1yYkx0er9ShPkPSv75yX34dXuWOHPp3FocWmAA9w9HVB8XmhBpfXoDB0wI1o7n/uGGevBasZxEFl4pfc8D9qoPq/qqAYQgzphzkyKLxsYxubnl5yNo9GcHO6l+jo9iOcCgYEA/m8BJKZH6tjXqL+/E1af2b8ZPqDuYSOZkTgh7wu7UMSdwkWEnRJ4fPnunkQlW7m3bu2s1+lUHwgyY8EucEeD+4jRPN5zPV1GiDiYsqXmBcmUtUpaqqjNTFGpGbkPzLXzprSdWwd3u0zwgtALDdhav6JA1iqCkDQSUXRzb2KbbYMCgYAaKfBxIPEHVmT5NjtAmAHX2vsxIrkGydq1LJt4YKGftOqa2vMIy5cJoBB+ghCH7CmV3HSFihnXvENyMdiljwIyAvEojdLk72gJbT5RVvOOEjm8mkfNRUcyqc/HyKjaGNswyA7a1NgCi6saklikHDl7E1kTQ+5vUlsHhdiO6HDv3QKBgQCtBqAoZEwUEVLXl05BwG8EjUiFprt1o9gTQbER91BzJMKEEvKUPrNhijYTuxQMxMdR0J/yVOK4F8Lsw7ro8Dl5HRnt4vlLidslWBe/pcI/vU4720y9Mf4rIH122Ls9457Gh51bAkESRshorUJXMALGv3iILHCN0FuEuUSnQs+gMQKBgQDfWbrsFmDaTvtrF/CekocQOtsOH8tSHIuTrx6KIislSJ/w+gwv/xugheUl5l9p9Pz6beulUINf7S3e5sI6ASa5nJvOMfSAJuzldzDWy8CfllyjXi+qZEmIDELHYjuJs0H2d3dVPM1F5vR8m7F0/99agzmmpHjZi8gOPswHH3Tjmw==-----END RSA PRIVATE KEY-----

5.6、pki-sa

  1. [root@master1 pki]# cat sa.key
  2. -----BEGIN RSA PRIVATE KEY-----
  3. MIIEpQIBAAKCAQEAu8m0vfJe70JTRZW288FXGUA3DIxPcvEvCll3l1NZ1CabfNki
  4. lW3A0CqGUX2YlPLz/GgFP6PgNNcJR0y9aa6m4QwLFPjpv65LghVtFgsLckgl3DZX
  5. MaEtY6iIENEnrqFRzkGdT3XDYdSZfXORs6HTT+NEcWsFrJaPFrk2/0iLvlFKSV4I
  6. basTLEXj/KDRLToMuxNZMeX+IJyhi9YI7CrgpDAxfJB+usBQVPl6Y4ZU/GmMT3bi
  7. Z7q0NXjSt+9+C4Dt2bLYEpuuxbDVDUEvC+4DOaVP55IVHFk70Y0JwE+xbNnvQkDA
  8. s3zvTo/Ued4u9tIGC9akvNjoL0FBDXyJ7v6xowIDAQABAoIBAAv92G3cwVUz/g9O
  9. fS1Zpk81e45wk046uo9FoU5ngy/5+yngz8WNCagBXyxrAchZL11p4xPqShH1vWDx
  10. NJNAFOYAF+ER+BNGdQnshlfHAsccdlZ2neDMcxKPG4k/YfJT2N578Ci302823UpW
  11. i/JVniHW2HMJq4YW4zJHR4zLvCi9+iLKAWYe3fnjF0OxB0CKpzROQcrV2jYMyLw/
  12. 3rxaLJ622S4aNlydkfgGI5tXaZ0abBuI9D45HoxkGVq6UvW3ikTA0aK7fbmWm1H5
  13. pr0M1nO9Ebnq2XnzFChZd92XYZhl8+osw9sSgj17x6m5fjPcxO/5bfdw4xjl9KmM
  14. kpCHxHkCgYEA5GKQ01RPnR1Q8ebSH0eAZUvNTb1Aw6hbz4+UImUlDYTuVKYeGdKh
  15. K+fzHiP50F/6W9l/RwQsR4wLheMQjjAVC67fd3XAC+aDtpUyid0P+C5v3crqg3U6
  16. VYUNa+mnKl7KW9bzkFQcoArjdP5sKdcw/pTglncx2gH0ghZL+t5n+VUCgYEA0n5/
  17. JFSdlLhVtWrBfEbezN+tauBjltf8YhoSXySLDnbyl3kTfWK2UiU0I4NyhezfLiOQ
  18. +dNqaKeNXoU+/okdtT/8RuMisFMTIINgJG/LqUHsvxDQOTZJpsb8RQPJjGPpQ4Wv
  19. jMAgTkk9GCBKkbvMx0/pZaHI6T4uAP/M55KeHxcCgYEAxKfa7R38L92+hY2sASMg
  20. fBj5f6cmzVN7Ow73D2bosOt2DY28/Z9RCO2BesKfqb37ZnuyDQSa3EDK607KQqVE
  21. efrqkYLjC1xCrkVqbyvbRGk4ClNf/DJFOL6JABMBzoow1UQSFoVW4Lh/g45QtPaH
  22. SbAIc4fPdVmZoSpx4mMARMECgYEAlR2tvjP/Sirn9NQC66JdFa/jb1I02th5X5nu
  23. p94AcKfNJYdNSkcSt9DJRdtJ1xw94rapboHZ4PfJi0tDnBfQpuUEN8eSfGztoNvQ
  24. 0R8tnOMp7xTfHZiaxn4ymkWbk0v4JLBg84nrmOoDUMMXcHQlFpFC24+n/6vf9S9B
  25. nk9cmtMCgYEAvcMkHKyTF8etT8WkLMYQHCFF6J+1CWWZ4iaVUcuu/kZY2bx5fOa1
  26. 4ENzIcEYBaUSrQuIlKN1RtMzl4WlPr5OWNE8W61mpdaTOP2acPyejwGAv4+naejb
  27. 6MQWWmQhmzPVsceiKVz9zCsU/+tgikYB1Hgzi3es8t7l2HHY7IHKsjU=
  28. -----END RSA PRIVATE KEY-----
  29. [root@master1 pki]# cat sa.pub
  30. -----BEGIN PUBLIC KEY-----
  31. MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu8m0vfJe70JTRZW288FX
  32. GUA3DIxPcvEvCll3l1NZ1CabfNkilW3A0CqGUX2YlPLz/GgFP6PgNNcJR0y9aa6m
  33. 4QwLFPjpv65LghVtFgsLckgl3DZXMaEtY6iIENEnrqFRzkGdT3XDYdSZfXORs6HT
  34. T+NEcWsFrJaPFrk2/0iLvlFKSV4IbasTLEXj/KDRLToMuxNZMeX+IJyhi9YI7Crg
  35. pDAxfJB+usBQVPl6Y4ZU/GmMT3biZ7q0NXjSt+9+C4Dt2bLYEpuuxbDVDUEvC+4D
  36. OaVP55IVHFk70Y0JwE+xbNnvQkDAs3zvTo/Ued4u9tIGC9akvNjoL0FBDXyJ7v6x
  37. owIDAQAB
  38. -----END PUBLIC KEY-----

5.7、pki-etcd

  1. 注意:/etc/kubernetes/pki/etcd目录下的ca.key和ca.crt不同于/etc/kubernetes/pki中的ca.crt和ca.key
  2. [root@master1 etcd]# cat ca.crt
  3. -----BEGIN CERTIFICATE-----
  4. MIICwjCCAaqgAwIBAgIBADANBgkqhkiG9w0BAQsFADASMRAwDgYDVQQDEwdldGNk
  5. LWNhMB4XDTIxMDUwMzA4NDkwNFoXDTMxMDUwMTA4NDkwNFowEjEQMA4GA1UEAxMH
  6. ZXRjZC1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMorsnk0NePN
  7. eBU4KEHAXicxWGjDvfP97YoXqwZPGKMnZnCgSY4srcehaatca5bUjoXQGRABtd7G
  8. 4QjS9ny2IdkZ3BX0PsWPCfTJb51GM5C9tkXqHG8O/bMvvEd88GPVpOKa3zS+JuSU
  9. h7JQUY7znuF+7HSDwkv3uCNXIzTlQB5MGyKrAD5suoX0Y893t4c+TMDtAFaFvoyF
  10. C/vQAFmS6LWgYoBDQWabeu2ZqoqWp1bZGFGjitUFQiTAOgtiFKyZNJtcVVglNten
  11. DxvpPib0R97nBTjRxCFwEtJGPbps++E4UhYVtj9b6jqKyq9jH/h13pJpopPs8NY5
  12. XY/wiAK3YSkCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMB
  13. Af8wDQYJKoZIhvcNAQELBQADggEBAJVzeFLomtXB62a3/JBZzjtlTFcAg/2xTxgR
  14. XsXBxoG8A51bqFcsGMdTjmllCAG3FcbxLzs+EdQ5QdIIHDvqGkCZ4JEUU+YyWLXb
  15. j2YZ7kO85uBbw0gY50C/vVx3tbDtt76bR/Q0cqHySlzh/JdNOm9ZY37sY+/u9OJE
  16. XQxYMw9nOGSWHrW1XtCErVXWq3d2QH0JzgCvj9aRt7nPOtzFBUX+fvkemsJ+8l9D
  17. MaF3zpJGdevk5H5a2rCr4oM/UFezwL9HmH+ibl70wDy11idIQkdAYu+8dBbeXGZP
  18. tyHqhBSJge9Oi6bgvNS6fQvOEAflgmRmPa+rMJFS5tyCRJFOCyc=
  19. -----END CERTIFICATE-----
  20. [root@master1 etcd]# cat ca.key
  21. -----BEGIN RSA PRIVATE KEY-----
  22. MIIEogIBAAKCAQEAyiuyeTQ14814FTgoQcBeJzFYaMO98/3tiherBk8YoydmcKBJ
  23. jiytx6Fpq1xrltSOhdAZEAG13sbhCNL2fLYh2RncFfQ+xY8J9MlvnUYzkL22Reoc
  24. bw79sy+8R3zwY9Wk4prfNL4m5JSHslBRjvOe4X7sdIPCS/e4I1cjNOVAHkwbIqsA
  25. Pmy6hfRjz3e3hz5MwO0AVoW+jIUL+9AAWZLotaBigENBZpt67ZmqipanVtkYUaOK
  26. 1QVCJMA6C2IUrJk0m1xVWCU216cPG+k+JvRH3ucFONHEIXAS0kY9umz74ThSFhW2
  27. P1vqOorKr2Mf+HXekmmik+zw1jldj/CIArdhKQIDAQABAoIBAAJ7uOx+NK9Apdn0
  28. 36G3IDDxDTn0NZAarWFF2ybvr8jJQhveDCk/6T6LgAXH09Z9c+a24KfurXI4FSmL
  29. ldWAUzgcdjSa1G6OzDuCgel3pEiB3AxNzN2cXIdn7bMfGMDRLf5OkrFOKKIkJOqO
  30. zAGqgmgYrATeXXObblqYxmju6/OzTAPbTV6Wn/DWNouoNQu/jWuIVvm70MTZcXKQ
  31. U2UF/ZnGsB0PKov6sXz2sKMb1yC4xXfOdG9uhWLgTmd+7ETRdFCc+Nuu2BcxmwBK
  32. OCDVTqfEKmv/Qx3tDzd8ILzU+gLKw1vYbbBTPZYW3i8aVhBhp6BnxOFx3//PJZS0
  33. L4ZAmFUCgYEA88h6fI59Fm6UnexymAkwpWbct7bTt4Vae1Sh5Gc98crOPr9HPzRA
  34. KK7GbZBDeIbBVO3FEiVBF1ZmnBJ/wMn2GKqlqBfVO/PQEhGr4nigN1cQpxwhqJ2C
  35. dK9XCUqLxP9VVIALOBT4O8vku42iw2JObEoSmqq7Lf6I2V9U+xgZcOMCgYEA1E1e
  36. PHy86KKdWIglp52qD/NsBxcEO5tK6GwaFm3qjMZ5xG42VIQCcnlIZMsbdJjKbQZl
  37. 3isrQtlCoYiFivvZIQHLhATwVI61iP+s2PYPiVoZufdDT5wKgu6rtyclUatCqhLE
  38. /wn5fk6z9vjhYcO4I7bh6VBO4ISkTs+wJg8mf4MCgYASzqSkd1mvIVjV1igBErRu
  39. DkF46uHqhp80ZJMYy947iSngLWGRvrY0bUdhrH+IDN1db/qEK9uZsVC5ObQha3NQ
  40. 89lT3oLU3TpwKmzYS/YQTuc5/TGbkIs/9UcBsH6X9BrhKf+zk+qSsmgzD/o+mJb0
  41. Q8KrrABEzB5CptgnhvRvgQKBgES8xA0rifJ8bBt1AVQSzTQa6VgmUJ2H+ynjjlLC
  42. xdVMkbJSyM52a2Bq+lCAHmSS7796+dKEAZ7EPzmTvUExp6xzK1SUUMff6NDxjyI0
  43. EPW0sW2vrCCDcjfQVNKZHxEhNRVhvFyi+x+1FbmZ/UctGlqd5OkoslEpQRWvUuYP
  44. s7RHAoGAUlDNQaVNYZZC+heKVqEOnMHDNl1/RdHCeGoMekZZklI9eIdpaGsQkf38
  45. Zbzl//1wgrQm7gW+ayRGWw4WJvnRTp2xY0aIdSjhnrUoWjCGsOMWhDCM4qiQuZdT
  46. I/+xBNP/ghdho6FiN1pFD71NnAOkDpPNZ4XkuNAOhhKEUC+WLt0=
  47. -----END RSA PRIVATE KEY-----
  48. [root@master1 etcd]# cat healthcheck-client.crt
  49. -----BEGIN CERTIFICATE-----
  50. MIIC+zCCAeOgAwIBAgIIXmaLbxJ/6EQwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
  51. AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDVaMEAxFzAV
  52. BgNVBAoTDnN5c3RlbTptYXN0ZXJzMSUwIwYDVQQDExxrdWJlLWV0Y2QtaGVhbHRo
  53. Y2hlY2stY2xpZW50MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtPxz
  54. xbho/LLxIv3s7LhO/knKOlc/UEt9J/0FlrWhzQRMTUwOXGHgKYXZfYx9Vc0XxUsn
  55. X4YHTNNuh5H6ps/QrVBkzbMRytGzcCmhr2Lymgim1QdSrD3s2A5Jhnhyv6ISXkf7
  56. I4Vx52Gh21698r9KJkzXQoeHCFABdqjFZmdN1HETnGQJx4tSFWI/1gy1d/af3XTv
  57. 8OIFHtgO21LwE3PO51uvrhBmkH6EcKn6mikd+qqvERoOz7IrZk54NVivA7ykPJMV
  58. bNuf/wxKJWv8MHRfJ0DAWkVNJxcqcS19gVoaenZMVh71wMK3VdQji7HJFayCe8ig
  59. 0MYspJj16P9xNpiPPwIDAQABoycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
  60. CgYIKwYBBQUHAwIwDQYJKoZIhvcNAQELBQADggEBABxcPcvwyE/mKtHt/sgt2veK
  61. GwBy3rvqSIv7yXeh6A4j8cZ5AwNJn483Va6qkuJnc7+eFfydPZgPmCWJ5JDy0LAA
  62. W8/Gy/Q9MJLW/QZ8TyeZ5Ny5jbwXGwF3a2H1hCmGDHm0bKOIg1kClTVFoxIn8RlM
  63. 0gkWGjxZ50/8613qwOIz2Lr1VxQr2TcjQCwN+j9ZP/7jeA304k8OnHH9uJnfHJ/3
  64. duRkIGsgzTHiJM6s7dVpG56ay3Tr8vFO0j/XzlgwK+m6qqNayuQOrgXa3eZB0YOu
  65. hsQpi0XBxs1/GIDCKYlVrmP3U2sWtSX/+CqZ8abv+/AIwQYWqFngM8IRLNDN2bo=
  66. -----END CERTIFICATE-----
  67. [root@master1 etcd]# cat healthcheck-client.key
  68. -----BEGIN RSA PRIVATE KEY-----
  69. MIIEpAIBAAKCAQEAtPxzxbho/LLxIv3s7LhO/knKOlc/UEt9J/0FlrWhzQRMTUwO
  70. XGHgKYXZfYx9Vc0XxUsnX4YHTNNuh5H6ps/QrVBkzbMRytGzcCmhr2Lymgim1QdS
  71. rD3s2A5Jhnhyv6ISXkf7I4Vx52Gh21698r9KJkzXQoeHCFABdqjFZmdN1HETnGQJ
  72. x4tSFWI/1gy1d/af3XTv8OIFHtgO21LwE3PO51uvrhBmkH6EcKn6mikd+qqvERoO
  73. z7IrZk54NVivA7ykPJMVbNuf/wxKJWv8MHRfJ0DAWkVNJxcqcS19gVoaenZMVh71
  74. wMK3VdQji7HJFayCe8ig0MYspJj16P9xNpiPPwIDAQABAoIBAAEFk9m/6sfScs4R
  75. xO6pM7j3za56o57ebjx1jzyElf9EUPH2xfX7j3psiQfObT64w7OXcwd1CEGEyBD3
  76. 4ARlE/aGh6spoaYVfP/bHFCTLG92MQru2aajSt0FZ6DcuTkfvx7NJTvUGwqFYJaO
  77. eGAQeGiy8lwry7VeTkPPPB4R4zyZzGX5UoxI3qd+ffQ7kDCPocDgWxio6TZyjaLZ
  78. Sj/felD9mth28IgToRJmVgH7ZZog81gxTv1pSofCcRtjT3zNH3n2kVR4zMFxXpfh
  79. DwKdCnzt1Jv0Hs/PFvJPxK9MDw49S6lhN7tgUbhoON7wqvC0CSQretelwDjZWVRI
  80. J1rtSgECgYEAz//rSUCAwYD2ynk/eqFRlhgloTCOquYipGCdojGbh5uj7AKrc8T8
  81. Y+9fe/rbFVDOtYh6GZJghl7p5KTU0Td6NvwMs+gT4e7m3/hiz8jsW19G+W8AxYVG
  82. y5OSm9fiyReZrw7DMVuz3fRcidivr+RBOcNvuUwI1PFjASJUNF/JGL8CgYEA3sCk
  83. rArnUg7IjcTrYX+W+5TNQDXxMSbho6DQOFIeuHEibdOsN0fwCPXwREiJ0wYwwAHy
  84. KrWDgFLUEDkMQVpd6QqG3rlGM4RMIr0wsBc8h0LpfbpUmw3RO00yaIkf5T6j8psp
  85. LokAQKl2t1adKjvQaGeodqbse0NrzObaiZVTqYECgYBxb+NEGfeekNUHa8Tg/mXe
  86. c+Dh3feQ4N33w/F0aZWnCY0GxBX5l28GmZ/7n74oC+AQRRRCKgCWh+ELn5GpYJY4
  87. spHC9EkTqRUlBPPu2md9FaNBmfZTwvHvSNZmRAEdJs/cFzMBEkAwRnrJevGl/dhM
  88. xneCGSOf7t3N2okN30dvRQKBgQC5248KpYZg50jbUUT8ctLtUzj2rIt0cXavaoyR
  89. kaNkTbFmZck5zuIu99Xjg4rL8kxWyMjgbdctCO88If1hwh69RTVHPNugPHCyQ50O
  90. MDUmvuPHLeNOBHdhvYWjx1Y/lsaAtInl9BWr3jnZu4EjLgk0M9lSNvD14ElgC/an
  91. +Vp3AQKBgQCHsT/bi7f8efrRhwVXE3Vs1zULTesr9QRWz5EOd/l3AhawYt4ntrKG
  92. 7XYORKtvIHsN8/1Xxd8euG0fec0kKmnFLT/LRShG1AfcH87hw6mLfvVFuBhmpaPb
  93. zr71f6PJ2oMFTXNutrVY1dg6Su2fQjF01eXYwTfrVtL3ShI6EmMN9g==
  94. -----END RSA PRIVATE KEY-----
  95. [root@master1 etcd]# cat peer.crt
  96. -----BEGIN CERTIFICATE-----
  97. MIIDFDCCAfygAwIBAgIICCVcX9SOfGUwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
  98. AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDRaMBIxEDAO
  99. BgNVBAMTB21hc3RlcjEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDV
  100. LT+aoq/aHNPtcg5bRo1yewrTOSP8Y/q24E9bl+eNYx4wI/PmyQhPxoi3o61R598p
  101. WOTDt6fkOLhSR6o+4ZDhDege+rHfeZ4afwCXkOO5hAanN7YedqJljiahuv5fPPGd
  102. be1tRwRLXK8KcHtYI06wP97QFjoALshixKCJK53vjH9+KcTDqJDNt20FTSuUZB5z
  103. QCvK2Hy8+CtwaxmJqDbjq1CX1q75HnvCENHQ0StZKJRhzu0S6fT2xxwo58YMcVUM
  104. tSkVoW8if++BE2h8GB/UcuQFisc6f3Ps/fvrLrOUhWww32xeGFFGDtttyZ6qt5g8
  105. XVImUUhH7iU2aI0rr2TjAgMBAAGjbjBsMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUE
  106. FjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwOwYDVR0RBDQwMoIHbWFzdGVyMYIJbG9j
  107. YWxob3N0hwTAqEqAhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEB
  108. CwUAA4IBAQCVo26/7HNrnQ3kEb8vr6ME4nHkswLJrIwqe0bemNePqBpZFdeQIImS
  109. M/kWPDt9rgdhgAuABUz8UbIxxz6zOwfpMEAjMqu/EVcnxD4a26Bpma7S7BUaiVXH
  110. GORLMJn6y54dI+Fs5gvDIKk69ueuHmd8nX/z2cMRgNToTVCGaRHqInaV75vZXaB+
  111. g4hLl3HrCnU+toqcmJ0ENy/k6+HDZ6jl1FH8mrj7Xu8uwvHE39tkUodPG+y+AWBa
  112. bEcoME1aMcld/I55jMq4gJuEzH2rUdCJw7u9+Lv4Jx3OlhZeoNXSH/uNjMxR/cv5
  113. VMw8Ekta5MOQ9MtgaoTS8p26th0bFTsC
  114. -----END CERTIFICATE-----
  115. [root@master1 etcd]# cat peer.key
  116. -----BEGIN RSA PRIVATE KEY-----
  117. MIIEowIBAAKCAQEA1S0/mqKv2hzT7XIOW0aNcnsK0zkj/GP6tuBPW5fnjWMeMCPz
  118. 5skIT8aIt6OtUeffKVjkw7en5Di4UkeqPuGQ4Q3oHvqx33meGn8Al5DjuYQGpze2
  119. HnaiZY4mobr+XzzxnW3tbUcES1yvCnB7WCNOsD/e0BY6AC7IYsSgiSud74x/finE
  120. w6iQzbdtBU0rlGQec0Aryth8vPgrcGsZiag246tQl9au+R57whDR0NErWSiUYc7t
  121. Eun09sccKOfGDHFVDLUpFaFvIn/vgRNofBgf1HLkBYrHOn9z7P376y6zlIVsMN9s
  122. XhhRRg7bbcmeqreYPF1SJlFIR+4lNmiNK69k4wIDAQABAoIBAEuZ0na6v3awxo/s
  123. 5R6FtOAmtr4WA6ccpet5PWuUQbAouKoF9hegr+vq0s2dpHfprYDyX57xYP9VBjlX
  124. 5Q6L3F+UGP/zlGVWsjVfWQxne/ts0Rc4cMP4+rrdYOH2eQO5j05vj8Yza1h2tDUV
  125. kwi87MkgvZo6Z7Ns4+/zH6PF7irnmP1hwkgwPZQ8aGys/SjDMGYo/r6SS872I4go
  126. /I80AuE8OIxvkSt0M2McvtJxoy1BdMY5FTheJkTVQg1lQJsFVicLv11Qql02I38u
  127. eI/+jjW/VDkJlp2QxVaAh0ZKyEJQNxBHbmEiZuomPovtZGvAPZoU1RkSw6UN5R0O
  128. FpcIF9ECgYEA4e+q0D2+OJFsbvxqBpoDPBYs6Onjboz7sUBVAHVe2NvpB4Nw5nsd
  129. tJwx5kGDeo0ABlACVzslmZlGUICnEk4KGiwDkuy8DKmHhzBV0M8Mi9VuhHXN+4aV
  130. 2b8Y8+wIP3XQR0WWX+gHfQsvwjmuYefkaCzJ/hIxTOavlsyRD2+40/8CgYEA8Yrw
  131. 5K9NY9bv8Fe8YATLXCtoBOprFURXeVjxzuIZGcU9sd1jwsaB+CBLxD8tuwf7gVF2
  132. /IbF5FTOjGePnDxYGujHi4ev1eJfvPVoin9YNw1XUCi7R49DS3SQwBtTJQBeSldP
  133. fvZPeqz4KO4vzwWFkHqFnQSYj66BZATehbfsnx0CgYABJB+9u4IZcQqWKOo0LFT1
  134. 2brSVlQSu92NkKCdRvp6p+muYwiP8XE990f9PLl4RfwJDCBm5mKTOwXy5CNz4TcF
  135. 2NEPzehJPBX2JdVZH6KVljdfreSjb5OULPXoTXnhMCwkIALZayeWhxbvqTDrR6uM
  136. pyVCBj9/fu7GGTRmWo8ZawKBgQC8vjBs0lsr+Am4Cibl9Pkfxb9bj/4rOSMNbKZP
  137. Xjf0/j6uXOwWiF2JEVuDN0c5zgwGyiyrOXkraeWYq1f54uGJ7Xn4GwgYnvLmyfFt
  138. wAKjyiX/OkTVryoLrUNrCi8XS8liWAWDlV8X4k9sVGtBXvQ2qLb9sliwddEf4fos
  139. DUO2NQKBgDI424TXg6VmGOVBcYyxc2t+Q155u675aC40CDIU/Olev8oIgcv9VSie
  140. 01BZNDLPoY+N5DdQxEf7Z6BtBeOrwax0KIu0YPsunef6q4aqUxRrkpZhwLoZOnFX
  141. WcVxMBYbCgi305LRq8xYffkp2CbkRY8SPRJ2+xR4JMxQSZSgJICA
  142. -----END RSA PRIVATE KEY-----
  143. [root@master1 etcd]# cat server.crt
  144. -----BEGIN CERTIFICATE-----
  145. MIIDFDCCAfygAwIBAgIISEsLZ8pQBn8wDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
  146. AxMHZXRjZC1jYTAeFw0yMTA1MDMwODQ5MDRaFw0yMjA1MDMwODQ5MDRaMBIxEDAO
  147. BgNVBAMTB21hc3RlcjEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDI
  148. cZwVY1owRrx3Gw43HcxSj8eULupWx1U99+eS54BcFoyY57rwCyL3jbPXDmv1dili
  149. RnLPeAJmbRgtY2q2/2BHAp3c8Uz2y+OK2pSdo5giWm/Xio+++S3I7hanhg68HzDx
  150. o/oizCSZ40DPyRE9pMkgsIRmxLQFnLENuby5w8/cTyM3XdfsZl7mPnRvlnEqvSKS
  151. U0EH722VV+nzagath1xcW+q6YW34g8PRkP1x/U5h1vUch+ZcmarR5oP8NBYaZ+P5
  152. vUp54m8MYUMzIae2BA8LaHrTR3YGDaJiQvUq1tWkmc3Lu8Yn53ciYIKjZPmt4gLa
  153. D4z/g9h6Glhf4bC7ufudAgMBAAGjbjBsMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUE
  154. FjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwOwYDVR0RBDQwMoIHbWFzdGVyMYIJbG9j
  155. YWxob3N0hwTAqEqAhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEB
  156. CwUAA4IBAQBV723E89RJpLH3QLNgiNU3AEXW+qRJ+1wMOdF33X9ExEhFNPqofjSx
  157. h+timy22AJ9xFKSKIXL/ZEhSff4u3yg0xpEtpmx/lPquN4w27d16H2qPk72Tbl9/
  158. gwoKyl0qDYGkhOsnf4L3oFVg3c8xynn2VTZl4p5VnRJwPETJoKVME7YRMz2xNvvs
  159. gwkqm/u3ktH0hrdS3jUtzkgZLBD7iSFHrMUV/jetLKliTBIP5v1ZStQPprgz55Db
  160. mE107Q2npubxuHF0cPLcPWd5K0igMSgzujZIf9Pe1bO9blxjhbDQHmfvzEpHygcj
  161. zXMi2XoIsawXlcaMMQNbCqLbeIeKDzON
  162. -----END CERTIFICATE-----
  163. [root@master1 etcd]# cat server.key
  164. -----BEGIN RSA PRIVATE KEY-----
  165. MIIEowIBAAKCAQEAyHGcFWNaMEa8dxsONx3MUo/HlC7qVsdVPffnkueAXBaMmOe6
  166. 8Asi942z1w5r9XYpYkZyz3gCZm0YLWNqtv9gRwKd3PFM9svjitqUnaOYIlpv14qP
  167. vvktyO4Wp4YOvB8w8aP6IswkmeNAz8kRPaTJILCEZsS0BZyxDbm8ucPP3E8jN13X
  168. 7GZe5j50b5ZxKr0iklNBB+9tlVfp82oGrYdcXFvqumFt+IPD0ZD9cf1OYdb1HIfm
  169. XJmq0eaD/DQWGmfj+b1KeeJvDGFDMyGntgQPC2h600d2Bg2iYkL1KtbVpJnNy7vG
  170. J+d3ImCCo2T5reIC2g+M/4PYehpYX+Gwu7n7nQIDAQABAoIBAAEph3ooRVGaV2Vp
  171. Zr+zEIg6BTI6w2kVZs0hLtqPNRNTniUU0uSpa957l9tbXgziToMfXXMOgxUM9OLu
  172. fKPq/yfqP/gT/hpAPGWFtu7jD/LDC3r4drToxPcxSjhWcqdsluAPz1d8T4oE409R
  173. HyR4XCIwY9Qkt9aAfhZSSWHaXM4uNKju2fMIr8dEf6F7iazqU+ziYwpLC5QzMWd5
  174. BCIvT95eSNzaBx5kunFFRUSEGh1e2lrWLEP6jD+xULUI6hWKSLQBV9EPwVzwax9f
  175. TAS3/U0VAcpDSat3LXBheCSgd6btXv1BOTlGNAV9vEWjisKdSF1GsUpo3ex5OxxU
  176. EHgE7wECgYEA7J5hbe24MbJPF4yJsjC8lSkqgQGk85l9JCeu8LuG7DXEuqIr6IlW
  177. rwEveTvY42viYTf5LqikKaQ5dlRKdBi2PL7UDTnf2A20ZOgUMoF7f376GsCsuep+
  178. MOKcdB7Bft9lOdf6Lo2P+66yPrDDF+ylQGdIY/2kW7F+O5TYMjh0LzECgYEA2Nyv
  179. BQT9H8KRyglWEjVIvnJhY/IF2M7fFNeS7hAEcban39WWjTwUQ/j+y8QKZJRK/otG
  180. kIbyqBcQfUBrkuEb7nIfQQUDprIFwMrFHESyUNkGClU/KLyseeFlej7fQZ7KMSwz
  181. y9CfXgQDTyX1Jl2A0OSXntVNvz+/bJWyub2csC0CgYEArdNYPdqiMxgL1H/w9A+r
  182. qmR4jhc4J6C9dx8T/FO3NbX2VSkn2odyP9Q+HPDjT4cE4mitTSKknta/Q/d+TrWM
  183. wylpPGIk2GKRAIQhuky2/h24/IhJG7dxhtYjG4cwnNTeV1UbvLFQchOPbFCMsfmu
  184. GJcHbjV6VcYZtwmMnbAtYjECgYA7gKHNIMdLNZnG87TYHiKtjrjGMZwFFw4Cq/u2
  185. slJl2RZKxlIewoNU+zb+NfYcDsxc914PPdfK4zk1BL3/eSCu1kVZE8Uisen+MiTP
  186. UtISeNm9cBJ6XPp+Hqg3WJTtbmJQB67Wl5GCvFskFmgjdLhpmK85d5Fzjkw5wQFf
  187. EXWyqQKBgHHgW02OikysXXwtW9rb+xAzP7gfI2jFSsqV5DeLnjhFd952JIn6WJa5
  188. Ercujh9UAR4Gy7lBWWYsp/2hR1OX5hk4WsPmH7/iGG0zdvYKDEnm8Nr/4wPd7YU0
  189. TYQ0ayrqvptmyZzo5687Gie9TW0NLi3pRzcZ8o3mPcYvluGgBq8z
  190. -----END RSA PRIVATE KEY-----
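The certificates dumped in 5.3-5.7 were issued by kubeadm with a one-year lifetime (the two CAs get ten years, as the dates in the dumps show), so it is worth knowing how to check when they run out. A sketch; note that in kubeadm 1.18 the summary command still lives under the alpha subcommand:

    # Expiry of a single certificate.
    openssl x509 -noout -subject -enddate -in /etc/kubernetes/pki/apiserver.crt
    # Summary of every kubeadm-managed certificate on this node.
    kubeadm alpha certs check-expiration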

高可用kubeadm安装集群推荐博客:https://www.cnblogs.com/lfl17718347843/p/13417304.html

六、高可用安装

6.1、准备工作

主机初始化参考:1.1和1.2 ,按照2.1和2.2安装docker,kubelet,kubeadm,kubectl

另外增加如下配置:

  1. 1、开机加载lvs模块
  2. [root@master1 ~]# vi /etc/sysconfig/modules/ipvs.modules
  3. #!/bin/sh
  4. modprobe -- ip_vs
  5. modprobe -- ip_vs_rr
  6. modprobe -- ip_vs_wrr
  7. modprobe -- ip_vs_sh
  8. modprobe -- nf_conntrack_ipv4
  9. [root@master1 ~]# cat /etc/hosts
  10. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  11. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  12. 192.168.56.101 master1 www.mt.com
  13. 192.168.56.102 master2
  14. 192.168.56.103 master3
  15. 2、Install keepalived and haproxy (on every master node)
  16. yum install keepalived haproxy -y
  17. cat > /etc/keepalived/keepalived.conf <<EOF
  18. ! Configuration File for keepalived
  19. global_defs {
  20. router_id k8s
  21. }
  22. vrrp_script check_haproxy {
  23. script "killall -0 haproxy"
  24. interval 3
  25. weight -2
  26. fall 10
  27. rise 2
  28. }
  29. vrrp_instance VI_1 {
  30. state MASTER #MASTER on the first master node, BACKUP on the others
  31. interface enp0s8 #replace with this node's actual NIC name
  32. virtual_router_id 51
  33. priority 250 #set a different (lower) priority on each BACKUP node
  34. advert_int 1
  35. authentication {
  36. auth_type PASS
  37. auth_pass ceb1b3ec013d66163d6ab
  38. }
  39. virtual_ipaddress {
  40. 192.168.56.211 #地址注意替换
  41. }
  42. track_script {
  43. check_haproxy
  44. }
  45. }
  46. EOF
  47. systemctl enable keepalived ; systemctl start keepalived; systemctl status keepalived
  48. Try stopping the keepalived service on master1 and check that the VIP fails over to another master, then start keepalived on master1 again and confirm the VIP moves back; that proves the configuration is correct (see the verification sketch after this listing).
  49. 3、安装haproxy
  50. cat > /etc/haproxy/haproxy.cfg << EOF
  51. #---------------------------------------------------------------------
  52. # Global settings
  53. #---------------------------------------------------------------------
  54. global
  55. # to have these messages end up in /var/log/haproxy.log you will
  56. # need to:
  57. # 1) configure syslog to accept network log events. This is done
  58. # by adding the '-r' option to the SYSLOGD_OPTIONS in
  59. # /etc/sysconfig/syslog
  60. # 2) configure local2 events to go to the /var/log/haproxy.log
  61. # file. A line like the following can be added to
  62. # /etc/sysconfig/syslog
  63. #
  64. # local2.* /var/log/haproxy.log
  65. #
  66. log 127.0.0.1 local2
  67. chroot /var/lib/haproxy
  68. pidfile /var/run/haproxy.pid
  69. maxconn 4000
  70. user haproxy
  71. group haproxy
  72. daemon
  73. # turn on stats unix socket
  74. stats socket /var/lib/haproxy/stats
  75. #---------------------------------------------------------------------
  76. # common defaults that all the 'listen' and 'backend' sections will
  77. # use if not designated in their block
  78. #---------------------------------------------------------------------
  79. defaults
  80. mode http
  81. log global
  82. option httplog
  83. option dontlognull
  84. option http-server-close
  85. option forwardfor except 127.0.0.0/8
  86. option redispatch
  87. retries 3
  88. timeout http-request 10s
  89. timeout queue 1m
  90. timeout connect 10s
  91. timeout client 1m
  92. timeout server 1m
  93. timeout http-keep-alive 10s
  94. timeout check 10s
  95. maxconn 3000
  96. #---------------------------------------------------------------------
  97. # kubernetes apiserver frontend which proxys to the backends
  98. #---------------------------------------------------------------------
  99. frontend kubernetes-apiserver
  100. mode tcp
  101. bind *:16443
  102. option tcplog
  103. default_backend kubernetes-apiserver
  104. #---------------------------------------------------------------------
  105. # round robin balancing between the various backends
  106. #---------------------------------------------------------------------
  107. backend kubernetes-apiserver
  108. mode tcp
  109. balance roundrobin
  110. server master1 192.168.56.101:6443 check
  111. server master2 192.168.56.102:6443 check
  112. server master3 192.168.56.103:6443 check
  113. #---------------------------------------------------------------------
  114. # collection haproxy statistics message
  115. #---------------------------------------------------------------------
  116. listen stats
  117. bind *:1080
  118. stats auth admin:awesomePassword
  119. stats refresh 5s
  120. stats realm HAProxy\ Statistics
  121. stats uri /admin?stats
  122. EOF
  123. systemctl start haproxy;systemctl enable haproxy;systemctl status haproxy #三个节点配置文件一样
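Before running kubeadm init it is worth confirming that the pieces above are actually in place on every master. A quick verification sketch (the VIP 192.168.56.211 and port 16443 come from the keepalived and haproxy configs above):

    # Load the ipvs modules defined in /etc/sysconfig/modules/ipvs.modules.
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
    lsmod | grep -e ip_vs -e nf_conntrack

    # The VIP should be bound on exactly one node at a time (the current keepalived MASTER).
    ip addr | grep 192.168.56.211

    # haproxy should be listening on 16443 on every master.
    ss -lntp | grep 16443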

6.2、初始化

  1. 1、master1上进行初始化
  2. [root@master1 ~]# kubeadm config print init-defaults > kubeadm-config.yaml #然后修改kubeadm-config.yaml
  3. [root@master1 ~]# cat kubeadm-config.yaml
  6. apiVersion: kubeadm.k8s.io/v1beta2
  7. bootstrapTokens:
  8. - groups:
  9. - system:bootstrappers:kubeadm:default-node-token
  10. token: abcdef.0123456789abcdef
  11. ttl: 24h0m0s
  12. usages:
  13. - signing
  14. - authentication
  15. kind: InitConfiguration
  16. localAPIEndpoint:
  17. advertiseAddress: 192.168.56.101
  18. bindPort: 6443
  19. nodeRegistration:
  20. criSocket: /var/run/dockershim.sock
  21. name: master1
  22. ---
  23. apiServer:
  24. certSANs:
  25. - master1
  26. - master2
  27. - master3
  28. - reg.mt.com
  29. - 192.168.56.101
  30. - 192.168.56.102
  31. - 192.168.56.103
  32. - 192.168.56.211
  33. - 127.0.0.1
  34. - www.mt.com
  35. extraArgs:
  36. authorization-mode: Node,RBAC
  37. timeoutForControlPlane: 4m0s
  38. apiVersion: kubeadm.k8s.io/v1beta2
  39. certificatesDir: /etc/kubernetes/pki
  40. clusterName: kubernetes
  41. controllerManager: {}
  42. controlPlaneEndpoint: "192.168.56.211:16443"
  43. dns:
  44. type: CoreDNS
  45. etcd:
  46. local:
  47. dataDir: /var/lib/etcd
  48. imageRepository: registry.aliyuncs.com/google_containers
  49. kind: ClusterConfiguration
  50. kubernetesVersion: v1.18.0
  51. networking:
  52. dnsDomain: cluster.local
  53. podSubnet: 10.244.0.0/16
  54. serviceSubnet: 10.96.0.0/12
  55. scheduler: {}
  56. ---
  57. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  58. kind: KubeProxyConfiguration
  59. mode: "ipvs"

[root@master1 ~]# kubeadm reset    # run this to clean up and start over if a previous init attempt failed

  1. [root@master1 ~]# kubeadm config images pull --config ./kubeadm-config.yaml #images can be pulled in advance
  2. [root@master1 ~]# kubeadm init --config kubeadm-config.yaml
  3. W0507 18:29:39.557236 29338 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  4. [init] Using Kubernetes version: v1.18.0
  5. [preflight] Running pre-flight checks
  6. [preflight] Pulling images required for setting up a Kubernetes cluster
  7. [preflight] This might take a minute or two, depending on the speed of your internet connection
  8. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  9. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  10. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  11. [kubelet-start] Starting the kubelet
  12. [certs] Using certificateDir folder "/etc/kubernetes/pki"
  13. [certs] Generating "ca" certificate and key
  14. [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3 reg.mt.com www.mt.com] and IPs [10.96.0.1 192.168.56.101 192.168.56.211 192.168.56.101 192.168.56.102 192.168.56.103 192.168.56.211 127.0.0.1]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  W0507 18:29:42.986846 29338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  W0507 18:29:42.991544 29338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  W0507 18:29:42.992043 29338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 15.055860 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: abcdef.0123456789abcdef
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  You can now join any number of control-plane nodes by copying certificate authorities
  and service account keys on each node and then running the following as root:
    kubeadm join 192.168.56.211:16443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d364 \
      --control-plane
  Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join 192.168.56.211:16443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d364
  # Install the pod network (flannel)
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  # Verify component status with "kubectl get cs" and confirm the cluster pods are scheduled normally with "kubectl get pods --all-namespaces"
  [root@master1 ~]# kubectl get cs
  NAME                 STATUS    MESSAGE   ERROR
  scheduler            Healthy   ok
  controller-manager   Healthy   ok
  etcd-0               Healthy   {"health":"true"}
  [root@master1 ~]# kubectl get pods -A
  NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
  kube-system   coredns-7ff77c879f-h6ptq          1/1     Running   0          10m
  kube-system   coredns-7ff77c879f-pw97s          1/1     Running   0          10m
  kube-system   etcd-master1                      1/1     Running   0          11m
  kube-system   kube-apiserver-master1            1/1     Running   0          11m
  kube-system   kube-controller-manager-master1   1/1     Running   0          11m
  kube-system   kube-flannel-ds-m8s9l             1/1     Running   0          80s
  kube-system   kube-proxy-wfrvj                  1/1     Running   0          10m
  kube-system   kube-scheduler-master1            1/1     Running   0          11m
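
Once the flannel DaemonSet is running, the master should flip from NotReady to Ready. A quick sanity check (a sketch; pod and node names will differ in your environment):

  [root@master1 ~]# kubectl get nodes -o wide                       # master1 should report Ready once the CNI is in place
  [root@master1 ~]# kubectl -n kube-system get ds kube-flannel-ds   # DESIRED/READY should match the node count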

6.3、join

  # 1. Copy certificates to the other control-plane nodes
  [root@master1 pki]# ssh root@master2 mkdir /etc/kubernetes/pki/etcd -p
  [root@master1 ~]# scp /etc/kubernetes/admin.conf root@master2:/etc/kubernetes/
  admin.conf                              100% 5455     5.6MB/s   00:00
  [root@master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master2:/etc/kubernetes/pki
  ca.crt                                  100% 1025   868.7KB/s   00:00
  ca.key                                  100% 1679     2.1MB/s   00:00
  sa.key                                  100% 1675     1.3MB/s   00:00
  sa.pub                                  100%  451   518.4KB/s   00:00
  front-proxy-ca.crt                      100% 1038     1.7MB/s   00:00
  front-proxy-ca.key                      100% 1679     3.1MB/s   00:00
  [root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@master2:/etc/kubernetes/pki/etcd/
  ca.crt                                  100% 1017     1.1MB/s   00:00
  ca.key                                  100% 1675     2.8MB/s   00:00
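  # The same files are needed on every control-plane node you plan to join (master3 included).
  # A minimal loop sketch -- adjust the host list to your environment:
  for host in master2 master3; do
    ssh root@${host} "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes/
    scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@${host}:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd/
  done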
  # 2. Join master2
  [root@master1 ~]# kubeadm token create --print-join-command   # run on master1 to print the join command
  W0508 10:05:18.251587 12237 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89 --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d3
  # Add --control-plane so the node joins as a control-plane (master) node
  [root@master2 ~]# kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89 --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d3 --control-plane   # note: --apiserver-advertise-address is missing here; my hosts have two NICs (one NAT, one bridged) and cluster traffic runs on the 192.168.56.0/24 network, which causes the problem shown below
  [preflight] Running pre-flight checks
  [preflight] Reading configuration from the cluster...
  [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  [preflight] Running pre-flight checks before initializing the new control plane instance
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3 reg.mt.com www.mt.com] and IPs [10.96.0.1 10.0.2.15 192.168.56.211 192.168.56.101 192.168.56.102 192.168.56.103 192.168.56.211 127.0.0.1]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
  [certs] Using the existing "sa" key
  [kubeconfig] Generating kubeconfig files
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  W0508 10:07:34.128720 30961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  W0508 10:07:34.132698 30961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  W0508 10:07:34.133621 30961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [check-etcd] Checking that the etcd cluster is healthy
  [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Starting the kubelet
  [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  [etcd] Announced new etcd member joining to the existing etcd cluster
  [etcd] Creating static Pod manifest for "etcd"
  [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
  {"level":"warn","ts":"2021-05-08T10:07:48.851+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.0.2.15:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [mark-control-plane] Marking the node master2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  This node has joined the cluster and a new control plane instance was created:
  * Certificate signing request was sent to apiserver and approval was received.
  * The Kubelet was informed of the new secure connection details.
  * Control plane (master) label and taint were applied to the new node.
  * The Kubernetes control plane instances scaled up.
  * A new etcd member was added to the local/stacked etcd cluster.
  To start administering your cluster from this node, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  Run 'kubectl get nodes' to see this node join the cluster.
  # 3. Join master3
  [root@master3 ~]# kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89 --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d3 --control-plane --apiserver-advertise-address 192.168.56.103
  ...
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  W0508 10:10:19.785375 30434 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [check-etcd] Checking that the etcd cluster is healthy
  error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://10.0.2.15:2379 with maintenance client: context deadline exceeded
  To see the stack trace of this error execute with --v=5 or higher
  # The join fails here: the pre-flight check reports the etcd cluster as unhealthy.
  [root@master1 ~]# kubectl describe configmaps kubeadm-config -n kube-system   # checked on master1
  Name:         kubeadm-config
  Namespace:    kube-system
  Labels:       <none>
  Annotations:  <none>
  Data
  ====
  ClusterConfiguration:
  ----
  apiServer:
    certSANs:
    - master1
    - master2
    - master3
    - reg.mt.com
    - 192.168.56.101
    - 192.168.56.102
    - 192.168.56.103
    - 192.168.56.211
    - 127.0.0.1
    - www.mt.com
    extraArgs:
      authorization-mode: Node,RBAC
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta2
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controlPlaneEndpoint: 192.168.56.211:16443
  controllerManager: {}
  dns:
    type: CoreDNS
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.aliyuncs.com/google_containers
  kind: ClusterConfiguration
  kubernetesVersion: v1.18.0
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.244.0.0/16
    serviceSubnet: 10.96.0.0/12
  scheduler: {}
  ClusterStatus:
  ----
  apiEndpoints:
    master1:
      advertiseAddress: 192.168.56.101   # master1 and master2 are registered here, but master3 is not
      bindPort: 6443
    master2:
      advertiseAddress: 10.0.2.15
      bindPort: 6443
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterStatus
  Events:  <none>
  [root@master1 ~]# kubectl exec -it etcd-master1 -n kube-system /bin/sh
  # export ETCDCTL_API=3
  # alias etcdctl='etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
  # etcdctl member list
  181ffe2d394f445b, started, master1, https://192.168.56.101:2380, https://192.168.56.101:2379, false
  dfa99d8cfc0ee00f, started, master2, https://10.0.2.15:2380, https://10.0.2.15:2379, false
  The problem: master2's etcd member should have registered 192.168.56.102 (master1-3 all have multiple NICs, and etcd registered the wrong one). Fix:
  1. On master1: kubectl delete node master2
  2. On master2: re-run the join with "--apiserver-advertise-address 192.168.56.102" added, deleting the leftover files the pre-flight errors complain about (see the cleanup sketch after this listing)
  3. Verify the etcd cluster is healthy
  # export ETCDCTL_API=3
  # alias etcdctl='etcdctl --endpoints=https://192.168.56.101:2379,https://192.168.56.102:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
  # etcdctl member list
  181ffe2d394f445b, started, master1, https://192.168.56.101:2380, https://192.168.56.101:2379, false
  cc36adc51c559f97, started, master2, https://192.168.56.102:2380, https://192.168.56.102:2379, false
  # etcdctl endpoint health
  https://192.168.56.101:2379 is healthy: successfully committed proposal: took = 9.486454ms
  https://192.168.56.102:2379 is healthy: successfully committed proposal: took = 9.691437ms
  # etcdctl endpoint status
  https://192.168.56.101:2379, 181ffe2d394f445b, 3.4.3, 2.6 MB, true, false, 15, 18882, 18882,
  https://192.168.56.102:2379, cc36adc51c559f97, 3.4.3, 2.6 MB, false, false, 15, 18882, 18882,

Note: master2 and master3 are joined the same way; only the control-plane (master) joins are shown here.
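
If a control-plane join half-completed with the wrong address, the node usually has to be wiped and the stale etcd member removed before re-joining. A minimal cleanup sketch (the member ID dfa99d8cfc0ee00f comes from the `etcdctl member list` output above; substitute your own):

  # on the node being re-joined (e.g. master2): wipe the half-initialized state
  kubeadm reset -f

  # on a healthy master, inside the etcd-master1 pod with the etcdctl alias from above:
  etcdctl member remove dfa99d8cfc0ee00f

  # then re-copy the certificates and join again with the correct address
  kubeadm join 192.168.56.211:16443 --token mb0joe.3dh8xri1xxdf2c89 \
    --discovery-token-ca-cert-hash sha256:62d2014ce2211235ac0bcd8195d76f9ac6e15ce27bbb229f8776dea9f2e5d3 \
    --control-plane --apiserver-advertise-address 192.168.56.102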

  • Scaling the cluster down later:
    • on a master: kubectl drain $nodename --delete-local-data --force --ignore-daemonsets; kubectl delete node $nodename
    • on the node being removed: kubeadm reset
  • Scaling the cluster up later (see the example below):
    • kubeadm token create --print-join-command
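
A worked example of both operations, assuming a hypothetical worker named node01:

  # remove node01 (run the first two commands on a master, the last one on node01)
  [root@master1 ~]# kubectl drain node01 --delete-local-data --force --ignore-daemonsets
  [root@master1 ~]# kubectl delete node node01
  [root@node01 ~]# kubeadm reset

  # add a new worker: print a fresh join command on a master, then run its output on the new node
  [root@master1 ~]# kubeadm token create --print-join-command
  kubeadm join 192.168.56.211:16443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>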

6.4、Verification

  # 1. Create and test a deployment
  [root@master1 ~]# kubectl create deployment nginx --image=nginx
  [root@master1 ~]# kubectl scale deployment nginx --replicas=3
  [root@master1 ~]# kubectl get pods -o wide
  NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
  nginx-f89759699-5p4dr   1/1     Running   0          24m   10.244.2.2   master3   <none>           <none>
  nginx-f89759699-czvpd   1/1     Running   0          20m   10.244.0.8   master1   <none>           <none>
  nginx-f89759699-dc5p9   1/1     Running   0          23m   10.244.1.4   master2   <none>           <none>
  # Problem: pinging the nginx pods on master2/master3 from master1 fails. The cause is that flannel was never told which NIC to use for inter-node traffic.
  [root@master1 ~]# ps -ef |grep flannel
  root      6715  6685  0 14:38 ?   00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
  [root@master1 ~]# kubectl edit ds/kube-flannel-ds -n kube-system
  ...
        containers:
        - args:
          - --ip-masq
          - --kube-subnet-mgr
          - --iface=enp0s8
          command:
          - /opt/bin/flanneld
  ...
  # After the change the pods can be pinged across nodes.
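  # Alternative (a sketch, not part of the original walkthrough): append the same arg non-interactively.
  # The NIC name enp0s8 is specific to this lab's bridged adapter -- adjust it to your host.
  kubectl -n kube-system patch ds kube-flannel-ds --type=json \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--iface=enp0s8"}]'
  kubectl -n kube-system rollout status ds/kube-flannel-ds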
  # 2. Test a Service
  [root@master1 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
  [root@master1 ~]# kubectl get svc/nginx
  NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
  nginx   NodePort   10.106.130.76   <none>        80:31254/TCP   38
  [root@master1 ~]# curl 10.106.130.76:80 -Is
  HTTP/1.1 200 OK
  Server: nginx/1.19.10
  Date: Sat, 08 May 2021 07:18:40 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
  Connection: keep-alive
  ETag: "6075b537-264"
  Accept-Ranges: bytes
  [root@master1 ~]# ipvsadm -Ln -t 10.106.130.76:80   # the output is the same on the other nodes
  Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  TCP  10.106.130.76:80 rr
    -> 10.244.0.8:80                Masq    1      0          1
    -> 10.244.1.4:80                Masq    1      0          1
    -> 10.244.2.2:80                Masq    1      0          1
  # 3. Test DNS
  [root@master1 ~]# kubectl get svc/kube-dns -n kube-system
  NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
  kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,9153/TCP   21h
  [root@master1 ~]# echo "nameserver 10.96.0.10" >> /etc/resolv.conf
  [root@master1 ~]# ping nginx.default.svc.cluster.local -c1
  PING nginx.default.svc.cluster.local (10.106.130.76) 56(84) bytes of data.
  64 bytes from nginx.default.svc.cluster.local (10.106.130.76): icmp_seq=1 ttl=64 time=0.019 ms
  --- nginx.default.svc.cluster.local ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
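
Appending the cluster DNS IP to the node's /etc/resolv.conf is fine for a quick check, but a cleaner test that does not touch the host is to resolve the Service name from inside a throwaway pod. A sketch (assumes busybox:1.28 is pullable in your environment; a successful lookup should return the ClusterIP 10.106.130.76):

  [root@master1 ~]# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup nginx.default.svc.cluster.local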


七、Problem log

Problem 1: after a pod behind a svc is deleted, the IPVS backend IP does not change

  1. > 修改前,kube-dns这个svc,ipvs对饮过的是10.244.0.5(这个ip也不对)
  2. [root@master1 ~]# kubectl get svc/kube-dns -n kube-system
  3. NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  4. kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4d17h
  5. [root@master1 ~]# ipvsadm -Ln
  6. IP Virtual Server version 1.2.1 (size=4096)
  7. Prot LocalAddress:Port Scheduler Flags
  8. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  9. TCP 10.96.0.1:443 rr
  10. -> 192.168.56.101:6443 Masq 1 1 0
  11. TCP 10.96.0.10:53 rr
  12. -> 10.244.0.5:53 Masq 1 0 0
  13. TCP 10.96.0.10:9153 rr
  14. -> 10.244.0.5:9153 Masq 1 0 0
  15. UDP 10.96.0.10:53 rr
  16. -> 10.244.0.5:53 Masq 1 0 0
  17. [root@master1 ~]# kubectl get pods -n kube-system -o wide |grep dns
  18. coredns-7ff77c879f-nnbm7 1/1 Running 0 5m2s 10.244.1.9 master2 <none> <none>
  19. [root@master1 ~]# kubectl delete pod/coredns-7ff77c879f-nnbm7 -n kube-system
  20. pod "coredns-7ff77c879f-nnbm7" deleted
  21. [root@master1 ~]# kubectl get pods -n kube-system -o wide |grep dns
  22. coredns-7ff77c879f-d976l 1/1 Running 0 19s 10.244.2.10 master3 <none> <none>
  23. [root@master1 ~]# ipvsadm -Ln #后面的ip仍然没有变化
  24. IP Virtual Server version 1.2.1 (size=4096)
  25. Prot LocalAddress:Port Scheduler Flags
  26. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  27. TCP 10.96.0.1:443 rr
  28. -> 192.168.56.101:6443 Masq 1 1 0
  29. TCP 10.96.0.10:53 rr
  30. -> 10.244.0.5:53 Masq 1 0 0
  31. TCP 10.96.0.10:9153 rr
  32. -> 10.244.0.5:9153 Masq 1 0 0
  33. UDP 10.96.0.10:53 rr
  34. -> 10.244.0.5:53 Masq 1 0 0
  # The cluster was installed with kubeadm; deleting the kube-proxy pods to recreate them did not help either
  [root@master1 ~]# kubectl logs kube-proxy-fhxsj -n kube-system   # kube-proxy errors:
  ...
  E0513 02:10:16.362685 1 proxier.go:1950] Failed to list IPVS destinations, error: parseIP Error ip=[10 244 0 5 0 0 0 0 0 0 0 0 0 0 0 0]
  E0513 02:10:16.362698 1 proxier.go:1192] Failed to sync endpoint for service: 10.96.0.10:9153/TCP, err: parseIP Error ip=[10 244 0 5 0 0 0 0 0 0 0 0 0 0 0 0]
  [root@master1 ~]# kubectl logs kube-apiserver-master1 -n kube-system   # apiserver errors:
  E0513 01:12:19.425042 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
  I0513 01:12:20.278922 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
  I0513 01:12:20.278948 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
  I0513 01:12:20.292422 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
  I0513 01:12:35.872070 1 controller.go:606] quota admission added evaluator for: endpoints
  I0513 01:12:49.649488 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
  I0513 01:56:44.648374 1 log.go:172] http: TLS handshake error from 10.0.2.15:23596: tls: first record does not look like a TLS handshake
  I0513 01:57:55.036379 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
  # Impact: pods that are not deleted are unaffected, and newly created pods are fine too. The issue only appears after a pod behind an existing svc is deleted: IPVS is not updated, so the svc becomes unreachable.
  # Temporary workaround: delete the svc (its IPVS rules are cleaned up with it) and recreate it.
  # Root cause: the IPVS functionality Kubernetes relies on is relatively new; the stock kernel is too old and needs to be upgraded.
  # 1) Current environment
  [root@master1 ~]# uname -r   # kernel and ip_vs module versions in this test environment
  3.10.0-1160.el7.x86_64
  [root@master1 ~]# modinfo ip_vs
  filename:       /lib/modules/3.10.0-1160.el7.x86_64/kernel/net/netfilter/ipvs/ip_vs.ko.xz
  license:        GPL
  retpoline:      Y
  rhelversion:    7.9
  srcversion:     7C6456F1C909656E6093A8F
  depends:        nf_conntrack,libcrc32c
  intree:         Y
  vermagic:       3.10.0-1160.el7.x86_64 SMP mod_unload modversions
  signer:         CentOS Linux kernel signing key
  sig_key:        E1:FD:B0:E2:A7:E8:61:A1:D1:CA:80:A2:3D:CF:0D:BA:3A:A4:AD:F5
  sig_hashalgo:   sha256
  parm:           conn_tab_bits:Set connections' hash size (int)
  # 2) Upgrade the kernel
  rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
  # Note: if a repo file has enabled=1 and gpgcheck=1 but the gpgkey file is missing, the kernel repo will not show up in "yum repolist"; setting gpgcheck=0 is a workaround
  yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
  yum install kernel-lt --enablerepo=elrepo-kernel
  [root@master1 ~]# cat /boot/grub2/grub.cfg | grep CentOS
  menuentry 'CentOS Linux (5.4.118-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.el7.x86_64-advanced-f5f7e6ec-62d1-4002-a6d2-f3864485cf02' {
  menuentry 'CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.el7.x86_64-advanced-f5f7e6ec-62d1-4002-a6d2-f3864485cf02' {
  menuentry 'CentOS Linux (0-rescue-7042e3de62955144a939ec93d90c2cd7) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-7042e3de62955144a939ec93d90c2cd7-advanced-f5f7e6ec-62d1-4002-a6d2-f3864485cf02' {
  [root@master1 ~]# grub2-set-default 'CentOS Linux (5.4.118-1.el7.elrepo.x86_64) 7 (Core)'
  # ELRepo provides two kernel package families: kernel-lt and kernel-ml. Both are built from sources published on the Linux Kernel Archives, with a configuration based on the default RHEL-7 config plus additional features enabled as needed. They are deliberately named kernel-ml / kernel-lt so they do not conflict with the stock RHEL-7 kernel and can be installed and updated alongside it. The difference is that kernel-lt tracks a long-term support branch while kernel-ml tracks the mainline stable branch.
  # 3) Verify
  reboot
  uname -r      # confirms the kernel has been updated
  ipvsadm -Ln   # the IPVS rules now change as expected
  # Perform the same upgrade on the other nodes
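
After the reboot it can also help to recreate the kube-proxy pods so the IPVS rules are rebuilt on the new kernel. A sketch (k8s-app=kube-proxy is the label kubeadm puts on the kube-proxy DaemonSet):

  [root@master1 ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # the DaemonSet recreates them
  [root@master1 ~]# ipvsadm -Ln | grep -A3 10.96.0.10                         # backends should now track the live coredns pods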

Problem 2: how to change the kube-proxy mode (the default is iptables)

  [root@master1 ~]# kubectl get cm/kube-proxy -n kube-system -o yaml |grep -i mode   # edit the kube-proxy ConfigMap, then delete the kube-proxy pods so they are recreated with the new mode
  detectLocalMode: ""
  mode: ipvs
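
The full sequence, as a sketch (the label and the proxyMode check are assumptions based on a standard kubeadm deployment, where kube-proxy serves its metrics on port 10249):

  [root@master1 ~]# kubectl -n kube-system edit cm kube-proxy               # set mode: "ipvs"
  [root@master1 ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy
  [root@master1 ~]# curl -s localhost:10249/proxyMode                       # should print "ipvs" once the new pods are up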

Problem 3: kubelet keeps logging "Container runtime network not ready" and the master stays "NotReady" (found when installing in an offline environment with no yum access)

  [root@iZvy205vkg85k96l33i7lrZ kubernetes]# journalctl -xe -u kubelet -l
  ...
  May 13 18:33:32 iZvy205vkg85k96l33i7lrZ kubelet[20719]: : [failed to find plugin "flannel" in path [/opt/cni/bin] failed to find plugin "por
  May 13 18:33:32 iZvy205vkg85k96l33i7lrZ kubelet[20719]: W0513 18:33:32.703337 20719 cni.go:237] Unable to update cni config: no valid netw
  May 13 18:33:33 iZvy205vkg85k96l33i7lrZ kubelet[20719]: E0513 18:33:33.977912 20719 kubelet.go:2187] Container runtime network not ready:
  # Cause: the "kubernetes-cni-0.8.7-0.x86_64" package was not installed. The dependency tree below shows why it is required (an offline-install sketch follows the listing):
  [root@master1 ~]# yum deplist kubeadm kubelet kubectl
  已加载插件:fastestmirror
  Loading mirror speeds from cached hostfile
  * base: mirrors.163.com
  * epel: mirrors.bfsu.edu.cn
  * extras: mirrors.163.com
  * updates: mirrors.163.com
  软件包:kubeadm.x86_64 1.21.0-0
    依赖:cri-tools >= 1.13.0
      provider: cri-tools.x86_64 1.13.0-0
    依赖:kubectl >= 1.13.0
      provider: kubectl.x86_64 1.21.0-0
    依赖:kubelet >= 1.13.0
      provider: kubelet.x86_64 1.21.0-0
    依赖:kubernetes-cni >= 0.8.6
      provider: kubernetes-cni.x86_64 0.8.7-0
  软件包:kubectl.x86_64 1.21.0-0
    此软件包无依赖关系
  软件包:kubelet.x86_64 1.21.0-0
    依赖:conntrack
      provider: conntrack-tools.x86_64 1.4.4-7.el7
    依赖:ebtables
      provider: ebtables.x86_64 2.0.10-16.el7
    依赖:ethtool
      provider: ethtool.x86_64 2:4.8-10.el7
    依赖:iproute
      provider: iproute.x86_64 4.11.0-30.el7
    依赖:iptables >= 1.4.21
      provider: iptables.x86_64 1.4.21-35.el7
      provider: iptables.i686 1.4.21-35.el7
    依赖:kubernetes-cni >= 0.8.7
      provider: kubernetes-cni.x86_64 0.8.7-0
    依赖:libc.so.6(GLIBC_2.2.5)(64bit)
      provider: glibc.x86_64 2.17-324.el7_9
    依赖:libdl.so.2()(64bit)
      provider: glibc.x86_64 2.17-324.el7_9
    依赖:libdl.so.2(GLIBC_2.2.5)(64bit)
      provider: glibc.x86_64 2.17-324.el7_9
    依赖:libpthread.so.0()(64bit)
      provider: glibc.x86_64 2.17-324.el7_9
    依赖:libpthread.so.0(GLIBC_2.2.5)(64bit)
      provider: glibc.x86_64 2.17-324.el7_9
    依赖:libpthread.so.0(GLIBC_2.3.2)(64bit)
      provider: glibc.x86_64 2.17-324.el7_9
    依赖:rtld(GNU_HASH)
      provider: glibc.x86_64 2.17-324.el7_9
      provider: glibc.i686 2.17-324.el7_9
    依赖:socat
      provider: socat.x86_64 1.7.3.2-2.el7
    依赖:util-linux
      provider: util-linux.x86_64 2.23.2-65.el7_9.1
      provider: util-linux.i686 2.23.2-65.el7_9.1
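
One way to satisfy these dependencies on an offline host is to download all RPMs (including kubernetes-cni) on a machine of the same CentOS 7 release that does have yum access, copy them over, and install locally. A sketch using yum-utils' yumdownloader (the version pins shown are illustrative):

  # on a machine with internet access
  yum install -y yum-utils
  yumdownloader --resolve --destdir=/tmp/k8s-rpms kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 kubernetes-cni
  # copy /tmp/k8s-rpms to the offline host, then:
  yum localinstall -y /tmp/k8s-rpms/*.rpm
  systemctl restart kubelet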

Problem 4: "OCI runtime create failed: systemd cgroup flag passed, but systemd support for managing cgroups is not available: unknown"

Comment out the `"exec-opts": ["native.cgroupdriver=systemd"],` line in /etc/docker/daemon.json (see the sketch below).
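
A sketch of the trimmed file and the restart that makes it take effect; keep whatever other settings (mirrors, insecure registries) your environment already uses:

  # /etc/docker/daemon.json -- only the exec-opts line is dropped
  {
    "insecure-registries": ["www.mt.com:9500"]
  }
  systemctl restart docker          # required for daemon.json changes to apply
  docker info | grep -i cgroup      # should now report cgroupfs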

Problem 5: image pull and push failures

  [root@master1 test]# docker push www.mt.com:9500/google_containers/flannel:v0.14.0-rc1
  The push refers to repository [www.mt.com:9500/google_containers/flannel]
  e4c4fde0a196: Layer already exists
  8a984b390686: Layer already exists
  71b519fcb2d2: Layer already exists
  ae1d52bdc861: Layer already exists
  2e16188127c8: Layer already exists
  815dff9e0b57: Layer already exists
  777b2c648970: Layer already exists
  v0.14.0-rc1: digest: sha256:124e34e6e2423ba8c839cd34806f125d4daed6d4d98e034a01975f0fd4229b2f size: 1785
  Signing and pushing trust metadata
  Error: error contacting notary server: tls: oversized record received with length 20527
  # Error 2
  [root@node01 ~]# docker push www.mt.com:9500/google_containers/flannel:v0.14.0-rc1
  Error: error contacting notary server: tls: first record does not look like a TLS handshake

Possible causes (a remediation sketch follows the list):

  • DOCKER_CONTENT_TRUST is set in the environment, so docker tries to sign the image and contact a notary server on push
  • HTTP-PROXY / HTTPS-PROXY settings shown in docker info
  • leftover settings in ~/.docker/config.json
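
A sketch for checking and clearing the three items above before retrying the push:

  env | grep -i DOCKER_CONTENT_TRUST   # if it prints 1, disable it for this shell:
  export DOCKER_CONTENT_TRUST=0
  docker info | grep -i proxy          # HTTP/HTTPS proxy settings that may intercept registry traffic
  cat ~/.docker/config.json            # look for leftover proxy or content-trust related entries
  docker push www.mt.com:9500/google_containers/flannel:v0.14.0-rc1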
