
k8s 1.18.20 high-availability cluster (patching the source to extend certificate lifetime)

Building a Kubernetes 1.18.20 HA cluster with kubeadm

I. Patch the source to extend certificate validity to 100 years

Kubernetes uses two kinds of certificates:

The CA certificates are valid for 10 years by default (ca, etcd-ca, front-proxy-ca).

The client and intra-cluster certificates are valid for 1 year by default: apiserver, etcd-server, apiserver-etcd-client, and so on.

Once they expire, the API server becomes unusable and you start seeing: x509: certificate has expired or is not yet valid.

So we recompile kubeadm before initializing the cluster, with the certificate validity set to 100 years.
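If you want to check how long the certificates of an existing kubeadm cluster are still valid, a quick openssl check against the running apiserver looks like this (a sketch; substitute your own apiserver address and port):

$ echo | openssl s_client -connect 10.12.52.156:6443 2>/dev/null | openssl x509 -noout -dates   # prints notBefore/notAfter of the apiserver serving certificate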

1. Fetch the source code
$ cd /opt/k8s_srccode
$ wget https://codeload.github.com/kubernetes/kubernetes/tar.gz/v1.18.20
$ tar -zxvf v1.18.20.tar.gz
$ cd kubernetes-1.18.20
2. Patch the certificate lifetimes
$ vim ./staging/src/k8s.io/client-go/util/cert/cert.go   # CA certificate lifetime
// In this function, NotAfter: now.Add(duration365d * 10).UTC()
// gives the default 10-year validity; change the 10 to 100.
// Search for it with /NotAfter
func NewSelfSignedCACert(cfg Config, key crypto.Signer) (*x509.Certificate, error) {
        now := time.Now()
        tmpl := x509.Certificate{
                SerialNumber: new(big.Int).SetInt64(0),
                Subject: pkix.Name{
                        CommonName:   cfg.CommonName,
                        Organization: cfg.Organization,
                },
                NotBefore:             now.UTC(),
                NotAfter:              now.Add(duration365d * 100).UTC(),
                KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
                BasicConstraintsValid: true,
                IsCA:                  true,
        }

        certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &tmpl, &tmpl, key.Public(), key)
        if err != nil {
                return nil, err
        }
        return x509.ParseCertificate(certDERBytes)
}


$ vim ./cmd/kubeadm/app/constants/constants.go   ## signed cluster certificate lifetime
// The constant CertificateValidity below controls it; change it to 100 years (the snippet below already shows the patched value)
const (
        // KubernetesDir is the directory Kubernetes owns for storing various configuration files
        KubernetesDir = "/etc/kubernetes"
        // ManifestsSubDirName defines directory name to store manifests
        ManifestsSubDirName = "manifests"
        // TempDirForKubeadm defines temporary directory for kubeadm
        // should be joined with KubernetesDir.
        TempDirForKubeadm = "tmp"

        // CertificateValidity defines the validity for all the signed certificates generated by kubeadm
        CertificateValidity = time.Hour * 24 * 365 * 100

        // CACertAndKeyBaseName defines certificate authority base name
        CACertAndKeyBaseName = "ca"
        // CACertName defines certificate name
        CACertName = "ca.crt"
        // CAKeyName defines certificate name
        CAKeyName = "ca.key"

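Before rebuilding, a quick sanity check that both patches are in place (run from the kubernetes-1.18.20 source root; the grep patterns simply match the edited lines):

$ grep -n 'duration365d \* 100' staging/src/k8s.io/client-go/util/cert/cert.go
$ grep -n '365 \* 100' cmd/kubeadm/app/constants/constants.go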
3. Rebuild kubeadm

Build with the official kube-cross Docker image:

$ docker pull gcrcontainer/kube-cross:v1.13.6-1
$ docker run --rm -v /opt/k8s_srccode/kubernetes-1.18.20:/go/src/k8s.io/kubernetes -it gcrcontainer/kube-cross:v1.13.6-1 bash
root@940ce120673f:/go# cd /go/src/k8s.io/kubernetes
root@940ce120673f:/go/src/k8s.io/kubernetes# make all WHAT=cmd/kubeadm GOFLAGS=-v
root@940ce120673f:/go/src/k8s.io/kubernetes# exit


The compiled kubeadm binary is at _output/local/bin/linux/amd64/kubeadm; copy it into place:

$ cp _output/local/bin/linux/amd64/kubeadm /usr/bin/kubeadm
$ chmod +x /usr/bin/kubeadm
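Before distributing it, confirm that the installed binary is the one you just built:

$ kubeadm version -o short   # should report v1.18.20 (the exact string may vary when built from a release tarball without git metadata)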

The build is done; next we install the cluster.

II. Node plan

Role          IP address      OS
k8s-master01  10.12.52.156    CentOS 7.6
k8s-master02  10.12.52.157    CentOS 7.6
k8s-master03  10.12.52.158    CentOS 7.6
k8s-node01    10.12.52.161    CentOS 7.6
k8s-node02    10.12.52.168    CentOS 7.6
k8s-lb        10.12.52.169    VIP

1. Base environment preparation

Kubernetes version: 1.18.20

Docker version: 19.03.8

1.1 Environment initialization
  • Set the hostname, using k8s-master01 as an example (set each node's hostname according to the node plan above; k8s-lb is a virtual IP and needs no hostname)
[root@localhost ~]# hostnamectl set-hostname k8s-master01
  • Configure /etc/hosts mappings (on every node)
[root@k8s-master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.12.52.156 k8s-master1
10.12.52.157 k8s-master2
10.12.52.158 k8s-master3
10.12.52.161  k8s-node1
10.12.52.168  k8s-node2
10.12.52.169  k8s-lb

Pinging k8s-lb fails at this point because we have not configured the VIP yet.

  • Disable the firewall
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
  • Disable SELinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config 
  • Disable swap
[root@localhost ~]# swapoff -a # temporarily
[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab # permanently
  • Time synchronization
[root@localhost ~]# yum install chrony -y
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# chronyc sources
  • Configure ulimit
[root@localhost ~]# ulimit -SHn 65535
  • Configure kernel parameters
[root@localhost ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@localhost ~]# sysctl --system   # sysctl -p alone does not read /etc/sysctl.d/k8s.conf
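Note that the net.bridge.* keys above only take effect once the br_netfilter kernel module is loaded; a minimal sketch to load it now and on every boot:

[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@localhost ~]# sysctl --system   # re-apply the settings after loading the module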
1.2 Kernel upgrade

CentOS 7.6 ships with kernel 3.10, which has many known bugs, the most common being the cgroup memory leak (run the upgrade on every host).
1) Download the desired kernel version; I install from an RPM, so download the RPM package directly.

[root@localhost ~]# wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm

2) Upgrade with rpm.

[root@localhost ~]# rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm

3) Reboot after the upgrade, then verify the new kernel is running.

[root@localhost ~]# reboot
[root@k8s-master01 ~]# uname -r

2. Component installation

2.1 Install ipvs
  • Install the packages ipvs requires

Since kube-proxy will run in ipvs proxy mode, the corresponding packages need to be installed.

[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
  • Load the kernel modules
  cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF

Note: since kernel 4.19, nf_conntrack_ipv4 has been renamed to nf_conntrack.

  • Make the modules load automatically after a reboot
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
2.2 Install docker-ce

Docker must be installed on every host.

[root@k8s-master01 ~]# # Install prerequisite packages
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# # Add the yum repository
[root@k8s-master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  • Check that the docker-ce package is available
[root@k8s-master01 ~]# yum list | grep docker-ce
containerd.io.x86_64                        1.2.13-3.1.el7             docker-ce-stable
docker-ce.x86_64                            3:19.03.8-3.el7            docker-ce-stable
docker-ce-cli.x86_64                        1:19.03.8-3.el7            docker-ce-stable
docker-ce-selinux.noarch                    17.03.3.ce-1.el7           docker-ce-stable
  • Install docker-ce
[root@k8s-master01 ~]# yum install docker-ce-19.03.8-3.el7 -y
[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl enable docker
  • Configure a registry mirror
[root@k8s-master01 ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
[root@k8s-master01 ~]# systemctl restart docker
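Optionally, you can already switch Docker's cgroup driver to systemd here, which avoids the kubelet warning handled under Problem 1 at the end of this post. A sketch of /etc/docker/daemon.json, assuming you keep the DaoCloud mirror configured above:

[root@k8s-master01 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@k8s-master01 ~]# systemctl restart docker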
2.3 Install the Kubernetes components

Perform these operations on all nodes.

  • Add the yum repository
[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Install the packages
[root@k8s-master01 ~]# yum install -y kubelet-1.18.20-0  kubectl-1.18.20-0 --disableexcludes=kubernetes
  • Enable kubelet at boot
[root@k8s-master01 ~]# systemctl enable kubelet.service
  • Copy the freshly built kubeadm to all machines

    [root@k8s-master01 ~]# for i in 10.12.52.156 10.12.52.157 10.12.52.158 10.12.52.161 10.12.52.168;do scp /usr/bin/kubeadm $i:/usr/bin/kubeadm;done
    

3. Cluster initialization

3.1 Configure cluster high availability

High availability is provided by HAProxy plus Keepalived, which handle failover and load-balance traffic across the master nodes; both run as daemons on every master node.

  • Install the packages
[root@k8s-master01 ~]# yum install keepalived haproxy -y
  • Configure haproxy

The configuration is identical on all master nodes:

[root@k8s-master01 ~]# cat /etc/haproxy/haproxy.cfg 
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

backend kubernetes-apiserver
    balance     roundrobin
    mode        tcp
    server  master01 10.12.52.156:6443 check
    server  master02 10.12.52.157:6443 check
    server  master03 10.12.52.158:6443 check

listen admin_stats
    stats   enable
    bind    *:8080
    mode    http
    option  httplog
    stats   uri /admin
    stats   realm haproxy
    stats   auth  admin:admin
    stats   hide-version

Note: change the apiserver backend addresses to the master addresses from your own node plan.
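After editing the file, you can validate the configuration syntax before starting the service:

[root@k8s-master01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg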

  • Configure keepalived

k8s-master01 node configuration

[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf 
global_defs {
    router_id LVS_K8S
}
# Define the health-check script
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 155
    priority 100
    authentication {
        auth_type PASS
        auth_pass kubernetes_1.18.20
    }
    virtual_ipaddress {
        10.12.52.169
    }
    track_script {
        check_apiserver   # reference the health-check script defined above
    }
}


k8s-master02 node configuration

[root@k8s-master02 ~]# cat /etc/keepalived/keepalived.conf 
global_defs {
    router_id LVS_K8S
}
# Define the health-check script
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 155
    priority 90
    authentication {
        auth_type PASS
        auth_pass kubernetes_1.18.20
    }
    virtual_ipaddress {
        10.12.52.169
    }
    track_script {
        check_apiserver   # reference the health-check script defined above
    }
}

k8s-master03 node configuration

[root@k8s-master03 ~]# cat /etc/keepalived/keepalived.conf 
global_defs {
    router_id LVS_K8S
}
# Define the health-check script
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 155
    priority 80
    authentication {
        auth_type PASS
        auth_pass kubernetes_1.18.20
    }
    virtual_ipaddress {
        10.12.52.169
    }
    track_script {
        check_apiserver   # reference the health-check script defined above
    }
}

Write the health-check script (the same on every master node):

[root@k8s-master03 ~]# cat /etc/keepalived/check-apiserver.sh
#!/bin/bash

function check_apiserver(){
 for ((i=0;i<5;i++))
 do
  apiserver_job_id=$(pgrep kube-apiserver)   # command substitution (the original ${...} form is a syntax error)
  if [[ ! -z ${apiserver_job_id} ]];then
   return
  else
   sleep 2
  fi
 done
 apiserver_job_id=0
}

# 1->running    0->stopped
check_apiserver
if [[ $apiserver_job_id -eq 0 ]];then
 /usr/bin/systemctl stop keepalived
 exit 1
else
 exit 0
fi

Start haproxy and keepalived

[root@k8s-master01 ~]# systemctl enable --now keepalived
[root@k8s-master01 ~]# systemctl enable --now haproxy
[root@k8s-master01 ~]# systemctl  status haproxy
[root@k8s-master01 ~]# systemctl status keepalived
## Check the virtual IP
[root@k8s-master01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:81:fb:6b brd ff:ff:ff:ff:ff:ff
    inet 10.12.52.156/24 brd 10.12.52.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.12.52.169/32 scope global eth0
       valid_lft forever preferred_lft forever

Check VIP failover: stop keepalived on the active master and verify that the VIP drifts to another node.
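A minimal failover test, assuming the VIP currently sits on k8s-master01:

[root@k8s-master01 ~]# systemctl stop keepalived
[root@k8s-master02 ~]# ip a | grep 10.12.52.169    # the VIP should now show up on master02 (the next-highest priority)
[root@k8s-master01 ~]# systemctl start keepalived   # master01 takes the VIP back once keepalived is running again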

3.2 Deploy the master nodes
  • On k8s-master01, create the kubeadm.yaml configuration file as follows
[root@k8s-master01 ~]# cat >> kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.20
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8s-lb:16443"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.100.0.0/16
  serviceSubnet: 192.101.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
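Optionally, you can list the images this configuration resolves to before pulling anything:

[root@k8s-master01 ~]# kubeadm config images list --config kubeadm.yaml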
  • Pull the images
[root@k8s-master01 ~]# kubeadm config images pull --config kubeadm.yaml

The images come from the Aliyun mirror, which should be reasonably fast; alternatively, download the image tarballs provided at the beginning of the article and load them on each node:

docker load -i  1-18-kube-apiserver.tar.gz
docker load -i  1-18-kube-scheduler.tar.gz
docker load -i  1-18-kube-controller-manager.tar.gz
docker load -i  1-18-pause.tar.gz
docker load -i  1-18-cordns.tar.gz
docker load -i  1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz

Notes: pause is 3.2 (k8s.gcr.io/pause:3.2), etcd is 3.4.3 (k8s.gcr.io/etcd:3.4.3-0), coredns is 1.6.7 (k8s.gcr.io/coredns:1.6.7); apiserver, scheduler, controller-manager and kube-proxy are 1.18.20, using k8s.gcr.io/kube-apiserver:v1.18.20, k8s.gcr.io/kube-controller-manager:v1.18.20, k8s.gcr.io/kube-scheduler:v1.18.20 and k8s.gcr.io/kube-proxy:v1.18.20.

  • Run the initialization
[root@k8s-master01 ~]# kubeadm init --config kubeadm.yaml --upload-certs
W1026 19:02:09.032034   24133 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1026 19:02:12.569732   24133 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1026 19:02:12.572476   24133 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.519122 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
6ae26c2ee15f5eb4812fea1f797bbd5f999e4e6acba4d71061610ffd4f46a06e
[mark-control-plane] Marking the node bj-zmzy-dev-k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node bj-zmzy-dev-k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8xkq1p.kbia9sclkeo6ca8e
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-lb:16443 --token 8xkq1p.kbia9sclkeo6ca8e \
    --discovery-token-ca-cert-hash sha256:a33cd3f9e80b4f70efa564ab2f346a68f1c67ae610e10bc85e8d1f2cf544a29b \
    --control-plane --certificate-key 6ae26c2ee15f5eb4812fea1f797bbd5f999e4e6acba4d71061610ffd4f46a06e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-lb:16443 --token 8xkq1p.kbia9sclkeo6ca8e \
    --discovery-token-ca-cert-hash sha256:a33cd3f9e80b4f70efa564ab2f346a68f1c67ae610e10bc85e8d1f2cf544a29b 

Record the kubeadm join commands printed at the end; they are needed later to join the other master nodes and the worker nodes.

  • Configure environment variables
[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master01 ~]# source /root/.bashrc
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
  • Check the node status
[root@bj-zmzy-dev-k8s-master01 ~]#  kubectl get node
NAME                       STATUS   ROLES    AGE   VERSION
bj-zmzy-dev-k8s-master01   Ready    master   15h   v1.18.20
  • Install the network plugin
If a node has multiple NICs, specify the internal NIC in the manifest (single-NIC nodes need no change):
[root@k8s-master01 ~]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[root@k8s-master01 ~]# vi calico.yaml # set CALICO_IPV4POOL_CIDR to the podSubnet from kubeadm.yaml, i.e. value: "192.100.0.0/16"
......
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.8-1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: IP_AUTODETECTION_METHOD # add this env var to the calico-node DaemonSet
              value: interface=eth0 # pin the internal NIC
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
......
# Install the calico network plugin
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
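Calico takes a minute or two to become ready; you can watch its pods (label as used in the v3.8 manifest) until they are all Running:

[root@k8s-master01 ~]# kubectl -n kube-system get pods -l k8s-app=calico-node -w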

Once the network plugin is installed, the node status looks like this:

[root@bj-zmzy-dev-k8s-master01 ~]#  kubectl get node
NAME                       STATUS   ROLES    AGE   VERSION
bj-zmzy-dev-k8s-master01   Ready    master   15h   v1.18.20

The node status has changed from NotReady to Ready.

  • Join master02 to the cluster
  • Pull the images

[root@k8s-master02 ~]# kubeadm config images pull --config kubeadm.yaml
  • Join the cluster
kubeadm join k8s-lb:16443 --token 8xkq1p.kbia9sclkeo6ca8e \
    --discovery-token-ca-cert-hash sha256:a33cd3f9e80b4f70efa564ab2f346a68f1c67ae610e10bc85e8d1f2cf544a29b \
    --control-plane --certificate-key 6ae26c2ee15f5eb4812fea1f797bbd5f999e4e6acba4d71061610ffd4f46a06e
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...
  • Configure environment variables
[root@k8s-master02 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master02 ~]# source /root/.bashrc
[root@k8s-master02 ~]# mkdir -p $HOME/.kube
[root@k8s-master02 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master02 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
  • Repeat the same steps on the remaining master to join master03 to the cluster
  • Check the cluster status
[root@bj-zmzy-dev-k8s-master01 ~]#  kubectl get node
NAME                       STATUS   ROLES    AGE   VERSION
bj-zmzy-dev-k8s-master01   Ready    master   15h   v1.18.20
bj-zmzy-dev-k8s-master02   Ready    master   15h   v1.18.20
bj-zmzy-dev-k8s-master03   Ready    master   15h   v1.18.20
  • Check the control-plane component status

If everything is Running, all components are healthy; if not, inspect the failing pod's logs to troubleshoot.

[root@k8s-master1 ~]#  kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-75d555c48-vq56c   1/1     Running   0          7m24s
calico-node-6dnkh                         1/1     Running   0          7m24s
calico-node-7dvdx                         1/1     Running   0          2m34s
calico-node-g9sq6                         1/1     Running   0          2m40s
coredns-7ff77c879f-5v2c7                  1/1     Running   0          22m
coredns-7ff77c879f-kxhgt                  1/1     Running   0          22m
etcd-k8s-master1                          1/1     Running   0          22m
etcd-k8s-master2                          1/1     Running   0          2m35s
etcd-k8s-master3                          1/1     Running   0          2m27s
kube-apiserver-k8s-master1                1/1     Running   0          22m
kube-apiserver-k8s-master2                1/1     Running   0          2m39s
kube-apiserver-k8s-master3                1/1     Running   0          2m34s
kube-controller-manager-k8s-master1       1/1     Running   1          22m
kube-controller-manager-k8s-master2       1/1     Running   0          2m39s
kube-controller-manager-k8s-master3       1/1     Running   0          2m34s
kube-proxy-dkfz6                          1/1     Running   0          2m34s
kube-proxy-rdmbb                          1/1     Running   0          2m40s
kube-proxy-vxc5b                          1/1     Running   0          22m
kube-scheduler-k8s-master1                1/1     Running   1          22m
kube-scheduler-k8s-master2                1/1     Running   0          2m39s
kube-scheduler-k8s-master3                1/1     Running   0          2m34s
  • Check the CSRs
[root@k8s-master1 ~]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 CONDITION
csr-5v7ld   23m     kubernetes.io/kube-apiserver-client-kubelet   system:node:k8s-master1   Approved,Issued
csr-b9v26   3m16s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:zuhvol   Approved,Issued
csr-wls66   3m12s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:zuhvol   Approved,Issued

The other master nodes can now also operate the cluster, giving us high availability.

3.3 Deploy the worker nodes
  • Worker nodes only need to join the cluster
  • If the token has expired, run kubeadm token create --print-join-command to generate a new join command
kubeadm join k8s-lb:16443 --token 8xkq1p.kbia9sclkeo6ca8e \
    --discovery-token-ca-cert-hash sha256:a33cd3f9e80b4f70efa564ab2f346a68f1c67ae610e10bc85e8d1f2cf544a29b 
  • The output looks like this:
W1117 12:50:41.204864    8762 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • Finally, check the cluster node list
[root@k8s-master01 ~]#  kubectl get node
NAME                       STATUS   ROLES    AGE   VERSION
bj-zmzy-dev-k8s-master01   Ready    master   15h   v1.18.20
bj-zmzy-dev-k8s-master02   Ready    master   15h   v1.18.20
bj-zmzy-dev-k8s-master03   Ready    master   15h   v1.18.20
bj-zmzy-dev-k8s-node01     Ready    <none>   14h   v1.18.20
bj-zmzy-dev-k8s-node02     Ready    <none>   14h   v1.18.20

IV. Verify the certificate lifetime

[root@k8s-master01 ~]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 03, 2121 02:35 UTC   99y                                     no      
apiserver                  Oct 03, 2121 02:35 UTC   99y             ca                      no      
apiserver-etcd-client      Oct 03, 2121 02:35 UTC   99y             etcd-ca                 no      
apiserver-kubelet-client   Oct 03, 2121 02:35 UTC   99y             ca                      no      
controller-manager.conf    Oct 03, 2121 02:35 UTC   99y                                     no      
etcd-healthcheck-client    Oct 03, 2121 02:35 UTC   99y             etcd-ca                 no      
etcd-peer                  Oct 03, 2121 02:35 UTC   99y             etcd-ca                 no      
etcd-server                Oct 03, 2121 02:35 UTC   99y             etcd-ca                 no      
front-proxy-client         Oct 03, 2121 02:35 UTC   99y             front-proxy-ca          no      
scheduler.conf             Oct 03, 2121 02:35 UTC   99y                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 02, 2121 10:51 UTC   99y             no      
etcd-ca                 Oct 02, 2121 10:51 UTC   99y             no      
front-proxy-ca          Oct 02, 2121 10:51 UTC   99y             no  
# If you are still around when these certificates expire in 100 years, renew them with:
[root@k8s-master01 ~]# kubeadm alpha certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
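You can also confirm the new lifetime directly from the certificate files on any control-plane node:

[root@k8s-master01 ~]# openssl x509 -noout -enddate -in /etc/kubernetes/pki/ca.crt
[root@k8s-master01 ~]# openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt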

Reference: installing k8s 1.18 with kubeadm
https://blog.csdn.net/weixin_41476014/article/details/106402916

Problem 1:
Warning: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

vim /etc/docker/daemon.json

Add the following:
{
"exec-opts":["native.cgroupdriver=systemd"]
}
Then change --cgroup-driver=cgroupfs to --cgroup-driver=systemd in /var/lib/kubelet/kubeadm-flags.env,
so that the Docker and kubelet cgroup-driver settings agree on systemd. Then restart both:

systemctl restart docker kubelet

Problem 2:
Joining a new node with kubeadm fails with: there is no JWS signed token in the cluster-info ConfigMap

[root@k8s01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
37016n.xu57cr94pvxk9fmd   23h         2020-06-12T17:53:43+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@k8s01 ~]#  kubeadm --v=5 token create --print-join-command
I0611 18:07:29.945991    5389 token.go:121] [token] validating mixed arguments
I0611 18:07:29.946052    5389 token.go:130] [token] getting Clientsets from kubeconfig file
I0611 18:07:29.946082    5389 cmdutil.go:79] Using kubeconfig file: /etc/kubernetes/admin.conf
I0611 18:07:29.948272    5389 token.go:243] [token] loading configurations
I0611 18:07:29.948549    5389 interface.go:400] Looking for default routes with IPv4 addresses
I0611 18:07:29.948560    5389 interface.go:405] Default route transits interface "eno1"
I0611 18:07:29.948941    5389 interface.go:208] Interface eno1 is up
I0611 18:07:29.949026    5389 interface.go:256] Interface "eno1" has 2 addresses :[10.100.22.145/24 10.100.22.148/32].
I0611 18:07:29.949046    5389 interface.go:223] Checking addr  10.100.22.145/24.
I0611 18:07:29.949058    5389 interface.go:230] IP found 10.100.22.145
I0611 18:07:29.949069    5389 interface.go:262] Found valid IPv4 address 10.100.22.145 for interface "eno1".
I0611 18:07:29.949078    5389 interface.go:411] Found active IP 10.100.22.145 
W0611 18:07:29.949209    5389 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0611 18:07:29.949225    5389 token.go:255] [token] creating token
kubeadm join k8s-lb:16443 --token 385ya5.v8jkyzyrmf6i13qn     --discovery-token-ca-cert-hash sha256:fb3336b728602dda8fbc2120ddf854034d208960c59b01e2509293c3964028ea 

[root@k8s01 ~]# kubeadm init phase upload-certs --upload-certs
W0611 18:09:07.243420    7445 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
07ddca43735a7e5cfaec51aef37cbdf1be0dd9a02eb9333b09b245da17f1f698

[root@k8s01 ~]# kubeadm token list                              
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
37016n.xu57cr94pvxk9fmd   23h         2020-06-12T17:53:43+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
385ya5.v8jkyzyrmf6i13qn   23h         2020-06-12T18:07:29+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
ncoekw.5q77ik0b4ml3ce4e   1h          2020-06-11T20:09:07+08:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret        <none>

The updated join command after substituting the new token and certificate key:
kubeadm join k8s-lb:16443 --token 385ya5.v8jkyzyrmf6i13qn     --discovery-token-ca-cert-hash sha256:fb3336b728602dda8fbc2120ddf854034d208960c59b01e2509293c3964028ea     --control-plane --certificate-key 07ddca43735a7e5cfaec51aef37cbdf1be0dd9a02eb9333b09b245da17f1f698 --v=5

https://www.jianshu.com/p/300cf6bf0943?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation

Problem 3:
K8s: letting the master also act as a node (allowing Pod replicas to be scheduled onto master nodes)
https://www.hangge.com/blog/cache/detail_2431.html
1. Use the master as a node as well
(1) To allow Pods to be scheduled onto the master (localhost.localdomain in this example), run the following to make it usable as a worker node.
Note: with this approach you can build a single-node k8s cluster without minikube.

kubectl taint node localhost.localdomain node-role.kubernetes.io/master-

(2) The command prints some output (errors in it can be ignored); see the original post linked above for details.

2. Restore the master to master-only
To prevent Pods from being scheduled onto the master (localhost.localdomain here), run:

kubectl taint node localhost.localdomain node-role.kubernetes.io/master="":NoSchedule

Problem 4:
TLS handshake timeout errors in Docker

[root@k8s02 ~]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://hub-mirror.c.163.com"],
  "exec-opts":["native.cgroupdriver=systemd"]
}
systemctl  restart docker

Install kubectl command auto-completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Problem 5:
Extending certificate expiry for a kubeadm-initialized cluster

https://www.cnblogs.com/skymyyang/p/11093686.html
Building a custom kubeadm binary: https://blog.csdn.net/fuck487/article/details/102759523

Problem 6:
KubeSphere: https://kubesphere.io/install/

Problem 7:
Installing Helm 3
https://devopscube.com/install-configure-helm-kubernetes/
https://www.jianshu.com/p/e86148b86957
Using Helm 3: https://aliasmee.github.io/post/helm-3-is-coming/

Problem 8:
Installing metrics-server so that kubectl top node works
https://blog.csdn.net/qq_24794401/article/details/106234001

Problem 9:

Joining a master node with kubeadm fails with: error execution phase check-etcd: etcd cluster is not healthy: context deadline exceeded

https://blog.csdn.net/q1403539144/article/details/107560921

Problem 10: unknown container "/system.slice/docker.service"

https://www.jianshu.com/p/5c2e15fa6733

https://blog.csdn.net/qq_35151346/article/details/108979873

find / -name 10-kubeadm.conf

vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

Add:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
