
Kubernetes v1.22.1 High-Availability Cluster Deployment

1. Overview

Use kubeadm to deploy a multi-master, highly available Kubernetes 1.22.1 cluster.

Node               Role     IP              Software
master-1           master   192.168.5.11    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master-2           master   192.168.5.12    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master-3           master   192.168.5.13    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
control-plane VIP  VIP      192.168.5.50    -
worker-1           worker   192.168.5.21    kubeadm, kubelet, kubectl, docker

Architecture diagram: (image not reproduced here)

Notes:

etcd is installed on the three master nodes. The control-plane components are deployed by kubeadm and run as Pods on the three masters.
The load balancer is implemented with HAProxy, which forwards API traffic to the kube-apiservers.
Keepalived manages the VIP: at any given time only one master holds the VIP and acts as the active entry point, while the other two are standby.

2. Base Environment Setup (run on all hosts)


Install basic packages and upgrade the kernel:

yum -y install vim git lrzsz wget net-tools bash-completion

sudo yum -y update          # full system update, downloads roughly 1 GB
sudo yum update -y kernel   # kernel-only update, downloads roughly 100 MB

# After the kernel upgrade, reboot the node; then restart the Docker service (once Docker is installed).
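A quick sanity check after the reboot; this is an optional sketch, not part of the original steps:

# confirm the node is now running the newest installed kernel
uname -r
rpm -q kernel | sort -V | tail -n 1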


Disable SELinux and the firewall on all nodes:

setenforce 0 \
&& sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config \
&& getenforce
 
systemctl stop firewalld \
&& systemctl daemon-reload \
&& systemctl disable firewalld \
&& systemctl daemon-reload \
&& systemctl status firewalld


Add host entries:

cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.11  master-1
192.168.5.12  master-2
192.168.5.13  master-3
192.168.5.21  worker-1
EOF
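To avoid editing /etc/hosts on every node by hand, the same file can be pushed out; a small sketch that assumes the SSH key distribution set up later in this section is already in place:

for host in master-2 master-3 worker-1; do
  scp /etc/hosts $host:/etc/hosts    # copy the identical hosts file to the other nodes
done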

Synchronize system time on all nodes:

yum install ntp -y
ntpdate cn.pool.ntp.org
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 1
timedatectl set-ntp 1

Configure the bridge settings:

Packets forwarded across the Linux bridge must pass through the iptables FORWARD chain; CNI plugins depend on this. Create /etc/sysctl.d/k8s.conf:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
 
# load the br_netfilter module and apply the settings
modprobe br_netfilter \
&& sysctl -p /etc/sysctl.d/k8s.conf
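An optional check that the settings are active:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both values should be 1
lsmod | grep br_netfilter                                        # the module should be loaded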

Disable the swap partition:

swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab
 
echo vm.swappiness = 0 >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf

Install and configure IPVS:

yum -y install ipvsadm ipset
 
# create the ipvs module-loading script
 
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
 
# run the script and verify the modules are loaded
 
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
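On systemd-based systems the modules can also be loaded automatically at every boot via systemd-modules-load; this is an optional alternative sketch, not part of the original procedure:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load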

Set up SSH key login:

[root@master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b7:ac:c3:65:06:97:80:2a:f6:88:13:9a:dd:8a:a1:d6 root@node1
The key's randomart image is:
+--[ RSA 2048]----+
|       .         |
|      . .        |
|     .   . .     |
|. o .   . o      |
|.* =    So.      |
|* o o    o+.     |
|.+..   . +o      |
|o..E    o.       |
|.       ..       |
+-----------------+

Distribute the public key:

for host in master-1 master-2 master-3; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done
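An optional check that passwordless login works for every target:

for host in master-1 master-2 master-3; do
  ssh -o BatchMode=yes $host hostname   # should print each hostname without prompting for a password
done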


Install Docker:

# remove any previously installed Docker packages
 
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
 
# install dependencies and add the Docker CE repository
 
 sudo yum install -y yum-utils
 sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
 
# install Docker
 
sudo yum install docker-ce docker-ce-cli containerd.io
 
# start Docker and enable it at boot
 
systemctl start docker
systemctl enable docker
systemctl status docker

Configure the Docker registry mirror and cgroup driver:

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://aedvu1x8.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
 
 
# restart Docker and check the cgroup driver
systemctl restart docker
docker info | grep Cgroup


3. HAProxy and Keepalived Deployment (on the three master nodes)

Install HAProxy and Keepalived on all master nodes:

yum -y install haproxy keepalived

Edit the HAProxy configuration file on all master nodes:

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000
frontend  k8s_https *:8443
    mode      tcp
    maxconn      2000
    default_backend     https_sri
    
backend https_sri
    balance      roundrobin
    server master-1-api 192.168.5.11:6443  check inter 10000 fall 2 rise 2 weight 1
    server master-2-api 192.168.5.12:6443  check inter 10000 fall 2 rise 2 weight 1
    server master-3-api 192.168.5.13:6443  check inter 10000 fall 2 rise 2 weight 1
EOF

Edit the Keepalived configuration file on all master nodes; note that the priority value must be adjusted per node according to the plan:

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3000
}
vrrp_instance VI_1 {
    state MASTER      # MASTER on the primary node, BACKUP on the other two
    interface ens33   # change to match the NIC name on your hosts
    virtual_router_id 80
    priority 100      # set 100 / 80 / 70 on the three masters respectively
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    virtual_ipaddress {
        192.168.5.50/24  # the planned VIP address
    }
    track_script {
        check_haproxy 
    }
}
EOF

SELinux must be disabled, otherwise the check script cannot run.
Deploy the HAProxy health-check script on all master nodes:

cat > /etc/keepalived/check_haproxy.sh << 'EOF'
#!/bin/bash
# restart haproxy if it is down; if it still fails, stop keepalived so the VIP fails over
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
        systemctl start haproxy
        sleep 3
        if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
                systemctl stop keepalived
        fi
fi
EOF
 
 
chmod +x /etc/keepalived/check_haproxy.sh


systemctl enable keepalived
systemctl restart keepalived
systemctl enable haproxy
systemctl restart haproxy
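To confirm the VIP and load balancer work, check which master currently holds the VIP and that HAProxy listens on 8443. A hedged sketch (ens33 is the interface used in the keepalived configuration above, and the backends only become healthy once the apiservers exist):

ip addr show ens33 | grep 192.168.5.50   # the VIP should appear on exactly one master
ss -lntp | grep 8443                     # haproxy should be listening on every master
# failover test: stop keepalived on the VIP holder, confirm the VIP moves, then start it again
systemctl stop keepalived
systemctl start keepalived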

4. etcd Cluster Deployment (on the three master nodes)

Create the etcd certificates (run on master-1 only)

1. Set up the cfssl environment

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

2. Create the CA configuration files (the IPs below are the etcd node IPs)

mkdir /root/ssl
cd /root/ssl
cat >  ca-config.json <<EOF
{
"signing": {
"default": {
  "expiry": "8760h"
},
"profiles": {
  "kubernetes-Soulmate": {
    "usages": [
        "signing",
        "key encipherment",
        "server auth",
        "client auth"
    ],
    "expiry": "8760h"
  }
}
}
}
EOF

cat >  ca-csr.json <<EOF
{
"CN": "kubernetes-Soulmate",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
  "C": "CN",
  "ST": "shanghai",
  "L": "shanghai",
  "O": "k8s",
  "OU": "System"
}
]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.150.181",
    "192.168.150.182",
    "192.168.150.183"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
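Before distributing the certificate you can verify that the expected SANs made it in; an optional check using the tools installed above:

cfssl-certinfo -cert etcd.pem | grep -A5 sans        # the etcd node IPs should be listed
openssl x509 -in etcd.pem -noout -subject -dates     # inspect subject and validity period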
  

3. Distribute the etcd certificates from master-1 to master-2 and master-3

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n master-2 "mkdir -p /etc/etcd/ssl && exit"
ssh -n master-3 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem master-2:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem master-3:/etc/etcd/ssl/

Deploy the etcd cluster

1. Install etcd

yum install etcd -y
mkdir -p /var/lib/etcd

2. etcd.service on master-1

# cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name master-1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.5.11:2380 \
  --listen-peer-urls https://192.168.5.11:2380 \
  --listen-client-urls https://192.168.5.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.5.11:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380,master-3=https://192.168.5.13:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


3. etcd.service on master-2

# cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name master-2 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.5.12:2380 \
  --listen-peer-urls https://192.168.5.12:2380 \
  --listen-client-urls https://192.168.5.12:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.5.12:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380,master-3=https://192.168.5.13:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. etcd.service on master-3

# cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name master-3 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.5.13:2380 \
  --listen-peer-urls https://192.168.5.13:2380 \
  --listen-client-urls https://192.168.5.13:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.5.13:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380,master-3=https://192.168.5.13:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


5. Enable etcd at boot and start it (the etcd cluster needs at least two members up to start; if startup fails, check the messages log)

mv /etc/systemd/system/etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

6. Check the health of the etcd cluster (run on master-1)

etcdctl --endpoints=https://192.168.5.11:2379,https://192.168.5.12:2379,https://192.168.5.13:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health

member 45bf9ccad8d8900a is healthy: got healthy result from https://192.168.5.12:2379
member 54a5796a6803f252 is healthy: got healthy result from https://192.168.5.11:2379
member da27c13c21936c01 is healthy: got healthy result from https://192.168.5.13:2379
cluster is healthy
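The command above uses the etcdctl v2-style flags. With the v3 API the equivalent health check looks roughly like this; treat it as an alternative sketch, since the flag names differ from the v2 form:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.5.11:2379,https://192.168.5.12:2379,https://192.168.5.13:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health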

5. Kubernetes Cluster Deployment (run on all hosts)

1. Install kubelet, kubeadm, and kubectl

# add the Aliyun Kubernetes yum repository
 
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubelet, kubeadm, and kubectl:

yum -y install kubelet-1.22.1-0 kubeadm-1.22.1-0 kubectl-1.22.1-0
 
# start kubelet and enable it at boot:
 
systemctl start kubelet
systemctl enable kubelet

At this point kubelet cannot start because its default configuration file does not yet exist; this failed state can be ignored for now.

3. Pull the images in advance (on the three master hosts):

# list the required images
[root@master01 tools]# kubeadm config images list
I1213 22:07:58.020311   27601 version.go:255] remote version is much newer: v1.23.0; falling back to: stable-1.22
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4


Image download script:

#!/bin/bash
images="kube-apiserver:v1.22.1 kube-controller-manager:v1.22.1 kube-scheduler:v1.22.1 kube-proxy:v1.22.1 pause:3.5 etcd:3.5.0-0"
        
for imageName in ${images[@]};
do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
 
docker pull coredns/coredns:1.8.4
docker tag coredns/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker rmi coredns/coredns:1.8.4   

Save the images on a master node:

docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.22.1
docker save -o coredns.tar k8s.gcr.io/coredns/coredns:v1.8.4
docker save -o pause.tar k8s.gcr.io/pause:3.5
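The saved archives then need to be copied to the worker before they can be loaded there; a small sketch assuming the tar files are in the current directory and root SSH access to worker-1:

scp kube-proxy.tar coredns.tar pause.tar worker-1:/root/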

Load the images on the worker node:

docker load -i kube-proxy.tar
docker load -i coredns.tar
docker load -i pause.tar

4. Initialize the cluster
Run kubeadm config print init-defaults > kubeadm-init.yaml to dump the default configuration, then adjust it to your environment; you need to modify advertiseAddress, controlPlaneEndpoint, imageRepository, and serviceSubnet.

[root@master01 tools]# cat kubeadm-init.yaml 

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.5.11  # this node's IP (master-1)
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master-1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.5.50:8443"     # VIP address; HAProxy listens on port 8443
dns: {}
etcd:
  external:
    endpoints:
    - https://192.168.5.11:2379
    - https://192.168.5.12:2379
    - https://192.168.5.13:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
imageRepository: k8s.gcr.io     # image registry
kind: ClusterConfiguration
kubernetesVersion: 1.22.1       # version; must match the images pulled earlier
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12   # service CIDR
  podSubnet: 10.244.0.0/16
scheduler: {}
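Before the real initialization you can let kubeadm validate the configuration and simulate the run without modifying the host; an optional sketch using the standard --dry-run flag:

kubeadm init --config kubeadm-init.yaml --dry-run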

Run the initialization:

kubeadm init --config kubeadm-init.yaml
[root@master-1 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-1] and IPs [10.96.0.1 192.168.5.11 192.168.5.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1] and IPs [192.168.5.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1] and IPs [192.168.5.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.576499 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.5.50:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:83370f58a593b43539175844f4d8d895d4a2be4345ae76528e92b2ee52eaba1d \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.5.50:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:83370f58a593b43539175844f4d8d895d4a2be4345ae76528e92b2ee52eaba1d


kubeadm init performs the following main steps:

[init]: initialize with the specified version.
[preflight]: run pre-flight checks and pull the required Docker images.
[kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; kubelet cannot start without it, which is why kubelet failed to start before initialization.
[certificates]: generate the certificates used by Kubernetes, stored under /etc/kubernetes/pki.
[kubeconfig]: generate the kubeconfig files under /etc/kubernetes; the components use them to communicate with each other.
[control-plane]: install the master components from the YAML manifests under /etc/kubernetes/manifests.
[etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: wait for the control-plane components started by the kubelet to come up.
[apiclient]: check the health of the master components.
[uploadconfig]: upload the configuration.
[kubelet]: configure kubelet via a ConfigMap.
[patchnode]: record CNI information on the Node object via annotations.
[mark-control-plane]: label the current node with the master role and the NoSchedule taint, so ordinary Pods are not scheduled onto master nodes by default.
[bootstrap-token]: generate the token used later with kubeadm join to add nodes to the cluster.
[addons]: install the CoreDNS and kube-proxy add-ons.


Prepare the kubeconfig file for kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
export KUBECONFIG=/etc/kubernetes/admin.conf

Distribute the certificates to the other master nodes:

for node in master-2 master-3; do
  ssh $node "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $node:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $node:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $node:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $node:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $node:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $node:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $node:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $node:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $node:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $node:~/.kube/config
done

Join the master nodes:

Add master-2 and master-3 to the cluster:
[root@master-2 ~]#  kubeadm join 192.168.5.50:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:83370f58a593b43539175844f4d8d895d4a2be4345ae76528e92b2ee52eaba1d \
        --control-plane

[root@master-2 ~]# mkdir -p $HOME/.kube
[root@master-2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
======================================================================
[root@master-3 ~]#  kubeadm join 192.168.5.50:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:83370f58a593b43539175844f4d8d895d4a2be4345ae76528e92b2ee52eaba1d \
        --control-plane

[root@master-3 ~]# mkdir -p $HOME/.kube
[root@master-3 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-3 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config


Join the worker node:

[root@worker-1 ~]#kubeadm join 192.168.5.50:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:83370f58a593b43539175844f4d8d895d4a2be4345ae76528e92b2ee52eaba1d 
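The bootstrap token shown above expires after 24 hours. If a node is joined later, a fresh worker join command can be generated on master-1 with the standard kubeadm helper:

kubeadm token create --print-join-command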

Check the node status:

Before the network add-on is installed the nodes show NotReady; once Calico (or flannel) is installed they turn Ready, which means the cluster is up and you can move on to verifying it.

[root@master-1 ~]# kubectl get node
NAME       STATUS   ROLES                  AGE    VERSION
master-1   Ready    control-plane,master   117m   v1.22.1
master-2   Ready    control-plane,master   18m    v1.22.1
master-3   Ready    control-plane,master   16m    v1.22.1
worker-1   Ready    <none>                 15m    v1.22.1

Enable IPVS mode for kube-proxy:

On any master node, set mode: "ipvs" in the kube-proxy ConfigMap:

kubectl edit configmap kube-proxy -n kube-system
# set this field:
mode: "ipvs"
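If you prefer a non-interactive change instead of kubectl edit, the same field can be rewritten in one pipeline; a hedged sketch that assumes the field is currently empty (mode: "") in the ConfigMap's config.conf data:

kubectl -n kube-system get configmap kube-proxy -o yaml \
  | sed 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -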

On any master node, restart the kube-proxy Pods on every node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'


Verify the change:

[root@master-1 ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-2fp75                   1/1     Running   0               8s
kube-proxy-bshbw                   1/1     Running   0               9s
kube-proxy-q7gpd                   1/1     Running   0               11s
kube-proxy-qc7ct                   1/1     Running   0               10s
 
[root@master-1 ~]# kubectl logs kube-proxy-2fp75 -n kube-system
I1214 06:15:16.767187       1 node.go:172] Successfully retrieved node IP: 192.168.5.11
I1214 06:15:16.767494       1 server_others.go:140] Detected node IP 192.168.5.11
I1214 06:15:16.800057       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1214 06:15:16.800120       1 server_others.go:274] Using ipvs Proxier.
I1214 06:15:16.800133       1 server_others.go:276] creating dualStackProxier for ipvs.
W1214 06:15:16.800149       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
E1214 06:15:16.800390       1 proxier.go:381] "can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1"
I1214 06:15:16.800881       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
E1214 06:15:16.801026       1 proxier.go:381] "can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1"
I1214 06:15:16.801356       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
W1214 06:15:16.801408       1 ipset.go:113] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W1214 06:15:16.801427       1 ipset.go:113] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I1214 06:15:16.801637       1 server.go:649] Version: v1.22.1
I1214 06:15:16.805136       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1214 06:15:16.806091       1 config.go:315] Starting service config controller
I1214 06:15:16.806124       1 shared_informer.go:240] Waiting for caches to sync for service config
I1214 06:15:16.806192       1 config.go:224] Starting endpoint slice config controller
I1214 06:15:16.806206       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1214 06:15:16.907190       1 shared_informer.go:247] Caches are synced for endpoint slice config
I1214 06:15:16.907368       1 shared_informer.go:247] Caches are synced for service config

The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.

6. Install the CNI Network

Official reference: network addon

A relatively mature approach today is flannel + Calico: flannel provides simple network connectivity while Calico provides network policies. Either of the plugins below can be used on its own; this guide uses flannel.

Install Calico

wget https://docs.projectcalico.org/manifests/calico.yaml  # download calico.yaml

Pull the images in advance on all master nodes:

[root@master01 tools]# cat calico.yaml |grep image
          image: docker.io/calico/cni:v3.20.0
          image: docker.io/calico/cni:v3.20.0
          image: docker.io/calico/pod2daemon-flexvol:v3.20.0
          image: docker.io/calico/node:v3.20.0
          image: docker.io/calico/kube-controllers:v3.20.0

Deploy the CNI network:

kubectl apply -f calico.yaml


Note: the CIDR in calico.yaml must match the podSubnet used when initializing the cluster.

Install flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f  kube-flannel.yml
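After applying the manifest you can watch the flannel DaemonSet roll out and the nodes turn Ready; an optional check (the DaemonSet name kube-flannel-ds matches the one edited just below):

kubectl -n kube-system rollout status ds/kube-flannel-ds
kubectl get nodes -w    # nodes should flip from NotReady to Ready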


Once the installation finishes, the coredns Pods move to Running and kubectl get node shows the nodes as Ready.

In a VirtualBox environment I ran into a case where CoreDNS resolution worked between the masters but not from the worker. The following change fixes it:

Method: bind the flannel DaemonSet to the eth1 interface (here eth1 is the NIC used for internal cluster traffic):
kubectl edit ds kube-flannel-ds -n kube-system
Find the section below and add - --iface=eth1:
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
        command:
        - /opt/bin/flanneld
        name: kube-flannel

7. Test the Cluster

Deploy a Deployment and a Service.

# deploy an nginx via a Deployment
kubectl create deployment nginx-deploy --image=nginx:1.18

# expose it with a NodePort Service
kubectl expose deployment nginx-deploy --name=nginx-svc --port=80 --target-port=80 --type=NodePort

# check the results
kubectl get deploy,pod,svc -o wide

# access the worker node IP on the assigned NodePort
[root@master-1 ~]# curl 192.168.5.21:30647
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


Seeing "Welcome to nginx" confirms the Pod is running normally and, indirectly, that the cluster is usable.

Cluster DNS test
kubeadm deploys CoreDNS by default:
[root@master-1 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS       AGE
coredns-6d8c4cb4d-dl2fv            1/1     Running   2 (44h ago)    46h
coredns-6d8c4cb4d-g4qhd            1/1     Running   2 (139m ago)   46h

Create a busybox Pod for testing:
[root@master-2 ~]# kubectl run busybox-ns-1 --rm -it --image=busybox:1.28.3 -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-svc
Address 1: 10.104.211.32 nginx-svc.default.svc.cluster.local


Check the IPVS state:

[root@master01 tools]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  172.18.241.64:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  192.168.7.2:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  10.96.0.0:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.7.2:6443             Masq    1      0          0         
  -> 192.168.7.3:6443             Masq    1      0          0         
  -> 192.168.7.10:6443            Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 172.16.241.65:53             Masq    1      0          0         
  -> 172.16.241.66:53             Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 172.16.241.65:9153           Masq    1      0          0         
  -> 172.16.241.66:9153           Masq    1      0          0         
TCP  10.103.28.216:443 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  10.107.116.120:8000 rr
  -> 172.18.135.1:8000            Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 172.16.241.65:53             Masq    1      0          0         
  -> 172.16.241.66:53             Masq    1      0          0       

8. Kubernetes Dashboard UI

Deploy:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
    


Get a token and log in with it:

# create a ServiceAccount
kubectl create serviceaccount dashboard-admin -n kube-system
# bind the cluster-admin role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# retrieve the token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret| awk '/dashboard-admin/{print $1}')


Access the dashboard in Firefox and log in with the token:

https://192.168.5.12:30000/#!/login


(Dashboard and login-screen screenshots omitted)

To start over and redeploy:

kubeadm reset
