
Deploying a Highly Available Cluster with Kubernetes v1.25 (kubeadm) and Containerd

1. Kubernetes HA Cluster Deployment Architecture

The Kubernetes cluster in this example is deployed on the following environment.

Table 1-1: High-availability Kubernetes cluster plan

| Role | Hostname | Specs | IP address | Software |
|---|---|---|---|---|
| K8s master 1 (Master + etcd) | k8s-master01.example.local | 2C4G | 172.31.3.101 | chrony-client, containerd, kubeadm, kubelet, kubectl |
| K8s master 2 (Master + etcd) | k8s-master02.example.local | 2C4G | 172.31.3.102 | chrony-client, containerd, kubeadm, kubelet, kubectl |
| K8s master 3 (Master + etcd) | k8s-master03.example.local | 2C4G | 172.31.3.103 | chrony-client, containerd, kubeadm, kubelet, kubectl |
| Master access entry 1 (HA + load balancing) | k8s-ha01.example.local | 2C2G | 172.31.3.104 | chrony-server, haproxy, keepalived |
| Master access entry 2 (HA + load balancing) | k8s-ha02.example.local | 2C2G | 172.31.3.105 | chrony-server, haproxy, keepalived |
| Container image registry 1 | k8s-harbor01.example.local | 2C2G | 172.31.3.106 | chrony-client, docker, docker-compose, harbor |
| Container image registry 2 | k8s-harbor02.example.local | 2C2G | 172.31.3.107 | chrony-client, docker, docker-compose, harbor |
| K8s worker node 1 | k8s-node01.example.local | 2C4G | 172.31.3.108 | chrony-client, containerd, kubeadm, kubelet |
| K8s worker node 2 | k8s-node02.example.local | 2C4G | 172.31.3.109 | chrony-client, containerd, kubeadm, kubelet |
| K8s worker node 3 | k8s-node03.example.local | 2C4G | 172.31.3.110 | chrony-client, containerd, kubeadm, kubelet |
| VIP (implemented on ha01 and ha02) | k8s.example.local | - | 172.31.3.188 | - |

Software versions and the Pod/Service CIDR plan:

| Item | Value |
|---|---|
| Supported OS versions | CentOS 7.9 / CentOS Stream 8, Rocky 8, Ubuntu 18.04/20.04 |
| Container runtime | containerd v1.6.8 |
| kubeadm version | 1.25.0 |
| Host network | 172.31.0.0/21 |
| Pod CIDR | 192.168.0.0/12 |
| Service CIDR | 10.96.0.0/12 |

2. Deployment Workflow for a Kubernetes v1.25.0 Cluster with kubeadm

Official documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Using kubeadm, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use kubeadm to set up a cluster that will pass the Kubernetes Conformance tests. kubeadm also supports other cluster lifecycle functions, such as bootstrap tokens and cluster upgrades.

  • Prepare the initial environment on every node
  • Set up a highly available access entry point for the cluster API
  • Install a container runtime on all master and node hosts (Kubernetes itself only uses Containerd here)
  • Install kubeadm, kubelet, and kubectl on all master and node hosts
  • Run kubeadm init on the first master node and verify its status
  • Install and configure a network plugin on the first master node
  • Run kubeadm join on the other master nodes to join them to the control plane
  • Run kubeadm join on all worker nodes to join them to the cluster
  • Create a pod, start a container, and test access and network connectivity (a sketch of the kubeadm commands follows this list)
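As a preview, the flow above boils down to a handful of kubeadm commands. The sketch below is only an outline under the plan from section 1 (control-plane endpoint kubeapi.raymonds.cc:6443, the Pod and Service CIDRs from the table); the token, CA cert hash, and certificate key placeholders are printed by kubeadm init, and the actual init command used later may add further options (image repository, Kubernetes version, and so on):

#On the first master:
kubeadm init --control-plane-endpoint "kubeapi.raymonds.cc:6443" \
    --pod-network-cidr 192.168.0.0/12 --service-cidr 10.96.0.0/12 \
    --upload-certs

#On the other masters (values copied from the init output):
kubeadm join kubeapi.raymonds.cc:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>

#On each worker node:
kubeadm join kubeapi.raymonds.cc:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>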

3. Deploying a Highly Available Kubernetes v1.25.0 Cluster with kubeadm

3.1 Initialize all hosts

3.1.1 Configure IP addresses

#CentOS
[root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
NAME=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.31.3.101
PREFIX=21
GATEWAY=172.31.0.2
DNS1=223.5.5.5
DNS2=180.76.76.76

#Ubuntu
root@k8s-master01:~# cat /etc/netplan/01-netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [172.31.3.101/21] 
      gateway4: 172.31.0.2
      nameservers:
        addresses: [223.5.5.5, 180.76.76.76]
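
On Ubuntu, apply the netplan change and confirm the address took effect (a quick check, assuming the NIC is eth0 as above):

root@k8s-master01:~# netplan apply
root@k8s-master01:~# ip addr show eth0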

3.1.2 Set hostnames

hostnamectl set-hostname k8s-master01.example.local
hostnamectl set-hostname k8s-master02.example.local
hostnamectl set-hostname k8s-master03.example.local
hostnamectl set-hostname k8s-ha01.example.local
hostnamectl set-hostname k8s-ha02.example.local
hostnamectl set-hostname k8s-harbor01.example.local
hostnamectl set-hostname k8s-harbor02.example.local
hostnamectl set-hostname k8s-node01.example.local
hostnamectl set-hostname k8s-node02.example.local
hostnamectl set-hostname k8s-node03.example.local

3.1.3 Configure package repositories

Configure the yum repos on all CentOS 7 nodes as follows:

rm -f /etc/yum.repos.d/*.repo

cat > /etc/yum.repos.d/base.repo <<EOF
[base]
name=base
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever/os/\$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-\$releasever

[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever/extras/\$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-\$releasever

[updates]
name=updates
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever/updates/\$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-\$releasever

[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever/centosplus/\$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-\$releasever

[epel]
name=epel
baseurl=https://mirrors.cloud.tencent.com/epel/\$releasever/\$basearch/
gpgcheck=1
gpgkey=https://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-\$releasever
EOF

Configure the yum repos on all Rocky 8 nodes as follows:

rm -f /etc/yum.repos.d/*.repo

cat > /etc/yum.repos.d/base.repo <<EOF
[BaseOS]
name=BaseOS
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/\$releasever/BaseOS/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[AppStream]
name=AppStream
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/\$releasever/AppStream/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[extras]
name=extras
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/\$releasever/extras/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
enabled=1

[plus]
name=plus
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/\$releasever/plus/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF

Configure the yum repos on all CentOS Stream 8 nodes as follows:

rm -f /etc/yum.repos.d/*.repo

cat > /etc/yum.repos.d/base.repo <<EOF
[BaseOS]
name=BaseOS
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever-stream/BaseOS/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[AppStream]
name=AppStream
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever-stream/AppStream/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever-stream/extras/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/\$releasever-stream/centosplus/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF

Configure the apt sources on all Ubuntu nodes as follows:

cat > /etc/apt/sources.list <<EOF
deb http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs) main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs) main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-security main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-security main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-updates main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-updates main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-proposed main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-proposed main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-backports main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ $(lsb_release -cs)-backports main restricted universe multiverse
EOF

apt update

3.1.4 Install essential tools

#Install on CentOS
yum -y install vim tree lrzsz wget jq psmisc net-tools telnet yum-utils device-mapper-persistent-data lvm2 git 
#Rocky additionally needs rsync on top of the tools above
yum -y install rsync

#Install on Ubuntu
apt -y install tree lrzsz jq

3.1.5 Configure SSH key authentication

Set up SSH key authentication so that files can be synchronized between nodes later:

[root@k8s-master01 ~]# cat ssh_key_push.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-11-19
#FileName:      ssh_key_push.sh
#URL:           raymond.blog.csdn.net
#Description:   ssh_key_push for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

export SSHPASS=123456
HOSTS="
172.31.3.101
172.31.3.102
172.31.3.103
172.31.3.104
172.31.3.105
172.31.3.106
172.31.3.107
172.31.3.108
172.31.3.109
172.31.3.110"

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

ssh_key_push(){
    rm -f ~/.ssh/id_rsa*
    ssh-keygen -f /root/.ssh/id_rsa -P '' &> /dev/null
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        rpm -q sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};yum -y install sshpass &> /dev/null; }
    else
        dpkg -S sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};apt -y install sshpass &> /dev/null; }
    fi
    for i in $HOSTS;do
        {
            sshpass -e ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub $i &> /dev/null
            [ $? -eq 0 ] && echo $i is finished || echo $i is false
        }&
    done
    wait
}

main(){
    os
    ssh_key_push
}

main

[root@k8s-master01 ~]# bash ssh_key_push.sh 
安装sshpass软件包
172.31.3.105 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.106 is finished
172.31.3.101 is finished
172.31.3.110 is finished
172.31.3.104 is finished
172.31.3.107 is finished
172.31.3.102 is finished
172.31.3.103 is finished
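
A quick sanity check that passwordless login now works on every host (BatchMode makes ssh fail instead of prompting if a key is missing):

[root@k8s-master01 ~]# for i in {101..110};do ssh -o BatchMode=yes 172.31.3.$i hostname; done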

3.1.6 Configure name resolution

cat >> /etc/hosts <<EOF
172.31.3.101 k8s-master01.example.local k8s-master01
172.31.3.102 k8s-master02.example.local k8s-master02
172.31.3.103 k8s-master03.example.local k8s-master03
172.31.3.104 k8s-ha01.example.local k8s-ha01
172.31.3.105 k8s-ha02.example.local k8s-ha02
172.31.3.106 k8s-harbor01.example.local k8s-harbor01
172.31.3.107 k8s-harbor02.example.local k8s-harbor02
172.31.3.108 k8s-node01.example.local k8s-node01
172.31.3.109 k8s-node02.example.local k8s-node02
172.31.3.110 k8s-node03.example.local k8s-node03
172.31.3.188 kubeapi.raymonds.cc kubeapi
172.31.3.188 harbor.raymonds.cc
EOF

for i in {102..110};do scp /etc/hosts 172.31.3.$i:/etc/ ;done
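
Optionally verify resolution on any node:

getent hosts k8s-master01 kubeapi.raymonds.cc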

3.1.7 Disable the firewall

#CentOS
systemctl disable --now firewalld

#CentOS 7
systemctl disable --now NetworkManager

#Ubuntu
systemctl disable --now ufw

3.1.8 Disable SELinux

#CentOS
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

#Ubuntu
#Ubuntu does not ship with SELinux, so nothing needs to be done

3.1.9 Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a

#On Ubuntu 20.04, also run the following
sed -ri 's/.*swap.*/#&/' /etc/fstab
SD_NAME=`lsblk|awk -F"[ └─]" '/SWAP/{printf $3}'`
systemctl mask dev-${SD_NAME}.swap
swapoff -a
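
Verify that swap is fully off (free should report 0 swap and swapon should print nothing):

free -h
swapon --show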

3.1.10 Time synchronization

Install the chrony server on ha01 and ha02:

root@k8s-ha01:~# cat install_chrony_server.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-11-22
#FileName:      install_chrony_server.sh
#URL:           raymond.blog.csdn.net
#Description:   install_chrony_server for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_chrony(){
    ${COLOR}"安装chrony软件包..."${END}
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
		yum -y install chrony &> /dev/null
        sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' -e 's@^#allow.*@allow 0.0.0.0/0@' -e 's@^#local.*@local stratum 10@' /etc/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null ||  { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
        ${COLOR}"chrony安装完成"${END}
    else
        apt -y install chrony &> /dev/null
        sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' /etc/chrony/chrony.conf
        echo "allow 0.0.0.0/0" >> /etc/chrony/chrony.conf
        echo "local stratum 10" >> /etc/chrony/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null ||  { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
        ${COLOR}"chrony安装完成"${END}
    fi
}

main(){
    os
    install_chrony
}

main

[root@k8s-ha01 ~]# bash install_chrony_server.sh 
chrony安装完成

[root@k8s-ha02 ~]# bash install_chrony_server.sh 
chrony安装完成

[root@k8s-ha01 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    39  -1507us[-8009us] +/-   37ms
^- 139.199.215.251               2   6    17    39    +10ms[  +10ms] +/-   48ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-    0ns

[root@k8s-ha02 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    40    +90us[-1017ms] +/-   32ms
^+ 139.199.215.251               2   6    33    37    +13ms[  +13ms] +/-   25ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-    0ns

Install the chrony client on the master, node, and harbor hosts:

root@k8s-master01:~# cat install_chrony_client.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-11-22
#FileName:      install_chrony_client.sh
#URL:           raymond.blog.csdn.net
#Description:   install_chrony_client for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
SERVER1=172.31.3.104
SERVER2=172.31.3.105

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_chrony(){
    ${COLOR}"安装chrony软件包..."${END}
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        yum -y install chrony &> /dev/null
        sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null ||  { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
        ${COLOR}"chrony安装完成"${END}
    else
        apt -y install chrony &> /dev/null
        sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null ||  { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
        systemctl restart chronyd
        ${COLOR}"chrony安装完成"${END}
    fi
}

main(){
    os
    install_chrony
}

main

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-harbor01 k8s-harbor02 k8s-node01 k8s-node02 k8s-node03;do scp -o StrictHostKeyChecking=no install_chrony_client.sh $i:/root/ ; done

[root@k8s-master01 ~]# bash install_chrony_client.sh 
[root@k8s-master02 ~]# bash install_chrony_client.sh 
[root@k8s-master03 ~]# bash install_chrony_client.sh 
[root@k8s-harbor01:~]# bash install_chrony_client.sh
[root@k8s-harbor02:~]# bash install_chrony_client.sh
[root@k8s-node01:~]# bash install_chrony_client.sh
[root@k8s-node02:~]# bash install_chrony_client.sh
[root@k8s-node03:~]# bash install_chrony_client.sh

[root@k8s-master01 ~]# chronyc sources -nv
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^+ k8s-ha01                      3   6    17     8    +84us[  +74us] +/-   55ms
^* k8s-ha02                      3   6    17     8    -82us[  -91us] +/-   45ms

3.1.11 Set the time zone

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone

#Ubuntu additionally needs the following
cat >> /etc/default/locale <<-EOF
LC_TIME=en_DK.UTF-8
EOF

3.1.12 Tune resource limits

ulimit -SHn 65535

cat >>/etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
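
The limits.conf entries apply to new login sessions; verify from a fresh shell:

ulimit -Sn    #expect 65536
ulimit -Hn    #expect 131072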

3.1.13 Upgrade the kernel

CentOS 7 ships with kernel 3.10, but this Kubernetes deployment requires kernel 4.18 or later, so the kernel must be upgraded before installing Kubernetes on CentOS 7; upgrade other distributions according to your own needs.

CentOS 7 therefore needs kernel 4.18+; this guide upgrades to 4.19.

Download the kernel packages on the master01 node:

[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy the packages from master01 to the other nodes:

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03;do scp -o StrictHostKeyChecking=no kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on the master and node hosts:

cd /root && yum localinstall -y kernel-ml*

Change the kernel boot order on the master and node hosts:

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Check that the default kernel is 4.19:

grubby --default-kernel

[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

Reboot all nodes, then confirm the running kernel is 4.19:

reboot

uname -a

[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

3.1.14 Install ipvs tools and tune the kernel

Install ipvsadm on the master and node hosts:

#CentOS
yum -y install ipvsadm ipset sysstat conntrack libseccomp

#Ubuntu
apt -y install ipvsadm ipset sysstat conntrack libseccomp-dev

Configure the ipvs modules on the master and node hosts. On kernel 4.19+ the module nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack #on kernels below 4.18, use nf_conntrack_ipv4 here

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

Note that modules-load.d files expect one module name per line and do not support trailing comments, so on kernels below 4.18 write nf_conntrack_ipv4 in place of the nf_conntrack line inside the file. Then run systemctl restart systemd-modules-load.service to load the modules.

Enable the kernel parameters required by a Kubernetes cluster; configure them on the master and node hosts:

cat > /etc/sysctl.d/k8s.conf <<EOF 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system
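
Spot-check a few values after sysctl --system (a sanity check; note the net.bridge.* keys only exist once the br_netfilter module is loaded, which is done in section 3.4.1):

sysctl net.ipv4.ip_forward net.core.somaxconn net.netfilter.nf_conntrack_max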

Commonly tuned Kubernetes kernel parameters, explained:

net.ipv4.ip_forward = 1 #0 disables IP forwarding; 1 enables it.
net.bridge.bridge-nf-call-iptables = 1 #Bridged (L2) traffic is also filtered by the iptables FORWARD rules; without this, L3 iptables rules sometimes fail to filter L2 frames.
net.bridge.bridge-nf-call-ip6tables = 1 #Whether bridged IPv6 packets are filtered by ip6tables.
fs.may_detach_mounts = 1 #Set to 1 when containers run on the host.

vm.overcommit_memory=1  
#0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and the error is returned to the process.
#1: the kernel allows allocation of all physical memory regardless of the current memory state.
#2: the kernel allows allocations exceeding the sum of physical memory and swap.

vm.panic_on_oom=0 
#OOM is short for "out of memory": memory is exhausted and cannot be allocated. This parameter controls how the kernel reacts.
#0: on OOM, start the OOM killer.
#1: on OOM, the kernel may panic (system reboot) or may start the OOM killer.
#2: on OOM, force a kernel panic (crash and reboot).

fs.inotify.max_user_watches=89100 #Maximum number of inotify watches a single user may register at once (watches usually target directories, so this bounds how many directories one user can monitor).

fs.file-max=52706963 #Maximum number of file handles across all processes.
fs.nr_open=52706963 #Maximum number of file handles a single process may allocate.
net.netfilter.nf_conntrack_max=2310720 #Size of the connection-tracking table; the suggested value is CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (x / 32), with nf_conntrack_max = 4 * nf_conntrack_buckets; the default is 262144.

net.ipv4.tcp_keepalive_time = 600  #Idle time before keepalive probes start, i.e. the normal heartbeat period; default 7200s (2 hours).
net.ipv4.tcp_keepalive_probes = 3 #Number of unacknowledged keepalive probes sent after tcp_keepalive_time before the connection is dropped; default 9.
net.ipv4.tcp_keepalive_intvl =15 #Interval between keepalive probes; default 75s.
net.ipv4.tcp_max_tw_buckets = 36000 #Proxies such as Nginx should watch this value: it protects the system when ports get exhausted, lowering the chance of failure and buying time to react.
net.ipv4.tcp_tw_reuse = 1 #Affects clients only; when enabled, a client can reuse TIME_WAIT sockets after 1s.
net.ipv4.tcp_max_orphans = 327680 #Number of sockets not attached to any process that the system can handle; worth watching when many connections must be established quickly.

net.ipv4.tcp_orphan_retries = 3
#Relevant when many connections pile up in FIN-WAIT-1.
#A FIN may be lost after being sent; retransmission of the FIN backs off exponentially (2s, 4s, ...), and tcp_orphan_retries caps the number of retries.

net.ipv4.tcp_syncookies = 1 #Switch for SYN cookies, which mitigate some SYN-flood attacks; tcp_synack_retries and tcp_syn_retries define the SYN retry counts.
net.ipv4.tcp_max_syn_backlog = 16384 #Maximum queue length for incoming SYN requests; default 1024, and raising it clearly helps heavily loaded servers.
net.ipv4.ip_conntrack_max = 65536 #Legacy cap on the number of tracked TCP connections; default 65536.
net.ipv4.tcp_max_syn_backlog = 16384 #Maximum number of clients with half-open (SYN-received) connections; duplicates the entry above.
net.ipv4.tcp_timestamps = 0 #With iptables NAT, enabled TCP timestamps can cause odd failures (e.g. ping to a domain works while curl does not), so they are disabled here.
net.core.somaxconn = 16384	#Kernel cap on a socket's listen() backlog: requests not yet accepted or established wait in this queue, and once the queue fills because the server is slow, new requests are rejected.

After the kernel configuration is done on all nodes, reboot the servers and make sure the modules are still loaded after the reboot:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_sh               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

3.2 Deploy a Highly Available Access Entry Point for the Kubernetes Cluster API

(Note: if this is not an HA cluster, haproxy and keepalived are unnecessary.)

On public clouds, use the provider's load balancer instead of haproxy and keepalived -- for example Alibaba Cloud SLB or Tencent Cloud ELB -- because most public clouds do not support keepalived. On Alibaba Cloud, additionally, the kubectl client must not run on a master node: SLB has a loopback problem, meaning a server behind the SLB cannot reach itself back through the SLB. Tencent Cloud has fixed this issue, so it is the recommended choice here.

Perform the following operations on 172.31.3.104 and 172.31.3.105.

3.2.1 Install HAProxy

Use HAProxy to load balance the kube-apiserver:

[root@k8s-ha01 ~]# cat install_haproxy.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-12-29
#FileName:      install_haproxy.sh
#URL:           raymond.blog.csdn.net
#Description:   install_haproxy for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`

#lua下载地址:http://www.lua.org/ftp/lua-5.4.4.tar.gz
LUA_FILE=lua-5.4.4.tar.gz

#haproxy下载地址:https://www.haproxy.org/download/2.6/src/haproxy-2.6.4.tar.gz
HAPROXY_FILE=haproxy-2.6.4.tar.gz
HAPROXY_INSTALL_DIR=/apps/haproxy

STATS_AUTH_USER=admin
STATS_AUTH_PASSWORD=123456

VIP=172.31.3.188
MASTER01=172.31.3.101
MASTER02=172.31.3.102
MASTER03=172.31.3.103
HARBOR01=172.31.3.106
HARBOR02=172.31.3.107

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

check_file (){
    cd ${SRC_DIR}
    ${COLOR}'检查Haproxy相关源码包'${END}
    if [ ! -e ${LUA_FILE} ];then
        ${COLOR}"缺少${LUA_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    elif [ ! -e ${HAPROXY_FILE} ];then
        ${COLOR}"缺少${HAPROXY_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    else
        ${COLOR}"相关文件已准备好"${END}
    fi
}

install_haproxy(){
    [ -d ${HAPROXY_INSTALL_DIR} ] && { ${COLOR}"Haproxy已存在,安装失败"${END};exit; }
    ${COLOR}"开始安装Haproxy"${END}
    ${COLOR}"开始安装Haproxy依赖包"${END}
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        yum -y install gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel libtermcap-devel ncurses-devel libevent-devel readline-devel &> /dev/null
    else
        apt update &> /dev/null;apt -y install gcc make openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev libreadline-dev libsystemd-dev &> /dev/null
    fi
    tar xf ${LUA_FILE}
    LUA_DIR=`echo ${LUA_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
    cd ${LUA_DIR}
    make all test
    cd ${SRC_DIR}
    tar xf ${HAPROXY_FILE}
    HAPROXY_DIR=`echo ${HAPROXY_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
    cd ${HAPROXY_DIR}
    make -j ${CPUS} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC=${SRC_DIR}/${LUA_DIR}/src/ LUA_LIB=${SRC_DIR}/${LUA_DIR}/src/ PREFIX=${HAPROXY_INSTALL_DIR}
    make install PREFIX=${HAPROXY_INSTALL_DIR}
    [ $? -eq 0 ] && $COLOR"Haproxy编译安装成功"$END ||  { $COLOR"Haproxy编译安装失败,退出!"$END;exit; }
    cat > /lib/systemd/system/haproxy.service <<-EOF
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 \$MAINPID

[Install]
WantedBy=multi-user.target
EOF
    [ -L /usr/sbin/haproxy ] || ln -s ../..${HAPROXY_INSTALL_DIR}/sbin/haproxy /usr/sbin/ &> /dev/null
    [ -d /etc/haproxy ] || mkdir /etc/haproxy &> /dev/null  
    [ -d /var/lib/haproxy/ ] || mkdir -p /var/lib/haproxy/ &> /dev/null
    cat > /etc/haproxy/haproxy.cfg <<-EOF
global
maxconn 100000
chroot ${HAPROXY_INSTALL_DIR}
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 info

defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth ${STATS_AUTH_USER}:${STATS_AUTH_PASSWORD}

listen kubernetes-6443
    bind ${VIP}:6443
    mode tcp
    log global
    server ${MASTER01} ${MASTER01}:6443 check inter 3s fall 2 rise 5
    server ${MASTER02} ${MASTER02}:6443 check inter 3s fall 2 rise 5
    server ${MASTER03} ${MASTER03}:6443 check inter 3s fall 2 rise 5

listen harbor-80
    bind ${VIP}:80
    mode http
    log global
    balance source
    server ${HARBOR01} ${HARBOR01}:80 check inter 3s fall 2 rise 5
    server ${HARBOR02} ${HARBOR02}:80 check inter 3s fall 2 rise 5
EOF
    cat >> /etc/sysctl.conf <<-EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
    sysctl -p &> /dev/null
    echo "PATH=${HAPROXY_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/haproxy.sh
    systemctl daemon-reload
    systemctl enable --now haproxy &> /dev/null
    systemctl is-active haproxy &> /dev/null && ${COLOR}"Haproxy 服务启动成功!"${END} ||  { ${COLOR}"Haproxy 启动失败,退出!"${END} ; exit; }
    ${COLOR}"Haproxy安装完成"${END}
}

main(){
    os
    check_file
    install_haproxy
}

main

[root@k8s-ha01 ~]# bash install_haproxy.sh

[root@k8s-ha02 ~]# bash install_haproxy.sh
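
Confirm HAProxy came up and is listening on the expected ports (the script enables net.ipv4.ip_nonlocal_bind, so the node without the VIP can bind it too):

systemctl is-active haproxy
ss -ntl | grep -E ':(80|6443|9999)\b'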

3.2.2 Install Keepalived

Install keepalived to provide high availability for HAProxy.

Configure the keepalived health-check script on all ha nodes:

[root@k8s-ha01 ~]# cat check_haproxy.sh 
#!/bin/bash
err=0
for k in $(seq 1 3);do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Install keepalived on the ha01 and ha02 nodes. The two configurations differ, so keep them apart, and check each node's NIC name (the interface parameter) in /etc/keepalived/keepalived.conf.

Install the keepalived master on the ha01 node:

[root@k8s-ha01 ~]# cat install_keepalived_master.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-12-29
#FileName:      install_keepalived_master.sh
#URL:           raymond.blog.csdn.net
#Description:   install_keepalived for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.7.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
STATE=MASTER
PRIORITY=100
VIP=172.31.3.188


os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
    OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' /etc/os-release`
}

check_file (){
    cd  ${SRC_DIR}
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
    fi
    if [ ! -e ${KEEPALIVED_FILE} ];then
        ${COLOR}"缺少${KEEPALIVED_FILE}文件,如果是离线包,请放到${SRC_DIR}目录下"${END}
        ${COLOR}'开始下载Keepalived源码包'${END}
        wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"Keepalived源码包下载失败"${END}; exit; }
    elif [ ! -e check_haproxy.sh ];then
        ${COLOR}"缺少check_haproxy.sh文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    else
        ${COLOR}"相关文件已准备好"${END}
    fi
}

install_keepalived(){
    [ -d ${KEEPALIVED_INSTALL_DIR} ] && { ${COLOR}"Keepalived已存在,安装失败"${END};exit; }
    ${COLOR}"开始安装Keepalived"${END}
    ${COLOR}"开始安装Keepalived依赖包"${END}
    if [ ${OS_ID} == "Rocky" -a ${OS_RELEASE_VERSION} == 8 ];then
        URL=mirrors.sjtug.sjtu.edu.cn
		if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
            cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/rocky/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF
        fi
    fi
    if [ ${OS_ID} == "CentOS" -a ${OS_RELEASE_VERSION} == 8 ];then
        URL=mirrors.cloud.tencent.com
        if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
            cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/centos/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
        fi
    fi
    if [[ ${OS_RELEASE_VERSION} == 8 ]] &> /dev/null;then
        yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> /dev/null
    elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> /dev/null;then
        yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> /dev/null
    elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> /dev/null;then
        apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
    else
        apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> /dev/null
    fi
    tar xf ${KEEPALIVED_FILE}
    KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
    cd ${KEEPALIVED_DIR}
    ./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
    make -j $CPUS && make install
    [ $? -eq 0 ] && ${COLOR}"Keepalived编译安装成功"${END} ||  { ${COLOR}"Keepalived编译安装失败,退出!"${END};exit; }
    [ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
    cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${NET_NAME}
    virtual_router_id 51
    priority ${PRIORITY}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        ${VIP} dev ${NET_NAME} label ${NET_NAME}:1
    }
    track_script {
       check_haproxy
    }
}
EOF
    cp ./keepalived/keepalived.service /lib/systemd/system/
    cd  ${SRC_DIR}
    mv check_haproxy.sh /etc/keepalived/check_haproxy.sh
    chmod +x /etc/keepalived/check_haproxy.sh
    echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
    systemctl daemon-reload
    systemctl enable --now keepalived &> /dev/null 
    systemctl is-active keepalived &> /dev/null && ${COLOR}"Keepalived 服务启动成功!"${END} ||  { ${COLOR}"Keepalived 启动失败,退出!"${END} ; exit; }
    ${COLOR}"Keepalived安装完成"${END}
}

main(){
    os
    check_file
    install_keepalived
}

main

[root@k8s-ha01 ~]# bash install_keepalived_master.sh

[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:04:fa:9d brd ff:ff:ff:ff:ff:ff
    inet 172.31.3.104/21 brd 172.31.7.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.31.3.188/32 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:fa9d/64 scope link 
       valid_lft forever preferred_lft forever

Install the keepalived backup on the ha02 node:

[root@k8s-ha02 ~]# cat install_keepalived_backup.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-12-29
#FileName:      install_keepalived_backup.sh
#URL:           raymond.blog.csdn.net
#Description:   install_keepalived for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.7.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
STATE=BACKUP
PRIORITY=90
VIP=172.31.3.188


os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
    OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' /etc/os-release`
}

check_file (){
    cd  ${SRC_DIR}
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
    fi
    if [ ! -e ${KEEPALIVED_FILE} ];then
        ${COLOR}"缺少${KEEPALIVED_FILE}文件,如果是离线包,请放到${SRC_DIR}目录下"${END}
        ${COLOR}'开始下载Keepalived源码包'${END}
        wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"Keepalived源码包下载失败"${END}; exit; }
    elif [ ! -e check_haproxy.sh ];then
        ${COLOR}"缺少check_haproxy.sh文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    else
        ${COLOR}"相关文件已准备好"${END}
    fi
}

install_keepalived(){
    [ -d ${KEEPALIVED_INSTALL_DIR} ] && { ${COLOR}"Keepalived已存在,安装失败"${END};exit; }
    ${COLOR}"开始安装Keepalived"${END}
    ${COLOR}"开始安装Keepalived依赖包"${END}
    if [ ${OS_ID} == "Rocky" -a ${OS_RELEASE_VERSION} == 8 ];then
        URL=mirrors.sjtug.sjtu.edu.cn
		if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
            cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/rocky/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF
        fi
    fi
    if [ ${OS_ID} == "CentOS" -a ${OS_RELEASE_VERSION} == 8 ];then
        URL=mirrors.cloud.tencent.com
        if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
            cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/centos/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
        fi
    fi
    if [[ ${OS_RELEASE_VERSION} == 8 ]] &> /dev/null;then
        yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> /dev/null
    elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> /dev/null;then
        yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> /dev/null
    elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> /dev/null;then
        apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
    else
        apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> /dev/null
    fi
    tar xf ${KEEPALIVED_FILE}
    KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
    cd ${KEEPALIVED_DIR}
    ./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
    make -j $CPUS && make install
    [ $? -eq 0 ] && ${COLOR}"Keepalived编译安装成功"${END} ||  { ${COLOR}"Keepalived编译安装失败,退出!"${END};exit; }
    [ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
    cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${NET_NAME}
    virtual_router_id 51
    priority ${PRIORITY}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        ${VIP} dev ${NET_NAME} label ${NET_NAME}:1
    }
    track_script {
       check_haproxy
    }
}
EOF
    cp ./keepalived/keepalived.service /lib/systemd/system/
    cd  ${SRC_DIR}
    mv check_haproxy.sh /etc/keepalived/check_haproxy.sh
    chmod +x /etc/keepalived/check_haproxy.sh
    echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
    systemctl daemon-reload
    systemctl enable --now keepalived &> /dev/null 
    systemctl is-active keepalived &> /dev/null && ${COLOR}"Keepalived 服务启动成功!"${END} ||  { ${COLOR}"Keepalived 启动失败,退出!"${END} ; exit; }
    ${COLOR}"Keepalived安装完成"${END}
}

main(){
    os
    check_file
    install_keepalived
}

main 

[root@k8s-ha02 ~]# bash install_keepalived_backup.sh
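
To verify failover (optional), stop haproxy on ha01 and watch the VIP move to ha02; note the health-check script also stops keepalived on ha01 after repeated failures, so restart both services afterwards:

[root@k8s-ha01 ~]# systemctl stop haproxy
[root@k8s-ha02 ~]# ip addr show dev eth0 | grep 172.31.3.188    #the VIP should appear here within seconds
[root@k8s-ha01 ~]# systemctl start haproxy keepalived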

3.2.3 Test access

172.31.3.188 kubeapi.raymonds.cc

Verify in a browser; username and password: admin:123456

http://kubeapi.raymonds.cc:9999/haproxy-status
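
The same check from the command line:

curl -u admin:123456 http://kubeapi.raymonds.cc:9999/haproxy-status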


3.3 Install Harbor

3.3.1 Install Harbor

Install Harbor on harbor01 and harbor02:

[root@k8s-harbor01 ~]# cat install_docker_compose_harbor.sh 
#!/bin/bash
#
#**************************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-12-16
#FileName:      install_docker_compose_harbor.sh
#URL:           raymond.blog.csdn.net
#Description:   install_docker_compose_harbor for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#**************************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'

DOCKER_VERSION=20.10.17
URL='mirrors.cloud.tencent.com'

#docker-compose下载地址:https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
DOCKER_COMPOSE_FILE=docker-compose-linux-x86_64

#harbor下载地址:https://github.com/goharbor/harbor/releases/download/v2.6.0/harbor-offline-installer-v2.6.0.tgz
HARBOR_FILE=harbor-offline-installer-v
HARBOR_VERSION=2.6.0
TAR=.tgz
HARBOR_INSTALL_DIR=/apps
HARBOR_DOMAIN=harbor.raymonds.cc
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
IP=`ip addr show ${NET_NAME}| awk -F" +|/" '/global/{print $3}'`
HARBOR_ADMIN_PASSWORD=123456

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
    OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' /etc/os-release`
}

check_file (){
    cd ${SRC_DIR}
    if [ ! -e ${DOCKER_COMPOSE_FILE} ];then
        ${COLOR}"缺少${DOCKER_COMPOSE_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    elif [ ! -e ${HARBOR_FILE}${HARBOR_VERSION}${TAR} ];then
        ${COLOR}"缺少${HARBOR_FILE}${HARBOR_VERSION}${TAR}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    else
        ${COLOR}"相关文件已准备好"${END}
    fi
}

ubuntu_install_docker(){
    ${COLOR}"开始安装DOCKER依赖包"${END}
    apt update &> /dev/null
    apt -y install apt-transport-https ca-certificates curl software-properties-common &> /dev/null
    curl -fsSL https://${URL}/docker-ce/linux/ubuntu/gpg | sudo apt-key add - &> /dev/null
    add-apt-repository  "deb [arch=amd64] https://${URL}/docker-ce/linux/ubuntu  $(lsb_release -cs) stable" &> /dev/null 
    apt update &> /dev/null

    ${COLOR}"Docker有以下版本"${END}
    apt-cache madison docker-ce
    ${COLOR}"10秒后即将安装:Docker-"${DOCKER_VERSION}"版本......"${END}
    ${COLOR}"如果想安装其它Docker版本,请按Ctrl+c键退出,修改版本再执行"${END}
    sleep 10

    ${COLOR}"开始安装DOCKER"${END}
    apt -y install docker-ce=5:${DOCKER_VERSION}~3-0~ubuntu-$(lsb_release -cs) docker-ce-cli=5:${DOCKER_VERSION}~3-0~ubuntu-$(lsb_release -cs) &> /dev/null || { ${COLOR}"apt源失败,请检查apt配置"${END};exit; }
}

centos_install_docker(){
	${COLOR}"开始安装DOCKER依赖包"${END}
    yum -y install yum-utils &> /dev/null
    yum-config-manager --add-repo https://${URL}/docker-ce/linux/centos/docker-ce.repo &> /dev/null
    yum clean all &> /dev/null
	yum makecache &> /dev/null

    ${COLOR}"Docker有以下版本"${END}
    yum list docker-ce.x86_64 --showduplicates
    ${COLOR}"10秒后即将安装:Docker-"${DOCKER_VERSION}"版本......"${END}
    ${COLOR}"如果想安装其它Docker版本,请按Ctrl+c键退出,修改版本再执行"${END}
    sleep 10

    ${COLOR}"开始安装DOCKER"${END}
    yum -y install docker-ce-${DOCKER_VERSION} docker-ce-cli-${DOCKER_VERSION} &> /dev/null || { ${COLOR}"yum源失败,请检查yum配置"${END};exit; }
}

mirror_accelerator(){
    mkdir -p /etc/docker
    cat > /etc/docker/daemon.json <<-EOF
{
    "registry-mirrors": [
        "https://registry.docker-cn.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn"
    ],
    "insecure-registries": ["${HARBOR_DOMAIN}"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 5,
    "log-opts": {
        "max-size": "300m",
        "max-file": "2"  
    },
    "live-restore": true
}
EOF
    systemctl daemon-reload
    systemctl enable --now docker
    systemctl is-active docker &> /dev/null && ${COLOR}"Docker 服务启动成功"${END} || { ${COLOR}"Docker 启动失败"${END};exit; }
    docker version &&  ${COLOR}"Docker 安装成功"${END} || ${COLOR}"Docker 安装失败"${END}
}

set_alias(){
    echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
    echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
}

install_docker_compose(){
    ${COLOR}"开始安装 Docker compose....."${END}
    sleep 1
    mv ${SRC_DIR}/${DOCKER_COMPOSE_FILE} /usr/bin/docker-compose
    chmod +x /usr/bin/docker-compose
    docker-compose --version &&  ${COLOR}"Docker Compose 安装完成"${END} || ${COLOR}"Docker compose 安装失败"${END}
}

install_harbor(){
    ${COLOR}"开始安装 Harbor....."${END}
    sleep 1
    [ -d ${HARBOR_INSTALL_DIR} ] || mkdir ${HARBOR_INSTALL_DIR}
    tar xf ${SRC_DIR}/${HARBOR_FILE}${HARBOR_VERSION}${TAR} -C ${HARBOR_INSTALL_DIR}/
    mv ${HARBOR_INSTALL_DIR}/harbor/harbor.yml.tmpl ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
    sed -ri.bak -e 's/^(hostname:) .*/\1 '${IP}'/' -e 's/^(harbor_admin_password:) .*/\1 '${HARBOR_ADMIN_PASSWORD}'/' -e 's/^(https:)/#\1/' -e 's/  (port: 443)/#  \1/' -e 's@  (certificate: .*)@#  \1@' -e 's@  (private_key: .*)@#  \1@' ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        if [ ${OS_RELEASE_VERSION} == "8" ];then
            yum -y install python3 &> /dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
        else
            yum -y install python &> /dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
        fi
    else
        apt -y install python3 &> /dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
    fi
    ${HARBOR_INSTALL_DIR}/harbor/install.sh && ${COLOR}"Harbor 安装完成"${END} ||  ${COLOR}"Harbor 安装失败"${END}
    cat > /lib/systemd/system/harbor.service <<-EOF
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

    systemctl daemon-reload 
    systemctl enable harbor &>/dev/null && ${COLOR}"Harbor已配置为开机自动启动"${END}
}

set_swap_limit(){
    if [ ${OS_ID} == "Ubuntu" ];then
        ${COLOR}'设置Docker的"WARNING: No swap limit support"警告'${END}
        sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub
        update-grub &> /dev/null
        ${COLOR}"10秒后,机器会自动重启"${END}
        sleep 10
        reboot
    fi
}

main(){
    os
    check_file
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        rpm -q docker-ce &> /dev/null && ${COLOR}"Docker已安装"${END} || centos_install_docker
    else
        dpkg -s docker-ce &>/dev/null && ${COLOR}"Docker已安装"${END} || ubuntu_install_docker
    fi
    [ -f /etc/docker/daemon.json ] &>/dev/null && ${COLOR}"Docker镜像加速器已设置"${END} || mirror_accelerator
    grep -Eqoi "(.*rmi=|.*rmc=)" ~/.bashrc && ${COLOR}"Docker别名已设置"${END} || set_alias
    docker-compose --version &> /dev/null && ${COLOR}"Docker Compose已安装"${END} || install_docker_compose
    systemctl is-active harbor &> /dev/null && ${COLOR}"Harbor已安装"${END} || install_harbor
    grep -q "swapaccount=1" /etc/default/grub && ${COLOR}'"WARNING: No swap limit support"警告,已设置'${END} || set_swap_limit
}

main

[root@k8s-harbor01 ~]# bash install_docker_compose_harbor.sh

[root@k8s-harbor02 ~]# bash install_docker_compose_harbor.sh
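
A couple of quick checks after installation: Harbor v2 exposes a health API, and docker login can be tested against the VIP-fronted domain (harbor.raymonds.cc maps to the VIP in /etc/hosts and is listed in insecure-registries by the install script; adjust the IP per node):

curl -s http://172.31.3.106/api/v2.0/health
docker login harbor.raymonds.cc -u admin -p 123456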

3.3.2 创建harbor仓库

在harbor01新建项目google_containers

http://172.31.3.106/

用户名:admin 密码:123456

在这里插入图片描述
在这里插入图片描述
在harbor02新建项目google_containers

http://172.31.3.107/
在这里插入图片描述
在这里插入图片描述
在harbor02上新建目标
在这里插入图片描述

在harbor02上新建规则
在这里插入图片描述

在harbor01上新建目标
在这里插入图片描述

在harbor01上新建规则
在这里插入图片描述

3.4 安装 Containerd

3.4.1 内核参数调整

如果是安装 Docker 会自动配置以下的内核参数,而无需手动实现

但是如果安装Containerd,还需手动配置

允许 iptables 检查桥接流量,若要显式加载此模块,需运行 modprobe br_netfilter

为了让 Linux 节点的 iptables 能够正确查看桥接流量,还需要确认net.bridge.bridge-nf-call-iptables 设置为 1。

配置Containerd所需的模块:

[root@k8s-master01 ~]# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

加载模块:

[root@k8s-master01 ~]# modprobe -- overlay
[root@k8s-master01 ~]# modprobe -- br_netfilter

配置Containerd所需的内核:

[root@k8s-master01 ~]# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

加载内核:

[root@k8s-master01 ~]# sysctl --system
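
可通过下面的命令确认模块已加载、参数已生效(一个简单的验证示例):

[root@k8s-master01 ~]# lsmod | grep -E 'overlay|br_netfilter'    #确认两个模块已加载
[root@k8s-master01 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables    #三项输出均应为1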

3.4.2 二进制安装 Containerd

官方下载链接:

https://github.com/containerd/containerd

Containerd有三种二进制安装包:

  • containerd-xxx :不包含runC,需要单独安装runC

    [root@k8s-master01 ~]# tar tf containerd-1.6.8-linux-amd64.tar.gz 
    bin/
    bin/containerd-stress
    bin/containerd-shim-runc-v2
    bin/containerd-shim
    bin/ctr
    bin/containerd
    bin/containerd-shim-runc-v1
    
  • cri-containerd-xxx:包含runC、ctr、crictl、systemd配置文件等相关文件,但不包含cni插件;k8s不需要containerd自带的cni插件,所以选择这个二进制包安装

    [root@k8s-master01 ~]# tar tf cri-containerd-1.6.8-linux-amd64.tar.gz 
    etc/crictl.yaml
    etc/systemd/
    etc/systemd/system/
    etc/systemd/system/containerd.service
    usr/
    usr/local/
    usr/local/sbin/
    usr/local/sbin/runc
    usr/local/bin/
    usr/local/bin/containerd-stress
    usr/local/bin/containerd-shim-runc-v2
    usr/local/bin/containerd-shim
    usr/local/bin/ctr
    usr/local/bin/containerd
    usr/local/bin/critest
    usr/local/bin/ctd-decoder
    usr/local/bin/crictl
    usr/local/bin/containerd-shim-runc-v1
    opt/containerd/
    opt/containerd/cluster/
    opt/containerd/cluster/gce/
    opt/containerd/cluster/gce/env
    opt/containerd/cluster/gce/cloud-init/
    opt/containerd/cluster/gce/cloud-init/master.yaml
    opt/containerd/cluster/gce/cloud-init/node.yaml
    opt/containerd/cluster/gce/cni.template
    opt/containerd/cluster/gce/configure.sh
    opt/containerd/cluster/version
    
  • cri-containerd-cni-xxx:包含runc、ctr、crictl、cni插件、systemd 配置文件等相关文件

    [root@k8s-master01 ~]# tar tf cri-containerd-cni-1.6.8-linux-amd64.tar.gz 
    etc/
    etc/crictl.yaml
    etc/systemd/
    etc/systemd/system/
    etc/systemd/system/containerd.service
    etc/cni/
    etc/cni/net.d/
    etc/cni/net.d/10-containerd-net.conflist
    usr/
    usr/local/
    usr/local/sbin/
    usr/local/sbin/runc
    usr/local/bin/
    usr/local/bin/containerd-stress
    usr/local/bin/containerd-shim-runc-v2
    usr/local/bin/containerd-shim
    usr/local/bin/ctr
    usr/local/bin/containerd
    usr/local/bin/critest
    usr/local/bin/ctd-decoder
    usr/local/bin/crictl
    usr/local/bin/containerd-shim-runc-v1
    opt/
    opt/containerd/
    opt/containerd/cluster/
    opt/containerd/cluster/gce/
    opt/containerd/cluster/gce/env
    opt/containerd/cluster/gce/cloud-init/
    opt/containerd/cluster/gce/cloud-init/master.yaml
    opt/containerd/cluster/gce/cloud-init/node.yaml
    opt/containerd/cluster/gce/cni.template
    opt/containerd/cluster/gce/configure.sh
    opt/containerd/cluster/version
    opt/cni/
    opt/cni/bin/
    opt/cni/bin/bandwidth
    opt/cni/bin/loopback
    opt/cni/bin/ipvlan
    opt/cni/bin/host-local
    opt/cni/bin/static
    opt/cni/bin/vlan
    opt/cni/bin/tuning
    opt/cni/bin/host-device
    opt/cni/bin/firewall
    opt/cni/bin/portmap
    opt/cni/bin/sbr
    opt/cni/bin/macvlan
    opt/cni/bin/bridge
    opt/cni/bin/dhcp
    opt/cni/bin/ptp
    opt/cni/bin/vrf
    

安装Containerd:

[root@k8s-master01 ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-1.6.8-linux-amd64.tar.gz

#cri-containerd-1.6.8-linux-amd64.tar.gz 压缩包已按照官方二进制部署推荐的目录结构布局好,里面包含了systemd配置文件,以及containerd和ctr、crictl等部署文件,将其解压缩到系统的根目录 / 中:
[root@k8s-master01 ~]# tar xf cri-containerd-1.6.8-linux-amd64.tar.gz -C /

配置Containerd的配置文件:

[root@k8s-master01 ~]# mkdir -p /etc/containerd
[root@k8s-master01 ~]# containerd config default | tee /etc/containerd/config.toml

将Containerd的Cgroup驱动改为Systemd,并将containerd配置中的sandbox_image镜像源设置为私有仓库或阿里云google_containers镜像源:

[root@k8s-master01 ~]# vim /etc/containerd/config.toml
...
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
...
            SystemdCgroup = true #SystemdCgroup改成true
...
    sandbox_image = "harbor.raymonds.cc/google_containers/pause:3.8" #sandbox_image的镜像改为私有仓库“harbor.raymonds.cc/google_containers/pause:3.8”,如果没有私有仓库改为阿里镜像源“registry.aliyuncs.com/google_containers/pause:3.8”

#使用下面命令修改
sed -ri -e 's/(.*SystemdCgroup = ).*/\1true/' -e 's@(.*sandbox_image = ).*@\1\"harbor.raymonds.cc/google_containers/pause:3.8\"@' /etc/containerd/config.toml

#如果没有harbor,请执行下面命令
sed -ri -e 's/(.*SystemdCgroup = ).*/\1true/' -e 's@(.*sandbox_image = ).*@\1\"registry.aliyuncs.com/google_containers/pause:3.8\"@' /etc/containerd/config.toml
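
修改完成后,可用grep确认两处配置都已按预期生效(验证示例):

[root@k8s-master01 ~]# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml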

配置镜像加速和配置私有镜像仓库:

参考文档:https://github.com/containerd/cri/blob/master/docs/registry.md

[root@k8s-master01 ~]# vim /etc/containerd/config.toml
...
    [plugins."io.containerd.grpc.v1.cri".registry]
...
#下面几行是配置私有仓库授权,如果没有私有仓库下面的不用设置
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.raymonds.cc".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.raymonds.cc".auth]
          username = "admin"
          password = "123456"
...
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
#下面两行是配置镜像加速
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry.docker-cn.com" ,"http://hub-mirror.c.163.com" ,"https://docker.mirrors.ustc.edu.cn"]
#下面两行是配置私有仓库,如果没有私有仓库下面的不用设置
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.raymonds.cc"]
          endpoint = ["http://harbor.raymonds.cc"]
...

#使用下面命令修改
sed -i -e '/.*registry.mirrors.*/a\        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n          endpoint = ["https://registry.docker-cn.com" ,"http://hub-mirror.c.163.com" ,"https://docker.mirrors.ustc.edu.cn"]\n        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.raymonds.cc"]\n          endpoint = ["http://harbor.raymonds.cc"]' -e '/.*registry.configs.*/a\        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.raymonds.cc".tls]\n          insecure_skip_verify = true\n        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.raymonds.cc".auth]\n          username = "admin"\n          password = "123456"' /etc/containerd/config.toml

#如果没有harbor不需要设置私有仓库相关配置,只需要设置镜像加速,请使用下面命令执行
sed -i '/.*registry.mirrors.*/a\        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n          endpoint = ["https://registry.docker-cn.com" ,"http://hub-mirror.c.163.com" ,"https://docker.mirrors.ustc.edu.cn"]' /etc/containerd/config.toml

配置crictl客户端连接的运行时位置:

[root@k8s-master01 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

启动Containerd,并配置开机自启动:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now containerd

查看信息:

[root@k8s-master01 ~]# ctr version
Client:
  Version:  v1.6.8
  Revision: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
  Go version: go1.17.13

Server:
  Version:  v1.6.8
  Revision: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
  UUID: 18d9c9c1-27cc-4883-be10-baf17a186aad

[root@k8s-master01 ~]# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.8
RuntimeApiVersion:  v1

[root@k8s-master01 ~]# crictl info
...
  },
...
  "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
  "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
}
#这里cni插件报错不用管:因为没有安装containerd自带的CNI插件,kubernetes也不需要它,装上了反而会冲突,后边会安装flannel或calico的CNI插件
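
除了version和info,也可以用下面的命令查看容器和镜像列表,此时应均为空(示例):

[root@k8s-master01 ~]# crictl ps -a      #查看容器列表
[root@k8s-master01 ~]# crictl images     #查看镜像列表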

3.4.3 containerd 客户端工具 nerdctl

推荐使用 nerdctl,其命令语法与 docker 基本一致,github 下载链接:

https://github.com/containerd/nerdctl/releases

  • 精简版(nerdctl-<版本>-linux-amd64.tar.gz):只包含 nerdctl
  • 完整版(nerdctl-full-<版本>-linux-amd64.tar.gz):包含 containerd、runc 和 CNI 等依赖

nerdctl 的目标并不是单纯地复制 docker 的功能,它还实现了很多 docker 不具备的功能,例如延迟拉取镜像(lazy-pulling)、镜像加密(imgcrypt)等,具体可参考 nerdctl 项目文档。

在这里插入图片描述
延迟拉取镜像功能可以参考这篇文章:Containerd 使用 Stargz Snapshotter 延迟拉取镜像

https://icloudnative.io/posts/startup-containers-in-lightning-speed-with-lazy-image-distribution-on-containerd/

1)安装 nerdctl(精简版):

[root@k8s-master01 ~]# wget https://github.com/containerd/nerdctl/releases/download/v0.23.0/nerdctl-0.23.0-linux-amd64.tar.gz

[root@k8s-master01 ~]# tar xf nerdctl-0.23.0-linux-amd64.tar.gz -C /usr/local/bin/

#配置nerdctl
[root@k8s-master01 ~]# mkdir -p /etc/nerdctl/
[root@k8s-master01 ~]# cat > /etc/nerdctl/nerdctl.toml <<EOF
namespace      = "k8s.io" #设置nerdctl工具默认namespace
insecure_registry = true #跳过安全镜像仓库检测
EOF
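
配置完成后,可用下面的命令验证nerdctl能否正常连接containerd(示例,默认namespace为上面配置的k8s.io):

[root@k8s-master01 ~]# nerdctl ps -a     #查看容器,相当于docker ps -a
[root@k8s-master01 ~]# nerdctl images    #查看镜像
[root@k8s-master01 ~]# nerdctl --namespace default ps -a    #也可临时指定其它namespace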

2)安装 buildkit 支持构建镜像:

buildkit GitHub 地址:

https://github.com/moby/buildkit

使用精简版 nerdctl 无法直接通过 containerd 构建镜像,需要与 buildkit 组合使用以实现镜像构建;当然你也可以安装上面的完整版 nerdctl。buildkit 项目是 Docker 公司开源出来的一个构建工具包,支持 OCI 标准的镜像构建。它主要包含以下部分:

  • 服务端 buildkitd,当前支持 runc 和 containerd 作为 worker,默认是 runc;
  • 客户端 buildctl,负责解析 Dockerfile,并向服务端 buildkitd 发出构建请求。

buildkit 是典型的C/S 架构,client 和 server 可以不在一台服务器上。而 nerdctl 在构建镜像方面也可以作为 buildkitd 的客户端。

[root@k8s-master01 ~]# wget https://github.com/moby/buildkit/releases/download/v0.10.4/buildkit-v0.10.4.linux-amd64.tar.gz

[root@k8s-master01 ~]# tar xf buildkit-v0.10.4.linux-amd64.tar.gz -C /usr/local/

配置 buildkit 的启动文件,可以从这里下载:

https://github.com/moby/buildkit/tree/master/examples/systemd

buildkit 需要配置两个文件

  • /usr/lib/systemd/system/buildkit.socket
[root@k8s-master01 ~]# cat > /usr/lib/systemd/system/buildkit.socket <<EOF
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Socket]
ListenStream=%t/buildkit/buildkitd.sock
SocketMode=0660

[Install]
WantedBy=sockets.target
EOF
  • /usr/lib/systemd/system/buildkit.service
[root@k8s-master01 ~]# cat > /usr/lib/systemd/system/buildkit.service << EOF
[Unit]
Description=BuildKit
Requires=buildkit.socket
After=buildkit.socket
Documentation=https://github.com/moby/buildkit

[Service]
Type=notify
ExecStart=/usr/local/bin/buildkitd --addr fd://

[Install]
WantedBy=multi-user.target
EOF

启动 buildkit:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now buildkit 

root@k8s-master01:~# systemctl status buildkit
● buildkit.service - BuildKit
     Loaded: loaded (/lib/systemd/system/buildkit.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-09-13 16:47:14 CST; 21s ago
TriggeredBy: ● buildkit.socket
       Docs: https://github.com/moby/buildkit
   Main PID: 3303 (buildkitd)
      Tasks: 7 (limit: 4575)
     Memory: 14.5M
     CGroup: /system.slice/buildkit.service
             └─3303 /usr/local/bin/buildkitd --addr fd://

Sep 13 16:47:14 k8s-master01.example.local systemd[1]: Started BuildKit.
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=info msg="auto snapshotter: using overlayfs"
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=warning msg="using host network as the defa>
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=info msg="found worker \"sgqr1t2c81tj7ec7w3>
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=warning msg="using host network as the defa>
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=info msg="found worker \"w4fzprdjtuqtj3f3wd>
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=info msg="found 2 workers, default=\"sgqr1t>
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=warning msg="currently, only the default wo>
Sep 13 16:47:14 k8s-master01.example.local buildkitd[3303]: time="2022-09-13T16:47:14+08:00" level=info msg="running server on /run/buildkit/b>

[root@k8s-master01 ~]# nerdctl version
Client:
 Version:	v0.23.0
 OS/Arch:	linux/amd64
 Git commit:	660680b7ddfde1d38a66ec1c7f08f8d89ab92c68
 buildctl:
  Version:	v0.10.4
  GitCommit:	a2ba6869363812a210fcc3ded6926757ab780b5f

Server:
 containerd:
  Version:	v1.6.8
  GitCommit:	9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:	1.1.3
  GitCommit:	v1.1.3-0-g6724737f

[root@k8s-master01 ~]# buildctl --version
buildctl github.com/moby/buildkit v0.10.4 a2ba6869363812a210fcc3ded6926757ab780b5f

[root@k8s-master01 ~]# nerdctl info
Client:
 Namespace:	default
 Debug Mode:	false

Server:
 Server Version: v1.6.8
 Storage Driver: overlayfs
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Log: fluentd journald json-file
  Storage: aufs native overlayfs
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-107-generic
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.81GiB
 Name: k8s-master01.example.local
 ID: ab901e55-fa37-496e-9920-ee6eff687687

WARNING: No swap limit support #系统警告信息 (没有开启 swap 资源限制 )

解决上述SWAP报警提示:

#SWAP报警提示只在Ubuntu系统上出现,CentOS系统没有,无需设置
root@k8s-master01:~# sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub

root@k8s-master01:~# update-grub
root@k8s-master01:~# reboot

root@k8s-master01:~# nerdctl info
Client:
 Namespace:	default
 Debug Mode:	false

Server:
 Server Version: v1.6.8
 Storage Driver: overlayfs
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Log: fluentd journald json-file
  Storage: aufs native overlayfs
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-125-generic
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.81GiB
 Name: k8s-master01.example.local
 ID: ab901e55-fa37-496e-9920-ee6eff687687
#现在就没有SWAP报警提示

nerdctl命令补全:

#CentOS
[root@k8s-master01 ~]# yum -y install bash-completion

#Ubuntu
[root@k8s-master01 ~]# apt -y install bash-completion

[root@k8s-master01 ~]# echo "source <(nerdctl completion bash)" >> ~/.bashrc

master02、master03和node安装containerd:

[root@k8s-master02 ~]# cat install_containerd_binary.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-09-13
#FileName:      install_containerd_binary.sh
#URL:           raymond.blog.csdn.net
#Description:   install_containerd_binary for centos 7/8 & ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'

#Containerd下载地址:https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-1.6.8-linux-amd64.tar.gz
CONTAINERD_FILE=cri-containerd-1.6.8-linux-amd64.tar.gz
PAUSE_VERSION=3.8
HARBOR_DOMAIN=harbor.raymonds.cc
USERNAME=admin
PASSWORD=123456

#Nerdctl下载地址:https://github.com/containerd/nerdctl/releases/download/v0.23.0/nerdctl-0.23.0-linux-amd64.tar.gz
NERDCTL_FILE=nerdctl-0.23.0-linux-amd64.tar.gz
#Buildkit下载地址:https://github.com/moby/buildkit/releases/download/v0.10.4/buildkit-v0.10.4.linux-amd64.tar.gz
BUILDKIT_FILE=buildkit-v0.10.4.linux-amd64.tar.gz

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

check_file (){
    cd ${SRC_DIR}
    if [ ! -e ${CONTAINERD_FILE} ];then
        ${COLOR}"缺少${CONTAINERD_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    elif [ ! -e ${NERDCTL_FILE} ];then
        ${COLOR}"缺少${NERDCTL_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    elif [ ! -e ${BUILDKIT_FILE} ];then
        ${COLOR}"缺少${BUILDKIT_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
        exit
    else
        ${COLOR}"相关文件已准备好"${END}
    fi
}

install_containerd(){ 
    [ -f /usr/local/bin/containerd ] && { ${COLOR}"Containerd已存在,安装失败"${END};exit; }
    cat > /etc/modules-load.d/containerd.conf <<-EOF
overlay
br_netfilter
EOF
    modprobe -- overlay
    modprobe -- br_netfilter

    cat > /etc/sysctl.d/99-kubernetes-cri.conf <<-EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
    sysctl --system &> /dev/null

    ${COLOR}"开始安装Containerd..."${END}
    tar xf ${CONTAINERD_FILE} -C /

    mkdir -p /etc/containerd
    containerd config default | tee /etc/containerd/config.toml &> /dev/null 
    sed -ri -e 's/(.*SystemdCgroup = ).*/\1true/' -e 's@(.*sandbox_image = ).*@\1\"'''${HARBOR_DOMAIN}'''/google_containers/pause:'''${PAUSE_VERSION}'''\"@' -e '/.*registry.mirrors.*/a\        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n          endpoint = ["https://registry.docker-cn.com" ,"http://hub-mirror.c.163.com" ,"https://docker.mirrors.ustc.edu.cn"]\n        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."'''${HARBOR_DOMAIN}'''"]\n          endpoint = ["http://'''${HARBOR_DOMAIN}'''"]' -e '/.*registry.configs.*/a\        [plugins."io.containerd.grpc.v1.cri".registry.configs."'''${HARBOR_DOMAIN}'''".tls]\n          insecure_skip_verify = true\n        [plugins."io.containerd.grpc.v1.cri".registry.configs."'''${HARBOR_DOMAIN}'''".auth]\n          username = "'''${USERNAME}'''"\n          password = "'''${PASSWORD}'''"' /etc/containerd/config.toml
    cat > /etc/crictl.yaml <<-EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
    systemctl daemon-reload && systemctl enable --now containerd &> /dev/null
    systemctl is-active containerd &> /dev/null && ${COLOR}"Containerd 服务启动成功"${END} || { ${COLOR}"Containerd 启动失败"${END};exit; }
    ctr version &&  ${COLOR}"Containerd 安装成功"${END} || ${COLOR}"Containerd 安装失败"${END}
}

set_alias(){
    echo 'alias rmi="nerdctl images -qa|xargs nerdctl rmi -f"' >> ~/.bashrc
    echo 'alias rmc="nerdctl ps -qa|xargs nerdctl rm -f"' >> ~/.bashrc
}

install_nerdctl_buildkit(){
    ${COLOR}"开始安装Nerdctl..."${END}
    tar xf ${NERDCTL_FILE} -C /usr/local/bin/
    mkdir -p /etc/nerdctl/
    cat > /etc/nerdctl/nerdctl.toml <<-EOF
namespace      = "k8s.io"
insecure_registry = true
EOF

    ${COLOR}"开始安装Buildkit..."${END}
    tar xf ${BUILDKIT_FILE} -C /usr/local/
    cat > /usr/lib/systemd/system/buildkit.socket <<-EOF
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Socket]
ListenStream=%t/buildkit/buildkitd.sock
SocketMode=0660

[Install]
WantedBy=sockets.target
EOF
    cat > /usr/lib/systemd/system/buildkit.service <<-EOF
[Unit]
Description=BuildKit
Requires=buildkit.socket
After=buildkit.socket
Documentation=https://github.com/moby/buildkit

[Service]
Type=notify
ExecStart=/usr/local/bin/buildkitd --addr fd://

[Install]
WantedBy=multi-user.target
EOF
    systemctl daemon-reload && systemctl enable --now buildkit &> /dev/null
    systemctl is-active buildkit &> /dev/null && ${COLOR}"Buildkit 服务启动成功"${END} || { ${COLOR}"Buildkit 启动失败"${END};exit; }
    buildctl --version &&  ${COLOR}"Buildkit 安装成功"${END} || ${COLOR}"Buildkit 安装失败"${END}
}

nerdctl_command_completion(){
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ];then
        yum -y install bash-completion
    else
        apt -y install bash-completion
    fi
    echo "source <(nerdctl completion bash)" >> ~/.bashrc
    . ~/.bashrc
}

set_swap_limit(){
    if [ ${OS_ID} == "Ubuntu" ];then
        ${COLOR}'解决"WARNING: No swap limit support"警告'${END}
        sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub
        update-grub &> /dev/null
        ${COLOR}"10秒后,机器会自动重启"${END}
        sleep 10
        reboot
    fi
}

main(){
    os
    check_file
    install_containerd
    set_alias
    install_nerdctl_buildkit
    nerdctl_command_completion
    set_swap_limit
}

main

[root@k8s-master02 ~]# bash install_containerd_binary.sh
[root@k8s-master03 ~]# bash install_containerd_binary.sh

[root@k8s-node01 ~]# bash install_containerd_binary.sh
[root@k8s-node02 ~]# bash install_containerd_binary.sh
[root@k8s-node03 ~]# bash install_containerd_binary.sh

3.5 安装kubeadm等组件

CentOS 配置k8s镜像仓库和安装k8s组件:

[root@k8s-master01 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r|grep 1.25
kubeadm.x86_64                       1.25.0-0                        kubernetes 

[root@k8s-master01 ~]# yum -y install kubeadm-1.25.0 kubelet-1.25.0 kubectl-1.25.0

Ubuntu:

root@k8s-master01:~# apt update
root@k8s-master01:~# apt install -y apt-transport-https
root@k8s-master01:~# curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
OK

root@k8s-master01:~# echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

root@k8s-master01:~# apt update

root@k8s-master01:~# apt-cache madison kubeadm | grep 1.25
   kubeadm |  1.25.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages

root@k8s-master01:~# apt -y install kubelet=1.25.0-00 kubeadm=1.25.0-00 kubectl=1.25.0-00

设置Kubelet开机自启动:

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
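
注意:此时集群尚未初始化,kubelet因缺少配置文件会反复重启,属于正常现象,kubeadm init完成后即恢复正常。可用下面命令观察(示例):

[root@k8s-master01 ~]# systemctl status kubelet    #初始化前处于activating (auto-restart)状态属正常
[root@k8s-master01 ~]# journalctl -u kubelet -f    #查看kubelet日志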

在master02和master03执行脚本安装:

[root@k8s-master02 ~]# cat install_kubeadm_for_master.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      install_kubeadm_for_master.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_MIRRORS=mirrors.aliyun.com
KUBEADM_VERSION=1.25.0
HARBOR_DOMAIN=harbor.raymonds.cc

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_ubuntu_kubeadm(){
    ${COLOR}"开始安装Kubeadm依赖包"${END}
    apt update &> /dev/null && apt install -y apt-transport-https &> /dev/null
    curl -fsSL https://${KUBEADM_MIRRORS}/kubernetes/apt/doc/apt-key.gpg | apt-key add - &> /dev/null
    echo "deb https://"${KUBEADM_MIRRORS}"/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
    apt update &> /dev/null

    ${COLOR}"Kubeadm有以下版本"${END}
    apt-cache madison kubeadm
    ${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
    ${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
    sleep 10

    ${COLOR}"开始安装Kubeadm"${END}
    apt -y install kubelet=${KUBEADM_VERSION}-00 kubeadm=${KUBEADM_VERSION}-00 kubectl=${KUBEADM_VERSION}-00 &> /dev/null
    ${COLOR}"Kubeadm安装完成"${END}
}

install_centos_kubeadm(){
    cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://${KUBEADM_MIRRORS}/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/yum-key.gpg https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/rpm-package-key.gpg
EOF
    ${COLOR}"Kubeadm有以下版本"${END}
    yum list kubeadm.x86_64 --showduplicates | sort -r
    ${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
    ${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
    sleep 10

    ${COLOR}"开始安装Kubeadm"${END}
    yum -y install kubelet-${KUBEADM_VERSION} kubeadm-${KUBEADM_VERSION} kubectl-${KUBEADM_VERSION} &> /dev/null
    ${COLOR}"Kubeadm安装完成"${END}
}

start_service(){
    systemctl daemon-reload
    systemctl enable --now kubelet
    systemctl is-active kubelet &> /dev/null && ${COLOR}"Kubelet 服务启动成功"${END} || { ${COLOR}"Kubelet 启动失败"${END};exit; }
    kubelet --version &&  ${COLOR}"Kubelet 安装成功"${END} || ${COLOR}"Kubelet 安装失败"${END}
}

main(){
    os
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        install_centos_kubeadm
    else
        install_ubuntu_kubeadm
    fi
    start_service
}

main

[root@k8s-master02 ~]# bash install_kubeadm_for_master.sh 
[root@k8s-master03 ~]# bash install_kubeadm_for_master.sh 

node上安装kubeadm:

[root@k8s-node01 ~]# cat install_kubeadm_for_node.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      install_kubeadm_for_node.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_MIRRORS=mirrors.aliyun.com
KUBEADM_VERSION=1.25.0
HARBOR_DOMAIN=harbor.raymonds.cc

os(){
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_ubuntu_kubeadm(){
    ${COLOR}"开始安装Kubeadm依赖包"${END}
    apt update &> /dev/null && apt install -y apt-transport-https &> /dev/null
    curl -fsSL https://${KUBEADM_MIRRORS}/kubernetes/apt/doc/apt-key.gpg | apt-key add - &> /dev/null
    echo "deb https://"${KUBEADM_MIRRORS}"/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
    apt update &> /dev/null

    ${COLOR}"Kubeadm有以下版本"${END}
    apt-cache madison kubeadm
    ${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
    ${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
    sleep 10

    ${COLOR}"开始安装Kubeadm"${END}
    apt -y install kubelet=${KUBEADM_VERSION}-00 kubeadm=${KUBEADM_VERSION}-00 &> /dev/null
    ${COLOR}"Kubeadm安装完成"${END}
}

install_centos_kubeadm(){
    cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://${KUBEADM_MIRRORS}/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/yum-key.gpg https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/rpm-package-key.gpg
EOF
    ${COLOR}"Kubeadm有以下版本"${END}
    yum list kubeadm.x86_64 --showduplicates | sort -r
    ${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
    ${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
    sleep 10

    ${COLOR}"开始安装Kubeadm"${END}
    yum -y install kubelet-${KUBEADM_VERSION} kubeadm-${KUBEADM_VERSION} &> /dev/null
    ${COLOR}"Kubeadm安装完成"${END}
}

start_service(){
    systemctl daemon-reload
    systemctl enable --now kubelet
    systemctl is-active kubelet &> /dev/null && ${COLOR}"Kubelet 服务启动成功"${END} || { ${COLOR}"Kubelet 启动失败"${END};exit; }
    kubelet --version &&  ${COLOR}"Kubelet 安装成功"${END} || ${COLOR}"Kubelet 安装失败"${END}
}

main(){
    os
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        install_centos_kubeadm
    else
        install_ubuntu_kubeadm
    fi
    start_service
}

main

[root@k8s-node01 ~]# bash install_kubeadm_for_node.sh
[root@k8s-node02 ~]# bash install_kubeadm_for_node.sh 
[root@k8s-node03 ~]# bash install_kubeadm_for_node.sh

3.6 提前准备 Kubernetes 初始化所需镜像

查看镜像版本:

[root@k8s-master01 ~]# kubeadm config images list --kubernetes-version v1.25.0
registry.k8s.io/kube-apiserver:v1.25.0
registry.k8s.io/kube-controller-manager:v1.25.0
registry.k8s.io/kube-scheduler:v1.25.0
registry.k8s.io/kube-proxy:v1.25.0
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3

#查看国内镜像
[root@k8s-master01 ~]# kubeadm  config images list  --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.25.0
registry.aliyuncs.com/google_containers/pause:3.8
registry.aliyuncs.com/google_containers/etcd:3.5.4-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3

下载镜像并上传至harbor:

#注意:如果没有harbor不用执行下面命令
root@k8s-master01:~# nerdctl login harbor.raymonds.cc
Username: admin
Password: 
WARN[0000] skipping verifying HTTPS certs for "harbor.raymonds.cc" 
WARNING: Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

#注意:如果没有harbor不用执行下面脚本
[root@k8s-master01 ~]# cat download_kubeadm_images_1.25.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_kubeadm_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_VERSION=1.25.0
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/"  '{print $NF}')
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Kubeadm镜像"${END}
    for i in ${images};do 
        nerdctl pull registry.aliyuncs.com/google_containers/$i
        nerdctl tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        nerdctl rmi registry.aliyuncs.com/google_containers/$i
        nerdctl push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Kubeadm镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_kubeadm_images_1.25.sh 

root@k8s-master01:~# nerdctl images
REPOSITORY                                                      TAG        IMAGE ID        CREATED          PLATFORM       SIZE         BLOB SIZE
harbor.raymonds.cc/google_containers/coredns                    v1.9.3     8e352a029d30    3 minutes ago    linux/amd64    47.0 MiB     14.2 MiB
harbor.raymonds.cc/google_containers/etcd                       3.5.4-0    6f72b8515449    3 minutes ago    linux/amd64    289.4 MiB    97.4 MiB
harbor.raymonds.cc/google_containers/kube-apiserver             v1.25.0    f6902791fb9a    4 minutes ago    linux/amd64    125.5 MiB    32.6 MiB
harbor.raymonds.cc/google_containers/kube-controller-manager    v1.25.0    66ce7d460e53    4 minutes ago    linux/amd64    115.3 MiB    29.8 MiB
harbor.raymonds.cc/google_containers/kube-proxy                 v1.25.0    1b1f3456bb19    3 minutes ago    linux/amd64    63.1 MiB     19.3 MiB
harbor.raymonds.cc/google_containers/kube-scheduler             v1.25.0    9330c53feca7    3 minutes ago    linux/amd64    51.9 MiB     15.1 MiB
harbor.raymonds.cc/google_containers/pause                      3.8        900118502363    3 minutes ago    linux/amd64    700.0 KiB    304.0 KiB
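
如果没有harbor,也可以在各节点直接用kubeadm从阿里云镜像源预拉取镜像(可选示例):

[root@k8s-master01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.25.0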

3.7 基于命令初始化高可用master方式

kubeadm init 命令参考说明

--kubernetes-version:#kubernetes程序组件的版本号,它必须要与安装的kubelet程序包的版本号相同
--control-plane-endpoint:#多主节点必选项,用于指定控制平面的固定访问地址,可以是IP地址或DNS名称,会被用于集群管理员及集群组件的kubeconfig配置文件的API Server的访问地址,如果是单主节点的控制平面部署时不使用该选项,注意:kubeadm 不支持将没有 --control-plane-endpoint 参数的单个控制平面集群转换为高可用性集群。
--pod-network-cidr:#Pod网络的地址范围,其值为CIDR格式的网络地址,通常情况下Flannel网络插件的默认为10.244.0.0/16,Calico网络插件的默认值为192.168.0.0/16
--service-cidr:#Service的网络地址范围,其值为CIDR格式的网络地址,默认为10.96.0.0/12;通常,仅Flannel一类的网络插件需要手动指定该地址
--service-dns-domain string #指定k8s集群域名,默认为cluster.local,会自动通过相应的DNS服务实现解析
--apiserver-advertise-address:#API 服务器所公布的其正在监听的 IP 地址。如果未设置,则使用默认网络接口。apiserver通告给其他组件的IP地址,一般应该为Master节点的用于集群内部通信的IP地址,0.0.0.0表示此节点上所有可用地址,非必选项
--image-repository string #设置镜像仓库地址,默认为 k8s.gcr.io,此地址国内可能无法访问,可以指向国内的镜像地址
--token-ttl #共享令牌(token)的过期时长,默认为24小时,0表示永不过期;为防止不安全存储等原因导致的令牌泄露危及集群安全,建议为其设定过期时长。未设定该选项时,在token过期后,若期望再向集群中加入其它节点,可以使用如下命令重新创建token,并生成节点加入命令。kubeadm token create --print-join-command
--ignore-preflight-errors=Swap #若各节点未禁用Swap设备,还需附加此选项,从而让kubeadm忽略该错误
--upload-certs #将控制平面证书上传到 kubeadm-certs Secret
--cri-socket  #v1.24版之后指定连接cri的socket文件路径,注意;不同的CRI连接文件不同
#如果CRI是containerd,则使用--cri-socket unix:///run/containerd/containerd.sock
#如果CRI是docker,则使用--cri-socket unix:///var/run/cri-dockerd.sock
#如果CRI是CRI-O,则使用--cri-socket unix:///var/run/crio/crio.sock
#注意:CRI-O与containerd的容器管理机制不一样,所以镜像文件不能通用。

初始化集群:

#注意:如果没有harbor下面“--image-repository”后面的地址改成“registry.aliyuncs.com/google_containers”

[root@k8s-master01 ~]# kubeadm init --control-plane-endpoint="kubeapi.raymonds.cc" --kubernetes-version=v1.25.0  --pod-network-cidr=192.168.0.0/12 --service-cidr=10.96.0.0/12 --token-ttl=0 --image-repository harbor.raymonds.cc/google_containers  --upload-certs
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubeapi.raymonds.cc:6443 --token 5ijb9f.ocr2d4gvh59ppxe7 \
	--discovery-token-ca-cert-hash sha256:fdc7986b95f28d291070ada98246c7328f2b3b36cbe2bcac890ab8b89c1b83a3 \
	--control-plane --certificate-key e12f0a2cd47699ffa672c40d925e28208c9e8d58071d4039b2eaa846afdb4441

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubeapi.raymonds.cc:6443 --token 5ijb9f.ocr2d4gvh59ppxe7 \
	--discovery-token-ca-cert-hash sha256:fdc7986b95f28d291070ada98246c7328f2b3b36cbe2bcac890ab8b89c1b83a3  

3.8 生成 kubectl 命令的授权文件

kubectl是kube-apiserver的命令行客户端程序,实现了除系统部署之外的几乎全部的管理操作,是kubernetes管理员使用最多的命令之一。kubectl需经由API server认证及授权后方能执行相应的管理操作,kubeadm部署的集群为其生成了一个具有管理员权限的认证配置文件/etc/kubernetes/admin.conf,它可由kubectl通过默认的“$HOME/.kube/config”的路径进行加载。当然,用户也可在kubectl命令上使用--kubeconfig选项指定一个别的位置。

下面复制认证为Kubernetes系统管理员的配置文件至目标用户(例如当前用户root)的家目录下:

#可直接复制上面kubeadm init输出中的相应命令执行
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3.9 实现 kubectl 命令补全

kubectl 命令功能丰富,默认不支持命令补全,可以用下面方式实现

#CentOS
[root@k8s-master01 ~]# yum -y install bash-completion

#Ubuntu
[root@k8s-master01 ~]# apt -y install bash-completion

# 在 bash 中设置当前 shell 的自动补全,要先安装 bash-completion 包。
[root@k8s-master01 ~]# source <(kubectl completion bash) 

# 在您的 bash shell 中永久的添加自动补全
[root@k8s-master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc 

root@k8s-master01:~# kubectl get nodes 
NAME                         STATUS     ROLES           AGE   VERSION
k8s-master01.example.local   NotReady   control-plane   97s   v1.25.0
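
如果习惯用k作为kubectl的别名,也可以为别名启用补全(可选示例):

[root@k8s-master01 ~]# echo 'alias k=kubectl' >> ~/.bashrc
[root@k8s-master01 ~]# echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
[root@k8s-master01 ~]# source ~/.bashrc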

3.10 高可用Master

如果是用配置文件初始化集群,不用申请证书;命令行初始化时已通过--upload-certs上传证书,当前master生成的certificate-key可直接用于添加新的控制平面节点

添加master02和master03:

kubeadm join kubeapi.raymonds.cc:6443 --token 5ijb9f.ocr2d4gvh59ppxe7 \
	--discovery-token-ca-cert-hash sha256:fdc7986b95f28d291070ada98246c7328f2b3b36cbe2bcac890ab8b89c1b83a3 \
	--control-plane --certificate-key e12f0a2cd47699ffa672c40d925e28208c9e8d58071d4039b2eaa846afdb4441

[root@k8s-master01 ~]# kubectl get nodes 
NAME                         STATUS     ROLES           AGE     VERSION
k8s-master01.example.local   NotReady   control-plane   3m36s   v1.25.0
k8s-master02.example.local   NotReady   control-plane   65s     v1.25.0
k8s-master03.example.local   NotReady   control-plane   20s     v1.25.0
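
如果之后token或证书过期,可在master01上重新生成加入命令和certificate-key(示例,输出即为新的join命令和key):

[root@k8s-master01 ~]# kubeadm token create --print-join-command    #重新生成节点加入命令
[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs    #重新上传控制平面证书并输出新的certificate-key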

3.11 高可用Node

Node节点上主要部署公司的一些业务应用,生产环境中不建议Master节点部署系统组件之外的其他Pod,测试环境可以允许Master节点部署Pod以节省系统资源。

添加node:

kubeadm join kubeapi.raymonds.cc:6443 --token 5ijb9f.ocr2d4gvh59ppxe7 \
	--discovery-token-ca-cert-hash sha256:fdc7986b95f28d291070ada98246c7328f2b3b36cbe2bcac890ab8b89c1b83a3 

[root@k8s-master01 ~]# kubectl get nodes 
NAME                         STATUS     ROLES           AGE     VERSION
k8s-master01.example.local   NotReady   control-plane   4m50s   v1.25.0
k8s-master02.example.local   NotReady   control-plane   2m19s   v1.25.0
k8s-master03.example.local   NotReady   control-plane   94s     v1.25.0
k8s-node01.example.local     NotReady   <none>          42s     v1.25.0
k8s-node02.example.local     NotReady   <none>          24s     v1.25.0
k8s-node03.example.local     NotReady   <none>          7s      v1.25.0
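
新加入的node节点ROLES显示为<none>,如希望显示worker角色,可以为其打上角色标签(可选示例,以node01为例):

[root@k8s-master01 ~]# kubectl label node k8s-node01.example.local node-role.kubernetes.io/worker=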

3.12 网络组件flannel部署

Kubernetes系统上Pod网络的实现依赖于第三方插件,这类插件有数十种之多,较为著名的有flannel、calico、canal和kube-router等,其中简单易用的实现是CoreOS提供的flannel项目。下面的命令用于在线部署flannel至Kubernetes系统之上:

我们这里选用flanneld-amd64,目前最新的版本为v0.19.1;无需手动下载flanneld二进制至各节点的/opt/bin/目录,下面的kube-flannel.yml清单会以DaemonSet方式在集群的每个节点上自动运行flanneld。

提示:下载flanneld的地址为 https://github.com/flannel-io/flannel

随后,在初始化的第一个master节点k8s-master01上运行如下命令,向Kubernetes部署kube-flannel。

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

[root@k8s-master01 ~]# grep  '"Network":' kube-flannel.yml 
      "Network": "10.244.0.0/16",

[root@k8s-master01 ~]# sed -ri '/"Network":/s@("Network": ).*@\1"192.168.0.0/12",@g' kube-flannel.yml
[root@k8s-master01 ~]# grep  '"Network":' kube-flannel.yml 
      "Network": "192.168.0.0/12",

[root@k8s-master01 ~]# grep  '[^#]image:' kube-flannel.yml 
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1

#注意:如果没有harbor不用执行下面脚本
[root@k8s-master01 ~]# cat download_flannel_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_flannel_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/[^#]image:/{print $NF}' kube-flannel.yml |uniq)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Flannel镜像"${END}
    for i in ${images};do 
        nerdctl pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        nerdctl tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        nerdctl rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        nerdctl push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Flannel镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_flannel_images.sh 

[root@k8s-master01 ~]# nerdctl images |grep flannel
harbor.raymonds.cc/google_containers/mirrored-flannelcni-flannel-cni-plugin    v1.1.0     190ba8db6e14    25 seconds ago    linux/amd64    8.3 MiB      3.6 MiB
harbor.raymonds.cc/google_containers/mirrored-flannelcni-flannel               v0.19.1    09dfa4ceff10    17 seconds ago    linux/amd64    62.4 MiB     19.5 MiB

[root@k8s-master01 ~]# sed -ri 's@([^#]image:) docker.io/rancher(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' kube-flannel.yml 

[root@k8s-master01 ~]# grep  '[^#]image:' kube-flannel.yml
        image: harbor.raymonds.cc/google_containers/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        image: harbor.raymonds.cc/google_containers/mirrored-flannelcni-flannel:v0.19.1
        image: harbor.raymonds.cc/google_containers/mirrored-flannelcni-flannel:v0.19.1

#注意:如果没有harbor执行下面命令
[root@k8s-master01 ~]# sed -ri 's@([^#]image:) docker.io/rancher(/.*)@\1 registry.cn-beijing.aliyuncs.com/raymond9\2@g' kube-flannel.yml 

[root@k8s-master01 ~]# kubectl apply -f kube-flannel.yml

#查看容器状态
[root@k8s-master01 ~]# kubectl get pod -n kube-flannel 
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-2sgrb   1/1     Running   0          12s
kube-flannel-ds-8vl4m   1/1     Running   0          12s
kube-flannel-ds-9qf9r   1/1     Running   0          12s
kube-flannel-ds-cgzwx   1/1     Running   0          12s
kube-flannel-ds-cwsnj   1/1     Running   0          12s
kube-flannel-ds-vcxkc   1/1     Running   0          12s

#查看集群状态
[root@k8s-master01 ~]# kubectl get nodes 
NAME                         STATUS   ROLES           AGE     VERSION
k8s-master01.example.local   Ready    control-plane   7m27s   v1.25.0
k8s-master02.example.local   Ready    control-plane   4m56s   v1.25.0
k8s-master03.example.local   Ready    control-plane   4m11s   v1.25.0
k8s-node01.example.local     Ready    <none>          3m19s   v1.25.0
k8s-node02.example.local     Ready    <none>          3m1s    v1.25.0
k8s-node03.example.local     Ready    <none>          2m44s   v1.25.0

[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
coredns-7666b559bd-9fm29                             1/1     Running   0               10m
coredns-7666b559bd-xll5k                             1/1     Running   0               10m
etcd-k8s-master01.example.local                      1/1     Running   0               10m
etcd-k8s-master02.example.local                      1/1     Running   0               7m49s
etcd-k8s-master03.example.local                      1/1     Running   0               6m47s
kube-apiserver-k8s-master01.example.local            1/1     Running   0               10m
kube-apiserver-k8s-master02.example.local            1/1     Running   1 (7m40s ago)   7m40s
kube-apiserver-k8s-master03.example.local            1/1     Running   0               6m45s
kube-controller-manager-k8s-master01.example.local   1/1     Running   1 (7m39s ago)   10m
kube-controller-manager-k8s-master02.example.local   1/1     Running   0               6m34s
kube-controller-manager-k8s-master03.example.local   1/1     Running   0               5m29s
kube-proxy-62pwp                                     1/1     Running   0               5m44s
kube-proxy-k6kvb                                     1/1     Running   0               10m
kube-proxy-mdxpd                                     1/1     Running   0               5m26s
kube-proxy-rpzp9                                     1/1     Running   0               7m50s
kube-proxy-ws45d                                     1/1     Running   0               6m6s
kube-proxy-zn72m                                     1/1     Running   0               6m58s
kube-scheduler-k8s-master01.example.local            1/1     Running   1 (7m39s ago)   10m
kube-scheduler-k8s-master02.example.local            1/1     Running   0               7m39s
kube-scheduler-k8s-master03.example.local            1/1     Running   0               6m46s

[root@k8s-master01 ~]# crictl info
...
  "golang": "go1.17.13",
  "lastCNILoadStatus": "OK",
  "lastCNILoadStatus.default": "OK"
}
#现在CNI插件正常,没有报错了。

重要:如果安装了keepalived和haproxy,需要测试keepalived是否正常

#测试VIP
[root@k8s-master01 ~]# ping 172.31.3.188
PING 172.31.3.188 (172.31.3.188) 56(84) bytes of data.
64 bytes from 172.31.3.188: icmp_seq=1 ttl=64 time=0.526 ms
64 bytes from 172.31.3.188: icmp_seq=2 ttl=64 time=0.375 ms
^C
--- 172.31.3.188 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.375/0.450/0.526/0.078 ms

[root@k8s-ha01 ~]# systemctl stop keepalived
[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
    inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe05:9b2a/64 scope link 
       valid_lft forever preferred_lft forever
 
 [root@k8s-ha02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:5e:d8:f8 brd ff:ff:ff:ff:ff:ff
    inet 172.31.3.105/21 brd 172.31.7.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.31.3.188/32 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5e:d8f8/64 scope link 
       valid_lft forever preferred_lft forever
 
 [root@k8s-ha01 ~]# systemctl start keepalived
 [root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
    inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.31.3.188/32 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe05:9b2a/64 scope link 
       valid_lft forever preferred_lft forever

[root@k8s-master01 ~]# telnet 172.31.3.188 6443
Trying 172.31.3.188...
Connected to 172.31.3.188.
Escape character is '^]'.
Connection closed by foreign host.

如果ping不通且telnet没有出现 ] ,则认为VIP不可用,不可再继续往下执行,需要排查keepalived的问题,比如防火墙和selinux、haproxy和keepalived的状态、监听端口等

所有节点查看防火墙状态必须为disable和inactive:systemctl status firewalld

所有节点查看selinux状态,必须为disable:getenforce

master节点查看haproxy和keepalived状态:systemctl status keepalived haproxy

master节点查看监听端口:netstat -lntp

查看haproxy状态

http://kubeapi.raymonds.cc:9999/haproxy-status

在这里插入图片描述

3.13 测试应用编排及服务访问

demoapp是一个web应用,可将demoapp以Pod的形式编排运行于集群之上,并在集群外部进行访问:

[root@k8s-master01 ~]# kubectl create deployment demoapp --image=registry.cn-hangzhou.aliyuncs.com/raymond9/demoapp:v1.0 --replicas=3
deployment.apps/demoapp created

[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP            NODE                       NOMINATED NODE   READINESS GATES
demoapp-c4787f9fc-5l4tf   1/1     Running   0          3m4s   192.160.3.2   k8s-node01.example.local   <none>           <none>
demoapp-c4787f9fc-75cr7   1/1     Running   0          3m4s   192.160.5.2   k8s-node03.example.local   <none>           <none>
demoapp-c4787f9fc-c8h56   1/1     Running   0          3m4s   192.160.4.2   k8s-node02.example.local   <none>           <none>

[root@k8s-master01 ~]# curl 192.160.3.2
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-5l4tf, ServerIP: 192.160.3.2!
[root@k8s-master01 ~]# curl 192.160.4.2
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-c8h56, ServerIP: 192.160.4.2!
[root@k8s-master01 ~]# curl 192.160.5.2
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-75cr7, ServerIP: 192.160.5.2!

#使用如下命令创建Service对象,并了解demoapp使用的NodePort,格式:<集群端口>:<Pod端口>,以便于在集群外部进行访问
root@k8s-master01:~# kubectl create service nodeport demoapp --tcp=80:80
service/demoapp created

[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.111.101.237   <none>        80:32698/TCP   9s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        23m

[root@k8s-master01 ~]# curl 10.111.101.237
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-75cr7, ServerIP: 192.160.5.2!
[root@k8s-master01 ~]# curl 10.111.101.237
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-c8h56, ServerIP: 192.160.4.2!
[root@k8s-master01 ~]# curl 10.111.101.237
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-5l4tf, ServerIP: 192.160.3.2!

# Users outside the cluster can access demoapp via the URL http://NodeIP:32698, e.g. by browsing to http://<kubernetes-node>:32698 from outside the cluster.
[root@rocky8 ~]# curl http://172.31.3.101:32698
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-5l4tf, ServerIP: 192.160.3.2!
[root@rocky8 ~]# curl http://172.31.3.101:32698
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-75cr7, ServerIP: 192.160.5.2!
[root@rocky8 ~]# curl http://172.31.3.101:32698
raymond demoapp v1.0 !! ClientIP: 192.160.0.0, ServerName: demoapp-c4787f9fc-c8h56, ServerIP: 192.160.4.2!

# Scale up
[root@k8s-master01 ~]# kubectl scale deployment demoapp --replicas 5
deployment.apps/demoapp scaled
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-c4787f9fc-5l4tf   1/1     Running   0          7m14s
demoapp-c4787f9fc-75cr7   1/1     Running   0          7m14s
demoapp-c4787f9fc-c8h56   1/1     Running   0          7m14s
demoapp-c4787f9fc-pq6tm   1/1     Running   0          8s
demoapp-c4787f9fc-qq6fs   1/1     Running   0          8s

# Scale down
[root@k8s-master01 ~]# kubectl scale deployment demoapp --replicas 2
deployment.apps/demoapp scaled

# You can watch the Pods being terminated
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS        RESTARTS   AGE
demoapp-c4787f9fc-5l4tf   1/1     Running       0          7m44s
demoapp-c4787f9fc-75cr7   1/1     Running       0          7m44s
demoapp-c4787f9fc-c8h56   1/1     Terminating   0          7m44s
demoapp-c4787f9fc-pq6tm   1/1     Terminating   0          38s
demoapp-c4787f9fc-qq6fs   1/1     Terminating   0          38s

# Check again; the scale-down has completed
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-c4787f9fc-5l4tf   1/1     Running   0          8m15s
demoapp-c4787f9fc-75cr7   1/1     Running   0          8m15s

3.14 Initializing the Highly Available Masters from a Configuration File

Create the kubeadm-config.yaml configuration file on the Master01 node as follows:

Master01: (Note: if this is not a highly available cluster, change 172.31.3.188:6443 to Master01's address, and change the version to match the kubeadm version installed on your server, which you can check with: kubeadm version)

Note:

In the file below, the host network, podSubnet, and serviceSubnet address ranges must not overlap.

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:43:25Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}

# Dump the default configuration to a file
[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm-config.yaml

# The modified configuration file
[root@k8s-master01 ~]# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101 # Master01's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock # container runtime
  imagePullPolicy: IfNotPresent
  name: k8s-master01.example.local # Master01's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - kubeapi.raymonds.cc # domain name of the VIP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: kubeapi.raymonds.cc:6443 # API endpoint proxied by haproxy to the backend masters
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers # Harbor registry address; without Harbor, use "registry.aliyuncs.com/google_containers"
kind: ClusterConfiguration
kubernetesVersion: v1.25.0 # set the version number
networking:
  dnsDomain: cluster.local # DNS domain
  podSubnet: 192.168.0.0/12 # Pod subnet
  serviceSubnet: 10.96.0.0/12 # Service subnet
scheduler: {}

Migrate the kubeadm configuration file to the current format:

[root@k8s-master01 ~]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

[root@k8s-master01 ~]# cat new.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01.example.local
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - kubeapi.raymonds.cc
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: kubeapi.raymonds.cc:6443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.25.0
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}
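
Before initializing, you can preview (and optionally pre-pull) the images kubeadm will use for this configuration; a quick sanity check, not part of the original procedure:

[root@k8s-master01 ~]# kubeadm config images list --config new.yaml
[root@k8s-master01 ~]# kubeadm config images pull --config new.yaml   # optional pre-pull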

Initialize the Master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; the other master nodes can then join Master01:

# If the cluster was initialized before, reset it with the commands below and then initialize again
# run on both the master and node hosts
kubeadm reset -f
rm -rf /etc/cni/net.d/
rm -rf $HOME/.kube/config
reboot
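
kubeadm reset does not flush iptables/IPVS rules; if a previous attempt left rules behind, the following optional cleanup is a hedged sketch you can run before the reboot (only if no other service depends on those rules):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear 2>/dev/null || true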

[root@k8s-master01 ~]# kubeadm init --config /root/new.yaml  --upload-certs
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubeapi.raymonds.cc:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:1563ff1330b12d780b7215d7f2909b0d01de2b17353743b700489f5434cee3b7 \
	--control-plane --certificate-key 06df38a4dfeb8abcb8839a4621e442dee61edcfa47480494ee19bc11039b2857

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubeapi.raymonds.cc:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:1563ff1330b12d780b7215d7f2909b0d01de2b17353743b700489f5434cee3b7

Generate the authorization file for kubectl commands (repeat of section 4.10):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master01 ~]# kubectl get nodes 
NAME                         STATUS     ROLES           AGE   VERSION
k8s-master01.example.local   NotReady   control-plane   36s   v1.25.0

Set up the highly available masters (see section 4.12):

# Join master02 and master03
kubeadm join kubeapi.raymonds.cc:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:1563ff1330b12d780b7215d7f2909b0d01de2b17353743b700489f5434cee3b7 \
	--control-plane --certificate-key 06df38a4dfeb8abcb8839a4621e442dee61edcfa47480494ee19bc11039b2857

[root@k8s-master01 ~]# kubectl get nodes 
NAME                         STATUS     ROLES           AGE    VERSION
k8s-master01.example.local   NotReady   control-plane   2m1s   v1.25.0
k8s-master02.example.local   NotReady   control-plane   36s    v1.25.0
k8s-master03.example.local   NotReady   control-plane   5s     v1.25.0

Join the worker nodes (see section 4.13):

kubeadm join kubeapi.raymonds.cc:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:1563ff1330b12d780b7215d7f2909b0d01de2b17353743b700489f5434cee3b7

[root@k8s-master01 ~]# kubectl get nodes 
NAME                         STATUS     ROLES           AGE     VERSION
k8s-master01.example.local   NotReady   control-plane   3m31s   v1.25.0
k8s-master02.example.local   NotReady   control-plane   2m6s    v1.25.0
k8s-master03.example.local   NotReady   control-plane   95s     v1.25.0
k8s-node01.example.local     NotReady   <none>          45s     v1.25.0
k8s-node02.example.local     NotReady   <none>          26s     v1.25.0
k8s-node03.example.local     NotReady   <none>          9s      v1.25.0

3.15 Deploying the Calico Network Component

https://docs.projectcalico.org/maintenance/kubernetes-upgrade#upgrading-an-installation-that-uses-the-kubernetes-api-datastore

Calico installation: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O

[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico.yaml 
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico.yaml 
[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico.yaml 
            - name: CALICO_IPV4POOL_CIDR
              value: 192.168.0.0/12

[root@k8s-master01 ~]# grep "image:" calico.yaml 
          image: docker.io/calico/cni:v3.24.1
          image: docker.io/calico/cni:v3.24.1
          image: docker.io/calico/node:v3.24.1
          image: docker.io/calico/node:v3.24.1
          image: docker.io/calico/kube-controllers:v3.24.1

Download the Calico images and push them to Harbor:

#Note: skip the script below if you do not have Harbor
[root@k8s-master01 ~]# cat download_calico_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_calico_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' calico.yaml |uniq)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Calico镜像"${END}
    for i in ${images};do 
        nerdctl pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        nerdctl tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        nerdctl rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        nerdctl push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_calico_images.sh

root@k8s-master01:~# nerdctl images | grep 3.24.1
harbor.raymonds.cc/google_containers/cni                                       v3.24.1    21df750b80ba    About a minute ago    linux/amd64    188.4 MiB    83.3 MiB
harbor.raymonds.cc/google_containers/kube-controllers                          v3.24.1    b65317537174    About a minute ago    linux/amd64    68.1 MiB     29.7 MiB
harbor.raymonds.cc/google_containers/node                                      v3.24.1    135054e0bc90    About a minute ago    linux/amd64    221.5 MiB    76.5 MiB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico.yaml 

[root@k8s-master01 ~]# grep "image:" calico.yaml 
          image: harbor.raymonds.cc/google_containers/cni:v3.24.1
          image: harbor.raymonds.cc/google_containers/cni:v3.24.1
          image: harbor.raymonds.cc/google_containers/node:v3.24.1
          image: harbor.raymonds.cc/google_containers/node:v3.24.1
          image: harbor.raymonds.cc/google_containers/kube-controllers:v3.24.1

#Note: if you do not have Harbor, run the following command instead
[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 registry.cn-beijing.aliyuncs.com/raymond9\2@g' calico.yaml 

[root@k8s-master01 ~]# kubectl apply -f calico.yaml 

#Check the Pod status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-5477499cbc-txrnf             1/1     Running   0          36s
calico-node-7kbrp                                    1/1     Running   0          36s
calico-node-7z76n                                    1/1     Running   0          36s
calico-node-hr5mj                                    1/1     Running   0          36s
calico-node-hsldl                                    1/1     Running   0          36s
calico-node-ntb4c                                    1/1     Running   0          36s
calico-node-wd78c                                    1/1     Running   0          36s

#Check the cluster status
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES           AGE     VERSION
k8s-master01.example.local   Ready    control-plane   10m     v1.25.0
k8s-master02.example.local   Ready    control-plane   9m19s   v1.25.0
k8s-master03.example.local   Ready    control-plane   8m48s   v1.25.0
k8s-node01.example.local     Ready    <none>          7m58s   v1.25.0
k8s-node02.example.local     Ready    <none>          7m39s   v1.25.0
k8s-node03.example.local     Ready    <none>          7m22s   v1.25.0

Test application orchestration and service access (see section 4.15):

[root@k8s-master01 ~]# kubectl create deployment demoapp --image=registry.cn-hangzhou.aliyuncs.com/raymond9/demoapp:v1.0 --replicas=3
deployment.apps/demoapp created

[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP                NODE                       NOMINATED NODE   READINESS GATES
demoapp-c4787f9fc-926r7   1/1     Running   0          8s    192.169.111.132   k8s-node01.example.local   <none>           <none>
demoapp-c4787f9fc-9qghz   1/1     Running   0          8s    192.170.21.193    k8s-node03.example.local   <none>           <none>
demoapp-c4787f9fc-xzz2z   1/1     Running   0          8s    192.167.195.129   k8s-node02.example.local   <none>           <none>

[root@k8s-master01 ~]# curl 192.169.111.132
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-926r7, ServerIP: 192.169.111.132!
[root@k8s-master01 ~]# curl 192.170.21.193
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-9qghz, ServerIP: 192.170.21.193!
[root@k8s-master01 ~]# curl 192.167.195.129
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-xzz2z, ServerIP: 192.167.195.129!

#Use the following command to find the NodePort used by the demoapp Service, shown as <service port>:<NodePort>, for access from outside the cluster
[root@k8s-master01 ~]# kubectl create service nodeport demoapp --tcp=80:80
service/demoapp created

[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.100.33.92   <none>        80:32184/TCP   6s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        11m

[root@k8s-master01 ~]# curl 10.100.33.92
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-926r7, ServerIP: 192.169.111.132!
[root@k8s-master01 ~]# curl 10.100.33.92
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-9qghz, ServerIP: 192.170.21.193!
[root@k8s-master01 ~]# curl 10.100.33.92
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-xzz2z, ServerIP: 192.167.195.129!

#Users outside the cluster can access demoapp via the URL http://NodeIP:32184, e.g. by browsing to http://<kubernetes-node>:32184 from outside the cluster.
[root@rocky8 ~]# curl http://172.31.3.101:32184
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-9qghz, ServerIP: 192.170.21.193!
[root@rocky8 ~]# curl http://172.31.3.101:32184
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-xzz2z, ServerIP: 192.167.195.129!
[root@rocky8 ~]# curl http://172.31.3.101:32184
raymond demoapp v1.0 !! ClientIP: 192.162.55.64, ServerName: demoapp-c4787f9fc-926r7, ServerIP: 192.169.111.132!

#Scale up
[root@k8s-master01 ~]# kubectl scale deployment demoapp --replicas 5
deployment.apps/demoapp scaled
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-c4787f9fc-926r7   1/1     Running   0          3m3s
demoapp-c4787f9fc-9qghz   1/1     Running   0          3m3s
demoapp-c4787f9fc-g27kz   1/1     Running   0          6s
demoapp-c4787f9fc-xzz2z   1/1     Running   0          3m3s
demoapp-c4787f9fc-zwlnn   1/1     Running   0          6s

#Scale down
[root@k8s-master01 ~]# kubectl scale deployment demoapp --replicas 2
deployment.apps/demoapp scaled

#You can watch the Pods being terminated
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS        RESTARTS   AGE
demoapp-c4787f9fc-926r7   1/1     Running       0          3m20s
demoapp-c4787f9fc-9qghz   1/1     Terminating   0          3m20s
demoapp-c4787f9fc-g27kz   1/1     Terminating   0          23s
demoapp-c4787f9fc-xzz2z   1/1     Running       0          3m20s
demoapp-c4787f9fc-zwlnn   1/1     Terminating   0          23s

#Check again; the scale-down has completed
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-c4787f9fc-926r7   1/1     Running   0          3m53s
demoapp-c4787f9fc-xzz2z   1/1     Running   0          3m53s

3.16 Deploying Metrics-server

In recent Kubernetes releases, system resource metrics are collected by Metrics-server, which can report memory, disk, CPU, and network usage for nodes and Pods.

https://github.com/kubernetes-sigs/metrics-server

[root@k8s-master01 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Copy front-proxy-ca.crt from the Master01 node to all worker nodes:

[root@k8s-master01 ~]# for i in k8s-node01 k8s-node02 k8s-node03;do scp -o StrictHostKeyChecking=no /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt ; done

Modify the following content:

[root@k8s-master01 ~]# vim components.yaml
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
#add the lines below
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # the kubeadm certificate file is front-proxy-ca.crt
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra- 
...
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
#add the lines below
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki 
...
      volumes:
      - emptyDir: {}
        name: tmp-dir
#add the lines below
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki 
...  

Download the images and modify the image addresses:

#Note: skip the script below if you do not have Harbor
[root@k8s-master01 ~]# grep "image:" components.yaml 
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1

[root@k8s-master01 ~]# cat download_metrics_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_metrics_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Metrics镜像"${END}
    for i in ${images};do 
        nerdctl pull registry.aliyuncs.com/google_containers/$i
        nerdctl tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        nerdctl rmi registry.aliyuncs.com/google_containers/$i
        nerdctl push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_metrics_images.sh

root@k8s-master01:~# nerdctl images |grep metrics
harbor.raymonds.cc/google_containers/metrics-server                            v0.6.1     5ddc6458eb95    23 seconds ago    linux/amd64    69.3 MiB     26.8 MiB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml

[root@k8s-master01 ~]# grep "image:" components.yaml 
        image: harbor.raymonds.cc/google_containers/metrics-server:v0.6.1

#Note: if you do not have Harbor, run the following command instead
[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 registry.aliyuncs.com/google_containers\2@g' components.yaml

[root@k8s-master01 ~]# kubectl apply -f components.yaml

Check the status:

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-6dcf48c9dc-mxkw7                      1/1     Running   0          32s

[root@k8s-master01 ~]# kubectl top node
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   252m         12%    1314Mi          34%       
k8s-master02.example.local   220m         11%    986Mi           25%       
k8s-master03.example.local   195m         9%     1002Mi          26%       
k8s-node01.example.local     110m         5%     695Mi           18%       
k8s-node02.example.local     84m          4%     645Mi           16%       
k8s-node03.example.local     105m         5%     652Mi           17%     
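
Pod-level metrics can be queried the same way; for example (output omitted):

[root@k8s-master01 ~]# kubectl top pod -A --sort-by=cpu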

3.17 Deploying the Dashboard

The Dashboard displays the various resources in the cluster. It can also be used to view Pod logs in real time and to run commands inside containers.

https://github.com/kubernetes/dashboard/releases

Check which Kubernetes versions the corresponding Dashboard release supports:

[Image: Dashboard release compatibility table]
As shown in the table above, Dashboard v2.7.0 supports Kubernetes 1.25.

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

[root@k8s-master01 ~]# vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005 # add this line
  selector:
    k8s-app: kubernetes-dashboard
...

[root@k8s-master01 ~]# grep "image:" recommended.yaml 
          image: kubernetesui/dashboard:v2.7.0
          image: kubernetesui/metrics-scraper:v1.0.8

#Note: skip the script below if you do not have Harbor
[root@k8s-master01 ~]# cat download_dashboard_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_dashboard_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' recommended.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Dashboard镜像"${END}
    for i in ${images};do 
        nerdctl pull registry.aliyuncs.com/google_containers/$i
        nerdctl tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        nerdctl rmi registry.aliyuncs.com/google_containers/$i
        nerdctl push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Dashboard镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_dashboard_images.sh

root@k8s-master01:~# nerdctl images | grep -E "(dashboard|metrics-scraper)"
harbor.raymonds.cc/google_containers/dashboard                                 v2.7.0     2e500d29e9d5    29 seconds ago       linux/amd64    245.8 MiB    72.3 MiB
harbor.raymonds.cc/google_containers/metrics-scraper                           v1.0.8     76049887f07a    20 seconds ago       linux/amd64    41.8 MiB     18.8 MiB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) kubernetesui(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' recommended.yaml 

[root@k8s-master01 ~]# grep "image:"  recommended.yaml
          image: harbor.raymonds.cc/google_containers/dashboard:v2.7.0
          image: harbor.raymonds.cc/google_containers/metrics-scraper:v1.0.8

#Note: if you do not have Harbor, run the following command instead
[root@k8s-master01 ~]# sed -ri 's@(.*image:) kubernetesui(/.*)@\1 registry.aliyuncs.com/google_containers\2@g' recommended.yaml

[root@k8s-master01 ~]# kubectl apply -f recommended.yaml

[root@k8s-master01 ~]# kubectl get pod -n kubernetes-dashboard 
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-764c76b989-p5jdr   1/1     Running   0          9s
kubernetes-dashboard-865c67b459-xfv9j        1/1     Running   0          9s

Create the administrator user with admin.yaml:

[root@k8s-master01 ~]# cat > admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

[root@k8s-master01 ~]# kubectl apply -f admin.yaml
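
Since Kubernetes 1.24, creating a ServiceAccount no longer generates a token Secret automatically, and kubectl create token (used below) issues short-lived tokens. If a long-lived token is preferred, a token Secret can be created by hand; a sketch, where the Secret name admin-user-token is an assumption:

[root@k8s-master01 ~]# kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token   # hypothetical name
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF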

3.17.1 Logging in to the Dashboard

Add the following startup parameters to the Google Chrome launcher to work around the certificate error that blocks access to the Dashboard; see Figure 1-1:

--test-type --ignore-certificate-errors

[Image]

Figure 1-1: Google Chrome configuration

[root@k8s-master01 ~]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.96.173.161   <none>        443:30005/TCP   2m3s

Access the Dashboard at https://172.31.3.101:30005; see Figure 1-2.
[Image]
Figure 1-2: Dashboard login options

3.17.2 Logging in with a Token

Create a token:

[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6InBWZzRyck5xcWljTjdIQi0ydFhjTDRSYlQyVC1TSk1KQUU2X0oyMng4ZGsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjYzNDEzODc1LCJpYXQiOjE2NjM0MTAyNzUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiY2FiYzNmYjMtZTE5ZS00YmY1LWEzNjMtYTA5OGFlMzY2N2Q4In19LCJuYmYiOjE2NjM0MTAyNzUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.QhG3el-uMJH3gJjbDBmUkwjRpesqZxaOaTv2wupLlc8wFcBM0S4G7YpCtRl9twMbx2weUNJqEIleKC8zwY1JnVpgDxbPYz7FOg-gCE7FWEwGscFRhbS3fPMd5cv6l-gzSSUoPEuFotZad0yHXYsrSVxaopKoVxMO6MqSbchdZRssdjCDPhtwDps17aSDprt6QIS4_Tdk_9INLpAH4I4lZBCsnltorU8H93NntTA06t3l-fysHgYmh7puLWIKBwYw9f43n7JFUbLeSRg1a8nxOgTJYLsr3xbG41KPts9_1WHvPOoBTlvAXGOihIkxwsiYJglkT_BpSpGHJx7YaKBv7g

Enter the token value into the token field, then click Sign in to access the Dashboard; see Figure 1-3:

[Image]

[Image]

3.17.3 Logging in to the Dashboard with a kubeconfig File

[root@k8s-master01 ~]# cp /etc/kubernetes/admin.conf kubeconfig

root@k8s-master01:~# vim kubeconfig
...
#add the token at the very bottom
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6InBWZzRyck5xcWljTjdIQi0ydFhjTDRSYlQyVC1TSk1KQUU2X0oyMng4ZGsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjYzNDEzODc1LCJpYXQiOjE2NjM0MTAyNzUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiY2FiYzNmYjMtZTE5ZS00YmY1LWEzNjMtYTA5OGFlMzY2N2Q4In19LCJuYmYiOjE2NjM0MTAyNzUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.QhG3el-uMJH3gJjbDBmUkwjRpesqZxaOaTv2wupLlc8wFcBM0S4G7YpCtRl9twMbx2weUNJqEIleKC8zwY1JnVpgDxbPYz7FOg-gCE7FWEwGscFRhbS3fPMd5cv6l-gzSSUoPEuFotZad0yHXYsrSVxaopKoVxMO6MqSbchdZRssdjCDPhtwDps17aSDprt6QIS4_Tdk_9INLpAH4I4lZBCsnltorU8H93NntTA06t3l-fysHgYmh7puLWIKBwYw9f43n7JFUbLeSRg1a8nxOgTJYLsr3xbG41KPts9_1WHvPOoBTlvAXGOihIkxwsiYJglkT_BpSpGHJx7YaKBv7g
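
Instead of editing the file by hand, the token can be added non-interactively; a sketch, assuming the user entry in the copied kubeconfig is named kubernetes-admin (the kubeadm default):

[root@k8s-master01 ~]# TOKEN=$(kubectl -n kubernetes-dashboard create token admin-user)
[root@k8s-master01 ~]# kubectl config set-credentials kubernetes-admin --token="${TOKEN}" --kubeconfig=/root/kubeconfig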

[Image]

[Image]

4. Required Configuration Changes

Switch kube-proxy to ipvs mode. The ipvs configuration was commented out when the cluster was initialized, so it has to be changed manually:

Run on the Master01 node:

[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
iptables

[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
...
    mode: "ipvs"

Update the kube-proxy Pods:

[root@k8s-master01 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
daemonset.apps/kube-proxy patched

Verify the kube-proxy mode:

[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs

[root@k8s-master01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.31.3.101:30005 rr
  -> 192.170.21.196:8443          Masq    1      0          0         
TCP  172.31.3.101:32184 rr
  -> 192.167.195.129:80           Masq    1      0          0         
  -> 192.169.111.132:80           Masq    1      0          0         
TCP  192.162.55.64:30005 rr
  -> 192.170.21.196:8443          Masq    1      0          0         
TCP  192.162.55.64:32184 rr
  -> 192.167.195.129:80           Masq    1      0          0         
  -> 192.169.111.132:80           Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 172.31.3.101:6443            Masq    1      0          0         
  -> 172.31.3.102:6443            Masq    1      0          0         
  -> 172.31.3.103:6443            Masq    1      1          0         
TCP  10.96.0.10:53 rr
  -> 192.169.111.129:53           Masq    1      0          0         
  -> 192.169.111.131:53           Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 192.169.111.129:9153         Masq    1      0          0         
  -> 192.169.111.131:9153         Masq    1      0          0         
TCP  10.98.38.228:8000 rr
  -> 192.167.195.131:8000         Masq    1      0          0         
TCP  10.99.151.204:443 rr
  -> 192.170.21.195:4443          Masq    1      0          0         
TCP  10.99.239.87:443 rr
  -> 192.170.21.196:8443          Masq    1      0          0         
TCP  10.100.33.92:80 rr
  -> 192.167.195.129:80           Masq    1      0          0         
  -> 192.169.111.132:80           Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 192.169.111.129:53           Masq    1      0          0         
  -> 192.169.111.131:53           Masq    1      0          0 