
Binary Installation of Kubernetes (k8s) v1.26.0 with IPv4/IPv6 Dual Stack

https://github.com/cby-chen/Kubernetes — maintaining open source takes effort; please give the project a star, thank you.

Introduction

This guide covers a binary, highly available installation of Kubernetes (k8s) with IPv4+IPv6 dual-stack support.

I use IPv6 so the cluster can be reached over the public network, which is why I configure static IPv6 addresses.

If you have no IPv6 environment, or do not want to use IPv6, simply do not configure IPv6 addresses on the hosts.

Not configuring IPv6 does not affect the later steps; the cluster still supports IPv6, which leaves room for future expansion.

If you do not want IPv6, just leave IPv6 off the NICs. Do not delete or modify the IPv6-related configuration elsewhere, or things will break.

It is strongly recommended to read the documentation on GitHub!

The GitHub documentation is updated when problems are found, and documents for new versions are published there as soon as possible.

Manual installation project: https://github.com/cby-chen/Kubernetes

1. Environment

Hostname    IP address       Role           Software
Master01    192.168.1.61     master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Master02    192.168.1.62     master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Master03    192.168.1.63     master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Node01      192.168.1.64     node           kubelet, kube-proxy, nfs-client, nginx
Node02      192.168.1.65     node           kubelet, kube-proxy, nfs-client, nginx
(VIP)       192.168.8.66     load-balancer VIP

Software versions
kernel: 6.0.11
OS: CentOS 8 (steps for CentOS 7 and Ubuntu are noted where they differ)
kube-apiserver / kube-controller-manager / kube-scheduler / kubelet / kube-proxy: v1.26.0
etcd: v3.5.6
containerd: v1.6.10
docker: v20.10.21
cfssl: v1.6.3
cni: v1.1.1
crictl: v1.26.0
haproxy: v1.8.27
keepalived: v2.1.5

Network ranges

Physical hosts: 192.168.1.0/24

Service: 10.96.0.0/12

Pod: 172.16.0.0/12

A tarball with all required packages is available here: https://github.com/cby-chen/Kubernetes/releases/download/v1.26.0/kubernetes-v1.26.0.tar

1.1. Basic system environment configuration for k8s

1.2. Configure IP addresses

  1. ssh root@192.168.1.143 "nmcli con mod eth0 ipv4.addresses 192.168.1.61/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
  2. ssh root@192.168.1.144 "nmcli con mod eth0 ipv4.addresses 192.168.1.62/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
  3. ssh root@192.168.1.145 "nmcli con mod eth0 ipv4.addresses 192.168.1.63/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
  4. ssh root@192.168.1.146 "nmcli con mod eth0 ipv4.addresses 192.168.1.64/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
  5. ssh root@192.168.1.148 "nmcli con mod eth0 ipv4.addresses 192.168.1.65/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
  6. # If you have no IPv6, simply skip this step
  7. ssh root@192.168.1.61 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
  8. ssh root@192.168.1.62 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
  9. ssh root@192.168.1.63 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
  10. ssh root@192.168.1.64 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
  11. ssh root@192.168.1.65 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
  12. # Check the NIC configuration
  13. [root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
  14. TYPE=Ethernet
  15. PROXY_METHOD=none
  16. BROWSER_ONLY=no
  17. BOOTPROTO=none
  18. DEFROUTE=yes
  19. IPV4_FAILURE_FATAL=no
  20. IPV6INIT=yes
  21. IPV6_AUTOCONF=no
  22. IPV6_DEFROUTE=yes
  23. IPV6_FAILURE_FATAL=no
  24. IPV6_ADDR_GEN_MODE=stable-privacy
  25. NAME=eth0
  26. UUID=424fd260-c480-4899-97e6-6fc9722031e8
  27. DEVICE=eth0
  28. ONBOOT=yes
  29. IPADDR=192.168.1.61
  30. PREFIX=24
  31. GATEWAY=192.168.8.1
  32. DNS1=8.8.8.8
  33. IPV6ADDR=fc00:43f4:1eea:1::10/128
  34. IPV6_DEFAULTGW=fc00:43f4:1eea:1::1
  35. DNS2=2400:3200::1
  36. [root@localhost ~]#
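
A quick sanity check after the addresses are applied (a minimal sketch, assuming the interface is named eth0 as above and that the IPv6 part was configured):

# Show the addresses actually applied to eth0
ip -br addr show eth0
# Confirm the IPv4 gateway is reachable
ping -c 2 192.168.1.1
# Only if IPv6 was configured: confirm the IPv6 gateway is reachable
ping6 -c 2 fc00:43f4:1eea:1::1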

1.3. Set hostnames

  1. hostnamectl set-hostname k8s-master01
  2. hostnamectl set-hostname k8s-master02
  3. hostnamectl set-hostname k8s-master03
  4. hostnamectl set-hostname k8s-node01
  5. hostnamectl set-hostname k8s-node02
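
If you prefer to set all hostnames in one pass from a single machine, a loop along these lines works (a sketch; it assumes root SSH access to the addresses listed above):

for entry in 61:k8s-master01 62:k8s-master02 63:k8s-master03 64:k8s-node01 65:k8s-node02; do
  ip="192.168.1.${entry%%:*}"; name="${entry##*:}"
  ssh root@${ip} "hostnamectl set-hostname ${name}"
done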

1.4. Configure the yum/apt sources

  1. # For Ubuntu
  2. sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
  3. # For CentOS 7
  4. sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  5.          -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
  6.          -i.bak \
  7.          /etc/yum.repos.d/CentOS-*.repo
  8. # For CentOS 8
  9. sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  10.          -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
  11.          -i.bak \
  12.          /etc/yum.repos.d/CentOS-*.repo
  13. # For a private repository
  14. sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

1.5. Install essential tools

  1. # For Ubuntu
  2. apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl
  3. # For CentOS 7
  4. yum update -y && yum -y install  wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl
  5. # For CentOS 8
  6. yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Download the required tools (as needed)

  1. 1. Download the Kubernetes 1.26.x binary package
  2. GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md
  3. wget https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz
  4. 2. Download the etcdctl binary package
  5. GitHub binary download page: https://github.com/etcd-io/etcd/releases
  6. wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
  7. 3. Download the docker binary package
  8. Binary download page: https://download.docker.com/linux/static/stable/x86_64/
  9. wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz
  10. 4. Download cri-dockerd
  11. Binary download page: https://github.com/Mirantis/cri-dockerd/releases/
  12. wget  https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz
  13. 5. For containerd, download the binary package that bundles the cni plugins.
  14. GitHub download page: https://github.com/containerd/containerd/releases
  15. wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz
  16. 6. Download the cfssl binary packages
  17. GitHub binary download page: https://github.com/cloudflare/cfssl/releases
  18. wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
  19. wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
  20. wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64
  21. 7. Download the cni plugins
  22. GitHub download page: https://github.com/containernetworking/plugins/releases
  23. wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  24. 8. Download the crictl client binary
  25. GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
  26. wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz

1.7. Disable the firewall

  1. # Skip on Ubuntu; run on CentOS
  2. systemctl disable --now firewalld

1.8. Disable SELinux

  1. # Skip on Ubuntu; run on CentOS
  2. setenforce 0
  3. sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

  1. sed -ri 's/.*swap.*/#&/' /etc/fstab
  2. swapoff -a && sysctl -w vm.swappiness=0
  3. cat /etc/fstab
  4. # /dev/mapper/centos-swap swap                    swap    defaults        0 0
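
To confirm swap is really off and will stay off after a reboot, a couple of standard checks are enough:

# Prints nothing when no swap device is active
swapon --show
# The Swap line should show 0B
free -h
# No uncommented swap entry should remain in fstab
grep -v '^#' /etc/fstab | grep swap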

1.10. Network configuration (choose one of the two methods)

  1. # Skip on Ubuntu; run on CentOS
  2. # Method 1
  3. # systemctl disable --now NetworkManager
  4. # systemctl start network && systemctl enable network
  5. # Method 2
  6. cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
  7. [keyfile]
  8. unmanaged-devices=interface-name:cali*;interface-name:tunl*
  9. EOF
  10. systemctl restart NetworkManager

1.11. Set up time synchronization

  1. # Server side
  2. # apt install chrony -y
  3. yum install chrony -y
  4. cat > /etc/chrony.conf << EOF 
  5. pool ntp.aliyun.com iburst
  6. driftfile /var/lib/chrony/drift
  7. makestep 1.0 3
  8. rtcsync
  9. allow 192.168.1.0/24
  10. local stratum 10
  11. keyfile /etc/chrony.keys
  12. leapsectz right/UTC
  13. logdir /var/log/chrony
  14. EOF
  15. systemctl restart chronyd ; systemctl enable chronyd
  16. # Client side
  17. # apt install chrony -y
  18. yum install chrony -y
  19. cat > /etc/chrony.conf << EOF 
  20. pool 192.168.1.61 iburst
  21. driftfile /var/lib/chrony/drift
  22. makestep 1.0 3
  23. rtcsync
  24. keyfile /etc/chrony.keys
  25. leapsectz right/UTC
  26. logdir /var/log/chrony
  27. EOF
  28. systemctl restart chronyd ; systemctl enable chronyd
  29. # Verify from a client
  30. chronyc sources -v

1.12. Configure ulimit

  1. ulimit -SHn 65535
  2. cat >> /etc/security/limits.conf <<EOF
  3. * soft nofile 655360
  4. * hard nofile 131072
  5. * soft nproc 655350
  6. * hard nproc 655350
  7. * soft memlock unlimited
  8. * hard memlock unlimited
  9. EOF

1.13. Configure passwordless SSH login

  1. # apt install -y sshpass
  2. yum install -y sshpass
  3. ssh-keygen -f /root/.ssh/id_rsa -P ''
  4. export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65"
  5. export SSHPASS=123123
  6. for HOST in $IP;do
  7.      sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
  8. done
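
A quick way to confirm passwordless login works on every host (reusing the $IP variable defined above; BatchMode makes ssh fail instead of prompting for a password):

for HOST in $IP; do
  ssh -o BatchMode=yes root@$HOST hostname
done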

1.14. Add and enable the ELRepo repository

  1. # Skip on Ubuntu; run on CentOS
  2. # Configure the repository for RHEL 8 or CentOS 8
  3. yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y 
  4. sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
  5. sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 
  6. # Install ELRepo for RHEL 7, SL 7 or CentOS 7
  7. yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y 
  8. sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
  9. sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 
  10. # List the available packages
  11. yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

1.15. Upgrade the kernel to 4.18 or later

  1. # Skip on Ubuntu; run on CentOS
  2. # Install the latest kernel
  3. # I choose kernel-ml here; install kernel-lt instead if you want the long-term maintenance branch
  4. yum -y --enablerepo=elrepo-kernel  install  kernel-ml
  5. # Check which kernels are installed
  6. rpm -qa | grep kernel
  7. # Check the default kernel
  8. grubby --default-kernel
  9. # If it is not the newest one, set it with
  10. grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)
  11. # Reboot for the change to take effect
  12. reboot
  13. # For v8, the combined command is:
  14. yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --default-kernel ; reboot 
  15. # For v7, the combined command is:
  16. yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot
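
After the reboot, confirm that the running kernel matches the default boot entry (the exact version string depends on what ELRepo currently ships, 6.0.11 at the time of writing):

uname -r
grubby --default-kernel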

1.16. Install ipvsadm

  1. # For Ubuntu
  2. # apt install ipvsadm ipset sysstat conntrack -y
  3. # For CentOS
  4. yum install ipvsadm ipset sysstat conntrack libseccomp -y
  5. cat >> /etc/modules-load.d/ipvs.conf <<EOF 
  6. ip_vs
  7. ip_vs_rr
  8. ip_vs_wrr
  9. ip_vs_sh
  10. nf_conntrack
  11. ip_tables
  12. ip_set
  13. xt_set
  14. ipt_set
  15. ipt_rpfilter
  16. ipt_REJECT
  17. ipip
  18. EOF
  19. systemctl restart systemd-modules-load.service
  20. lsmod | grep -e ip_vs -e nf_conntrack
  21. ip_vs_sh               16384  0
  22. ip_vs_wrr              16384  0
  23. ip_vs_rr               16384  0
  24. ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
  25. nf_conntrack          176128  1 ip_vs
  26. nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
  27. nf_defrag_ipv4         16384  1 nf_conntrack
  28. libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters

  1. cat <<EOF > /etc/sysctl.d/k8s.conf
  2. net.ipv4.ip_forward = 1
  3. net.bridge.bridge-nf-call-iptables = 1
  4. fs.may_detach_mounts = 1
  5. vm.overcommit_memory=1
  6. vm.panic_on_oom=0
  7. fs.inotify.max_user_watches=89100
  8. fs.file-max=52706963
  9. fs.nr_open=52706963
  10. net.netfilter.nf_conntrack_max=2310720
  11. net.ipv4.tcp_keepalive_time = 600
  12. net.ipv4.tcp_keepalive_probes = 3
  13. net.ipv4.tcp_keepalive_intvl =15
  14. net.ipv4.tcp_max_tw_buckets = 36000
  15. net.ipv4.tcp_tw_reuse = 1
  16. net.ipv4.tcp_max_orphans = 327680
  17. net.ipv4.tcp_orphan_retries = 3
  18. net.ipv4.tcp_syncookies = 1
  19. net.ipv4.tcp_max_syn_backlog = 16384
  20. net.ipv4.ip_conntrack_max = 65536
  21. net.ipv4.tcp_max_syn_backlog = 16384
  22. net.ipv4.tcp_timestamps = 0
  23. net.core.somaxconn = 16384
  24. net.ipv6.conf.all.disable_ipv6 = 0
  25. net.ipv6.conf.default.disable_ipv6 = 0
  26. net.ipv6.conf.lo.disable_ipv6 = 0
  27. net.ipv6.conf.all.forwarding = 1
  28. EOF
  29. sysctl --system
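
Individual values can be spot-checked after `sysctl --system` has run, for example:

sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
# net.bridge.bridge-nf-call-iptables only becomes visible once br_netfilter is loaded (see section 2.1.1)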

1.18. Configure local hosts resolution on all nodes

  1. cat > /etc/hosts <<EOF
  2. 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.1.61 k8s-master01
  5. 192.168.1.62 k8s-master02
  6. 192.168.1.63 k8s-master03
  7. 192.168.1.64 k8s-node01
  8. 192.168.1.65 k8s-node02
  9. 192.168.8.66 lb-vip
  10. EOF

2. Installing the basic k8s components

Note: choose either 2.1 or 2.2, not both.

2.1. Install containerd as the runtime (recommended)

  1. # wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  2. cd kubernetes-v1.26.0/cby/
  3. # Create the directories required by the cni plugins
  4. mkdir -p /etc/cni/net.d /opt/cni/bin 
  5. # Unpack the cni binary package
  6. tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/
  7. # wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz
  8. # Unpack
  9. tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /
  10. # Create the service unit file
  11. cat > /etc/systemd/system/containerd.service <<EOF
  12. [Unit]
  13. Description=containerd container runtime
  14. Documentation=https://containerd.io
  15. After=network.target local-fs.target
  16. [Service]
  17. ExecStartPre=-/sbin/modprobe overlay
  18. ExecStart=/usr/local/bin/containerd
  19. Type=notify
  20. Delegate=yes
  21. KillMode=process
  22. Restart=always
  23. RestartSec=5
  24. LimitNPROC=infinity
  25. LimitCORE=infinity
  26. LimitNOFILE=infinity
  27. TasksMax=infinity
  28. OOMScoreAdjust=-999
  29. [Install]
  30. WantedBy=multi-user.target
  31. EOF

2.1.1 Configure the kernel modules required by containerd

  1. cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
  2. overlay
  3. br_netfilter
  4. EOF

2.1.2 Load the modules

systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by containerd

  1. cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
  2. net.bridge.bridge-nf-call-iptables  = 1
  3. net.ipv4.ip_forward                 = 1
  4. net.bridge.bridge-nf-call-ip6tables = 1
  5. EOF
  6. # Apply the kernel parameters
  7. sysctl --system

2.1.4 Create the containerd configuration file

  1. # Create the default configuration file
  2. mkdir -p /etc/containerd
  3. containerd config default | tee /etc/containerd/config.toml
  4. # Modify the containerd configuration file
  5. sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
  6. cat /etc/containerd/config.toml | grep SystemdCgroup
  7. sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
  8. cat /etc/containerd/config.toml | grep sandbox_image
  9. sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
  10. cat /etc/containerd/config.toml | grep certs.d
  11. mkdir /etc/containerd/certs.d/docker.io -pv
  12. cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
  13. server = "https://docker.io"
  14. [host."https://hub-mirror.c.163.com"]
  15.   capabilities = ["pull", "resolve"]
  16. EOF

2.1.5 Start containerd and enable it at boot

  1. systemctl daemon-reload
  2. systemctl enable --now containerd
  3. systemctl restart containerd

2.1.6 Configure the runtime endpoint for the crictl client

  1. # wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
  2. # Unpack
  3. tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
  4. # Generate the configuration file
  5. cat > /etc/crictl.yaml <<EOF
  6. runtime-endpoint: unix:///run/containerd/containerd.sock
  7. image-endpoint: unix:///run/containerd/containerd.sock
  8. timeout: 10
  9. debug: false
  10. EOF
  11. # Test
  12. systemctl restart  containerd
  13. crictl info
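
As an end-to-end check of the runtime and the registry mirror configured above, a test image pull can be attempted (a sketch; busybox is just an example of a small public image):

crictl pull docker.io/library/busybox:latest
crictl images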

2.2 Install docker as the runtime (not currently supported)

The docker approach is not supported for v1.26.0 at the moment.

2.2.1 Install docker

  1. # 二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/
  2. # wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz
  3. #解压
  4. tar xf docker-*.tgz 
  5. #拷贝二进制文件
  6. cp docker/* /usr/bin/
  7. #创建containerd的service文件,并且启动
  8. cat >/etc/systemd/system/containerd.service <<EOF
  9. [Unit]
  10. Description=containerd container runtime
  11. Documentation=https://containerd.io
  12. After=network.target local-fs.target
  13. [Service]
  14. ExecStartPre=-/sbin/modprobe overlay
  15. ExecStart=/usr/bin/containerd
  16. Type=notify
  17. Delegate=yes
  18. KillMode=process
  19. Restart=always
  20. RestartSec=5
  21. LimitNPROC=infinity
  22. LimitCORE=infinity
  23. LimitNOFILE=1048576
  24. TasksMax=infinity
  25. OOMScoreAdjust=-999
  26. [Install]
  27. WantedBy=multi-user.target
  28. EOF
  29. systemctl enable --now containerd.service
  30. #准备docker的service文件
  31. cat > /etc/systemd/system/docker.service <<EOF
  32. [Unit]
  33. Description=Docker Application Container Engine
  34. Documentation=https://docs.docker.com
  35. After=network-online.target firewalld.service containerd.service
  36. Wants=network-online.target
  37. Requires=docker.socket containerd.service
  38. [Service]
  39. Type=notify
  40. ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
  41. ExecReload=/bin/kill -s HUP $MAINPID
  42. TimeoutSec=0
  43. RestartSec=2
  44. Restart=always
  45. StartLimitBurst=3
  46. StartLimitInterval=60s
  47. LimitNOFILE=infinity
  48. LimitNPROC=infinity
  49. LimitCORE=infinity
  50. TasksMax=infinity
  51. Delegate=yes
  52. KillMode=process
  53. OOMScoreAdjust=-500
  54. [Install]
  55. WantedBy=multi-user.target
  56. EOF
  57. #准备docker的socket文件
  58. cat > /etc/systemd/system/docker.socket <<EOF
  59. [Unit]
  60. Description=Docker Socket for the API
  61. [Socket]
  62. ListenStream=/var/run/docker.sock
  63. SocketMode=0660
  64. SocketUser=root
  65. SocketGroup=docker
  66. [Install]
  67. WantedBy=sockets.target
  68. EOF
  69. #创建docker组
  70. groupadd docker
  71. #启动docker
  72. systemctl enable --now docker.socket  && systemctl enable --now docker.service
  73. #验证
  74. docker info
  75. cat >/etc/docker/daemon.json <<EOF
  76. {
  77.   "exec-opts": ["native.cgroupdriver=systemd"],
  78.   "registry-mirrors": [
  79.     "https://docker.mirrors.ustc.edu.cn",
  80.     "http://hub-mirror.c.163.com"
  81.   ],
  82.   "max-concurrent-downloads": 10,
  83.   "log-driver": "json-file",
  84.   "log-level": "warn",
  85.   "log-opts": {
  86.     "max-size": "10m",
  87.     "max-file": "3"
  88.     },
  89.   "data-root": "/var/lib/docker"
  90. }
  91. EOF
  92. systemctl restart docker

2.2.2 Install cri-dockerd

  1. # 由于1.24以及更高版本不支持docker所以安装cri-docker
  2. # 下载cri-docker 
  3. # wget  https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz
  4. # 解压cri-docker
  5. tar xvf cri-dockerd-*.amd64.tgz 
  6. cp cri-dockerd/cri-dockerd  /usr/bin/
  7. # 写入启动配置文件
  8. cat >  /usr/lib/systemd/system/cri-docker.service <<EOF
  9. [Unit]
  10. Description=CRI Interface for Docker Application Container Engine
  11. Documentation=https://docs.mirantis.com
  12. After=network-online.target firewalld.service docker.service
  13. Wants=network-online.target
  14. Requires=cri-docker.socket
  15. [Service]
  16. Type=notify
  17. ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
  18. ExecReload=/bin/kill -s HUP $MAINPID
  19. TimeoutSec=0
  20. RestartSec=2
  21. Restart=always
  22. StartLimitBurst=3
  23. StartLimitInterval=60s
  24. LimitNOFILE=infinity
  25. LimitNPROC=infinity
  26. LimitCORE=infinity
  27. TasksMax=infinity
  28. Delegate=yes
  29. KillMode=process
  30. [Install]
  31. WantedBy=multi-user.target
  32. EOF
  33. # 写入socket配置文件
  34. cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
  35. [Unit]
  36. Description=CRI Docker Socket for the API
  37. PartOf=cri-docker.service
  38. [Socket]
  39. ListenStream=%t/cri-dockerd.sock
  40. SocketMode=0660
  41. SocketUser=root
  42. SocketGroup=docker
  43. [Install]
  44. WantedBy=sockets.target
  45. EOF
  46. # 进行启动cri-docker
  47. systemctl daemon-reload ; systemctl enable cri-docker --now

2.3. Download and install k8s and etcd (on master01 only)

2.3.1 Unpack the k8s installation package

  1. # Download the packages
  2. # wget https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz
  3. # wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
  4. # Unpack the k8s installation files
  5. cd cby
  6. tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
  7. # Unpack the etcd installation files
  8. tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/
  9. # List the contents of /usr/local/bin
  10. ls /usr/local/bin/
  11. containerd               crictl       etcdctl                  kube-proxy
  12. containerd-shim          critest      kube-apiserver           kube-scheduler
  13. containerd-shim-runc-v1  ctd-decoder  kube-controller-manager
  14. containerd-shim-runc-v2  ctr          kubectl
  15. containerd-stress        etcd         kubelet

2.3.2 Check the versions

  1. [root@k8s-master01 ~]#  kubelet --version
  2. Kubernetes v1.26.0
  3. [root@k8s-master01 ~]# etcdctl version
  4. etcdctl version: 3.5.6
  5. API version: 3.5
  6. [root@k8s-master01 ~]#

2.3.3 Distribute the components to the other k8s nodes

  1. Master='k8s-master02 k8s-master03'
  2. Work='k8s-node01 k8s-node02'
  3. for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
  4. for NODE in $Work; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
  5. mkdir -p /opt/cni/bin
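
To verify the binaries landed on every node and are executable, something like the following can be run from master01 (it assumes the Master and Work variables above are still set):

for NODE in $Master $Work; do
  echo $NODE
  ssh $NODE "kubelet --version && kube-proxy --version"
done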

2.4 Create the certificate-related files

  1. mkdir pki
  2. cd pki
  3. cat > admin-csr.json << EOF 
  4. {
  5.   "CN": "admin",
  6.   "key": {
  7.     "algo": "rsa",
  8.     "size": 2048
  9.   },
  10.   "names": [
  11.     {
  12.       "C": "CN",
  13.       "ST": "Beijing",
  14.       "L": "Beijing",
  15.       "O": "system:masters",
  16.       "OU": "Kubernetes-manual"
  17.     }
  18.   ]
  19. }
  20. EOF
  21. cat > ca-config.json << EOF 
  22. {
  23.   "signing": {
  24.     "default": {
  25.       "expiry": "876000h"
  26.     },
  27.     "profiles": {
  28.       "kubernetes": {
  29.         "usages": [
  30.             "signing",
  31.             "key encipherment",
  32.             "server auth",
  33.             "client auth"
  34.         ],
  35.         "expiry": "876000h"
  36.       }
  37.     }
  38.   }
  39. }
  40. EOF
  41. cat > etcd-ca-csr.json  << EOF 
  42. {
  43.   "CN": "etcd",
  44.   "key": {
  45.     "algo": "rsa",
  46.     "size": 2048
  47.   },
  48.   "names": [
  49.     {
  50.       "C": "CN",
  51.       "ST": "Beijing",
  52.       "L": "Beijing",
  53.       "O": "etcd",
  54.       "OU": "Etcd Security"
  55.     }
  56.   ],
  57.   "ca": {
  58.     "expiry": "876000h"
  59.   }
  60. }
  61. EOF
  62. cat > front-proxy-ca-csr.json  << EOF 
  63. {
  64.   "CN": "kubernetes",
  65.   "key": {
  66.      "algo": "rsa",
  67.      "size": 2048
  68.   },
  69.   "ca": {
  70.     "expiry": "876000h"
  71.   }
  72. }
  73. EOF
  74. cat > kubelet-csr.json  << EOF 
  75. {
  76.   "CN": "system:node:\$NODE",
  77.   "key": {
  78.     "algo": "rsa",
  79.     "size": 2048
  80.   },
  81.   "names": [
  82.     {
  83.       "C": "CN",
  84.       "L": "Beijing",
  85.       "ST": "Beijing",
  86.       "O": "system:nodes",
  87.       "OU": "Kubernetes-manual"
  88.     }
  89.   ]
  90. }
  91. EOF
  92. cat > manager-csr.json << EOF 
  93. {
  94.   "CN": "system:kube-controller-manager",
  95.   "key": {
  96.     "algo": "rsa",
  97.     "size": 2048
  98.   },
  99.   "names": [
  100.     {
  101.       "C": "CN",
  102.       "ST": "Beijing",
  103.       "L": "Beijing",
  104.       "O": "system:kube-controller-manager",
  105.       "OU": "Kubernetes-manual"
  106.     }
  107.   ]
  108. }
  109. EOF
  110. cat > apiserver-csr.json << EOF 
  111. {
  112.   "CN": "kube-apiserver",
  113.   "key": {
  114.     "algo": "rsa",
  115.     "size": 2048
  116.   },
  117.   "names": [
  118.     {
  119.       "C": "CN",
  120.       "ST": "Beijing",
  121.       "L": "Beijing",
  122.       "O": "Kubernetes",
  123.       "OU": "Kubernetes-manual"
  124.     }
  125.   ]
  126. }
  127. EOF
  128. cat > ca-csr.json   << EOF 
  129. {
  130.   "CN": "kubernetes",
  131.   "key": {
  132.     "algo": "rsa",
  133.     "size": 2048
  134.   },
  135.   "names": [
  136.     {
  137.       "C": "CN",
  138.       "ST": "Beijing",
  139.       "L": "Beijing",
  140.       "O": "Kubernetes",
  141.       "OU": "Kubernetes-manual"
  142.     }
  143.   ],
  144.   "ca": {
  145.     "expiry": "876000h"
  146.   }
  147. }
  148. EOF
  149. cat > etcd-csr.json << EOF 
  150. {
  151.   "CN": "etcd",
  152.   "key": {
  153.     "algo": "rsa",
  154.     "size": 2048
  155.   },
  156.   "names": [
  157.     {
  158.       "C": "CN",
  159.       "ST": "Beijing",
  160.       "L": "Beijing",
  161.       "O": "etcd",
  162.       "OU": "Etcd Security"
  163.     }
  164.   ]
  165. }
  166. EOF
  167. cat > front-proxy-client-csr.json  << EOF 
  168. {
  169.   "CN": "front-proxy-client",
  170.   "key": {
  171.      "algo": "rsa",
  172.      "size": 2048
  173.   }
  174. }
  175. EOF
  176. cat > kube-proxy-csr.json  << EOF 
  177. {
  178.   "CN": "system:kube-proxy",
  179.   "key": {
  180.     "algo": "rsa",
  181.     "size": 2048
  182.   },
  183.   "names": [
  184.     {
  185.       "C": "CN",
  186.       "ST": "Beijing",
  187.       "L": "Beijing",
  188.       "O": "system:kube-proxy",
  189.       "OU": "Kubernetes-manual"
  190.     }
  191.   ]
  192. }
  193. EOF
  194. cat > scheduler-csr.json << EOF 
  195. {
  196.   "CN": "system:kube-scheduler",
  197.   "key": {
  198.     "algo": "rsa",
  199.     "size": 2048
  200.   },
  201.   "names": [
  202.     {
  203.       "C": "CN",
  204.       "ST": "Beijing",
  205.       "L": "Beijing",
  206.       "O": "system:kube-scheduler",
  207.       "OU": "Kubernetes-manual"
  208.     }
  209.   ]
  210. }
  211. EOF
  212. cd ..
  213. mkdir bootstrap
  214. cd bootstrap
  215. cat > bootstrap.secret.yaml << EOF 
  216. apiVersion: v1
  217. kind: Secret
  218. metadata:
  219.   name: bootstrap-token-c8ad9c
  220.   namespace: kube-system
  221. type: bootstrap.kubernetes.io/token
  222. stringData:
  223.   description: "The default bootstrap token generated by 'kubelet '."
  224.   token-id: c8ad9c
  225.   token-secret: 2e4d610cf3e7426e
  226.   usage-bootstrap-authentication: "true"
  227.   usage-bootstrap-signing: "true"
  228.   auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
  229. ---
  230. apiVersion: rbac.authorization.k8s.io/v1
  231. kind: ClusterRoleBinding
  232. metadata:
  233.   name: kubelet-bootstrap
  234. roleRef:
  235.   apiGroup: rbac.authorization.k8s.io
  236.   kind: ClusterRole
  237.   name: system:node-bootstrapper
  238. subjects:
  239. - apiGroup: rbac.authorization.k8s.io
  240.   kind: Group
  241.   name: system:bootstrappers:default-node-token
  242. ---
  243. apiVersion: rbac.authorization.k8s.io/v1
  244. kind: ClusterRoleBinding
  245. metadata:
  246.   name: node-autoapprove-bootstrap
  247. roleRef:
  248.   apiGroup: rbac.authorization.k8s.io
  249.   kind: ClusterRole
  250.   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  251. subjects:
  252. - apiGroup: rbac.authorization.k8s.io
  253.   kind: Group
  254.   name: system:bootstrappers:default-node-token
  255. ---
  256. apiVersion: rbac.authorization.k8s.io/v1
  257. kind: ClusterRoleBinding
  258. metadata:
  259.   name: node-autoapprove-certificate-rotation
  260. roleRef:
  261.   apiGroup: rbac.authorization.k8s.io
  262.   kind: ClusterRole
  263.   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  264. subjects:
  265. - apiGroup: rbac.authorization.k8s.io
  266.   kind: Group
  267.   name: system:nodes
  268. ---
  269. apiVersion: rbac.authorization.k8s.io/v1
  270. kind: ClusterRole
  271. metadata:
  272.   annotations:
  273.     rbac.authorization.kubernetes.io/autoupdate: "true"
  274.   labels:
  275.     kubernetes.io/bootstrapping: rbac-defaults
  276.   name: system:kube-apiserver-to-kubelet
  277. rules:
  278.   - apiGroups:
  279.       - ""
  280.     resources:
  281.       - nodes/proxy
  282.       - nodes/stats
  283.       - nodes/log
  284.       - nodes/spec
  285.       - nodes/metrics
  286.     verbs:
  287.       - "*"
  288. ---
  289. apiVersion: rbac.authorization.k8s.io/v1
  290. kind: ClusterRoleBinding
  291. metadata:
  292.   name: system:kube-apiserver
  293.   namespace: ""
  294. roleRef:
  295.   apiGroup: rbac.authorization.k8s.io
  296.   kind: ClusterRole
  297.   name: system:kube-apiserver-to-kubelet
  298. subjects:
  299.   - apiGroup: rbac.authorization.k8s.io
  300.     kind: User
  301.     name: kube-apiserver
  302. EOF
  303. cd ..
  304. mkdir coredns
  305. cd coredns
  306. cat > coredns.yaml << EOF 
  307. apiVersion: v1
  308. kind: ServiceAccount
  309. metadata:
  310.   name: coredns
  311.   namespace: kube-system
  312. ---
  313. apiVersion: rbac.authorization.k8s.io/v1
  314. kind: ClusterRole
  315. metadata:
  316.   labels:
  317.     kubernetes.io/bootstrapping: rbac-defaults
  318.   name: system:coredns
  319. rules:
  320.   - apiGroups:
  321.     - ""
  322.     resources:
  323.     - endpoints
  324.     - services
  325.     - pods
  326.     - namespaces
  327.     verbs:
  328.     - list
  329.     - watch
  330.   - apiGroups:
  331.     - discovery.k8s.io
  332.     resources:
  333.     - endpointslices
  334.     verbs:
  335.     - list
  336.     - watch
  337. ---
  338. apiVersion: rbac.authorization.k8s.io/v1
  339. kind: ClusterRoleBinding
  340. metadata:
  341.   annotations:
  342.     rbac.authorization.kubernetes.io/autoupdate: "true"
  343.   labels:
  344.     kubernetes.io/bootstrapping: rbac-defaults
  345.   name: system:coredns
  346. roleRef:
  347.   apiGroup: rbac.authorization.k8s.io
  348.   kind: ClusterRole
  349.   name: system:coredns
  350. subjects:
  351. - kind: ServiceAccount
  352.   name: coredns
  353.   namespace: kube-system
  354. ---
  355. apiVersion: v1
  356. kind: ConfigMap
  357. metadata:
  358.   name: coredns
  359.   namespace: kube-system
  360. data:
  361.   Corefile: |
  362.     .:53 {
  363.         errors
  364.         health {
  365.           lameduck 5s
  366.         }
  367.         ready
  368.         kubernetes cluster.local in-addr.arpa ip6.arpa {
  369.           fallthrough in-addr.arpa ip6.arpa
  370.         }
  371.         prometheus :9153
  372.         forward . /etc/resolv.conf {
  373.           max_concurrent 1000
  374.         }
  375.         cache 30
  376.         loop
  377.         reload
  378.         loadbalance
  379.     }
  380. ---
  381. apiVersion: apps/v1
  382. kind: Deployment
  383. metadata:
  384.   name: coredns
  385.   namespace: kube-system
  386.   labels:
  387.     k8s-app: kube-dns
  388.     kubernetes.io/name: "CoreDNS"
  389. spec:
  390.   # replicas: not specified here:
  391.   # 1. Default is 1.
  392.   # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  393.   strategy:
  394.     type: RollingUpdate
  395.     rollingUpdate:
  396.       maxUnavailable: 1
  397.   selector:
  398.     matchLabels:
  399.       k8s-app: kube-dns
  400.   template:
  401.     metadata:
  402.       labels:
  403.         k8s-app: kube-dns
  404.     spec:
  405.       priorityClassName: system-cluster-critical
  406.       serviceAccountName: coredns
  407.       tolerations:
  408.         - key: "CriticalAddonsOnly"
  409.           operator: "Exists"
  410.       nodeSelector:
  411.         kubernetes.io/os: linux
  412.       affinity:
  413.          podAntiAffinity:
  414.            preferredDuringSchedulingIgnoredDuringExecution:
  415.            - weight: 100
  416.              podAffinityTerm:
  417.                labelSelector:
  418.                  matchExpressions:
  419.                    - key: k8s-app
  420.                      operator: In
  421.                      values: ["kube-dns"]
  422.                topologyKey: kubernetes.io/hostname
  423.       containers:
  424.       - name: coredns
  425.         image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 
  426.         imagePullPolicy: IfNotPresent
  427.         resources:
  428.           limits:
  429.             memory: 170Mi
  430.           requests:
  431.             cpu: 100m
  432.             memory: 70Mi
  433.         args: [ "-conf", "/etc/coredns/Corefile" ]
  434.         volumeMounts:
  435.         - name: config-volume
  436.           mountPath: /etc/coredns
  437.           readOnly: true
  438.         ports:
  439.         - containerPort: 53
  440.           name: dns
  441.           protocol: UDP
  442.         - containerPort: 53
  443.           name: dns-tcp
  444.           protocol: TCP
  445.         - containerPort: 9153
  446.           name: metrics
  447.           protocol: TCP
  448.         securityContext:
  449.           allowPrivilegeEscalation: false
  450.           capabilities:
  451.             add:
  452.             - NET_BIND_SERVICE
  453.             drop:
  454.             - all
  455.           readOnlyRootFilesystem: true
  456.         livenessProbe:
  457.           httpGet:
  458.             path: /health
  459.             port: 8080
  460.             scheme: HTTP
  461.           initialDelaySeconds: 60
  462.           timeoutSeconds: 5
  463.           successThreshold: 1
  464.           failureThreshold: 5
  465.         readinessProbe:
  466.           httpGet:
  467.             path: /ready
  468.             port: 8181
  469.             scheme: HTTP
  470.       dnsPolicy: Default
  471.       volumes:
  472.         - name: config-volume
  473.           configMap:
  474.             name: coredns
  475.             items:
  476.             - key: Corefile
  477.               path: Corefile
  478. ---
  479. apiVersion: v1
  480. kind: Service
  481. metadata:
  482.   name: kube-dns
  483.   namespace: kube-system
  484.   annotations:
  485.     prometheus.io/port: "9153"
  486.     prometheus.io/scrape: "true"
  487.   labels:
  488.     k8s-app: kube-dns
  489.     kubernetes.io/cluster-service: "true"
  490.     kubernetes.io/name: "CoreDNS"
  491. spec:
  492.   selector:
  493.     k8s-app: kube-dns
  494.   clusterIP: 10.96.0.10 
  495.   ports:
  496.   - name: dns
  497.     port: 53
  498.     protocol: UDP
  499.   - name: dns-tcp
  500.     port: 53
  501.     protocol: TCP
  502.   - name: metrics
  503.     port: 9153
  504.     protocol: TCP
  505. EOF
  506. cd ..
  507. mkdir metrics-server
  508. cd metrics-server
  509. cat > metrics-server.yaml << EOF 
  510. apiVersion: v1
  511. kind: ServiceAccount
  512. metadata:
  513.   labels:
  514.     k8s-app: metrics-server
  515.   name: metrics-server
  516.   namespace: kube-system
  517. ---
  518. apiVersion: rbac.authorization.k8s.io/v1
  519. kind: ClusterRole
  520. metadata:
  521.   labels:
  522.     k8s-app: metrics-server
  523.     rbac.authorization.k8s.io/aggregate-to-admin: "true"
  524.     rbac.authorization.k8s.io/aggregate-to-edit: "true"
  525.     rbac.authorization.k8s.io/aggregate-to-view: "true"
  526.   name: system:aggregated-metrics-reader
  527. rules:
  528. - apiGroups:
  529.   - metrics.k8s.io
  530.   resources:
  531.   - pods
  532.   - nodes
  533.   verbs:
  534.   - get
  535.   - list
  536.   - watch
  537. ---
  538. apiVersion: rbac.authorization.k8s.io/v1
  539. kind: ClusterRole
  540. metadata:
  541.   labels:
  542.     k8s-app: metrics-server
  543.   name: system:metrics-server
  544. rules:
  545. - apiGroups:
  546.   - ""
  547.   resources:
  548.   - pods
  549.   - nodes
  550.   - nodes/stats
  551.   - namespaces
  552.   - configmaps
  553.   verbs:
  554.   - get
  555.   - list
  556.   - watch
  557. ---
  558. apiVersion: rbac.authorization.k8s.io/v1
  559. kind: RoleBinding
  560. metadata:
  561.   labels:
  562.     k8s-app: metrics-server
  563.   name: metrics-server-auth-reader
  564.   namespace: kube-system
  565. roleRef:
  566.   apiGroup: rbac.authorization.k8s.io
  567.   kind: Role
  568.   name: extension-apiserver-authentication-reader
  569. subjects:
  570. - kind: ServiceAccount
  571.   name: metrics-server
  572.   namespace: kube-system
  573. ---
  574. apiVersion: rbac.authorization.k8s.io/v1
  575. kind: ClusterRoleBinding
  576. metadata:
  577.   labels:
  578.     k8s-app: metrics-server
  579.   name: metrics-server:system:auth-delegator
  580. roleRef:
  581.   apiGroup: rbac.authorization.k8s.io
  582.   kind: ClusterRole
  583.   name: system:auth-delegator
  584. subjects:
  585. - kind: ServiceAccount
  586.   name: metrics-server
  587.   namespace: kube-system
  588. ---
  589. apiVersion: rbac.authorization.k8s.io/v1
  590. kind: ClusterRoleBinding
  591. metadata:
  592.   labels:
  593.     k8s-app: metrics-server
  594.   name: system:metrics-server
  595. roleRef:
  596.   apiGroup: rbac.authorization.k8s.io
  597.   kind: ClusterRole
  598.   name: system:metrics-server
  599. subjects:
  600. - kind: ServiceAccount
  601.   name: metrics-server
  602.   namespace: kube-system
  603. ---
  604. apiVersion: v1
  605. kind: Service
  606. metadata:
  607.   labels:
  608.     k8s-app: metrics-server
  609.   name: metrics-server
  610.   namespace: kube-system
  611. spec:
  612.   ports:
  613.   - name: https
  614.     port: 443
  615.     protocol: TCP
  616.     targetPort: https
  617.   selector:
  618.     k8s-app: metrics-server
  619. ---
  620. apiVersion: apps/v1
  621. kind: Deployment
  622. metadata:
  623.   labels:
  624.     k8s-app: metrics-server
  625.   name: metrics-server
  626.   namespace: kube-system
  627. spec:
  628.   selector:
  629.     matchLabels:
  630.       k8s-app: metrics-server
  631.   strategy:
  632.     rollingUpdate:
  633.       maxUnavailable: 0
  634.   template:
  635.     metadata:
  636.       labels:
  637.         k8s-app: metrics-server
  638.     spec:
  639.       containers:
  640.       - args:
  641.         - --cert-dir=/tmp
  642.         - --secure-port=4443
  643.         - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  644.         - --kubelet-use-node-status-port
  645.         - --metric-resolution=15s
  646.         - --kubelet-insecure-tls
  647.         - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
  648.         - --requestheader-username-headers=X-Remote-User
  649.         - --requestheader-group-headers=X-Remote-Group
  650.         - --requestheader-extra-headers-prefix=X-Remote-Extra-
  651.         image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
  652.         imagePullPolicy: IfNotPresent
  653.         livenessProbe:
  654.           failureThreshold: 3
  655.           httpGet:
  656.             path: /livez
  657.             port: https
  658.             scheme: HTTPS
  659.           periodSeconds: 10
  660.         name: metrics-server
  661.         ports:
  662.         - containerPort: 4443
  663.           name: https
  664.           protocol: TCP
  665.         readinessProbe:
  666.           failureThreshold: 3
  667.           httpGet:
  668.             path: /readyz
  669.             port: https
  670.             scheme: HTTPS
  671.           initialDelaySeconds: 20
  672.           periodSeconds: 10
  673.         resources:
  674.           requests:
  675.             cpu: 100m
  676.             memory: 200Mi
  677.         securityContext:
  678.           readOnlyRootFilesystem: true
  679.           runAsNonRoot: true
  680.           runAsUser: 1000
  681.         volumeMounts:
  682.         - mountPath: /tmp
  683.           name: tmp-dir
  684.         - name: ca-ssl
  685.           mountPath: /etc/kubernetes/pki
  686.       nodeSelector:
  687.         kubernetes.io/os: linux
  688.       priorityClassName: system-cluster-critical
  689.       serviceAccountName: metrics-server
  690.       volumes:
  691.       - emptyDir: {}
  692.         name: tmp-dir
  693.       - name: ca-ssl
  694.         hostPath:
  695.           path: /etc/kubernetes/pki
  696. ---
  697. apiVersion: apiregistration.k8s.io/v1
  698. kind: APIService
  699. metadata:
  700.   labels:
  701.     k8s-app: metrics-server
  702.   name: v1beta1.metrics.k8s.io
  703. spec:
  704.   group: metrics.k8s.io
  705.   groupPriorityMinimum: 100
  706.   insecureSkipTLSVerify: true
  707.   service:
  708.     name: metrics-server
  709.     namespace: kube-system
  710.   version: v1beta1
  711.   versionPriority: 100
  712. EOF

3. Generating the certificates

  1. # Download the certificate generation tools on the master01 node
  2. # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.2_linux_amd64" -O /usr/local/bin/cfssl
  3. # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.2_linux_amd64" -O /usr/local/bin/cfssljson
  4. # They are also included in the software bundle
  5. cp cfssl_*_linux_amd64 /usr/local/bin/cfssl
  6. cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson
  7. chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.1.1 Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

  1. cd pki
  2. # Generate the etcd certificate and key (if you expect to scale out later, you can list a few extra reserved IPs here)
  3. # If you have no IPv6, the IPv6 entries can be removed or kept
  4. cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
  5. cfssl gencert \
  6.    -ca=/etc/etcd/ssl/etcd-ca.pem \
  7.    -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  8.    -config=ca-config.json \
  9.    -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.61,192.168.1.62,192.168.1.63,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30 \
  10.    -profile=kubernetes \
  11.    etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
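
Before copying the certificates around, it is worth confirming that every intended host name and IP ended up in the SAN list (a small check using openssl):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"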

3.1.3 Copy the certificates to the other nodes

  1. Master='k8s-master02 k8s-master03'
  2. for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

  1. cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
  2. # Generate a root certificate; some extra IPs are listed as reserved addresses for adding nodes later
  3. # 10.96.0.1 is the first address of the service network (it has to be computed); 192.168.8.66 is the high-availability VIP
  4. # If you have no IPv6, the IPv6 entries can be removed or kept
  5. cfssl gencert   \
  6. -ca=/etc/kubernetes/pki/ca.pem   \
  7. -ca-key=/etc/kubernetes/pki/ca-key.pem   \
  8. -config=ca-config.json   \
  9. -hostname=10.96.0.1,192.168.8.66,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.8.66,192.168.1.67,192.168.1.68,192.168.1.69,192.168.1.70,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,fc00:43f4:1eea:1::40,fc00:43f4:1eea:1::50,fc00:43f4:1eea:1::60,fc00:43f4:1eea:1::70,fc00:43f4:1eea:1::80,fc00:43f4:1eea:1::90,fc00:43f4:1eea:1::100   \
  10. -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
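
The same kind of check is useful for the apiserver certificate; 10.96.0.1 and the VIP 192.168.8.66 should both appear among the SANs:

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"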

3.2.3 Generate the apiserver aggregation (front-proxy) certificate

  1. cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
  2. # A warning is printed here; it can be ignored
  3. cfssl gencert  \
  4. -ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
  5. -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
  6. -config=ca-config.json   \
  7. -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager certificate

Choose which high-availability scheme to use in section "5. High-availability configuration".
If you use haproxy + keepalived, use --server=https://192.168.8.66:8443.
If you use the nginx scheme, use --server=https://127.0.0.1:8443.

  1. cfssl gencert \
  2.    -ca=/etc/kubernetes/pki/ca.pem \
  3.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  4.    -config=ca-config.json \
  5.    -profile=kubernetes \
  6.    manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
  7. # Set a cluster entry
  8. # Choose which high-availability scheme to use in section "5. High-availability configuration"
  9. # If you use haproxy + keepalived, use `--server=https://192.168.8.66:8443`
  10. # If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
  11. kubectl config set-cluster kubernetes \
  12.      --certificate-authority=/etc/kubernetes/pki/ca.pem \
  13.      --embed-certs=true \
  14.      --server=https://127.0.0.1:8443 \
  15.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  16. # Set a context entry
  17. kubectl config set-context system:kube-controller-manager@kubernetes \
  18.     --cluster=kubernetes \
  19.     --user=system:kube-controller-manager \
  20.     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  21. # Set a user entry
  22. kubectl config set-credentials system:kube-controller-manager \
  23.      --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  24.      --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  25.      --embed-certs=true \
  26.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  27. # Set the default context
  28. kubectl config use-context system:kube-controller-manager@kubernetes \
  29.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  30. cfssl gencert \
  31.    -ca=/etc/kubernetes/pki/ca.pem \
  32.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  33.    -config=ca-config.json \
  34.    -profile=kubernetes \
  35.    scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
  36. # Choose which high-availability scheme to use in section "5. High-availability configuration"
  37. # If you use haproxy + keepalived, use `--server=https://192.168.8.66:8443`
  38. # If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
  39. kubectl config set-cluster kubernetes \
  40.      --certificate-authority=/etc/kubernetes/pki/ca.pem \
  41.      --embed-certs=true \
  42.      --server=https://127.0.0.1:8443 \
  43.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  44. kubectl config set-credentials system:kube-scheduler \
  45.      --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  46.      --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  47.      --embed-certs=true \
  48.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  49. kubectl config set-context system:kube-scheduler@kubernetes \
  50.      --cluster=kubernetes \
  51.      --user=system:kube-scheduler \
  52.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  53. kubectl config use-context system:kube-scheduler@kubernetes \
  54.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  55. cfssl gencert \
  56.    -ca=/etc/kubernetes/pki/ca.pem \
  57.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  58.    -config=ca-config.json \
  59.    -profile=kubernetes \
  60.    admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
  61. # Choose which high-availability scheme to use in section "5. High-availability configuration"
  62. # If you use haproxy + keepalived, use `--server=https://192.168.8.66:8443`
  63. # If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
  64. kubectl config set-cluster kubernetes     \
  65.   --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  66.   --embed-certs=true     \
  67.   --server=https://127.0.0.1:8443     \
  68.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  69. kubectl config set-credentials kubernetes-admin  \
  70.   --client-certificate=/etc/kubernetes/pki/admin.pem     \
  71.   --client-key=/etc/kubernetes/pki/admin-key.pem     \
  72.   --embed-certs=true     \
  73.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  74. kubectl config set-context kubernetes-admin@kubernetes    \
  75.   --cluster=kubernetes     \
  76.   --user=kubernetes-admin     \
  77.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  78. kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig
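
The kube-apiserver is not running yet, so these kubeconfig files cannot be tested against a live cluster, but their structure can already be inspected:

kubectl config view --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config get-contexts --kubeconfig=/etc/kubernetes/admin.kubeconfig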

3.2.5 Create the kube-proxy certificate

Choose which high-availability scheme to use in section "5. High-availability configuration".
If you use haproxy + keepalived, use --server=https://192.168.8.66:8443.
If you use the nginx scheme, use --server=https://127.0.0.1:8443.

  1. cfssl gencert \
  2.    -ca=/etc/kubernetes/pki/ca.pem \
  3.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  4.    -config=ca-config.json \
  5.    -profile=kubernetes \
  6.    kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
  7. # Choose which high-availability scheme to use in section "5. High-availability configuration"
  8. # If you use haproxy + keepalived, use `--server=https://192.168.8.66:8443`
  9. # If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
  10. kubectl config set-cluster kubernetes     \
  11.   --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  12.   --embed-certs=true     \
  13.   --server=https://127.0.0.1:8443     \
  14.   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  15. kubectl config set-credentials kube-proxy  \
  16.   --client-certificate=/etc/kubernetes/pki/kube-proxy.pem     \
  17.   --client-key=/etc/kubernetes/pki/kube-proxy-key.pem     \
  18.   --embed-certs=true     \
  19.   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  20. kubectl config set-context kube-proxy@kubernetes    \
  21.   --cluster=kubernetes     \
  22.   --user=kube-proxy     \
  23.   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  24. kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.2.6 Create the ServiceAccount key and secret

  1. openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
  2. openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.7 Send the certificates to the other master nodes

  1. # Create the directory on the other nodes first
  2. # mkdir  /etc/kubernetes/pki/ -p
  3. for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.8 Check the certificates

  1. ls /etc/kubernetes/pki/
  2. admin.csr          controller-manager.csr      kube-proxy.csr
  3. admin-key.pem      controller-manager-key.pem  kube-proxy-key.pem
  4. admin.pem          controller-manager.pem      kube-proxy.pem
  5. apiserver.csr      front-proxy-ca.csr          sa.key
  6. apiserver-key.pem  front-proxy-ca-key.pem      sa.pub
  7. apiserver.pem      front-proxy-ca.pem          scheduler.csr
  8. ca.csr             front-proxy-client.csr      scheduler-key.pem
  9. ca-key.pem         front-proxy-client-key.pem  scheduler.pem
  10. ca.pem             front-proxy-client.pem
  11. # There should be 26 files in total
  12. ls /etc/kubernetes/pki/ |wc -l
  13. 26

4. Configuring the k8s system components

4.1. etcd configuration

4.1.1 master01 configuration

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. cat > /etc/etcd/etcd.config.yml << EOF 
  3. name: 'k8s-master01'
  4. data-dir: /var/lib/etcd
  5. wal-dir: /var/lib/etcd/wal
  6. snapshot-count: 5000
  7. heartbeat-interval: 100
  8. election-timeout: 1000
  9. quota-backend-bytes: 0
  10. listen-peer-urls: 'https://192.168.1.61:2380'
  11. listen-client-urls: 'https://192.168.1.61:2379,http://127.0.0.1:2379'
  12. max-snapshots: 3
  13. max-wals: 5
  14. cors:
  15. initial-advertise-peer-urls: 'https://192.168.1.61:2380'
  16. advertise-client-urls: 'https://192.168.1.61:2379'
  17. discovery:
  18. discovery-fallback: 'proxy'
  19. discovery-proxy:
  20. discovery-srv:
  21. initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380'
  22. initial-cluster-token: 'etcd-k8s-cluster'
  23. initial-cluster-state: 'new'
  24. strict-reconfig-check: false
  25. enable-v2: true
  26. enable-pprof: true
  27. proxy: 'off'
  28. proxy-failure-wait: 5000
  29. proxy-refresh-interval: 30000
  30. proxy-dial-timeout: 1000
  31. proxy-write-timeout: 5000
  32. proxy-read-timeout: 0
  33. client-transport-security:
  34.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  35.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  36.   client-cert-auth: true
  37.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  38.   auto-tls: true
  39. peer-transport-security:
  40.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  41.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  42.   peer-client-cert-auth: true
  43.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  44.   auto-tls: true
  45. debug: false
  46. log-package-levels:
  47. log-outputs: [default]
  48. force-new-cluster: false
  49. EOF

4.1.2 master02 configuration

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. cat > /etc/etcd/etcd.config.yml << EOF 
  3. name: 'k8s-master02'
  4. data-dir: /var/lib/etcd
  5. wal-dir: /var/lib/etcd/wal
  6. snapshot-count: 5000
  7. heartbeat-interval: 100
  8. election-timeout: 1000
  9. quota-backend-bytes: 0
  10. listen-peer-urls: 'https://192.168.1.62:2380'
  11. listen-client-urls: 'https://192.168.1.62:2379,http://127.0.0.1:2379'
  12. max-snapshots: 3
  13. max-wals: 5
  14. cors:
  15. initial-advertise-peer-urls: 'https://192.168.1.62:2380'
  16. advertise-client-urls: 'https://192.168.1.62:2379'
  17. discovery:
  18. discovery-fallback: 'proxy'
  19. discovery-proxy:
  20. discovery-srv:
  21. initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380'
  22. initial-cluster-token: 'etcd-k8s-cluster'
  23. initial-cluster-state: 'new'
  24. strict-reconfig-check: false
  25. enable-v2: true
  26. enable-pprof: true
  27. proxy: 'off'
  28. proxy-failure-wait: 5000
  29. proxy-refresh-interval: 30000
  30. proxy-dial-timeout: 1000
  31. proxy-write-timeout: 5000
  32. proxy-read-timeout: 0
  33. client-transport-security:
  34.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  35.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  36.   client-cert-auth: true
  37.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  38.   auto-tls: true
  39. peer-transport-security:
  40.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  41.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  42.   peer-client-cert-auth: true
  43.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  44.   auto-tls: true
  45. debug: false
  46. log-package-levels:
  47. log-outputs: [default]
  48. force-new-cluster: false
  49. EOF

4.1.3 master03 configuration

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. cat > /etc/etcd/etcd.config.yml << EOF 
  3. name: 'k8s-master03'
  4. data-dir: /var/lib/etcd
  5. wal-dir: /var/lib/etcd/wal
  6. snapshot-count: 5000
  7. heartbeat-interval: 100
  8. election-timeout: 1000
  9. quota-backend-bytes: 0
  10. listen-peer-urls: 'https://192.168.1.63:2380'
  11. listen-client-urls: 'https://192.168.1.63:2379,http://127.0.0.1:2379'
  12. max-snapshots: 3
  13. max-wals: 5
  14. cors:
  15. initial-advertise-peer-urls: 'https://192.168.1.63:2380'
  16. advertise-client-urls: 'https://192.168.1.63:2379'
  17. discovery:
  18. discovery-fallback: 'proxy'
  19. discovery-proxy:
  20. discovery-srv:
  21. initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380'
  22. initial-cluster-token: 'etcd-k8s-cluster'
  23. initial-cluster-state: 'new'
  24. strict-reconfig-check: false
  25. enable-v2: true
  26. enable-pprof: true
  27. proxy: 'off'
  28. proxy-failure-wait: 5000
  29. proxy-refresh-interval: 30000
  30. proxy-dial-timeout: 1000
  31. proxy-write-timeout: 5000
  32. proxy-read-timeout: 0
  33. client-transport-security:
  34.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  35.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  36.   client-cert-auth: true
  37.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  38.   auto-tls: true
  39. peer-transport-security:
  40.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  41.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  42.   peer-client-cert-auth: true
  43.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  44.   auto-tls: true
  45. debug: false
  46. log-package-levels:
  47. log-outputs: [default]
  48. force-new-cluster: false
  49. EOF

4.2. Create the service (run on all master nodes)

4.2.1 Create etcd.service and start it

  1. cat > /usr/lib/systemd/system/etcd.service << EOF
  2. [Unit]
  3. Description=Etcd Service
  4. Documentation=https://coreos.com/etcd/docs/latest/
  5. After=network.target
  6. [Service]
  7. Type=notify
  8. ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
  9. Restart=on-failure
  10. RestartSec=10
  11. LimitNOFILE=65536
  12. [Install]
  13. WantedBy=multi-user.target
  14. Alias=etcd3.service
  15. EOF

4.2.2 Create the etcd certificate directory

  1. mkdir /etc/kubernetes/pki/etcd
  2. ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
  3. systemctl daemon-reload
  4. systemctl enable --now etcd

4.2.3 Check the etcd status

  1. # If you want to use IPv6, just replace the IPv4 addresses with IPv6 addresses
  2. export ETCDCTL_API=3
  3. etcdctl --endpoints="192.168.1.63:2379,192.168.1.62:2379,192.168.1.61:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
  4. +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  5. |    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
  6. +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  7. | 192.168.1.63:2379 | c0c8142615b9523f |   3.5.6 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
  8. | 192.168.1.62:2379 | de8396604d2c160d |   3.5.6 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
  9. | 192.168.1.61:2379 | 33c9d6df0037ab97 |   3.5.6 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
  10. +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  11. [root@k8s-master01 pki]#

5. High availability configuration (run on the Master servers)

Note: 5.1.1 and 5.1.2 are alternatives, pick one of the two.

Choose which high availability scheme to use:

In section 3.2 (generating the k8s certificates), the choice determines the --server value (see the sketch below):

If you use the nginx scheme, it is --server=https://127.0.0.1:8443
If you use haproxy and keepalived, it is --server=https://192.168.8.66:8443
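For reference, the only practical difference between the two schemes is the --server value passed to kubectl config set-cluster when the kubeconfig files are generated. A minimal sketch, using the admin kubeconfig purely as an example (apply the same value to every kubeconfig you generate):

  # nginx (local proxy) variant
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:8443 \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig
  # haproxy + keepalived (VIP) variant: use --server=https://192.168.8.66:8443 instead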

5.1 NGINX high availability scheme (recommended)

5.1.1 Build it yourself

Run on all nodes

  1. # Install the build environment
  2. yum install gcc -y
  3. # Download and extract the nginx source tarball
  4. wget http://nginx.org/download/nginx-1.22.1.tar.gz
  5. tar xvf nginx-*.tar.gz
  6. cd nginx-*
  7. # Compile
  8. ./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
  9. make && make install

5.1.2 Use my pre-built binary

  1. # Use the pre-built nginx
  2. cd kubernetes-v1.26.0/cby
  3. # Copy the pre-built nginx to every node
  4. node='k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02'
  5. for NODE in $node; do scp nginx.tar $NODE:/usr/local/; done
  6. # Run on the other nodes
  7. cd /usr/local/
  8. tar xvf nginx.tar

5.1.3 Write the startup configuration

Run on all hosts

  1. # Write the nginx configuration file
  2. cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
  3. worker_processes 1;
  4. events {
  5.     worker_connections  1024;
  6. }
  7. stream {
  8.     upstream backend {
  9.         least_conn;
  10.         hash \$remote_addr consistent;
  11.         server 192.168.1.61:6443        max_fails=3 fail_timeout=30s;
  12.         server 192.168.1.62:6443        max_fails=3 fail_timeout=30s;
  13.         server 192.168.1.63:6443        max_fails=3 fail_timeout=30s;
  14.     }
  15.     server {
  16.         listen 127.0.0.1:8443;
  17.         proxy_connect_timeout 1s;
  18.         proxy_pass backend;
  19.     }
  20. }
  21. EOF
  22. # Write the systemd unit file
  23. cat > /etc/systemd/system/kube-nginx.service <<EOF
  24. [Unit]
  25. Description=kube-apiserver nginx proxy
  26. After=network.target
  27. After=network-online.target
  28. Wants=network-online.target
  29. [Service]
  30. Type=forking
  31. ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
  32. ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
  33. ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
  34. PrivateTmp=true
  35. Restart=always
  36. RestartSec=5
  37. StartLimitInterval=0
  38. LimitNOFILE=65536
  39. [Install]
  40. WantedBy=multi-user.target
  41. EOF
  42. # Enable at boot and start
  43. systemctl enable --now  kube-nginx 
  44. systemctl restart kube-nginx
  45. systemctl status kube-nginx
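A quick way to confirm the proxy is in place (the 8443 backend only starts answering once kube-apiserver itself is running later on):

  # the stream proxy should be listening on the loopback port
  ss -lntp | grep 8443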

5.2 keepalived and haproxy high availability scheme (not recommended)

5.2.1 Install the keepalived and haproxy services

  1. systemctl disable --now firewalld
  2. setenforce 0
  3. sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
  4. yum -y install keepalived haproxy

5.2.2 Modify the haproxy configuration file (the same configuration on every lb host)

  1. # cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
  2. cat >/etc/haproxy/haproxy.cfg<<"EOF"
  3. global
  4.  maxconn 2000
  5.  ulimit-n 16384
  6.  log 127.0.0.1 local0 err
  7.  stats timeout 30s
  8. defaults
  9.  log global
  10.  mode http
  11.  option httplog
  12.  timeout connect 5000
  13.  timeout client 50000
  14.  timeout server 50000
  15.  timeout http-request 15s
  16.  timeout http-keep-alive 15s
  17. frontend monitor-in
  18.  bind *:33305
  19.  mode http
  20.  option httplog
  21.  monitor-uri /monitor
  22. frontend k8s-master
  23.  bind 0.0.0.0:8443
  24.  bind 127.0.0.1:8443
  25.  mode tcp
  26.  option tcplog
  27.  tcp-request inspect-delay 5s
  28.  default_backend k8s-master
  29. backend k8s-master
  30.  mode tcp
  31.  option tcplog
  32.  option tcp-check
  33.  balance roundrobin
  34.  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  35.  server  k8s-master01  192.168.1.61:6443 check
  36.  server  k8s-master02  192.168.1.62:6443 check
  37.  server  k8s-master03  192.168.1.63:6443 check
  38. EOF

5.2.3 Master01: configure the keepalived MASTER node

  1. #cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state MASTER
  16.     # note the NIC name
  17.     interface eth0 
  18.     mcast_src_ip 192.168.1.61
  19.     virtual_router_id 51
  20.     priority 100
  21.     nopreempt
  22.     advert_int 2
  23.     authentication {
  24.         auth_type PASS
  25.         auth_pass K8SHA_KA_AUTH
  26.     }
  27.     virtual_ipaddress {
  28.         192.168.8.66
  29.     }
  30.     track_script {
  31.       chk_apiserver 
  32. } }
  33. EOF

5.2.4 Master02: configure the keepalived BACKUP node

  1. # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state BACKUP
  16.     # note the NIC name
  17.     interface eth0
  18.     mcast_src_ip 192.168.1.62
  19.     virtual_router_id 51
  20.     priority 80
  21.     nopreempt
  22.     advert_int 2
  23.     authentication {
  24.         auth_type PASS
  25.         auth_pass K8SHA_KA_AUTH
  26.     }
  27.     virtual_ipaddress {
  28.         192.168.8.66
  29.     }
  30.     track_script {
  31.       chk_apiserver 
  32. } }
  33. EOF

5.2.5 Master03: configure the keepalived BACKUP node

  1. # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state BACKUP
  16.     # note the NIC name
  17.     interface eth0
  18.     mcast_src_ip 192.168.1.63
  19.     virtual_router_id 51
  20.     priority 50
  21.     nopreempt
  22.     advert_int 2
  23.     authentication {
  24.         auth_type PASS
  25.         auth_pass K8SHA_KA_AUTH
  26.     }
  27.     virtual_ipaddress {
  28.         192.168.8.66
  29.     }
  30.     track_script {
  31.       chk_apiserver 
  32. } }
  33. EOF

5.2.6 Health-check script configuration (on the lb hosts)

  1. cat >  /etc/keepalived/check_apiserver.sh << EOF
  2. #!/bin/bash
  3. err=0
  4. for k in \$(seq 1 3)
  5. do
  6.     check_code=\$(pgrep haproxy)
  7.     if [[ \$check_code == "" ]]; then
  8.         err=\$(expr \$err + 1)
  9.         sleep 1
  10.         continue
  11.     else
  12.         err=0
  13.         break
  14.     fi
  15. done
  16. if [[ \$err != "0" ]]; then
  17.     echo "systemctl stop keepalived"
  18.     /usr/bin/systemctl stop keepalived
  19.     exit 1
  20. else
  21.     exit 0
  22. fi
  23. EOF
  24. # Make the script executable
  25. chmod +x /etc/keepalived/check_apiserver.sh

5.2.7 Start the services

  1. systemctl daemon-reload
  2. systemctl enable --now haproxy
  3. systemctl enable --now keepalived

5.2.8 Test high availability

  1. # The VIP should be pingable
  2. [root@k8s-node02 ~]# ping 192.168.8.66
  3. # The VIP port should be reachable with telnet
  4. [root@k8s-node02 ~]# telnet 192.168.8.66 8443
  5. # Stop the master node and check whether the VIP fails over to a backup node (a minimal test is sketched below)
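A minimal failover test, assuming eth0 is the interface configured in the keepalived files above:

  # on master01 (the current MASTER), stop keepalived
  systemctl stop keepalived
  # on master02/master03, the VIP should now show up on eth0
  ip a s eth0 | grep 192.168.8.66
  # restore master01 afterwards
  systemctl start keepalived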

6. k8s component configuration (distinct from section 4)

Create the following directories on all k8s nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver (all master nodes)

6.1.1 master01 node configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \\
  8.       --v=2  \\
  9.       --allow-privileged=true  \\
  10.       --bind-address=0.0.0.0  \\
  11.       --secure-port=6443  \\
  12.       --advertise-address=192.168.1.61 \\
  13.       --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112  \\
  14.       --service-node-port-range=30000-32767  \\
  15.       --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\
  16.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
  17.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
  18.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
  19.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
  20.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
  21.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
  22.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
  23.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
  24.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
  25.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
  26.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  27.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
  28.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
  29.       --authorization-mode=Node,RBAC  \\
  30.       --enable-bootstrap-token-auth=true  \\
  31.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
  32.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
  33.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
  34.       --requestheader-allowed-names=aggregator  \\
  35.       --requestheader-group-headers=X-Remote-Group  \\
  36.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
  37.       --requestheader-username-headers=X-Remote-User \\
  38.       --enable-aggregator-routing=true
  39.       # --feature-gates=IPv6DualStack=true
  40.       # --token-auth-file=/etc/kubernetes/token.csv
  41. Restart=on-failure
  42. RestartSec=10s
  43. LimitNOFILE=65535
  44. [Install]
  45. WantedBy=multi-user.target
  46. EOF

6.1.2 master02 node configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \\
  8.       --v=2  \\
  9.       --allow-privileged=true  \\
  10.       --bind-address=0.0.0.0  \\
  11.       --secure-port=6443  \\
  12.       --advertise-address=192.168.1.62 \\
  13.       --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112  \\
  14.       --service-node-port-range=30000-32767  \\
  15.       --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\
  16.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
  17.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
  18.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
  19.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
  20.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
  21.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
  22.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
  23.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
  24.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
  25.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
  26.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  27.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
  28.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
  29.       --authorization-mode=Node,RBAC  \\
  30.       --enable-bootstrap-token-auth=true  \\
  31.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
  32.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
  33.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
  34.       --requestheader-allowed-names=aggregator  \\
  35.       --requestheader-group-headers=X-Remote-Group  \\
  36.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
  37.       --requestheader-username-headers=X-Remote-User \\
  38.       --enable-aggregator-routing=true
  39.       # --feature-gates=IPv6DualStack=true
  40.       # --token-auth-file=/etc/kubernetes/token.csv
  41. Restart=on-failure
  42. RestartSec=10s
  43. LimitNOFILE=65535
  44. [Install]
  45. WantedBy=multi-user.target
  46. EOF

6.1.3 master03 node configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \\
  8.       --v=2  \\
  9.       --allow-privileged=true  \\
  10.       --bind-address=0.0.0.0  \\
  11.       --secure-port=6443  \\
  12.       --advertise-address=192.168.1.63 \\
  13.       --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112  \\
  14.       --service-node-port-range=30000-32767  \\
  15.       --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\
  16.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
  17.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
  18.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
  19.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
  20.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
  21.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
  22.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
  23.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
  24.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
  25.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
  26.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  27.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
  28.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
  29.       --authorization-mode=Node,RBAC  \\
  30.       --enable-bootstrap-token-auth=true  \\
  31.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
  32.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
  33.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
  34.       --requestheader-allowed-names=aggregator  \\
  35.       --requestheader-group-headers=X-Remote-Group  \\
  36.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
  37.       --requestheader-username-headers=X-Remote-User \\
  38.       --enable-aggregator-routing=true
  39.       # --feature-gates=IPv6DualStack=true
  40.       # --token-auth-file=/etc/kubernetes/token.csv
  41. Restart=on-failure
  42. RestartSec=10s
  43. LimitNOFILE=65535
  44. [Install]
  45. WantedBy=multi-user.target
  46. EOF

6.1.4 Start the apiserver (all master nodes)

  1. systemctl daemon-reload && systemctl enable --now kube-apiserver
  2. # Check that the service started correctly (a quick health check is sketched below)
  3. # systemctl status kube-apiserver
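A simple local health check, assuming the default RBAC bootstrap still allows unauthenticated access to /healthz (otherwise rely on the unit status and logs):

  # each master should answer "ok" on its own 6443
  curl -k https://127.0.0.1:6443/healthz
  # if it does not come up, inspect the logs
  journalctl -u kube-apiserver --no-pager | tail -n 20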

6.2. Configure the kube-controller-manager service

  1. # Configure on all master nodes; the configuration is identical
  2. # 172.16.0.0/12 is the pod CIDR; set it to your own network as needed
  3. cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
  4. [Unit]
  5. Description=Kubernetes Controller Manager
  6. Documentation=https://github.com/kubernetes/kubernetes
  7. After=network.target
  8. [Service]
  9. ExecStart=/usr/local/bin/kube-controller-manager \\
  10.       --v=2 \\
  11.       --bind-address=127.0.0.1 \\
  12.       --root-ca-file=/etc/kubernetes/pki/ca.pem \\
  13.       --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
  14.       --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
  15.       --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
  16.       --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  17.       --leader-elect=true \\
  18.       --use-service-account-credentials=true \\
  19.       --node-monitor-grace-period=40s \\
  20.       --node-monitor-period=5s \\
  21.       --pod-eviction-timeout=2m0s \\
  22.       --controllers=*,bootstrapsigner,tokencleaner \\
  23.       --allocate-node-cidrs=true \\
  24.       --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
  25.       --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\
  26.       --node-cidr-mask-size-ipv4=24 \\
  27.       --node-cidr-mask-size-ipv6=120 \\
  28.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem 
  29.       # --feature-gates=IPv6DualStack=true
  30. Restart=always
  31. RestartSec=10s
  32. [Install]
  33. WantedBy=multi-user.target
  34. EOF

6.2.1 Start kube-controller-manager and check its status

  1. systemctl daemon-reload
  2. systemctl enable --now kube-controller-manager
  3. # systemctl  status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 Configure on all master nodes (the configuration is identical)

  1. cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
  2. [Unit]
  3. Description=Kubernetes Scheduler
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-scheduler \\
  8.       --v=2 \\
  9.       --bind-address=127.0.0.1 \\
  10.       --leader-elect=true \\
  11.       --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  12. Restart=always
  13. RestartSec=10s
  14. [Install]
  15. WantedBy=multi-user.target
  16. EOF

6.3.2 Start and check the service status

  1. systemctl daemon-reload
  2. systemctl enable --now kube-scheduler
  3. # systemctl status kube-scheduler
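Both components bind their secure ports to 127.0.0.1 with the flags above; a quick local health check (assuming the default behaviour of allowing unauthenticated /healthz):

  curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
  curl -k https://127.0.0.1:10259/healthz   # kube-scheduler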

7. TLS Bootstrapping configuration

7.1 Configure on master01

  1. # Use whichever high availability scheme you chose in section 5
  2. # If using haproxy and keepalived: `--server=https://192.168.8.66:8443`
  3. # If using the nginx scheme: `--server=https://127.0.0.1:8443`
  4. cd bootstrap
  5. kubectl config set-cluster kubernetes     \
  6. --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  7. --embed-certs=true     --server=https://127.0.0.1:8443     \
  8. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  9. kubectl config set-credentials tls-bootstrap-token-user     \
  10. --token=c8ad9c.2e4d610cf3e7426e \
  11. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  12. kubectl config set-context tls-bootstrap-token-user@kubernetes     \
  13. --cluster=kubernetes     \
  14. --user=tls-bootstrap-token-user     \
  15. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  16. kubectl config use-context tls-bootstrap-token-user@kubernetes     \
  17. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  18. # The token is defined in bootstrap.secret.yaml; if you change it, edit that file
  19. mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; if everything looks good, continue with the following steps

  1. kubectl get cs
  2. Warning: v1 ComponentStatus is deprecated in v1.19+
  3. NAME                 STATUS    MESSAGE                         ERROR
  4. scheduler            Healthy   ok                              
  5. controller-manager   Healthy   ok                              
  6. etcd-0               Healthy   {"health":"true","reason":""}   
  7. etcd-2               Healthy   {"health":"true","reason":""}   
  8. etcd-1               Healthy   {"health":"true","reason":""}   
  9. # Be sure to run this, do not forget!!!
  10. kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certificates from master01 to the node(s)

  1. cd /etc/kubernetes/
  2. for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2. kubelet configuration

Note: 8.2.1 and 8.2.2 must match the runtime chosen in 2.1 and 2.2 above

8.2.1 When using docker as the Runtime (not supported for now)

v1.26.0 does not support the docker approach for the time being

  1. cat > /usr/lib/systemd/system/kubelet.service << EOF
  2. [Unit]
  3. Description=Kubernetes Kubelet
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. [Service]
  6. ExecStart=/usr/local/bin/kubelet \\
  7.     --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
  8.     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  9.     --config=/etc/kubernetes/kubelet-conf.yml \\
  10.     --container-runtime-endpoint=unix:///run/cri-dockerd.sock  \\
  11.     --node-labels=node.kubernetes.io/node=
  12. [Install]
  13. WantedBy=multi-user.target
  14. EOF

8.2.2 When using Containerd as the Runtime (recommended)

  1. mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
  2. # Configure the kubelet service on all k8s nodes
  3. cat > /usr/lib/systemd/system/kubelet.service << EOF
  4. [Unit]
  5. Description=Kubernetes Kubelet
  6. Documentation=https://github.com/kubernetes/kubernetes
  7. After=containerd.service
  8. Requires=containerd.service
  9. [Service]
  10. ExecStart=/usr/local/bin/kubelet \\
  11.     --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
  12.     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  13.     --config=/etc/kubernetes/kubelet-conf.yml \\
  14.     --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
  15.     --node-labels=node.kubernetes.io/node=
  16.     # --feature-gates=IPv6DualStack=true
  17.     # --container-runtime=remote
  18.     # --runtime-request-timeout=15m
  19.     # --cgroup-driver=systemd
  20. [Install]
  21. WantedBy=multi-user.target
  22. EOF

8.2.3 Create the kubelet configuration file on all k8s nodes

  1. cat > /etc/kubernetes/kubelet-conf.yml <<EOF
  2. apiVersion: kubelet.config.k8s.io/v1beta1
  3. kind: KubeletConfiguration
  4. address: 0.0.0.0
  5. port: 10250
  6. readOnlyPort: 10255
  7. authentication:
  8.   anonymous:
  9.     enabled: false
  10.   webhook:
  11.     cacheTTL: 2m0s
  12.     enabled: true
  13.   x509:
  14.     clientCAFile: /etc/kubernetes/pki/ca.pem
  15. authorization:
  16.   mode: Webhook
  17.   webhook:
  18.     cacheAuthorizedTTL: 5m0s
  19.     cacheUnauthorizedTTL: 30s
  20. cgroupDriver: systemd
  21. cgroupsPerQOS: true
  22. clusterDNS:
  23. - 10.96.0.10
  24. clusterDomain: cluster.local
  25. containerLogMaxFiles: 5
  26. containerLogMaxSize: 10Mi
  27. contentType: application/vnd.kubernetes.protobuf
  28. cpuCFSQuota: true
  29. cpuManagerPolicy: none
  30. cpuManagerReconcilePeriod: 10s
  31. enableControllerAttachDetach: true
  32. enableDebuggingHandlers: true
  33. enforceNodeAllocatable:
  34. - pods
  35. eventBurst: 10
  36. eventRecordQPS: 5
  37. evictionHard:
  38.   imagefs.available: 15%
  39.   memory.available: 100Mi
  40.   nodefs.available: 10%
  41.   nodefs.inodesFree: 5%
  42. evictionPressureTransitionPeriod: 5m0s
  43. failSwapOn: true
  44. fileCheckFrequency: 20s
  45. hairpinMode: promiscuous-bridge
  46. healthzBindAddress: 127.0.0.1
  47. healthzPort: 10248
  48. httpCheckFrequency: 20s
  49. imageGCHighThresholdPercent: 85
  50. imageGCLowThresholdPercent: 80
  51. imageMinimumGCAge: 2m0s
  52. iptablesDropBit: 15
  53. iptablesMasqueradeBit: 14
  54. kubeAPIBurst: 10
  55. kubeAPIQPS: 5
  56. makeIPTablesUtilChains: true
  57. maxOpenFiles: 1000000
  58. maxPods: 110
  59. nodeStatusUpdateFrequency: 10s
  60. oomScoreAdj: -999
  61. podPidsLimit: -1
  62. registryBurst: 10
  63. registryPullQPS: 5
  64. resolvConf: /etc/resolv.conf
  65. rotateCertificates: true
  66. runtimeRequestTimeout: 2m0s
  67. serializeImagePulls: true
  68. staticPodPath: /etc/kubernetes/manifests
  69. streamingConnectionIdleTimeout: 4h0m0s
  70. syncFrequency: 1m0s
  71. volumeStatsAggPeriod: 1m0s
  72. EOF

8.2.4 Start kubelet

  1. systemctl daemon-reload
  2. systemctl restart kubelet
  3. systemctl enable --now kubelet
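A quick sanity check after starting:

  systemctl is-active kubelet
  # watch the logs if the node does not register
  journalctl -u kubelet --no-pager | tail -n 20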

8.2.5 Check the cluster

  1. [root@k8s-master01 ~]# kubectl  get node
  2. NAME           STATUS     ROLES    AGE   VERSION
  3. k8s-master01   Ready    <none>   18s   v1.26.0
  4. k8s-master02   Ready    <none>   16s   v1.26.0
  5. k8s-master03   Ready    <none>   16s   v1.26.0
  6. k8s-node01     Ready    <none>   14s   v1.26.0
  7. k8s-node02     Ready    <none>   14s   v1.26.0
  8. [root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Send the kubeconfig to the other nodes

  1. for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
  2. for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig;  done

8.3.2 Add the kube-proxy service file on all k8s nodes

  1. cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
  2. [Unit]
  3. Description=Kubernetes Kube Proxy
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-proxy \\
  8.   --config=/etc/kubernetes/kube-proxy.yaml \\
  9.   --v=2
  10. Restart=always
  11. RestartSec=10s
  12. [Install]
  13. WantedBy=multi-user.target
  14. EOF

8.3.3 Add the kube-proxy configuration on all k8s nodes

  1. cat > /etc/kubernetes/kube-proxy.yaml << EOF
  2. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  3. bindAddress: 0.0.0.0
  4. clientConnection:
  5.   acceptContentTypes: ""
  6.   burst: 10
  7.   contentType: application/vnd.kubernetes.protobuf
  8.   kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  9.   qps: 5
  10. clusterCIDR: 172.16.0.0/12,fc00:2222::/112
  11. configSyncPeriod: 15m0s
  12. conntrack:
  13.   max: null
  14.   maxPerCore: 32768
  15.   min: 131072
  16.   tcpCloseWaitTimeout: 1h0m0s
  17.   tcpEstablishedTimeout: 24h0m0s
  18. enableProfiling: false
  19. healthzBindAddress: 0.0.0.0:10256
  20. hostnameOverride: ""
  21. iptables:
  22.   masqueradeAll: false
  23.   masqueradeBit: 14
  24.   minSyncPeriod: 0s
  25.   syncPeriod: 30s
  26. ipvs:
  27.   masqueradeAll: true
  28.   minSyncPeriod: 5s
  29.   scheduler: "rr"
  30.   syncPeriod: 30s
  31. kind: KubeProxyConfiguration
  32. metricsBindAddress: 127.0.0.1:10249
  33. mode: "ipvs"
  34. nodePortAddresses: null
  35. oomScoreAdj: -999
  36. portRange: ""
  37. udpIdleTimeout: 250ms
  38. EOF

8.3.4 Start kube-proxy

  1. systemctl daemon-reload
  2.  systemctl restart kube-proxy
  3.  systemctl enable --now kube-proxy
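To confirm kube-proxy really came up in IPVS mode, it reports the active mode on the metrics port configured above (127.0.0.1:10249); the second check needs ipvsadm installed:

  curl 127.0.0.1:10249/proxyMode
  # expected output: ipvs
  ipvsadm -Ln | head    # virtual servers appear once services exist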

9. Install the network plugin

Note: 9.1 and 9.2 are alternatives, choose one. It is recommended to take a snapshot here before proceeding, so you can roll back if problems come up later.

**CentOS 7 must upgrade libseccomp, otherwise the network plugin cannot be installed**

  1. # https://github.com/opencontainers/runc/releases
  2. # Upgrade runc
  3. wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
  4. install -m 755 runc.amd64 /usr/local/sbin/runc
  5. cp -p /usr/local/sbin/runc  /usr/local/bin/runc
  6. cp -p /usr/local/sbin/runc  /usr/bin/runc
  7. # Download a libseccomp package newer than 2.4
  8. yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
  9. # Check the current version
  10. [root@k8s-master-1 ~]# rpm -qa | grep libseccomp
  11. libseccomp-2.5.1-1.el8.x86_64

9.1 Install Calico

9.1.1 Change the calico network segment

  1. # If you do not have public IPv6 locally, use calico.yaml
  2. kubectl apply -f calico.yaml
  3. # If you do have public IPv6 locally, use calico-ipv6.yaml
  4. # kubectl apply -f calico-ipv6.yaml 
  5. # If the docker images cannot be pulled, you can use my registry
  6. # sed -i "s#docker.io/calico/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" calico.yaml 
  7. # sed -i "s#docker.io/calico/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" calico-ipv6.yaml
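The bundled calico.yaml is expected to already carry this guide's pod CIDR. If you start from the upstream manifest instead, the IP pool may still be the upstream default (and CALICO_IPV4POOL_CIDR may even be commented out); a hedged sketch of pointing it at this cluster's pod network:

  # only needed if the manifest still contains the upstream default pool
  sed -i 's#192.168.0.0/16#172.16.0.0/12#g' calico.yaml
  grep -A1 CALICO_IPV4POOL_CIDR calico.yaml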

9.1.2 Check the container status

  1. # calico initialization is slow; be patient, it takes roughly ten minutes
  2. [root@k8s-master01 ~]# kubectl  get pod -A
  3. NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
  4. kube-system   calico-kube-controllers-6747f75cdc-fbvvc   1/1     Running   0          61s
  5. kube-system   calico-node-fs7hl                          1/1     Running   0          61s
  6. kube-system   calico-node-jqz58                          1/1     Running   0          61s
  7. kube-system   calico-node-khjlg                          1/1     Running   0          61s
  8. kube-system   calico-node-wmf8q                          1/1     Running   0          61s
  9. kube-system   calico-node-xc6gn                          1/1     Running   0          61s
  10. kube-system   calico-typha-6cdc4b4fbc-57snb              1/1     Running   0          61s

9.2 Install cilium

9.2.1 Install helm

  1. # [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
  2. # [root@k8s-master01 ~]# chmod 700 get_helm.sh
  3. # [root@k8s-master01 ~]# ./get_helm.sh
  4. wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz
  5. tar xvf helm-canary-linux-amd64.tar.gz
  6. cp linux-amd64/helm /usr/local/bin/

9.2.2 Install cilium

  1. # Add the repo
  2. helm repo add cilium https://helm.cilium.io
  3. # Install with the default parameters
  4. helm install cilium cilium/cilium --namespace kube-system
  5. # Enable ipv6
  6. # helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true
  7. # Enable routing information and the monitoring plugins
  8. # helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

9.2.3 Check

  1. [root@k8s-master01 ~]# kubectl  get pod -A | grep cil
  2. kube-system   cilium-gmr6c                       1/1     Running       0             5m3s
  3. kube-system   cilium-kzgdj                       1/1     Running       0             5m3s
  4. kube-system   cilium-operator-69b677f97c-6pw4k   1/1     Running       0             5m3s
  5. kube-system   cilium-operator-69b677f97c-xzzdk   1/1     Running       0             5m3s
  6. kube-system   cilium-q2rnr                       1/1     Running       0             5m3s
  7. kube-system   cilium-smx5v                       1/1     Running       0             5m3s
  8. kube-system   cilium-tdjq4                       1/1     Running       0             5m3s
  9. [root@k8s-master01 ~]#

9.2.4 Download the dedicated monitoring dashboards

  1. [root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
  2. [root@k8s-master01 yaml]#
  3. [root@k8s-master01 yaml]# kubectl  apply -f monitoring-example.yaml
  4. namespace/cilium-monitoring created
  5. serviceaccount/prometheus-k8s created
  6. configmap/grafana-config created
  7. configmap/grafana-cilium-dashboard created
  8. configmap/grafana-cilium-operator-dashboard created
  9. configmap/grafana-hubble-dashboard created
  10. configmap/prometheus created
  11. clusterrole.rbac.authorization.k8s.io/prometheus created
  12. clusterrolebinding.rbac.authorization.k8s.io/prometheus created
  13. service/grafana created
  14. service/prometheus created
  15. deployment.apps/grafana created
  16. deployment.apps/prometheus created
  17. [root@k8s-master01 yaml]#

9.2.5 Download and deploy the connectivity test cases

  1. [root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
  2. [root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml
  3. [root@k8s-master01 yaml]# kubectl  apply -f connectivity-check.yaml
  4. deployment.apps/echo-a created
  5. deployment.apps/echo-b created
  6. deployment.apps/echo-b-host created
  7. deployment.apps/pod-to-a created
  8. deployment.apps/pod-to-external-1111 created
  9. deployment.apps/pod-to-a-denied-cnp created
  10. deployment.apps/pod-to-a-allowed-cnp created
  11. deployment.apps/pod-to-external-fqdn-allow-google-cnp created
  12. deployment.apps/pod-to-b-multi-node-clusterip created
  13. deployment.apps/pod-to-b-multi-node-headless created
  14. deployment.apps/host-to-b-multi-node-clusterip created
  15. deployment.apps/host-to-b-multi-node-headless created
  16. deployment.apps/pod-to-b-multi-node-nodeport created
  17. deployment.apps/pod-to-b-intra-node-nodeport created
  18. service/echo-a created
  19. service/echo-b created
  20. service/echo-b-headless created
  21. service/echo-b-host-headless created
  22. ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
  23. ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
  24. ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
  25. [root@k8s-master01 yaml]#

9.2.6 Check the pods

  1. [root@k8s-master01 yaml]# kubectl  get pod -A
  2. NAMESPACE           NAME                                                     READY   STATUS    RESTARTS      AGE
  3. cilium-monitoring   grafana-59957b9549-6zzqh                                 1/1     Running   0             10m
  4. cilium-monitoring   prometheus-7c8c9684bb-4v9cl                              1/1     Running   0             10m
  5. default             chenby-75b5d7fbfb-7zjsr                                  1/1     Running   0             27h
  6. default             chenby-75b5d7fbfb-hbvr8                                  1/1     Running   0             27h
  7. default             chenby-75b5d7fbfb-ppbzg                                  1/1     Running   0             27h
  8. default             echo-a-6799dff547-pnx6w                                  1/1     Running   0             10m
  9. default             echo-b-fc47b659c-4bdg9                                   1/1     Running   0             10m
  10. default             echo-b-host-67fcfd59b7-28r9s                             1/1     Running   0             10m
  11. default             host-to-b-multi-node-clusterip-69c57975d6-z4j2z          1/1     Running   0             10m
  12. default             host-to-b-multi-node-headless-865899f7bb-frrmc           1/1     Running   0             10m
  13. default             pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x                    1/1     Running   0             10m
  14. default             pod-to-a-denied-cnp-65cc5ff97b-2rzb8                     1/1     Running   0             10m
  15. default             pod-to-a-dfc64f564-p7xcn                                 1/1     Running   0             10m
  16. default             pod-to-b-intra-node-nodeport-677868746b-trk2l            1/1     Running   0             10m
  17. default             pod-to-b-multi-node-clusterip-76bbbc677b-knfq2           1/1     Running   0             10m
  18. default             pod-to-b-multi-node-headless-698c6579fd-mmvd7            1/1     Running   0             10m
  19. default             pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz            1/1     Running   0             10m
  20. default             pod-to-external-1111-8459965778-pjt9b                    1/1     Running   0             10m
  21. default             pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q   1/1     Running   0             10m
  22. kube-system         cilium-7rfj6                                             1/1     Running   0             56s
  23. kube-system         cilium-d4cch                                             1/1     Running   0             56s
  24. kube-system         cilium-h5x8r                                             1/1     Running   0             56s
  25. kube-system         cilium-operator-5dbddb6dbf-flpl5                         1/1     Running   0             56s
  26. kube-system         cilium-operator-5dbddb6dbf-gcznc                         1/1     Running   0             56s
  27. kube-system         cilium-t2xlz                                             1/1     Running   0             56s
  28. kube-system         cilium-z65z7                                             1/1     Running   0             56s
  29. kube-system         coredns-665475b9f8-jkqn8                                 1/1     Running   1 (36h ago)   36h
  30. kube-system         hubble-relay-59d8575-9pl9z                               1/1     Running   0             56s
  31. kube-system         hubble-ui-64d4995d57-nsv9j                               2/2     Running   0             56s
  32. kube-system         metrics-server-776f58c94b-c6zgs                          1/1     Running   1 (36h ago)   37h
  33. [root@k8s-master01 yaml]#

9.2.7 Change to NodePort

  1. [root@k8s-master01 yaml]# kubectl  edit svc  -n kube-system hubble-ui
  2. service/hubble-ui edited
  3. [root@k8s-master01 yaml]#
  4. [root@k8s-master01 yaml]# kubectl  edit svc  -n cilium-monitoring grafana
  5. service/grafana edited
  6. [root@k8s-master01 yaml]#
  7. [root@k8s-master01 yaml]# kubectl  edit svc  -n cilium-monitoring prometheus
  8. service/prometheus edited
  9. [root@k8s-master01 yaml]#
  10. type: NodePort

9.2.8 Check the ports

  1. [root@k8s-master01 yaml]# kubectl get svc -A | grep monit
  2. cilium-monitoring   grafana                NodePort    10.100.250.17    <none>        3000:30707/TCP           15m
  3. cilium-monitoring   prometheus             NodePort    10.100.131.243   <none>        9090:31155/TCP           15m
  4. [root@k8s-master01 yaml]#
  5. [root@k8s-master01 yaml]# kubectl get svc -A | grep hubble
  6. kube-system         hubble-metrics         ClusterIP   None             <none>        9965/TCP                 5m12s
  7. kube-system         hubble-peer            ClusterIP   10.100.150.29    <none>        443/TCP                  5m12s
  8. kube-system         hubble-relay           ClusterIP   10.109.251.34    <none>        80/TCP                   5m12s
  9. kube-system         hubble-ui              NodePort    10.102.253.59    <none>        80:31219/TCP             5m12s
  10. [root@k8s-master01 yaml]#

9.2.9 Access

  1. http://192.168.1.61:30707
  2. http://192.168.1.61:31155
  3. http://192.168.1.61:31219

10. Install CoreDNS

10.1 The following steps are performed only on master01

10.1.1 Modify the file

  1. cd coredns/
  2. cat coredns.yaml | grep clusterIP:
  3.   clusterIP: 10.96.0.10
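The clusterIP above must match the clusterDNS value in kubelet-conf.yml (10.96.0.10 in this guide). If you picked a different service CIDR, adjust it before applying; a minimal sketch, with a placeholder for your own DNS IP:

  sed -i "s#10.96.0.10#<your-cluster-dns-ip>#g" coredns.yaml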

10.1.2 Install

  1. kubectl  create -f coredns.yaml 
  2. serviceaccount/coredns created
  3. clusterrole.rbac.authorization.k8s.io/system:coredns created
  4. clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
  5. configmap/coredns created
  6. deployment.apps/coredns created
  7. service/kube-dns created

11. Install Metrics Server

11.1 The following steps are performed only on master01

11.1.1 Install Metrics-server

In recent versions of Kubernetes, system resource metrics are collected by Metrics-server, which gathers memory, disk, CPU and network usage for nodes and Pods.

  1. # Install metrics server
  2. cd metrics-server/
  3. kubectl  apply -f metrics-server.yaml

11.1.2 Wait a moment, then check the status

  1. kubectl  top node
  2. NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
  3. k8s-master01   154m         1%     1715Mi          21%       
  4. k8s-master02   151m         1%     1274Mi          16%       
  5. k8s-master03   523m         6%     1345Mi          17%       
  6. k8s-node01     84m          1%     671Mi           8%        
  7. k8s-node02     73m          0%     727Mi           9%        
  8. k8s-node03     96m          1%     769Mi           9%        
  9. k8s-node04     68m          0%     673Mi           8%        
  10. k8s-node05     82m          1%     679Mi           8%

12. Cluster verification

12.1 Deploy a pod resource

  1. cat<<EOF | kubectl apply -f -
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: busybox
  6.   namespace: default
  7. spec:
  8.   containers:
  9.   - name: busybox
  10.     image: docker.io/library/busybox:1.28
  11.     command:
  12.       - sleep
  13.       - "3600"
  14.     imagePullPolicy: IfNotPresent
  15.   restartPolicy: Always
  16. EOF
  17. # Check
  18. kubectl  get pod
  19. NAME      READY   STATUS    RESTARTS   AGE
  20. busybox   1/1     Running   0          17s

12.2 Use the pod to resolve kubernetes in the default namespace

  1. kubectl get svc
  2. NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  3. kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h
  4. kubectl exec  busybox -n default -- nslookup kubernetes
  5. Server:    10.96.0.10
  6. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  7. Name:      kubernetes
  8. Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test whether resolution works across namespaces

  1. kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
  2. Server:    10.96.0.10
  3. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  4. Name:      kube-dns.kube-system
  5. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53

  1. telnet 10.96.0.1 443
  2. Trying 10.96.0.1...
  3. Connected to 10.96.0.1.
  4. Escape character is '^]'.
  5.  telnet 10.96.0.10 53
  6. Trying 10.96.0.10...
  7. Connected to 10.96.0.10.
  8. Escape character is '^]'.
  9. curl 10.96.0.10:53
  10. curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

  1. kubectl get po -owide
  2. NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
  3. busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>
  4.  kubectl get po -n kube-system -owide
  5. NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
  6. calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
  7. calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.61     k8s-master01   <none>           <none>
  8. calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
  9. calico-node-mdps8                          1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
  10. calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.63     k8s-master03   <none>           <none>
  11. calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.62     k8s-master02   <none>           <none>
  12. calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
  13. calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.61     k8s-master01   <none>           <none>
  14. calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
  15. coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
  16. metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>
  17. # Exec into busybox and ping pods on other nodes
  18. kubectl exec -ti busybox -- sh
  19. / # ping 192.168.1.64
  20. PING 192.168.1.64 (192.168.1.64): 56 data bytes
  21. 64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms
  22. 64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms
  23. 64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms
  24. 64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms
  25. 64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms
  26. # Connectivity proves that this pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm the 3 replicas are spread across different nodes (delete them when done)

  1. cat > deployments.yaml << EOF
  2. apiVersion: apps/v1
  3. kind: Deployment
  4. metadata:
  5.   name: nginx-deployment
  6.   labels:
  7.     app: nginx
  8. spec:
  9.   replicas: 3
  10.   selector:
  11.     matchLabels:
  12.       app: nginx
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: nginx
  17.     spec:
  18.       containers:
  19.       - name: nginx
  20.         image: docker.io/library/nginx:1.14.2
  21.         ports:
  22.         - containerPort: 80
  23. EOF
  24. kubectl  apply -f deployments.yaml 
  25. deployment.apps/nginx-deployment created
  26. kubectl  get pod 
  27. NAME                               READY   STATUS    RESTARTS   AGE
  28. busybox                            1/1     Running   0          6m25s
  29. nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
  30. nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
  31. nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s
  32. # Delete nginx
  33. [root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

  1. helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
  2. helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard

13.1 Change the dashboard svc to NodePort; skip this if it already is

  1. kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  2.   type: NodePort

13.2 Check the port number

  1. kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
  2. NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
  3. kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s

13.3 Create a token

  1. kubectl -n kubernetes-dashboard create token admin-user
  2. eyJhbGciOiJSUzI1NiIsImtpZCI6IkFZWENLUmZQWTViWUF4UV81NWJNb0JEa0I4R2hQMHVac2J3RDM3RHJLcFEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjcwNjc0MzY1LCJpYXQiOjE2NzA2NzA3NjUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiODkyODRjNGUtYzk0My00ODkzLWE2ZjctNTYxZWJhMzE2NjkwIn19LCJuYmYiOjE2NzA2NzA3NjUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.DFxzS802Iu0lldikjhyp2diZSpVAUoSTbOjerH2t7ToM0TMoPQdcdDyvBTcNlIew3F01u4D6atNV7J36IGAnHEX0Q_cYAb00jINjy1YXGz0gRhRE0hMrXay2-Qqo6tAORTLUVWrctW6r0li5q90rkBjr5q06Lt5BTpUhbhbgLQQJWwiEVseCpUEikxD6wGnB1tCamFyjs3sa-YnhhqCR8wUAZcTaeVbMxCuHVAuSqnIkxat9nyxGcsjn7sqmBqYjjOGxp5nhHPDj03TWmSJlb_Csc7pvLsB9LYm0IbER4xDwtLZwMAjYWRbjKxbkUp4L9v5CZ4PbIHap9qQp1FXreA
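The create token command assumes an admin-user ServiceAccount already exists in the kubernetes-dashboard namespace (the repo normally ships a yaml that creates it). If it is missing in your environment, a minimal sketch of creating it with cluster-admin rights:

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF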

13.4 Log in to the dashboard

https://192.168.1.61:30034/

14. Install ingress

14.1 Deploy

  1. cd ingress/
  2. kubectl  apply -f deploy.yaml 
  3. kubectl  apply -f backend.yaml 
  4. # After the above resources are created, then run:
  5. kubectl  apply -f ingress-demo-app.yaml 
  6. kubectl  get ingress
  7. NAME               CLASS   HOSTS                            ADDRESS     PORTS   AGE
  8. ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.1.62   80      7s

14.2 Filter and check the ingress ports

  1. [root@hello ~/yaml]# kubectl  get svc -A | grep ingress
  2. ingress-nginx          ingress-nginx-controller             NodePort    10.104.231.36    <none>        80:32636/TCP,443:30579/TCP   104s
  3. ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.101.85.88     <none>        443/TCP                      105s
  4. [root@hello ~/yaml]#
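A quick way to verify routing through the controller is to hit the NodePort with the Host headers defined in ingress-demo-app.yaml; the address and port below are taken from the outputs above, so substitute your own:

  curl -H 'Host: hello.chenby.cn' http://192.168.1.62:32636
  curl -H 'Host: demo.chenby.cn'  http://192.168.1.62:32636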

15. IPv6 test

  1. # Deploy the application
  2. cat<<EOF | kubectl apply -f -
  3. apiVersion: apps/v1
  4. kind: Deployment
  5. metadata:
  6.   name: chenby
  7. spec:
  8.   replicas: 3
  9.   selector:
  10.     matchLabels:
  11.       app: chenby
  12.   template:
  13.     metadata:
  14.       labels:
  15.         app: chenby
  16.     spec:
  17.       containers:
  18.       - name: chenby
  19.         image: docker.io/library/nginx
  20.         resources:
  21.           limits:
  22.             memory: "128Mi"
  23.             cpu: "500m"
  24.         ports:
  25.         - containerPort: 80
  26. ---
  27. apiVersion: v1
  28. kind: Service
  29. metadata:
  30.   name: chenby
  31. spec:
  32.   ipFamilyPolicy: PreferDualStack
  33.   ipFamilies:
  34.   - IPv6
  35.   - IPv4
  36.   type: NodePort
  37.   selector:
  38.     app: chenby
  39.   ports:
  40.   - port: 80
  41.     targetPort: 80
  42. EOF
  43. # Check the port
  44. [root@k8s-master01 ~]# kubectl  get svc
  45. NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
  46. chenby         NodePort    fd00::a29c       <none>        80:30779/TCP   5s
  47. [root@k8s-master01 ~]# 
  48. # Access via the internal network
  49. [root@localhost yaml]# curl -I http://[fd00::a29c]
  50. HTTP/1.1 200 OK
  51. Server: nginx/1.21.6
  52. Date: Thu, 05 May 2022 10:20:35 GMT
  53. Content-Type: text/html
  54. Content-Length: 615
  55. Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
  56. Connection: keep-alive
  57. ETag: "61f01158-267"
  58. Accept-Ranges: bytes
  59. [root@localhost yaml]# curl -I http://192.168.1.61:30779
  60. HTTP/1.1 200 OK
  61. Server: nginx/1.21.6
  62. Date: Thu, 05 May 2022 10:20:59 GMT
  63. Content-Type: text/html
  64. Content-Length: 615
  65. Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
  66. Connection: keep-alive
  67. ETag: "61f01158-267"
  68. Accept-Ranges: bytes
  69. [root@localhost yaml]# 
  70. # Access via the public network
  71. [root@localhost yaml]# curl -I http://[2409:8a10:9e18:9020::10]:30779
  72. HTTP/1.1 200 OK
  73. Server: nginx/1.21.6
  74. Date: Thu, 05 May 2022 10:20:54 GMT
  75. Content-Type: text/html
  76. Content-Length: 615
  77. Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
  78. Connection: keep-alive
  79. ETag: "61f01158-267"
  80. Accept-Ranges: bytes
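To double-check that the cluster is really handing out both address families, you can inspect the pod and service addresses directly; a small sketch using the chenby deployment created above:

  kubectl get pod -l app=chenby -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs}{"\n"}{end}'
  kubectl get svc chenby -o jsonpath='{.spec.clusterIPs}{"\n"}'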

16. Install command-line auto-completion

  1. yum install bash-completion -y
  2. source /usr/share/bash-completion/bash_completion
  3. source <(kubectl completion bash)
  4. echo "source <(kubectl completion bash)" >> ~/.bashrc

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, personal blog

Search for 《小陈运维》 on any platform

Articles are mainly published on the WeChat official account: 《Linux运维交流社区》
