
Installing a Kubernetes Cluster on Ubuntu 22.04 (cri-docker + haproxy + keepalived)


I. References

[kubernetes] Building a highly available k8s cluster (three masters, three workers) - CSDN blog

https://www.cnblogs.com/wangguishe/p/17823687.html#_label13

II. Preface

        The k8s clusters I have built so far all had a single master node, which is fine for learning but not suitable for production. This post shows how to use haproxy and keepalived to build a highly available, multi-master Kubernetes cluster.

III. Version Plan

Role     | IP                | Software
kmaster1 | 192.168.48.210/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1); haproxy 2.3.6; osixia/keepalived 2.0.20
kmaster2 | 192.168.48.211/24 | same as kmaster1
kmaster3 | 192.168.48.212/24 | same as kmaster1
knode1   | 192.168.48.213/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1)
knode2   | 192.168.48.214/24 | same as knode1
knode3   | 192.168.48.215/24 | same as knode1
knode4   | 192.168.48.216/24 | same as knode1

        Note: to avoid cluster split-brain, it is recommended to use an odd number of master nodes; see the official Kubernetes documentation for details.

IV. Creating the Virtual Machines

1. Run on all nodes

1) System initialization

        The seven machines here are cloned from an Ubuntu 22.04 template, so each one needs basic system initialization first. The steps below are for kmaster1; repeat them on the other hosts accordingly.

# Power on each machine in turn and set its IP address and hostname
## Enable root login over SSH
sudo su
passwd root
vim /etc/ssh/sshd_config
...
PermitRootLogin yes
sudo service ssh restart
# Set the hostname and /etc/hosts
hostnamectl set-hostname kmaster1
vim /etc/hosts
...
192.168.48.210 kmaster1
192.168.48.211 kmaster2
192.168.48.212 kmaster3
192.168.48.213 knode1
192.168.48.214 knode2
192.168.48.215 knode3
192.168.48.216 knode4
## Configure a static IP on Ubuntu 22.04
ssh ziu@192.168.48.x
sudo su
cp /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bbk
vim /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    ens33:
      dhcp4: false
      addresses: [192.168.48.210/24]   ## change the IP according to the version plan table
      optional: true
      routes:
        - to: default
          via: 192.168.48.2
      nameservers:
        addresses: [192.168.48.2]
  version: 2
netplan apply

2) Enable IPv4 forwarding and let iptables see bridged traffic

# br_netfilter is a kernel module that adds firewalling on bridge devices, which is why it is also
# called a transparent or bridge-mode firewall; it is easy to deploy, unobtrusive and secure.
# The overlay module supports the overlay filesystem, which creates a layered view on top of existing
# filesystems (a union mount): several filesystems are merged into one logical filesystem with layers
# and priorities, so files and directories can be combined without actually copying or moving them.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
# Confirm that the br_netfilter and overlay modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay
# Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables and
# net.ipv4.ip_forward are set to 1 in your sysctl configuration:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

3) Configure IPVS

(The following rationale for using IPVS is based on an AI-generated answer.)

For a multi-master Kubernetes cluster, IPVS (IP Virtual Server) is usually recommended as the service proxy, mainly for the following reasons:

(1) Performance:
   IPVS is built into the Linux kernel, and its hash-table based lookups give higher throughput and lower latency than the iptables (Netfilter) NAT mode in large, high-concurrency scenarios.

(2) Rich load-balancing algorithms:
   IPVS supports round robin, least connections, source hashing, weighted round robin and more, so the balancing strategy can be chosen to fit the workload.

(3) Session persistence:
   IPVS supports several forms of session persistence, such as by source IP, destination IP, or TCP/UDP port, which matters for long-lived connections and stateful applications.

(4) Better scalability and stability:
   In large clusters the number of iptables rules grows quickly with the number of Services, which can hurt stability and performance. IPVS performs load balancing and forwarding efficiently in the kernel and avoids this problem.

(5) Finer-grained service management:
   IPVS allows per-service tuning such as health checks and weight adjustments, which suits complex cloud-native environments.

(6) Cluster-internal traffic:
   Inside the cluster, kube-proxy in IPVS mode handles pod-to-pod communication and service discovery well.

In short, IPVS provides a more efficient and more stable service proxy for a multi-master Kubernetes cluster and meets the high-availability and performance needs of a distributed system. It is not mandatory: kube-proxy also supports the iptables mode (the older userspace mode has been removed in recent releases), but in large production environments IPVS is usually the preferred choice.

Note: the reference documents listed earlier also contain IPVS steps with fewer parameters than below; decide for yourself which set to apply.

# Install the IPVS tooling
sudo apt update
sudo apt install ipvsadm ipset sysstat conntrack libseccomp2 -y
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
sudo modprobe ip_tables
sudo modprobe ip_set
sudo modprobe xt_set
sudo modprobe ipt_set
sudo modprobe ipt_rpfilter
sudo modprobe ipt_REJECT
sudo modprobe ipip
# Confirm that the modules are loaded:
lsmod | grep ip_vs
lsmod | grep ip_vs_rr
lsmod | grep ip_vs_wrr
lsmod | grep ip_vs_sh
lsmod | grep nf_conntrack
lsmod | grep ip_tables
lsmod | grep ip_set
lsmod | grep xt_set
lsmod | grep ipt_set
lsmod | grep ipt_rpfilter
lsmod | grep ipt_REJECT
lsmod | grep ipip
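Later, once the cluster is up and kube-proxy is running in IPVS mode (configured in the kubeadm KubeProxyConfiguration further below), you can verify that Service virtual servers are really being programmed through IPVS. A minimal sketch, assuming ipvsadm was installed as above and kubectl is already configured:

# List the IPVS virtual servers and their backends; Service VIPs appear here once kube-proxy runs in ipvs mode
sudo ipvsadm -Ln
# Confirm the proxy mode kube-proxy loaded from its ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -m1 'mode:'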

4) Install docker-ce

# Steps from the official documentation
# https://docs.docker.com/engine/install/ubuntu/
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg -y
#sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt install docker-ce docker-ce-cli containerd.io
#sudo apt install docker-buildx-plugin docker-compose-plugin   # optional, not required for the k8s cluster
# Prevent the packages from being upgraded automatically by apt update / apt upgrade
sudo apt-mark hold docker-ce docker-ce-cli containerd.io
docker version
Client: Docker Engine - Community
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:07:41 2023
 OS/Arch:           linux/amd64
 Context:           default
Server: Docker Engine - Community
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.10
  Git commit:       311b9ff
  Built:            Thu Oct 26 09:07:41 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.26
  GitCommit:        3dd1e886e55dd695541fdcd67420c2888645a495
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
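As an optional sanity check that is not part of the original steps, you can make sure the Docker service is enabled at boot and can actually run a container before moving on (the hello-world test assumes the host can reach Docker Hub):

# Ensure dockerd starts on boot and is currently active
sudo systemctl enable --now docker
systemctl is-active docker
# Quick smoke test: run and remove a throwaway container
sudo docker run --rm hello-world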

5) Install the container runtime cri-dockerd

# https://github.com/Mirantis/cri-dockerd/releases
# Run these commands as root
## If the GitHub download fails, a mirrored 0.3.8 release on Gitee can be used instead
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9.amd64.tgz
tar -xf cri-dockerd-0.3.9.amd64.tgz
cd cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
#install packaging/systemd/* /etc/systemd/system
#sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
## Write the service unit -------------------------------------------------------------------------
vim /etc/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
## Write the socket unit --------------------------------------------------------------------------
vim /etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
## Reload the unit files and enable the socket at boot
systemctl daemon-reload
systemctl enable --now cri-docker.socket
# Check the cgroup configuration; the default is already systemd, so nothing needs to change
docker info | grep -i cgroup
 Cgroup Driver: systemd
 Cgroup Version: 2
  cgroupns
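A quick way to confirm that cri-dockerd is actually serving the CRI socket (a small sketch; the paths match the unit files above):

# The socket unit should be active and the socket file present
systemctl status cri-docker.socket --no-pager
ls -l /var/run/cri-dockerd.sock
# Starting the service explicitly is harmless; socket activation will also start it on demand
systemctl start cri-docker.service
systemctl is-active cri-docker.service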

6) Install kubeadm, kubelet, and kubectl

(1) Pre-installation checks
  • A compatible Linux host

  • At least 2 GB of RAM and 2 CPU cores per node

  • Network connectivity between all cluster nodes

  • A working container runtime

# Any one of the supported container runtimes will do; cri-dockerd is used here, so check its socket
root@kmaster1:~/cri-dockerd# ls /var/run/cri-dockerd.sock
/var/run/cri-dockerd.sock
  • Unique hostname, MAC address, and product_uuid on every node

# Check the network interfaces and MAC addresses
ip link
# Check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid
# Check the hostname
hostname
  • Port 6443 must be open

# Check the port on the host; no output here means nothing is listening yet
nc 127.0.0.1 6443
  • Swap must be disabled

# Comment out the swap line
vim /etc/fstab
# Turn swap off for the current boot
swapoff -a
# Check that the time is consistent across nodes
date
# Disable the firewall
# stop ufw and firewalld
systemctl stop ufw firewalld
systemctl disable ufw firewalld
# Disable SELinux (Ubuntu does not enable it by default; CentOS does, so be aware)
# Ubuntu normally does not ship SELinux; if the selinux command and config file are absent, skip the two steps below
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
(2) Install the three packages

# Installation steps used this time
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
mv /etc/apt/sources.list.d/docker.list /root/
apt update
sudo apt list kubeadm -a
sudo apt-get install -y kubelet=1.28.0-00 kubeadm=1.28.0-00 kubectl=1.28.0-00
sudo apt-mark hold kubelet kubeadm kubectl
#--------- Any one of the following package sources can be used ----------------------------------
# Aliyun mirror
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# Huawei Cloud mirror
curl -fsSL https://repo.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://repo.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Official Kubernetes repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
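Whichever repository you used, a quick version check confirms that the three components were installed and pinned at 1.28.0 (a simple sanity check, not part of the original steps):

kubeadm version -o short
kubelet --version
kubectl version --client
# The packages should show up as held
apt-mark showhold | grep -E 'kubelet|kubeadm|kubectl'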

2. Run on the three master nodes

1) Set up kubectl command completion

apt install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc

2) Install haproxy (running in Docker)

# Create the haproxy configuration file
mkdir /etc/haproxy
vim /etc/haproxy/haproxy.cfg
root@kmaster1:~# grep -v '^#' /etc/haproxy/haproxy.cfg
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log 127.0.0.1 local2
    pidfile /var/run/haproxy.pid
    maxconn 4000
    # daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

frontend kubernetes-apiserver
    mode tcp
    bind *:9443              ## listen on port 9443
    # bind *:443 ssl # To be completed ....
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    default_backend kubernetes-apiserver

backend kubernetes-apiserver
    mode tcp                 # tcp mode
    balance roundrobin       # round-robin load balancing
    server kmaster1 192.168.48.210:6443 check
    server kmaster2 192.168.48.211:6443 check
    server kmaster3 192.168.48.212:6443 check

root@kmaster1:~# docker run -d \
  --restart always \
  --name=haproxy \
  --net=host \
  -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  -v /var/lib/haproxy:/var/lib/haproxy \
  haproxy:2.3.6
root@kmaster1:~# docker ps -a
3) Install keepalived (running in Docker)

# Find the network interface name; here it is ens33, adjust to your own interface
root@kmaster1:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:e7:89:b3 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:58:31:78:56 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether 8e:f2:89:c8:e1:d5 brd ff:ff:ff:ff:ff:ff
# Create the keepalived configuration file
mkdir /etc/keepalived
vim /etc/keepalived/keepalived.conf
global_defs {
    script_user root
    enable_script_security
}
vrrp_script chk_haproxy {
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 9443) ]]; then exit 0; else exit 1; fi'"  # haproxy health check; adjust to your own environment
    interval 2   # run the check every 2 seconds
    weight 11    # priority adjustment when the check succeeds
}
vrrp_instance VI_1 {
    interface ens33          # fill in according to the output of ip addr
    state MASTER             # set to BACKUP on the backup nodes
    virtual_router_id 51     # the same id on all nodes means the same virtual router group
    priority 100             # initial priority
    # Note: my master nodes could not reach each other via VRRP multicast; in that case, leaving
    # unicast_peer unset leads to split-brain with several MASTER nodes. On each node, list the
    # IPs of the other two nodes here as needed.
    # unicast_peer {
    #     192.168.48.210
    #     192.168.48.211
    #     192.168.48.212
    # }
    virtual_ipaddress {
        192.168.48.222       # the VIP (virtual IP)
    }
    authentication {
        auth_type PASS
        auth_pass password
    }
    track_script {
        chk_haproxy
    }
    notify "/container/service/keepalived/assets/notify.sh"
}
docker run --cap-add=NET_ADMIN \
  --restart always \
  --name keepalived \
  --cap-add=NET_BROADCAST \
  --cap-add=NET_RAW \
  --net=host \
  --volume /etc/keepalived/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
  -d osixia/keepalived:2.0.20 \
  --copy-service
docker ps -a

4) Verify VIP failover

Prerequisite: haproxy and keepalived have been configured on all three master nodes and their containers started successfully.

# The master node currently holding the VIP
root@kmaster1:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:e7:89:b3 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.48.212/24 brd 192.168.48.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.48.222/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee7:89b3/64 scope link
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.163.64/32 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:7a:40:ff:74 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 8e:f2:89:c8:e1:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.78.110/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
# A master node not holding the VIP
root@kmaster2:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a6:69:6e brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.48.211/24 brd 192.168.48.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea6:696e/64 scope link
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.135.2/32 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4c:cd:64:63 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 8e:f2:89:c8:e1:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.78.110/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
10: calibc082d1a6a4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
11: cali41e20c84d22@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
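To actually exercise the failover, you can stop the haproxy container on the node that currently holds the VIP and watch the address move to another master; keepalived's chk_haproxy script fails once port 9443 disappears. A sketch of this test:

# On the node that currently holds 192.168.48.222:
docker stop haproxy
# Within a few seconds the VIP should disappear here ...
ip a s ens33 | grep 192.168.48.222
# ... and show up on one of the other masters (run the same grep there).
# Restore the original state afterwards:
docker start haproxy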

5) Pull the images needed by the cluster

        kmaster1 is used as the example here; repeat the same steps on kmaster2 and kmaster3.

# List the required image versions
root@kmaster1:~# kubeadm config images list
I0117 08:26:18.644762    4685 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.5
registry.k8s.io/kube-controller-manager:v1.28.5
registry.k8s.io/kube-scheduler:v1.28.5
registry.k8s.io/kube-proxy:v1.28.5
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
# List the image versions available from the Aliyun mirror
root@kmaster1:~# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
I0117 08:28:40.181207    4779 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.5
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.5
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.5
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.5
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1
# Pull the images
root@kmaster1:~# kubeadm config images pull --kubernetes-version=v1.28.0 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

3. Run on kmaster1

        Create the cluster initialization configuration file.

# Create kubeadm-init-config.yaml on kmaster1 (the same file is referenced by kubeadm init below)
vim kubeadm-init-config.yaml
# ----------------------- full file content --------------------------------
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: wgs001.com3yjucgqr276rf        # customizable; must match ([a-z0-9]{6}).([a-z0-9]{16})
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.48.210      # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: kmaster1                        # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:                             # the 3 master node IPs plus the VIP
  - 192.168.48.210
  - 192.168.48.211
  - 192.168.48.212
  - 192.168.48.222
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: "192.168.48.222:6443"   # the highly available VIP endpoint
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # use a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.28.0              # pin the Kubernetes version
networking:
  dnsDomain: k8s.local
  podSubnet: 10.244.0.0/16              # pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Proxy mode used by kube-proxy for Services: ipvs or iptables
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
# Set the kubelet cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"

Execution output:

# kmaster1
root@kmaster1:~# kubeadm init --config kubeadm-init-config.yaml --upload-certs
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.k8s.local] and IPs [10.96.0.1 192.168.48.210 192.168.48.222 192.168.48.211 192.168.48.212]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.48.210 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.48.210 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.505761 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b5649c4e771848c33ffeaa3e18c2dc59e94da94c0a3b98c7463421bf2f1810b5
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wgs001.com3yjucgqr276rf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
        --discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d \
        --control-plane --certificate-key b5649c4e771848c33ffeaa3e18c2dc59e94da94c0a3b98c7463421bf2f1810b5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
        --discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d
root@kmaster1:~# mkdir -p $HOME/.kube
root@kmaster1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kmaster1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
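With the kubeconfig in place, a couple of quick sanity checks on kmaster1 are worthwhile (note that CoreDNS and some control-plane pods will not be fully Ready until the CNI plugin is installed in step 6):

kubectl cluster-info
kubectl get nodes -o wide
kubectl -n kube-system get pods -o wide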

4. Run on kmaster2 and kmaster3

kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
    --discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d \
    --control-plane --certificate-key b5649c4e771848c33ffeaa3e18c2dc59e94da94c0a3b98c7463421bf2f1810b5 --cri-socket unix:///var/run/cri-dockerd.sock
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Run on knode1 through knode4

kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
    --discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d --cri-socket unix:///var/run/cri-dockerd.sock

6. Install Calico

Refer to the Calico section of my other post, ubuntu22.04安装k8s学习环境1主2从(cri-docker方式)-CSDN博客; calico.yaml is too long to repeat here.

To download the Calico manifest from GitHub, get the 3.27 release from:

https://github.com/projectcalico/calico.git

Path of the manifest in the cloned repository: calico/manifests/calico.yaml
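One thing worth checking before applying the manifest: the stock calico.yaml ships with the CALICO_IPV4POOL_CIDR environment variable commented out, and if you uncomment it, it should match the podSubnet (10.244.0.0/16) set in the kubeadm config above. A quick way to inspect it (a sketch; exact placement varies between Calico versions):

# Show the pool setting and its surrounding context in the manifest
grep -n -A1 CALICO_IPV4POOL_CIDR calico/manifests/calico.yaml
# If you uncomment it, set:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"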

root@kmaster1:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
## Check the cluster node status
root@kmaster1:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
kmaster1   Ready    control-plane   6h12m   v1.28.0
kmaster2   Ready    control-plane   6h9m    v1.28.0
kmaster3   Ready    control-plane   6h8m    v1.28.0
knode1     Ready    <none>          6h8m    v1.28.0
knode2     Ready    <none>          6h8m    v1.28.0
knode3     Ready    <none>          6h8m    v1.28.0
knode4     Ready    <none>          6h8m    v1.28.0

7. Set up etcd high availability on the master nodes

# Check /etc/kubernetes/manifests/etcd.yaml on every master node. Once kmaster1 has been set up, the
# other nodes are normally already in sync, so checking is usually enough; if you do modify the file,
# restart kubelet afterwards.
vim /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.48.212:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.48.212:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --initial-advertise-peer-urls=https://192.168.48.212:2380
    - --initial-cluster=kmaster2=https://192.168.48.211:2380,kmaster1=https://192.168.48.210:2380,kmaster3=https://192.168.48.212:2380   ## all three members should be listed here
...
# Restart kubelet (only needed if the file was changed)
systemctl restart kubelet
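To confirm that etcd really formed a three-member cluster, you can run etcdctl inside one of the etcd static pods (etcdctl ships in the etcd image, and the certificate paths below are the kubeadm defaults; the pod name follows the etcd-<nodename> convention):

kubectl -n kube-system exec etcd-kmaster1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list -w table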

V. Cluster High Availability Test

# Create a test pod; once created you are dropped into the container
kubectl run client --image=ikubernetes/admin-box -it --rm --restart=Never --command -n default -- /bin/bash
# ping test
ping www.baidu.com
# DNS resolution test
nslookup www.baidu.com
# Shut down the master nodes one at a time and watch the VIP 192.168.48.222 move
ip a s
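A simple way to observe the control plane staying reachable during the test is to poll the API through the VIP from a surviving master while the VIP moves (a sketch; run it in a separate terminal). Keep in mind that etcd needs two of the three members up to keep quorum, so only shut down one master at a time:

# Poll the apiserver via the VIP every 2 seconds and show whether this node holds the VIP
while true; do
  date
  kubectl get nodes --request-timeout=5s
  ip a s ens33 | grep 192.168.48.222 || echo "VIP not on this node"
  sleep 2
done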

VI. Other Operations (use with caution in production)

Note: if cluster creation fails or a node cannot join, the following steps can be used to reset the cluster.

# Drain the node
kubectl drain knode3 --delete-emptydir-data --force --ignore-daemonsets
# Reset the configuration on the node itself
kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock
# Delete the node from the cluster
kubectl delete node knode3
# Remove leftover configuration
rm -rf /etc/cni/net.d /root/.kube
ipvsadm --clear
