
Building a highly available Kubernetes cluster: openEuler 22.03 + nginx + keepalived

High availability here applies only to the kube-apiserver. It relies on nginx + keepalived: nginx provides layer-4 load balancing, and keepalived provides the VIP (virtual IP).

The operating system is openEuler 22.03 LTS.

1. Preparation

Since the machine has only 16 GB of RAM, I use 3 masters + 1 worker node.

1.1 Modify host settings (all nodes)

  1. Set the hostname

  2. Disable the firewall and SELinux

  3. Disable swap

  4. Configure time synchronization

There are quite a few hosts, so I only show the commands for master01.

# Set the hostname
[root@localhost ~]# hostnamectl set-hostname master01
[root@localhost ~]# bash
# Disable the firewall and SELinux
[root@master01 ~]# systemctl disable --now firewalld
[root@master01 ~]# setenforce 0
[root@master01 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
# Disable swap
[root@master01 ~]# swapoff -a
# Configure time synchronization
[root@master01 ~]# yum install chrony -y
[root@master01 ~]# chronyc sources
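Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entry in /etc/fstab; one common way (assuming the entry contains " swap "):

[root@master01 ~]# sed -i '/ swap / s/^/#/' /etc/fstab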

1.2 Enable IPVS (all nodes)

[root@master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules << 'END'
> #!/bin/bash
> ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
> for kernel_module in ${ipvs_modules}; do
>     # Load the module only if it exists for this kernel
>     /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
>     if [ $? -eq 0 ]; then
>         /sbin/modprobe ${kernel_module}
>     fi
> done
> END
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@master01 ~]# bash /etc/sysconfig/modules/ipvs.modules
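You can confirm that the modules are loaded:

[root@master01 ~]# lsmod | grep -E 'ip_vs|nf_conntrack'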

1.3 Configure the Kubernetes yum repository (all nodes)

# Found by searching for "kubernetes" on the Huawei mirror site
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

The upstream snippet ends the baseurl with $basearch; on openEuler, write your actual architecture instead (x86_64 above). Spelling it out also avoids a subtle pitfall: inside an unquoted heredoc the shell would expand $basearch to an empty string before the file is written.
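A quick way to check that the repository is usable (the exact versions listed depend on the mirror):

[root@master01 ~]# yum makecache
[root@master01 ~]# yum list kubeadm --showduplicates | tail -n 5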

2. Install Docker (all nodes)

The highest Kubernetes version openEuler currently supports is 1.23, which still ships the dockershim, so we install Docker.

2.1 Install

[root@master01 ~]# yum install docker -y

2.2 Modify the Docker configuration

Set Docker's cgroup driver to systemd so it matches the kubelet's default:

[root@master01 ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

2.3 Restart Docker

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
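You can verify that the new driver is active (it should report systemd):

[root@master01 ~]# docker info | grep -i 'cgroup driver'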

3. Configure high availability (all master nodes)

3.1 Install packages

[root@master01 ~]# yum install nginx nginx-all-modules keepalived -y

3.2 Configure nginx load balancing

Add the following to the nginx configuration file:

[root@master01 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Add this block. It must sit outside the http block, because this is
# layer-4 (stream) load balancing, not layer-7
stream {
    upstream k8s-apiserver {
        server 192.168.200.163:6443;
        server 192.168.200.164:6443;
        server 192.168.200.165:6443;
    }
    server {
        listen 16443;
        proxy_pass k8s-apiserver;
    }
}
# End of the added block

# Check the syntax
[root@master01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# Restart
[root@master01 ~]# systemctl restart nginx
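After the restart, nginx should be listening on the load-balancer port:

[root@master01 ~]# ss -lntp | grep 16443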

3.3 Configure keepalived

# Back up the original configuration
[root@master01 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Edit the configuration
[root@master01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id master1
}

vrrp_instance Nginx {
    state MASTER # Only master01 is MASTER; the other masters are BACKUP
    interface ens33 # The network interface to bind to
    virtual_router_id 51
    priority 200 # The other two nodes must use lower values, and their values must also differ from each other
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.200.200 # The VIP
    }
}
# Restart
[root@master01 ~]# systemctl restart keepalived.service

Delete the file's original contents and write in the configuration above.

Note: only master01 uses state MASTER; the other two nodes must use BACKUP, and their priority values must be lower than master01's, as in the sketch below.
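For example, the vrrp_instance block on the other two masters might differ only in these lines (the priority values are illustrative):

state BACKUP # on master02 and master03
priority 150 # e.g. 150 on master02 and 100 on master03, both below master01's 200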

3.4 Verify keepalived

# Check ens33 on master01
[root@master01 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:8d:ce:8a brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.200.163/24 brd 192.168.200.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ce91:fe4e:625d:6e32/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

It currently holds both its own IP and the VIP.

# Stop keepalived
[root@master01 ~]# systemctl stop keepalived.service
[root@master01 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:8d:ce:8a brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.200.163/24 brd 192.168.200.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ce91:fe4e:625d:6e32/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

With keepalived stopped, the VIP is gone. Switch over to master02 and take a look:

[root@master02 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:2d:b0:8a brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.200.164/24 brd 192.168.200.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::f409:2f97:f02e:a8d4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

The VIP has moved to master02. Start keepalived on master01 again and the VIP will come back, because master01 has the higher priority.

[root@master01 ~]# systemctl restart keepalived.service
[root@master01 ~]# systemctl enable --now nginx keepalived.service
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

4. Deploy Kubernetes

openEuler currently supports only Kubernetes 1.23, so the container runtime is Docker. Wherever no host is shown, the command runs on master01.

4.1 Install packages (all master nodes)

[root@master01 ~]# yum install kubeadm kubelet kubectl -y
[root@master01 ~]# systemctl enable kubelet
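If the repository carries several versions, you can pin a specific one (1.23.1 here is only an example; match the version you will set in init.yaml below):

[root@master01 ~]# yum install kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1 -y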

4.2 Generate the deployment file

[root@master01 ~]# kubeadm config print init-defaults > init.yaml

4.2.1 Modify the deployment file
[root@master01 ~]# vim init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.163 # Change this to your own IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01 # Change this to your hostname or IP; whatever you write here is the node name shown once the cluster is up
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs: # Add this whole list so the API server certificate is valid for all of these hosts/addresses
  - master01
  - master02
  - master03
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.200.163 # These three are the master IP addresses
  - 192.168.200.164
  - 192.168.200.165
  - 192.168.200.200 # The VIP must be listed as well
controlPlaneEndpoint: 192.168.200.200:16443 # Add this line; the IP is the VIP
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: swr.cn-east-3.myhuaweicloud.com/hcie_openeuler # Change the image repository to a mirror inside China
kind: ClusterConfiguration
kubernetesVersion: 1.23.1 # Match the installed kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # Add this line (the pod network CIDR)
  serviceSubnet: 10.96.0.0/12
scheduler: {}
--- # Add this whole section so kube-proxy runs in IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
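Before pulling anything, you can list the images the configuration resolves to, which confirms that the imageRepository setting is being honored:

[root@master01 ~]# kubeadm config images list --config ./init.yaml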

4.3 Pre-pull the images

# Pull the images ahead of the actual deployment
[root@master01 ~]# kubeadm config images pull --config ./init.yaml

4.4 Run the deployment

[root@master01 ~]# kubeadm init --upload-certs --config ./init.yaml
# If the installation fails, run kubeadm reset -f to reset the environment before trying init again; re-running init directly will error out

On success, kubeadm prints output like the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now join any number of the control-plane node running the following command on each as root:

# Use this command to join additional master nodes
kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3 \
    --control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

# Use this command to join worker nodes
kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3
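At this point the API server should already be reachable through the VIP and the nginx listener; a quick sanity check (the /healthz endpoint is readable anonymously by default and should print "ok"):

[root@master01 ~]# curl -k https://192.168.200.200:16443/healthz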

4.5 Join the other master nodes to the cluster

The generated token is valid for only 24 hours. If it has expired and you still need to join nodes, run:

[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.200.200:16443 --token gb00dz.tevdizf7mxqx1egj --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3

The printed command can be used as-is to join a worker node.

To join a master node instead, append --control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e to that command.

[root@master02 ~]# kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3 \
    --control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e
[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master03 ~]# kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3 \
    --control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e
[root@master03 ~]# mkdir -p $HOME/.kube
[root@master03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can add --node-name to the join command to choose the name the node will have in the cluster.

4.6 Join the worker node to the cluster

[root@node ~]# kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3

4.7 View the cluster nodes

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES                  AGE   VERSION
master01   NotReady   control-plane,master   45m   v1.23.1
master02   NotReady   control-plane,master   27m   v1.23.1
master03   NotReady   control-plane,master   27m   v1.23.1
node       NotReady   <none>                 10s   v1.23.1

5. Install the Calico network plugin

Before a network plugin is installed, the nodes stay NotReady; once it is running they turn Ready.

Calico official installation documentation

The latest version can be found in the official documentation.

[root@master01 ~]# wget https://docs.projectcalico.org/archive/v3.23/manifests/calico.yaml
[root@master01 ~]# kubectl apply -f calico.yaml
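You can watch the Calico and CoreDNS pods come up with:

[root@master01 ~]# kubectl get pods -n kube-system -w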

Wait a little while, then check the node status again:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h38m   v1.23.1
master02   Ready    control-plane,master   5h21m   v1.23.1
master03   Ready    control-plane,master   5h21m   v1.23.1
node       Ready    <none>                 4h53m   v1.23.1

If the nodes are still not Ready after a long wait, run

[root@master01 ~]# kubectl get pods -A

to see which pods have not come up; once you find and fix the cause, the nodes will turn Ready.

6. Verify that the cluster works

[root@master01 ~]# kubectl run web01 --image nginx:1.24
pod/web01 created
[root@master01 ~]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web01   1/1     Running   0          27s

The pod starts and runs normally, so the cluster is usable.
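As a further check, you can fetch the nginx welcome page straight from the pod's IP (with Calico, pod IPs are routable from the nodes; the jsonpath lookup just extracts that IP):

[root@master01 ~]# curl -s $(kubectl get pod web01 -o jsonpath='{.status.podIP}') | head -n 4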

Reposted from: FuShudi

Original link: https://www.cnblogs.com/fsdstudy/p/18233538
