Nginx + Keepalived Load Balancing for Kubernetes

Deploy an Nginx + Keepalived high-availability load balancer for kube-apiserver. The high-availability architecture is as follows:

  • Nginx is a mainstream web server and reverse proxy; here it is used as a Layer 4 (TCP) load balancer in front of the apiservers.
  • Keepalived is a mainstream high-availability tool that implements active/standby failover between two servers by binding a virtual IP (VIP). In the topology above, Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.

Note 1: To save machines, the load balancers are co-located with the K8s Master nodes here. They can also be deployed outside the K8s cluster, as long as nginx can reach the apiservers.

Note 2: If you are on a public cloud, keepalived is generally not supported; use the cloud provider's load balancer product instead to balance traffic across the Master kube-apiservers directly. The architecture is otherwise the same as above.

Perform the following steps on both Master nodes.

1. Install the packages (master/backup)

    yum install epel-release -y
    yum install nginx keepalived -y

2. Nginx configuration file (identical on master and backup)

    cat > /etc/nginx/nginx.conf << "EOF"
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;

    include /usr/share/nginx/modules/*.conf;

    events {
        worker_connections 1024;
    }

    # Layer 4 load balancing for the kube-apiserver on both Master nodes
    stream {

        log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

        access_log /var/log/nginx/k8s-access.log main;

        upstream k8s-apiserver {
            server 192.168.31.71:6443;   # Master1 APISERVER IP:PORT
            server 192.168.31.72:6443;   # Master2 APISERVER IP:PORT
        }

        server {
            listen 16443;   # nginx shares these machines with the masters, so it cannot listen on 6443 or it would conflict with the apiserver
            proxy_pass k8s-apiserver;
        }
    }

    http {
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;

        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;

        server {
            listen       80 default_server;
            server_name  _;

            location / {
            }
        }
    }
    EOF
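Before moving on, it can be worth validating the file with a quick check. Note that the stream block requires nginx's stream module; with the EPEL package it is typically shipped separately (the nginx-mod-stream package name is an assumption, verify for your distro) and loaded through the modules/*.conf include above:

    yum install nginx-mod-stream -y   # only needed if "nginx -t" reports an unknown "stream" directive
    nginx -t                          # validates /etc/nginx/nginx.conf and all included files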

3. Keepalived configuration file (Nginx master)

    cat > /etc/keepalived/keepalived.conf << EOF
    global_defs {
       notification_email {
         acassen@firewall.loc
         failover@firewall.loc
         sysadmin@firewall.loc
       }
       notification_email_from Alexandre.Cassen@firewall.loc
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id NGINX_MASTER
    }

    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }

    vrrp_instance VI_1 {
        state MASTER
        interface ens33              # change to the actual interface name
        virtual_router_id 51         # VRRP router ID for this instance; each instance must use a unique ID
        priority 100                 # priority; set this to 90 on the backup server
        advert_int 1                 # VRRP advertisement (heartbeat) interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        # virtual IP
        virtual_ipaddress {
            192.168.31.88/24
        }
        track_script {
            check_nginx
        }
    }
    EOF
  • vrrp_script: the script used to check nginx's health (keepalived uses its result to decide whether to fail over)
  • virtual_ipaddress: the virtual IP (VIP)

Prepare the nginx health-check script referenced in the configuration above:

    cat > /etc/keepalived/check_nginx.sh << "EOF"
    #!/bin/bash
    count=$(ss -antp | grep 16443 | egrep -cv "grep|$$")

    if [ "$count" -eq 0 ]; then
        exit 1
    else
        exit 0
    fi
    EOF
    chmod +x /etc/keepalived/check_nginx.sh

4. Keepalived configuration file (Nginx backup)

    cat > /etc/keepalived/keepalived.conf << EOF
    global_defs {
       notification_email {
         acassen@firewall.loc
         failover@firewall.loc
         sysadmin@firewall.loc
       }
       notification_email_from Alexandre.Cassen@firewall.loc
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id NGINX_BACKUP
    }

    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }

    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        virtual_router_id 51         # VRRP router ID for this instance; each instance must use a unique ID
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.31.88/24
        }
        track_script {
            check_nginx
        }
    }
    EOF

Prepare the nginx health-check script referenced in the configuration above:

    cat > /etc/keepalived/check_nginx.sh << "EOF"
    #!/bin/bash
    count=$(ss -antp | grep 16443 | egrep -cv "grep|$$")

    if [ "$count" -eq 0 ]; then
        exit 1
    else
        exit 0
    fi
    EOF
    chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the script's exit code (0 means nginx is healthy, non-zero means it is not).
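A quick manual check of those exit codes (optional; run after nginx has been started in step 5, and note that stopping nginx will also trigger a VIP failover if keepalived is already running):

    /etc/keepalived/check_nginx.sh; echo $?   # prints 0 while something is listening on 16443
    systemctl stop nginx
    /etc/keepalived/check_nginx.sh; echo $?   # prints 1 once the listener is gone
    systemctl start nginx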

5. Start the services and enable them at boot

    systemctl daemon-reload
    systemctl start nginx keepalived
    systemctl enable nginx keepalived
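To confirm that both services came up cleanly, a quick check:

    systemctl status nginx keepalived --no-pager   # both should be active (running)
    ss -lntp | grep 16443                          # nginx should be listening on the stream port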

6. Check keepalived status

    ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
        inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet 192.168.31.88/24 scope global secondary ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe04:f72c/64 scope link
           valid_lft forever preferred_lft forever

As you can see, the virtual IP 192.168.31.88 is bound to the ens33 interface, so keepalived is working correctly.

7. Nginx + Keepalived high-availability test
Shut down Nginx on the master node and check whether the VIP fails over to the backup server: run pkill nginx on the Nginx master, then run ip addr on the Nginx backup and confirm the VIP is now bound there, as shown in the commands below.
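The same test expressed as commands:

    # On the Nginx master: stop nginx so that check_nginx.sh starts returning 1
    pkill nginx

    # On the Nginx backup: within a few seconds the VIP should appear on ens33
    ip addr show ens33 | grep 192.168.31.88

    # To restore the original state, start nginx again on the master node
    systemctl start nginx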

8. Test access through the load balancer
From any node in the K8s cluster, use curl against the VIP to query the K8s version:

    curl -k https://192.168.31.88:16443/version
    {
      "major": "1",
      "minor": "20",
      "gitVersion": "v1.20.4",
      "gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",
      "gitTreeState": "clean",
      "buildDate": "2021-02-18T16:03:00Z",
      "goVersion": "go1.15.8",
      "compiler": "gc",
      "platform": "linux/amd64"
    }

The K8s version information is returned correctly, so the load balancer is set up properly. The request path is: curl -> VIP (nginx) -> apiserver. The nginx log also shows which apiserver each request was forwarded to:

    tail /var/log/nginx/k8s-access.log -f
    192.168.31.71 192.168.31.71:6443 - [02/Apr/2021:19:17:57 +0800] 200 423
    192.168.31.71 192.168.31.72:6443 - [02/Apr/2021:19:18:50 +0800] 200 423

We are not done yet; the most critical step is still to come.

7.3 Point all Worker Nodes at the LB VIP
Consider this: although we added a second Master node and a load balancer, we scaled out from a single-Master architecture, which means all Worker Node components still connect to Master1. If they are not switched over to the VIP behind the load balancer, the Master remains a single point of failure.

So the next step is to change the component configuration files on all Worker Nodes (the nodes shown by kubectl get node) from the original 192.168.31.71 to 192.168.31.88 (the VIP).

Run on all Worker Nodes:

    sed -i 's#192.168.31.71:6443#192.168.31.88:16443#' /opt/kubernetes/cfg/*
    systemctl restart kubelet kube-proxy
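To verify that the substitution took effect on a Worker Node, a quick check:

    grep -r '192.168.31.88:16443' /opt/kubernetes/cfg/   # every component config should now point at the VIP
    grep -r '192.168.31.71:6443' /opt/kubernetes/cfg/ && echo "stale apiserver address still present"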

Check node status:

    kubectl get node
    NAME          STATUS   ROLES    AGE   VERSION
    k8s-master1   Ready    <none>   32d   v1.20.4
    k8s-master2   Ready    <none>   10m   v1.20.4
    k8s-node1     Ready    <none>   31d   v1.20.4
    k8s-node2     Ready    <none>   31d   v1.20.4

With this, a complete highly available Kubernetes cluster is deployed!

-------------------------------------------------------------------------------------------------------------------------------

Appendix: a reference nginx.conf from another environment (a k8s-uat cluster), which moves the Layer 4 apiserver proxy into a separate include file under conf.d/:

    [apps@TLVM202016131 conf]$ cat nginx.conf
    #user nobody;
    worker_processes auto;
    worker_cpu_affinity auto;
    worker_rlimit_nofile 262144;
    error_log logs/error.log;
    pid sbin/nginx.pid;

    events {
        use epoll;
        #accept_mutex off
        worker_connections 65536;
    }

    stream {
        log_format basic '$remote_addr [$time_local] '
                         '$protocol $server_addr $server_port $status $bytes_sent $bytes_received '
                         '$session_time';
        include conf.d/*.tcp;
    }

    http {
        vhost_traffic_status_zone shared:vhost_traffic_status:64m;
        include mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log logs/access.log main;

        sendfile on;
        #tcp_nopush on;
        keepalive_timeout 65;
        server_tokens off;

        #gzip on;
        #gzip_min_length 2k;
        #gzip_types text/plain application/x-javascript text/css application/xml text/javascript;

        #client_body_buffer_size 512k;
        #client_header_buffer_size 16k;
        #large_client_header_buffers 4 16k;
        #client_max_body_size 100m;

        #proxy_ignore_client_abort on;
        #proxy_set_header Host $host;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        include rule.d/*.conf;
        include conf.d/*.conf;
    }

    [apps@TLVM202016131 conf]$ ls conf.d/bak/
    00-default.conf  apiserver.tcp  coredns.tcp

    [apps@TLVM202016131 bak]$ cat apiserver.tcp
    #---------- 20231113 k8s-uat ----------#
    upstream apiserver {
        #hash $remote_addr consistent;
        server 10.202.17.17:6443 max_fails=3 fail_timeout=1s;
        server 10.202.17.18:6443 max_fails=3 fail_timeout=1s;
        server 10.202.17.19:6443 max_fails=3 fail_timeout=1s;
    }

    server {
        listen 6443;
        access_log logs/tcp.log basic;
        proxy_connect_timeout 1s;
        proxy_timeout 3600s;
        proxy_pass apiserver;
    }
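After adding or changing a *.tcp include like the one above, the usual workflow is to test and then reload (assuming the nginx binary is on PATH; for a custom-prefix install like this one, adjust the path, e.g. sbin/nginx):

    nginx -t          # validate nginx.conf together with the conf.d/*.tcp includes
    nginx -s reload   # graceful reload; running workers finish their existing connections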