Envoy provides several different load-balancing policies, which fall broadly into two categories: global load balancing and distributed load balancing.
Complex deployments can mix the two: global load balancing controls how traffic flows between tiers by defining high-level routing priorities and weights, while distributed load balancing reacts to fine-grained changes within the system.
clusters:
- name: ...
  ...
  load_assignment: {...}
    cluster_name: ...
    endpoints: []                      # list of LocalityLbEndpoints; each entry consists mainly of four items: locality, endpoint list, weight, and priority
    - locality: {...}                  # locality definition
        region: ...
        zone: ...
        sub_zone: ...
      lb_endpoints: []                 # endpoint list
      - endpoint: {...}                # endpoint definition
          address: {...}               # endpoint address
          health_check_config: {...}   # health-check-related settings for this endpoint
        load_balancing_weight: ...     # load-balancing weight of this endpoint; optional
        metadata: {...}                # metadata that supplies extra information to filters based on the matched listener, filter chain, route, endpoint, etc.; commonly used to carry service configuration or to assist load balancing
        health_status: ...             # when the endpoint is discovered via EDS, this field administratively sets its health status; valid values are UNKNOWN, HEALTHY, UNHEALTHY, DRAINING, TIMEOUT, and DEGRADED
      load_balancing_weight: {...}     # weight
      priority: ...                    # priority
    policy: {...}                      # load-balancing policy settings
      drop_overloads: []               # overload protection: a mechanism for dropping excess traffic
      overprovisioning_factor: ...     # integer defining the overprovisioning factor (as a percentage); defaults to 140, i.e. 1.4
      endpoint_stale_after: ...        # staleness duration; an endpoint that receives no new traffic assignment before this expires is considered stale and marked unhealthy; the default of 0 means never stale
  lb_subset_config: {...}              # load-balancer subsets
  ring_hash_lb_config: {...}           # ring-hash algorithm settings
  original_dst_lb_config: {...}        # original-destination settings
  least_request_lb_config: {...}       # least-request settings
  common_lb_config: {...}              # common settings
    health_panic_threshold: ...        # panic threshold; defaults to 50%
    zone_aware_lb_config: {...}        # zone-aware routing settings
    locality_weighted_lb_config: {...} # locality-weighted load-balancing settings
    ignore_new_hosts_until_first_hc: ... # whether to exclude newly added hosts from load balancing until they pass their first health check
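To make the schema concrete, here is a minimal, hypothetical sketch that combines locality, priority, and locality weight (the cluster name and addresses are illustrative only, not part of the demos below); note that locality-level load_balancing_weight is honored only when locality_weighted_lb_config is enabled:

clusters:
- name: example_cluster              # hypothetical cluster, for illustration only
  connect_timeout: 0.5s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  common_lb_config:
    locality_weighted_lb_config: {}  # required for the locality weights below to take effect
  load_assignment:
    cluster_name: example_cluster
    endpoints:
    - locality: { region: region-a, zone: zone-1 }
      priority: 0                    # 0 is the highest (and default) priority
      load_balancing_weight: 2       # relative weight among localities of the same priority
      lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: 192.0.2.11, port_value: 80 }
    - locality: { region: region-a, zone: zone-2 }
      priority: 0
      load_balancing_weight: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: 192.0.2.12, port_value: 80 }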
All hosts have the same weight (more meaningful for long-lived connections): Envoy compares the active request count of a few randomly sampled hosts and picks the least loaded one.
Hosts have different weights (more meaningful for short-lived connections): Envoy uses a weighted round-robin schedule in which a host's effective weight is dynamically biased by its number of outstanding requests. The sample size used in the equal-weight mode is tunable, as sketched below.
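For the equal-weight case, the number of hosts sampled per pick can be tuned through least_request_lb_config. A minimal hypothetical sketch (the cluster name and endpoint are placeholders):

clusters:
- name: example_cluster              # hypothetical cluster, for illustration only
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: LEAST_REQUEST
  least_request_lb_config:
    choice_count: 2                  # hosts sampled per pick; 2 is Envoy's default
  load_assignment:
    cluster_name: example_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: myservice, port_value: 80 }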
As before, one front envoy serves as the traffic entry point at 172.31.22.2.
There are three pairs of backend endpoints, each a sidecar + webserver combination, with IP addresses 172.31.22.11, 172.31.22.12, and 172.31.22.13.
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.22.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01-sidecar
    - webserver02-sidecar
    - webserver03-sidecar

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: red
    networks:
      envoymesh:
        ipv4_address: 172.31.22.11
        aliases:
        - myservice
        - red

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:webserver01-sidecar"
    depends_on:
    - webserver01-sidecar

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: blue
    networks:
      envoymesh:
        ipv4_address: 172.31.22.12
        aliases:
        - myservice
        - blue

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:webserver02-sidecar"
    depends_on:
    - webserver02-sidecar

  webserver03-sidecar:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: green
    networks:
      envoymesh:
        ipv4_address: 172.31.22.13
        aliases:
        - myservice
        - green

  webserver03:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:webserver03-sidecar"
    depends_on:
    - webserver03-sidecar

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.22.0/24
The front envoy defines three endpoints with weights of 1:3:5.
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: webservice
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: web_cluster_01 }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: web_cluster_01
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: LEAST_REQUEST
    load_assignment:
      cluster_name: web_cluster_01
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: red
                port_value: 80
          load_balancing_weight: 1
        - endpoint:
            address:
              socket_address:
                address: blue
                port_value: 80
          load_balancing_weight: 3
        - endpoint:
            address:
              socket_address:
                address: green
                port_value: 80
          load_balancing_weight: 5
Contents of the test script:
It makes 300 requests in total to the front envoy's /hostname path, filters and tallies the returned content, and finally prints the statistics.
# cat send-request.sh
#!/bin/bash
# Tally which backend answers each of ${counts} requests sent through the front envoy.
declare -i red=0
declare -i blue=0
declare -i green=0
#interval="0.1"
counts=300

echo "Send 300 requests, and print the result. This will take a while."
echo ""
echo "Weight of all endpoints:"
echo "Red:Blue:Green = 1:3:5"

for ((i=1; i<=counts; i++)); do
    # $1 is the host address of the front-envoy. Fetch the response once per
    # iteration so that every request is counted exactly once.
    response=$(curl -s http://$1/hostname)
    if grep -q "red" <<< "$response"; then
        red+=1
    elif grep -q "blue" <<< "$response"; then
        blue+=1
    else
        green+=1
    fi
    # sleep $interval
done

echo ""
echo "Response from:"
echo "Red:Blue:Green = $red:$blue:$green"
As the results below show, the least-request algorithm does not schedule in strict proportion to the configured weights (an exact 1:3:5 split of 300 requests would be roughly 33:100:167); in theory, the larger the number of requests, the closer the observed distribution gets to the ideal ratio.
# docker-compose up
# ./send-request.sh 172.31.22.2
Send 300 requests, and print the result. This will take a while.

Weight of all endpoints:
Red:Blue:Green = 1:3:5

Response from:
Red:Blue:Green = 29:92:179

# ./send-request.sh 172.31.22.2
Send 3000 requests, and print the result. This will take a while.

Weight of all endpoints:
Red:Blue:Green = 1:3:5

Response from:
Red:Blue:Green = 310:839:1851
Ring hash (consistent hashing) maps hosts onto a ring by hashing modulo 2^32. Its drawbacks are a relatively high computational cost, and possible load skew when the number of nodes is small.
Maglev is a refinement of ring hash: it hashes modulo a 65537-entry lookup table and uses the host weights to fill the entire table with node entries, which reduces the computation. Once the hash is computed, the owning node is read off directly, with no need to search onward for the next point on the ring. Maglev therefore costs less to compute than ring hash, but its stability is slightly worse (more keys are remapped when the host set changes).
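Envoy exposes Maglev as its own lb_policy. Below is a minimal, hypothetical sketch of enabling it (the cluster name and endpoint are placeholders, not part of the demo that follows); table_size must be a prime number, and 65537 is the default:

clusters:
- name: example_cluster            # hypothetical cluster, for illustration only
  connect_timeout: 0.5s
  type: STRICT_DNS
  lb_policy: MAGLEV
  maglev_lb_config:
    table_size: 65537              # must be prime; 65537 is the default
  load_assignment:
    cluster_name: example_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: myservice, port_value: 80 }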
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.25.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01-sidecar
    - webserver02-sidecar
    - webserver03-sidecar

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: red
    networks:
      envoymesh:
        ipv4_address: 172.31.25.11
        aliases:
        - myservice
        - red

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:webserver01-sidecar"
    depends_on:
    - webserver01-sidecar

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: blue
    networks:
      envoymesh:
        ipv4_address: 172.31.25.12
        aliases:
        - myservice
        - blue

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:webserver02-sidecar"
    depends_on:
    - webserver02-sidecar

  webserver03-sidecar:
    image: envoyproxy/envoy-alpine:v1.21.5
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: green
    networks:
      envoymesh:
        ipv4_address: 172.31.25.13
        aliases:
        - myservice
        - green

  webserver03:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:webserver03-sidecar"
    depends_on:
    - webserver03-sidecar

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.25.0/24
The algorithm is set to RING_HASH, with a maximum ring size of 2^20 (1048576) entries and a minimum of 2^9 (512).
What gets hashed is configured through the hash_policy parameters under route_config; common choices are hashing on the source address, the URI, or the browser (for example the User-Agent header), as sketched below.
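Besides the User-Agent header used in this demo, hash_policy supports several other hash sources; entries are evaluated in order and their hashes are combined. A brief hypothetical sketch of the common variants (the cookie name is made up for illustration):

route:
  cluster: web_cluster_01
  hash_policy:
  - connection_properties:
      source_ip: true              # hash on the client source IP
  - header:
      header_name: User-Agent      # hash on a request header
  - cookie:
      name: sticky                 # hypothetical cookie; Envoy generates it with this TTL if absent
      ttl: 3600s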
Health checking probes /livez and treats a response code in the 200-399 range as healthy.
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: webservice
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: web_cluster_01
                  hash_policy:
                  # - connection_properties:
                  #     source_ip: true
                  - header:
                      header_name: User-Agent
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: web_cluster_01
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: RING_HASH
    ring_hash_lb_config:
      maximum_ring_size: 1048576
      minimum_ring_size: 512
    load_assignment:
      cluster_name: web_cluster_01
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: myservice
                port_value: 80
    health_checks:
    - timeout: 5s
      interval: 10s
      unhealthy_threshold: 2
      healthy_threshold: 2
      http_health_check:
        path: /livez
        expected_statuses:   # a list of ranges; the proto field is repeated
        - start: 200
          end: 399
After the containers start, as long as the User-Agent does not change, requests are always scheduled to the same server.
Once the User-Agent changes, the request is mapped by its hash value to one of the three servers and then stays there, even if the next visit comes after a long interval.
# docker-compose up
# while true;do curl 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
root@k8s-node-1:~# while true;do curl -H "User-Agent: Chrome" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
root@k8s-node-1:~# while true;do curl -H "User-Agent: IE6.0" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
^C
root@k8s-node-1:~# while true;do curl -H "User-Agent: IE4.0" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
^C
root@k8s-node-1:~# while true;do curl -H "User-Agent: IE7.0" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.25.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.25.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.25.13!
# curl 172.31.25.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!