Website: openelb.io
OpenELB is an open-source, cloud-native load-balancer implementation that lets you expose services externally through LoadBalancer-type Services in Kubernetes environments running on bare-metal servers, at the edge, or on virtualized infrastructure. The OpenELB project was originally started by the KubeSphere community; it has since joined the CNCF as a sandbox project and is maintained and supported by the OpenELB open-source community.
Like MetalLB, OpenELB has two main working modes: Layer 2 mode and BGP mode. OpenELB's BGP mode does not currently support IPv6.
In both Layer 2 mode and BGP mode, the core idea is the same: draw traffic destined for a specific VIP into the Kubernetes cluster by some means, and then let kube-proxy forward that traffic to the backing Service.
Layer 2 mode requires that the underlying network of the Kubernetes cluster allows sending anonymous (gratuitous) ARP/NDP packets. Since OpenELB is designed for bare-metal servers, check whether this condition is met before deploying it in a cloud environment.
The main workflow is as described above, but there are a few additional points worth noting:
OpenELB's BGP mode implements the BGP protocol with gobgp. It uses BGP to establish sessions with routers and achieves ECMP load balancing, yielding a highly available LoadBalancer.
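As an illustration of how BGP mode is wired up, here is a minimal sketch of the BgpConf and BgpPeer objects OpenELB uses for peering. The AS numbers, router ID, and neighbor address below are placeholders for this example and must match your router's actual configuration:

```yaml
# BgpConf configures the local gobgp speaker (placeholder AS and routerId).
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpConf
metadata:
  name: default
spec:
  as: 50001
  listenPort: 17900
  routerId: 192.168.10.141
---
# BgpPeer describes the upstream router to peer with (placeholder values).
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpPeer
metadata:
  name: bgppeer
spec:
  conf:
    peerAs: 50000
    neighborAddress: 192.168.10.1
```

With ECMP enabled on the router, routes announced from multiple nodes for the same VIP are load-balanced across those nodes.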
Configuring ARP parameters
Deploying Layer 2 mode requires enabling strictARP in the cluster's ipvs configuration. Once it is enabled, kube-proxy stops answering ARP requests for NICs other than kube-ipvs0, and OpenELB takes over handling them instead.
Enabling strict ARP is equivalent to setting arp_ignore to 1 and arp_announce to 2; this works on the same principle as the real-server configuration in LVS DR mode.
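For reference, strict ARP corresponds to the following kernel parameters (an illustrative sysctl fragment only; kube-proxy applies the equivalent behavior itself when strictARP is enabled, so you normally do not set these by hand):

```ini
# Strict ARP, as applied to real servers in LVS DR mode:
# arp_ignore=1  - only answer ARP requests whose target IP is configured
#                 on the receiving interface
# arp_announce=2 - always use the best local source IP when sending
#                  ARP announcements
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```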
You need to prepare a Kubernetes cluster, and ensure that the Kubernetes version is 1.15 or later. OpenELB requires CustomResourceDefinition (CRD) v1, which is only supported by Kubernetes 1.15 or later. You can use the following methods to deploy a Kubernetes cluster:
Use KubeKey (recommended). You can use KubeKey to deploy a Kubernetes cluster with or without KubeSphere.
Follow official Kubernetes guides.
OpenELB is designed to be used in bare-metal Kubernetes environments. However, you can also use a cloud-based Kubernetes cluster for learning and testing.
1.Log in to the Kubernetes cluster over SSH and run the following command:
# kubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml
2.Run the following command to check whether the status of openelb-manager
is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.
# kubectl get pods -n openelb-system
Use OpenELB in Layer 2 Mode
Step 1: Enable strictARP for kube-proxy
In Layer 2 mode, you need to enable strictARP for kube-proxy so that all NICs in the Kubernetes cluster stop answering ARP requests from other NICs and OpenELB handles ARP requests instead.
1.Log in to the Kubernetes cluster and run the following command to edit the kube-proxy ConfigMap:
# kubectl edit configmap kube-proxy -n kube-system
2.In the kube-proxy ConfigMap YAML configuration, set data.config.conf.ipvs.strictARP to true.
ipvs:
  strictARP: true
mode: ipvs
3.Run the following command to restart kube-proxy:
# kubectl rollout restart daemonset kube-proxy -n kube-system
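The interactive edit in step 2 can also be scripted. The sketch below simulates the substitution on a sample snippet of the ConfigMap YAML; on a real cluster, the same sed pipeline would be fed from and back into kubectl (shown in the trailing comment), so review the output before applying it:

```shell
# Simulate flipping strictARP on a sample of the kube-proxy ConfigMap YAML.
sample='    ipvs:
      strictARP: false'
patched=$(printf '%s\n' "$sample" | sed -e 's/strictARP: false/strictARP: true/')
printf '%s\n' "$patched"

# On a real cluster, the equivalent non-interactive change would be:
#   kubectl get configmap kube-proxy -n kube-system -o yaml | \
#     sed -e 's/strictARP: false/strictARP: true/' | \
#     kubectl apply -f - -n kube-system
```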
Step 2: Specify the NIC Used for OpenELB
If the node where OpenELB is installed has multiple NICs, you need to specify the NIC used for OpenELB in Layer 2 mode. You can skip this step if the node has only one NIC.
In this example, the k8s-master01 node where OpenELB is installed has multiple NICs, and the NIC with IP address 192.168.10.141 will be used for OpenELB.
Run the following command to annotate k8s-master01 to specify the NIC:
# kubectl annotate nodes k8s-master01 layer2.openelb.kubesphere.io/v1alpha1="192.168.10.141"
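Note that the annotation value is the IP address of the NIC OpenELB should use, not the interface name. A small sketch of mapping that address back to its interface, run here against a sample line of `ip -4 -o addr` output (the interface name and address are assumed; on a real node you would pipe the live command instead):

```shell
# Sample output line from `ip -4 -o addr show` on the node (assumed values).
sample='2: ens33    inet 192.168.10.141/24 brd 192.168.10.255 scope global ens33'

# Field 2 is the interface name, field 4 the address with prefix length.
nic=$(printf '%s\n' "$sample" | awk '$4 ~ /^192\.168\.10\.141\// {print $2}')
printf '%s\n' "$nic"
```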
Step 3: Create an Eip Object
The Eip object functions as an IP address pool for OpenELB.
1.Run the following command to create a YAML file for the Eip object:
# vim layer2-eip.yaml
2.Add the following information to the YAML file:
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: layer2-eip
spec:
  address: 192.168.10.70-192.168.10.99
  interface: ens33
  protocol: layer2
- The IP addresses specified in spec:address must be on the same network segment as the Kubernetes cluster nodes.
- For details about the fields in the Eip YAML configuration, see Configure IP Address Pools Using Eip.
3.Run the following command to create the Eip object:
# kubectl apply -f layer2-eip.yaml
The following creates a Deployment of two Pods using the luksa/kubia image. Each Pod returns its own Pod name to external requests.
1.Run the following command to create a YAML file for the Deployment:
# vim layer2-openelb.yaml
2.Add the following information to the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: layer2-openelb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: layer2-openelb
  template:
    metadata:
      labels:
        app: layer2-openelb
    spec:
      containers:
      - image: luksa/kubia
        name: kubia
        ports:
        - containerPort: 8080
3.Run the following command to create the Deployment:
# kubectl apply -f layer2-openelb.yaml
1.Run the following command to create a YAML file for the Service:
# vim layer2-svc.yaml
2.Add the following information to the YAML file:
kind: Service
apiVersion: v1
metadata:
  name: layer2-svc
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: layer2-eip
spec:
  selector:
    app: layer2-openelb
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
  externalTrafficPolicy: Cluster
- You must set spec:type to LoadBalancer.
- The lb.kubesphere.io/v1alpha1: openelb annotation specifies that the Service uses OpenELB.
- The protocol.openelb.kubesphere.io/v1alpha1: layer2 annotation specifies that OpenELB is used in Layer 2 mode.
- The eip.openelb.kubesphere.io/v1alpha2: layer2-eip annotation specifies the Eip object used by OpenELB. If this annotation is not configured, OpenELB automatically uses the first available Eip object that matches the protocol. You can also delete this annotation and add the spec:loadBalancerIP field (for example, spec:loadBalancerIP: 192.168.0.91) to assign a specific IP address to the Service.
- If spec:externalTrafficPolicy is set to Cluster (default value), OpenELB randomly selects a node from all Kubernetes cluster nodes to handle Service requests. Pods on other nodes can also be reached over kube-proxy.
- If spec:externalTrafficPolicy is set to Local, OpenELB randomly selects a node that contains a Pod in the Kubernetes cluster to handle Service requests. Only Pods on the selected node can be reached.
3.Run the following command to create the Service:
# kubectl apply -f layer2-svc.yaml
In the Kubernetes cluster, run the following command to obtain the external IP address of the Service:
# kubectl get svc
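Once the Service has an EXTERNAL-IP assigned from the Eip pool, requests to that address are answered by the two Pods. A sketch of extracting the EXTERNAL-IP column from `kubectl get svc` output, run here against a sample line (the addresses are assumed for illustration):

```shell
# Sample line of `kubectl get svc layer2-svc` output (assumed values).
sample='layer2-svc   LoadBalancer   10.233.13.139   192.168.10.70   80:32658/TCP   10s'

# EXTERNAL-IP is the fourth column.
external_ip=$(printf '%s\n' "$sample" | awk '{print $4}')
printf '%s\n' "$external_ip"

# From a machine on the same network segment:
#   curl "http://$external_ip"   # each request returns a Pod name
```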
You can then log in to a master node of the k8s cluster and access the Service: curl http://www1.msb.com (this assumes a DNS or hosts entry resolving the domain to the Service's external IP).