In Kubernetes, Pods are the carriers of applications, and an application can be reached through its Pod's IP. However, Pods are frequently destroyed and recreated during updates, so their IPs are not fixed, which makes it inconvenient to access a service directly by Pod IP.
To solve this problem, Kubernetes provides the Service resource. A Service aggregates the Pods that provide the same service and exposes a single, unified entry address; by accessing the Service's entry address you reach the Pods behind it.
How a Service works: after a Service resource is created, its information is written to etcd. kube-proxy watches for these changes (through the api-server) and generates the latest forwarding rules (ipvs rules in ipvs mode). A request is first forwarded to the Service, which then uses the defined label selector to locate the matching Pods, and this group of Pods ultimately serves the request.
In many cases a Service is only a concept; what actually does the work is the kube-proxy process. A kube-proxy process runs on every Node. When a Service resource is created, its information is written to etcd via the api-server; kube-proxy discovers the change through its watch mechanism and converts the latest Service information into the corresponding forwarding rules. When users access a Service, they are really hitting the rules that kube-proxy generated.
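A quick way to see that one kube-proxy instance runs on every node is to list the kube-proxy Pods by label (the label k8s-app=kube-proxy is the same one used in the ipvs setup steps later in this article):

```
# one kube-proxy Pod should appear per node (master and workers)
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide
```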
kube-proxy supports three modes; take ipvs as an example:
```
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.10:53 rr
  -> 10.244.0.18:53               Masq    1      0          0
  -> 10.244.0.19:53               Masq    1      0          0
```
- 10.96.0.10:53 is the Service IP and port; rr means round-robin; 10.244.0.18 and 10.244.0.19 below it are Pod IP addresses
- When this Service IP is accessed, kube-proxy forwards the request to the backend Pods in round-robin order according to the ipvs rules
- These rules are generated on every node in the cluster, so the internal Pods can be reached from any node
userspace mode
In userspace mode, kube-proxy creates a listening port for every Service. A user request is first sent to the Cluster IP, then redirected by iptables rules to the port kube-proxy is listening on; kube-proxy picks a backing Pod according to the LB algorithm and establishes a connection to it.
In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds extra data copies between kernel space and user space; the mode is stable but relatively inefficient.
iptables mode
In iptables mode, kube-proxy creates iptables rules for every Pod behind a Service. A user request is first sent to the cluster IP and is then forwarded to a concrete Pod according to those iptables rules.
In this mode kube-proxy only watches for Service changes and regenerates the iptables rules; it no longer acts as a layer-4 load balancer itself. Compared with userspace mode this is more efficient, but it cannot provide flexible LB policies: when a Pod becomes unhealthy, iptables still forwards traffic to the broken Pod and does not retry.
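If your cluster happens to run in iptables mode, a rough way to inspect the chains kube-proxy maintains is shown below (KUBE-SERVICES and KUBE-SVC-* are the standard chain names kube-proxy creates in the nat table; the exact rule contents depend on your Services):

```
# list the entry chain kube-proxy maintains; each Service gets its own KUBE-SVC-* chain
iptables -t nat -L KUBE-SERVICES -n | head -n 20
# or dump the whole nat table and filter for per-Service chains
iptables-save -t nat | grep KUBE-SVC
```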
ipvs mode
ipvs mode is similar to iptables mode: kube-proxy watches Services and Pods and creates the corresponding ipvs rules. ipvs forwards traffic more efficiently than iptables and also supports more LB algorithms.
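One prerequisite worth noting before the steps below: ipvs mode needs the ip_vs kernel modules loaded and the ipvsadm/ipset tools installed on each node, otherwise kube-proxy falls back to iptables mode. A minimal sketch, assuming CentOS nodes:

```
# install the ipvs administration tools (ipvsadm -Ln is used throughout this article)
yum install -y ipvsadm ipset
# load the ipvs kernel modules and confirm they are present
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
lsmod | grep ip_vs
```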
1. Edit the kube-proxy configmap and set mode to ipvs

```
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
44     mode: "ipvs"        # around line 44
configmap/kube-proxy edited
```

2. Recreate all kube-proxy Pods

```
# -l selects the matching Pods by label
[root@k8s-master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-4lvxw" deleted
pod "kube-proxy-lkm8r" deleted
pod "kube-proxy-whmll" deleted
```

3. Check the ipvs rules

```
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.81.210:30376 rr
TCP  192.168.122.1:30376 rr
TCP  10.96.0.1:443 rr
  -> 192.168.81.210:6443          Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.18:53               Masq    1      0          0
  -> 10.244.0.19:53               Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.18:9153             Masq    1      0          0
  -> 10.244.0.19:9153             Masq    1      0          0
TCP  10.101.187.80:80 rr
TCP  10.102.142.245:80 rr
  -> 10.244.1.198:8080            Masq    1      0          0
TCP  10.103.231.226:80 rr persistent 10800
  -> 10.244.1.202:8080            Masq    1      0          0
TCP  10.111.44.36:443 rr
  -> 192.168.81.230:443           Masq    1      1          0
TCP  10.111.227.155:80 rr persistent 10800
TCP  10.244.0.0:30376 rr
TCP  10.244.0.1:30376 rr
TCP  127.0.0.1:30376 rr
TCP  172.17.0.1:30376 rr
UDP  10.96.0.10:53 rr
  -> 10.244.0.18:53               Masq    1      0          0
  -> 10.244.0.19:53               Masq    1      0          0
```
```
kind: Service                  # resource type
apiVersion: v1                 # resource version
metadata:                      # metadata
  name: service                # resource name
  namespace: dev               # namespace it lives in
spec:                          # spec
  selector:                    # label selector, decides which Pods this Service proxies
    app: nginx                 # Pod label
  type:                        # Service type, specifies how the Service is accessed
  clusterIP:                   # virtual service IP, i.e. the Service address
  sessionAffinity:             # session affinity, supports ClientIP and None
  ports:                       # port settings
  - protocol: TCP
    port: 3017                 # Service port
    targetPort: 5003           # Pod port
    nodePort: 31122            # port mapped onto the host (node)
```
A Service supports four access types: ClusterIP (the default, reachable only inside the cluster), NodePort (exposes the Service on a port of every node), LoadBalancer (uses an external load balancer, typically from a cloud provider), and ExternalName (maps the Service to an external DNS name).
sessionAffinity: by default, requests coming from the same client address are forwarded to different Pods in round-robin fashion. When you need all requests from one client to land on a single Pod, configure session affinity.
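A minimal sketch of the relevant spec fields is shown below. The sessionAffinityConfig block is not used in the examples later in this article; 10800 seconds is the Kubernetes default stickiness timeout, and it is exactly the "persistent 10800" value visible in the ipvsadm output above.

```
spec:
  sessionAffinity: ClientIP            # stick requests from the same client IP to one Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800            # how long the stickiness lasts; 10800 (3 hours) is the default
```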
Create a Deployment resource to use for testing the Service resources below
1. Write the yaml

```
[root@k8s-master ~/k8s_1.19_yaml/service]# vim deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
```

2. Create the resource

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl create -f deployment-nginx.yaml
deployment.apps/deployment-nginx created
```

3. Check the resource status

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl get deploy,pod -n dev -o wide
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
deployment.apps/deployment-nginx   3/3     3            3           26s   nginx        nginx:1.17.1   app=nginx-pod

NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
pod/deployment-nginx-5ffc5bf56c-788vz   1/1     Running   0          26s   10.244.2.25   k8s-node2   <none>           <none>
pod/deployment-nginx-5ffc5bf56c-b2wzp   1/1     Running   0          26s   10.244.2.27   k8s-node2   <none>           <none>
pod/deployment-nginx-5ffc5bf56c-zbhpr   1/1     Running   0          26s   10.244.2.26   k8s-node2   <none>           <none>
```

4. Modify the nginx index page in each Pod

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl exec -it deployment-nginx-5ffc5bf56c-788vz -n dev /bin/sh
# echo "pod: 10.244.2.25" > /usr/share/nginx/html/index.html
# cat /usr/share/nginx/html/index.html
pod: 10.244.2.25

[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl exec -it deployment-nginx-5ffc5bf56c-b2wzp -n dev /bin/sh
# echo "pod: 10.244.2.27" > /usr/share/nginx/html/index.html
# cat /usr/share/nginx/html/index.html
pod: 10.244.2.27

[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl exec -it deployment-nginx-5ffc5bf56c-zbhpr -n dev /bin/sh
# echo "pod: 10.244.2.26" > /usr/share/nginx/html/index.html
# cat /usr/share/nginx/html/index.html
pod: 10.244.2.26
```

5. Test nginx

```
[root@k8s-master ~/k8s_1.19_yaml/service]# curl 10.244.2.25
pod: 10.244.2.25
[root@k8s-master ~/k8s_1.19_yaml/service]# curl 10.244.2.26
pod: 10.244.2.26
[root@k8s-master ~/k8s_1.19_yaml/service]# curl 10.244.2.27
pod: 10.244.2.27
```
A ClusterIP Service can only be used inside the cluster; it cannot be accessed from outside.
1. Write the yaml

```
[root@k8s-master ~/k8s_1.19_yaml/service]# vim service-clusterip.yaml
apiVersion: v1                   # api version
kind: Service                    # resource type
metadata:                        # metadata
  name: service-clusterip        # name of the Service
  namespace: dev                 # namespace it lives in
spec:                            # spec
  selector:                      # label selector
    app: nginx-pod               # must match the label defined on the Pods, not the Deployment's own label
  clusterIP: 10.100.100.100      # Service IP; if omitted, Kubernetes assigns one automatically
  type: ClusterIP                # Service type: ClusterIP
  ports:                         # port settings
  - port: 80                     # Service port
    targetPort: 80               # Pod port
```

2. Create the Service resource

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl create -f service-clusterip.yaml
service/service-clusterip created
```

3. Check the Service resource

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl get svc -n dev
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service-clusterip   ClusterIP   10.100.100.100   <none>        80/TCP    4m53s
```

4. Look at the Service details

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP:                10.100.100.100
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.207:80,10.244.2.28:80,10.244.2.29:80   # Endpoints is itself a Kubernetes resource; it lists the backend Pods of this Service
Session Affinity:  None
Events:            <none>
```

5. Check the ipvs mapping rules

```
[root@k8s-master ~/k8s_1.19_yaml/service]# ipvsadm -Ln
TCP  10.100.100.100:80 rr             # the Service address; the scheduling algorithm is rr (round-robin)
  -> 10.244.1.207:80              Masq    1      0          0
  -> 10.244.2.28:80               Masq    1      0          0
  -> 10.244.2.29:80               Masq    1      0          0
```

6. Access the Service and check the result

```
[root@k8s-master ~/k8s_1.19_yaml/service]# curl 10.100.100.100:80
pod: 10.244.2.28
[root@k8s-master ~/k8s_1.19_yaml/service]# curl 10.100.100.100:80
pod: 10.244.1.207
[root@k8s-master ~/k8s_1.19_yaml/service]# curl 10.100.100.100:80
pod: 10.244.2.29
```

7. Since the default Service scheduling algorithm is rr, test it in a loop

```
[root@k8s-master ~/k8s_1.19_yaml/service]# while true; do curl 10.100.100.100:80; sleep 3; done
pod: 10.244.1.207
pod: 10.244.2.29
pod: 10.244.2.28
pod: 10.244.1.207
pod: 10.244.2.29
pod: 10.244.2.28
# the requests are distributed evenly across the three Pods
```
After testing ClusterIP above, the detailed Service output contained an Endpoints field that listed all of the Pod IPs.
Endpoints is in fact a Kubernetes resource type in its own right, stored in etcd, that records all Pod access addresses for a Service; it is produced from the selector defined in the Service configuration.
A Service is backed by a group of Pods, and these Pods are exposed through Endpoints. Endpoints is the set of endpoints that actually implement the service, and it is what links a Service to its Pods.
```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl get endpoints -n dev
NAME                ENDPOINTS                                       AGE
service-clusterip   10.244.1.207:80,10.244.2.28:80,10.244.2.29:80   18m
```
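Because Endpoints is an ordinary resource, a Service can also be created without a selector and have its Endpoints maintained by hand, which illustrates that the Service-to-Pod link really is just this endpoint list. A minimal sketch, assuming a hypothetical name external-web and a hypothetical backend address 192.168.81.240 (neither is a resource from this article):

```
apiVersion: v1
kind: Service
metadata:
  name: external-web             # hypothetical name; no selector is defined
  namespace: dev
spec:
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-web             # must match the Service name
  namespace: dev
subsets:
- addresses:
  - ip: 192.168.81.240           # hypothetical backend address, maintained manually
  ports:
  - port: 80
```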
When users access a Service, requests are distributed across the backend Pods. Kubernetes currently provides two distribution policies: the default policy (kube-proxy's scheduling algorithm, round-robin in the examples above) and client-IP-based session affinity.
Client-address-based affinity is configured through the sessionAffinity field in spec.
1. Set the affinity policy to ClientIP

```
[root@k8s-master ~/k8s_1.19_yaml/service]# vim service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  sessionAffinity: ClientIP      # session affinity set to ClientIP
  clusterIP: 10.100.100.100
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
```

2. Update the resource

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl apply -f service-clusterip.yaml
service/service-clusterip configured
```

3. Verify that the configuration has taken effect

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl describe svc -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP:                10.100.100.100
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.207:80,10.244.2.28:80,10.244.2.29:80
Session Affinity:  ClientIP          # now in effect
Events:            <none>
```

4. Test in a loop

```
[root@k8s-master ~/k8s_1.19_yaml/service]# while true; do curl 10.100.100.100:80; sleep 3; done
pod: 10.244.1.207
pod: 10.244.1.207
pod: 10.244.1.207
pod: 10.244.1.207
pod: 10.244.1.207
# requests from the same client IP now always land on the same Pod
```
In some scenarios developers may not want the load balancing a Service provides and prefer to control the load-balancing policy themselves. For this case Kubernetes offers Headless Services. A Headless Service is not assigned a ClusterIP; it can only be reached by querying the Service's DNS name.
A Service gets a default domain name of the form <service-name>.<namespace>.svc.cluster.local; the cluster domain suffix is configured when the cluster is created and defaults to cluster.local.
1. Write the yaml

```
[root@k8s-master ~/k8s_1.19_yaml/service]# vim service-headliness.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  sessionAffinity: ClientIP
  clusterIP: None               # almost identical to the ClusterIP config; just set clusterIP to None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
```

2. Create the resource

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl create -f service-headliness.yaml
service/service-headliness created
```

3. Check the Service status

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl get svc service-headliness -n dev
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service-headliness   ClusterIP   None         <none>        80/TCP    41s
# the Service has no ClusterIP, so it can only be reached through its domain name
```

4. Look up the DNS configuration a Pod uses

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl exec -it deployment-nginx-5d4468bc6d-cxhgd /bin/bash -n dev
root@deployment-nginx-5d4468bc6d-cxhgd:/# cat /etc/resolv.conf
nameserver 10.96.0.10                                           # DNS server address
search dev.svc.cluster.local svc.cluster.local cluster.local    # search domains
options ndots:5
```

5. Resolve the domain name

```
[root@k8s-master ~/k8s_1.19_yaml/service]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
;; ANSWER SECTION:
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.207
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.28
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.29
```
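Note how the Headless Service resolves directly to the Pod IPs, whereas a normal ClusterIP Service resolves to its single virtual IP. You can confirm the difference with the service-clusterip Service created earlier (output not reproduced here; it should return the single A record 10.100.100.100):

```
dig @10.96.0.10 service-clusterip.dev.svc.cluster.local
```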
If you want to expose a Service outside the cluster, you need another Service type: NodePort. A NodePort Service works by mapping the Service's port to a port on the Node, so the Service can then be accessed via NodeIP:NodePort.
How NodePort works: the Service port is mapped to a port on the node (chosen by default from the 30000-32767 range); this port is opened on every node in the cluster, and a request to any node's IP on that port is forwarded to the corresponding Service.
1. Write the yaml

```
[root@k8s-master ~/k8s_1.19_yaml/service]# vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort                # Service type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31001             # the port mapped on the nodes; if omitted, a random port in the NodePort range is assigned
```

2. Create the resource

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl create -f service-nodeport.yaml
service/service-nodeport created
```

3. Check the resource status

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl get svc service-nodeport -n dev
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service-nodeport   NodePort   10.96.230.10   <none>        80:31001/TCP   36s
```

4. Check the ipvs rules

```
[root@k8s-master ~/k8s_1.19_yaml/service]# ipvsadm -Ln
TCP  10.244.0.0:31001 rr
  -> 10.244.1.207:80              Masq    1      0          0
  -> 10.244.2.28:80               Masq    1      0          0
  -> 10.244.2.29:80               Masq    1      0          0
```

5. Access nodeip:nodeport to verify it works

```
[root@k8s-master ~/k8s_1.19_yaml/service]# curl http://192.168.81.210:31001/
pod: 10.244.1.207
[root@k8s-master ~/k8s_1.19_yaml/service]# curl http://192.168.81.210:31001/
pod: 10.244.2.29
[root@k8s-master ~/k8s_1.19_yaml/service]# curl http://192.168.81.210:31001/
pod: 10.244.2.28
```
An ExternalName Service is used to bring an external service into the cluster. The externalName field specifies an external service address; once the Service is created, the external service can be reached inside the cluster through the Service's domain name.
In short, ExternalName maps an external service into the cluster so that it can be accessed through a Service; it works purely at the DNS level by returning a CNAME record, with no proxying or port mapping involved.
1. Write the yaml

```
[root@k8s-master ~/k8s_1.19_yaml/service]# vim service-externalname.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName
  externalName: www.baidu.com
```

2. Create from the yaml

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl create -f service-externalname.yaml
service/service-externalname created
```

3. Check the resource status

```
[root@k8s-master ~/k8s_1.19_yaml/service]# kubectl get svc service-externalname -n dev
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE
service-externalname   ExternalName   <none>       www.baidu.com   <none>    105s
# this type of Service also has no ClusterIP
```

4. Resolve the domain name to verify that the external service is reachable

```
[root@k8s-master ~/k8s_1.19_yaml/service]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 5 IN CNAME www.baidu.com.
www.baidu.com.          5       IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       5       IN      A       220.181.38.150
www.a.shifen.com.       5       IN      A       220.181.38.149
```

5. Test whether the domain name reaches the external site

```
[root@k8s-master ~/k8s_1.19_yaml/service]# curl www.a.shifen.com
```