This article continues our tour of Kubernetes, the ops world's super tool. There is a lot of material here, so take your time; it will deepen your understanding of Kubernetes, covering the NodePort type of Service, creating, deleting and listing namespaces, switching the kube-proxy working mode to ipvs (the common choice in production), the readiness probe readinessProbe and the startup probe startupProbe, static Pods, graceful Pod termination, the Pod creation and deletion flow, and application upgrades with an rc and an svc.
A previous article showed that a Service provides automatic discovery and load balancing for Pods, but the load-balanced IP sits in the 10.200.x.x range, so end users cannot use it to reach the business running in the Pod's containers. That is where the NodePort type comes in: after hostNetwork and hostIP it is the third way we have covered for reaching a Pod, and its advantage over the first two is that it adds load balancing.
Write the svc resource manifest
- [root@Master231 svc]# cat 04-svc-NodeIP.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: myweb-nodeport
- spec:
- # Set the svc type to NodePort, which simply listens on a port on every worker node on top of the default ClusterIP.
- type: NodePort
- # Associate the Pods via a label selector
- selector:
- apps: web
- # Configure the port mapping
- ports:
- # Port of the Service itself
- - port: 8888
- # Port on which the backend Pods serve
- targetPort: 80
Start the Pods we will test the svc manifest against; after applying it we find the Service automatically mapped to port 32212 on the hosts
- [root@Master231 rc]# kubectl apply -f 02-rc-nginx.yaml
- configmap/nginx.conf unchanged
- replicationcontroller/koten-rc created
- [root@Master231 rc]# kubectl get pods -o wide --show-labels
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
- koten-rc-4b7rj 1/1 Running 0 2m16s 10.100.1.143 worker232 <none> <none> apps=web
- koten-rc-4zz2k 1/1 Running 0 2m16s 10.100.1.145 worker232 <none> <none> apps=web
- koten-rc-7xmbf 1/1 Running 0 2m16s 10.100.2.65 worker233 <none> <none> apps=web
- koten-rc-8q5s9 1/1 Running 0 2m16s 10.100.1.142 worker232 <none> <none> apps=web
- koten-rc-t5klq 1/1 Running 0 2m16s 10.100.2.63 worker233 <none> <none> apps=web
- koten-rc-t8rl7 1/1 Running 0 2m16s 10.100.1.144 worker232 <none> <none> apps=web
- koten-rc-v9jk6 1/1 Running 0 2m16s 10.100.2.64 worker233 <none> <none> apps=web
- [root@Master231 svc]# kubectl apply -f 04-svc-NodeIP.yaml
- service/myweb-nodeport created
- [root@Master231 svc]# kubectl get svc
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 4m58s
- myweb-nodeport NodePort 10.200.98.50 <none> 8888:32212/TCP 5s
Accessing that port on any of the three nodes works. There are now three ports to keep apart: 80 is the container port inside the Pod, 8888 is the Service's own port, and because the type is NodePort the Service is additionally exposed on every node, here on port 32212. External clients reach the business inside our Pods through a host IP plus that port.
- [root@Master231 svc]# curl 10.0.0.231:32212
- <html>
- <head><title>403 Forbidden</title></head>
- <body>
- <center><h1>403 Forbidden</h1></center>
- <hr><center>nginx/1.25.1</center>
- </body>
- </html>
- [root@Master231 svc]# curl 10.0.0.232:32212
- <html>
- <head><title>403 Forbidden</title></head>
- <body>
- <center><h1>403 Forbidden</h1></center>
- <hr><center>nginx/1.25.1</center>
- </body>
- </html>
- [root@Master231 svc]# curl 10.0.0.233:32212
- <html>
- <head><title>403 Forbidden</title></head>
- <body>
- <center><h1>403 Forbidden</h1></center>
- <hr><center>nginx/1.25.1</center>
- </body>
- </html>
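To make the three layers concrete, here is a quick sketch run from a cluster node (reusing the Pod IP 10.100.1.142 and the ClusterIP 10.200.98.50 shown above); it hits the same nginx at each level and should report the same 403 each time:
- [root@Master231 svc]# curl -I 10.100.1.142:80      # Pod IP : container port
- [root@Master231 svc]# curl -I 10.200.98.50:8888    # ClusterIP : Service port
- [root@Master231 svc]# curl -I 10.0.0.231:32212     # Node IP : NodePort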
However, if we look for the port on the node, nothing is listening on it. The reason is that the forwarding is done with iptables underneath: the routing rules behind an svc are written by kube-proxy, which uses iptables by default; in production ipvs is recommended.
- [root@Master231 svc]# ss -ntl | grep 32212
- [root@Master231 svc]#
Let's look at the svc details and then grep the iptables rules to see how the port forwarding is actually done
- [root@Master231 svc]# kubectl describe svc myweb-nodeport
- Name: myweb-nodeport
- Namespace: default
- Labels: <none>
- Annotations: <none>
- Selector: apps=web
- Type: NodePort
- IP Family Policy: SingleStack
- IP Families: IPv4
- IP: 10.200.98.50
- IPs: 10.200.98.50
- Port: <unset> 8888/TCP
- TargetPort: 80/TCP
- NodePort: <unset> 32212/TCP
- Endpoints: 10.100.1.142:80,10.100.1.143:80,10.100.1.144:80 + 4 more...
- Session Affinity: None
- External Traffic Policy: Cluster
- Events: <none>
- [root@Master231 svc]# iptables-save | grep 10.200.98.50
- -A KUBE-SERVICES -d 10.200.98.50/32 -p tcp -m comment --comment "default/myweb-nodeport cluster IP" -m tcp --dport 8888 -j KUBE-SVC-LX25QHSHDI4TEKI3
- -A KUBE-SVC-LX25QHSHDI4TEKI3 ! -s 10.100.0.0/16 -d 10.200.98.50/32 -p tcp -m comment --comment "default/myweb-nodeport cluster IP" -m tcp --dport 8888 -j KUBE-MARK-MASQ
Copying the KUBE-SVC chain name and grepping for it, we find not only the port-forwarding rules but also the rules that spread traffic across the seven Pod IPs. The probabilities are applied in order (1/7, then 1/6 of what is left, then 1/5, 1/4, 1/3, 1/2, and finally everything that remains), so each endpoint ends up with a uniform 1/7 chance of being picked; for example the second rule is only reached with probability 6/7 and then matches with probability 1/6, and 6/7 × 1/6 = 1/7.
- [root@Master231 svc]# iptables-save | grep KUBE-SVC-LX25QHSHDI4TEKI3
- :KUBE-SVC-LX25QHSHDI4TEKI3 - [0:0]
- -A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb-nodeport" -m tcp --dport 32212 -j KUBE-SVC-LX25QHSHDI4TEKI3
- -A KUBE-SERVICES -d 10.200.98.50/32 -p tcp -m comment --comment "default/myweb-nodeport cluster IP" -m tcp --dport 8888 -j KUBE-SVC-LX25QHSHDI4TEKI3
- -A KUBE-SVC-LX25QHSHDI4TEKI3 ! -s 10.100.0.0/16 -d 10.200.98.50/32 -p tcp -m comment --comment "default/myweb-nodeport cluster IP" -m tcp --dport 8888 -j KUBE-MARK-MASQ
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -p tcp -m comment --comment "default/myweb-nodeport" -m tcp --dport 32212 -j KUBE-MARK-MASQ
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.14285714272 -j KUBE-SEP-34YMX72M7Y4RMEBB
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.16666666651 -j KUBE-SEP-VFCOKZYDGC4I5XA7
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-ES2K7MWKTIV5L5XD
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-5DAVBS243ETY24QA
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-3NKFEKM2JTI6MOIL
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OGLMOFLEJIELE43F
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -j KUBE-SEP-DLGA52B2M7I3YUU6
Picking one of those endpoint chains and grepping for it, we indeed find the Pod IP
- [root@Master231 svc]# iptables-save | grep KUBE-SEP-34YMX72M7Y4RMEBB
- :KUBE-SEP-34YMX72M7Y4RMEBB - [0:0]
- -A KUBE-SEP-34YMX72M7Y4RMEBB -s 10.100.1.142/32 -m comment --comment "default/myweb-nodeport" -j KUBE-MARK-MASQ
- -A KUBE-SEP-34YMX72M7Y4RMEBB -p tcp -m comment --comment "default/myweb-nodeport" -m tcp -j DNAT --to-destination 10.100.1.142:80
- -A KUBE-SVC-LX25QHSHDI4TEKI3 -m comment --comment "default/myweb-nodeport" -m statistic --mode random --probability 0.14285714272 -j KUBE-SEP-34YMX72M7Y4RMEBB
Looking at the svc's Endpoints details, that IP is one of the entries under Addresses; the other chains can of course be traced the same way, so we will not repeat it
- [root@Master231 svc]# kubectl describe ep myweb-nodeport
- Name: myweb-nodeport
- Namespace: default
- Labels: <none>
- Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2023-06-20T11:42:33Z
- Subsets:
- Addresses: 10.100.1.142,10.100.1.143,10.100.1.144,10.100.1.145,10.100.2.63,10.100.2.64,10.100.2.65
- NotReadyAddresses: <none>
- Ports:
- Name Port Protocol
- ---- ---- --------
- <unset> 80 TCP
-
- Events: <none>
Now set nodePort to 30080 explicitly. The default NodePort range is 30000-32767; that is an upstream rule, and changing the range requires changing the startup parameters of the api-server.
- [root@Master231 svc]# cat 05-svc-NodeIP.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: myweb-nodeport
- spec:
- # Set the svc type to NodePort, which simply listens on an extra port on every worker node on top of the default ClusterIP.
- type: NodePort
- # Associate the Pods via a label selector
- selector:
- apps: web
- # Configure the port mapping
- ports:
- # Port of the Service itself
- - port: 8888
- # Port on which the backend Pods serve
- targetPort: 80
- # For a NodePort Service you may specify the port NodePort listens on; if omitted, one is assigned at random.
- nodePort: 30080
Apply the svc manifest and confirm that the port the nodes serve on has indeed changed
- [root@Master231 svc]# kubectl apply -f 05-svc-NodeIP.yaml
- service/myweb-nodeport created
- [root@Master231 svc]# kubectl get pods -o wide --show-labels
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
- koten-rc-4dzmj 1/1 Running 0 118s 10.100.2.69 worker233 <none> <none> apps=web
- koten-rc-52snl 1/1 Running 0 118s 10.100.1.149 worker232 <none> <none> apps=web
- koten-rc-kh9wf 1/1 Running 0 118s 10.100.1.150 worker232 <none> <none> apps=web
- [root@Master231 svc]# kubectl get svc
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 3m18s
- myweb-nodeport NodePort 10.200.193.203 <none> 8888:30080/TCP 2m16s
- [root@Master231 svc]# curl 10.0.0.231:30080
- <html>
- <head><title>403 Forbidden</title></head>
- <body>
- <center><h1>403 Forbidden</h1></center>
- <hr><center>nginx/1.25.1</center>
- </body>
- </html>
A Namespace is a resource object used to group and isolate Kubernetes resources. With Namespaces, different resources (Pods, Services, Deployments and so on) can be placed in separate logical groups, giving you isolation and easier management.
1. View the namespaces
- [root@Master231 svc]# kubectl get ns
- NAME STATUS AGE
- default Active 6d3h
- kube-flannel Active 6d3h
- kube-node-lease Active 6d3h
- kube-public Active 6d3h
- kube-system Active 6d3h
2. View resources in the default namespace; when no namespace is specified, default is assumed
- [root@Master231 svc]# kubectl get pods
- NAME READY STATUS RESTARTS AGE
- koten-rc-j86lg 1/1 Running 0 3s
- koten-rc-pjnjq 1/1 Running 0 3s
- koten-rc-zcjck 1/1 Running 0 3s
3. View the specified kube-system namespace
- [root@Master231 svc]# kubectl get pods -n kube-system
- NAME READY STATUS RESTARTS AGE
- coredns-6d8c4cb4d-5cxh8 1/1 Running 162 (18h ago) 6d3h
- coredns-6d8c4cb4d-tkkmr 1/1 Running 164 (18h ago) 6d3h
- etcd-master231 1/1 Running 3 (18h ago) 6d3h
- kube-apiserver-master231 1/1 Running 3 (18h ago) 6d3h
- kube-controller-manager-master231 1/1 Running 3 (18h ago) 6d3h
- kube-proxy-cl72l 1/1 Running 4 (18h ago) 6d3h
- kube-proxy-fpfw6 1/1 Running 3 (18h ago) 6d3h
- kube-proxy-lq4jw 1/1 Running 3 (18h ago) 6d3h
- kube-scheduler-master231 1/1 Running 3 (18h ago) 6d3h
4. View Pod and cm resources across all namespaces
[root@Master231 pod]# kubectl get pods,cm -A
1. Create a namespace imperatively
- [root@Master231 svc]# kubectl create namespace linux
- namespace/linux created
2. Create a namespace declaratively
- [root@Master231 namespaces]# cat 01-ns-custom.yaml
- apiVersion: v1
- kind: Namespace
- metadata:
- name: koten-linux-custom
- labels:
- author: koten
- hobby: linux
- [root@Master231 namespaces]# kubectl apply -f 01-ns-custom.yaml
- namespace/koten-linux-custom created
3. Use a namespace
- [root@Master231 pod]# cat 31-pods-volumes-configMap-games-ns.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: koten-games-cm-ns-002
- # Put the resource into the specified namespace
- namespace: linux
- spec:
- nodeName: worker233
- volumes:
- - name: data01
- configMap:
- name: nginx.conf
- containers:
- - name: games
- image: harbor.koten.com/koten-games/games:v0.5
- volumeMounts:
- - name: data01
- mountPath: /etc/nginx/conf.d/
-
- ---
-
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: nginx.conf
- namespace: linux
- data:
- nginx.conf: |
- user nginx;
- worker_processes auto;
- error_log /var/log/nginx/error.log notice;
- pid /var/run/nginx.pid;
- events {
- worker_connections 1024;
- }
- http {
- include /etc/nginx/mime.types;
- default_type application/octet-stream;
- log_format main '$remote_addr - $remote_user [$time_local] "$request" '
- '$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
- access_log /var/log/nginx/access.log main;
- sendfile on;
- keepalive_timeout 65;
- include /etc/nginx/conf.d/*.conf;
- }
- [root@Master231 pod]# kubectl apply -f 31-pods-volumes-configMap-games-ns.yaml
- pod/koten-games-cm-ns-002 created
- configmap/nginx.conf created
- [root@Master231 pod]# kubectl -n linux get cm,po
- NAME DATA AGE
- configmap/kube-root-ca.crt 1 6m
- configmap/nginx.conf 1 11s
-
- NAME READY STATUS RESTARTS AGE
- pod/koten-games-cm-ns-002 1/1 Running 0 11s
1. Delete a namespace; once a namespace is deleted, every resource in it is deleted with it
- [root@Master231 pod]# kubectl delete namespaces linux
- namespace "linux" deleted
- [root@Master231 pod]# kubectl -n linux get cm,po
- No resources found in linux namespace.
1. Check kube-proxy's default working mode; you can see it is iptables
- [root@Master231 pod]# kubectl -n kube-system logs -f kube-proxy-cl72l
- I0620 00:32:58.145622 1 node.go:163] Successfully retrieved node IP: 10.0.0.232
- I0620 00:32:58.145952 1 server_others.go:138] "Detected node IP" address="10.0.0.232"
- I0620 00:32:58.146076 1 server_others.go:572] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
- I0620 00:32:58.232023 1 server_others.go:206] "Using iptables Proxier"
- ......
2. Change the default working mode
- [root@Master231 pod]# kubectl -n kube-system edit cm kube-proxy
- ......
- configmap/kube-proxy edited
- [root@Master231 pod]# kubectl -n kube-system get cm kube-proxy -o yaml | grep mode
- mode: "ipvs"
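For reference, the relevant fragment of that ConfigMap (the config.conf key, a KubeProxyConfiguration object) looks roughly like the sketch below; besides mode you can also choose the ipvs scheduler, which falls back to round-robin when left empty:
- apiVersion: kubeproxy.config.k8s.io/v1alpha1
- kind: KubeProxyConfiguration
- mode: "ipvs"
- ipvs:
-   scheduler: "rr"   # rr (round-robin) is the default; other ipvs schedulers can be set here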
3. Install the ipvs modules and management tools on every node
Install the ipvs-related packages on all nodes:
yum -y install conntrack-tools ipvsadm.x86_64
Write the configuration file that loads the ipvs modules
- cat > /etc/sysconfig/modules/ipvs.modules <<EOF
- #!/bin/bash
-
- modprobe -- ip_vs
- modprobe -- ip_vs_rr
- modprobe -- ip_vs_wrr
- modprobe -- ip_vs_sh
- modprobe -- nf_conntrack_ipv4
- EOF
Load the ipvs modules and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
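Note: on kernels 4.19 and later the nf_conntrack_ipv4 module has been merged into nf_conntrack, so on such systems replace the last modprobe line with nf_conntrack and check with:
- lsmod | grep -e ip_vs -e nf_conntrack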
4. Restart the kube-proxy Pods so the cm change takes effect
- [root@Master231 pod]# kubectl -n kube-system get pods | grep kube-proxy
- kube-proxy-cl72l 1/1 Running 4 (19h ago) 6d3h
- kube-proxy-fpfw6 1/1 Running 3 (19h ago) 6d3h
- kube-proxy-lq4jw 1/1 Running 3 (19h ago) 6d4h
- [root@Master231 pod]# kubectl -n kube-system delete pod `kubectl -n kube-system get pods | grep kube-proxy | awk '{print $1}'`
- pod "kube-proxy-cl72l" deleted
- pod "kube-proxy-fpfw6" deleted
- pod "kube-proxy-lq4jw" deleted
5. Verify that the working mode was changed; it was
- [root@Master231 pod]# kubectl -n kube-system get pods | grep kube-proxy
- kube-proxy-8tk4w 1/1 Running 0 61s
- kube-proxy-dcblb 1/1 Running 0 62s
- kube-proxy-jpgmd 1/1 Running 0 62s
- [root@Master231 pod]# kubectl -n kube-system logs -f kube-proxy-8tk4w
- I0620 12:35:35.306840 1 node.go:163] Successfully retrieved node IP: 10.0.0.233
- I0620 12:35:35.307021 1 server_others.go:138] "Detected node IP" address="10.0.0.233"
- I0620 12:35:35.369096 1 server_others.go:269] "Using ipvs Proxier"
6. Look at the svc mapping under ipvs
- # First start the rc and the svc
- [root@Master231 rc]# kubectl apply -f 03-rc-nginx.yaml
- configmap/nginx.conf unchanged
- replicationcontroller/koten-rc created
- [root@Master231 svc]# kubectl apply -f 05-svc-NodeIP.yaml
- service/myweb-nodeport created
-
- # The svc address and port point at the Pod IPs and ports
- [root@Master231 rc]# kubectl get svc
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 46s
- myweb-nodeport NodePort 10.200.34.126 <none> 8888:30080/TCP 7s
- [root@Master231 rc]# ipvsadm -ln | grep 10.200.34.126 -A 3
- TCP 10.200.34.126:8888 rr
- -> 10.100.1.156:80 Masq 1 0 0
- -> 10.100.1.157:80 Masq 1 0 0
- -> 10.100.1.158:80 Masq 1 0 0
-
- # The Pod IPs it points at match the Endpoints
- [root@Master231 rc]# kubectl describe svc myweb-nodeport
- Name: myweb-nodeport
- Namespace: default
- Labels: <none>
- Annotations: <none>
- Selector: apps=web
- Type: NodePort
- IP Family Policy: SingleStack
- IP Families: IPv4
- IP: 10.200.34.126
- IPs: 10.200.34.126
- Port: <unset> 8888/TCP
- TargetPort: 80/TCP
- NodePort: <unset> 30080/TCP
- Endpoints: 10.100.1.156:80,10.100.1.157:80,10.100.1.158:80 + 4 more...
- Session Affinity: None
- External Traffic Policy: Cluster
- Events: <none>
- Readiness check: periodically checks whether the service is usable and, based on that, whether the container is ready.
- If the check finds the Pod's service unavailable, the Pod is removed from the svc's ep list.
- If the check finds the Pod's service available again, the Pod is added back to the svc's ep list.
- If the container does not define a readiness check, the default state is Success.
A probe can check via exec, httpGet or tcpSocket; here I will only demonstrate httpGet (a quick sketch of the other two follows).
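For reference, a minimal sketch of what the exec and tcpSocket variants could look like for the same nginx container (the command and file path are illustrative, not part of the demo below):
- # exec variant: the container is ready when the command exits 0
- readinessProbe:
-   exec:
-     command: ["/bin/sh", "-c", "test -f /usr/share/nginx/html/koten.html"]
- # tcpSocket variant: the container is ready when a TCP connection to the port succeeds
- readinessProbe:
-   tcpSocket:
-     port: 80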
Write a manifest that defines both a liveness (health) check and a readiness probe. Watching it, the Pod looks healthy the whole time, but if we do not intervene the readiness check starts failing at about second 21, and around second 23 the Pod is removed from the svc.
- [root@Master231 pod]# cat 32-pods-livenessProbe-readinessProbe-httGet.yaml
- kind: Pod
- apiVersion: v1
- metadata:
- name: probe-liveness-readiness-httpget
- labels:
- author: koten
- hobby: linux
- spec:
- containers:
- - name: web
- image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- livenessProbe:
- httpGet:
- port: 80
- path: /index.html
- failureThreshold: 3
- initialDelaySeconds: 0
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 1
- readinessProbe:
- httpGet:
- port: 80
- path: /koten.html
- failureThreshold: 3
- initialDelaySeconds: 20
- periodSeconds: 1
- successThreshold: 1
- timeoutSeconds: 1
-
- ---
-
- apiVersion: v1
- kind: Service
- metadata:
- name: probe-readiness-liveness-httpget
- spec:
- type: ClusterIP
- selector:
- hobby: linux
- ports:
- - port: 8888
- targetPort: 80
Run it and test: the STATUS column shows Running, but READY stays not-ready; at about second 21 it is reported unavailable, and around second 23 it is removed from the ep.
- [root@Master231 pod]# kubectl apply -f 32-pods-livenessProbe-readinessProbe-httGet.yaml
- pod/probe-liveness-readiness-httpget created
- service/probe-readiness-liveness-httpget created
- [root@Master231 pod]# kubectl get pods
- NAME READY STATUS RESTARTS AGE
- probe-liveness-readiness-httpget 0/1 Running 0 15s
- [root@Master231 pod]# kubectl describe pods probe-liveness-readiness-httpget
- .....
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 23s default-scheduler Successfully assigned default/probe-liveness-readiness-httpget to worker232
- Normal Pulled 23s kubelet Container image "harbor.koten.com/koten-web/nginx:1.25.1-alpine" already present on machine
- Normal Created 23s kubelet Created container web
- Normal Started 22s kubelet Started container web
- Warning Unhealthy 1s (x2 over 2s) kubelet Readine
-
- [root@Master231 pod]# kubectl describe ep probe-readiness-liveness-httpget
- Name: probe-readiness-liveness-httpget
- Namespace: default
- Labels: <none>
- Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2023-06-20T13:21:29Z
- Subsets:
- Addresses: <none>
- NotReadyAddresses: 10.100.1.164
- Ports:
- Name Port Protocol
- ---- ---- --------
- <unset> 80 TCP
-
- Events: <none>
- [root@Master231 pod]#
If, within the first 20 seconds, we create the /koten.html file inside the container (one way is shown right below), the Pod's IP shows up under Addresses and the not-ready entry disappears.
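One way to create it, assuming the image's default nginx document root /usr/share/nginx/html (the same root used for start.html later in this article):
- [root@Master231 pod]# kubectl exec -it probe-liveness-readiness-httpget -- sh -c 'echo ok > /usr/share/nginx/html/koten.html'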
- [root@Master231 pod]# kubectl describe ep probe-readiness-liveness-httpget
- Name: probe-readiness-liveness-httpget
- Namespace: default
- Labels: <none>
- Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2023-06-20T13:26:36Z
- Subsets:
- Addresses: 10.100.1.167
- NotReadyAddresses: <none>
- Ports:
- Name Port Protocol
- ---- ---- --------
- <unset> 80 TCP
-
- Events: <none>
- [root@Master231 pod]#
- The startupProbe probe only exists in Kubernetes 1.16 and later.
- If a startup probe is defined, all other probes are disabled until it succeeds.
- If the startup probe fails, the kubelet kills the container and the container is restarted according to its restart policy.
- If the container does not define a startup probe, the default state is Success.
Again I will only demonstrate the httpGet method; the other methods work the same way.
Write the manifest and observe the probes. Although a liveness probe and a readiness probe are defined, they normally start checking as soon as the business starts; when a startup probe is defined alongside them, they rank below it. You can see the startup probe start failing at about second 41 and the container being killed around second 43.
- kind: Pod
- apiVersion: v1
- metadata:
- name: probe-startup-liveness-readiness-httpget
- labels:
- author: koten
- hobby: linux
- spec:
- containers:
- - name: web
- image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- # Liveness probe: if omitted it defaults to success; if defined and the check fails, the container is restarted, i.e. recreated.
- livenessProbe:
- httpGet:
- port: 80
- path: /linux.html
- failureThreshold: 3
- initialDelaySeconds: 10
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 1
- # Readiness probe: if omitted it defaults to success; if defined and the check fails, the Pod is marked not ready, and a not-ready Pod is not recorded in the svc's ep list.
- readinessProbe:
- httpGet:
- port: 80
- path: /koten.html
- failureThreshold: 3
- initialDelaySeconds: 20
- periodSeconds: 1
- successThreshold: 1
- timeoutSeconds: 1
- # Startup probe: if defined, it is checked first, and only after it succeeds do the readinessProbe and livenessProbe run.
- # If it is not defined, it defaults to success.
- startupProbe:
- httpGet:
- port: 80
- path: /start.html
- failureThreshold: 3
- initialDelaySeconds: 40
- periodSeconds: 1
- successThreshold: 1
- timeoutSeconds: 1
-
- ---
-
- apiVersion: v1
- kind: Service
- metadata:
- name: probe-startup-readiness-liveness-httpget
- spec:
- type: ClusterIP
- selector:
- hobby: linux
- ports:
- - port: 8888
- targetPort: 80
Apply the manifest and watch what happens: exec into the container, do nothing, and the session is kicked out on its own; the Pod details confirm the container was indeed killed at about second 43.
- [root@Master231 pod]# kubectl apply -f 33-pods-startupProbe-livenessProbe-readinessProbe-httGet.yaml
- pod/probe-startup-liveness-readiness-httpget created
- service/probe-startup-readiness-liveness-httpget created
- [root@Master231 pod]# kubectl exec -it probe-startup-liveness-readiness-httpget -- sh
- / # command terminated with exit code 137
- [root@Master231 pod]# kubectl describe pod probe-startup-liveness-readiness-httpget
- ......
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 56s default-scheduler Successfully assigned default/probe-startup-liveness-readiness-httpget to worker232
- Normal Pulled 13s (x2 over 56s) kubelet Container image "harbor.koten.com/koten-web/nginx:1.25.1-alpine" already present on machine
- Normal Created 13s (x2 over 56s) kubelet Created container web
- Normal Started 13s (x2 over 55s) kubelet Started container web
- Warning Unhealthy 13s (x3 over 15s) kubelet Startup probe failed: HTTP probe failed with statuscode: 404
- Normal Killing 13s kubelet Container web failed startup probe, will be restarted
This time, echo something into /start.html within the first 40 seconds and the startup probe no longer fails; instead the readiness probe fails, which shows that the readiness and liveness probes only begin working after the startup probe has confirmed the service is up.
- [root@Master231 pod]# kubectl apply -f 33-pods-startupProbe-livenessProbe-readinessProbe-httGet.yaml
- pod/probe-startup-liveness-readiness-httpget created
- service/probe-startup-readiness-liveness-httpget created
- [root@Master231 pod]# kubectl exec -it probe-startup-liveness-readiness-httpget -- sh
- / # echo 111 > /usr/share/nginx/html/start.html
- / #
- [root@Master231 pod]# kubectl describe pod probe-startup-liveness-readiness-httpget
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 46s default-scheduler Successfully assigned default/probe-startup-liveness-readiness-httpget to worker232
- Normal Pulled 46s kubelet Container image "harbor.koten.com/koten-web/nginx:1.25.1-alpine" already present on machine
- Normal Created 46s kubelet Created container web
- Normal Started 45s kubelet Started container web
- Warning Unhealthy 1s (x5 over 5s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
A static Pod is a Pod managed directly by the kubelet, not supervised through the apiserver on the master node. This effectively prevents accidental deletion with kubectl and is used to keep the k8s cluster and its core business functions stable.
- To create a static Pod on a particular node, write the static Pod's manifest on that node and put the file in the /etc/kubernetes/manifests directory.
- The kubelet watches that directory and immediately creates a Pod from any static Pod manifest it finds. Note that this only works for creating a static Pod on that specific node; it cannot be used to create static Pods across the whole Kubernetes cluster.
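As a minimal illustration (the file name static-web.yaml and the Pod name are hypothetical; the image is the one used throughout this article), dropping a manifest like the following onto a worker node is enough for its kubelet to start the Pod, and the mirror Pod that shows up in kubectl gets the node name appended, e.g. static-web-worker233:
- [root@worker233 ~]# cat > /etc/kubernetes/manifests/static-web.yaml <<EOF
- apiVersion: v1
- kind: Pod
- metadata:
-   name: static-web
- spec:
-   containers:
-   - name: web
-     image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- EOF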
Look at the default static Pod path defined for the kubelet: it contains four yaml files, and Pods are created automatically from them.
- [root@Master231 pod]# grep static /var/lib/kubelet/config.yaml
- staticPodPath: /etc/kubernetes/manifests
- [root@Master231 pod]# ls /etc/kubernetes/manifests/
- etcd.yaml kube-controller-manager.yaml
- kube-apiserver.yaml kube-scheduler.yaml
The yaml files contain many api-server parameters. If they look unfamiliar, that is fine; look them up in the official documentation at the links below.
- https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/
- https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/
- https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/
- https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/
Things to note:
- 1. Static Pod manifests can only create Pods; files for other resource types are ignored, and if a file contains both a Pod and other resources, only the Pod is created;
- 2. If you change a file in the directory and no Pod is created, move it out of the directory and back in;
- 3. Editing the api-server manifest is very easy to get wrong, which breaks kubectl entirely, so take a snapshot beforehand!
1. Edit the api-server static Pod's manifest
- [root@Master231 pod]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- annotations:
- creationTimestamp: null
- labels:
- component: kube-apiserver
- tier: control-plane
- name: kube-apiserver
- namespace: kube-system
- spec:
- containers:
- - command:
- - kube-apiserver
- - --service-node-port-range=3000-50000 # just add this one line here
- - --advertise-address=10.0.0.231
- ......
2. Move the manifest out of the directory and back in
- [root@Master231 pod]# mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
- [root@Master231 pod]# mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
3. Create a NodePort svc and verify that port 8080 can now be used
- [root@Master231 pod]# cat /manifests/svc/05-svc-NodeIP.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: myweb-nodeport
- spec:
- # Set the svc type to NodePort, which simply listens on an extra port on every worker node on top of the default ClusterIP.
- type: NodePort
- # Associate the Pods via a label selector
- selector:
- apps: web
- # Configure the port mapping
- ports:
- # Port of the Service itself
- - port: 8888
- # Port on which the backend Pods serve
- targetPort: 80
- # For a NodePort Service you may specify the port NodePort listens on; if omitted, one is assigned at random.
- # nodePort: 30080
- # The default port range is "30000-32767" (an upstream rule); to change it, change the api-server startup parameters.
- nodePort: 8080
Apply the rc manifest and the svc manifest above with the custom port
- [root@Master231 pod]# kubectl apply -f ../rc/03-rc-nginx.yaml
- configmap/nginx.conf unchanged
- replicationcontroller/koten-rc created
- [root@Master231 pod]# kubectl apply -f /manifests/svc/05-svc-NodeIP.yaml
- service/myweb-nodeport created
The Service is now successfully mapped to port 8080
- [root@Master231 pod]# kubectl get svc
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 2m29s
- myweb-nodeport NodePort 10.200.21.72 <none> 8888:8080/TCP 83s
- [root@Master231 pod]# curl 10.0.0.231:8080
- <html>
- <head><title>403 Forbidden</title></head>
- <body>
- <center><h1>403 Forbidden</h1></center>
- <hr><center>nginx/1.25.1</center>
- </body>
- </html>
Graceful termination means delaying the moment a Pod actually ends, leaving a window in which operations such as backups can still run.
Note that the graceful-termination delay must be longer than whatever the Pod does before shutting down, otherwise the Pod is terminated before it finishes.
- [root@Master231 pod]# cat 34-pods-lifecycle-postStart-preStop.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: koten-pod-containers-lifecycle-001
- spec:
- volumes:
- - name: data
- hostPath:
- path: /koten-linux
- # During graceful termination, delay sending the kill signal by this long; the time can be used to finish in-flight requests and similar work.
- # The unit is seconds; if unset, the default is 30s.
- terminationGracePeriodSeconds: 60
- # terminationGracePeriodSeconds: 5
- containers:
- - name: myweb
- image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- volumeMounts:
- - name: data
- mountPath: /data
- # Define the Pod's lifecycle.
- lifecycle:
- # What to run right after the Pod starts; until this step finishes, the container stays in "ContainerCreating" and the business cannot be reached.
- # In other words, no IP address is assigned until this step succeeds; for anything complex, prefer doing it in initContainers.
- postStart:
- exec:
- command:
- - "/bin/sh"
- - "-c"
- # - "echo \"postStart at $(date +%F_%T)\" >> /data/postStart.log"
- - "sleep 30"
- # What to run right before the Pod stops
- preStop:
- exec:
- command:
- - "/bin/sh"
- - "-c"
- - "echo \"preStop at $(date +%F_%T)\" >> /data/preStop.log"
- # - "sleep 30"
It is easy to infer that init containers take priority over graceful termination and the startup check; testing shows that postStart also has a higher priority than the startup check, so a starting Pod goes through the init containers, then postStart, then the startup check, in that order. Here is a manifest you can test with yourself if you are interested.
- [root@Master231 pod]# cat 35-pods-lifecycle-postStart-preStop.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: koten-pod-containers-lifecycle-002
- spec:
- volumes:
- - name: data
- hostPath:
- path: /koten-linux
- # During graceful termination, delay sending the kill signal by this long; the time can be used to finish in-flight requests and similar work.
- # The unit is seconds; if unset, the default is 30s.
- terminationGracePeriodSeconds: 60
- # terminationGracePeriodSeconds: 5
- initContainers:
- - name: init
- image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- command:
- - sleep
- - "10"
- containers:
- - name: myweb
- image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- volumeMounts:
- - name: data
- mountPath: /data
- # Define the Pod's lifecycle.
- lifecycle:
- # What to run right after the Pod starts; until this step finishes, the container stays in "ContainerCreating" and the business cannot be reached.
- # In other words, no IP address is assigned until this step succeeds; for anything complex, prefer doing it in initContainers.
- # Note that postStart ranks above startupProbe.
- postStart:
- exec:
- command:
- - "/bin/sh"
- - "-c"
- # - "echo \"postStart at $(date +%F_%T)\" >> /data/postStart.log"
- - "sleep 30"
- # What to run right before the Pod stops
- preStop:
- exec:
- command:
- - "/bin/sh"
- - "-c"
- - "echo \"preStop at $(date +%F_%T)\" >> /data/preStop.log"
- # - "sleep 30"
- livenessProbe:
- httpGet:
- port: 80
- path: /linux.html
- failureThreshold: 3
- initialDelaySeconds: 10
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 1
- readinessProbe:
- httpGet:
- port: 80
- path: /koten.html
- failureThreshold: 3
- initialDelaySeconds: 20
- periodSeconds: 1
- successThreshold: 1
- timeoutSeconds: 1
- startupProbe:
- httpGet:
- port: 80
- path: /start.html
- failureThreshold: 3
- initialDelaySeconds: 5
- periodSeconds: 1
- successThreshold: 1
- timeoutSeconds: 1
- 1. The Pod scheduling flow completes
- 2. The initContainers run
- 3. The container starts and postStart runs
- 4. startupProbe
- 5. The Pod enters the Running state
- 6. livenessProbe and readinessProbe
- 7. The Service associates the Pod
- 8. Client requests are accepted
- 1. The Pod is set to the Terminating state, removed from the Service's endpoints list, and stops receiving client requests
- 2. preStop runs (what the Pod does before stopping)
- 3. k8s sends the SIGTERM signal (the normal termination signal) to the containers in the Pod to stop the main processes; the signal lets the containers know they are about to be shut down
- 4. The optional terminationGracePeriodSeconds grace period is honoured: if one is set, k8s waits for it to expire, otherwise it waits at most 30 seconds
- 5. The time k8s waits is called the graceful termination grace period, 30 seconds by default; note that it runs in parallel with the preStop hook and the SIGTERM signal, i.e. k8s may not wait for the preStop hook to finish (if the main process still has not exited after at most 30 seconds, the Pod is terminated forcibly)
- 6. The SIGKILL signal is sent to the Pod and the Pod is deleted.
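For a one-off deletion you can also override the Pod's terminationGracePeriodSeconds from the command line; a sketch using the lifecycle Pod from above:
- [root@Master231 pod]# kubectl delete pod koten-pod-containers-lifecycle-001 --grace-period=5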
The requirement: users access the application through a NodePort svc, and the application must not stop during the upgrade, i.e. users can keep accessing it throughout.
A previous article covered code release strategies; there are many, such as blue-green, canary and rolling releases. With a blue-green release, for example, if our 3 Pods run v1 we can start another 3 Pods running v2 and simply repoint the svc from v1 to v2 (see the sketch below); a rolling release instead deletes the Pods one at a time, while the rc keeps the count steady at 3.
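A minimal sketch of that blue-green switch, assuming a second rc is already running Pods labeled apps: v2 and the Service is the koten-apps NodePort Service created below; one patch of the selector moves all traffic from v1 to v2, and patching it back is an equally fast rollback:
- [root@Master231 svc]# kubectl patch svc koten-apps -p '{"spec":{"selector":{"apps":"v2"}}}'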
Here I will demonstrate the rolling approach, using nginx 1.24.0 as v1 and nginx 1.25.1 as v2.
The drawback of this approach is that you can only change the image in the rc; if the labels were changed instead, the rc could no longer match and schedule the Pods, which would hurt the user experience.
First write the rc and svc for v1
- [root@Master231 rc]# cat 04-rc-apps.yaml
- apiVersion: v1
- kind: ReplicationController
- metadata:
- name: koten-rc-apps-v1
- spec:
- replicas: 3
- template:
- metadata:
- labels:
- apps: v1
- spec:
- containers:
- - name: v1
- image: harbor.koten.com/koten-web/nginx:1.24.0-alpine
- [root@Master231 svc]# cat 06-rc-apps-svc.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: koten-apps
- spec:
- type: NodePort
- selector:
- apps: v1
- ports:
- - port: 80
- targetPort: 80
- nodePort: 30080
Apply the v1 rc and svc, check the labels, and visit a node's IP in the browser
- [root@Master231 rc]# kubectl apply -f 04-rc-apps.yaml
- replicationcontroller/koten-rc-apps-v1 created
- [root@Master231 svc]# kubectl apply -f 06-rc-apps-svc.yaml
- service/koten-apps created
- [root@Master231 svc]# kubectl get pods --show-labels
- NAME READY STATUS RESTARTS AGE LABELS
- koten-rc-apps-v1-7crtc 1/1 Running 0 3m20s apps=v1
- koten-rc-apps-v1-pt4lm 1/1 Running 0 3m20s apps=v1
- koten-rc-apps-v1-sfsk8 1/1 Running 0 3m20s apps=v1
- [root@Master231 svc]# kubectl get svc koten-apps
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- koten-apps NodePort 10.200.167.49 <none> 80:30080/TCP 26s
Update the image in the rc and re-apply it; the existing Pods do not change on their own
- [root@Master231 rc]# cat 04-rc-apps.yaml
- apiVersion: v1
- kind: ReplicationController
- metadata:
- name: koten-rc-apps-v1
- spec:
- replicas: 3
- template:
- metadata:
- labels:
- apps: v1
- spec:
- containers:
- - name: v1
- image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
- [root@Master231 rc]# kubectl apply -f 04-rc-apps.yaml
- replicationcontroller/koten-rc-apps-v1 configured
Delete the Pods one by one; the replacements all come up running v2
- [root@Master231 rc]# kubectl get pods -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- koten-rc-apps-v1-7crtc 1/1 Running 0 7m16s 10.100.1.179 worker232 <none> <none>
- koten-rc-apps-v1-pt4lm 1/1 Running 0 7m16s 10.100.2.86 worker233 <none> <none>
- koten-rc-apps-v1-sfsk8 1/1 Running 0 7m16s 10.100.1.178 worker232 <none> <none>
- [root@Master231 rc]# kubectl delete pods koten-rc-apps-v1-7crtc
- pod "koten-rc-apps-v1-7crtc" deleted
- [root@Master231 rc]# kubectl get pods -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- koten-rc-apps-v1-h4l7x 1/1 Running 0 3s 10.100.1.180 worker232 <none> <none>
- koten-rc-apps-v1-pt4lm 1/1 Running 0 7m34s 10.100.2.86 worker233 <none> <none>
- koten-rc-apps-v1-sfsk8 1/1 Running 0 7m34s 10.100.1.178 worker232 <none> <none>
- [root@Master231 rc]# curl -I 10.100.1.180
- HTTP/1.1 200 OK
- Server: nginx/1.25.1
- Date: Tue, 20 Jun 2023 15:00:03 GMT
- Content-Type: text/html
- Content-Length: 615
- Last-Modified: Tue, 13 Jun 2023 17:34:28 GMT
- Connection: keep-alive
- ETag: "6488a8a4-267"
- Accept-Ranges: bytes
- [root@Master231 rc]# kubectl delete pod koten-rc-apps-v1-pt4lm koten-rc-apps-v1-sfsk8
- pod "koten-rc-apps-v1-pt4lm" deleted
- pod "koten-rc-apps-v1-sfsk8" deleted
I am koten, with 10 years of ops experience, continuously sharing hands-on ops content. Thanks for reading and following!