
Ops in Practice: Kubernetes (k8s) — Creating Pods


1. Pod Management

  • A Pod is the smallest deployable unit of computing that can be created and managed in Kubernetes. A Pod represents a single process running in the cluster, and every Pod has its own unique IP.
  • A Pod is like a pea pod: it holds one or more containers (usually Docker containers), and those containers share the IPC, Network, and UTS namespaces.
  • kubectl command reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
  1. Create a Pod application:
[root@server2 ~]# kubectl run demo --image=myapp:v1
pod/demo created
[root@server2 ~]# kubectl get pod	## view pod information
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          15s
[root@server2 ~]# kubectl get pod -o wide	## view detailed pod information
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          21s   10.244.2.2   server4   <none>           <none>
[root@server2 ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
 ## Any node inside the cluster can access the Pod, but it cannot be reached directly from outside the cluster.
[root@server2 ~]# kubectl get all
NAME       READY   STATUS    RESTARTS   AGE
pod/demo   1/1     Running   0          5m53s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d22h

You can see that the cluster assigned the Pod the IP 10.244.2.2, which falls in the Pod network CIDR defined when the cluster was initialized; the Pod is running on server4. Each node gets its own subnet, and by default the k8s master does not participate in scheduling, so its subnet is the 0 network.
In the kubectl get all output, the Pod created with run is not backed by any controller; this kind of Pod is called an autonomous (self-managed) Pod.

An autonomous Pod is gone for good once deleted, whereas when a controller-managed Pod is deleted, the controller automatically creates a replacement:

[root@server2 ~]# kubectl delete pod demo 
pod "demo" deleted
[root@server2 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d23h

Common operations:
The --restart=Never option means the Pod is not restarted after it exits; by default it would be restarted.

[root@server2 ~]# kubectl run -i -t busybox --image=busybox --restart=Never
If you don't see a command prompt, try pressing enter.
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 8e:95:86:15:0e:38 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.3/24 brd 10.244.2.255 scope global eth0
       valid_lft forever preferred_lft forever
       
[root@server2 ~]# kubectl get pod	
## after exiting, the pod is not restarted this time
NAME      READY   STATUS      RESTARTS   AGE
busybox   0/1     Completed   0          26s
[root@server2 ~]# kubectl delete pod busybox 
pod "busybox" deleted
[root@server2 ~]# kubectl get pod
No resources found in default namespace.

[root@server2 ~]# kubectl run -i -t busybox --image=busybox
## without the --restart option
If you don't see a command prompt, try pressing enter.
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 9a:d4:c9:6f:14:e9 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.4/24 brd 10.244.2.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
Session ended, resume using 'kubectl attach busybox -c busybox -i -t' command when the pod is running
[root@server2 ~]# kubectl get pod
## after exiting, the pod restarts automatically; the suggested command re-attaches to it
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   1          31s
[root@server2 ~]# kubectl attach busybox -c busybox -it
If you don't see a command prompt, try pressing enter.
/ # 
Session ended, resume using 'kubectl attach busybox -c busybox -i -t' command when the pod is running
[root@server2 ~]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   2          2m11s

When attaching to the pod, the first busybox is the name of the Pod and the second one (after -c) is the name of the container inside the Pod; when the Pod contains only one container, the container name can be omitted.

  2. Service
    A Service is an abstraction that defines a logical set of Pods backing an application and a policy for accessing them; a Service is often referred to as a microservice.

Create a Service
Create the pod again; it still runs on the same node as before. The scheduler follows a least-work principle: the node that ran the image before has already pulled it, so it is preferred and no new download is needed. With an Always image pull policy (the default for :latest tags) the kubelet always checks the registry regardless of whether the image exists locally; if the local image is up to date it is reused, and when the image has been updated the new version is pulled.

[root@server2 ~]# kubectl run demo --image=myapp:v1
pod/demo created
[root@server2 ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          11s   10.244.2.5   server4   <none>           <none>

Because Pod IPs change dynamically, external clients need a Service (microservice) in front of the containers.

Creating the Pod through a Deployment controller: a Pod created this way cannot be removed by deleting the Pod alone; after the deletion the controller immediately starts a new Pod, unless the controller itself is deleted.

[root@server2 ~]# kubectl delete pod demo
pod "demo" deleted
[root@server2 ~]# kubectl create deployment demo --image=myapp:v1
deployment.apps/demo created
[root@server2 ~]# kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/demo-5b4fc8bb88-dvdkt   1/1     Running   0          7s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d23h

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo   1/1     1            1           7s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-5b4fc8bb88   1         1         1       7s
[root@server2 ~]# kubectl delete pod demo-5b4fc8bb88-dvdkt
pod "demo-5b4fc8bb88-dvdkt" deleted
[root@server2 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
demo-5b4fc8bb88-flfq8   1/1     Running   0          18s

Dynamic scaling: there was one replica before; now scale it up to two:

[root@server2 ~]# kubectl scale deployment --replicas=2 demo 
deployment.apps/demo scaled
[root@server2 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
demo-5b4fc8bb88-flfq8   1/1     Running   0          3m1s
demo-5b4fc8bb88-g4vz8   1/1     Running   0          5s

Expose the deployment's port:

[root@server2 ~]# kubectl expose deployment demo --port 80 --target-port 80
service/demo exposed
[root@server2 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
demo         ClusterIP   10.111.253.254   <none>        80/TCP    10s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d23h
[root@server2 ~]# kubectl describe svc demo
Name:              demo
Namespace:         default
Labels:            app=demo
Annotations:       <none>
Selector:          app=demo
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.111.253.254
IPs:               10.111.253.254
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.3:80,10.244.2.6:80
Session Affinity:  None
Events:            <none>
[root@server2 ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE      NOMINATED NODE   READINESS GATES
demo-5b4fc8bb88-flfq8   1/1     Running   0          9m2s   10.244.2.6   server4   <none>           <none>
demo-5b4fc8bb88-g4vz8   1/1     Running   0          6m6s   10.244.1.3   server3   <none>           <none>

[root@server2 ~]# kubectl scale deployment --replicas=3 demo 
deployment.apps/demo scaled
[root@server2 ~]# kubectl describe svc demo
Name:              demo
Namespace:         default
Labels:            app=demo
Annotations:       <none>
Selector:          app=demo
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.111.253.254
IPs:               10.111.253.254
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.3:80,10.244.2.6:80,10.244.2.7:80
Session Affinity:  None
Events:            <none>
[root@server2 ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE      NOMINATED NODE   READINESS GATES
demo-5b4fc8bb88-fkz6v   1/1     Running   0          86s     10.244.2.7   server4   <none>           <none>
demo-5b4fc8bb88-flfq8   1/1     Running   0          11m     10.244.2.6   server4   <none>           <none>
demo-5b4fc8bb88-g4vz8   1/1     Running   0          8m39s   10.244.1.3   server3   <none>           <none>

Pod clients can now reach the backend Pods through the Service name.
ClusterIP: the default type; it automatically allocates a virtual IP that is reachable only from inside the cluster. The declarative equivalent of the kubectl expose command above is sketched below.
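
As a rough sketch (not part of the original walkthrough), the kubectl expose command above corresponds to a Service manifest along these lines; the selector matches the app=demo label that kubectl create deployment set on the Pods:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: ClusterIP          # default type: cluster-internal virtual IP only
  selector:
    app: demo              # forward traffic to Pods carrying this label
  ports:
  - protocol: TCP
    port: 80               # port the Service listens on
    targetPort: 80         # port the Pods listen on
```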

When accessed now, requests are load-balanced across the three backends:

[root@server2 ~]# curl 10.111.253.254 
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-fkz6v
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-flfq8
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-fkz6v
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-flfq8
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-g4vz8
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-fkz6v

Scale down:

[root@server2 ~]# kubectl scale deployment --replicas=2 demo 
deployment.apps/demo scaled
[root@server2 ~]# kubectl describe svc demo
Name:              demo
Namespace:         default
Labels:            app=demo
Annotations:       <none>
Selector:          app=demo
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.111.253.254
IPs:               10.111.253.254
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.3:80,10.244.2.6:80
Session Affinity:  None
Events:            <none>
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-g4vz8
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-g4vz8
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-flfq8
[root@server2 ~]# curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-flfq8

The ClusterIP only allows access from inside the cluster; test it with the busyboxplus image.
From inside the cluster it is reachable, and requests are load-balanced:

[root@server2 ~]# kubectl delete pod busybox 
pod "busybox" deleted
[root@server2 ~]# kubectl run -i -t busybox --image=busyboxplus --restart=Never
If you don't see a command prompt, try pressing enter.
/ # curl 10.111.253.254
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-g4vz8
/ # curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-g4vz8
/ # curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-flfq8
/ # curl 10.111.253.254/hostname.html
demo-5b4fc8bb88-g4vz8
/ # 
  • Use the NodePort type to expose a port so that clients outside the cluster can access the Pods:
[root@server2 ~]# kubectl edit svc demo  
service/demo edited

     28   sessionAffinity: None
     29   type: NodePort	## change the service type to NodePort

[root@server2 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demo         NodePort    10.111.253.254   <none>        80:30448/TCP   31m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        3d

You can also specify the type when creating the service: kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort.

  • NodePort: on top of ClusterIP, binds a port for the Service on every node, so the service can be reached via NodeIP:NodePort.

An external host can now reach the service on the node port:

[root@westos ~]# curl 172.25.25.2:30448
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@westos ~]# curl 172.25.25.2:30448/hostname.html
demo-5b4fc8bb88-g4vz8
## accessing any node in the cluster works, and requests are load-balanced
[root@westos ~]# curl 172.25.25.3:30448/hostname.html
demo-5b4fc8bb88-flfq8
[root@westos ~]# curl 172.25.25.3:30448/hostname.html
demo-5b4fc8bb88-g4vz8

Scaling up and down must be done through a controller; an autonomous Pod cannot be scaled.

Updating the Pod image

After the update, the underlying ReplicaSet (rs) has changed:

[root@server2 ~]# kubectl set image deployment demo myapp=myapp:v2
deployment.apps/demo image updated
[root@server2 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demo         NodePort    10.111.253.254   <none>        80:30448/TCP   37m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        3d
[root@server2 ~]# curl 10.111.253.254
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@server2 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
demo-7bd47bddfc-cr2bq   1/1     Running   0          30s
demo-7bd47bddfc-sslpv   1/1     Running   0          27s

Rollback:
View the revision history:

[root@server2 ~]# kubectl rollout history deployment demo 
deployment.apps/demo 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>


Roll back to a previous revision:

[root@server2 ~]# kubectl rollout undo deployment demo --to-revision 1
deployment.apps/demo rolled back
[root@server2 ~]# curl 10.111.253.254
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server2 ~]# kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/demo-5b4fc8bb88-k9rc4   1/1     Running   0          14s
pod/demo-5b4fc8bb88-tdtng   1/1     Running   0          16s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/demo         NodePort    10.111.253.254   <none>        80:30448/TCP   40m
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        3d

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo   2/2     2            2           49m

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-5b4fc8bb88   2         2         2       49m
replicaset.apps/demo-7bd47bddfc   0         0         0       3m

2. Resource Manifests

Clean up:

[root@server2 ~]# kubectl delete svc demo 
service "demo" deleted
[root@server2 ~]# kubectl delete deployments.apps demo 
deployment.apps "demo" deleted
[root@server2 ~]# kubectl get pod
No resources found in default namespace.

Create a manifest file for an autonomous Pod:

[root@server2 ~]# mkdir k8s
[root@server2 ~]# cd k8s/
[root@server2 k8s]# kubectl api-versions  	## list supported API versions
[root@server2 k8s]# kubectl explain pod		## view documentation for the pod resource
[root@server2 k8s]# kubectl explain pod.spec
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1	
## which API group and version the resource belongs to; one group can have several versions
kind: Pod
## the kind of resource to create; the main kinds are Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob
metadata:		## metadata
  name: demo	## object name
spec:			## desired state of the resource
  containers:	## containers
  - name: demo
    image: myapp:v1

Other commonly used fields include namespace, which says which namespace the object belongs to, and labels, which attach key/value labels to the resource, as shown in the sketch below.
While writing a manifest, all of these fields can be looked up with the help command: kubectl explain pod
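
A small sketch of where those two fields go in a manifest (the label value is just for illustration; the namespace shown is the default one):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: default       # which namespace the object belongs to
  labels:                  # labels are key/value data attached to the object
    app: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
```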
Apply the manifest:

[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          2m17s
[root@server2 k8s]# kubectl get all
NAME       READY   STATUS    RESTARTS   AGE
pod/demo   1/1     Running   0          2m19s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d2h
[root@server2 k8s]# kubectl delete -f pod.yaml 	## delete the pod
pod "demo" deleted

Note: if a single manifest defines two containers in the same Pod and both use the same port, only one of them can start,
because containers in the same Pod share the same network interface.

With the following manifest, both containers claim the same port, so one of them fails to start:

[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
  - name: demo2
    image: myapp:v2
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod
NAME   READY   STATUS   RESTARTS   AGE
demo   1/2     Error    1          9s
[root@server2 k8s]# kubectl describe pod demo 

Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  27s               default-scheduler  Successfully assigned default/demo to server3
  Normal   Pulled     26s               kubelet            Container image "myapp:v1" already present on machine
  Normal   Created    26s               kubelet            Created container demo
  Normal   Started    25s               kubelet            Started container demo
  Normal   Pulled     7s (x3 over 25s)  kubelet            Container image "myapp:v2" already present on machine
  Normal   Created    6s (x3 over 25s)  kubelet            Created container demo2
  Normal   Started    6s (x3 over 25s)  kubelet            Started container demo2
  Warning  BackOff    3s (x2 over 19s)  kubelet            Back-off restarting failed container

Modify the manifest again to verify that containers in the same Pod share the same network interface:

[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
  - name: busybox
    image: busyboxplus
    stdin: true
    tty: true
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
demo   2/2     Running   0          12s
[root@server2 k8s]# kubectl attach demo -c busybox -it
	## attach to the busybox container
If you don't see a command prompt, try pressing enter.
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 92:89:66:28:58:09 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.11/24 brd 10.244.2.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # curl localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
## although we are attached to busybox, the request is answered by the myapp container

Manifest file parameters:


  1. Image pull policy: imagePullPolicy: IfNotPresent means the image is pulled only when it is not already present on the node; the default is Always for images tagged :latest (otherwise IfNotPresent).
[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    imagePullPolicy: IfNotPresent
  #- name: busybox
  #  image: busyboxplus
  #  stdin: true
  #  tty: true
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl describe pod demo 

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  26s   default-scheduler  Successfully assigned default/demo to server4
  Normal  Pulled     25s   kubelet            Container image "myapp:v1" already present on machine
  Normal  Created    25s   kubelet            Created container demo
  Normal  Started    25s   kubelet            Started container demo
  2. Specify the host network:
[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    imagePullPolicy: IfNotPresent
  hostNetwork: true
  #- name: busybox
  #  image: busyboxplus
  #  stdin: true
  #  tty: true
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod -o wide	## the pod uses the same network as the host
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          10s   172.25.25.4   server4   <none>           <none>

[root@westos ~]# curl 172.25.25.4
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
  3. Limit CPU and memory:
[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    imagePullPolicy: IfNotPresent
    resources:
      limits:			## upper limit
        cpu: 1
        memory: 100Mi
      requests:			## lower limit (reservation)
        cpu: 0.5
        memory: 50Mi 
  #hostNetwork: true
  #- name: busybox
  #  image: busyboxplus
  #  stdin: true
  #  tty: true
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl describe pod demo 

    Restart Count:  0
    Limits:
      cpu:     1
      memory:  100Mi
    Requests:
      cpu:        500m
      memory:     50Mi
    Environment:  <none>
  4. Labels
    So far the Pod has always been scheduled to server4. With nodeName it can be pinned to server3 by hostname, which takes the highest priority, but this breaks when server3 does not exist:
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 1
        memory: 100Mi
      requests:
        cpu: 0.5
        memory: 50Mi 
  nodeName: server3
  #hostNetwork: true
  #- name: busybox
  #  image: busyboxplus
  #  stdin: true
  #  tty: true
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          10s   10.244.1.9   server3   <none>           <none>

When the specified hostname does not exist, scheduling fails.
Now use labels instead:

[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# kubectl get node --show-labels 
NAME      STATUS   ROLES                  AGE    VERSION   LABELS
server2   Ready    control-plane,master   3d3h   v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
server3   Ready    <none>                 3d3h   v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4   Ready    <none>                 3d3h   v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux
[root@server2 k8s]# kubectl label nodes server3 app=demo
node/server3 labeled
[root@server2 k8s]# kubectl label nodes server4 app=demo
node/server4 labeled
[root@server2 k8s]# kubectl get node --show-labels 
NAME      STATUS   ROLES                  AGE    VERSION   LABELS
server2   Ready    control-plane,master   3d3h   v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
server3   Ready    <none>                 3d3h   v1.21.1   app=demo,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4   Ready    <none>                 3d3h   v1.21.1   app=demo,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux

Label both server3 and server4; when one of them goes away, the Pod is automatically scheduled to the other.

[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 1
        memory: 100Mi
      requests:
        cpu: 0.5
        memory: 50Mi 
  nodeSelector:
    app: demo
  #nodeName: server3
  #hostNetwork: true
  #- name: busybox
  #  image: busyboxplus
  #  stdin: true
  #  tty: true
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          34s   10.244.1.10   server3   <none>           <none>
[root@server2 k8s]# kubectl delete -f pod.yaml 
pod "demo" deleted
[root@server2 k8s]# kubectl label nodes server3 app=test 
error: 'app' already has a value (demo), and --overwrite is false
[root@server2 k8s]# kubectl label nodes server3 app=test  --overwrite
node/server3 labeled
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          9s    10.244.2.14   server4   <none>           <none>

Besides writing manifests by hand, the official documentation provides many examples to learn from.

3. Pod Lifecycle


  • A Pod can contain multiple containers in which the application runs, and it can also have one or more Init containers that start before the application containers.
  • Init containers are very much like regular containers, except for two points:
    They always run to completion.
    Init containers do not support readiness probes, because they must finish before the Pod can be ready; each Init container must succeed before the next one may run.
  • If a Pod's Init container fails, Kubernetes keeps restarting the Pod until the Init container succeeds; however, if the Pod's restartPolicy is Never, it is not restarted.
  • What can Init containers do?
    Init containers can include utilities or custom setup code that is not present in the application image.
    They can run these tools safely, so the tools do not reduce the security of the application image.
    The builder and the deployer of an application image can work independently, with no need to build a single combined image.
    Init containers can run with a different filesystem view than the application containers in the Pod; for example, an Init container can be granted access to Secrets that the application containers cannot access.
    Because Init containers must finish before the application containers start, they provide a mechanism to block or delay the start of the application containers until a set of preconditions is met. Once the preconditions are satisfied, all application containers in the Pod start in parallel.

Clean up the earlier Pod first:

[root@server2 k8s]# kubectl get pod 
NAME      READY   STATUS      RESTARTS   AGE
busybox   0/1     Completed   0          154m
[root@server2 k8s]# kubectl delete pod busybox 
pod "busybox" deleted
[root@server2 k8s]# kubectl get pod 
No resources found in default namespace.

Create the svc service:

[root@server2 k8s]# vim svc.yaml
[root@server2 k8s]# cat svc.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

[root@server2 k8s]# kubectl apply -f svc.yaml 
service/myservice created
[root@server2 k8s]# kubectl get svc	## the service was created successfully
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   3d4h
myservice    ClusterIP   10.110.126.152   <none>        80/TCP    9s
[root@server2 k8s]# kubectl describe svc myservice 
Name:              myservice
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.126.152
IPs:               10.110.126.152
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
[root@server2 k8s]# kubectl delete -f svc.yaml 
service "myservice" deleted
[root@server2 k8s]# vim pod.yaml 
[root@server2 k8s]# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 1
        memory: 100Mi
      requests:
        cpu: 0.5
        memory: 50Mi 
  initContainers:
  - name: busybox
    image: busyboxplus
    command: ['sh','-c', "until nslookup myservice.default.svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
[root@server2 k8s]# kubectl apply -f pod.yaml 
pod/demo created
[root@server2 k8s]# kubectl get pod	## stuck in the Init state
NAME   READY   STATUS     RESTARTS   AGE
demo   0/1     Init:0/1   0          7s
[root@server2 k8s]# kubectl logs demo		## the log shows it is waiting for initialization to finish
Error from server (BadRequest): container "demo" in pod "demo" is waiting to start: PodInitializing
[root@server2 k8s]# kubectl logs demo -c busybox

waiting for myservice
nslookup: can't resolve 'myservice.default.svc.cluster.local'

[root@server2 k8s]# kubectl apply -f svc.yaml 
service/myservice created
[root@server2 k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   3d4h
myservice    ClusterIP   10.98.229.110   <none>        80/TCP    10s
[root@server2 k8s]# kubectl logs demo
[root@server2 k8s]# kubectl logs demo -c busybox 

Name:      myservice.default.svc.cluster.local
Address 1: 10.98.229.110 myservice.default.svc.cluster.local
[root@server2 k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   3d4h
myservice    ClusterIP   10.98.229.110   <none>        80/TCP    2m11s
[root@server2 k8s]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          5m4s

Once initialization has finished, the Pod keeps running even if what the init container waited for is later removed, because the init check only happens before the application containers start.

  • Probes are periodic diagnostics performed by the kubelet on a container:
    ExecAction: run a command inside the container; the diagnostic succeeds if the command exits with status 0.
    TCPSocketAction: perform a TCP check against the container's IP on a given port; the diagnostic succeeds if the port is open.
    HTTPGetAction: perform an HTTP GET against the container's IP on a given port and path; the diagnostic succeeds if the response status code is at least 200 and below 400.
  • Each probe returns one of three results:
    Success: the container passed the diagnostic.
    Failure: the container failed the diagnostic.
    Unknown: the diagnostic itself failed, so no action is taken.
  • The kubelet can optionally run and react to three kinds of probes on a container:
    livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container and the container is subject to its restart policy. If no liveness probe is provided, the default state is Success.
    readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP from the endpoints of all Services matching the Pod. The readiness state before the initial delay defaults to Failure. If no readiness probe is provided, the default state is Success.
    startupProbe: indicates whether the application inside the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container and the container follows its restart policy. If no startup probe is provided, the default state is Success. (A small sketch follows this list.)
  • Restart policy:
    The PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never; the default is Always.
  • Pod lifetime:
    In general Pods do not disappear until someone destroys them, either a human or a controller.
    It is recommended to create Pods through an appropriate controller rather than directly, because a bare Pod cannot recover from a machine failure by itself, while a controller can.
  • Three controllers are available for this:
    Use a Job for Pods that are expected to terminate, for example batch computations. Jobs are only appropriate for Pods with restartPolicy OnFailure or Never.
    Use a ReplicationController, ReplicaSet, or Deployment for Pods that are not expected to terminate, for example web servers. A ReplicationController is only appropriate for Pods whose restartPolicy is Always.
    Use a DaemonSet for Pods that provide a machine-level system service, running one Pod per machine.
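
Since the examples below only use tcpSocket and httpGet probes, here is a minimal sketch (with made-up names and values, not from the original article) of how an exec-based startupProbe and an explicit restartPolicy would be written in a Pod spec:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: startup-demo        # hypothetical name
spec:
  restartPolicy: OnFailure  # Always (default) / OnFailure / Never
  containers:
  - name: web
    image: myapp:v1
    startupProbe:           # other probes stay disabled until this one succeeds
      exec:                 # ExecAction: success means the command exited with 0
        command: ["cat", "/tmp/ready"]
      failureThreshold: 30  # give the app up to 30 * 2s to start
      periodSeconds: 2
```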
  1. livenessProbe (liveness probe)
[root@server2 k8s]# vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        livenessProbe:		## liveness probe
          tcpSocket:
            port: 80		## port to check
          initialDelaySeconds: 2	## start checking 2s after the container starts
          periodSeconds: 3			## check every 3s
          timeoutSeconds: 1			## probe timeout
[root@server2 k8s]# kubectl apply -f deployment.yaml 
deployment.apps/deployment-example created
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-857bf7b8d7-gstst   1/1     Running   0          16s

[root@server2 k8s]# kubectl describe pod deployment-example-857bf7b8d7-5smfw 

    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :80 delay=2s timeout=1s period=3s #success=1 #failure=3
    Environment:    <none>

Change the probe to a wrong port and test again:

[root@server2 k8s]# kubectl delete -f deployment.yaml 
deployment.apps "deployment-example" deleted
[root@server2 k8s]# vim deployment.yaml 
[root@server2 k8s]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 2
          periodSeconds: 3
          timeoutSeconds: 1
[root@server2 k8s]# kubectl apply -f deployment.yaml 
deployment.apps/deployment-example created
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-5d4c89ff67-glrrk   1/1     Running   3          50s
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS             RESTARTS   AGE
deployment-example-5d4c89ff67-glrrk   0/1     CrashLoopBackOff   4          76s

With the wrong port the probe keeps failing; since no listener is detected, the container is restarted over and over.

  2. readinessProbe (readiness probe)
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-857bf7b8d7-9d8wl   1/1     Running   0          7s
[root@server2 k8s]# vim deployment.yaml 

        readinessProbe:			## add a readiness probe
          httpGet:
            path: /test.html
            port: 80
          initialDelaySeconds: 1
          periodSeconds: 3
          timeoutSeconds: 1
[root@server2 k8s]# kubectl apply -f deployment.yaml 
deployment.apps/deployment-example configured
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-7c8698d5bb-s97dq   0/1     Running   0          60s
deployment-example-857bf7b8d7-9d8wl   1/1     Running   0          115s
[root@server2 k8s]# kubectl describe pod deployment-example-7c8698d5bb-s97dq 
	## 404 error: the probed page cannot be reached
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  77s                 default-scheduler  Successfully assigned default/deployment-example-7c8698d5bb-s97dq to server4
  Normal   Pulled     77s                 kubelet            Container image "myapp:v1" already present on machine
  Normal   Created    77s                 kubelet            Created container nginx
  Normal   Started    77s                 kubelet            Started container nginx
  Warning  Unhealthy  12s (x22 over 75s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
[root@server2 k8s]# kubectl exec -it deployment-example-7c8698d5bb-s97dq -- sh
## enter the container and create the page to be served
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    index.html
/usr/share/nginx/html # echo www.westos.org > test.html
/usr/share/nginx/html # cat test.html
www.westos.org
/usr/share/nginx/html # [root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-7c8698d5bb-s97dq   1/1     Running   0          5m31s
[root@server2 k8s]# kubectl describe pod deployment-example-7c8698d5bb-s97dq 

    Liveness:       tcp-socket :80 delay=2s timeout=1s period=3s #success=1 #failure=3
    Readiness:      http-get http://:80/test.html delay=1s timeout=1s period=3s #success=1 #failure=3

4. Controllers

  • Pod classification:
    Autonomous Pods: once the Pod exits, it is not recreated.
    Controller-managed Pods: throughout the controller's lifetime, the desired number of Pod replicas is always maintained.
  • Controller types:
    ReplicationController and ReplicaSet
    Deployment
    DaemonSet
    StatefulSet
    Job
    CronJob
    HPA (Horizontal Pod Autoscaler)
  • ReplicationController and ReplicaSet
    ReplicaSet is the next generation of ReplicationController and is the officially recommended choice.
    The only difference between ReplicaSet and ReplicationController is selector support: ReplicaSet supports the newer set-based selector requirements.
    A ReplicaSet ensures that a specified number of Pod replicas is running at any given time.
    Although ReplicaSets can be used on their own, today they are mainly used by Deployments as the mechanism for orchestrating Pod creation, deletion and updates.
  • Deployment
    A Deployment provides a declarative way to define Pods and ReplicaSets.
    Typical use cases:
    Creating Pods and ReplicaSets, rolling updates and rollbacks, scaling out and in, pausing and resuming.
  • DaemonSet
    A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes join the cluster, Pods are added for them; as nodes are removed from the cluster, those Pods are reclaimed. Deleting a DaemonSet deletes all the Pods it created.
    Typical uses of a DaemonSet:
    Running a cluster storage daemon on every node, e.g. glusterd or ceph.
    Running a log collection daemon on every node, e.g. fluentd or logstash.
    Running a monitoring daemon on every node, e.g. Prometheus Node Exporter or zabbix agent.
    A simple usage is to start one DaemonSet on all nodes for each type of daemon.
    A slightly more complex usage is to run several DaemonSets for a single daemon type, with different flags and with different memory and CPU requirements for different hardware types.
[root@server2 k8s]# kubectl -n kube-system get deployments.apps 
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           3d6h
[root@server2 k8s]# kubectl -n kube-system get daemonsets.apps 
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel-ds   3         3         3       3            3           <none>                   3d6h
kube-proxy        3         3         3       3            3           kubernetes.io/os=linux   3d6h
  • StatefulSet
    StatefulSet is the workload API object used to manage stateful applications. Applications whose instances are not interchangeable, or whose instances depend on external data, are called "stateful applications".
    A StatefulSet manages the deployment and scaling of a set of Pods and provides guarantees about the ordering and uniqueness of these Pods.
    StatefulSets are valuable for applications that need one or more of the following:
    Stable, unique network identifiers.
    Stable, persistent storage.
    Ordered, graceful deployment and scaling.
    Ordered, automated rolling updates.
  • Job
    Runs a batch task exactly once, ensuring that one or more Pods of the task finish successfully.
  • CronJob
    A CronJob creates Jobs on a time-based schedule.
    A CronJob object is like one line of a crontab (cron table) file: it is written in Cron format and periodically runs a Job at the scheduled time.
  • HPA (Horizontal Pod Autoscaler): requires metrics; by default only CPU and memory are available, other metrics need a third-party provider. A sketch follows this list.
    Automatically adjusts the number of Pods in a service according to resource utilization, i.e. horizontal Pod autoscaling.
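
HPA has no worked example later in this article; as a rough sketch (assuming a metrics provider such as metrics-server is installed, and reusing the demo Deployment name from earlier), a CPU-based autoscaler could be declared like this:

```yaml
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa            # hypothetical name
spec:
  scaleTargetRef:           # the controller whose replica count is adjusted
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds 80%
```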
  1. ReplicaSet
    With 3 replicas specified, once the controller takes effect there are 3 Pods:
[root@server2 k8s]# vim rs.yaml
[root@server2 k8s]# cat rs.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
    name: replicaset-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
[root@server2 k8s]# kubectl apply -f rs.yaml 
replicaset.apps/replicaset-example created
[root@server2 k8s]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
replicaset-example-fgsz7   1/1     Running   0          16s
replicaset-example-p94zg   1/1     Running   0          16s
replicaset-example-vt8t7   1/1     Running   0          16s

Because an rs ensures that the specified number of Pod replicas is running at all times, if you change the label of one Pod it detects that fewer Pods than desired match and immediately starts a new one; if it finds more than desired, it reclaims the extras right away. The rs tracks its replicas by label.

[root@server2 k8s]# kubectl get pod --show-labels 
NAME                       READY   STATUS    RESTARTS   AGE   LABELS
replicaset-example-fgsz7   1/1     Running   0          32s   app=nginx
replicaset-example-p94zg   1/1     Running   0          32s   app=nginx
replicaset-example-vt8t7   1/1     Running   0          32s   app=nginx
[root@server2 k8s]# kubectl label pod replicaset-example-fgsz7 app=myapp --overwrite
pod/replicaset-example-fgsz7 labeled
[root@server2 k8s]# kubectl get pod --show-labels 
## the relabeled pod is no longer managed by the rs
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-fgsz7   1/1     Running   0          2m19s   app=myapp
replicaset-example-p94zg   1/1     Running   0          2m19s   app=nginx
replicaset-example-rwbxk   1/1     Running   0          10s     app=nginx
replicaset-example-vt8t7   1/1     Running   0          2m19s   app=nginx
[root@server2 k8s]# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
replicaset-example   3         3         3       2m8s

[root@server2 k8s]# kubectl delete -f rs.yaml 
replicaset.apps "replicaset-example" deleted
[root@server2 k8s]# kubectl get pod --show-labels 
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-fgsz7   1/1     Running   0          6m23s   app=myapp
[root@server2 k8s]# kubectl delete pod replicaset-example-fgsz7	
## delete it as if it were an autonomous pod
pod "replicaset-example-fgsz7" deleted
[root@server2 k8s]# kubectl get pod
No resources found in default namespace.
[root@server2 k8s]# kubectl apply -f rs.yaml 
replicaset.apps/replicaset-example created
[root@server2 k8s]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
replicaset-example-bt6gs   1/1     Running   0          7s
replicaset-example-c55p7   1/1     Running   0          7s
replicaset-example-vh7sl   1/1     Running   0          7s
[root@server2 k8s]# kubectl delete pod replicaset-example-bt6gs
pod "replicaset-example-bt6gs" deleted
[root@server2 k8s]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
replicaset-example-c55p7   1/1     Running   0          44s
replicaset-example-jkslv   1/1     Running   0          18s
replicaset-example-vh7sl   1/1     Running   0          44s
  2. Rolling updates with Deployment and ReplicaSet

When the Deployment controller takes effect, an rs is created along with it:

[root@server2 k8s]# vim deployment.yaml 
[root@server2 k8s]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 2
          periodSeconds: 3
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /hostname.html
            port: 80
          initialDelaySeconds: 1
          periodSeconds: 3
          timeoutSeconds: 1
[root@server2 k8s]# kubectl apply -f deployment.yaml 
deployment.apps/deployment-example created
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-5b768f7647-tjkkk   1/1     Running   0          4s
[root@server2 k8s]# kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/deployment-example-5b768f7647-tjkkk   1/1     Running   0          12s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d7h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-example   1/1     1            1           12s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-example-5b768f7647   1         1         1       12s
[root@server2 k8s]# kubectl get rs
NAME                            DESIRED   CURRENT   READY   AGE
deployment-example-5b768f7647   1         1         1       5m32s

Modify the Deployment controller and observe the effect on the rs:

[root@server2 k8s]# vim deployment.yaml 

  replicas: 3
  selector:
[root@server2 k8s]# kubectl apply -f deployment.yaml 
deployment.apps/deployment-example configured
[root@server2 k8s]# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
deployment-example-5b768f7647-4l244   1/1     Running   0          16s
deployment-example-5b768f7647-7456n   1/1     Running   0          16s
deployment-example-5b768f7647-tjkkk   1/1     Running   0          4m26s
[root@server2 k8s]# kubectl get rs
NAME                            DESIRED   CURRENT   READY   AGE
deployment-example-5b768f7647   3         3         3       4m56s

Update the image; after the update completes, the previous rs is kept around, which makes rollback easy.

[root@server2 k8s]# vim deployment.yaml 
      - name: nginx
        image: myapp:v2
[root@server2 k8s]# kubectl apply -f deployment.yaml 
[root@server2 k8s]# kubectl get rs
NAME                            DESIRED   CURRENT   READY   AGE
deployment-example-5b768f7647   0         0         0       7m25s
deployment-example-65d9654644   3         3         3       18s
[root@server2 k8s]# curl 10.244.2.27
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
  3. DaemonSet controller:
    places one Pod on every node.
[root@server2 k8s]# vim daemonset.yaml
[root@server2 k8s]# cat daemonset.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    k8s-app: zabbix-agent
spec:
  selector:
    matchLabels:
      name: zabbix-agent
  template:
    metadata:
      labels:
        name: zabbix-agent
    spec:
      containers:
      - name: zabbix-agent
        image: myapp:v1
[root@server2 k8s]# kubectl apply -f daemonset.yaml 
daemonset.apps/daemonset-example created
[root@server2 k8s]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
daemonset-example-8zwbg   1/1     Running   0          20s
daemonset-example-vtftj   1/1     Running   0          20s
[root@server2 k8s]# kubectl get node
NAME      STATUS   ROLES                  AGE    VERSION
server2   Ready    control-plane,master   3d7h   v1.21.1
server3   Ready    <none>                 3d7h   v1.21.1
server4   Ready    <none>                 3d7h   v1.21.1
[root@server2 k8s]# kubectl describe nodes server2 | grep Tain
## server2 does not take part in scheduling by default
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@server2 k8s]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
daemonset-example-8zwbg   1/1     Running   0          105s   10.244.2.29   server4   <none>           <none>
daemonset-example-vtftj   1/1     Running   0          105s   10.244.1.24   server3   <none>           <none>
  4. Job controller:

Use a Job to compute the value of pi once, to 1000 decimal places:

[root@server2 k8s]# vim job.yaml
[root@server2 k8s]# cat job.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(1000)"]
      restartPolicy: Never
  backoffLimit: 4
[root@server2 k8s]# kubectl apply -f job.yaml 
job.batch/pi created
[root@server2 k8s]# kubectl get pod
NAME       READY   STATUS              RESTARTS   AGE
pi-nt9vb   0/1     ContainerCreating   0          10s

[root@server2 k8s]# kubectl describe pod pi

  Normal  Pulled     9s    kubelet            Successfully pulled image "perl" in 1m2.625821525s
  Normal  Created    9s    kubelet            Created container pi
  Normal  Started    9s    kubelet            Started container pi
[root@server2 k8s]# kubectl get pod
NAME       READY   STATUS      RESTARTS   AGE
pi-nt9vb   0/1     Completed   0          98s
[root@server2 k8s]# kubectl logs pi-nt9vb		## view the computed value of pi
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420199
  5. CronJob controller:
    runs a job periodically.
[root@server2 k8s]# vim cronjob.yaml
[root@server2 k8s]# cat cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "* * * * *"		## run once every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from k8s cluster
          restartPolicy: OnFailure
[root@server2 k8s]# kubectl apply -f cronjob.yaml 
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
cronjob.batch/cronjob-example created
[root@server2 k8s]# kubectl get cronjobs.batch 
NAME              SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob-example   * * * * *   False     0        <none>          55s
[root@server2 k8s]# kubectl get pod
NAME                             READY   STATUS              RESTARTS   AGE
cronjob-example-27051133-zxdxs   0/1     ContainerCreating   0          1s


[root@server2 k8s]# kubectl get cronjobs.batch 
NAME              SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob-example   * * * * *   False     0        <none>          21s
[root@server2 k8s]# kubectl get pod
NAME                             READY   STATUS      RESTARTS   AGE
cronjob-example-27048096-8h5zs   0/1     Completed   0          10s
[root@server2 k8s]# kubectl logs cronjob-example-27048096-8h5zs 
Sat Jun  5 09:36:00 UTC 2021
Hello from k8s cluster

[root@server2 k8s]# kubectl get pod
NAME                             READY   STATUS      RESTARTS   AGE
cronjob-example-27048096-8h5zs   0/1     Completed   0          82s
cronjob-example-27048097-258jf   0/1     Completed   0          22s
[root@server2 k8s]# kubectl logs cronjob-example-27048096-8h5zs 
Sat Jun  5 09:36:00 UTC 2021
Hello from k8s cluster

Only after another minute has passed does the next run appear.
