Creating a highly available cluster with kubeadm
Why must there be at least 3 masters?
Answer: data consistency. With concurrent writes, every master must end up holding the same data, which requires a majority of members to agree on each write.
Raft protocol: the consensus protocol used by etcd.
Paxos: the consensus protocol used by MySQL Group Replication.
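The "at least 3 masters" rule comes down to quorum arithmetic: a Raft or Paxos cluster commits a write only after a majority agrees, so a cluster of n members survives the loss of n - (n//2 + 1) of them. A small Python sketch, purely illustrative:

```python
def quorum(n: int) -> int:
    """Majority needed for a cluster of n members to commit a write."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """How many members can fail while the cluster can still reach quorum."""
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(n, quorum(n), fault_tolerance(n))
# 1 or 2 members tolerate 0 failures; 3 members tolerate 1 failure,
# which is why a highly available control plane starts at 3 nodes.
```

Note that 4 members tolerate no more failures than 3 (both survive one loss), which is why odd cluster sizes are preferred.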
High-availability topology options
External etcd servers: the "external etcd" topology
Pod scheduling policies and methods:
1. Deployment: fully automatic scheduling, based on node capacity (CPU, memory, bandwidth, pods already running, etc.)
2. nodeSelector: targeted scheduling; pods are placed only on nodes whose labels match
3. nodeAffinity: node affinity; prefer (or require) nodes whose labels match the affinity rules
4. podAffinity: pod affinity; prefer placing related pods together on the same node (podAntiAffinity spreads them apart)
5. Taints and tolerations
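As a hedged sketch of methods 2 and 5 above (the `disktype: ssd` label and the control-plane taint key are illustrative assumptions, not taken from these notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  nodeSelector:            # method 2: only schedule onto nodes labeled disktype=ssd
    disktype: ssd
  tolerations:             # method 5: allow scheduling onto nodes carrying this taint
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: web
    image: nginx
```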
Why are no business app pods started on the master?
Answer: the master (control-plane) node carries a taint, and the scheduler will not place pods lacking a matching toleration onto tainted nodes, which keeps business pods off the master.
kubectl create deployment k8s-nginx --image=nginx -r 10
kubectl scale deployment/k8s-nginx --replicas 16
In step 4, how does the replica controller know that new pods must be created? How does it discover the new app?
Answer: every change to cluster state is stored in etcd; the replica controller watches that data through the API server and reacts to the changes.
Pods in a Kubernetes cluster are used in two main ways:
Pods that run a single container. The "one container per Pod" model is the most common Kubernetes use case; here the Pod acts as a wrapper around a single container, and Kubernetes manages Pods rather than managing the containers directly.
Pods that run multiple containers working together. A Pod can encapsulate an application made up of multiple tightly coupled, co-located containers that need to share resources. These co-located containers may form a single cohesive unit of service: one container serves files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and their storage into one manageable entity.
[root@master lianxi]# cat chen.yml
apiVersion: v1
kind: Pod
metadata:
  name: memorydemo2
  namespace: mem2
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
  - name: webleader2
    image: "docker.io/nginx"
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        memory: 200Mi
    ports:
    - containerPort: 80
[root@master lianxi]# kubectl create namespace mem2
namespace/mem2 created
[root@master lianxi]# kubectl apply -f chen.yml
pod/memorydemo2 created
[root@master lianxi]# kubectl get pod -n mem2
NAME          READY   STATUS    RESTARTS   AGE
memorydemo2   2/2     Running   0          71s
[root@master lianxi]# kubectl get pod -n mem2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
memorydemo2   2/2     Running   0          91s   10.244.6.36   node3   <none>           <none>
# Go to the corresponding node to inspect the containers that were started
pause container (sets up the pod's shared namespaces) -> init containers -> app containers / sidecar containers
Init containers are exactly like regular app containers, with one difference: every init container must run to completion before the app containers start. Init containers also run in order: each init container must finish before the next one begins.
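The ordering rule can be sketched in Python (a toy simulation, not Kubernetes code): init steps run strictly one after another, and all must succeed before the app containers start.

```python
def run_pod(init_containers, app_containers):
    """Toy model of pod startup: init containers run sequentially;
    app containers only start after every init container succeeded."""
    started = []
    for init in init_containers:          # strictly one at a time, in order
        ok = init()
        started.append(init.__name__)
        if not ok:                        # a failed init container blocks the pod
            return started, False
    for app in app_containers:            # only now do the app containers start
        app()
        started.append(app.__name__)
    return started, True

# Mirrors the myapp-pod example: two init waits, then the app.
def init_myservice(): return True
def init_mydb(): return True
def myapp(): return True

order, ready = run_pod([init_myservice, init_mydb], [myapp])
print(order, ready)   # ['init_myservice', 'init_mydb', 'myapp'] True
```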
Understanding init containers (official docs)
Experiment from the official docs
[root@master lianxi]# cat tangyuhao.yml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
[root@master lianxi]# kubectl apply -f tangyuhao.yml
pod/myapp-pod created
[root@master lianxi]# kubectl get pod myapp-pod
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 76m
[root@master lianxi]# kubectl get pod myapp-pod -o wide   # the two services it waits for are not started yet, so the app container has not started
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-pod 0/1 Init:0/2 0 76m 10.244.1.3 node1 <none> <none>
Create the Services
[root@master lianxi]# vim service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
[root@master lianxi]# mv service.yml services.yaml
[root@master lianxi]# kubectl create -f services.yaml
service/myservice created
service/mydb created
[root@master lianxi]# kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP   8h
mydb         ClusterIP   10.1.104.64   <none>        80/TCP    16s
myservice    ClusterIP   10.1.193.29   <none>        80/TCP    16s
[root@master lianxi]# kubectl get pod myapp-pod -o wide   # now the pod is running
NAME        READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
myapp-pod   1/1     Running   0          102m   10.244.1.3   node1   <none>           <none>
The pause container is the container created first whenever a pod is created.
There are four different mechanisms a probe can use to check a container, and each probe must be defined as exactly one of them: exec (run a command inside the container), grpc, httpGet, and tcpSocket.
For running containers, the kubelet can optionally run three kinds of probes and react to their results: livenessProbe, readinessProbe, and startupProbe.
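The behavior of a liveness probe can be sketched as a loop (a toy model; the real kubelet also honors failureThreshold, successThreshold, and timeouts):

```python
def liveness_rounds(check, period=5, initial_delay=5, rounds=6):
    """Toy liveness loop: after initial_delay (simulated) seconds, run
    `check` every `period` seconds and record the action taken each round."""
    t = initial_delay
    actions = []
    for _ in range(rounds):
        healthy = check(t)
        actions.append((t, "ok" if healthy else "restart container"))
        t += period
    return actions

# Simulates the exec-liveness example below: /tmp/healthy exists for
# the first 30 seconds of container life, then is removed.
file_exists = lambda t: t < 30
for t, action in liveness_rounds(file_exists):
    print(t, action)
# 5..25: ok; at t=30 the check fails and the container is restarted.
```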
[root@master lianxi]# cat exec-liveness.yaml
apiVersion: v1
kind: Pod                        # start a pod
metadata:
  labels:
    test: liveness               # label it liveness
  name: liveness-exec            # the pod is named liveness-exec
spec:
  containers:
  - name: liveness               # the container is named liveness
    image: busybox               # use this image
    args:                        # command run inside the container:
    - /bin/sh                    # create a file, sleep 30s, delete the file, sleep 10 minutes
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:               # define the liveness probe
      exec:                      # probe mechanism: exec
        command:
        - cat
        - /tmp/healthy           # read the file
      initialDelaySeconds: 5     # wait 5 seconds after the container starts before probing
      periodSeconds: 5           # probe every 5 seconds
[root@master lianxi]# kubectl apply -f exec-liveness.yaml
[root@master lianxi]# kubectl get pod
NAME            READY   STATUS    RESTARTS      AGE
liveness-exec   1/1     Running   4 (79s ago)   7m
[root@master lianxi]# kubectl describe pod liveness-exec
...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  20m                   default-scheduler  Successfully assigned default/liveness-exec to node3
  Normal   Pulled     20m                   kubelet            Successfully pulled image "busybox" in 25.368558545s
  Normal   Pulled     18m                   kubelet            Successfully pulled image "busybox" in 5.003375984s
  Normal   Created    17m (x3 over 20m)     kubelet            Created container liveness
  Normal   Started    17m (x3 over 20m)     kubelet            Started container liveness
  Normal   Pulled     17m                   kubelet            Successfully pulled image "busybox" in 6.111309174s
  Normal   Killing    16m (x3 over 19m)     kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    16m (x4 over 20m)     kubelet            Pulling image "busybox"
  Warning  Unhealthy  15m (x11 over 19m)    kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Warning  BackOff    5m29s (x24 over 11m)  kubelet            Back-off restarting failed container
  Normal   Pulled     31s                   kubelet            (combined from similar events): Successfully pulled image "busybox" in 4.224120868s
[root@master lianxi]# kubectl apply -f wang.yml
pod/liveness-http created
[root@master lianxi]# cat wang.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
[root@master lianxi]# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-exec 0/1 CrashLoopBackOff 11 (17s ago) 29m
liveness-http 0/1 ImagePullBackOff 0 34s
# The image pull failed: the image must be pulled on a server that can reach the registry and then imported locally before this experiment can continue
[root@master lianxi]# cat tcp-liveness-readiness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1   # this image must be prepared on the nodes in advance
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
[root@master lianxi]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
goproxy 1/1 Running 0 76s 10.244.3.4 node3 <none> <none>
On the master, verify what the TCP liveness probe sees by connecting to the pod's ports with nc:
[root@master lianxi]# nc -z 10.244.3.4 8080
[root@master lianxi]# echo $?
0
[root@master lianxi]# nc -z 10.244.3.4 8090
[root@master lianxi]# echo $?
1
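The `nc -z` check above can be reproduced in Python with a plain TCP connect, which is the same check a tcpSocket probe performs. A sketch, demonstrated here against a throwaway local listener rather than a pod IP:

```python
import socket

def tcp_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. the
    same test `nc -z` and a tcpSocket probe perform."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener instead of a pod IP:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(tcp_open("127.0.0.1", port))    # True: something is listening
srv.close()
print(tcp_open("127.0.0.1", port))    # False: nothing listening any more
```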
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. A Service lets your application receive traffic. A Service can be exposed in different ways by specifying a type in the ServiceSpec.
In other words, a Service publishes the internal pods so that users outside the k8s cluster can reach them.
[root@master lianxi]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 8h
mydb ClusterIP 10.1.104.64 <none> 80/TCP 4m52s
myservice ClusterIP 10.1.193.29 <none> 80/TCP 4m52s
[root@master lianxi]# kubectl describe services mydb
Name:              mydb
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.1.104.64
IPs:               10.1.104.64
Port:              <unset>  80/TCP
TargetPort:        9377/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
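Note `Selector: <none>` and `Endpoints: <none>` above: these Services were created without a selector, so Kubernetes builds no Endpoints for them and traffic has nowhere to go. For a Service to pick up backend pods automatically it needs a selector matching the pod labels. A hedged sketch (the `app: mydb` label is an illustrative assumption, not from the notes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  selector:          # match pods labeled app=mydb; Endpoints are then built from their IPs
    app: mydb
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
```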
https://kubernetes.io/zh/docs/concepts/services-networking/connect-applications-service/
[root@master pod+server]# cat pod_server.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 6                 # deploy 6 pods
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
[root@master pod+server]# kubectl apply -f pod_server.yml   # this makes the app reachable from any node in the cluster
deployment.apps/my-nginx created
[root@master pod+server]# kubectl get pods -l run=my-nginx -o wide   # check which nodes the Pods are running on
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-cf54cdbf7-7st8h 1/1 Running 0 31s 10.244.1.4 node1 <none> <none>
my-nginx-cf54cdbf7-bqcg6 1/1 Running 0 31s 10.244.2.5 node2 <none> <none>
my-nginx-cf54cdbf7-hshqp 1/1 Running 0 31s 10.244.3.7 node3 <none> <none>
my-nginx-cf54cdbf7-wtf5k 1/1 Running 0 31s 10.244.3.6 node3 <none> <none>
my-nginx-cf54cdbf7-zpc2v 1/1 Running 0 31s 10.244.1.5 node1 <none> <none>
my-nginx-cf54cdbf7-zwmsz 1/1 Running 0 31s 10.244.2.4 node2 <none> <none>
# Check the Pods' IP addresses
[root@master pod+server]# kubectl get pods -l run=my-nginx -o yaml | grep podIP
podIP: 10.244.1.4
podIPs:
podIP: 10.244.2.5
podIPs:
podIP: 10.244.3.7
podIPs:
podIP: 10.244.3.6
podIPs:
podIP: 10.244.1.5
podIPs:
podIP: 10.244.2.4
podIPs:
[root@master pod+server]# cat sevice.yml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080        # the VIP (ClusterIP) port
    targetPort: 80    # the container port inside the pod
    protocol: TCP
    name: http
  selector:
    run: my-nginx
[root@master pod+server]# kubectl apply -f sevice.yml
service/my-nginx created
[root@master pod+server]# kubectl get svc my-nginx   # inspect your Service resource
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx NodePort 10.1.198.223 <none> 8080:32566/TCP 54s
# 32566 is the port opened on the node's host IP
[root@master pod+service]# kubectl describe svc my-nginx
Name:                     my-nginx
Namespace:                default
Labels:                   run=my-nginx
Annotations:              <none>
Selector:                 run=my-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.1.84.198      # the VIP: the IP of the internal load balancer the Service creates in k8s
IPs:                      10.1.84.198
Port:                     http  8080/TCP   # port on the VIP
TargetPort:               80/TCP           # container port inside the pods
NodePort:                 http  31884/TCP
Endpoints:                10.244.1.14:80,10.244.1.15:80,10.244.2.25:80 + 3 more...   # Endpoints: the final access addresses, i.e. the pod IPs
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@master pod+server]# curl 192.168.168.152:32566   # a page is returned, so the access succeeded