$ kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-6d4bc5cc76-7n5tn   1/1     Running   0          20h   241.255.0.122   master   <none>           <none>
myapp-deploy-6d4bc5cc76-fprjr   1/1     Running   0          20h   241.255.3.41    node1    <none>           <none>
$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   241.254.0.1      <none>        443/TCP   24d   <none>
myapp        ClusterIP   241.254.228.65   <none>        80/TCP    24h   app=myapp,release=stabel
# Two pods are running, one on master and one on node1; the service lives on master.
# curl 241.254.228.65 should round-robin across both pods, but only the pod on master
# responds; requests routed to the node1 pod fail, and the node1 pod IP cannot be
# pinged either.
$ curl 241.254.228.65/hostname.html
curl: (7) Failed connect to 241.254.228.65:80; Connection timed out.
$ curl 241.254.228.65/hostname.html
myapp-deploy-6d4bc5cc76-7n5tn
$ kubectl exec -it myapp-deploy-6d4bc5cc76-7n5tn -- ping 241.255.3.41
PING 241.255.3.41 (241.255.3.41) 56(84) bytes of data.
# (no replies; the ping hangs)
Containers inside the same Pod talk to each other over the lo loopback interface.
Pods on the same physical machine reach each other through kube-bridge.
Pods on different physical machines reach each other through the flannel or kube-router network plugin.
Since the service and the node1 pod are on different physical machines, the prime suspect is the kube-router network plugin.
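A quick way to decide which of the three paths above a given pair of pod IPs takes is to compare their node subnets. This is a sketch only: it assumes the common flannel/kube-router layout where each node is allocated its own /24 out of the cluster pod CIDR, so pods on the same node share their first three octets (the 241.255.0.15 address below is a hypothetical second master pod):

```shell
#!/bin/sh
# Assumption: one /24 per node (common flannel/kube-router default), so
# two pod IPs on the same node share their first three octets. Same-node
# traffic stays on kube-bridge; cross-node traffic must cross the CNI plugin.
same_node() {
  [ "${1%.*}" = "${2%.*}" ] && echo same-node || echo cross-node
}

same_node 241.255.0.122 241.255.0.15   # hypothetical pod pair on master: same-node
same_node 241.255.0.122 241.255.3.41   # master pod vs node1 pod: cross-node
```

If a pair prints cross-node and the ping between them hangs, the fault is in the overlay network, not in the pods themselves.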
$ kubectl get pod -n kube-system -o wide | grep router
kube-router-2npds   1/1   Running   0   6d19h   192.168.234.132   master   <none>   <none>
kube-router-gzzct   1/1   Running   0   65m     192.168.234.134   node1    <none>   <none>
# Restart kube-router on node1 (deleting the pod lets it be recreated)
$ kubectl delete pod -n kube-system kube-router-gzzct
$ curl 241.254.228.65/hostname.html
myapp-deploy-6d4bc5cc76-7n5tn
$ curl 241.254.228.65/hostname.html
myapp-deploy-6d4bc5cc76-fprjr
$ kubectl exec -it myapp-deploy-6d4bc5cc76-7n5tn -- ping 241.255.3.41
PING 241.255.3.41 (241.255.3.41) 56(84) bytes of data.
64 bytes from 241.255.3.41: icmp_seq=1 ttl=62 time=11.7 ms
64 bytes from 241.255.3.41: icmp_seq=2 ttl=62 time=0.563 ms
64 bytes from 241.255.3.41: icmp_seq=3 ttl=62 time=0.309 ms
After an abrupt power loss and reboot, the apiserver may fail to start, and the kubelet log reports node "master" not found.
Fix: re-initialize the cluster.
# On the master node:
$ kubeadm reset -f
$ kubeadm init --config kube-init.yaml
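The contents of kube-init.yaml are not shown in these notes. For orientation only, a hypothetical minimal ClusterConfiguration of roughly the shape it must have; the subnets are inferred from the pod (241.255.x.x) and service (241.254.x.x) IPs seen earlier, and the endpoint from the join command used by the worker nodes. Every value here is an assumption, not the author's actual file:

```yaml
# Hypothetical sketch of kube-init.yaml; the real file is not shown in the notes.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "k8s-api:6443"
networking:
  podSubnet: "241.255.0.0/16"        # inferred from pod IPs 241.255.x.x
  serviceSubnet: "241.254.0.0/16"    # inferred from service IPs 241.254.x.x
```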
# On each worker node:
$ kubeadm reset -f
$ rm -rf /etc/kubernetes/*
$ kubeadm join k8s-api:6443 --token 5166g3.zfz8nt8k5zfv2ld8 --discovery-token-ca-cert-hash sha256:c8318efd25c31fc29e509e0c08c6bee9dd2f62886252d2684c7762023f99ba68
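If the --discovery-token-ca-cert-hash value has been lost after the reset, it can be recomputed from the cluster CA. A sketch (the ca_cert_hash name is mine; /etc/kubernetes/pki/ca.crt is the kubeadm default CA path, and the function works on any PEM certificate):

```shell
#!/bin/sh
# Sketch: recompute the sha256 digest of the CA's public key, i.e. the
# hex part of the --discovery-token-ca-cert-hash value for kubeadm join.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | sed 's/^.*= //'
}
# On the master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
# prints the 64-char hex digest to pass as sha256:<digest> to kubeadm join.
```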
$ kubectl apply -f nfs-provisioner.yaml
deployment.apps/nfs-client-provisioner created
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-5454975f6b-5sv8k 0/1 ContainerCreating 0 4s
$ kubectl describe pod nfs-client-provisioner-5454975f6b-5sv8k
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /opt/kubelet/pods/03ade740-0dea-4795-a0da-d648c972e7d4/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 192.168.234.134:/opt /opt/kubelet/pods/03ade740-0dea-4795-a0da-d648c972e7d4/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit run-120444.scope.
mount.nfs: access denied by server while mounting 192.168.234.134:/opt
Fix (either option works):
1. Don't run the nfs-client-provisioner pod on the same machine as the NFS server; or
2. On the NFS server, change the allowed client in /etc/exports to *:
$ cat /etc/exports
/opt *(insecure,rw,sync,no_root_squash)
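Each option in that export entry matters for the provisioner. An annotated copy of the same line (comments on their own lines are valid in /etc/exports; the explanations are mine, not from the original notes):

```
# /etc/exports format: <path> <client-spec>(<options>)
/opt *(insecure,rw,sync,no_root_squash)
# *               any client may mount (avoids the "access denied by server" error)
# insecure        accept client source ports above 1024
# rw              read-write access
# sync            commit writes to disk before replying
# no_root_squash  keep root as root, so the provisioner can create and chown dirs
```

After editing the file, run `exportfs -ra` on the NFS server so the new export table takes effect without restarting the service.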
The correct deletion order is: delete the pod -> delete the pvc -> delete the pv -> delete the namespace. Sometimes, though, a pod, pvc, or pv stays stuck in "Terminating" and cannot be deleted. When pods, pvcs, pvs, or namespaces refuse to delete, it is because kubelet is blocked or other resources (CRDs, for example) are still using the namespace; try restarting kubelet.
Add the flags --force --grace-period=0. The grace period is the termination grace period (default 30s) during which the pod is allowed to stop its container processes and exit gracefully before deletion; 0 terminates the pod immediately.
$ kubectl delete pod <your-pod-name> -n <name-space> --force --grace-period=0
$ kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}' -n <namespace>
$ kubectl delete ns <terminating-namespace> --force --grace-period=0
If the commands above still fail to delete the namespace, try the following steps.
1. Dump the namespace to JSON:
$ kubectl get namespace <terminating-namespace> -o json > tmp.json
2. Edit tmp.json and delete the value of the finalizers field:
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2019-11-20T15:18:06Z",
        "deletionTimestamp": "2020-01-16T02:50:02Z",
        "name": "<terminating-namespace>",
        "resourceVersion": "3249493",
        "selfLink": "/api/v1/namespaces/knative-eventing",
        "uid": "f300ea38-c8c2-4653-b432-b66103e412db"
    },
    "spec": {             # delete from this line...
        "finalizers": []
    },                    # ...through this line
    "status": {
        "phase": "Terminating"
    }
}
3. Start the API proxy (this command blocks the current terminal):
$ kubectl proxy
4. In a new terminal window, PUT the edited object to the finalize endpoint:
$ curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize
The output looks like:
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "istio-system",
    "selfLink": "/api/v1/namespaces/istio-system/finalize",
    "uid": "2e274537-727f-4a8f-ae8c-397473ed619a",
    "resourceVersion": "3249492",
    "creationTimestamp": "2019-11-20T15:18:06Z",
    "deletionTimestamp": "2020-01-16T02:50:02Z"
  },
  "spec": {},
  "status": {
    "phase": "Terminating"
  }
}
5. Confirm the Terminating namespace has been deleted:
$ kubectl get ns
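Hand-editing tmp.json in step 2 is error-prone, and the same edit can be scripted. A sketch (the strip_finalizers name is mine; it uses python3's json module so no extra tooling such as jq is required):

```shell
#!/bin/sh
# Sketch: remove spec.finalizers from a namespace JSON dump, leaving the
# rest of the object intact and ready to PUT to the /finalize endpoint.
strip_finalizers() {
  python3 - "$1" <<'PY'
import json, sys
obj = json.load(open(sys.argv[1]))
obj.get("spec", {}).pop("finalizers", None)   # drop the blocking field
print(json.dumps(obj, indent=2))
PY
}
# Typical use, replacing the manual edit:
#   kubectl get namespace <terminating-namespace> -o json > tmp.json
#   strip_finalizers tmp.json > tmp-clean.json
#   curl -k -H "Content-Type: application/json" -X PUT \
#     --data-binary @tmp-clean.json \
#     http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize
```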