
CKA Exam Question Walkthrough

0. Environment

OS: Ubuntu 18.04 or later
Cluster: typically 1 master and 2 nodes
Reference docs:

Questions: 17 in total; the question bank is generally refreshed every 2 years
Official exam simulator: https://killer.sh
Official playground: https://killercoda.com/

1. Questions

1.1 Role-Based Access Control (RBAC)

Question:
Create a ClusterRole named deployment-clusterrole with "create" permission on "deployments", "statefulsets", and "daemonsets"
Create a ServiceAccount named cicd-token in the namespace app-team1
Bind the ClusterRole deployment-clusterrole to the ServiceAccount cicd-token, limited to the namespace app-team1
Reference: https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/

# Note: resource names must be plural (trailing "s")
$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
$ kubectl create sa cicd-token -n app-team1
# --serviceaccount=<namespace>:<serviceaccount>
$ kubectl create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace app-team1
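The same three objects can also be written declaratively; a minimal sketch equivalent to the commands above (the binding name cicd-token-binding is this walkthrough's choice, not mandated by the question):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]        # deployments/statefulsets/daemonsets live in the apps group
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-token
  namespace: app-team1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding             # a RoleBinding (not ClusterRoleBinding) limits it to app-team1
metadata:
  name: cicd-token-binding
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
```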

1.2 Node Maintenance: Marking a Node Unschedulable

Question:
Mark the node unschedulable
Evict all the pods on the node

Note: first run the context-switch command given in the question stem

Reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain

$ kubectl config use-context ek8s
$ kubectl cordon ek8s-node-1
$ kubectl drain ek8s-node-1 --delete-emptydir-data --ignore-daemonsets --force

1.3 Kubernetes Version Upgrade

Question
Reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Note: the exact upgrade commands differ slightly between versions

$ kubectl config use-context mk8s
$ kubectl get nodes		# identify the control-plane node
$ kubectl cordon k8s-master
$ kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force

# As instructed in the question, switch to the k8s-master node
$ ssh k8s-master
$ apt update
$ apt-cache policy kubeadm | grep 1.19.0		# check which package version is available
$ apt-get install kubeadm=1.19.0-00

# Verify the upgrade plan
$ kubeadm upgrade plan
# Upgrade the master node
$ kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
# Upgrade kubelet and kubectl
$ apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
$ systemctl daemon-reload
$ systemctl restart kubelet

$ kubectl uncordon k8s-master
$ kubectl get node

1.4 etcd Backup and Restore

Question
Back up etcd with the given information
Restore the etcd snapshot with the given information
Reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/

# Backup
$ export ETCDCTL_API=3
$ etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd/etcd-snapshot.db
# Restore
$ mkdir -p /opt/backup
$ cd /etc/kubernetes/manifests && mv kube-* /opt/backup/
$ export ETCDCTL_API=3
$ etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore
	# Note
  --data-dir		directory where the restored snapshot data is written
# etcd.yaml is not matched by kube-* above and stays in the manifests directory
$ vim /etc/kubernetes/manifests/etcd.yaml
## In the volumes section, change path: /var/lib/etcd to /var/lib/etcd-restore
volumes:
- hostPath:
    path: /etc/kubernetes/pki/etcd
    type: DirectoryOrCreate
  name: etcd-certs
- hostPath:
    path: /var/lib/etcd-restore
    type: DirectoryOrCreate
  name: etcd-data

## Restore the control-plane components
$ mv /opt/backup/* /etc/kubernetes/manifests/
$ systemctl restart kubelet

1.5 Network Policy (NetworkPolicy)

Question
In the namespace internal, create a NetworkPolicy named allow-port-from-namespace that:
allows pods in the same namespace to reach pods on port 9000
does not allow access to pods on ports other than 9000
does not allow access from pods outside the namespace internal
Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
      - podSelector: {}
      ports:
      - port: 9000
        protocol: TCP
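In the manifest above, the bare `podSelector: {}` under `from` already limits sources to pods in the policy's own namespace. If you prefer to name the namespace explicitly, a variant of the same ingress rule, assuming the cluster sets the standard `kubernetes.io/metadata.name` namespace label (automatic since Kubernetes v1.22):

```yaml
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: internal
      ports:
      - port: 9000
        protocol: TCP
```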

1.6 Layer-4 Load Balancing: Service

Question
Edit the existing Deployment front-end:

  • Add a port to the existing nginx container: name http, port 80/tcp

Create a Service:

  • Name: front-end-svc
  • Target port: http

Reference:
https://kubernetes.io/zh-cn/docs/tutorials/services/connect-applications-service/#the-kubernetes-model-for-connecting-containers
https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/

$ kubectl edit deployment.apps front-end
...
name: nginx
ports:
- containerPort: 80
  name: http
  protocol: TCP
...

Two ways to create the Service:

  • Command line
$ kubectl expose deploy front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
  • YAML
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  selector:
    app: front-end    # assumption: must match the front-end pod labels (check with kubectl get pod --show-labels)
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: http
      #nodePort: 30007

1.7 Layer-7 Load Balancing: Ingress

Question
Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
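Depending on how the grader's ingress controller is registered, the manifest may also need an ingress class; a sketch of the extra fields, assuming an NGINX-based controller exposed under the class name nginx (verify against the question text and `kubectl get ingressclass`):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # only if the question requires it
spec:
  ingressClassName: nginx   # assumption: the controller's class in this cluster
```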

1.8 Scaling a Deployment

Question
Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment
Two ways to do this:

  • edit
$ kubectl edit deployment loadbalancer
replicas: 6
  • Command line

kubectl scale deployment/loadbalancer --replicas=6

1.9 Scheduling a Pod to a Specific Node

Question
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning

1.10 Checking Node Health

Question

$ kubectl get node | grep -w Ready		# -w so NotReady is not counted; record the total as A
$ kubectl describe node | grep Taint | grep NoSchedule		# record the total as B
# Write the value of A-B to /opt/KUSC00402/kusc00402.txt
$ echo X >> /opt/KUSC00402/kusc00402.txt
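A hypothetical worked example of the A-B arithmetic above, run against canned output (the node names and versions are made up); in the exam, pipe the real kubectl output instead:

```shell
nodes_output='k8s-master   Ready      control-plane   10d   v1.19.0
k8s-node-1   Ready      <none>          10d   v1.19.0
k8s-node-2   NotReady   <none>          10d   v1.19.0'
taints_output='Taints:             node-role.kubernetes.io/master:NoSchedule'

A=$(printf '%s\n' "$nodes_output" | grep -cw Ready)       # -w keeps NotReady out
B=$(printf '%s\n' "$taints_output" | grep -c NoSchedule)
echo $((A - B))   # healthy, schedulable nodes -> 1
```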

1.11 Multiple Containers in One Pod

Question
Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/#what-is-a-pod

apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

1.12 PersistentVolume

Question
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/srv/app-config"

1.13 PersistentVolumeClaim

Question
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pvc

  1. Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  2. Create the Pod
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: pv-volume
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pv-volume
  3. Expand the PVC
  • Method 1: edit
$ kubectl edit pvc pv-volume
# If not present, add this annotation to record the resize
kubernetes.io/change-cause: 'resize'
# Expand
storage: 70Mi

  • Method 2: command line (patch)

$ kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage": "70Mi"}}}}' --record

# Notes
	-p		apply the given patch
  --record	record the command in the resource's annotations

1.14 Monitoring Pod Logs

Question

$ kubectl config use-context k8s
$ kubectl logs foobar | grep unable-access-website > /opt/KUTR00101/foobar

1.15 Sidecar Container

Question
Reference: https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/

# Note: a container cannot be added to a running Pod via kubectl edit;
# export the spec, edit it, then delete and recreate the Pod:
$ kubectl get pod legacy-app -o yaml > legacy-app.yaml
$ vim legacy-app.yaml    # add the sidecar as below
$ kubectl delete pod legacy-app && kubectl apply -f legacy-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  ...
  # add the following sidecar
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}

1.16 Pod Metrics

Question
Find the pod with the highest CPU usage among all pods labeled name=cpu-user, and append that pod's name to /opt/KUTR00401/KUTR00401.txt

$ kubectl top po -A -l name=cpu-user
# Note: CPU defaults to cores and the unit may be omitted; 1 core = 1000m, so mind the units when comparing usage
$ echo "xxxxx" >> /opt/KUTR00401/KUTR00401.txt
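A hypothetical helper for that unit caveat: normalize every CPU value to millicores before comparing, where a bare number like "2" means whole cores and "500m" means millicores:

```shell
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;            # already millicores: strip the suffix
    *)  echo "$(( $1 * 1000 ))" ;;  # bare cores -> millicores
  esac
}

to_millicores 500m   # -> 500
to_millicores 2      # -> 2000
```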

1.17 Cluster Troubleshooting: kubelet Failure

Question

$ kubectl config use-context wk8s
$ ssh wk8s-node-0
$ sudo -i
$ systemctl status kubelet
$ systemctl start kubelet
$ systemctl enable kubelet