
Cloud Native | Kubernetes | CKA Exam Walkthrough (Questions 1-5)


Preface:

 

The CKA is an online technical certification exam. There is plenty of material online about what exactly the certification is, so I will not repeat it here.

The questions people care about mostly come down to the following:

1. How much is the CKA certification actually worth?

The CKA certificate is issued by the CNCF (Cloud Native Computing Foundation). It tests whether you have the knowledge required to administer a Kubernetes cluster. The exam is hands-on: you work directly on live clusters, which really tests how solid your knowledge is and how much practical Kubernetes experience you have. It is a detailed, rigorous official technical exam. Among certifications in this space, whether you look at the issuing body, the exam format, or the difficulty, the CKA is quite authoritative.

2. How difficult is the CKA exam?

Every question on the CKA exam is a hands-on task (much like offline exams such as the RHCE, which are also machine-based practical exams; the only difference is that the CKA is taken online). There are no multiple-choice or fill-in-the-blank questions. Candidates also frequently report that the exam environment's network is laggy and the machines respond slowly. You have to complete every practical task yourself in the exam environment, so it is still challenging for people without a solid foundation. In short: easy if you know it, hard if you don't.

3. Do you need the CKA certification?

I have gone through quite a few CKA question banks. From a production standpoint, a fair amount of the CKA material is rarely or never used in practice, for example cluster upgrades, etcd backups (etcd backups are usually automated with scripts), node maintenance, and so on. Most of the content, however, does have real practical value.

In summary, I can say with confidence that the CKA certification is meaningful: it genuinely demonstrates a person's ability to administer a Kubernetes cluster.

The official registration includes two sessions of the official mock exam plus two attempts at the real exam.

Honestly, the trick to any exam is the same: diligent study and lots of practice. There is no special shortcut (unless you are exceptionally gifted, of course).




This series uses the 2022 CKA exam question bank as its blueprint, recording the approach and solution for five questions per day in detail. The question bank contains 17 questions in total.




Question 1:

RBAC

Task:

Create a ClusterRole named deployment-clusterrole that has permission to create Deployments, StatefulSets, and DaemonSets. Create a ServiceAccount named cicd-token in the namespace app-team1, and bind the ClusterRole to the ServiceAccount, limited to the namespace app-team1.

Approach:

This task involves four kinds of resources: ServiceAccount, ClusterRole, RoleBinding, and Namespace (abbreviations are used throughout this article).

The question really tests three things: granting permissions to a ServiceAccount, binding the ServiceAccount to a ClusterRole, and restricting that binding to a namespace. Of the two binding types, only a RoleBinding is namespaced; a ClusterRoleBinding is cluster-wide and has no namespace, so the binding here must be a RoleBinding.

Here is how to tell whether a given resource type is namespaced:

For example, when we list RoleBindings, the output has a NAMESPACE column, so RoleBindings are clearly namespaced:

  root@k8s-master:~# kubectl get rolebindings.rbac.authorization.k8s.io -A
  NAMESPACE NAME ROLE AGE
  default cicd-deployment ClusterRole/deployment-clusterrole 54m
  default leader-locking-nfs-client-provisioner Role/leader-locking-nfs-client-provisioner 354d
  ingress-nginx ingress-nginx Role/ingress-nginx 354d
  ingress-nginx ingress-nginx-admission Role/ingress-nginx-admission 354d
  kube-public kubeadm:bootstrap-signer-clusterinfo Role/kubeadm:bootstrap-signer-clusterinfo 369d
  kube-public system:controller:bootstrap-signer Role/system:controller:bootstrap-signer 369d

What about Roles? The output also has a NAMESPACE column, so Roles are namespaced as well:

  root@k8s-master:~# kubectl get roles.rbac.authorization.k8s.io -A
  NAMESPACE NAME CREATED AT
  default leader-locking-nfs-client-provisioner 2021-12-23T09:57:07Z
  ingress-nginx ingress-nginx 2021-12-23T10:10:47Z
  ingress-nginx ingress-nginx-admission 2021-12-23T10:10:49Z
  kube-public kubeadm:bootstrap-signer-clusterinfo 2021-12-08T06:32:46Z

What about ClusterRoles? The output has no NAMESPACE column, so ClusterRoles are not namespaced:

  root@k8s-master:~# kubectl get clusterroles.rbac.authorization.k8s.io -A
  NAME CREATED AT
  admin 2021-12-08T06:32:43Z
  calico-kube-controllers 2021-12-08T06:43:37Z
  calico-node 2021-12-08T06:43:37Z
  cluster-admin 2021-12-08T06:32:43Z

What about ClusterRoleBindings? Again there is no NAMESPACE column, so ClusterRoleBindings are not namespaced either:

  root@k8s-master:~# kubectl get clusterrolebindings.rbac.authorization.k8s.io -A
  NAME ROLE AGE
  calico-kube-controllers ClusterRole/calico-kube-controllers 369d
  calico-node ClusterRole/calico-node 369d
  cluster-admin ClusterRole/cluster-admin 369d
  ingress-nginx ClusterRole/ingress-nginx 354d
  ingress-nginx-admission ClusterRole/ingress-nginx-admission 354d
  kubeadm:get-nodes ClusterRole/kubeadm:get-nodes 369d
  kubeadm:kubelet-bootstrap ClusterRole/system:node-bootstrapper 369d

Finally, listing ServiceAccounts, the output does have a NAMESPACE column, so ServiceAccounts are namespaced:

  root@k8s-master:~# kubectl get sa -A
  NAMESPACE NAME SECRETS AGE
  app-team1 cicd-token 1 85m
  app-team1 default 1 354d
  default default 1 369d
  default nfs-client-provisioner 1 354d
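As a quicker alternative check (not used in the original write-up), kubectl api-resources reports directly whether each resource type is namespaced, so you do not have to infer it from the list output. A minimal sketch:

  # Namespaced types: roles, rolebindings and serviceaccounts show up here
  kubectl api-resources --namespaced=true | grep -Ei 'role|serviceaccount'

  # Cluster-scoped types: clusterroles and clusterrolebindings show up here
  kubectl api-resources --namespaced=false | grep -Ei 'clusterrole'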

Solution commands:

Based on the requirements, the commands are as follows. Pay special attention to the RoleBinding: it must be created with -n app-team1, otherwise the task does not count as passed. (The AlreadyExists errors below only mean that I had already created these resources in my environment before re-running the commands for this write-up.)

  root@k8s-master:~# kubectl create ns app-team1
  Error from server (AlreadyExists): namespaces "app-team1" already exists
  root@k8s-master:~# kubectl create sa cicd-token -n app-team1
  error: failed to create serviceaccount: serviceaccounts "cicd-token" already exists
  root@k8s-master:~# kubectl create clusterrole deployment-clusterrole --verb=create --resource=Deployment,StatefulSet,DaemonSet -n app-team1
  Error from server (AlreadyExists): clusterroles.rbac.authorization.k8s.io "deployment-clusterrole" already exists
  root@k8s-master:~# kubectl create rolebinding cicd-deployment --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
  error: failed to create rolebinding: rolebindings.rbac.authorization.k8s.io "cicd-deployment" already exists
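For reference, the same resources can be written declaratively. This is a sketch I am adding for illustration, not part of the original answer; note that a ClusterRole is cluster-scoped, so the -n app-team1 on the kubectl create clusterrole command above is simply ignored.

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: cicd-token
    namespace: app-team1
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: deployment-clusterrole
  rules:
    - apiGroups: ["apps"]
      resources: ["deployments", "statefulsets", "daemonsets"]
      verbs: ["create"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: cicd-deployment
    namespace: app-team1    # the binding itself is namespaced
  subjects:
    - kind: ServiceAccount
      name: cicd-token
      namespace: app-team1
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: deployment-clusterrole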

Verifying the result:

Check the created resources:

  root@k8s-master:~# kubectl get sa cicd-token -n app-team1
  NAME SECRETS AGE
  cicd-token 1 96m
  root@k8s-master:~# kubectl get clusterrole deployment-clusterrole
  NAME CREATED AT
  deployment-clusterrole 2022-12-13T02:52:51Z
  root@k8s-master:~# kubectl get rolebindings.rbac.authorization.k8s.io cicd-deployment -n app-team1
  NAME ROLE AGE
  cicd-deployment ClusterRole/deployment-clusterrole 46s

Check the permissions:

  root@k8s-master:~# kubectl describe clusterroles.rbac.authorization.k8s.io deployment-clusterrole
  Name: deployment-clusterrole
  Labels: <none>
  Annotations: <none>
  PolicyRule:
  Resources Non-Resource URLs Resource Names Verbs
  --------- ----------------- -------------- -----
  daemonsets.apps [] [] [create]
  deployments.apps [] [] [create]
  statefulsets.apps [] [] [create]
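As an extra check that is not in the original write-up, kubectl auth can-i can impersonate the ServiceAccount to confirm that the permission is granted, and only inside app-team1:

  # Expected: yes
  kubectl auth can-i create deployments \
    --as=system:serviceaccount:app-team1:cicd-token -n app-team1

  # Expected: no -- the RoleBinding limits the grant to app-team1
  kubectl auth can-i create deployments \
    --as=system:serviceaccount:app-team1:cicd-token -n default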

Official documentation: Using RBAC Authorization | Kubernetes


Question 2:

Node scheduling

Task:

Mark the node ek8s-node-1 as unschedulable and reschedule all pods running on it.

Approach:

  The full node-maintenance workflow:
  1) First check the status of all nodes in the cluster; for example, all four nodes are Ready.
  2) Check that the two nginx replicas are currently running on nodes d-node1 and k-node2.
  3) Use the cordon command to mark d-node1 as unschedulable.
  4) Run kubectl get nodes again: d-node1 is still Ready but is also marked SchedulingDisabled, meaning new pods will no longer be scheduled onto it.
  5) Check nginx again: nothing has changed, the two replicas are still on d-node1 and k-node2.
  6) Run the drain command to gracefully evict the pods running on d-node1 onto other nodes.
  7) Check nginx again: the replica from d-node1 has been moved to k-node1. At this point you can perform maintenance on d-node1, such as upgrading the kernel or upgrading Docker.
  8) When maintenance is done, use the uncordon command to make d-node1 schedulable again.
  9) Check the node status: d-node1 is back to a plain Ready state.

So three commands are involved: cordon, drain, and uncordon. The task does not require making the node schedulable again, so uncordon is not strictly needed here (though running it as well would do no harm). A sketch of the answer using the exam's node name follows; the detailed walkthrough after it uses my own node instead.
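Applied to the node name given in the question, the answer reduces to the two commands below (a sketch, since my demo cluster has no node called ek8s-node-1):

  kubectl cordon ek8s-node-1
  kubectl drain ek8s-node-1 --ignore-daemonsets --delete-emptydir-data --force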

Steps:

Step 1:

Query the node status and the pods running on the node to be maintained; here my own node k8s-node1 serves as the maintenance target.

  root@k8s-master:~# kubectl get no k8s-node1
  NAME STATUS ROLES AGE VERSION
  k8s-node1 Ready <none> 13h v1.22.2
  root@k8s-master:~# kubectl get po -A -owide |grep k8s-node1
  default front-end-6f94965fd9-dq7t8 1/1 Running 1 (173m ago) 13h 10.244.36.74 k8s-node1 <none> <none>
  default guestbook-86bb8f5bc9-mcdvg 1/1 Running 1 (173m ago) 13h 10.244.36.77 k8s-node1 <none> <none>
  default guestbook-86bb8f5bc9-zh7zq 1/1 Running 1 (173m ago) 13h 10.244.36.76 k8s-node1 <none> <none>
  default nfs-client-provisioner-56dd5765dc-gp6mz 1/1 Running 2 (173m ago) 13h 10.244.36.72 k8s-node1 <none> <none>
  default task-2-ds-pmlqw 1/1 Running 1 (173m ago) 13h 10.244.36.75 k8s-node1 <none> <none>
  ing-internal nginx-app-68b95cb66f-qkkpx 1/1 Running 1 (173m ago) 13h 10.244.36.73 k8s-node1 <none> <none>
  ingress-nginx ingress-nginx-controller-gqzgg 1/1 Running 1 (173m ago) 13h 192.168.123.151 k8s-node1 <none> <none>
  kube-system calico-node-g6rwl 1/1 Running 1 (173m ago) 13h 192.168.123.151 k8s-node1 <none> <none>
  kube-system kube-proxy-6ckmt 1/1 Running 1 (173m ago) 13h 192.168.123.151 k8s-node1 <none> <none>
  kube-system metrics-server-576fc6cd56-svg7q 1/1 Running 1 (173m ago) 13h 10.244.36.78 k8s-node1 <none> <none>

Step 2:

Cordon the node and verify that it is in the expected maintenance state:

  root@k8s-master:~# kubectl cordon k8s-node1
  node/k8s-node1 cordoned
  root@k8s-master:~# kubectl get no k8s-node1
  NAME STATUS ROLES AGE VERSION
  k8s-node1 Ready,SchedulingDisabled <none> 13h v1.22.2

Step 3:

Gracefully evict all pods on the node except those managed by DaemonSets (pressing Tab after the trailing -- below lists the available flags; the actual command then follows):

  root@k8s-master:~# kubectl drain k8s-node1 --
  --add-dir-header --client-certificate --grace-period --log-file= --pod-selector --skip-headers --user
  --alsologtostderr --client-certificate= --grace-period= --log-file-max-size --pod-selector= --skip-log-headers --user=
  --as --client-key --ignore-daemonsets --log-file-max-size= --profile --skip-wait-for-delete-timeout --username
  --as= --client-key= --ignore-errors --log-flush-frequency --profile= --skip-wait-for-delete-timeout= --username=
  --as-group --cluster --insecure-skip-tls-verify --log-flush-frequency= --profile-output --stderrthreshold --v
  --as-group= --cluster= --kubeconfig --logtostderr --profile-output= --stderrthreshold= --v=
  --cache-dir --context --kubeconfig= --match-server-version --request-timeout --timeout --vmodule
  --cache-dir= --context= --log-backtrace-at --namespace --request-timeout= --timeout= --vmodule=
  --certificate-authority --delete-emptydir-data --log-backtrace-at= --namespace= --selector --tls-server-name --warnings-as-errors
  --certificate-authority= --disable-eviction --log-dir --one-output --selector= --tls-server-name=
  --chunk-size --dry-run --log-dir= --password --server --token
  --chunk-size= --force --log-file --password= --server= --token=
  root@k8s-master:~# kubectl drain k8s-node1 --ignore-daemonsets --delete-emptydir-data --force
  node/k8s-node1 already cordoned
  WARNING: ignoring DaemonSet-managed Pods: default/task-2-ds-pmlqw, ingress-nginx/ingress-nginx-controller-gqzgg, kube-system/calico-node-g6rwl, kube-system/kube-proxy-6ckmt
  evicting pod kube-system/metrics-server-576fc6cd56-svg7q
  evicting pod default/guestbook-86bb8f5bc9-zh7zq
  evicting pod default/front-end-6f94965fd9-dq7t8
  evicting pod default/guestbook-86bb8f5bc9-mcdvg
  evicting pod default/nfs-client-provisioner-56dd5765dc-gp6mz
  evicting pod ing-internal/nginx-app-68b95cb66f-qkkpx
  pod/guestbook-86bb8f5bc9-mcdvg evicted
  pod/front-end-6f94965fd9-dq7t8 evicted
  pod/guestbook-86bb8f5bc9-zh7zq evicted
  pod/nfs-client-provisioner-56dd5765dc-gp6mz evicted
  pod/metrics-server-576fc6cd56-svg7q evicted
  pod/nginx-app-68b95cb66f-qkkpx evicted
  node/k8s-node1 evicted

Step 4:

Check the eviction result:

  root@k8s-master:~# kubectl get po -A -owide |grep k8s-node1
  default task-2-ds-pmlqw 1/1 Running 1 (179m ago) 13h 10.244.36.75 k8s-node1 <none> <none>
  ingress-nginx ingress-nginx-controller-gqzgg 1/1 Running 1 (179m ago) 13h 192.168.123.151 k8s-node1 <none> <none>
  kube-system calico-node-g6rwl 1/1 Running 1 (179m ago) 13h 192.168.123.151 k8s-node1 <none> <none>
  kube-system kube-proxy-6ckmt 1/1 Running 1 (179m ago) 13h 192.168.123.151 k8s-node1 <none> <none>
  root@k8s-master:~# kubectl get no k8s-node1
  NAME STATUS ROLES AGE VERSION
  k8s-node1 Ready,SchedulingDisabled <none> 13h v1.22.2

As you can see, all pods not managed by a DaemonSet have been evicted to other nodes, so the node is now fairly clean.

Step 5:

Uncordon the node to make it schedulable again and exit maintenance mode (not required by this task):

  root@k8s-master:~# kubectl uncordon k8s-node1
  node/k8s-node1 uncordoned
  root@k8s-master:~# kubectl get no k8s-node1
  NAME STATUS ROLES AGE VERSION
  k8s-node1 Ready <none> 13h v1.22.2

Official documentation: Kubectl Reference Docs

Question 3:

Upgrading cluster components

Task:

Upgrade the master node to 1.22.2. Make sure to drain the master node before upgrading. Do not upgrade the worker nodes, the container manager, etcd, the CNI plugin, or DNS.

Approach:

The worker nodes are not upgraded, so all operations happen on the master node only; it is essentially the node-maintenance work from the previous question, applied to the master.

etcd must be excluded from the upgrade. It runs as a static pod (not as a DaemonSet) and kubeadm would otherwise upgrade it together with the control plane, so exclude it by passing --etcd-upgrade=false to kubeadm upgrade apply.
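To confirm that etcd runs as a static pod, you can list the kubelet's static pod manifests on the control-plane node (the path below assumes a default kubeadm setup):

  ls /etc/kubernetes/manifests/
  # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml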

Steps:

  # Switch to the required kubectl context first
  kubectl get nodes
  ssh k8s-master
  kubectl cordon k8s-master
  kubectl drain k8s-master --ignore-daemonsets --force
  apt-mark unhold kubeadm kubectl kubelet
  apt-get update && apt-get install -y kubeadm=1.22.2-00 kubelet=1.22.2-00 kubectl=1.22.2-00
  apt-mark hold kubeadm kubectl kubelet
  kubeadm upgrade plan
  kubeadm upgrade apply v1.22.2 --etcd-upgrade=false
  systemctl daemon-reload && systemctl restart kubelet
  # Some people suggest rolling coredns back as well:
  # kubectl -n kube-system rollout undo deployment coredns
  kubectl uncordon k8s-master
  # Check the master node's status and version
  kubectl get node
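If you do not remember the exact Debian package version string, apt can list the available candidates first (a small extra step, not part of the original sequence):

  apt-cache madison kubeadm | grep 1.22.2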

Official documentation: Upgrading kubeadm clusters | Kubernetes

Question 4:

Backing up and restoring etcd

Task:

Back up the etcd data at https://127.0.0.1:2379 to /var/lib/backup/etcd-snapshot.db, then restore etcd from the existing backup file /data/backup/etcd-snapshot-previous.db, using the specified ca.crt, etcd-client.crt, and etcd-client.key.

Approach:

There is not much to say here. You will see people online describing a second or third method; that is unnecessary, just use the one standard method.

  Note: if these commands fail with "permission denied", you lack sufficient privileges; prefix them with sudo.
  Backup:
  ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
    --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt \
    --key=/opt/KUIN00601/etcd-client.key snapshot save /var/lib/backup/etcd-snapshot.db
  Restore:
  ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
    --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt \
    --key=/opt/KUIN00601/etcd-client.key snapshot restore /data/backup/etcd-snapshot-previous.db
  After a successful restore, it is best to run kubectl get nodes to confirm the cluster is healthy.

Steps:

First check how many pods there are in total:

  root@k8s-master:~# kubectl get po -A |wc -l
  24

Back up etcd as required, producing the backup file /var/lib/backup/etcd-snapshot.db:

  ETCDCTL_API=3 etcdctl \
    --endpoints https://192.168.123.150:2379 \
    --cacert=/opt/KUIN00601/ca.crt \
    --cert=/opt/KUIN00601/etcd-client.crt \
    --key=/opt/KUIN00601/etcd-client.key snapshot save /var/lib/backup/etcd-snapshot.db

The output:

  1. {"level":"info","ts":1670922988.1987517,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/var/lib/backup/etcd-snapshot.db.part"}
  2. {"level":"info","ts":1670922988.2269363,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
  3. {"level":"info","ts":1670922988.2272522,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
  4. {"level":"info","ts":1670922988.6475282,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
  5. {"level":"info","ts":1670922988.6933029,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"9.4 MB","took":"now"}
  6. {"level":"info","ts":1670922988.6935413,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/var/lib/backup/etcd-snapshot.db"}
  7. Snapshot saved at /var/lib/backup/etcd-snapshot.db
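To sanity-check the snapshot after saving it (an extra step I am adding, not in the original), etcdctl can print its status; on etcd 3.5 this still works through etcdctl, while newer releases move it to etcdutl:

  ETCDCTL_API=3 etcdctl --write-out=table snapshot status /var/lib/backup/etcd-snapshot.db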

Restore etcd:

 ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/server.crt --key=/opt/KUIN00601/server.key snapshot restore /data/backup/etcd-snapshot-previous.db

The output:

  2022-12-13T17:17:15+08:00 info snapshot/v3_snapshot.go:251 restoring snapshot {"path": "/data/backup/etcd-snapshot-previous.db", "wal-dir": "default.etcd/member/wal", "data-dir": "default.etcd", "snap-dir": "default.etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:257\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:128\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/cobra@v1.1.3/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/cobra@v1.1.3/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/cobra@v1.1.3/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
  2022-12-13T17:17:15+08:00 info membership/store.go:119 Trimming membership information from the backend...
  2022-12-13T17:17:16+08:00 info membership/cluster.go:393 added member {"cluster-id": "cdf818194e3a8c32", "local-member-id": "0", "added-peer-id": "8e9e05c52164694d", "added-peer-peer-urls": ["http://localhost:2380"]}
  2022-12-13T17:17:16+08:00 info snapshot/v3_snapshot.go:272 restored snapshot {"path": "/data/backup/etcd-snapshot-previous.db", "wal-dir": "default.etcd/member/wal", "data-dir": "default.etcd", "snap-dir": "default.etcd/member/snap"}
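Note that, as the log above shows, the restore wrote its data into ./default.etcd because no --data-dir was given. For the exam, running the restore command as asked is normally enough; if you actually need etcd to serve the restored data, the usual approach is to restore into a fresh directory and point the etcd static pod at it. A sketch, with an illustrative target path:

  ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db \
    --data-dir=/var/lib/etcd-restore
  # then edit /etc/kubernetes/manifests/etcd.yaml so the etcd data hostPath points at /var/lib/etcd-restore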

Official documentation: Operating etcd clusters for Kubernetes | Kubernetes

Question 5:

Configuring a NetworkPolicy

Task:

Approach:

This task only requires adapting the example from the official documentation (Network Policies | Kubernetes): copy the first example from that page and modify it according to the requirements. The modified file looks like this:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-port-from-namespace
    namespace: fubar
  spec:
    podSelector:
      matchLabels:
    policyTypes:
      - Egress
    egress:
      - to:
          - namespaceSelector:
              matchLabels:
                project: my-app
        ports:
          - protocol: TCP
            port: 8080

Apply it:

  root@k8s-master:~# kubectl apply -f networkpolicy.yaml
  networkpolicy.networking.k8s.io/allow-port-from-namespace created
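To verify (an extra check, not in the original post), describe the policy and confirm the pod selector, the policy type, and the egress rule:

  kubectl describe networkpolicy allow-port-from-namespace -n fubar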
