
Cloud Native | Kubernetes | Deploying and Using the kube-bench Security Auditing Tool


Preface:

 

Security is a topic no one can avoid, and in the cloud-native world, inside Kubernetes in particular, it matters even more. Nobody wants their project to be riddled with holes; a sensible amount of hardening keeps a project or platform running stably and efficiently.

Security never stops being an issue, no matter how long a technology has been around. That is why we should keep using kube-bench, kube-hunter, and the CIS Benchmarks to harden our Kubernetes clusters: the more secure our clusters are, the less prone they are to outages and compromise.

1.

What is CIS?

To be clear, the CIS referred to here is a technical security organization, not the Commonwealth of Independent States; it has nothing to do with politics.

The Center for Internet Security (CIS) is a long-established global security organization. It created its Kubernetes Benchmark as an "objective, consensus-driven security guideline" that provides industry-standard recommendations for configuring cluster components and hardening the Kubernetes security posture.

Level 1 recommendations are relatively easy to implement, deliver most of the benefit, and generally do not affect business functionality. Level 2, or "defense in depth", recommendations are intended for mission-critical environments that require a more comprehensive strategy.

CIS also provides tooling that verifies cluster resources against the benchmark and raises alerts for non-compliant components. The CIS framework applies to all Kubernetes distributions.

  • Pros: a strict, widely accepted blueprint of configuration settings.
  • Cons: not every recommendation is relevant to every organization; each must be evaluated accordingly.

2.

The current state of Kubernetes cluster security

Kubernetes (K8s) became the world's leading container orchestration platform for good reason: today 74% of IT companies use it to run containerized workloads in production. It is usually the simplest way to handle container configuration, deployment, and management at scale.

But while Kubernetes makes containers easier to use, it also adds complexity on the security side. As the business grows, clusters tend to grow in size and number; every cluster consists of many components and may also have high-availability requirements, all of which makes the environment increasingly complex.

The default Kubernetes configuration does not always provide the best security for every deployed workload and microservice. On top of that, you are responsible not only for protecting your environment against malicious attacks but also for meeting various compliance requirements. In plain terms: tools like kubeadm have made deployment easy, but a freshly deployed cluster is not automatically a secure one.

Compliance has become key to ensuring business continuity, preventing reputational damage, and determining each application's risk level. Compliance frameworks aim to address security and privacy through easy monitoring of controls, team-level accountability, and vulnerability assessment, all of which pose unique challenges in a K8s environment.

Fully protecting Kubernetes takes a multi-pronged approach: clean code, full observability, preventing data exchange with untrusted services, and digital signatures. You also have to consider network, supply-chain, and CI/CD pipeline security, resource protection, architectural best practices, secrets management and protection, vulnerability scanning, and container runtime protection.

3.

What is kube-bench?

kube-bench is a security auditing tool for Kubernetes. Essentially it is an application written in Go that checks a deployed Kubernetes cluster against the CIS Kubernetes Benchmark.

So what exactly does kube-bench inspect in a Kubernetes cluster?

Mainly the configuration files of the individual cluster components: it checks whether those files meet the security baseline.

Take the kubelet configuration file (shown below) as an example.

Its anonymous setting can be true or false. By default the anonymous-auth parameter is true, which means anonymous authentication is allowed: requests to the kubelet API are then made anonymously under the default user "system:anonymous" and the default group "system:unauthenticated". That is clearly an insecure configuration, so the file below explicitly sets it to false, which meaningfully improves cluster security.

Checking every cluster's configuration for baseline compliance by hand like this obviously does not scale.

kube-bench automates exactly this: it finds the settings in each component that violate the security baseline and produces a detailed report. In other words, kube-bench is an automated security auditing tool.
 

  [root@k8s-master cfg]# cat /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  authentication:
    anonymous:
      enabled: false
    webhook:
      cacheTTL: 0s
      enabled: true
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.crt
  authorization:
    mode: Webhook
    webhook:
      cacheAuthorizedTTL: 0s
      cacheUnauthorizedTTL: 0s
  cgroupDriver: systemd
  clusterDNS:
  - 10.96.0.10
  clusterDomain: cluster.local
  cpuManagerReconcilePeriod: 0s
  evictionPressureTransitionPeriod: 0s
  fileCheckFrequency: 0s
  healthzBindAddress: 127.0.0.1
  healthzPort: 10248
  httpCheckFrequency: 0s
  imageMinimumGCAge: 0s
  kind: KubeletConfiguration
  logging:
    flushFrequency: 0
    options:
      json:
        infoBufferSize: "0"
    verbosity: 0
  memorySwap: {}
  nodeStatusReportFrequency: 0s
  nodeStatusUpdateFrequency: 0s
  rotateCertificates: true
  runtimeRequestTimeout: 0s
  shutdownGracePeriod: 0s
  shutdownGracePeriodCriticalPods: 0s
  staticPodPath: /etc/kubernetes/manifests
  streamingConnectionIdleTimeout: 0s
  syncFrequency: 0s
  volumeStatsAggPeriod: 0s
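What kube-bench automates is exactly this kind of inspection. Checking a single setting by hand would look something like the following (a minimal sketch, assuming the kubeadm-default kubelet config path shown above):

  # Is anonymous access to the kubelet API disabled?
  grep -A 2 'anonymous:' /var/lib/kubelet/config.yaml
  # On a hardened node this prints the block containing "enabled: false"

kube-bench repeats this kind of check for every component against the full CIS checklist, which is what makes automating it worthwhile.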

4.

Deploying kube-bench

There are broadly three ways to deploy kube-bench.

Method 1:

Binary deployment

Download the binary release archive from: Releases · aquasecurity/kube-bench · GitHub

Upload the archive to the master node of the Kubernetes cluster and extract it (for example, tar xzf kube-bench_0.6.2_linux_amd64.tar.gz); you get the kube-bench executable and a configuration directory, cfg:

  [root@k8s-master ~]# ll
  total 25248
  drwxr-xr-x 13 root root 193 Jan 4 05:10 cfg
  -rwxr-xr-x 1 1001 116 17838153 May 26 2021 kube-bench
  -rw-r--r-- 1 root root 8009592 Jan 4 05:09 kube-bench_0.6.2_linux_amd64.tar.gz

Run the following command to start the audit:

 ./kube-bench run --targets master  --config-dir ./cfg --config ./cfg/config.yaml

This audits the master node. The output is fairly long; the results are analyzed in the next section, and this section only demonstrates deployment.
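If you want to keep the report for later processing, redirect it to a file; recent kube-bench releases can also emit machine-readable JSON, though the --json flag is something to confirm against ./kube-bench --help for your version. A sketch (file names are arbitrary):

  # Plain-text report
  ./kube-bench run --targets master --config-dir ./cfg --config ./cfg/config.yaml > master-report.txt

  # JSON report, handy for scripting (confirm the flag exists in your build)
  ./kube-bench run --targets master --config-dir ./cfg --config ./cfg/config.yaml --json > master-report.json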

Method 2:

Running kube-bench from a container

Run a one-off container that copies the kube-bench binary onto the local host:

docker run --rm -v `pwd`:/host docker.io/aquasec/kube-bench:latest install

The log output of this command looks like this:

  Unable to find image 'aquasec/kube-bench:latest' locally
  latest: Pulling from aquasec/kube-bench
  a0d0a0d46f8b: Pull complete
  de518435ffef: Pull complete
  c9e72fdf9efa: Pull complete
  cfa016c18c39: Pull complete
  629f37d57c12: Pull complete
  04166caf0b52: Pull complete
  fc9451058e82: Pull complete
  8fe3a63c04b4: Pull complete
  7e2fab6be223: Pull complete
  b5eba067eb85: Pull complete
  5d793ab25bac: Pull complete
  db448d0d5d19: Pull complete
  Digest: sha256:efaf4ba0d1c98d798c15baff83d0c15d56b4253d7e778a734a6eccc9c555626e
  Status: Downloaded newer image for aquasec/kube-bench:latest
  ===============================================
  kube-bench is now installed on your host
  Run ./kube-bench to perform a security check
  ===============================================

Start the security check:

  [root@k8s-master ~]# ./kube-bench run --targets=master
  [INFO] 1 Master Node Security Configuration
  [INFO] 1.1 Master Node Configuration Files
  [PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
  [WARN] 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
  [PASS] 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
  [PASS] 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
  [FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
  ......
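If you would rather not copy the binary onto the host at all, the kube-bench documentation also describes running the whole check from the container, sharing the host PID namespace and mounting the relevant host directories read-only. Roughly along these lines (a sketch; the mounts and the benchmark version may need adjusting for your environment):

  docker run --rm --pid=host \
    -v /etc:/etc:ro -v /var:/var:ro \
    -t docker.io/aquasec/kube-bench:latest \
    run --targets master --benchmark cis-1.6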

Method 3:

Running kube-bench as a Job inside the cluster

For this you need to download the source archive from GitHub to get the YAML manifests; here I downloaded kube-bench-0.6.6.zip.

Upload the archive to the server, extract it, and change into the extracted directory; it contains a number of YAML files:

  [root@k8s-master kube-bench-0.6.6]# ls
  cfg cmd CONTRIBUTING.md docs go.mod hack integration job-ack.yaml job-eks-asff.yaml job-gke.yaml job-master.yaml job.yaml main.go mkdocs.yml OWNERS
  check codecov.yml Dockerfile entrypoint.sh go.sum hooks internal job-aks.yaml job-eks.yaml job-iks.yaml job-node.yaml LICENSE makefile NOTICE README.md

Pick the job.yaml file and apply it:

  [root@k8s-master kube-bench-0.6.6]# kubectl apply -f job.yaml
  job.batch/kube-bench created

Wait for the Job to finish and its pod to reach the Completed state:

  [root@k8s-master kube-bench-0.6.6]# kubectl get po
  NAME               READY   STATUS      RESTARTS   AGE
  kube-bench-rn9dg   0/1     Completed   0          62s

The pod's log is the result of the security check:

  [root@k8s-master kube-bench-0.6.6]# kubectl logs kube-bench-rn9dg
  [INFO] 4 Worker Node Security Configuration
  [INFO] 4.1 Worker Node Configuration Files
  [PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
  [PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
  [PASS] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
  [PASS] 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
  [PASS] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
  [PASS] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)
  ......
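Rather than polling kubectl get po by hand, you can let kubectl wait for the Job and then read the logs through the Job object; this is plain kubectl, nothing kube-bench-specific (the timeout and the report file name are arbitrary):

  kubectl wait --for=condition=complete --timeout=120s job/kube-bench
  kubectl logs job/kube-bench > kube-bench-report.txt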

Of course, if you only want to audit the master, apply the job-master.yaml file instead; its content is as follows:

  [root@k8s-master kube-bench-0.6.6]# cat job-master.yaml
  ---
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: kube-bench-master
  spec:
    template:
      spec:
        hostPID: true
        nodeSelector:
          node-role.kubernetes.io/master: ""
        tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
        containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["kube-bench", "run", "--targets", "master"]
            volumeMounts:
              - name: var-lib-etcd
                mountPath: /var/lib/etcd
                readOnly: true
              - name: var-lib-kubelet
                mountPath: /var/lib/kubelet
                readOnly: true
              - name: var-lib-kube-scheduler
                mountPath: /var/lib/kube-scheduler
                readOnly: true
              - name: var-lib-kube-controller-manager
                mountPath: /var/lib/kube-controller-manager
                readOnly: true
              - name: etc-systemd
                mountPath: /etc/systemd
                readOnly: true
              - name: lib-systemd
                mountPath: /lib/systemd/
                readOnly: true
              - name: srv-kubernetes
                mountPath: /srv/kubernetes/
                readOnly: true
              - name: etc-kubernetes
                mountPath: /etc/kubernetes
                readOnly: true
              # /usr/local/mount-from-host/bin is mounted to access kubectl / kubelet, for auto-detecting the Kubernetes version.
              # You can omit this mount if you specify --version as part of the command.
              - name: usr-bin
                mountPath: /usr/local/mount-from-host/bin
                readOnly: true
              - name: etc-cni-netd
                mountPath: /etc/cni/net.d/
                readOnly: true
              - name: opt-cni-bin
                mountPath: /opt/cni/bin/
                readOnly: true
              - name: etc-passwd
                mountPath: /etc/passwd
                readOnly: true
              - name: etc-group
                mountPath: /etc/group
                readOnly: true
        restartPolicy: Never
        volumes:
          - name: var-lib-etcd
            hostPath:
              path: "/var/lib/etcd"
          - name: var-lib-kubelet
            hostPath:
              path: "/var/lib/kubelet"
          - name: var-lib-kube-scheduler
            hostPath:
              path: "/var/lib/kube-scheduler"
          - name: var-lib-kube-controller-manager
            hostPath:
              path: "/var/lib/kube-controller-manager"
          - name: etc-systemd
            hostPath:
              path: "/etc/systemd"
          - name: lib-systemd
            hostPath:
              path: "/lib/systemd"
          - name: srv-kubernetes
            hostPath:
              path: "/srv/kubernetes"
          - name: etc-kubernetes
            hostPath:
              path: "/etc/kubernetes"
          - name: usr-bin
            hostPath:
              path: "/usr/bin"
          - name: etc-cni-netd
            hostPath:
              path: "/etc/cni/net.d/"
          - name: opt-cni-bin
            hostPath:
              path: "/opt/cni/bin/"
          - name: etc-passwd
            hostPath:
              path: "/etc/passwd"
          - name: etc-group
            hostPath:
              path: "/etc/group"

Check this Job's log; it is the master's security audit result:

  [root@k8s-master kube-bench-0.6.6]# kubectl logs kube-bench-master-mkgcd
  [INFO] 1 Master Node Security Configuration
  [INFO] 1.1 Master Node Configuration Files
  [PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
  ......
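A finished Job does not rerun on its own, so to repeat a scan later, delete the old Job first and apply the manifest again:

  kubectl delete -f job-master.yaml   # or: kubectl delete -f job.yaml
  kubectl apply -f job-master.yaml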

5.

Analyzing kube-bench results and fixing findings, with examples

Before starting the actual audit, a word on the state of this Kubernetes cluster: it is a freshly installed cluster deployed with kubeadm using all default settings, with one master node and two worker nodes.

kube-bench itself is deployed here from the binary release.

  [root@k8s-master mnt]# kubectl get no
  NAME         STATUS   ROLES                  AGE   VERSION
  k8s-master   Ready    control-plane,master   33h   v1.23.15
  k8s-node1    Ready    <none>                 33h   v1.23.15
  k8s-node2    Ready    <none>                 33h   v1.23.15

Now let's audit the master node first.

The valid values for the --targets option can be seen from the error kube-bench prints if you pass an invalid target (here, a typo, "policyers"):

The specified --targets "policyers" are not configured for the CIS Benchmark cis-1.6\n Valid targets [master node controlplane etcd policies]

Based on that error message, we use the master target to audit the master node.
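As the error message suggests, --targets also accepts a comma-separated list, so a single run can cover several benchmark sections at once (worth verifying against your kube-bench version):

  ./kube-bench run --targets master,node,etcd,policies --config-dir ./cfg --config ./cfg/config.yaml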

Note:

Every check has an ID; for example, 1.1.1 is "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)".

If a check comes back FAIL (or WARN), the == Remediations master == section further down tells you how to fix it. For example, the first WARN is 1.1.9, and its remediation is:

  1.1.9 Run the below command (based on the file location on your system) on the master node.
  For example,
  chmod 644 <path/to/cni/files>

The complete output of the master-node check follows:
  [root@k8s-master mnt]# ./kube-bench run --targets=master --config-dir ./cfg --config ./cfg/config.yaml
  [INFO] 1 Master Node Security Configuration
  [INFO] 1.1 Master Node Configuration Files
  [PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
  [PASS] 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
  [WARN] 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
  [WARN] 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
  [PASS] 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
  [FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
  [PASS] 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
  [PASS] 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
  [PASS] 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
  [PASS] 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
  [PASS] 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
  [PASS] 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
  [PASS] 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
  [INFO] 1.2 API Server
  [WARN] 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
  [PASS] 1.2.2 Ensure that the --basic-auth-file argument is not set (Automated)
  [PASS] 1.2.3 Ensure that the --token-auth-file parameter is not set (Automated)
  [PASS] 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
  [PASS] 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
  [FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
  [PASS] 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
  [PASS] 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
  [PASS] 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
  [WARN] 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
  [PASS] 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
  [WARN] 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
  [WARN] 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
  [PASS] 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
  [PASS] 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
  [FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
  [PASS] 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
  [PASS] 1.2.18 Ensure that the --insecure-bind-address argument is not set (Automated)
  [FAIL] 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated)
  [PASS] 1.2.20 Ensure that the --secure-port argument is not set to 0 (Automated)
  [FAIL] 1.2.21 Ensure that the --profiling argument is set to false (Automated)
  [FAIL] 1.2.22 Ensure that the --audit-log-path argument is set (Automated)
  [FAIL] 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
  [FAIL] 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
  [FAIL] 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
  [WARN] 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated)
  [PASS] 1.2.27 Ensure that the --service-account-lookup argument is set to true (Automated)
  [PASS] 1.2.28 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
  [PASS] 1.2.29 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
  [PASS] 1.2.30 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
  [PASS] 1.2.31 Ensure that the --client-ca-file argument is set as appropriate (Automated)
  [PASS] 1.2.32 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
  [WARN] 1.2.33 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
  [WARN] 1.2.34 Ensure that encryption providers are appropriately configured (Manual)
  [WARN] 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
  [INFO] 1.3 Controller Manager
  [WARN] 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
  [FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
  [PASS] 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
  [PASS] 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
  [PASS] 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
  [PASS] 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
  [PASS] 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
  [INFO] 1.4 Scheduler
  [FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated)
  [PASS] 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
  == Remediations master ==
  1.1.9 Run the below command (based on the file location on your system) on the master node.
  For example,
  chmod 644 <path/to/cni/files>
  1.1.10 Run the below command (based on the file location on your system) on the master node.
  For example,
  chown root:root <path/to/cni/files>
  1.1.12 On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
  from the below command:
  ps -ef | grep etcd
  Run the below command (based on the etcd data directory found above).
  For example, chown etcd:etcd /var/lib/etcd
  1.2.1 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the below parameter.
  --anonymous-auth=false
  1.2.6 Follow the Kubernetes documentation and setup the TLS connection between
  the apiserver and kubelets. Then, edit the API server pod specification file
  /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the
  --kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
  --kubelet-certificate-authority=<ca-string>
  1.2.10 Follow the Kubernetes documentation and set the desired limits in a configuration file.
  Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  and set the below parameters.
  --enable-admission-plugins=...,EventRateLimit,...
  --admission-control-config-file=<path/to/configuration/file>
  1.2.12 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --enable-admission-plugins parameter to include
  AlwaysPullImages.
  --enable-admission-plugins=...,AlwaysPullImages,...
  1.2.13 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --enable-admission-plugins parameter to include
  SecurityContextDeny, unless PodSecurityPolicy is already in place.
  --enable-admission-plugins=...,SecurityContextDeny,...
  1.2.16 Follow the documentation and create Pod Security Policy objects as per your environment.
  Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --enable-admission-plugins parameter to a
  value that includes PodSecurityPolicy:
  --enable-admission-plugins=...,PodSecurityPolicy,...
  Then restart the API Server.
  1.2.19 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the below parameter.
  --insecure-port=0
  1.2.21 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the below parameter.
  --profiling=false
  1.2.22 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --audit-log-path parameter to a suitable path and
  file where you would like audit logs to be written, for example:
  --audit-log-path=/var/log/apiserver/audit.log
  1.2.23 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days:
  --audit-log-maxage=30
  1.2.24 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
  value.
  --audit-log-maxbackup=10
  1.2.25 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB.
  For example, to set it as 100 MB:
  --audit-log-maxsize=100
  1.2.26 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  and set the below parameter as appropriate and if needed.
  For example,
  --request-timeout=300s
  1.2.33 Follow the Kubernetes documentation and configure a EncryptionConfig file.
  Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config=</path/to/EncryptionConfig/File>
  1.2.34 Follow the Kubernetes documentation and configure a EncryptionConfig file.
  In this file, choose aescbc, kms or secretbox as the encryption provider.
  1.2.35 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
  on the master node and set the below parameter.
  --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM
  _SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM
  _SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM
  _SHA384
  1.3.1 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
  on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold,
  for example:
  --terminated-pod-gc-threshold=10
  1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
  on the master node and set the below parameter.
  --profiling=false
  1.4.1 Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
  on the master node and set the below parameter.
  --profiling=false
  == Summary master ==
  43 checks PASS
  11 checks FAIL
  11 checks WARN
  0 checks INFO
  == Summary total ==
  43 checks PASS
  11 checks FAIL
  11 checks WARN
  0 checks INFO

The output falls broadly into three parts: the check results themselves, the remediation advice that follows the == Remediations master == line, and the summary at the end.

  • [PASS]: the check passed
  • [FAIL]: the check failed; pay close attention, the results include a remediation for it
  • [WARN]: a warning, worth reviewing
  • [INFO]: purely informational

Since the output is fairly long, you can redirect the report to a file; here I redirect it to a file named result:

./kube-bench run --targets=master  --config-dir ./cfg --config ./cfg/config.yaml >result
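With the report in a file, plain grep is enough to pull out just the failing checks:

  grep '\[FAIL\]' result        # list the failed checks
  grep -c '\[FAIL\]' result     # count them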

One of the FAILs in the output above is this one:

[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)

It means the etcd data directory should be owned by the etcd user. Let's change the owner of the data directory and see whether the finding clears.

Create an etcd user that cannot log in to the system, then make it the owner of the etcd data directory:

  [root@k8s-master mnt]# useradd etcd -s /sbin/nologin
  [root@k8s-master mnt]# id etcd
  uid=1000(etcd) gid=1000(etcd) groups=1000(etcd)
  [root@k8s-master mnt]# chown -Rf etcd. /var/lib/etcd/
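A quick ls confirms the ownership change before re-running the benchmark:

  ls -ld /var/lib/etcd    # owner and group should now both be etcd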

Run the check again:

./kube-bench run --targets=master  --config-dir ./cfg --config ./cfg/config.yaml >result

You can see that this finding has been cleared:

  [root@k8s-master mnt]# tail -f result
  10 checks FAIL
  11 checks WARN
  0 checks INFO
  == Summary total ==
  44 checks PASS
  10 checks FAIL
  11 checks WARN
  0 checks INFO

OK, now audit the worker nodes (again redirecting to result); there is one FAIL:

./kube-bench run --targets=node  --config-dir ./cfg --config ./cfg/config.yaml >result
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)

The remediation advice is:

  == Remediations node ==
  4.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
  If using command line arguments, edit the kubelet service file
  /lib/systemd/system/kubelet.service on each worker node and
  set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
  --protect-kernel-defaults=true
  Based on your system, restart the kubelet service. For example:
  systemctl daemon-reload
  systemctl restart kubelet.service
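The remediation offers two routes. The article below takes the command-line route via the systemd drop-in; if you prefer the kubelet config file route, here is a sketch (assuming the kubeadm-default config path and that the key is not already present in the file):

  # Add protectKernelDefaults to the kubelet config file instead of passing a flag
  grep -q '^protectKernelDefaults:' /var/lib/kubelet/config.yaml || \
    echo 'protectKernelDefaults: true' >> /var/lib/kubelet/config.yaml
  systemctl restart kubelet

Either way, be aware that with kernel-defaults protection enabled the kubelet refuses to start if certain sysctls (for example vm.overcommit_memory or kernel.panic) do not match the values it expects, so try it on one node first.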

OK, we'll take that advice: log in to the worker nodes and edit the kubelet drop-in file /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf (do this on all three nodes):

  [root@k8s-node1 ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  Environment="KUBELET_SYSTEM_PODS_ARGS=--protect-kernel-defaults=true" # this line is newly added
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/sysconfig/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --system-reserved=cpu=1,memory=2Gi,ephemeral-storage=5Gi $KUBELET_SYSTEM_PODS_ARGS # this part is also newly added

Restart the kubelet service, then run the check again; the FAIL is gone:

systemctl daemon-reload && systemctl restart kubelet
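Before re-running kube-bench you can sanity-check that the flag actually reached the kubelet process:

  ps -ef | grep [k]ubelet | grep -o 'protect-kernel-defaults=true'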

Run the kube-bench check against the worker nodes once more; you can see that the --protect-kernel-defaults=true parameter has taken effect:

  [root@k8s-master mnt]# ./kube-bench run --targets=node --config-dir ./cfg --config ./cfg/config.yaml
  [INFO] 4 Worker Node Security Configuration
  [INFO] 4.1 Worker Node Configuration Files
  [PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
  [PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
  [PASS] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
  [PASS] 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
  [PASS] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
  [PASS] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)
  [PASS] 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
  [PASS] 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
  [PASS] 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
  [PASS] 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
  [INFO] 4.2 Kubelet
  [PASS] 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated)
  [PASS] 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
  [PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
  [PASS] 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
  [PASS] 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
  [PASS] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
  [PASS] 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
  [PASS] 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
  [WARN] 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
  [WARN] 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
  [PASS] 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
  [PASS] 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
  [WARN] 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
  ......
