
Common Kubernetes Resource Objects and How to Use Them


Contents

1. Kubernetes built-in resource objects

1.1 Introduction to the built-in resource objects

1.2 Commands for operating on resource objects

2. Job and CronJob tasks

2.1 Job tasks

2.2 CronJob tasks

3. RC/RS replica controllers

3.1 The RC replica controller

3.2 The RS replica controller

3.3 Updating pods with an RS

4. The Deployment controller

4.1 The Deployment controller

5. Kubernetes Services

5.1 Introduction to Services

5.2 Service types

6. Kubernetes ConfigMaps

7. Kubernetes Secrets

7.1 Introduction to Secrets

7.2 Secret types

7.3 Secret type Opaque

7.4 Secret type kubernetes.io/tls: providing a certificate to nginx

7.5 Secret type kubernetes.io/dockerconfigjson


1. Kubernetes built-in resource objects

1.1 Introduction to the built-in resource objects

1.2 Commands for operating on resource objects

Official documentation: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/

2. Job and CronJob tasks

2.1 Job tasks

A Job is a run-to-completion (one-shot) task, commonly used for environment initialization, e.g. preparing MySQL or Elasticsearch.

root@easzlab-deploy:~/jiege-k8s/pod-test# cat 1.job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-mysql-init
spec:
  template:
    spec:
      containers:
      - name: job-mysql-init-container
        image: centos:7.9.2009
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      restartPolicy: Never
root@easzlab-deploy:~/pod-test# kubectl apply -f 1.job.yaml
job.batch/job-mysql-init created
root@easzlab-deploy:~/pod-test# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default job-mysql-init-n29g9 0/1 ContainerCreating 0 14s
kube-system calico-kube-controllers-5c8bb696bb-fxbmr 1/1 Running 1 (3d7h ago) 7d18h
kube-system calico-node-2qtfm 1/1 Running 1 (3d7h ago) 7d18h
kube-system calico-node-8l78t 1/1 Running 1 (3d7h ago) 7d18h
kube-system calico-node-9b75m 1/1 Running 1 (3d7h ago) 7d18h
kube-system calico-node-k75jh 1/1 Running 1 (3d7h ago) 7d18h
kube-system calico-node-kmbhs 1/1 Running 1 (3d7h ago) 7d18h
kube-system calico-node-lxfk9 1/1 Running 1 (3d7h ago) 7d18h
kube-system coredns-69548bdd5f-6df7j 1/1 Running 1 (3d7h ago) 7d6h
kube-system coredns-69548bdd5f-nl5qc 1/1 Running 1 (3d7h ago) 7d6h
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-2d275 1/1 Running 1 (3d7h ago) 7d6h
kubernetes-dashboard kubernetes-dashboard-5676d8b865-6l8n8 1/1 Running 1 (3d7h ago) 7d6h
linux70 linux70-tomcat-app1-deployment-5d666575cc-kbjhk 1/1 Running 1 (3d7h ago) 5d7h
myserver linux70-nginx-deployment-55dc5fdcf9-58ll2 1/1 Running 0 20h
myserver linux70-nginx-deployment-55dc5fdcf9-6xcjk 1/1 Running 0 20h
myserver linux70-nginx-deployment-55dc5fdcf9-cxg5m 1/1 Running 0 20h
myserver linux70-nginx-deployment-55dc5fdcf9-gv2gk 1/1 Running 0 20h
velero-system velero-858b9459f9-5mxxx 1/1 Running 0 21h
root@easzlab-deploy:~/pod-test#

2.2 CronJob tasks

A CronJob is a recurring task; CronJobs are widely used for scheduled database backups.
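The `schedule` field used below follows standard five-field cron syntax (minute, hour, day-of-month, month, day-of-week), so "*/2 * * * *" fires every 2 minutes. As an illustration only (this helper is not part of Kubernetes), a step expression such as */2 expands like this:

```python
def expand_cron_field(spec: str, lo: int, hi: int) -> list:
    """Expand one cron field ("*", "*/n", "a-b", "a,b,c", or a single number)
    into the concrete values it matches within [lo, hi]."""
    values = set()
    for part in spec.split(","):
        step = 1
        if "/" in part:
            part, step_s = part.split("/")
            step = int(step_s)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return sorted(values)

# "*/2" in the minute field matches every even minute:
print(expand_cron_field("*/2", 0, 59)[:5])  # [0, 2, 4, 6, 8]
```

So the CronJob below is due whenever the current minute is in that expanded set.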

root@easzlab-deploy:~/jiege-k8s/pod-test# cat 2.cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-mysql-databackup
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-mysql-databackup-pod
            image: centos:7.9.2009
            command: ["/bin/sh"]
            args: ["-c", "echo mysql databackup cronjob at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
          volumes:
          - name: cache-volume
            hostPath:
              path: /tmp/cronjobdata
          restartPolicy: OnFailure
root@easzlab-deploy:~/pod-test# kubectl apply -f 2.cronjob.yaml
root@easzlab-deploy:~/pod-test#
root@easzlab-deploy:~/pod-test# kubectl get pod -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default cronjob-mysql-databackup-27661544-wntbb 0/1 Completed 0 4m3s 10.200.2.13 172.16.88.159 <none> <none>
default cronjob-mysql-databackup-27661546-lbf2t 0/1 Completed 0 2m3s 10.200.2.14 172.16.88.159 <none> <none>
default cronjob-mysql-databackup-27661548-8p9j6 0/1 Completed 0 3s 10.200.2.15 172.16.88.159 <none> <none>
kube-system calico-kube-controllers-5c8bb696bb-fxbmr 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.159 172.16.88.159 <none> <none>
kube-system calico-node-2qtfm 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.158 172.16.88.158 <none> <none>
kube-system calico-node-8l78t 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.154 172.16.88.154 <none> <none>
kube-system calico-node-9b75m 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.156 172.16.88.156 <none> <none>
kube-system calico-node-k75jh 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.157 172.16.88.157 <none> <none>
kube-system calico-node-kmbhs 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.159 172.16.88.159 <none> <none>
kube-system calico-node-lxfk9 1/1 Running 1 (3d7h ago) 7d18h 172.16.88.155 172.16.88.155 <none> <none>
kube-system coredns-69548bdd5f-6df7j 1/1 Running 1 (3d7h ago) 7d6h 10.200.2.6 172.16.88.159 <none> <none>
kube-system coredns-69548bdd5f-nl5qc 1/1 Running 1 (3d7h ago) 7d6h 10.200.40.199 172.16.88.157 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-2d275 1/1 Running 1 (3d7h ago) 7d6h 10.200.40.197 172.16.88.157 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5676d8b865-6l8n8 1/1 Running 1 (3d7h ago) 7d6h 10.200.40.198 172.16.88.157 <none> <none>
linux70 linux70-tomcat-app1-deployment-5d666575cc-kbjhk 1/1 Running 1 (3d7h ago) 5d7h 10.200.233.67 172.16.88.158 <none> <none>
myserver linux70-nginx-deployment-55dc5fdcf9-58ll2 1/1 Running 0 21h 10.200.2.10 172.16.88.159 <none> <none>
myserver linux70-nginx-deployment-55dc5fdcf9-6xcjk 1/1 Running 0 21h 10.200.2.9 172.16.88.159 <none> <none>
myserver linux70-nginx-deployment-55dc5fdcf9-cxg5m 1/1 Running 0 21h 10.200.2.11 172.16.88.159 <none> <none>
myserver linux70-nginx-deployment-55dc5fdcf9-gv2gk 1/1 Running 0 21h 10.200.233.69 172.16.88.158 <none> <none>
velero-system velero-858b9459f9-5mxxx 1/1 Running 0 21h 10.200.40.202 172.16.88.157 <none> <none>
root@easzlab-deploy:~/pod-test#

3. RC/RS replica controllers

3.1 The RC replica controller

ReplicationController: the first-generation pod replica controller. Its selector supports only equality-based matching (= and !=).
https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicationcontroller/
https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/

root@easzlab-deploy:~/jiege-k8s/pod-test# cat 1.rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: ng-rc
spec:
  replicas: 2
  selector:
    app: ng-rc-80
  template:
    metadata:
      labels:
        app: ng-rc-80
    spec:
      containers:
      - name: ng-rc-80
        image: nginx
        ports:
        - containerPort: 80
root@easzlab-deploy:~/pod-test# kubectl apply -f 1.rc.yaml
replicationcontroller/ng-rc created
root@easzlab-deploy:~/pod-test#
root@easzlab-deploy:~/pod-test# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default ng-rc-528fl 1/1 Running 0 2m8s
default ng-rc-d6zqx 1/1 Running 0 2m8s
kube-system calico-kube-controllers-5c8bb696bb-fxbmr 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-2qtfm 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-8l78t 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-9b75m 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-k75jh 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-kmbhs 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-lxfk9 1/1 Running 1 (3d10h ago) 7d21h
kube-system coredns-69548bdd5f-6df7j 1/1 Running 1 (3d10h ago) 7d9h
kube-system coredns-69548bdd5f-nl5qc 1/1 Running 1 (3d10h ago) 7d9h
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-2d275 1/1 Running 1 (3d10h ago) 7d9h
kubernetes-dashboard kubernetes-dashboard-5676d8b865-6l8n8 1/1 Running 1 (3d10h ago) 7d9h
linux70 linux70-tomcat-app1-deployment-5d666575cc-kbjhk 1/1 Running 1 (3d10h ago) 5d9h
myserver linux70-nginx-deployment-55dc5fdcf9-58ll2 1/1 Running 0 23h
myserver linux70-nginx-deployment-55dc5fdcf9-6xcjk 1/1 Running 0 23h
myserver linux70-nginx-deployment-55dc5fdcf9-cxg5m 1/1 Running 0 23h
myserver linux70-nginx-deployment-55dc5fdcf9-gv2gk 1/1 Running 0 23h
velero-system velero-858b9459f9-5mxxx 1/1 Running 0 24h
root@easzlab-deploy:~/pod-test#

3.2 The RS replica controller

ReplicaSet: the second-generation pod replica controller. Its difference from the ReplicationController is richer selector support: in addition to equality matching, the selector also supports the set-based operators in and notin.
https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicaset/
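The set-based selector semantics can be sketched in a few lines of Python (a simplified illustration of the In/NotIn/Exists operators, not the actual controller code):

```python
def matches(labels: dict, expressions: list) -> bool:
    """Simplified matchExpressions evaluation: every expression
    must hold against the pod's labels for the pod to be selected."""
    for expr in expressions:
        key, op = expr["key"], expr["operator"]
        if op == "In" and labels.get(key) not in expr["values"]:
            return False
        if op == "NotIn" and labels.get(key) in expr["values"]:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True

# The selector used in the ReplicaSet manifest below:
sel = [{"key": "app", "operator": "In", "values": ["ng-rs-80", "ng-rs-81"]}]
print(matches({"app": "ng-rs-80"}, sel))  # True
print(matches({"app": "other"}, sel))     # False
```

Pods labeled app=ng-rs-80 or app=ng-rs-81 are both counted toward the desired replicas, which an equality-only RC selector cannot express.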

root@easzlab-deploy:~/jiege-k8s/pod-test# cat 2.rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchExpressions:
    - {key: app, operator: In, values: [ng-rs-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:
      containers:
      - name: ng-rs-80
        image: nginx
        ports:
        - containerPort: 80
root@easzlab-deploy:~/pod-test# kubectl apply -f 2.rs.yaml
replicaset.apps/frontend created
root@easzlab-deploy:~/pod-test# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default frontend-jl67s 1/1 Running 0 97s
default frontend-w7rb5 1/1 Running 0 97s
kube-system calico-kube-controllers-5c8bb696bb-fxbmr 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-2qtfm 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-8l78t 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-9b75m 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-k75jh 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-kmbhs 1/1 Running 1 (3d10h ago) 7d21h
kube-system calico-node-lxfk9 1/1 Running 1 (3d10h ago) 7d21h
kube-system coredns-69548bdd5f-6df7j 1/1 Running 1 (3d10h ago) 7d10h
kube-system coredns-69548bdd5f-nl5qc 1/1 Running 1 (3d10h ago) 7d10h
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-2d275 1/1 Running 1 (3d10h ago) 7d10h
kubernetes-dashboard kubernetes-dashboard-5676d8b865-6l8n8 1/1 Running 1 (3d10h ago) 7d10h
linux70 linux70-tomcat-app1-deployment-5d666575cc-kbjhk 1/1 Running 1 (3d10h ago) 5d10h
myserver linux70-nginx-deployment-55dc5fdcf9-58ll2 1/1 Running 0 24h
myserver linux70-nginx-deployment-55dc5fdcf9-6xcjk 1/1 Running 0 24h
myserver linux70-nginx-deployment-55dc5fdcf9-cxg5m 1/1 Running 0 24h
myserver linux70-nginx-deployment-55dc5fdcf9-gv2gk 1/1 Running 0 24h
velero-system velero-858b9459f9-5mxxx 1/1 Running 0 24h
root@easzlab-deploy:~/pod-test#

3.3 Updating pods with an RS

To manually roll the ReplicaSet created above to a new image:

kubectl set image replicaset/frontend ng-rs-80=nginx:1.18.2

4. The Deployment controller

4.1 The Deployment controller

A Deployment provides declarative updates for Pods and ReplicaSets. It is a higher-level controller than a ReplicaSet: on top of the ReplicaSet features it adds rolling updates, rollback, revision-history cleanup, canary releases, and more.

Official documentation: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/

root@easzlab-deploy:~/jiege-k8s/pod-test# cat 1.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3  # number of replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
root@easzlab-deploy:~/jiege-k8s/pod-test# kubectl apply -f 1.deployment.yaml
deployment.apps/nginx-deployment created
root@easzlab-deploy:~/jiege-k8s/pod-test# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-77d55bfdd8-cbtcz 1/1 Running 2 (16h ago) 39h
nginx-deployment-6595874d85-hm5gx 1/1 Running 0 19m
nginx-deployment-6595874d85-wdwx9 1/1 Running 0 19m
nginx-deployment-6595874d85-z8dsf 1/1 Running 0 19m
root@easzlab-deploy:~/jiege-k8s/pod-test#

5. Kubernetes Services

5.1 Introduction to Services

A pod gets a new IP every time it is rebuilt, so pods that address each other directly by pod IP will eventually fail to connect. A Service decouples consumers from the application: it works by dynamically matching backend endpoints through label selectors.
kube-proxy watches the kube-apiserver; as soon as a Service resource changes (via an API call that modifies the Service), kube-proxy regenerates the corresponding load-balancing rules, keeping the Service in its latest state.
kube-proxy has three proxy modes:

  • userspace: before Kubernetes 1.1
  • iptables: from Kubernetes 1.2 until 1.11
  • ipvs: Kubernetes 1.11 and later; if IPVS is not enabled, it automatically falls back to iptables
5.2 Service types

  • ClusterIP: for in-cluster access to a service by its service name.
  • NodePort: for clients outside the Kubernetes cluster to actively reach services running inside it.
  • LoadBalancer: for exposing services in public-cloud environments.
  • ExternalName: maps a service outside the cluster to a name inside it, so in-cluster pods can reach the external service via a fixed service name; it is also sometimes used to let pods in different namespaces reach each other by an ExternalName.

Example:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case2# cat 1-deploy_node.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    #matchLabels: #rs or deployment
    #  app: ng-deploy3-80
    matchExpressions:
    - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.16.1
        ports:
        - containerPort: 80
      #nodeSelector:
      #  env: group1
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case2#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case2# cat 2-svc_service.yml
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 88
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: ng-deploy-80
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case2#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case2# cat 3-svc_NodePort.yml
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 90
    targetPort: 80
    nodePort: 30012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case2#

6. Kubernetes ConfigMaps

A ConfigMap decouples configuration from the container image: configuration is stored in a ConfigMap object and then mounted into the pod as a volume (or injected as environment variables), bringing the configuration into the pod.

Use cases:

  • Defining global environment variables for a pod via a ConfigMap.
  • Passing command-line arguments to a pod via a ConfigMap, e.g. the username and password in mysql -u -p.
  • Providing configuration files to container services in a pod, with the files mounted into the container.

Notes:

  • A ConfigMap must be created before the pods that use it.
  • A pod can only use ConfigMaps in its own namespace, i.e. ConfigMaps cannot be used across namespaces.
  • ConfigMaps are intended for configuration that does not need secure encryption.
  • A ConfigMap usually holds less than 1MB of configuration data.
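The size guideline in the last note can be checked client-side before applying a manifest. A minimal sketch (a hypothetical helper, not a kubectl feature; the hard cap in practice comes from API server and etcd request-size limits, the 1MB figure is just the guideline above):

```python
def configmap_data_size(data: dict) -> int:
    """Total UTF-8 size in bytes of a ConfigMap's data keys and values."""
    return sum(len(k.encode()) + len(v.encode()) for k, v in data.items())

def check_configmap_size(data: dict, limit: int = 1024 * 1024) -> int:
    """Raise if the data exceeds the ~1MB guideline; otherwise return the size."""
    size = configmap_data_size(data)
    if size > limit:
        raise ValueError(f"ConfigMap data is {size} bytes, over the {limit}-byte guideline")
    return size

check_configmap_size({"default": "server { listen 80; }"})  # small config: passes
```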

Example:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# cat deploy_configmap.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default: |
    server {
      listen 80;
      server_name www.mysite.com;
      index index.html;

      location / {
        root /data/nginx/html;
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-8080
        image: tomcat
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-config
          mountPath: /data
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/html
          name: nginx-static-dir
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-static-dir
        hostPath:
          path: /data/nginx/linux70
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
          - key: default
            path: mysite.conf
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

Apply and verify:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-77d55bfdd8-cbtcz 1/1 Running 2 (18h ago) 41h
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl get configmap
NAME DATA AGE
istio-ca-root-cert 1 40h
kube-root-ca.crt 1 47h
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl apply -f deploy_configmap.yml
configmap/nginx-config created
deployment.apps/nginx-deployment created
service/ng-deploy-80 created
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-77d55bfdd8-cbtcz 1/1 Running 2 (18h ago) 41h 10.200.104.212 172.16.88.163 <none> <none>
nginx-deployment-5699c4696d-gr4gm 2/2 Running 0 27s 10.200.104.216 172.16.88.163 <none> <none>
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl get configmap
NAME DATA AGE
istio-ca-root-cert 1 40h
kube-root-ca.crt 1 47h
nginx-config 1 32s
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl get configmap nginx-config -oyaml
apiVersion: v1
data:
  default: |
    server {
      listen 80;
      server_name www.mysite.com;
      index index.html;

      location / {
        root /data/nginx/html;
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"default":"server {\n listen 80;\n server_name www.mysite.com;\n index index.html;\n\n location / {\n root /data/nginx/html;\n if (!-e $request_filename) {\n rewrite ^/(.*) /index.html last;\n }\n }\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-config","namespace":"default"}}
  creationTimestamp: "2022-10-20T08:29:50Z"
  name: nginx-config
  namespace: default
  resourceVersion: "388823"
  uid: 1a04f3c2-bc33-4ddc-ac0a-f726c9fa33f6
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6#
root@easzlab-deploy:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 47h
mysql-service NodePort 10.100.125.186 <none> 3306:33306/TCP 41h
ng-deploy-80 NodePort 10.100.80.101 <none> 81:30019/TCP 2m16s
root@easzlab-deploy:~#

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# cat deploy_configmapenv.yml # ConfigMap value injected as an environment variable

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  username: user1
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        env:
        - name: "magedu"
          value: "n70"
        - name: MY_USERNAME
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: username
        ports:
        - containerPort: 80

Apply and verify:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl apply -f deploy_configmapenv.yml
configmap/nginx-config configured
deployment.apps/nginx-deployment configured
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6# kubectl get configmap -oyaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    root-cert.pem: |
      -----BEGIN CERTIFICATE-----
      MIIC/DCCAeSgAwIBAgIQOeHImLiidfxNM+2MuCKFMDANBgkqhkiG9w0BAQsFADAY
      MRYwFAYDVQQKEw1jbHVzdGVyLmxvY2FsMB4XDTIyMTAxODE2MjIzN1oXDTMyMTAx
      NTE2MjIzN1owGDEWMBQGA1UEChMNY2x1c3Rlci5sb2NhbDCCASIwDQYJKoZIhvcN
      AQEBBQADggEPADCCAQoCggEBALJL3P9+3f3SnYE8fFuitxosDPobOAkTy4kuGIMq
      68SzumFalYz5LjlBQpTfo0Hv/OXWWctiJuUm/oJs4jVLhruALQ1JjV5EK82iiwQo
      KypBaUHL1ql5AHBMKmmwqLSo/yd/zNqmU/iwasVN7G/ykAfqaapEvFbnJJhJT0Dz
      0amhRs/oPB1umgfwmiRYrCTZu9iKihBaYjbkmJ6o4/oUCw1Pse1PZLt4MkctTSiZ
      WXvtTF9YyQCqSAe62mVQkmYRBjf4x7QkmfZnvCnHvhJ86RfTOcIMYK8l5xgiaZyG
      1EUrOfMgJ/DQFdC7DKzIbbktTJ2YvA33VTb9gpIQKrCAHhECAwEAAaNCMEAwDgYD
      VR0PAQH/BAQDAgIEMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFA2bWsIMmCNm
      cgQJFjZrUwtYWf0gMA0GCSqGSIb3DQEBCwUAA4IBAQCIVbuVBrRigwzrF08/v2yc
      qhjunL/QrLh6nzRmfHlKn4dNlKMczReMc0yrxcl6V6rdzXpDpVb663Q36hhmmvwe
      WwmnJMZUUsFrYiTt1KYQg9o0dNcRFzYx/W9Dpi9YPwmS2Xqqc94rUDIkBMIOGnc9
      H99gvMOJbfK5BnzXko3A+dCVwUngdmxQpRePjzWSDhU1pWkyZp+hKxZff/1ieFqF
      Joh3bHInmEsWqZRWRhkmzwwjnlvVy3h90TKUizidYfXPz4xgXf/FVp++0mp09U4T
      tnFjivOFyXH/jwpRbZJq8uXsV+joxMEYy/JPbgywYoynvwejcEHksact/3FTQLd5
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-18T16:22:39Z"
    labels:
      istio.io/config: "true"
    name: istio-ca-root-cert
    namespace: default
    resourceVersion: "65285"
    uid: 76575e18-c8b2-4dd9-b1d7-ffef0f43c640
- apiVersion: v1
  data:
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDlDCCAnygAwIBAgIUXgL7CLqvFf9DxZvFt+UAzbLlYMUwDQYJKoZIhvcNAQEL
      BQAwYTELMAkGA1UEBhMCQ04xETAPBgNVBAgTCEhhbmdaaG91MQswCQYDVQQHEwJY
      UzEMMAoGA1UEChMDazhzMQ8wDQYDVQQLEwZTeXN0ZW0xEzARBgNVBAMTCmt1YmVy
      bmV0ZXMwIBcNMjIxMDEzMTIyMTAwWhgPMjEyMjA5MTkxMjIxMDBaMGExCzAJBgNV
      BAYTAkNOMREwDwYDVQQIEwhIYW5nWmhvdTELMAkGA1UEBxMCWFMxDDAKBgNVBAoT
      A2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVzMIIBIjAN
      BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApw+3h+j5I/2exVVSvxL/j70XZ5ep
      XW5tclKag7Qf/x5oZe8O1yMxZXiPKgzqGGS68morpG5vD2hVPEsqICOhHiFl2AD3
      ZgMCDWMGeOyk6zGgDbnTUsFO7R/v7kNTnBV6BqgKKlG9NqTtrDSPLoeakTB2qBtV
      Wjhv+YrXXsMVcEaiuEQ4wLD87Kmy8r7xRtEttELKHwdI8iS4Caq+qxtm/EosyTiT
      bQbUB4mkGZ6sFFwKSKaLUGz8Nq1yHkJYbI77YDhUBnaNEQBemPmEfkBeHCajbzx1
      CKPIairrAZNaoMPK9stuK+YLk9Z/gLUYrZe2S8S+k6DPlvuj327bLwKWCwIDAQAB
      o0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU
      XUwALoYNGxfIG/8BrPlezZd3uaQwDQYJKoZIhvcNAQELBQADggEBAIhIDLiS0R1M
      bq2RZMQROrEzKs02CclxYjwcr8hrXm/YlB6a8bHG2v3HASi+7QZ89+agz/Oeo+Cp
      6abDTiXHolUkUuyddd14KBwanC7ubwDBsqxr4iteNz5H4ml1uxaZ8G94uVyBgC2U
      qjkWGtXbw6RuY+YTuqYzX3S621U+hwLWN1cXmRcydDZwnMuI+rCwEKLXqLESDMbG
      jiQ1sbLI12oQa07fe+rffnGAWe7P2fMAu/MQxm9Mm8+pX+2WgKauDwpG/v2oZxAO
      iQqICEaYBecgLRBTj868LHVli1CnqUDVjJt59vD2/LZ8I5WnqnGFfONluYSgFiFQ
      m/7XupOph3k=
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    annotations:
      kubernetes.io/description: Contains a CA bundle that can be used to verify the
        kube-apiserver when using internal endpoints such as the internal service
        IP or kubernetes.default.svc. No other usage is guaranteed across distributions
        of Kubernetes clusters.
    creationTimestamp: "2022-10-18T09:07:42Z"
    name: kube-root-ca.crt
    namespace: default
    resourceVersion: "271"
    uid: f63b2e93-d94f-4c2c-831f-49863f82e3e5
- apiVersion: v1
  data:
    username: user1
  kind: ConfigMap
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","data":{"username":"user1"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-config","namespace":"default"}}
    creationTimestamp: "2022-10-20T08:37:17Z"
    name: nginx-config
    namespace: default
    resourceVersion: "390419"
    uid: 0136af36-4a7f-407a-a61c-bea7ef19497c
kind: List
metadata:
  resourceVersion: ""
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case6#

7. Kubernetes Secrets

7.1 Introduction to Secrets

  • A Secret works much like a ConfigMap in providing extra configuration to pods, but a Secret is an object that holds small amounts of sensitive information such as passwords, tokens, or keys.
  • A Secret's name must be a valid DNS subdomain name.
  • Each Secret is limited to 1MiB, mainly to keep very large Secrets from exhausting the memory of the API server and kubelets; creating many small Secrets can also exhaust memory, and resource quotas can be used to cap the number of Secrets per namespace.
  • When creating a Secret from a YAML file you can set the data and/or stringData fields; both are optional. All values under data must be base64-encoded strings. If you do not want to perform the base64 conversion yourself, use stringData instead, which accepts arbitrary (unencoded) strings.
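The data/stringData relationship in the last point is plain base64 (encoding, not encryption), which is easy to see in a few lines of Python:

```python
import base64

# What you would write under stringData ...
string_data = {"superuser": "admin", "password": "123456"}

# ... is stored by the API server under data, base64-encoded:
data = {k: base64.b64encode(v.encode()).decode() for k, v in string_data.items()}
print(data)  # {'superuser': 'YWRtaW4=', 'password': 'MTIzNDU2'}

# Decoding recovers the original values:
assert base64.b64decode(data["password"]).decode() == "123456"
```

These are exactly the encoded values that show up later in the stringData example of section 7.3.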

A Pod can consume a Secret in any of three ways:

  • As files in a volume mounted into one or more of its containers (e.g. .crt and .key files).
  • As container environment variables.
  • By the kubelet when pulling images for the Pod (authentication against the image registry).

7.2 Secret types

Kubernetes ships with several built-in Secret types for different use cases, and each type takes different configuration parameters; examples include Opaque, kubernetes.io/tls, kubernetes.io/dockerconfigjson, kubernetes.io/basic-auth, kubernetes.io/ssh-auth, and kubernetes.io/service-account-token.

7.3 Secret type Opaque

Opaque with data fields: the values must be base64-encoded in advance:

#echo admin |base64
#echo 123456 |base64

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# echo admin |base64
YWRtaW4K
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# echo 123456 |base64
MTIzNDU2Cg==
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# cat 1-secret-Opaque-data.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-data
  namespace: myserver
type: Opaque
data:
  user: YWRtaW4K
  password: MTIzNDU2Cg==
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# kubectl apply -f 1-secret-Opaque-data.yaml
secret/mysecret-data created
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# kubectl get secrets mysecret-data -n myserver -o yaml
apiVersion: v1
data:
  password: MTIzNDU2Cg==
  user: YWRtaW4K
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"password":"MTIzNDU2Cg==","user":"YWRtaW4K"},"kind":"Secret","metadata":{"annotations":{},"name":"mysecret-data","namespace":"myserver"},"type":"Opaque"}
  creationTimestamp: "2022-10-20T09:03:33Z"
  name: mysecret-data
  namespace: myserver
  resourceVersion: "394995"
  uid: b0788df4-0195-429f-bda5-eafb5d51bd6a
type: Opaque
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8#
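One pitfall in the encoding step above: `echo` appends a trailing newline, so YWRtaW4K actually decodes to "admin\n" rather than "admin". If the application consuming the secret is sensitive to that extra byte, encode with `echo -n` (or `printf '%s'`) instead. A quick check:

```python
import base64

# The values produced above with `echo admin | base64` / `echo 123456 | base64`
# carry a trailing newline:
assert base64.b64decode("YWRtaW4K") == b"admin\n"
assert base64.b64decode("MTIzNDU2Cg==") == b"123456\n"

# `echo -n admin | base64` would produce the newline-free form instead:
assert base64.b64encode(b"admin").decode() == "YWRtaW4="
```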

Opaque with stringData fields: the values need not be encoded beforehand; the API server base64-encodes them on write (note that this is encoding, not encryption):

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# cat 2-secret-Opaque-stringData.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-stringdata
  namespace: myserver
type: Opaque
stringData:
  superuser: 'admin'
  password: '123456'
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# kubectl apply -f 2-secret-Opaque-stringData.yaml
secret/mysecret-stringdata created
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8# kubectl get secrets mysecret-stringdata -n myserver -o yaml
apiVersion: v1
data:
  password: MTIzNDU2
  superuser: YWRtaW4=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"mysecret-stringdata","namespace":"myserver"},"stringData":{"password":"123456","superuser":"admin"},"type":"Opaque"}
  creationTimestamp: "2022-10-20T09:07:15Z"
  name: mysecret-stringdata
  namespace: myserver
  resourceVersion: "395636"
  uid: 4134fe69-389d-47d0-b870-f83dd34fa537
type: Opaque
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case8#

7.4 Secret type kubernetes.io/tls: providing a certificate to nginx

Create a self-signed certificate:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# mkdir certs
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# ls
4-secret-tls.yaml certs
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# cd certs/
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs# ls
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs# openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
Generating a RSA private key
..............................................++++
....................................................................++++
writing new private key to 'ca.key'
-----
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs# openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.mysite.com'
Generating a RSA private key
.......................................................................................................................................................................................++++
................................................++++
writing new private key to 'server.key'
-----
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
Signature ok
subject=CN = www.mysite.com
Getting CA Private Key
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs# ll -h
total 28K
drwxr-xr-x 2 root root 4.0K Oct 20 20:09 ./
drwxr-xr-x 3 root root 4.0K Oct 20 20:06 ../
-rw-r--r-- 1 root root 1.8K Oct 20 20:08 ca.crt
-rw------- 1 root root 3.2K Oct 20 20:08 ca.key
-rw-r--r-- 1 root root 1.7K Oct 20 20:09 server.crt
-rw-r--r-- 1 root root 1.6K Oct 20 20:09 server.csr
-rw------- 1 root root 3.2K Oct 20 20:09 server.key
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs# kubectl create secret tls myserver-tls-key --cert=./server.crt --key=./server.key -n myserver
secret/myserver-tls-key created
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9/certs#

Create an nginx web service that uses the certificate:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# cat 4-secret-tls.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: myserver
data:
  default: |
    server {
      listen 80;
      listen 443 ssl;
      server_name www.mysite.com;
      ssl_certificate /etc/nginx/conf.d/certs/tls.crt;
      ssl_certificate_key /etc/nginx/conf.d/certs/tls.key;
      location / {
        root /usr/share/nginx/html;
        index index.html;
        if ($scheme = http){
          rewrite / https://www.mysite.com permanent;
        }
        if (!-e $request_filename){
          rewrite ^/(.*) /index.html last;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        image: nginx:1.20.2-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d/myserver
        - name: myserver-tls-key
          mountPath: /etc/nginx/conf.d/certs
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
          - key: default
            path: mysite.conf
      - name: myserver-tls-key
        secret:
          secretName: myserver-tls-key
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30019
    protocol: TCP
  selector:
    app: myserver-myapp-frontend
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl apply -f 4-secret-tls.yaml
configmap/nginx-config created
deployment.apps/myserver-myapp-frontend-deployment created
service/myserver-myapp-frontend created
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9#
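When a Secret is mounted as a volume, each key in its `data` becomes a file named after the key under the volume's `mountPath`. For a `kubernetes.io/tls` Secret the keys are always `tls.crt` and `tls.key`, which is why the nginx config can reference fixed paths. A quick sanity check of that mapping:

```python
# Keys of the mounted kubernetes.io/tls Secret become files under mountPath.
mount_path = "/etc/nginx/conf.d/certs"
secret_keys = ("tls.crt", "tls.key")

files = [f"{mount_path}/{key}" for key in secret_keys]

# These are exactly the paths that the ssl_certificate and
# ssl_certificate_key directives in the ConfigMap point at.
assert files == [
    "/etc/nginx/conf.d/certs/tls.crt",
    "/etc/nginx/conf.d/certs/tls.key",
]
print(files)
```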

Verify the nginx pod and the secret:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl get pod -n myserver
NAME                                                  READY   STATUS    RESTARTS   AGE
myserver-myapp-frontend-deployment-7694cb4fcb-j9hcq   1/1     Running   0          54m
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl get secret -n myserver
NAME               TYPE                DATA   AGE
myserver-tls-key   kubernetes.io/tls   2      71m
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl get secret -n myserver -oyaml
apiVersion: v1
items:
- apiVersion: v1
  data:
  12. tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVvakNDQW9vQ0FRRXdEUVlKS29aSWh2Y05BUUVMQlFBd0ZURVRNQkVHQTFVRUF3d0tkM2QzTG1OaExtTnYKYlRBZUZ3MHlNakV3TWpBeE1qQTVNakJhRncwek1qRXdNVGN4TWpBNU1qQmFNQmt4RnpBVkJnTlZCQU1NRG5kMwpkeTV0ZVhOcGRHVXVZMjl0TUlJQ0lqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUF5RE9VCmt5UHJMbC9adFRLMk1ZOWxQWU8zQXVXRnExcEJFZG9PT0R2dndtZ3FabUV1VjVNeXNTdnJLcmJBeW1pQnd1d1gKd2JYZU1MZGlxOW1GK0NzTWFkR21tUW9aY0VXNW54MGZZSzNLbVdIY2hrc0JITnlKNnhXV2ZCQk0yRDlzSnM1cQpXVWJnbUFDdGNINW1iSXo5TlV3MGwwZ2pTa3oyNDM2RVNPNjdPOFo5WEVEVGlFUTN5YnM0RTV2azNiVGJ3ZzBYCmthK2Q5Z3RCQTFmQmZFOGFEZkRweWhkZTZ5L0YxTjBlWmladFlNdUp1QTIzYkVxcWR5d1hJemJCUDBjTHdyUEIKUG5vTXY5OUdTWTVzZEZZMkRrdHMxVHdUUjRqdHVWdExTTTY1MXNXeVpVckVUcm53U3RNcXJRT1Q0QUdpcXc1MQpHZTZoQXJxZ0Y1VS96U3BiL29nQjZ1T0IvWThiNDBvazFNNGRKWXRETVhvZFFtYnNtOU9pa3VOc2lRWTVIZUlnCmVpVUdnVVRoRFRGYlB6SW5acWo3TDM5bmMvUlREeWxicGRsRVRNamVTS0o5MHBpMERNM1VOVkxzTEg0WitrWnEKZ282N0hneHFCVlBYb1dTTm54UFFheEI5TkFnTUl3aVVXZ0NoM2pHVlpwMTE5VGpsQXRqYTV0OHZVeFhCWDdUMApkSDVCYUZjTTRGTmwzYmYzMmJRck9vcWlkREFMQ2JrcVZmNjNxNUVsUERNV2p0UGdhd0JtUlZ0V0Vlb2hIV0t3CitKTFVod1o4UUNPYjhBcTYwVHk3bFdvc0JyUlZZRWgwOTJjbFU4clJmOFF1VC9venl5N3BkWHlYVzNYUGZDbHAKVXo1NGF5eVdYOVZXa3pxZHRIby8xRXRtUHpCV1FzcnhSeVNjOTg4Q0F3RUFBVEFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQWdFQXRFUkdPbENCQnFmRTZzODdGcmFReXc5Q0pocUs4TndrajFVa3YwMzFsaFIwakhKSUE1TU1lYlFNCkU2YnBPdm5mS1pOWk82OXJtaVV6S3pJd2IrZjV5THZaWHJzVzdYUU5VQitiRlFJRG14cVp1dGJNbHR5YjdrcWIKaTA1eVpqUWpJNHNSYVFrUnRpZ1JBdkFWRFk4RWl3WSsvb1BFUjg4N0QrbXk2ZlZJZFhFTzNSQUloT1FhNWF1bQphV1o3bVBjL2xkd1ZoNFVicG0yTGZCNDhvb3BvS05pZ1hsZWloNWg5VWc2Mms3NHFLdVB2cnVvdVBvWWtoWGlXCmFuQzBtTXFWalk5bFMzSi9CdXpKdFpwUExlcllLdjJIQ1RiSklYdmhCazNQU3MwbmZlUFoyUDkrNHMzQXhMcSsKVGxhMHlwcXZueWtHMXRyd0g4dXMzZEM1ZEg3ZWNzWC9SRm8wM2NXbUNGZUM2Yzk4YjhiVDBXd0x6Ym1HWUl6bgpWaVk3UTRVSTNya2wrdjBySC91aDZ5OVFkU1FRZS9vaERCeEJtelQxWXVqdDMyMUpiSTIzMklrTEFQSUpWbDBnClo5SktFaCtSRko0djVRK1N6U1BSZyt5ZWJCQjExZUVvc1l3N2lQd2J4a0U1UVpaUzBLY1N4TDB3UGF3R3NXK0MKYkFmZUFpMVhFdG51MFlVMzlOTHkrOWluRFBEcjlyM
3Y2N1d0UkxFeldWSzcwS05RUTY1R3VTOUsxbHFMdzdqUApwWE1RclpuQnpDelVDb0VmYkFIN0tmSXd3OGgxWFZhTnJFeXZDWnJRRlNiTjJTUzFwYjZidXFSVmtjald2U2Y3Clp0ejUzYXBDQkFXd3JxUGFSbW1VVHd0RnpZZUIvVmVsNEdJVzlEbkIwRTF1NWdXaHpaQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  13. tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRd0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQ1Mwd2dna3BBZ0VBQW9JQ0FRRElNNVNUSStzdVg5bTEKTXJZeGoyVTlnN2NDNVlXcldrRVIyZzQ0TysvQ2FDcG1ZUzVYa3pLeEsrc3F0c0RLYUlIQzdCZkJ0ZDR3dDJLcgoyWVg0S3d4cDBhYVpDaGx3UmJtZkhSOWdyY3FaWWR5R1N3RWMzSW5yRlpaOEVFellQMndtem1wWlJ1Q1lBSzF3CmZtWnNqUDAxVERTWFNDTktUUGJqZm9SSTdyczd4bjFjUU5PSVJEZkp1emdUbStUZHROdkNEUmVScjUzMkMwRUQKVjhGOFR4b044T25LRjE3ckw4WFUzUjVtSm0xZ3k0bTREYmRzU3FwM0xCY2pOc0UvUnd2Q3M4RStlZ3kvMzBaSgpqbXgwVmpZT1MyelZQQk5IaU8yNVcwdEl6cm5XeGJKbFNzUk91ZkJLMHlxdEE1UGdBYUtyRG5VWjdxRUN1cUFYCmxUL05LbHYraUFIcTQ0SDlqeHZqU2lUVXpoMGxpME14ZWgxQ1p1eWIwNktTNDJ5SkJqa2Q0aUI2SlFhQlJPRU4KTVZzL01pZG1xUHN2ZjJkejlGTVBLVnVsMlVSTXlONUlvbjNTbUxRTXpkUTFVdXdzZmhuNlJtcUNqcnNlREdvRgpVOWVoWkkyZkU5QnJFSDAwQ0F3akNKUmFBS0hlTVpWbW5YWDFPT1VDMk5ybTN5OVRGY0ZmdFBSMGZrRm9Wd3pnClUyWGR0L2ZadENzNmlxSjBNQXNKdVNwVi9yZXJrU1U4TXhhTzArQnJBR1pGVzFZUjZpRWRZckQ0a3RTSEJueEEKSTV2d0NyclJQTHVWYWl3R3RGVmdTSFQzWnlWVHl0Ri94QzVQK2pQTEx1bDFmSmRiZGM5OEtXbFRQbmhyTEpaZgoxVmFUT3AyMGVqL1VTMlkvTUZaQ3l2RkhKSnozendJREFRQUJBb0lDQVFEQ21WaW0rYmdGdU1ldW1KOStad3NhCmt5aFdXWEhuMEhBRmdUWm5OT05semNqQkFWK0JZcVJZa1A4aTRzZGROOTVCOFNsYWNvU0tTQWRTVWJzbU1mbjcKOWZ5Qkw4N3dVZVlQSXNpNE9kWC81NTdxcm9kalhYOTJFZUxYcnlSeTRwc20wV2VRWmhPenpKektCeU5hQ21XcAo0K3dPek9ENHZQMFN2b3lwTTl5dFNzL1oxMjJHUEFFYVJyQklaelU4eUNzQVlhZHlSZ2s5KzB4emlsNlpqVzRlCjlQamJKb0p1QzE2NS9VRXFPOW4veDNpVGZrbTNxcEF1REo1azdUbEVYN092eXZoZzJWUUJRVzlaMm1YVFkyVmgKMmJEdFNGclpJdUVvVmZSRXppVFgvZ3pjNXFNUWZ5NXlIUGFUZkRIR0FQRDBZcll5d2NDaUhYTzEySzVPcUFrSQpGV0FIUnZYQTNmMHo3TGtWNDQ1OXg4aFE4WDZFSHV5Ykx6REY5dGprT1ZUUGJ4Q0poS1FMRWZlVStQbi83ZjB5CkVteXpmODRNWU9BWHNpbk1TakVsaUZnWHFrYVFLNXdUZDhaN1R4STcxSjJ1UXBMV2VyTXlBb3BKc0FDYWJjZFcKcEVXUEJhdDZHZ0FnM1NGQUE1SUFGZk9BMFdWbUxuL3UzUjdMekM2dklucUtSYW1qZUlKY0paeWkranlEVzNrQgpzWTd1ZTRMZGYyNC9IMVlCeUhISmpsSnZRWjBWYnVuVWMvZ0c2UFUzNTZ0OUhlaGwwM1lxZXVXUE5ySW9maktECjBlQWFIc3NzY2laeXdKOUFqNUZsTEJIVS9xeUhWL1RjZTQxNEQxN2NuMit3azloUmJITUo2RFh5WFdORFFWZXAKbHBKaHUyS1hoQmNZZVFrb3pEZkJRUUtDQVFFQTQyK
zhsdE83eXRld3Y1OHVwWkdBa3BJKy84NElqWWZtaGhoYgpHMlRXU0FpQmxseWxTU1YzejNtd2hzRE4yTWY1QXU4L3d2ZnpxZ3g5OHhhUW9pQjBsbnBDTzJ6OXFaSkE5YVc1CnpTQkdQS095YkhBNkpZQ0ZZSjBrUHNiVlRzY2IzTVhvSUEveHg2Wkg0QWtreGtPYTBMQmpJMW11anNVRlI3akUKMmhweUVUenZPRlNXaUNpSnA2RzFDeXBzRWozdzVQR2dGb0lCdFpwblVCM1ZDVXViNEhLVTkzT3pvaXhad05mVQpTaGdYbHZqOW5OWkdpN1NJRzJxY2xvOTI5ZlRESnV6bnUvdjBndlJnRytwbUxPSHZjM09EMzBOTzA4alhQbkNjCnJzU2sxTHVCQkNyektzUjl0RHNHTEtyWW9McFlCMUxYcTdJQXNGRFU5aUN1UlBwaW9RS0NBUUVBNFZnM25tdDkKNUFWL3NnaUx5aGtBeU5BSVUvTWFvR1RiUVgzL0hyUnlsWlRQL2d0ZWZ6WDB5Rm1lVWlRUTJHYlp1MUY0c1pFaAo2WmZWaGlOMkJsSFkyNkkwblpjQkw5Q3hDM0NNZTY3eUNmdUhDekNNN0Q4R3JieHhLV2duSWxHT1hrcFhyMzdYClg2aDBKSzV3VjlJaVlLYXVaZ2xUWm1vT2g0aTl5M252Um5ESkFrREIwMzlNYjBVUjVaaTAwOElrbUw3bDBsU2MKL0lJenBGajJTeHIvUWUySVBLYkpTWDBjWW44am5yamVZUzBjczNaKzJNMVRDVTRZVU5rTnVMVFV2ZFBPWnBNRApaUmx1MWRLbElmZDMrb0lZWkhhNmxLVDlDeitlYmdQS0Jxb0tsa1hJM0RNTldGWTZhSlF4Y0N6RkkwZStKWmVVCld4Uk96WU94Wk5PMGJ3S0NBUUVBbVVTaWZhNGdmcmpPRnNScVJnK0E2c1Y5aVJ2S3JiNG92ck5KS25QUTUrZzcKbEIzSkVUc2J1NGpSU201Q0NsWHczR1pvdkxZbDBiSHJhdGNKRHdqNktMSXBVaXpINE85N3NVOUdvQktnNHBxYQpVZk5yYS94cFpjdGdNcUlCKzcyNGJCWStzT1N0MWhLYm0wSHVNMkk1d1dzczFCVEt5dEhCRmkxUkUzNEE0dGNDCml4Nk45eUlDYWlKU2hEekphWjJ1YWtyZXpHdytSS2pSK0s2eDh6cXR5QnJQZ3RiSTlvQVcyQnRhcDdnR3Bhb1UKRnc1YnFpZzJGT3ZLckxmdnZoNTlLUTA3dVhZNHQ4dUJ2UzVBUHZ6ZlJobFJoREt5dTR3OGFZcXdQQ0t1eGVHNgpOeG5PbDBLbFI4RUREelR2R1ptYVd3MGI1RXZucE9wRUtiMnFVemU5SVFLQ0FRQmhzeHU2SmFTWlBnRVZNVHRhClRlalhKOHJVaXV3YWFsL2RUMEZURUswMVNTVzhZVFdCTmVXQkQ4bmlseHh1bG5rRUM5aW1NK1JlSUtSRTJnOEwKd21TaEpQeG03dGRtNGJaQTNYVXJFcmlCdDNuZlVoZG5QaFFwTXpCazRYRkdJZEgxODRsODN5T0ZwOFZqT2ZZZgpQVTRHVlgzN1kwT3pmWHY3SzBBT2ZqbE5jd3pUV3p3dDlGMHhTT0x2aG51djY5WnVHeVlOUVA0blJGUWJoeTZSCmRZMENDbmdzdzZzMW4zYTFCYVp0NUgwVjZMY3UzOHN6T0NJdVFKdXVRY3ovTGZlbXJiUXBLTWdxQnhMVXhkVXUKbXRwNzAvZTdadmFTQjg1bUdCa2FYYTR6b1htaG1YUHlkSGZ1dXNQc0g0UW52R0ZrWUhDQ1grdkVhVk9aS3VXNApiMGtsQW9JQkFDNmVZdlhzYUVITW5CUUQ5cDdXaGVNZitmOEtHMlAwcFFEYnFmM0x4bGtMWlIxV3l1SnMyOSttCkgrQm15OEM5blJDcGpzT0VJT3pCTW9seFdlN3F2aUhDeGsre
G9SdkNFVlZvNklMd3gyQU0xV3MvTnJBTEE5Q0QKd1QyTjBQdkdnR01jZmIwS3RMeGtJbXVDaW1nSEdnak5hWkJhNjYxeHpWNVh1cnZNTndHaUw1R2lwMlA1R1pUUwpQSEdkamg5SFVTQUtibkNqcG9CL2Z5MHBiNk9YRkJHT1JvNVkvcFV4cHYrQ3JHdEQreHkrS2UzcWd6UjIrdkQxCmNnNmU2Vk1jWHVGUk45YUl5UHdpZHZJL2hwTFdNNGtiZjNlOFJ6ZGRjWUlBQjlwZ2E3dDFyWmVJVFJtNUVqMlIKd1BZRTg3b3hRWVdTNmorUjBSWWNIb2pIK0lPZWhaMD0KLS0tLS1FTkQgUFJJVkFURSBLRVktLS0tLQo=
  kind: Secret
  metadata:
    creationTimestamp: "2022-10-20T12:09:46Z"
    name: myserver-tls-key
    namespace: myserver
    resourceVersion: "427221"
    uid: cef4b425-8572-44f2-9097-5a1040c9bd03
  type: kubernetes.io/tls
kind: List
metadata:
  resourceVersion: ""
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9#

At this point the pod is not listening on port 443: the vhost from the ConfigMap is mounted at /etc/nginx/conf.d/myserver, but nginx.conf does not include that directory.

Fix: add the mounted directory to nginx.conf's include list and reload nginx:

root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl exec -it -n myserver myserver-myapp-frontend-deployment-7694cb4fcb-l449j sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ #
/etc/nginx/conf.d/myserver # vi /etc/nginx/nginx.conf
/etc/nginx/conf.d/myserver # cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf.d/myserver/*.conf;   # added: load the vhost mounted from the ConfigMap
}
/etc/nginx/conf.d/myserver # ls /etc/nginx/conf.d/myserver/*.conf
/etc/nginx/conf.d/myserver/mysite.conf
/etc/nginx/conf.d/myserver #
/etc/nginx/conf.d/myserver # nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
/etc/nginx/conf.d/myserver # nginx -s reload
2022/10/20 13:56:20 [notice] 52#52: signal process started
/etc/nginx/conf.d/myserver # netstat -tnlp
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address  Foreign Address  State   PID/Program name
tcp    0       0       0.0.0.0:443    0.0.0.0:*        LISTEN  1/nginx: master pro
tcp    0       0       0.0.0.0:80     0.0.0.0:*        LISTEN  1/nginx: master pro
tcp    0       0       :::80          :::*             LISTEN  1/nginx: master pro
/etc/nginx/conf.d/myserver #

Configure the load balancer (HAProxy) to forward requests to the NodePorts:

root@easzlab-haproxy-keepalive-01:~# vi /etc/haproxy/haproxy.cfg

listen myserer-nginx-80
  bind 172.16.88.200:80
  mode tcp
  server easzlab-k8s-master-01 172.16.88.157:30018 check inter 2000 fall 3 rise 5
  server easzlab-k8s-master-02 172.16.88.158:30018 check inter 2000 fall 3 rise 5
  server easzlab-k8s-master-03 172.16.88.159:30018 check inter 2000 fall 3 rise 5

listen myserer-nginx-443
  bind 172.16.88.200:443
  mode tcp
  server easzlab-k8s-master-01 172.16.88.157:30019 check inter 2000 fall 3 rise 5
  server easzlab-k8s-master-02 172.16.88.158:30019 check inter 2000 fall 3 rise 5
  server easzlab-k8s-master-03 172.16.88.159:30019 check inter 2000 fall 3 rise 5
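Each `check inter 2000 fall 3 rise 5` option makes HAProxy probe the NodePort with a TCP connect every 2000 ms, marking a backend down after 3 consecutive failed probes and up again only after 5 consecutive successes. A rough sketch of that behavior (an illustration, not HAProxy's actual code):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """One health probe: succeeds iff a TCP connection can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def apply_probes(results, fall=3, rise=5, up=True):
    """Fold a sequence of probe results into the server's up/down state."""
    streak = 0
    for ok in results:
        # Count consecutive results that contradict the current state.
        streak = streak + 1 if ok == (not up) else 0
        if up and not ok and streak >= fall:      # `fall` failures -> down
            up, streak = False, 0
        elif not up and ok and streak >= rise:    # `rise` successes -> up
            up, streak = True, 0
    return up

# Three straight failures take the backend down; it then needs five
# straight successes before it is considered healthy again.
assert apply_probes([False, False, False]) is False
assert apply_probes([False] * 3 + [True] * 4) is False
assert apply_probes([False] * 3 + [True] * 5) is True
print("ok")
```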

root@easzlab-haproxy-keepalive-01:~# systemctl restart haproxy

Add a hosts entry resolving www.mysite.com to the HAProxy VIP 172.16.88.200.

Use curl to inspect the certificate the site serves:

root@easzlab-haproxy-keepalive-01:~# curl -lvk https://www.mysite.com
*   Trying 172.16.88.200:443...
* TCP_NODELAY set
* Connected to www.mysite.com (172.16.88.200) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=www.mysite.com
*  start date: Oct 20 12:09:20 2022 GMT
*  expire date: Oct 17 12:09:20 2032 GMT
*  issuer: CN=www.ca.com
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET / HTTP/1.1
> Host: www.mysite.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.20.2
< Date: Thu, 20 Oct 2022 14:06:47 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 16 Nov 2021 15:04:23 GMT
< Connection: keep-alive
< ETag: "6193c877-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host www.mysite.com left intact
root@easzlab-haproxy-keepalive-01:~#
root@easzlab-haproxy-keepalive-01:~# curl -vvi https://www.mysite.com
*   Trying 172.16.88.200:443...
* TCP_NODELAY set
* Connected to www.mysite.com (172.16.88.200) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
root@easzlab-haproxy-keepalive-01:~#

With `-k`, curl skips certificate verification and fetches the page; without it, verification fails because the certificate was issued by the self-signed CA www.ca.com, which is not in the client's trust store.

7.5、Secret type - kubernetes.io/dockerconfigjson

This type stores docker registry credentials for use when pulling images: with an imagePullSecret, every node in the cluster can pull private images without having to log in to the registry itself.

root@easzlab-deploy:~# docker login --username=c******2 registry.cn-shenzhen.aliyuncs.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@easzlab-deploy:~# cat /root/.docker/config.json
{
        "auths": {
                "harbor.magedu.net": {
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="
                },
                "registry.cn-shenzhen.aliyuncs.com": {
                        "auth": "Y*********************=="   # redacted
                }
        }
}
root@easzlab-deploy:~#
root@easzlab-deploy:~# kubectl create secret generic aliyun-registry-image-pull-key \
> --from-file=.dockerconfigjson=/root/.docker/config.json \
> --type=kubernetes.io/dockerconfigjson \
> -n myserver   # store the local Aliyun registry login as a secret shared with the k8s cluster nodes
secret/aliyun-registry-image-pull-key created
root@easzlab-deploy:~#
root@easzlab-deploy:~# kubectl get secret -n myserver
NAME                             TYPE                             DATA   AGE
aliyun-registry-image-pull-key   kubernetes.io/dockerconfigjson   1      9m24s
myserver-tls-key                 kubernetes.io/tls                2      150m
root@easzlab-deploy:~#
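The `auth` field in config.json is simply `base64(username:password)`, and the Secret stores the whole JSON file base64-encoded again under the single key `.dockerconfigjson`. A sketch of both encoding layers, using the harbor entry visible above (its `YWRtaW46SGFyYm9yMTIzNDU=` value decodes to `admin:Harbor12345`):

```python
import base64
import json

# The `auth` value in .docker/config.json is base64("user:password").
auth = base64.b64encode(b"admin:Harbor12345").decode()
assert auth == "YWRtaW46SGFyYm9yMTIzNDU="  # matches the harbor.magedu.net entry above

docker_config = {"auths": {"harbor.magedu.net": {"auth": auth}}}

# `kubectl create secret ... --type=kubernetes.io/dockerconfigjson` stores the
# entire file, base64-encoded, under the key `.dockerconfigjson`.
dockerconfigjson = base64.b64encode(json.dumps(docker_config).encode()).decode()

# The kubelet reverses both layers when it needs registry credentials:
decoded = json.loads(base64.b64decode(dockerconfigjson))
user, _, password = (
    base64.b64decode(decoded["auths"]["harbor.magedu.net"]["auth"])
    .decode()
    .partition(":")
)
assert (user, password) == ("admin", "Harbor12345")
print(user)
```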
root@easzlab-deploy:~# kubectl get secret -n myserver aliyun-registry-image-pull-key -oyaml
apiVersion: v1
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJoYXJib3IubWFnZWR1Lm5ldCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0sCgkJInJlZ2lzdHJ5LmNuLXNoZW56aGVuLmFsaXl1bmNzLmNvbSI6IHsKCQkJImF1d*************************n0=
kind: Secret
metadata:
  creationTimestamp: "2022-10-20T14:30:23Z"
  name: aliyun-registry-image-pull-key
  namespace: myserver
  resourceVersion: "451590"
  uid: f084175a-6260-4435-acfb-bcec9095e5a6
type: kubernetes.io/dockerconfigjson
root@easzlab-deploy:~#
root@easzlab-deploy:~# cd jiege-k8s/pod-test/case-yaml/case9/
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# vi 6-secret-imagePull.yaml
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# cat 6-secret-imagePull.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment-2
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend-2
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend-2
    spec:
      containers:
      - name: myserver-myapp-frontend-2
        image: registry.cn-shenzhen.aliyuncs.com/cyh01/nginx:1.22.0   # image hosted in the private Aliyun registry
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: aliyun-registry-image-pull-key
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend-2
  namespace: myserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30033
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend-2
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl apply -f 6-secret-imagePull.yaml
deployment.apps/myserver-myapp-frontend-deployment-2 created
service/myserver-myapp-frontend-2 created
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9#
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl get pod -n myserver -owide
NAME                                                   READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
myserver-myapp-frontend-deployment-2-6d96b76bb-bgmzf   1/1     Running   0          30s   10.200.104.226   172.16.88.163   <none>           <none>
myserver-myapp-frontend-deployment-6f48755cbd-k2dbs    1/1     Running   0          28m   10.200.105.158   172.16.88.164   <none>           <none>
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9#
# verify the pod details
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9# kubectl describe pod -n myserver myserver-myapp-frontend-deployment-2-6d96b76bb-bgmzf
Name:         myserver-myapp-frontend-deployment-2-6d96b76bb-bgmzf
Namespace:    myserver
Priority:     0
Node:         172.16.88.163/172.16.88.163
Start Time:   Thu, 20 Oct 2022 23:01:25 +0800
Labels:       app=myserver-myapp-frontend-2
              pod-template-hash=6d96b76bb
Annotations:  <none>
Status:       Running
IP:           10.200.104.226
IPs:
  IP:  10.200.104.226
Controlled By:  ReplicaSet/myserver-myapp-frontend-deployment-2-6d96b76bb
Containers:
  myserver-myapp-frontend-2:
    Container ID:   containerd://20d2061b0eaa8e21748fed2559ba0fe35e7271730097809f210e50d650ad20f9
    Image:          registry.cn-shenzhen.aliyuncs.com/cyh01/nginx:1.22.0
    Image ID:       registry.cn-shenzhen.aliyuncs.com/cyh01/nginx@sha256:b3a676a9145dc005062d5e79b92d90574fb3bf2396f4913dc1732f9065f55c4b
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 20 Oct 2022 23:01:27 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j7wtn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-j7wtn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  105s  default-scheduler  Successfully assigned myserver/myserver-myapp-frontend-deployment-2-6d96b76bb-bgmzf to 172.16.88.163
  Normal  Pulled     103s  kubelet            Container image "registry.cn-shenzhen.aliyuncs.com/cyh01/nginx:1.22.0" already present on machine
  Normal  Created    103s  kubelet            Created container myserver-myapp-frontend-2
  Normal  Started    103s  kubelet            Started container myserver-myapp-frontend-2
root@easzlab-deploy:~/jiege-k8s/pod-test/case-yaml/case9#
