
Prometheus Monitoring in Practice: Monitoring Kubernetes with Prometheus

1 Monitoring Stack

Cadvisor + node-exporter + prometheus + grafana

  • cAdvisor: collects container metrics (built into the kubelet)
  • node-exporter: collects host-level (node) metrics
  • prometheus: scrapes, processes, and stores the metrics
  • grafana: visualization

2 Monitoring Flow

  • Container monitoring: Prometheus collects container metrics via cAdvisor. Since cAdvisor is integrated into the kubelet, no separate deployment is needed; the metrics are stored by the Prometheus server and visualized with Grafana.
  • Node monitoring: node_exporter collects each host's resource metrics; Prometheus stores them, and Grafana visualizes them.
  • Master monitoring: the kube-state-metrics add-on pulls cluster-state data (including apiserver-related data) from Kubernetes and exposes it over HTTP; Prometheus stores it, and Grafana visualizes it.

3 Kubernetes Monitoring Metrics

3.1 Metrics for the cluster itself

  • Node resource utilization: CPU, memory, disk, and other hardware resources on each node
  • Node count: the ratio of node count to resource utilization and business load, used to evaluate cost and plan capacity expansion
  • Pod count: the relationship between load, node count, and pod count; at a given load level, roughly how many servers are needed and how much each pod consumes, for an overall capacity assessment
  • Resource object state: a running cluster creates many pods, controllers, jobs, and so on, all maintained as Kubernetes resource objects, whose state can be monitored directly

3.2 Pod monitoring

  • Number of pods per project: separate counts of healthy and unhealthy pods
  • Container resource utilization: per-pod and per-container resource usage, evaluated across CPU, network, and memory
  • Application metrics: the application's own behavior, e.g. concurrency and request/response performance

| Metric | Implemented by | Examples |
| --- | --- | --- |
| Pod performance | cAdvisor | container CPU and memory utilization |
| Node performance | node-exporter | node CPU and memory utilization |
| K8S resource objects | kube-state-metrics | pod, deployment, service |
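As a concrete illustration, once Prometheus is scraping these sources, queries along the following lines surface the utilization figures in the table above (these use standard cAdvisor and node-exporter metric names, not queries taken from this article; adjust labels to your environment):

```promql
# Per-pod CPU usage in cores, from cAdvisor
sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod)

# Per-node CPU utilization (0-1), from node-exporter
1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance)
```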

4 Service Discovery

Prometheus discovers scrape targets from the Kubernetes API and always stays in sync with the cluster state: targets are picked up dynamically, and whether each one still exists is checked against the API in real time. This process is service discovery.

Roles supported by automatic discovery:

  • node: discovers the cluster's nodes
  • pod: discovers running containers and their ports
  • service: discovers each created Service's ClusterIP and ports
  • endpoints: discovers the containers behind each Service's endpoints
  • ingress: discovers created ingress entry points and rules
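As a minimal sketch, a role is selected per scrape job in prometheus.yml through kubernetes_sd_configs (the full configuration used in this article appears in section 6.3):

```yaml
scrape_configs:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod        # one of: node, pod, service, endpoints, ingress
```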

5 Deploying Prometheus Monitoring on Kubernetes

5.1 Preparation

In this example, the commands below are run on the master node.

```shell
[root@master ~]# git clone https://github.com/redhatxl/k8s-prometheus-grafana.git
# Note: a few of this repo's YAML files contain errors. Corrected versions are
# provided at the end of this article and can be used directly.
Cloning into 'k8s-prometheus-grafana'...
remote: Enumerating objects: 21, done.
remote: Total 21 (delta 0), reused 0 (delta 0), pack-reused 21
Unpacking objects: 100% (21/21), done.
[root@master ~]# ll
total 24
drwxr-xr-x 5 root root 94 Jul 12 16:07 k8s-prometheus-grafana   # the cloned repo
# Pull the images on all nodes in advance
[root@master ~]# docker pull prom/node-exporter
[root@master ~]# docker pull prom/prometheus:v2.0.0
[root@master ~]# docker pull grafana/grafana:6.1.4
```

5.2 Deploy node-exporter as a DaemonSet

```shell
[root@master ~]# cd k8s-prometheus-grafana/
[root@master k8s-prometheus-grafana]# ll
total 8
drwxr-xr-x 2 root root  81 Jul 12 16:07 grafana
-rw-r--r-- 1 root root 668 Jul 12 16:07 node-exporter.yaml
drwxr-xr-x 2 root root 106 Jul 12 16:07 prometheus
-rw-r--r-- 1 root root 117 Jul 12 16:07 README.md
[root@master k8s-prometheus-grafana]# cat node-exporter.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
[root@master k8s-prometheus-grafana]# kubectl apply -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
[root@master k8s-prometheus-grafana]# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
default       recycler-for-prometheus-data     0/1     ContainerCreating   0          5m42s
kube-system   coredns-7ff77c879f-m986g         1/1     Running             0          29h
kube-system   coredns-7ff77c879f-xhknw         1/1     Running             0          29h
kube-system   etcd-master                      1/1     Running             0          29h
kube-system   kube-apiserver-master            1/1     Running             0          29h
kube-system   kube-controller-manager-master   1/1     Running             2          29h
kube-system   kube-flannel-ds-ln5f6            1/1     Running             0          26h
kube-system   kube-flannel-ds-zhq42            1/1     Running             0          29h
kube-system   kube-proxy-9bssw                 1/1     Running             0          26h
kube-system   kube-proxy-wcdzk                 1/1     Running             0          29h
kube-system   kube-scheduler-master            1/1     Running             3          29h
kube-system   node-exporter-bnkm8              1/1     Running             0          2m19s   # the newly created pod
[root@master k8s-prometheus-grafana]# kubectl get daemonset -A
NAMESPACE     NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-flannel-ds   2         2         2       2            2           <none>                   29h
kube-system   kube-proxy        2         2         2       2            2           kubernetes.io/os=linux   29h
kube-system   node-exporter     1         1         1       1            1           <none>                   2m23s   # the new DaemonSet
[root@master k8s-prometheus-grafana]# kubectl get service -A
NAMESPACE     NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes      ClusterIP   172.16.0.1      <none>        443/TCP                  29h
kube-system   kube-dns        ClusterIP   172.16.0.10     <none>        53/UDP,53/TCP,9153/TCP   29h
kube-system   node-exporter   NodePort    172.16.201.35   <none>        9100:31672/TCP           2m29s   # the new Service
```

5.3 Deploy Prometheus

```shell
[root@master k8s-prometheus-grafana]# cd prometheus/
[root@master prometheus]# ll
total 20
-rw-r--r-- 1 root root 5631 Jul 12 16:07 configmap.yaml
-rw-r--r-- 1 root root 1119 Jul 12 16:07 prometheus.deploy.yml
-rw-r--r-- 1 root root  237 Jul 12 16:07 prometheus.svc.yml
-rw-r--r-- 1 root root  716 Jul 12 16:07 rbac-setup.yaml
[root@master prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus configured
serviceaccount/prometheus configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus configured
[root@master prometheus]# kubectl apply -f configmap.yaml
configmap/prometheus-config configured
[root@master prometheus]# kubectl apply -f prometheus.deploy.yml
deployment.apps/prometheus created
[root@master prometheus]# kubectl apply -f prometheus.svc.yml
service/prometheus created
```

5.4 Deploy Grafana

```shell
[root@master prometheus]# cd ../grafana/
[root@master grafana]# ll
total 12
-rw-r--r-- 1 root root 1449 Jul 12 16:07 grafana-deploy.yaml
-rw-r--r-- 1 root root  256 Jul 12 16:07 grafana-ing.yaml
-rw-r--r-- 1 root root  225 Jul 12 16:07 grafana-svc.yaml
[root@master grafana]# kubectl apply -f grafana-deploy.yaml
deployment.apps/grafana-core created
[root@master grafana]# kubectl apply -f grafana-svc.yaml
service/grafana created
[root@master grafana]# kubectl apply -f grafana-ing.yaml
ingress.extensions/grafana created
```

5.5 Verification

1) Check pod/svc status

```shell
[root@master grafana]# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
default       recycler-for-prometheus-data     0/1     ContainerCreating   0          2m7s
kube-system   coredns-7ff77c879f-m986g         1/1     Running             0          30h
kube-system   coredns-7ff77c879f-xhknw         1/1     Running             0          30h
kube-system   etcd-master                      1/1     Running             0          30h
kube-system   grafana-core-768b6bf79c-wcmk9    1/1     Running             0          2m48s
kube-system   kube-apiserver-master            1/1     Running             0          30h
kube-system   kube-controller-manager-master   1/1     Running             2          30h
kube-system   kube-flannel-ds-ln5f6            1/1     Running             0          26h
kube-system   kube-flannel-ds-zhq42            1/1     Running             0          29h
kube-system   kube-proxy-9bssw                 1/1     Running             0          26h
kube-system   kube-proxy-wcdzk                 1/1     Running             0          30h
kube-system   kube-scheduler-master            1/1     Running             3          30h
kube-system   node-exporter-bnkm8              1/1     Running             0          18m
kube-system   prometheus-7486bf7f4b-f8k6x      1/1     Running             0          7m12s
[root@master grafana]# kubectl get svc -A
NAMESPACE     NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes      ClusterIP   172.16.0.1       <none>        443/TCP                  30h
kube-system   grafana         NodePort    172.16.198.56    <none>        3000:30931/TCP           5m15s   # Grafana on NodePort 30931
kube-system   kube-dns        ClusterIP   172.16.0.10      <none>        53/UDP,53/TCP,9153/TCP   30h
kube-system   node-exporter   NodePort    172.16.201.35    <none>        9100:31672/TCP           22m     # node-exporter on NodePort 31672
kube-system   prometheus      NodePort    172.16.176.125   <none>        9090:30003/TCP           9m12s   # Prometheus on NodePort 30003
```

2) Check the web UIs

Visit http://10.10.11.202:31672/metrics to see the raw data collected by node-exporter.
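The /metrics page is plain text in the Prometheus exposition format. As a rough sketch of what that format looks like, the following simplified parser (with invented sample values; it ignores escaping and commas inside label values) turns such lines into (name, labels, value) tuples:

```python
import re

# Hypothetical sample of what http://<node>:31672/metrics returns (values invented)
SAMPLE = """\
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 10891.4
node_memory_MemFree_bytes 1.2345e+09
"""

# metric name, optional {labels}, whitespace, value
LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
    r'(?:\{(?P<labels>[^}]*)\})?'
    r'\s+(?P<value>\S+)$'
)

def parse_metrics(text):
    """Parse exposition-format lines into (name, labels, value) tuples.

    Comment lines (# HELP / # TYPE) are skipped. Deliberately simplistic:
    no handling of escaping or commas inside label values.
    """
    samples = []
    for line in text.splitlines():
        if not line or line.startswith('#'):
            continue
        m = LINE_RE.match(line)
        if not m:
            continue
        labels = {}
        if m.group('labels'):
            for pair in m.group('labels').split(','):
                key, val = pair.split('=', 1)
                labels[key] = val.strip('"')
        samples.append((m.group('name'), labels, float(m.group('value'))))
    return samples

for sample in parse_metrics(SAMPLE):
    print(sample)
```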


Visit http://10.10.11.202:30003 for the Prometheus UI. Go to Status > Targets to confirm that Prometheus has successfully connected to the Kubernetes apiserver.


Visit http://10.10.11.202:30931 for the Grafana UI. The default username and password are both admin.


5.6 Grafana template configuration

1) Add a data source: click Add, then select Prometheus.


2) Fill in the settings in order; pay particular attention to the URL field:

The URL must use the form service.namespace:port, for example:

```shell
[root@master grafana]# kubectl get svc -A
NAMESPACE     NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes      ClusterIP   172.16.0.1       <none>        443/TCP                  46h
kube-system   grafana         NodePort    172.16.195.186   <none>        3000:30931/TCP           4m16s
kube-system   kube-dns        ClusterIP   172.16.0.10      <none>        53/UDP,53/TCP,9153/TCP   46h
kube-system   node-exporter   NodePort    172.16.201.35    <none>        9100:31672/TCP           17h
kube-system   prometheus      NodePort    172.16.176.125   <none>        9090:30003/TCP           16h
# In this example the namespace is kube-system, the service is prometheus, and
# the pod port is 9090, so the final URL is:
http://prometheus.kube-system:9090
```
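The pattern generalizes: Kubernetes cluster DNS resolves service.namespace (shorthand for service.namespace.svc.cluster.local) to the Service's ClusterIP, so the data-source URL can be assembled mechanically. A small illustrative helper (hypothetical, just to make the format explicit):

```python
def datasource_url(service: str, namespace: str, port: int) -> str:
    """Build the in-cluster URL for reaching a Service, as Grafana needs it.

    Cluster DNS resolves "service.namespace" (short for
    service.namespace.svc.cluster.local) to the Service's ClusterIP.
    """
    return f"http://{service}.{namespace}:{port}"

# The Prometheus service from the listing above:
print(datasource_url("prometheus", "kube-system", 9090))
# → http://prometheus.kube-system:9090
```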


3) Import a Kubernetes dashboard template.



4) Choose any name; the uid can be left empty (one is generated automatically). Then select the Prometheus data source created earlier and click Import.


5) The resulting dashboard

6 YAML Configuration Files

6.1 node-exporter.yaml

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
```

6.2 rbac-setup.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
```

6.3 configmap.yaml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
```
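To see what the __address__ relabel rule in the 'kubernetes-pods' job actually does, here is a small Python re-enactment. Prometheus joins the source_labels values with ';' before matching; Python's re module uses \1 where Prometheus writes $1. The pod address and annotation value below are hypothetical examples:

```python
import re

# The rule from the 'kubernetes-pods' job in configmap.yaml:
#   source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
#   regex:       ([^:]+)(?::\d+)?;(\d+)
#   replacement: $1:$2
#   target_label: __address__
PATTERN = re.compile(r'([^:]+)(?::\d+)?;(\d+)')

def relabel_address(address: str, port_annotation: str) -> str:
    """Mimic the replace action: join source label values with ';',
    then apply the regex, keeping the host and swapping in the annotated port."""
    joined = f"{address};{port_annotation}"
    return PATTERN.sub(r'\1:\2', joined)

# A pod scraped at 10.244.0.7:8080 whose prometheus.io/port annotation says 9100:
print(relabel_address("10.244.0.7:8080", "9100"))
# → 10.244.0.7:9100
```

The optional `(?::\d+)?` group is what lets the rule work whether or not the discovered address already carries a port.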

6.4 prometheus.deploy.yml 

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
```

6.5 prometheus.svc.yml 

```yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
```

6.6 grafana-deploy.yaml 

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        # The following env variables set up basic auth with the default admin user and admin password.
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
        #   value: Admin
        # does not really work, because of template variables in exported dashboards:
        # - name: GF_DASHBOARDS_JSON_ENABLED
        #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        # volumeMounts:     # persistent storage not mounted for now
        # - name: grafana-persistent-storage
        #   mountPath: /var
      # volumes:
      # - name: grafana-persistent-storage
      #   emptyDir: {}
```

6.7 grafana-svc.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: grafana
    component: core
```

6.8 grafana-ing.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
```
