
k8s cluster monitoring solution -- node-exporter + Prometheus + Grafana

Contents

Prerequisites
1. Download the YAML files
2. Deploy each YAML component
  2.1 node-exporter.yaml
  2.2 Prometheus
  2.3 Grafana
  2.4 Access test
3. Grafana initialization
  3.1 Add the data source
  3.2 Import a dashboard template
4. Deploying with Helm


Prerequisites

A working k8s cluster (any number of nodes is fine; to keep the experiment simple, my cluster has only a single master node). Note that Prometheus is deployed inside the k8s cluster itself, unlike traditional monitoring, which is split into a monitoring server and monitored agents.

k8s deployment references: Linux部署单节点k8s_linux单节点安装k8s_luo_guibin的博客-CSDN博客
                           k8s集群环境的搭建 · 语雀

11.0.1.12    k8s-master (runs node-exporter + Prometheus + Grafana)

1. Download the YAML files

Link: https://pan.baidu.com/s/1vmT0Xu7SBB36-odiCMy9zA (the link is permanently valid)

Extraction code: 9999

Unzip the archive:

  [root@prometheus opt]# yum install -y zip unzip tree
  [root@prometheus opt]# unzip k8s-prometheus-grafana-master.zip
  [root@k8s-master k8s-prometheus-grafana-master]# pwd
  /opt/k8s-prometheus-grafana-master
  [root@k8s-master k8s-prometheus-grafana-master]# tree
  .
  ├── grafana
  │   ├── grafana-deploy.yaml
  │   ├── grafana-ing.yaml
  │   └── grafana-svc.yaml
  ├── node-exporter.yaml
  ├── prometheus
  │   ├── configmap.yaml
  │   ├── prometheus.deploy.yml
  │   ├── prometheus.svc.yml
  │   └── rbac-setup.yaml
  └── README.md

2. Deploy each YAML component

Enable kubectl tab completion:

  [root@k8s-master ~]# yum install -y bash-completion
  [root@k8s-master ~]# source <(kubectl completion bash)

2.1 node-exporter.yaml

  [root@k8s-master k8s-prometheus-grafana-master]# kubectl apply -f node-exporter.yaml
  daemonset.apps/node-exporter created
  service/node-exporter created
  # There is only one node here; with multiple nodes, the number of node-exporter Pods equals the number of nodes
  [root@k8s-master k8s-prometheus-grafana-master]# kubectl get pod -A | grep node-exporter
  kube-system node-exporter-kpdxh 0/1 ContainerCreating 0 28s
  [root@k8s-master k8s-prometheus-grafana-master]# kubectl get daemonset -A | grep exporter
  kube-system node-exporter 1 1 1 1 1 <none> 2m43s
  [root@k8s-master k8s-prometheus-grafana-master]# kubectl get service -A | grep exporter
  kube-system node-exporter NodePort 10.96.73.86 <none> 9100:31672/TCP 2m59s
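
The downloaded node-exporter.yaml is applied as-is above and its contents are not reproduced in this article. As a rough orientation, a minimal sketch of what such a manifest usually contains is shown below; only the object names, the kube-system namespace, and the 9100:31672 NodePort mapping come from the output above, while the labels, image, and hostNetwork setting are assumptions.

  # Minimal sketch of a node-exporter DaemonSet plus NodePort Service (assumed content)
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-exporter
    namespace: kube-system
  spec:
    selector:
      matchLabels:
        app: node-exporter              # assumed label
    template:
      metadata:
        labels:
          app: node-exporter
      spec:
        hostNetwork: true               # assumed: publish host metrics on the node's own IP
        containers:
          - name: node-exporter
            image: prom/node-exporter   # assumed image
            ports:
              - containerPort: 9100     # node-exporter's default metrics port
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: node-exporter
    namespace: kube-system
  spec:
    type: NodePort
    selector:
      app: node-exporter
    ports:
      - port: 9100
        targetPort: 9100
        nodePort: 31672                 # matches the 9100:31672/TCP mapping seen above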

2.2 Prometheus

  [root@k8s-master prometheus]# pwd
  /opt/k8s-prometheus-grafana-master/prometheus
  [root@k8s-master prometheus]# ls
  configmap.yaml prometheus.deploy.yml prometheus.svc.yml rbac-setup.yaml

Apply them in this order: rbac-setup.yaml, configmap.yaml, prometheus.deploy.yml, prometheus.svc.yml. There is no difference between the .yaml and .yml extensions, so don't worry about it.

  [root@k8s-master prometheus]# kubectl apply -f rbac-setup.yaml
  clusterrole.rbac.authorization.k8s.io/prometheus created
  serviceaccount/prometheus created
  clusterrolebinding.rbac.authorization.k8s.io/prometheus created
  [root@k8s-master prometheus]# kubectl apply -f configmap.yaml
  configmap/prometheus-config created
  [root@k8s-master prometheus]# kubectl apply -f prometheus.deploy.yml
  deployment.apps/prometheus created
  [root@k8s-master prometheus]# kubectl apply -f prometheus.svc.yml
  service/prometheus created
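
The scrape targets live in configmap.yaml, the prometheus-config ConfigMap created above; its exact contents are not shown in this article. A minimal sketch of the kind of configuration such a ConfigMap carries follows; the job names and discovery settings are assumptions, not the file's actual contents.

  # Sketch of a Prometheus scrape configuration stored in the prometheus-config ConfigMap (assumed content)
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: prometheus-config
    namespace: kube-system
  data:
    prometheus.yml: |
      global:
        scrape_interval: 15s            # how often targets are scraped
      scrape_configs:
        - job_name: kubernetes-nodes
          kubernetes_sd_configs:        # discover every node in the cluster
            - role: node
        - job_name: node-exporter
          static_configs:
            - targets: ['node-exporter.kube-system.svc:9100']   # assumed in-cluster address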

2.3 Grafana

  [root@k8s-master grafana]# pwd
  /opt/k8s-prometheus-grafana-master/grafana
  [root@k8s-master grafana]# ls
  grafana-deploy.yaml grafana-ing.yaml grafana-svc.yaml

Apply them in this order: grafana-deploy.yaml, grafana-svc.yaml, grafana-ing.yaml.

  [root@k8s-master grafana]# kubectl apply -f grafana-deploy.yaml
  deployment.apps/grafana-core created
  [root@k8s-master grafana]# kubectl apply -f grafana-svc.yaml
  service/grafana created
  [root@k8s-master grafana]# kubectl apply -f grafana-ing.yaml
  ingress.extensions/grafana created
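
For orientation, grafana-svc.yaml creates the NodePort Service named grafana that appears in the output a few steps below; a minimal sketch of such a Service is shown here. The selector label is a guess based on the grafana-core Deployment name, and the node port (31748 in the later output) may simply be auto-assigned rather than pinned in the file.

  # Sketch of the Grafana NodePort Service (assumed content)
  apiVersion: v1
  kind: Service
  metadata:
    name: grafana
    namespace: kube-system
  spec:
    type: NodePort
    selector:
      app: grafana-core          # assumed: matches the grafana-core Deployment's Pod label
    ports:
      - port: 3000               # Grafana's default HTTP port
        targetPort: 3000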

Check the three kinds of Pods (there may be more than one node-exporter Pod):

  [root@k8s-master grafana]# kubectl get pod -A | grep node-exporter
  kube-system node-exporter-kpdxh 1/1 Running 0 13m
  [root@k8s-master grafana]# kubectl get pod -A | grep prometheus
  kube-system prometheus-7486bf7f4b-xb4t8 1/1 Running 0 6m21s
  [root@k8s-master grafana]# kubectl get pod -A | grep grafana
  kube-system grafana-core-664b68875b-fhjvt 1/1 Running 0 2m18s

Check the Services; all three are of type NodePort:

  [root@k8s-master grafana]# kubectl get svc -A
  NAMESPACE     NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
  default       kubernetes      ClusterIP  10.96.0.1       <none>        443/TCP                  7h17m
  kube-system   grafana         NodePort   10.107.115.11   <none>        3000:31748/TCP           3m41s
  kube-system   kube-dns        ClusterIP  10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7h17m
  kube-system   node-exporter   NodePort   10.96.73.86     <none>        9100:31672/TCP           15m
  kube-system   prometheus      NodePort   10.111.178.83   <none>        9090:30003/TCP           7m57s

2.4 Access test

Test access with curl:

  [root@k8s-master grafana]# curl 127.0.0.1:31672
  <html lang="en">
  ......
  # metrics collected by node-exporter
  [root@k8s-master grafana]# curl 127.0.0.1:31672/metrics
  # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
  [root@k8s-master grafana]# curl 127.0.0.1:30003
  <a href="/graph">Found</a>.
  [root@k8s-master grafana]# curl 127.0.0.1:31748
  <a href="/login">Found</a>.

Open 11.0.1.12:31672/metrics in a browser to see the metrics collected by node-exporter.

Open 11.0.1.12:30003/graph to reach the Prometheus web UI.

Select "Status" -> "Targets" to check that the scrape targets are up.

3. Grafana initialization

Open 11.0.1.12:31748/login.

The default username and password are both admin.

3.1 Add the data source

Set the data source URL to 10.111.178.83:9090, the address of the Prometheus Service. Note that after a restart the Service's ClusterIP may change, leaving Grafana unable to reach Prometheus, so verify the data source address. The data source can be edited under Configuration -> Data Sources in the left menu. (A provisioning-style sketch that avoids this problem follows the command output below.)

  [root@k8s-master grafana]# kubectl get svc -A | grep prometheus
  kube-system prometheus NodePort 10.111.178.83 <none> 9090:30003/TCP 40m
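
This article configures the data source through the Grafana UI. As an optional sketch (not part of the original setup), Grafana can also provision the data source from a file; pointing it at the Service's DNS name rather than its ClusterIP keeps the URL stable even if the Service is recreated. The file name and mount path below are illustrative.

  # Sketch of a Grafana data source provisioning file, e.g. mounted under
  # /etc/grafana/provisioning/datasources/prometheus.yaml (illustrative path)
  apiVersion: 1
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: http://prometheus.kube-system.svc.cluster.local:9090   # stable DNS name of the Prometheus Service
      isDefault: true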

3.2 Import a dashboard template

Use dashboard template ID 315.

Select the prometheus data source.

Monitoring is now up and running.

4. Deploying with Helm

...... (to be updated)

References:

How it works: Node Exporter 简介_node_exporter_富士康质检员张全蛋的博客-CSDN博客
