
Installing and Using the Kubernetes Dashboard

Contents

I. Download the dashboard YAML file

II. Replace the images in the dashboard YAML file

III. Modify the dashboard YAML file contents

IV. Create the dashboard


Normally, everything in a Kubernetes cluster is done with the kubectl command-line tool.

To provide a richer user experience, Kubernetes also offers Dashboard, a web-based UI.

Dashboard is the official front-end component for Kubernetes and makes common operations easier.

First, check which version the cluster is running.
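For example, either of the following shows the cluster version:

```bash
# Version reported by kubectl and the API server
kubectl version --short
# Kubelet version on each node
kubectl get nodes -o wide
```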

I. Download the dashboard YAML file

1. Go to the project's GitHub releases page

Each dashboard release supports a specific range of Kubernetes versions, so pick the release that matches your cluster. My cluster runs Kubernetes 1.24, and dashboard v2.6.1 supports 1.24.

2. Download the YAML file; the release page provides the link:

https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml

```bash
# Create a directory for the dashboard files
root@k8s-deploy:~/yaml/20220724# mkdir dashboard-v2.6.1
root@k8s-deploy:~/yaml/20220724# cd dashboard-v2.6.1
# Download the YAML file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
# Rename it to include the version
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# mv recommended.yaml dashboard-v2.6.1.yaml
```

II. Replace the images in the dashboard YAML file

1. The YAML file references two images hosted on the public internet; here I pull both and push them to a local Harbor registry.

2. Pull the images

```bash
# Pull the two images referenced in the YAML file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker pull kubernetesui/dashboard:v2.6.1
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker pull kubernetesui/metrics-scraper:v1.0.8
# Re-tag them for the local Harbor registry
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker tag kubernetesui/dashboard:v2.6.1 harbor.magedu.net/baseimages/dashboard:v2.6.1
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker tag kubernetesui/metrics-scraper:v1.0.8 harbor.magedu.net/baseimages/metrics-scraper:v1.0.8
# Push them to the Harbor server
root@k8s-deploy:~/yaml/20220724/dashboard-v2.5.1# docker push harbor.magedu.net/baseimages/dashboard:v2.6.1
root@k8s-deploy:~/yaml/20220724/dashboard-v2.5.1# docker push harbor.magedu.net/baseimages/metrics-scraper:v1.0.8
```
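Pushing assumes this host is already authenticated against the Harbor registry; if it is not, log in first:

```bash
# Log in to the local Harbor registry (credentials are site-specific)
docker login harbor.magedu.net
```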

3. Edit the YAML file and point both images at the local Harbor copies

Open the file and change the two `image:` lines:

```bash
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# vim dashboard-v2.6.1.yaml
```

```yaml
# in the kubernetes-dashboard Deployment
      containers:
        - name: kubernetes-dashboard
          image: harbor.magedu.net/baseimages/dashboard:v2.6.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
# in the dashboard-metrics-scraper Deployment
      containers:
        - name: dashboard-metrics-scraper
          image: harbor.magedu.net/baseimages/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
```
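If you prefer not to edit by hand, the same two replacements can be scripted; this is just an optional sketch based on the image names above:

```bash
# Swap both images in place instead of editing with vim
sed -i \
  -e 's#image: kubernetesui/dashboard:v2.6.1#image: harbor.magedu.net/baseimages/dashboard:v2.6.1#' \
  -e 's#image: kubernetesui/metrics-scraper:v1.0.8#image: harbor.magedu.net/baseimages/metrics-scraper:v1.0.8#' \
  dashboard-v2.6.1.yaml
```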

III. Modify the dashboard YAML file contents

By default the dashboard is only reachable from inside the cluster, so a port has to be exposed for clients outside the cluster.

Add two lines to the Service resource, as shown in the sketch below.

The Service listens on port 443 and forwards requests to port 8443 on the pod. Manually set the Service type to NodePort and add a nodePort of 30000.

The port 30000 must fall within the node port range configured for the cluster (the range defined in the hosts file used to deploy the cluster, i.e. the API server's service-node-port-range).
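Roughly, the Service section of the file ends up looking like this (the two added lines are `type: NodePort` and `nodePort: 30000`; the rest comes from the original manifest):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort               # added: expose the Service on every node
  ports:
    - port: 443                # Service port
      targetPort: 8443         # the pod's HTTPS port
      nodePort: 30000          # added: must be inside the cluster's node port range
  selector:
    k8s-app: kubernetes-dashboard
```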

IV. Create the dashboard

1. Create the resources

```bash
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl apply -f dashboard-v2.6.1.yaml
```
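Before opening the browser it is worth confirming that the pods are running and that the Service really got the expected NodePort:

```bash
# The dashboard pods should be Running, and the Service should show 443:30000/TCP
kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard
```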

2. Browse to port 30000 on any node. The URL must use https, because the dashboard serves over TLS with its own certificate.

The login page asks for a token.

The dashboard only provides the web UI; it does not create a user to log in with, so the user has to be created manually.

3. Create a user for access

```bash
# Create the resource file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.5.1# vim admin-user.yaml
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user                      # the user is created in the namespace below
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding                # bind the user to a role
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io   # API group of the role
  kind: ClusterRole                     # grant admin-user the ClusterRole below (cluster administrator)
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user                    # this ServiceAccount is bound to cluster-admin above
    namespace: kubernetes-dashboard
```

```bash
# Apply the resources
root@k8s-deploy:~/yaml/20220724/dashboard-v2.5.1# kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
```

```bash
# Create the Secret file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.5.1# vim admin-secret.yaml
```

```yaml
apiVersion: v1
kind: Secret                          # the resource type is Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin-user          # name of the Secret
  namespace: kubernetes-dashboard     # created in this namespace
  annotations:
    # the annotation says which ServiceAccount this Secret is bound to: admin-user.
    # In other words, a Secret named dashboard-admin-user is created for admin-user,
    # and it will contain the token needed to log in.
    kubernetes.io/service-account.name: "admin-user"
```

```bash
# Apply the resource
root@k8s-deploy:~/yaml/20220724/dashboard-v2.5.1# kubectl apply -f admin-secret.yaml
secret/dashboard-admin-user created
```
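A side note: the Secret has to be created by hand because, starting with Kubernetes 1.24, token Secrets are no longer generated automatically for ServiceAccounts. On a 1.24+ cluster an alternative is to skip the Secret and request a short-lived token directly:

```bash
# Issue a short-lived token for the admin-user ServiceAccount (kubectl 1.24+)
kubectl -n kubernetes-dashboard create token admin-user
```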

4. Extract the token

```bash
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl get secrets -A | grep admin
kubernetes-dashboard   dashboard-admin-user   kubernetes.io/service-account-token   3      17s
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl describe secrets dashboard-admin-user -n kubernetes-dashboard
Name:         dashboard-admin-user
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 52419154-cc5f-4276-9b1a-2c76eb08f2a9

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImIxX29tV0s1MEZyT216ZDhiN0lGNGx3VUQxQ1ltQ3ZaWTZmRm1zQkJMZHMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTI0MTkxNTQtY2M1Zi00Mjc2LTliMWEtMmM3NmViMDhmMmE5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.DX-sID2NzIjIzCMgMb9ugcC-7icaSiLOGZyWp1-PiY_W4oKaphuHhSspEH1M98agOrP-NIrzuKko_GCqxyEVmrJ6Mws7YAGE80_RQpxAyERMA4M0qQv8JPw8U3IJMmvw34xnyJLYBiSaLv4RVIG2IqPu635mEIZNmkZ7r5Cs0DhOxCxnK086QNj1zMqdu7p-NEmYGZedS1TAw7rVW8gBZbgvzViO8jMAZWYf2arN77RbNOPbLTyzCWKc8qwL2fcOpkwSiGCxKzpFV4cnwb4n8RCgtxgi5B3q5OwOyQC_SfCFOzr_RHqq65voZ0SS2buMk9SwC-q_k-dMlxe1dqYhjw
ca.crt:     1302 bytes
```

Copy the token out and you can log in to the page.
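To grab just the token without the surrounding describe output, a one-liner like this also works:

```bash
# Print only the decoded login token for the dashboard-admin-user Secret
kubectl -n kubernetes-dashboard get secret dashboard-admin-user \
  -o jsonpath='{.data.token}' | base64 -d && echo
```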

A problem I ran into:

After logging in, the dashboard showed no resources at all.

Checking the pods, the logs contained errors.

```bash
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl logs -f dashboard-metrics-scraper-67969bbbb6-hprg5 -n kubernetes-dashboard
W1121 06:50:20.910976       1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"level":"info","msg":"Kubernetes host: https://10.100.0.1:443","time":"2022-11-21T06:50:20Z"}
{"level":"info","msg":"Namespace(s): []","time":"2022-11-21T06:50:20Z"}
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-11-21T06:57:20Z"}
10.200.137.64 - - [21/Nov/2022:06:57:21 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:57:29 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:57:39 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:57:49 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
10.200.137.64 - - [21/Nov/2022:06:57:51 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:57:59 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:09 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:19 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-11-21T06:58:20Z"}
10.200.137.64 - - [21/Nov/2022:06:58:21 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:58:29 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:39 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:49 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
10.200.137.64 - - [21/Nov/2022:06:58:51 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:58:59 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:59:09 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:59:19 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
```

```bash
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl logs -f kubernetes-dashboard-557cd5b7d6-qcmc9 -n kubernetes-dashboard
2022/11/21 06:50:20 Starting overwatch
2022/11/21 06:50:20 Using namespace: kubernetes-dashboard
2022/11/21 06:50:20 Using in-cluster config to connect to apiserver
2022/11/21 06:50:20 Using secret token for csrf signing
2022/11/21 06:50:20 Initializing csrf token from kubernetes-dashboard-csrf secret
2022/11/21 06:50:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2022/11/21 06:50:20 Successful initial request to the apiserver, version: v1.24.3
2022/11/21 06:50:20 Generating JWE encryption key
2022/11/21 06:50:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2022/11/21 06:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2022/11/21 06:50:21 Initializing JWE encryption key from synchronized object
2022/11/21 06:50:21 Creating in-cluster Sidecar client
2022/11/21 06:50:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2022/11/21 06:50:21 Auto-generating certificates
2022/11/21 06:50:21 Successfully created certificates
2022/11/21 06:50:21 Serving securely on HTTPS port: 8443
2022/11/21 06:50:51 Successful request to sidecar
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:30606: remote error: tls: bad certificate
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:28961: remote error: tls: bad certificate
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:9274: remote error: tls: bad certificate
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:29868: remote error: tls: bad certificate
```
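The recurring `nodes.metrics.k8s.io` error in the scraper log means the metrics API is not being served, which usually indicates that metrics-server is not deployed in the cluster. A quick way to check, assuming metrics-server would normally run in kube-system:

```bash
# Is the metrics API registered and available?
kubectl get apiservice v1beta1.metrics.k8s.io
# Is a metrics-server pod deployed at all?
kubectl get pods -n kube-system | grep metrics-server
# If metrics work, this prints per-node CPU/memory usage
kubectl top nodes
```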
