
Building a Kubernetes Cluster from 0 to 1 with kubeadm

Contents

Overview

Pre-installation notes

Installing Docker

Installing kubeadm

Installing the Kubernetes cluster master node

Installing the kubeadm/kubectl/kubelet components

Installing the Kubernetes master node

Installing the CNI network plugin

Deploying the cluster worker nodes

Installing the dashboard

Summary

References


Overview

kubeadm is an easy-to-use tool for deploying a Kubernetes cluster: two commands, kubeadm init and kubeadm join, are enough to bring a cluster up. Its approach is to run the kubelet directly on the host machine while deploying the other components in containers.

Machines used in this article:

Two VMware VMs running CentOS 7, each with 2 CPU cores and 4 GB of RAM.

| hostname | Memory | CPU     | OS       | Hypervisor |
|----------|--------|---------|----------|------------|
| master   | 4 GB   | 2 cores | CentOS 7 | VMware     |
| worker1  | 4 GB   | 2 cores | CentOS 7 | VMware     |

Pre-installation notes

1. When installing the OS, be sure to give each VM a unique hostname; otherwise both VMs end up with the same default hostname, and a worker that joins the cluster will not show up. Changing the hostname after Kubernetes is installed causes all kinds of problems; I could not work around them and ended up reinstalling the OS. The hostname is set on the Network screen of the installer, as in the screenshot below.

2. The VM really must have at least 2 CPU cores, or the installation fails with the error below:

3. On the installer's Network screen, toggle the connection on for ens33 (see the screenshot below). Otherwise ens33 has no IP address after installation, and you have to set ONBOOT=yes in /etc/sysconfig/network-scripts/ifcfg-ens33 and then run service network restart.

4. When installing the kubelet, kubeadm, and kubectl components, pin the version: the Aliyun mirror does not always carry the latest release. This article uses v1.17.3; details follow below.
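Tip 3 can also be scripted. A minimal sketch, assuming the usual CentOS 7 ifcfg path and the interface name ens33; enable_onboot is a hypothetical helper (not from the article) that flips the ONBOOT flag in a given ifcfg file so you can try the edit on a copy first:

```shell
# Hypothetical helper: set ONBOOT=yes in an ifcfg file so the
# interface is brought up automatically at boot.
enable_onboot() {
  sed -i 's/^ONBOOT=no$/ONBOOT=yes/' "$1"
}

# On the VM you would run (as root), assuming the interface is ens33:
#   enable_onboot /etc/sysconfig/network-scripts/ifcfg-ens33
#   service network restart
```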

 

Installing Docker

Here we install version 17.03.2 (the Aliyun mirror also carries it). I chose an offline install: download these two files from https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

  1. docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
  2. docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

Copy them to the VM, then install either with rpm:

  rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
  rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

or with yum, which also resolves any dependencies:

  yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
  yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

The process looks like this (note the SELinux scriptlet warnings; the install proceeded anyway):

[root@worker1 docker]# rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
warning: docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:docker-ce-selinux-17.03.2.ce-1.el################################# [100%]
Re-declaration of type docker_t
Failed to create node
Bad type declaration at /etc/selinux/targeted/tmp/modules/400/docker/cil:1
/usr/sbin/semodule:  Failed!
restorecon:  lstat(/var/lib/docker) failed:  No such file or directory
warning: %post(docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch) scriptlet failed, exit status 255
[root@worker1 docker]# rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
warning: docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:docker-ce-17.03.2.ce-1.el7.centos################################# [100%]
[root@worker1 docker]# docker -v
Docker version 17.03.2-ce, build f5ec1e2

Start the Docker service:

systemctl start docker.service
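kubeadm will later warn that Docker's default cgroupfs driver is not the recommended one ("detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\""). A hedged, optional fix, following the Kubernetes container-runtime docs, is to set the driver in /etc/docker/daemon.json and restart Docker; the cluster in this article was built with cgroupfs and the warning left in place:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After writing this file, run systemctl daemon-reload && systemctl restart docker.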

Installing kubeadm

There are three ways to install it: from binaries, directly from the upstream Google-hosted repos (which requires getting around the firewall), or from the Aliyun mirror. The first two are not covered here; you can search for them. The Aliyun mirror is the simplest. Configure the repo file as follows:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then install with:

yum install -y kubelet kubeadm kubectl

Installing the Kubernetes cluster master node

Installing the kubeadm/kubectl/kubelet components

With kubeadm installed, we can use it to deploy the cluster, starting with the master node:

kubeadm init

Running kubeadm init reports the following error:

The error message points at the problem; run these two commands to fix it:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a

Check again whether swap is off; it now shows 0.
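Note that both the echo and swapoff only hold until the next reboot. A sketch of making the settings permanent; comment_swap is a hypothetical helper (not from the article) that takes a file argument so the edit can be tried on a copy of /etc/fstab first:

```shell
# Hypothetical helper: comment out every fstab line that mounts a
# swap device, so swap stays off after a reboot.
comment_swap() {
  sed -i '/[[:space:]]swap[[:space:]]/s/^[^#]/#&/' "$1"
}

# On each node you would then run (as root):
#   echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
#   sysctl --system
#   comment_swap /etc/fstab && swapoff -a
```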

 

Running init again fails differently: the Kubernetes component images, hosted outside China, cannot be pulled. Pull them from the Aliyun registry instead:

kubeadm init --image-repository registry.aliyuncs.com/google_containers

This fails yet again: version v1.18.3 cannot be found in the mirror, so we drop down one release to v1.17.3. That means reinstalling the kubeadm components. First remove the old ones:

yum remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64

then install v1.17.3:

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

The successful install log looks like this:

Installing the Kubernetes master node

kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

This time it defaults to v1.17.6, which again cannot be found, so pin the version to v1.17.3:

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.3 --pod-network-cidr=192.168.59.0/16

This run goes through cleanly; the tail of the log:

[root@master docker]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.3 --pod-network-cidr=192.168.59.0/16
W0528 09:08:40.669505 24739 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0528 09:08:40.669664 24739 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "master" could not be reached
[WARNING Hostname]: hostname "master": lookup master on 192.168.59.2:53: server misbehaving
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.59.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.59.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.59.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0528 09:08:55.439922 24739 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0528 09:08:55.442026 24739 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 217.504294 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kw377c.z478de8wq0i41ksq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq \
--discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f

The install succeeded. Let's check the pod status:

[root@localhost docker]# kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The following commands fix this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The reason for these three commands: access to a Kubernetes cluster is authenticated and encrypted by default. They copy the security configuration generated during deployment into the current user's .kube directory, which is where kubectl looks for its credentials by default.

The same commands also fix this error, if you hit it: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Check the pods again:

[root@localhost docker]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-c9vg5                         0/1     Pending   0          19h
kube-system   coredns-9d85f5447-w4w9n                         0/1     Pending   0          19h
kube-system   etcd-localhost.localdomain                      1/1     Running   0          19h
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          19h
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   1          19h
kube-system   kube-proxy-zvq6z                                1/1     Running   0          19h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   1          19h

Now look at the node status:

[root@localhost docker]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   20h   v1.17.3

 

Installing the CNI network plugin

As seen above, the master node is not ready yet because no network plugin has been deployed. Kubernetes networking is built on CNI implementations such as Flannel, Calico, Canal, and Romana; here we use flannel. The command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The yaml file I used:

https://github.com/jinjunzhu/kubernete/blob/master/kube-flannel.yml

Note: this step can be slow depending on your network; be patient.
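One thing worth double-checking: the --pod-network-cidr passed to kubeadm init should match the Network value in the net-conf.json ConfigMap inside kube-flannel.yml, or pods receive addresses that flannel does not route. The stock manifest defaults to 10.244.0.0/16 rather than the CIDR used above; presumably the yaml linked here was adjusted accordingly. The relevant fragment of the stock manifest looks like this:

```yaml
# Excerpt of the net-conf.json ConfigMap in the stock kube-flannel.yml.
# "Network" must equal the --pod-network-cidr given to kubeadm init
# (10.244.0.0/16 is the flannel default, not the value used above).
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```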

Check the pods once more:

[root@localhost k8s]# kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-lcs5s                         0/1     Running   0          5m31s
coredns-9d85f5447-wllzv                         0/1     Running   0          5m31s
etcd-localhost.localdomain                      1/1     Running   0          5m40s
kube-apiserver-localhost.localdomain            1/1     Running   0          5m40s
kube-controller-manager-localhost.localdomain   1/1     Running   0          5m40s
kube-flannel-ds-amd64-9vv4m                     1/1     Running   0          38s
kube-proxy-qv6z5                                1/1     Running   0          5m31s
kube-scheduler-localhost.localdomain            1/1     Running   0          5m40s

And the node status:

[root@localhost flannel]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3m20s   v1.17.3

The master node is now up and running.

Note: if you hit network failures during deployment that you cannot resolve any other way, run kubeadm reset and repeat the init steps.

 

Deploying the cluster worker nodes

Deploying a worker node is simpler than deploying the master: it does not run the kube-apiserver, kube-scheduler, or kube-controller-manager components.

Run the same two commands on the worker machine as well:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a

First install docker and kubeadm exactly as in the previous sections, then run the join command printed in the master deployment output:

kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f

Note: turn off the firewall on the worker:

systemctl stop firewalld
service iptables stop
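If some time has passed since kubeadm init, the bootstrap token may have expired (tokens are valid for 24 hours by default). On the master, kubeadm token create --print-join-command prints a fresh join command. The CA certificate hash can also be recomputed by hand; a sketch, wrapping the openssl pipeline from the kubeadm documentation in a small helper:

```shell
# Recompute the sha256 value used for --discovery-token-ca-cert-hash
# from a CA certificate (pipeline as given in the kubeadm docs).
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master you would run:
#   kubeadm token create --print-join-command   # fresh token + hash
#   ca_cert_hash /etc/kubernetes/pki/ca.crt     # hash only
```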

The join command fails:

[root@localhost pki]# kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f
W0527 13:15:35.059010 18952 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
	timed out waiting for the condition
This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
error execution phase kubelet-start: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

The log shows the kubelet failing to start. The cause: I had previously set up minikube on this VM. I tried several ways to remove the old environment, none of which worked, so the only option was to reinstall the OS. After reinstalling the system, Docker, and kubeadm, I ran kubeadm join again:

[root@worker1 ~]# kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f
W0528 21:57:11.003270 16810 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "worker1" could not be reached
[WARNING Hostname]: hostname "worker1": lookup worker1 on 192.168.59.2:53: server misbehaving
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Running kubectl get nodes on the master now shows:

[root@master ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   12h   v1.17.3
worker1   Ready    <none>   29s   v1.17.3

 

Installing the dashboard

In theory installing the dashboard is very simple; in practice it is full of pitfalls. The command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml

After installing, the dashboard pod kept failing to start, as shown below. View its logs with:

[root@master kubernetes]# kubectl logs kubernetes-dashboard-64999dbccd-gmk5x --namespace=kubernetes-dashboard
Error from server: Get https://192.168.59.136:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-64999dbccd-gmk5x/kubernetes-dashboard: dial tcp 192.168.59.136:10250: connect: no route to host

According to several posts, pods are not scheduled on the master by default; to run the dashboard on the master, the following 3 lines need to be commented out:

After that, the next error appears:

initializing csrf token from kubernetes-dashboard-csrf secret panic: Get https://10.2.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.244.0.1:443: i/o timeout

Searching around, the cause is that the dashboard Service is only reachable from inside the cluster; changing the Service type to NodePort exposes it externally. I fixed it following steps found online; the resulting yaml file is here:

https://github.com/jinjunzhu/kubernete/blob/master/recommended.yaml

With the yaml fixed, kubectl apply -f recommended.yaml succeeds.
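For reference, the Service change boils down to adding type: NodePort (and optionally a fixed nodePort) to the kubernetes-dashboard Service. A sketch of the relevant section; the nodePort value 30001 is an arbitrary example, not taken from the article:

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # added: expose the Service outside the cluster
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # example port in the 30000-32767 NodePort range
  selector:
    k8s-app: kubernetes-dashboard
```

The dashboard is then reachable at https://<node-ip>:30001.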

The dashboard login page, shown below, asks for a token.

Create a user with:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Part of the output of the last command:

Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkF5bjJEM3g0cjJCS282TlNCcjU0aVdTRE4wT0JqaE05LUxuODlTRFVkR1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLWs2ejd2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2OWMyN2NlYy04MDY5LTRkOWItOTdkNi1lZjVjMzk5NGI1Y2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.BFORGRlEK7i7kPvGnDQDZKr5ow6feuWWymhor_BPecd1YUUMXnwDy9JvPPUizHMQnRxmA4HO-WlRcAcYXOFsWBQ9fz3KqLQuSJEDICz128XyA5bUEesS_MKqGTh7p4drc2OuduW7EHm2_UEs8g9SUeogTrp9JksQlEXUoln5TnactpzMr2J6w3hPKO85z3eUv_14f240kfYgN0jR6Q9owlDEcG27onNlDHvT2hGNs-9IJaBFSuPobf7zuJLY4GR2qkLGclszgFKHGsl8NObrS2c5_Ep7iQBBfw4STTCzuW5tG9gNKWzwXKwAnJTM2wu6oePBJ34df6rGAjzjXNlvHg
Name: coredns-token-z26jv
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: coredns
kubernetes.io/service-account.uid: ddd1ae67-1045-4674-8277-11f4ccee2e65
Type: kubernetes.io/service-account-token

Paste the token into the login box shown above and the login succeeds. The cluster overview looks like this:

Summary

Deploying a Kubernetes cluster still has a real barrier to entry. Learn how Kubernetes works and what each component does before you start. When an image cannot be pulled, reach for the Aliyun mirror first; it usually has what you need. And when you hit problems, read the issues on the official repositories; many Chinese blog posts do not explain things thoroughly.

References

  1. https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11xCHTc0
  2. https://www.kubernetes.org.cn/7189.html
  3. https://github.com/kubernetes/kubernetes/issues/54542
  4. https://kubernetes.io/docs/setup/production-environment/container-runtimes/
  5. https://github.com/kubernetes/dashboard/blob/master/docs/user/installation.md
  6. https://github.com/kubernetes/dashboard/blob/master/README.md
  7. http://www.mamicode.com/info-detail-2961782.html

 
