
Installing a Highly Available Kubernetes (K8s) 1.20.x Cluster


Table of Contents

  • 1. Installation Notes

  • 2. Node Planning

  • 3. Basic Configuration

  • 4. Kernel Configuration

  • 5. Basic Component Installation

  • 6. HA Component Installation

  • 7. Cluster Initialization

  • 8. HA Masters

  • 9. Adding Worker Nodes

  • 10. Calico Installation

  • 11. Metrics Server Deployment

  • 12. Dashboard Deployment

1. Installation Notes

Although Kubernetes 1.20 announced that dockershim will no longer be maintained after version 1.23, which means Kubernetes will no longer support Docker directly, there is no need to panic. First, Docker can still be used up to version 1.23; second, dockershim will almost certainly be taken over and maintained by others, so Docker will remain usable; third, images built with Docker can still run on other container runtimes.

This installation uses the kubeadm tool to deploy Kubernetes 1.20+ on CentOS 7.9, with 3 Master nodes and 2 Worker nodes. High availability is provided by HAProxy + KeepAlived.


2. Node Planning

Hostname             IP Address             Role                  Spec
k8s-master01 ~ 03    192.168.0.201 ~ 203    Master/Worker node    2C 2G, 40G disk
k8s-node01 ~ 02      192.168.0.204 ~ 205    Worker node           2C 2G, 40G disk
k8s-master-lb        192.168.0.236          VIP                   VIP only, does not occupy a machine

Item              Value
OS version        CentOS 7.9
Docker version    19.03.x
K8s version       1.20.x
Pod CIDR          172.168.0.0/16
Service CIDR      10.96.0.0/12

3. Basic Configuration

Configure /etc/hosts on all nodes:

[root@k8s-master01 ~]# cat /etc/hosts
192.168.0.201 k8s-master01
192.168.0.202 k8s-master02
192.168.0.203 k8s-master03
192.168.0.236 k8s-master-lb # for a non-HA cluster, use Master01's IP here
192.168.0.204 k8s-node01
192.168.0.205 k8s-node02

Configure the yum repositories:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the required tools:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y


Disable the firewall, selinux, dnsmasq, and swap on all nodes. Configure the servers as follows:

systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition:

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Install ntpdate:

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

Synchronize the time on all nodes. Time synchronization is configured as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

Add it to crontab:

*/5 * * * * ntpdate time2.aliyun.com
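To add this entry non-interactively, something like the following can be used (a minimal sketch that appends to root's crontab):

(crontab -l 2>/dev/null; echo "*/5 * * * * ntpdate time2.aliyun.com") | crontab -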

Configure limits on all nodes:

ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Set up passwordless SSH from the Master01 node to the other nodes:

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

Download all of the installation source files:

cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git

Upgrade the system on all nodes and reboot:

yum update -y  && reboot


4. Kernel Configuration

Install ipvsadm on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the ipvs modules on all nodes:

vim /etc/modules-load.d/ipvs.conf
# add the following content
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Load the kernel modules:

systemctl enable --now systemd-modules-load.service
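Whether the ipvs and conntrack modules are actually loaded can be verified afterwards, for example:

lsmod | grep -e ip_vs -e nf_conntrack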

Enable the kernel parameters required by a Kubernetes cluster; configure them on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

5. Basic Component Installation

Install Docker CE 19.03 on all nodes:

yum install docker-ce-19.03.* -y

Enable Docker to start on boot on all nodes:

systemctl daemon-reload && systemctl enable --now docker

Install the Kubernetes components. First list the kubeadm versions available in the repository:

yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest kubeadm on all nodes:

yum install kubeadm -y
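If the repository already ships a release newer than 1.20 and this guide's version is preferred, the packages can be pinned instead (an alternative to the command above, not part of the original steps):

yum install kubeadm-1.20* kubelet-1.20* kubectl-1.20* -y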

The default pause image is pulled from the gcr.io registry, which may be unreachable from mainland China, so configure the kubelet to use Alibaba Cloud's pause image instead:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

Enable the kubelet to start on boot:

systemctl daemon-reload
systemctl enable --now kubelet


6. HA Component Installation

Note: if this is not an HA cluster, or you are installing on a public cloud, HAProxy and KeepAlived do not need to be installed.
Install HAProxy and KeepAlived on all Master nodes via yum:

yum install keepalived haproxy -y

Configure HAProxy on all Master nodes (see the HAProxy documentation for details; the HAProxy configuration is identical on every Master node):

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master01 192.168.0.201:6443 check
    server k8s-master02 192.168.0.202:6443 check
    server k8s-master03 192.168.0.203:6443 check

Configure KeepAlived on all Master nodes. The configuration differs between nodes, so keep them distinct; in particular note each node's IP (mcast_src_ip) and network interface (the interface parameter).
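The interface name and IP can be confirmed on each node before editing (ens192 and the 192.168.0.x addresses below are the values used in this guide):

ip addr show   # note the NIC name (e.g. ens192) and its IP, used for interface and mcast_src_ip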
Configuration on the Master01 node:

[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    mcast_src_ip 192.168.0.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
#    track_script {
#        chk_apiserver
#    }
}

Configuration on the Master02 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.0.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
#    track_script {
#        chk_apiserver
#    }
}

Configuration on the Master03 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.0.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
#    track_script {
#        chk_apiserver
#    }
}

Note that the health check above is commented out; enable it only after the cluster has been fully built (a sketch of re-enabling it follows the snippet below):

# track_script {
# chk_apiserver
# }
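Once the cluster is healthy, a minimal way to turn the check back on is to uncomment the block in /etc/keepalived/keepalived.conf on every Master node and restart keepalived:

# in /etc/keepalived/keepalived.conf on each Master node:
    track_script {
        chk_apiserver
    }
# then apply the change:
systemctl restart keepalived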

Create the KeepAlived health check script:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

chmod +x /etc/keepalived/check_apiserver.sh

Start HAProxy and KeepAlived:

[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived

Test the VIP:

[root@k8s-master01 ~]# ping 192.168.0.236 -c 4
PING 192.168.0.236 (192.168.0.236) 56(84) bytes of data.
64 bytes from 192.168.0.236: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.0.236: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.0.236: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.0.236: icmp_seq=4 ttl=64 time=0.063 ms
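The load-balanced apiserver port can be probed as well; HAProxy should accept a connection on 16443 even though the backend apiservers are not running yet (telnet was installed with the required tools earlier):

telnet 192.168.0.236 16443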


7. Cluster Initialization

Create the new.yaml configuration file on the Master01 node as follows:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Note: if this is not an HA cluster, change 192.168.0.236:16443 to Master01's address and change 16443 to the apiserver port (6443 by default). Also change v1.20.0 to the kubeadm version actually installed on your servers (check with kubeadm version).
Copy the new.yaml file to the other Master nodes; a copy loop from Master01 is sketched below.
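A minimal loop, using the host names defined in /etc/hosts:

for i in k8s-master02 k8s-master03; do scp /root/new.yaml $i:/root/; done

All Master nodes can then pre-pull the images, which saves time during initialization: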

kubeadm config images pull --config /root/new.yaml 

Enable the kubelet to start on boot on all nodes:

systemctl enable --now kubelet   # if the kubelet fails to start at this point, ignore it; it will come up once initialization succeeds

Initialize the Master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; the other Master nodes then simply join Master01:

kubeadm init --config /root/new.yaml  --upload-certs

After successful initialization, a token is generated for other nodes to use when joining, so record the token (and the join commands) printed at the end of the output:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
    --control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908
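The bootstrap token above expires after 24 hours (the ttl set in new.yaml). If it has expired by the time a node needs to join, a fresh worker join command can be generated on Master01:

kubeadm token create --print-join-command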

Configure the environment variable on the Master01 node for accessing the Kubernetes cluster:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

Check the node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   74s   v1.20.0

With this kubeadm-based installation, all of the system components run as containers in the kube-system namespace. The Pod status can be checked now:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE
coredns-777d78ff6f-kstsz                0/1     Pending   0          14m   <none>          <none>
coredns-777d78ff6f-rlfr5                0/1     Pending   0          14m   <none>          <none>
etcd-k8s-master01                       1/1     Running   0          14m   192.168.0.201   k8s-master01
kube-apiserver-k8s-master01             1/1     Running   0          13m   192.168.0.201   k8s-master01
kube-controller-manager-k8s-master01    1/1     Running   0          13m   192.168.0.201   k8s-master01
kube-proxy-8d4qc                        1/1     Running   0          14m   192.168.0.201   k8s-master01
kube-scheduler-k8s-master01             1/1     Running   0          13m   192.168.0.201   k8s-master01

8. HA Masters

Initialize the other Master nodes and join them to the cluster:

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
    --control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa

9. Adding Worker Nodes

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908

Check the cluster status (the nodes stay NotReady until the Calico CNI plugin is installed in the next section):

[root@k8s-master01]# kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m53s   v1.20.0
k8s-master02   NotReady   control-plane,master   2m25s   v1.20.0
k8s-master03   NotReady   control-plane,master   31s     v1.20.0
k8s-node01     NotReady   <none>                 32s     v1.20.0
k8s-node02     NotReady   <none>                 88s     v1.20.0

10. Calico Installation


The following steps are performed only on master01:

cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/

Modify the following locations in calico-etcd.yaml:

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

Create Calico:

kubectl apply -f calico-etcd.yaml
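Once the manifest is applied, the calico-node Pods should reach Running and the nodes switch to Ready; a quick check (the k8s-app=calico-node label is assumed from the standard Calico manifest):

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get node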

11. Metrics Server Deployment

In newer versions of Kubernetes, system resource metrics are collected by Metrics Server, which can report memory, disk, CPU, and network usage for nodes and Pods.
Copy front-proxy-ca.crt from the Master01 node to all worker nodes (a loop covering both nodes is sketched after the commands below):

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node(repeat for the remaining nodes):/etc/kubernetes/pki/front-proxy-ca.crt
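Equivalently, a loop over the worker nodes from the node plan:

for i in k8s-node01 k8s-node02; do
    scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt
done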

Install Metrics Server:

cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/
[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

After all Pods in the kube-system namespace have started, check the status:

[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   109m         2%     1296Mi          33%
k8s-master02   99m          2%     1124Mi          29%
k8s-master03   104m         2%     1082Mi          28%
k8s-node01     55m          1%     761Mi           19%
k8s-node02     53m          1%     663Mi           17%

12. Dashboard Deployment

cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Add the following startup parameters to the Google Chrome launcher to work around the certificate error that otherwise blocks access to the Dashboard:

--test-type --ignore-certificate-errors


Change the Dashboard Service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard


In the editor, change type: ClusterIP to type: NodePort (skip this step if it is already NodePort).
Check the assigned port number:
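For example (the NodePort shown will differ per cluster):

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard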

Using your own port number, the Dashboard can be reached through the IP of any host running kube-proxy, or through the VIP, plus that port.
Access the Dashboard at https://192.168.0.236:18282 (replace 18282 with your own port) and choose Token as the login method.

Retrieve the token value:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023
Type:         kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

Paste the token into the Token field and click Sign in to access the Dashboard.
