
Kubernetes (k8s) High Availability: Introduction and Installation

I. Introduction

Kubernetes was created by Google in 2014 as the open-source descendant of Borg, the large-scale container management system Google has run internally for more than a decade. It is a container cluster management system: an open-source platform for managing containerized applications across multiple hosts in a cloud environment. Kubernetes aims to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

A core feature of Kubernetes is that it manages containers autonomously to keep them in the state the user declared. For example, if a user wants Apache to run continuously, the user does not need to worry about how: Kubernetes monitors the containers and restarts or recreates them as needed so that Apache keeps serving. An administrator can load a microservice and let the scheduler find a suitable place for it. Kubernetes also invests in tooling and usability (for example, canary deployments) so that users can deploy their applications conveniently.

Today Kubernetes focuses on continuously running services (such as web servers and caches) and cloud-native applications (such as NoSQL stores); support for other workloads found in production clouds, such as batch jobs, workflows, and traditional databases, is planned.

What Kubernetes provides: fast application deployment, fast scaling, seamless rollout of new application features, and resource savings through better use of hardware.
Kubernetes characteristics:
Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud deployments
Extensible: modular, pluggable, hookable, composable
Automated: automatic deployment, automatic restart, automatic replication, automatic scaling

Kubernetes architecture

A Kubernetes cluster consists of the kubelet node agent and the Master components (APIs, scheduler, etc.), all built on top of a distributed storage system. The architecture diagram below shows this layout.

Kubernetes nodes

In the architecture diagram, services are split into those that run on worker nodes and those that form the cluster-level control plane.
Kubernetes nodes run the services required to host application containers, and these services are controlled by the Master.
Every node also runs Docker, which takes care of downloading images and running containers.

Kubernetes is made up of the following core components:

etcd: stores the state of the entire cluster;
apiserver: the single entry point for resource operations, providing authentication, authorization, access control, API registration, and discovery;
controller manager: maintains cluster state, handling fault detection, automatic scaling, rolling updates, and so on;
scheduler: schedules resources, placing Pods onto the appropriate machines according to the configured scheduling policy;
kubelet: manages the container lifecycle on each node, as well as volumes (CVI) and networking (CNI);
Container runtime: manages images and actually runs Pods and containers (CRI);
kube-proxy: provides in-cluster service discovery and load balancing for Services;

Besides the core components, several add-ons are recommended:

kube-dns: provides DNS service for the whole cluster
Ingress Controller: provides external access to services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters that span availability zones
Fluentd-elasticsearch: provides cluster log collection, storage, and querying

Layered architecture

The design and functionality of Kubernetes form a layered architecture similar to Linux, as shown in the diagram below:

Core layer: the heart of Kubernetes; it exposes APIs for building higher-level applications and provides a plugin-based execution environment internally
Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
Management layer: system metrics (infrastructure, container, and network metrics), automation (auto-scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
Ecosystem: the large ecosystem of container cluster management and scheduling built above the interface layer, which falls into two areas
        Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
        Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, cluster configuration and management, etc.

kubelet

The kubelet manages the Pods on its node and their containers, images, volumes, and so on.

kube-proxy

Each node also runs a simple network proxy and load balancer (see the services FAQ in the official English docs). It reflects the Services defined in the Kubernetes API (see the services doc) and can forward TCP and UDP streams across a set of backends in a simple round-robin fashion.
Service endpoints are currently found through DNS or through environment variables (both Docker-links-compatible and Kubernetes-style {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported); these variables resolve to ports managed by the service proxy.
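
For intuition only, here is a minimal sketch (not part of this guide's procedure) of how a Pod could locate a hypothetical Service named redis-master through either mechanism; the Service name and namespace are assumptions:

  # Inside a Pod: environment variables injected for a Service named "redis-master"
  echo "$REDIS_MASTER_SERVICE_HOST:$REDIS_MASTER_SERVICE_PORT"
  # The same Service resolved through cluster DNS (kube-dns / CoreDNS)
  nslookup redis-master.default.svc.cluster.local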

Kubernetes control plane

The Kubernetes control plane is split into several components. Today they all run on a single master node, but that will have to change in order to reach high availability. The components work together to provide a unified view of the cluster.

etcd

All persistent master state is stored in an instance of etcd, which is a good store for configuration data. Because etcd supports watches, coordinating components are notified of changes very quickly.
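
As a hedged aside (assuming etcdctl v3 is available and the etcd client certificates generated by kubeadm are supplied; the key prefix below is only illustrative), the watch mechanism can be observed directly instead of polling:

  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    watch --prefix /registry/pods/default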

Kubernetes API Server

The API server serves the Kubernetes API (official English docs). It is intended to be a largely CRUD-style server, with most or all business logic implemented in separate components. It mainly handles REST operations, validating the objects and updating them in etcd (where they are ultimately stored).

Scheduler

The scheduler binds unscheduled Pods to nodes via the binding API. The scheduler is pluggable, and support for multiple cluster schedulers, and eventually user-provided schedulers, is expected in the future.
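
As a minimal sketch of that pluggability (the scheduler name below is hypothetical and assumes a second scheduler has been deployed), a Pod chooses its scheduler through spec.schedulerName:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: scheduled-by-custom
  spec:
    schedulerName: my-custom-scheduler   # hypothetical; the default is "default-scheduler"
    containers:
    - name: app
      image: nginx:alpine
  EOF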

Kubernetes controller manager

All other cluster-level functions are currently handled by the controller manager. For example, Endpoints objects are created and updated by the endpoints controller. These functions may eventually be split into separate components so that each is independently pluggable.

The replicationcontroller (official English docs) is a mechanism layered on top of the simple Pod API. Once implemented, this is eventually planned to become a generic plugin mechanism.
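
For illustration (a minimal, hedged example rather than part of the installation), a ReplicationController simply declares how many replicas of a Pod template should exist, and the controller manager keeps the actual count matching it:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: nginx-rc-demo
  spec:
    replicas: 2                  # the controller keeps exactly 2 Pods running
    selector:
      app: nginx-rc-demo
    template:
      metadata:
        labels:
          app: nginx-rc-demo
      spec:
        containers:
        - name: nginx
          image: nginx:alpine
  EOF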

II. Installing Kubernetes

Ways to install Kubernetes

1. kubeadm (recommended by upstream; good for learning and experimentation; components run as containers)
2. Binary installation (used in production; components run as processes)
3. Ansible-based installation

OS           Specs               IP                               Hostname
CentOS 7.4   4 GB RAM, 2 CPUs    192.168.2.1 (VIP: 192.168.2.5)   k8s-master1
CentOS 7.4   4 GB RAM, 2 CPUs    192.168.2.2 (VIP: 192.168.2.5)   k8s-master2
CentOS 7.4   2 GB RAM, 1 CPU     192.168.2.3                      k8s-node1
CentOS 7.4   2 GB RAM, 1 CPU     192.168.2.4                      k8s-node2

1. Environment configuration

Note: every server needs a network interface that can reach the Internet (able to ping external hosts).

  # Configure the hosts file on master1/2 and node1/2
  echo '
  192.168.2.1 k8s-master1
  192.168.2.2 k8s-master2
  192.168.2.3 k8s-node1
  192.168.2.4 k8s-node2
  '>> /etc/hosts

Disable the firewall, SELinux, dnsmasq, and swap on all servers.

  systemctl disable --now firewalld
  systemctl disable --now NetworkManager   # not required on CentOS 8

  # Run on all servers
  [root@k8s-master1 ~]# swapoff -a && sysctl -w vm.swappiness=0   # turn off swap
  vm.swappiness = 0

  # Run on all servers; leaving swap enabled can hurt k8s performance
  [root@k8s-master1 ~]# sed -i '12 s/^/#/' /etc/fstab   # comment out the swap line; the line number may differ on your system

  # Run on all servers
  [root@k8s-master1 ~]# yum -y install ntpdate     # install the time-sync tool
  [root@k8s-master1 ~]# ntpdate time2.aliyun.com   # synchronize the clock

  # Run on all servers
  [root@k8s-master1 ~]# ulimit -SHn 65535          # raise the per-process open-file limit to 65535

  # Master1 needs passwordless SSH to the other nodes: certificates and config files are generated on Master1,
  # and the cluster is managed from Master1 as well
  [root@k8s-master1 ~]# ssh-keygen -t rsa          # generate a key pair (master1 only)
  [root@k8s-master1 ~]# for i in k8s-master1 k8s-master2 k8s-node1 k8s-node2;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
  .......
  ...

  # Run on all servers
  [root@k8s-master1 ~]# yum -y install wget
  .......
  ..
  [root@k8s-master1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  .............
  ........
  [root@k8s-master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
  ........
  ...
  [root@k8s-master1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

  # Run on all servers: configure the Kubernetes yum repository
  [root@k8s-master1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF
  [root@k8s-master1 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
  [root@k8s-master1 ~]# yum makecache              # rebuild the yum cache
  ..........
  ...

  # Run on all servers
  [root@k8s-master1 ~]# yum -y install wget psmisc vim net-tools telnet   # common utilities
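
Optionally, a quick hedged sanity check (not part of the original steps) that swap is off and the hostnames resolve on every node:

  swapon --show                # prints nothing if swap is fully disabled
  free -m | grep -i swap       # the Swap line should show 0
  getent hosts k8s-master1 k8s-master2 k8s-node1 k8s-node2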

2. Kernel configuration

Install ipvsadm on all nodes (IPVS performs better than iptables):

  # Run on all servers
  [root@k8s-master1 ~]# yum -y install ipvsadm ipset sysstat conntrack libseccomp

  # Load the ipvs modules on all nodes
  [root@k8s-master1 ~]# modprobe -- ip_vs
  [root@k8s-master1 ~]# modprobe -- ip_vs_rr
  [root@k8s-master1 ~]# modprobe -- ip_vs_wrr
  [root@k8s-master1 ~]# modprobe -- ip_vs_sh
  [root@k8s-master1 ~]# modprobe -- nf_conntrack_ipv4
  ————————————————————————————————————
  modprobe -- ip_vs
  modprobe -- ip_vs_rr
  modprobe -- ip_vs_wrr
  modprobe -- ip_vs_sh
  modprobe -- nf_conntrack_ipv4

  # Run on all servers: load the modules automatically at boot
  [root@k8s-master1 ~]# vi /etc/modules-load.d/ipvs.conf
  ip_vs
  ip_vs_rr
  ip_vs_wrr
  ip_vs_sh
  nf_conntrack_ipv4
  ip_tables
  ip_set
  xt_set
  ipt_set
  ipt_rpfilter
  ipt_REJECT
  ipip
  # save the file
  [root@k8s-master1 ~]# systemctl enable --now systemd-modules-load.service
  [root@k8s-master1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4   # confirm the modules are loaded
  nf_conntrack_ipv4 15053 0
  nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
  ip_vs_sh 12688 0
  ip_vs_wrr 12697 0
  ip_vs_rr 12600 0
  ip_vs 141092 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
  nf_conntrack 133387 2 ip_vs,nf_conntrack_ipv4
  libcrc32c 12644 3 xfs,ip_vs,nf_conntrack

  # Configure kernel parameters for k8s on all nodes (paste as-is)
  [root@k8s-master1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-iptables = 1
  fs.may_detach_mounts = 1
  vm.overcommit_memory=1
  vm.panic_on_oom=0
  fs.inotify.max_user_watches=89100
  fs.file-max=52706963
  fs.nr_open=52706963
  net.netfilter.nf_conntrack_max=2310720
  net.ipv4.tcp_keepalive_time = 600
  net.ipv4.tcp_keepalive_probes = 3
  net.ipv4.tcp_keepalive_intvl =15
  net.ipv4.tcp_max_tw_buckets = 36000
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_max_orphans = 327680
  net.ipv4.tcp_orphan_retries = 3
  net.ipv4.tcp_syncookies = 1
  net.ipv4.tcp_max_syn_backlog = 16384
  net.ipv4.ip_conntrack_max = 65536
  net.ipv4.tcp_timestamps = 0
  net.core.somaxconn = 16384
  EOF
  [root@k8s-master1 ~]# sysctl --system   # apply the sysctl configuration files
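
A short optional check (hedged; not in the original steps) that the key parameters took effect:

  sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.core.somaxconn
  lsmod | grep br_netfilter    # bridge-nf-call-iptables only applies when br_netfilter is loaded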

3. Component installation

  # Install the latest Docker on all servers; k8s manages Pods, and a Pod holds one or more containers
  [root@k8s-master1 ~]# yum -y install docker-ce
  ..........
  ....
  [root@k8s-master1 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r   # list available k8s versions
  .........
  ...
  .
  [root@k8s-master1 ~]# yum install -y kubeadm-1.19.3-0.x86_64 kubectl-1.19.3-0.x86_64 kubelet-1.19.3-0.x86_64
  ..........
  .....
  ..
  # kubeadm: the command used to bootstrap the cluster
  # kubelet: runs on every node and starts Pods and containers
  # kubectl: the command-line tool used to talk to the cluster

  # Enable Docker at boot on all nodes
  [root@k8s-master1 ~]# systemctl enable --now docker

By default the pause image comes from the gcr.io registry, which may be unreachable from mainland China, so configure kubelet to use the Alibaba Cloud pause image instead. The pause container is what reaps zombie processes inside a Pod.

  # Run on all servers
  [root@k8s-master1 ~]# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
  cat >/etc/sysconfig/kubelet<<EOF
  KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
  EOF

  # Enable kubelet at boot on all servers
  systemctl daemon-reload
  systemctl enable --now kubelet
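
Optionally (a hedged check, not in the original steps), confirm that the cgroup driver written into /etc/sysconfig/kubelet matches what Docker actually reports; on newer Docker releases the Go-template form is less fragile than grep/cut:

  docker info --format '{{.CgroupDriver}}'    # typically "cgroupfs" or "systemd"
  grep cgroup-driver /etc/sysconfig/kubelet   # should show the same value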

4. High-availability component installation

Note: install HAProxy and Keepalived via yum on all Master nodes.

  [root@k8s-master1 ~]# yum -y install keepalived haproxy

Configure HAProxy on all Master nodes; the HAProxy configuration is identical on every Master.

  [root@k8s-master1 ~]# vim /etc/haproxy/haproxy.cfg
  global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s
  defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s
  frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
  listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin
  frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master
  backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master1 192.168.2.1:6443 check   # adjust the IP for your environment
    server k8s-master2 192.168.2.2:6443 check   # adjust the IP for your environment
  # save the file
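
Before starting the service, HAProxy can validate the configuration syntax (an optional, hedged step):

  haproxy -c -f /etc/haproxy/haproxy.cfg   # prints "Configuration file is valid" on success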

Configure keepalived on the Master1 node only:

  [root@k8s-master1 ~]# vim /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived
  global_defs {
      router_id LVS_DEVEL
  }
  vrrp_script chk_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 2
      weight -5
      fall 3
      rise 2
  }
  vrrp_instance VI_1 {
      state MASTER
      interface ens33              # change to your NIC name
      mcast_src_ip 192.168.2.1     # change to this node's IP
      virtual_router_id 51
      priority 100
      advert_int 2
      authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
      }
      virtual_ipaddress {
          192.168.2.5              # the virtual IP
      }
  #    track_script {              # the health check stays disabled until the cluster is up
  #        chk_apiserver
  #    }
  }
  # save the file

Configure keepalived on the Master2 node only:

  [root@k8s-master2 ~]# vim /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived
  global_defs {
      router_id LVS_DEVEL
  }
  vrrp_script chk_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 2
      weight -5
      fall 3
      rise 2
  }
  vrrp_instance VI_1 {
      state BACKUP
      interface ens33              # change to your NIC name
      mcast_src_ip 192.168.2.2     # change to this node's IP
      virtual_router_id 51
      priority 99
      advert_int 2
      authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
      }
      virtual_ipaddress {
          192.168.2.5              # the virtual IP
      }
  #    track_script {              # the health check stays disabled until the cluster is up
  #        chk_apiserver
  #    }
  }
  # save the file

Create the Keepalived health-check script on all Master nodes:

  [root@k8s-master1 ~]# vim /etc/keepalived/check_apiserver.sh
  #!/bin/bash
  err=0
  for k in $(seq 1 5)
  do
      check_code=$(pgrep kube-apiserver)
      if [[ $check_code == "" ]]; then
          err=$(expr $err + 1)
          sleep 5
          continue
      else
          err=0
          break
      fi
  done
  if [[ $err != "0" ]]; then
      echo "systemctl stop keepalived"
      /usr/bin/systemctl stop keepalived
      exit 1
  else
      exit 0
  fi
  # save the file
  [root@k8s-master1 ~]# chmod a+x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived on the masters:

  [root@k8s-master1 ~]# systemctl enable --now haproxy
  Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
  [root@k8s-master1 ~]# systemctl enable --now keepalived
  Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Verify that the VIP 192.168.2.5 is present:

  [root@k8s-master1 ~]# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 00:0c:29:7b:b0:46 brd ff:ff:ff:ff:ff:ff
      inet 192.168.2.1/24 brd 192.168.2.255 scope global ens33
         valid_lft forever preferred_lft forever
      inet 192.168.2.5/32 scope global ens33    # the virtual IP
         valid_lft forever preferred_lft forever
      inet6 fe80::238:ba2d:81b5:920e/64 scope link
         valid_lft forever preferred_lft forever
  ......
  ...
  [root@k8s-master1 ~]# netstat -anptu |grep 16443
  tcp 0 0 127.0.0.1:16443 0.0.0.0:* LISTEN 105533/haproxy
  tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 105533/haproxy
  ————————————————————————————————————————————————————
  [root@k8s-master2 ~]# netstat -anptu |grep 16443
  tcp 0 0 127.0.0.1:16443 0.0.0.0:* LISTEN 96274/haproxy
  tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 96274/haproxy
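
Since the HAProxy config above also exposes a monitor endpoint on port 33305, an extra hedged check is to hit it through the node IP and through the VIP:

  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.2.1:33305/monitor   # expect 200
  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.2.5:33305/monitor   # expect 200 via the VIP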

List the images the master needs:

  [root@k8s-master1 ~]# kubeadm config images list
  I0306 14:28:17.418780 104960 version.go:252] remote version is much newer: v1.23.4; falling back to: stable-1.18
  W0306 14:28:19.249961 104960 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  k8s.gcr.io/kube-apiserver:v1.18.20
  k8s.gcr.io/kube-controller-manager:v1.18.20
  k8s.gcr.io/kube-scheduler:v1.18.20
  k8s.gcr.io/kube-proxy:v1.18.20
  k8s.gcr.io/pause:3.2
  k8s.gcr.io/etcd:3.4.3-0
  k8s.gcr.io/coredns:1.6.7

Run the following on all masters:

  # Create the image-pull script on every master
  [root@k8s-master1 ~]# vim alik8simages.sh
  #!/bin/bash
  list='kube-apiserver:v1.19.3
  kube-controller-manager:v1.19.3
  kube-scheduler:v1.19.3
  kube-proxy:v1.19.3
  pause:3.2
  etcd:3.4.13-0
  coredns:1.7.0'
  for item in ${list}
  do
      docker pull registry.aliyuncs.com/google_containers/$item && docker tag registry.aliyuncs.com/google_containers/$item k8s.gcr.io/$item && docker rmi registry.aliyuncs.com/google_containers/$item && docker pull jmgao1983/flannel
  done
  # save the file
  [root@k8s-master1 ~]# bash alik8simages.sh   # run the script to pull the images
  ————————————————————————————————————————————
  # What the script does for each image:
  #docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5
  #docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5 k8s.gcr.io/kube-apiserver:v1.19.5
  #docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5

Run the following on all nodes:

  [root@k8s-node1 ~]# vim alik8simages.sh
  #!/bin/bash
  list='kube-proxy:v1.19.3
  pause:3.2'
  for item in ${list}
  do
      docker pull registry.aliyuncs.com/google_containers/$item && docker tag registry.aliyuncs.com/google_containers/$item k8s.gcr.io/$item && docker rmi registry.aliyuncs.com/google_containers/$item && docker pull jmgao1983/flannel
  done
  # save the file
  [root@k8s-node1 ~]# bash alik8simages.sh

Enable kubelet at boot on all nodes:

  [root@k8s-master1 ~]# systemctl enable --now kubelet

Initialize the Master1 node. Initialization generates the certificates and configuration files under /etc/kubernetes; afterwards the other nodes join Master1.

  # --kubernetes-version=v1.19.3                 the k8s version
  # --apiserver-advertise-address=192.168.2.1    master1's address
  # --pod-network-cidr=10.244.0.0/16             the Pod (container) network range
  ————————————————————————————run the command below
  [root@k8s-master1 ~]# kubeadm init --kubernetes-version=v1.19.3 --apiserver-advertise-address=192.168.2.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
  ......
  ...
  ..
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ........
  ..... the command below is what other machines use to join the cluster
  kubeadm join 192.168.2.1:6443 --token ona3p0.flcw3tmfl3fsfn5r \
  --discovery-token-ca-cert-hash sha256:8c74d27c94b5c6a1f2c226e93e605762df708b44129145791608e959d30aa36f
  ————————————————
  # run the commands that kubeadm printed
  [root@k8s-master1 ~]# mkdir -p $HOME/.kube
  [root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  [root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
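
Optionally (a hedged check, not part of the original steps), verify that the API server answers both directly and through the HAProxy/Keepalived VIP frontend on port 16443:

  curl -k https://192.168.2.1:6443/healthz    # directly against master1; expect "ok"
  curl -k https://192.168.2.5:16443/healthz   # through the VIP and HAProxy; expect "ok"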

Configure the KUBECONFIG environment variable on all Master nodes so they can access the Kubernetes cluster:

  cat <<EOF >> /root/.bashrc
  export KUBECONFIG=/etc/kubernetes/admin.conf
  EOF
  source /root/.bashrc

Check node status:

  [root@k8s-master1 ~]# kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master1 NotReady master 7m7s v1.19.3

  # Run the following on all machines
  iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Join the other nodes to the cluster (node1 and node2 run exactly the same command):

  # Join using the command printed by kubeadm init above
  [root@k8s-master2 ~]# kubeadm join 192.168.2.1:6443 --token ona3p0.flcw3tmfl3fsfn5r \
  > --discovery-token-ca-cert-hash sha256:8c74d27c94b5c6a1f2c226e93e605762df708b44129145791608e959d30aa36f
  ..............
  .....
  Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  [root@k8s-master1 ~]# kubectl get nodes   # check after joining
  NAME STATUS ROLES AGE VERSION
  k8s-master1 NotReady master 18m v1.19.3
  k8s-master2 NotReady <none> 2m19s v1.19.3
  k8s-node1 NotReady <none> 2m16s v1.19.3
  k8s-node2 NotReady <none> 2m15s v1.19.3

  # Run this on master1 only; it allows workloads to be scheduled on the masters.
  # We only do this because our lab resources are limited; in production you can skip it.
  [root@k8s-master1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
  node/k8s-master1 untainted
  taint "node-role.kubernetes.io/master" not found
  taint "node-role.kubernetes.io/master" not found
  taint "node-role.kubernetes.io/master" not found
  [root@k8s-master1 ~]# kubectl describe nodes k8s-master1 | grep -E '(Roles|Taints)'
  Roles: master
  Taints: <none>
  ————————————————————————————————————————————————————————————————-
  # Copy the admin kubeconfig to master2
  [root@k8s-master1 ~]# scp /etc/kubernetes/admin.conf root@192.168.2.2:/etc/kubernetes/admin.conf
  admin.conf 100% 5567 4.6MB/s 00:00
  [root@k8s-master2 ~]# kubectl describe nodes k8s-master2 | grep -E '(Roles|Taints)'
  Roles: <none>
  Taints: <none>

Flannel component configuration

  [root@k8s-master1 ~]# vim flannel.yml
  ---
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: psp.flannel.unprivileged
    annotations:
      seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
      seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
      apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
      apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
  spec:
    privileged: false
    volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
    allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
    readOnlyRootFilesystem: false
    # Users and groups
    runAsUser:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    fsGroup:
      rule: RunAsAny
    # Privilege Escalation
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    # Capabilities
    allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
    defaultAddCapabilities: []
    requiredDropCapabilities: []
    # Host namespaces
    hostPID: false
    hostIPC: false
    hostNetwork: true
    hostPorts:
    - min: 0
      max: 65535
    # SELinux
    seLinux:
      # SELinux is unused in CaaSP
      rule: 'RunAsAny'
  ---
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: flannel
  rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - get
  - apiGroups:
    - ""
    resources:
    - nodes
    verbs:
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - nodes/status
    verbs:
    - patch
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: flannel
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: flannel
  subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: flannel
    namespace: kube-system
  ---
  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: kube-flannel-cfg
    namespace: kube-system
    labels:
      tier: node
      app: flannel
  data:
    cni-conf.json: |
      {
        "name": "cbr0",
        "cniVersion": "0.3.1",
        "plugins": [
          {
            "type": "flannel",
            "delegate": {
              "hairpinMode": true,
              "isDefaultGateway": true
            }
          },
          {
            "type": "portmap",
            "capabilities": {
              "portMappings": true
            }
          }
        ]
      }
    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {
          "Type": "vxlan"
        }
      }
  ---
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: kube-flannel-ds
    namespace: kube-system
    labels:
      tier: node
      app: flannel
  spec:
    selector:
      matchLabels:
        app: flannel
    template:
      metadata:
        labels:
          tier: node
          app: flannel
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/os
                  operator: In
                  values:
                  - linux
        hostNetwork: true
        priorityClassName: system-node-critical
        tolerations:
        - operator: Exists
          effect: NoSchedule
        serviceAccountName: flannel
        initContainers:
        - name: install-cni
          image: jmgao1983/flannel:latest
          command:
          - cp
          args:
          - -f
          - /etc/kube-flannel/cni-conf.json
          - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
          - name: cni
            mountPath: /etc/cni/net.d
          - name: flannel-cfg
            mountPath: /etc/kube-flannel/
        containers:
        - name: kube-flannel
          image: jmgao1983/flannel:latest
          command:
          - /opt/bin/flanneld
          args:
          - --ip-masq
          - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          volumeMounts:
          - name: run
            mountPath: /run/flannel
          - name: flannel-cfg
            mountPath: /etc/kube-flannel/
        volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
  # Save the file. Make sure the image version matches the one you pulled; change it if it differs.

Apply flannel.yml:

  [root@k8s-master1 ~]# kubectl apply -f flannel.yml
  podsecuritypolicy.policy/psp.flannel.unprivileged created
  clusterrole.rbac.authorization.k8s.io/flannel created
  clusterrolebinding.rbac.authorization.k8s.io/flannel created
  serviceaccount/flannel created
  configmap/kube-flannel-cfg created
  daemonset.apps/kube-flannel-ds created

When it finishes, check the Pods running on the Master:

  [root@k8s-master1 ~]# kubectl get -A pods -o wide
  NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  kube-system coredns-f9fd979d6-cnrvb 1/1 Running 0 6m32s 10.244.0.3 k8s-master1 <none> <none>
  kube-system coredns-f9fd979d6-fxsdt 1/1 Running 0 6m32s 10.244.0.2 k8s-master1 <none> <none>
  kube-system etcd-k8s-master1 1/1 Running 0 6m43s 192.168.2.1 k8s-master1 <none> <none>
  kube-system kube-apiserver-k8s-master1 1/1 Running 0 6m43s 192.168.2.1 k8s-master1 <none> <none>
  kube-system kube-controller-manager-k8s-master1 1/1 Running 0 6m43s 192.168.2.1 k8s-master1 <none> <none>
  kube-system kube-flannel-ds-7rt9n 1/1 Running 0 52s 192.168.2.3 k8s-node1 <none> <none>
  kube-system kube-flannel-ds-brktl 1/1 Running 0 52s 192.168.2.1 k8s-master1 <none> <none>
  kube-system kube-flannel-ds-kj9hg 1/1 Running 0 52s 192.168.2.4 k8s-node2 <none> <none>
  kube-system kube-flannel-ds-ld7xj 1/1 Running 0 52s 192.168.2.2 k8s-master2 <none> <none>
  kube-system kube-proxy-4wbh9 1/1 Running 0 3m27s 192.168.2.2 k8s-master2 <none> <none>
  kube-system kube-proxy-crfmv 1/1 Running 0 3m24s 192.168.2.3 k8s-node1 <none> <none>
  kube-system kube-proxy-twttg 1/1 Running 0 6m32s 192.168.2.1 k8s-master1 <none> <none>
  kube-system kube-proxy-xdg6r 1/1 Running 0 3m24s 192.168.2.4 k8s-node2 <none> <none>
  kube-system kube-scheduler-k8s-master1 1/1 Running 0 6m42s 192.168.2.1 k8s-master1 <none> <none>

Note: if any of the Pods above is not in the Running state, you can re-initialize the Kubernetes cluster.

Note: first check that the component versions are consistent; if they are not, re-initializing will not help.

  [root@k8s-master1 ~]# rm -rf /etc/kubernetes/*
  [root@k8s-master1 ~]# rm -rf ~/.kube/*
  [root@k8s-master1 ~]# rm -rf /var/lib/etcd/*
  [root@k8s-master1 ~]# kubeadm reset -f
  rm -rf /etc/kubernetes/*
  rm -rf ~/.kube/*
  rm -rf /var/lib/etcd/*
  kubeadm reset -f

  # Same initialization as before
  [root@k8s-master1 ~]# kubeadm init --kubernetes-version=v1.19.3 --apiserver-advertise-address=192.168.2.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
  .............
  .......
  ..
  [root@k8s-master1 ~]# mkdir -p $HOME/.kube
  [root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  [root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
  # Join the cluster
  [root@k8s-master2 ~]# kubeadm join 192.168.2.1:6443 --token aqywqm.fcddl4o1sy2q1qgj \
  > --discovery-token-ca-cert-hash sha256:4d3b60e0801e9c307ae6d94507e1fac514a493e277c715dc873eeadb950e5215
  .........
  ...
  [root@k8s-master1 ~]# kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master1 Ready master 4m13s v1.19.3
  k8s-master2 Ready <none> 48s v1.19.3
  k8s-node1 Ready <none> 45s v1.19.3
  k8s-node2 Ready <none> 45s v1.19.3
  [root@k8s-master1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
  node/k8s-master1 untainted
  taint "node-role.kubernetes.io/master" not found
  taint "node-role.kubernetes.io/master" not found
  taint "node-role.kubernetes.io/master" not found
  [root@k8s-master1 ~]# kubectl describe nodes k8s-master1 | grep -E '(Roles|Taints)'
  Roles: master
  Taints: <none>
  [root@k8s-master1 ~]# kubectl apply -f flannel.yml
  ...........
  .....
  .

  [root@k8s-master1 ~]# kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master1 Ready master 43m v1.19.3
  k8s-master2 Ready <none> 40m v1.19.3
  k8s-node1 Ready <none> 40m v1.19.3
  k8s-node2 Ready <none> 40m v1.19.3
  [root@k8s-master1 ~]# systemctl enable haproxy
  [root@k8s-master1 ~]# systemctl enable keepalived
  ————————————————————————————————————————————————————
  [root@k8s-master2 ~]# systemctl enable haproxy
  [root@k8s-master2 ~]# systemctl enable keepalived

5. Metrics deployment

In recent Kubernetes versions, resource metrics are collected by metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

About Metrics
Early Kubernetes versions relied on Heapster for performance data collection and monitoring. Starting with version 1.8, performance data is exposed through the standardized Metrics API, and from version 1.10 Heapster is replaced by Metrics Server. In the new monitoring architecture, Metrics Server provides the core metrics: CPU and memory usage for Nodes and Pods.
Monitoring of other custom metrics (Custom Metrics) is handled by components such as Prometheus.
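
As a hedged illustration of that Metrics API (usable once the deployment below succeeds), the raw endpoints can be queried directly in addition to kubectl top:

  kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | head -c 300; echo
  kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | head -c 300; echo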

Download the required images and scripts below.

Upload the required images to all servers:

  # Copy the image archives to the other servers
  [root@k8s-master1 ~]# scp metrics* root@192.168.2.2:$PWD
  metrics-scraper_v1.0.1.tar 100% 38MB 76.5MB/s 00:00
  metrics-server.tar.gz 100% 39MB 67.1MB/s 00:00
  [root@k8s-master1 ~]# scp metrics* root@192.168.2.3:$PWD
  metrics-scraper_v1.0.1.tar 100% 38MB 56.9MB/s 00:00
  metrics-server.tar.gz 100% 39MB 40.7MB/s 00:00
  [root@k8s-master1 ~]# scp metrics* root@192.168.2.4:$PWD
  metrics-scraper_v1.0.1.tar 100% 38MB 61.8MB/s 00:00
  metrics-server.tar.gz 100% 39MB 49.2MB/s 00:00
  # Load the images on every server
  [root@k8s-master1 ~]# docker load -i metrics-scraper_v1.0.1.tar
  .........
  ....
  [root@k8s-master1 ~]# docker load -i metrics-server.tar.gz
  .......
  ...

Upload the components.yaml file:

  # On master1
  [root@k8s-master1 ~]# kubectl apply -f components.yaml
  clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
  clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
  rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
  Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
  apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
  serviceaccount/metrics-server created
  deployment.apps/metrics-server created
  service/metrics-server created
  clusterrole.rbac.authorization.k8s.io/system:metrics-server created
  clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
  ————————————————————————————————————
  # Check the status
  [root@k8s-master1 ~]# kubectl -n kube-system get pods -l k8s-app=metrics-server
  NAME READY STATUS RESTARTS AGE
  metrics-server-5c98b8989-54npg 1/1 Running 0 9m55s
  metrics-server-5c98b8989-9w9dd 1/1 Running 0 9m55s

  # View resource usage (on master1)
  [root@k8s-master1 ~]# kubectl top nodes   # per-node usage
  NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
  k8s-master1 166m 8% 1388Mi 36%
  k8s-master2 61m 3% 728Mi 18%
  k8s-node1 50m 5% 889Mi 47%
  k8s-node2 49m 4% 878Mi 46%
  [root@k8s-master1 ~]# kubectl top pods -A   # per-Pod usage
  NAMESPACE NAME CPU(cores) MEMORY(bytes)
  kube-system coredns-f9fd979d6-cnrvb 2m 14Mi
  kube-system coredns-f9fd979d6-fxsdt 2m 16Mi
  kube-system etcd-k8s-master1 11m 72Mi
  kube-system kube-apiserver-k8s-master1 27m 298Mi
  kube-system kube-controller-manager-k8s-master1 11m 48Mi
  kube-system kube-flannel-ds-7rt9n 1m 9Mi
  kube-system kube-flannel-ds-brktl 1m 14Mi
  kube-system kube-flannel-ds-kj9hg 2m 14Mi
  kube-system kube-flannel-ds-ld7xj 1m 15Mi
  kube-system kube-proxy-4wbh9 1m 19Mi
  kube-system kube-proxy-crfmv 1m 11Mi
  kube-system kube-proxy-twttg 1m 20Mi
  kube-system kube-proxy-xdg6r 1m 13Mi
  kube-system kube-scheduler-k8s-master1 2m 24Mi
  kube-system metrics-server-5c98b8989-54npg 1m 10Mi
  kube-system metrics-server-5c98b8989-9w9dd 1m 12Mi

6. Dashboard deployment

Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and run commands inside containers.

Dashboard is the web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, start a rolling update, restart a Pod, or use a wizard to deploy a new application.

Upload the dashboard.yaml file:

  # On master1
  [root@k8s-master1 ~]# vim dashboard.yaml
  .......
  ...
  44     nodePort: 30001   # the access port (line 44 of the file)
  ......
  ..
  # save the file

  # On master1
  [root@k8s-master1 ~]# kubectl create -f dashboard.yaml
  namespace/kubernetes-dashboard created
  serviceaccount/kubernetes-dashboard created
  secret/kubernetes-dashboard-certs created
  secret/kubernetes-dashboard-csrf created
  secret/kubernetes-dashboard-key-holder created
  configmap/kubernetes-dashboard-settings created
  .....
  ..

Confirm the status of the Dashboard Pod and Service. Note that kubernetes-dashboard pulls its image automatically, so make sure the network is reachable.

  [root@k8s-master1 ~]# kubectl get pod,svc -n kubernetes-dashboard
  NAME READY STATUS RESTARTS AGE
  pod/dashboard-metrics-scraper-7445d59dfd-9jwcw 1/1 Running 0 36m
  pod/kubernetes-dashboard-7d8466d688-mgfq9 1/1 Running 0 36m
  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  service/dashboard-metrics-scraper ClusterIP 10.1.70.163 <none> 8000/TCP 36m
  service/kubernetes-dashboard NodePort 10.1.158.233 <none> 443:30001/TCP 36m

Create the ServiceAccount and ClusterRoleBinding resource YAML file:

  [root@k8s-master1 ~]# vim adminuser.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: admin-user
    namespace: kubernetes-dashboard
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: admin-user
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
  # save the file

  [root@k8s-master1 ~]# kubectl create -f adminuser.yaml
  serviceaccount/admin-user created
  clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Access test: https://192.168.2.2:30001

Get the token used to log in to the Dashboard UI:

  [root@k8s-master1 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
  Name: admin-user-token-rwzng
  Namespace: kubernetes-dashboard
  Labels: <none>
  Annotations: kubernetes.io/service-account.name: admin-user
  kubernetes.io/service-account.uid: 194f1012-cbed-4c15-b8c2-2142332174a9
  Type: kubernetes.io/service-account-token
  Data
  ====
  token:
  ———————————————— copy the token below
  eyJhbGciOiJSUzI1NiIsImtpZCI6Imxad29JeHUyYVFucGJuQzBDNm5qYU1NVDVDUUItU0NqWUxvQTdtWjcyYW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXJ3em5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxOTRmMTAxMi1jYmVkLTRjMTUtYjhjMi0yMTQyMzMyMTc0YTkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.nDvv2CevmtTBtpHqikXp5nRbzmJaMr13OSU5YCoBtMrOg1V6bMSn6Ctu5IdtxGExmDGY-69v4fBw7-DvJtLTon_rgsow6NA1LwUOuebMh8TwVrHSV0SW7yI0MCRFSMctC9NxIxyacxIDkDQ7eA7Rr9sQRKFpIWfjBgsC-k7z13IIuaAROXFrZKdqUUPd5hNTLhtFqtXOs7b_nMxzQTln9rSDIHozMTHbRMkL_oLm7McEGfrod7bO6RsTyPfn0TcK6pFCx5T9YA6AfoPMH3mNU0gsr-zbIYZyxIMr9FMpw2zpjP53BnaVhTQJ1M_c_Ptd774cRPk6vTWRPprul2U_OQ

Switch kube-proxy to ipvs mode. The ipvs settings were left commented out when the cluster was initialized, so change it by hand:

  [root@k8s-master1 ~]# curl 127.0.0.1:10249/proxyMode
  iptables
  [root@k8s-master1 ~]# kubectl edit cm kube-proxy -n kube-system
  ......
  44     mode: "ipvs"
  ....
  ..
  # save the file
  # Roll the kube-proxy Pods so they pick up the change
  [root@k8s-master1 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
  daemonset.apps/kube-proxy patched
  [root@k8s-master1 ~]# curl 127.0.0.1:10249/proxyMode
  ipvs
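
A final hedged check that ipvs is really in use: ipvsadm (installed earlier) should now list virtual servers for the Service cluster IPs:

  ipvsadm -Ln | head -n 20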

Note: if you take and later restore a VM snapshot, you may find that the VIP is gone and port 30001 is unreachable. Fix: restart all masters to bring the VIP back, then re-initialize and re-apply the corresponding YAML files.
