
Kubernetes Learning Notes (29): Deploying Kubernetes 1.24 with kubeadm, Step by Step


Preface

kubeadm is the tool officially provided by Kubernetes for quickly installing and deploying a Kubernetes cluster. It is updated in step with every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest best practices the Kubernetes project recommends for cluster configuration.

1. Preparation

1.1 System Configuration

Before installing, complete the following preparation. The three CentOS 7.9 hosts are:

cat /etc/hosts
192.168.96.151 node1
192.168.96.152 node2
192.168.96.153 node3

Complete the following system configuration on each host. If a firewall policy is enabled on the hosts, open the ports required by the Kubernetes components (see the Ports and Protocols documentation) or disable the host firewall; a quick sketch for firewalld follows below.
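A minimal sketch, assuming firewalld is the firewall in use on CentOS 7; the port list follows the upstream "Ports and Protocols" page, and either option below is enough for this lab setup:

# Lab shortcut: disable the firewall entirely (not recommended for production)
systemctl disable --now firewalld

# Or open the documented ports on the control-plane node
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
# On worker nodes: 10250/tcp plus the NodePort range 30000-32767/tcp
firewall-cmd --reload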

Disable SELinux:

setenforce 0

vi /etc/selinux/config
SELINUX=disabled

Create the /etc/modules-load.d/containerd.conf configuration file:

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Run the following commands to load the modules immediately:

modprobe overlay
modprobe br_netfilter

Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply the settings:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

1.2 Prerequisites for Enabling IPVS

Since ipvs has been merged into the mainline kernel, the prerequisite for enabling ipvs mode in kube-proxy is that the following kernel modules are loaded:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on every server node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the kernel modules have been loaded correctly. Also make sure the ipset package is installed on every node, and, to make it easier to inspect the ipvs proxy rules, install the management tool ipvsadm as well:

yum install -y ipset ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
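Once the cluster is running later in this guide, the ipvs rule set programmed by kube-proxy can be inspected with ipvsadm (a quick check, assuming ipvs mode actually took effect):

ipvsadm -Ln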

1.3 Deploying the Container Runtime containerd

Install the containerd container runtime on every server node. Download the containerd binary package:

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

The cri-containerd-cni-1.6.4-linux-amd64.tar.gz archive is already laid out according to the directory structure recommended for the official binary deployment. It contains the systemd unit file, containerd itself, and the CNI deployment files. Extract it into the system root directory /:

tar -zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/critest
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/ctd-decoder
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
usr/local/bin/ctr
usr/local/bin/crictl
......
opt/cni/
opt/cni/bin/
opt/cni/bin/bridge
......

Note: testing showed that the runc bundled in cri-containerd-cni-1.6.4-linux-amd64.tar.gz has a dynamic-linking problem on CentOS 7, so download runc separately from its GitHub releases and use it to replace the runc installed by the containerd package above:

wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
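The archive extracted earlier placed runc at /usr/local/sbin/runc (see the listing above), so a minimal replacement sketch is:

install -m 755 runc.amd64 /usr/local/sbin/runc
runc --version   # confirm the new binary runs on CentOS 7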

Next, generate containerd's configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver makes nodes more stable under resource pressure. Therefore configure containerd on every node to use the systemd cgroup driver by editing the /etc/containerd/config.toml generated above:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Then, also in /etc/containerd/config.toml, modify:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
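If you prefer not to edit the file by hand, both changes can also be applied with sed; this is a sketch that assumes the defaults generated by containerd 1.6.4 above (SystemdCgroup = false and sandbox_image k8s.gcr.io/pause:3.6), so verify the result afterwards:

# switch the runc cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# point the sandbox (pause) image at the Alibaba Cloud mirror
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml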

Configure containerd to start on boot and start it now:

systemctl enable containerd --now

Test with crictl and make sure it prints the version information without any error output:

crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.4
RuntimeApiVersion:  v1alpha2

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on every node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast
yum install kubelet kubeadm kubectl
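Since this article targets Kubernetes 1.24.0 specifically, you may want to pin the version instead of taking whatever the mirror currently ships; a sketch (the exact release suffix, e.g. -0, depends on the repository):

yum install -y kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0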

Running kubelet --help shows that most of kubelet's command-line flags have been marked DEPRECATED. The official recommendation is to use --config to point to a configuration file and put the settings previously expressed as flags into that file; see "Set Kubelet parameters via a config file" for details. Kubernetes originally did this to support Dynamic Kubelet Configuration, but that feature was deprecated in Kubernetes 1.22 and removed in 1.24. If you need to adjust the kubelet configuration on all nodes of a cluster, the recommended approach is still to distribute the configuration file to each node with a tool such as ansible. The kubelet configuration file must be in JSON or YAML format. Since Kubernetes 1.8, swap must be disabled on the system; with the default configuration, kubelet will not start if swap is enabled. Disable swap as follows:

swapoff -a

Edit /etc/fstab and comment out the swap auto-mount entry, then confirm with free -m that swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/99-kubernetes-cri.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.
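To avoid editing /etc/fstab by hand, the swap entry can also be commented out with sed; a sketch only, since fstab layouts vary, so check the file and free -m afterwards:

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -m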

2.2 Initializing the Cluster with kubeadm init

Enable the kubelet service on boot on every node:

systemctl enable kubelet.service

Use kubeadm config print init-defaults --component-configs KubeletConfiguration to print the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

The default configuration shows that imageRepository can be used to customize the registry from which the images required for cluster initialization are pulled. Based on the defaults, the kubeadm.yaml used to initialize this cluster is customized as follows:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.96.151
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Alibaba Cloud registry so that images can be pulled directly even though gcr.io is blocked. criSocket sets the container runtime to containerd, kubelet's cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs. Before initializing the cluster, you can use kubeadm config images pull --config kubeadm.yaml to pre-pull the images Kubernetes needs on every server node:

kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Now initialize the cluster with kubeadm. node1 is chosen as the master node; run the following on node1:

kubeadm init --config kubeadm.yaml
W0526 10:22:26.657615 24076 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0526 10:22:26.660300 24076 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
  [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.96.151]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.506804 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: uufqmm.bvtfj4drwfvvbcev
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.96.151:6443 --token uufqmm.bvtfj4drwfvvbcev \
    --discovery-token-ca-cert-hash sha256:5814415567d93f6d2d41fe4719be8221f45c29c482b5059aec2e27a832ac36e6

The output above records the full initialization; from it you can see the key steps required to manually set up a Kubernetes cluster. The key items are:

  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [control-plane] creates the static pods for the apiserver, controller-manager and scheduler from the yaml files in the /etc/kubernetes/manifests directory
  • [bootstrap-token] generates the bootstrap token; record it, as it will be needed later when adding nodes to the cluster with kubeadm join
  • The following commands configure kubectl access to the cluster for a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Finally, the command for joining nodes to the cluster is given: kubeadm join 192.168.96.151:6443 --token uufqmm.bvtfj4drwfvvbcev --discovery-token-ca-cert-hash sha256:5814415567d93f6d2d41fe4719be8221f45c29c482b5059aec2e27a832ac36e6

Check the cluster status and confirm that every component is Healthy:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}

If you run into problems during cluster initialization, you can clean up with kubeadm reset and try again.
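Note that kubeadm reset does not clean up iptables/ipvs rules or the CNI configuration itself, so a fuller cleanup on a node looks roughly like this (a sketch; adapt it to what is actually installed on the node):

kubeadm reset
rm -rf /etc/cni/net.d        # remove leftover CNI config
ipvsadm --clear              # flush ipvs rules programmed by kube-proxy
rm -rf $HOME/.kube/config    # remove the stale kubeconfig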

2.3 Installing the Package Manager Helm 3

Helm is the package manager for Kubernetes, and the rest of this guide uses Helm to install common Kubernetes components. Install helm on the master node node1 first.

wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/

Run helm list and confirm there is no error output.

2.4 Deploying the Pod Network Component Calico

Calico is chosen as the Pod network component; below, helm is used to install calico into the cluster. Download the tigera-operator helm chart:

wget https://github.com/projectcalico/calico/releases/download/v3.23.1/tigera-operator-v3.23.1.tgz

View the configurable values of this chart:

helm show values tigera-operator-v3.23.1.tgz

imagePullSecrets: {}
installation:
  enabled: true
  kubernetesProvider: ""
apiServer:
  enabled: true
certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:
resources: {}
# Configuration for the tigera operator
tigeraOperator:
  image: tigera/operator
  version: v1.27.1
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.23.1

The customized values.yaml is as follows:

# The values above can be customized, for example to pull the calico images from a private registry.
# Since this is only a local test of the new Kubernetes release, values.yaml is left empty.

Install calico with helm:

helm install calico tigera-operator-v3.23.1.tgz -n kube-system  --create-namespace -f values.yaml

Wait for and confirm that all pods reach the Running state:

kubectl get pod -n kube-system | grep tigera-operator
tigera-operator-5fb55776df-wxbph   1/1   Running   0   5m10s

kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-68884f975d-5d7p9   1/1     Running   0          5m24s
calico-node-twbdh                          1/1     Running   0          5m24s
calico-typha-7b4bdd99c5-ssdn2              1/1     Running   0          5m24s

Take a look at the API resources calico adds to Kubernetes:

kubectl api-resources | grep calico
bgpconfigurations crd.projectcalico.org/v1 false BGPConfiguration
bgppeers crd.projectcalico.org/v1 false BGPPeer
blockaffinities crd.projectcalico.org/v1 false BlockAffinity
caliconodestatuses crd.projectcalico.org/v1 false CalicoNodeStatus
clusterinformations crd.projectcalico.org/v1 false ClusterInformation
felixconfigurations crd.projectcalico.org/v1 false FelixConfiguration
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
globalnetworksets crd.projectcalico.org/v1 false GlobalNetworkSet
hostendpoints crd.projectcalico.org/v1 false HostEndpoint
ipamblocks crd.projectcalico.org/v1 false IPAMBlock
ipamconfigs crd.projectcalico.org/v1 false IPAMConfig
ipamhandles crd.projectcalico.org/v1 false IPAMHandle
ippools crd.projectcalico.org/v1 false IPPool
ipreservations crd.projectcalico.org/v1 false IPReservation
kubecontrollersconfigurations crd.projectcalico.org/v1 false KubeControllersConfiguration
networkpolicies crd.projectcalico.org/v1 true NetworkPolicy
networksets crd.projectcalico.org/v1 true NetworkSet
bgpconfigurations bgpconfig,bgpconfigs projectcalico.org/v3 false BGPConfiguration
bgppeers projectcalico.org/v3 false BGPPeer
caliconodestatuses caliconodestatus projectcalico.org/v3 false CalicoNodeStatus
clusterinformations clusterinfo projectcalico.org/v3 false ClusterInformation
felixconfigurations felixconfig,felixconfigs projectcalico.org/v3 false FelixConfiguration
globalnetworkpolicies gnp,cgnp,calicoglobalnetworkpolicies projectcalico.org/v3 false GlobalNetworkPolicy
globalnetworksets projectcalico.org/v3 false GlobalNetworkSet
hostendpoints hep,heps projectcalico.org/v3 false HostEndpoint
ippools projectcalico.org/v3 false IPPool
ipreservations projectcalico.org/v3 false IPReservation
kubecontrollersconfigurations projectcalico.org/v3 false KubeControllersConfiguration
networkpolicies cnp,caliconetworkpolicy,caliconetworkpolicies projectcalico.org/v3 true NetworkPolicy
networksets netsets projectcalico.org/v3 true NetworkSet
profiles projectcalico.org/v3 false Profile

These API resources belong to calico, so it is not recommended to manage them with kubectl directly; use calicoctl instead. Install calicoctl as a kubectl plugin:

cd /usr/local/bin
curl -o kubectl-calico -O -L "https://github.com/projectcalico/calicoctl/releases/download/v3.21.5/calicoctl-linux-amd64"
chmod +x kubectl-calico

Verify that the plugin works:

kubectl calico -h

2.5 Verifying that Cluster DNS Works

kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

Inside the container, run nslookup kubernetes.default and confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.6 Adding Worker Nodes to the Cluster

Now add node2 and node3 to the Kubernetes cluster by running the following on node2 and node3 respectively:

kubeadm join 192.168.96.151:6443 --token uufqmm.bvtfj4drwfvvbcev \
    --discovery-token-ca-cert-hash sha256:5814415567d93f6d2d41fe4719be8221f45c29c482b5059aec2e27a832ac36e6
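If the bootstrap token from the init output has expired (the default TTL is 24 hours), a fresh join command can be generated on the control-plane node:

kubeadm token create --print-join-command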

node2 and node3 joined the cluster without any trouble. On the master node, check the nodes in the cluster:

kubectl get node
NAME    STATUS   ROLES                  AGE   VERSION
node1   Ready    control-plane,master   29m   v1.24.0
node2   Ready    <none>                 70s   v1.24.0
node3   Ready    <none>                 58s   v1.24.0

3. Deploying Common Kubernetes Components

3.1 Deploying ingress-nginx with Helm

To expose services in the cluster to the outside world, an Ingress is needed. Next, use Helm to deploy ingress-nginx onto Kubernetes. The Nginx Ingress Controller is deployed on the cluster's edge node.

Here node1 (192.168.96.151) is used as the edge node, so label it:

kubectl label node node1 node-role.kubernetes.io/edge=

Download the ingress-nginx helm chart:

wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.1.2/ingress-nginx-4.1.2.tgz

View the configurable values of the ingress-nginx-4.1.2.tgz chart:

helm show values ingress-nginx-4.1.2.tgz

The values.yaml customization is as follows:

controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: k8s.gcr.io
    # image: ingress-nginx/controller
    # tag: "v1.1.0"
    registry: docker.io
    image: unreachableg/k8s.gcr.io_ingress-nginx_controller
    tag: "v1.2.0"
    digest: sha256:314435f9465a7b2973e3aa4f2edad7465cc7bcdc8304be5d146d70e4da136e51
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

The nginx ingress controller replicaCount is 1 and the controller will be scheduled onto the edge node node1. No externalIPs are specified for the nginx ingress controller service; instead hostNetwork: true makes the controller use the host network. Because k8s.gcr.io is blocked, the image is replaced with unreachableg/k8s.gcr.io_ingress-nginx_controller. Pull the image in advance and then install the chart:

crictl pull unreachableg/k8s.gcr.io_ingress-nginx_controller:v1.2.0
helm install ingress-nginx ingress-nginx-4.1.2.tgz --create-namespace -n ingress-nginx -f values.yaml

kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7f574989bc-xwbf4   1/1     Running   0          117s

If visiting http://192.168.96.151 returns the default nginx 404 page, the deployment is complete.
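A quick check from any host that can reach node1 (the controller's default backend is expected to answer with an HTTP 404):

curl -i http://192.168.96.151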

3.2 Deploying the Dashboard with Helm

First deploy metrics-server:

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/metrics-server-helm-chart-3.8.2/components.yaml

In components.yaml, change the image to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.2, and add --kubelet-insecure-tls to the container's startup arguments, as sketched below.
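An illustrative excerpt of what the Deployment's container spec in components.yaml looks like after the two edits (a sketch; the other fields and existing args from the upstream manifest stay unchanged):

    spec:
      containers:
      - name: metrics-server
        image: docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.2
        args:
        # ... keep the args already present in the manifest, and append:
        - --kubelet-insecure-tls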

kubectl apply -f components.yaml

Once the metrics-server pod is running, after a short while you can use kubectl top to view cluster and pod metrics:

kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   509m         12%    3654Mi          47%
node2   133m         3%     1786Mi          23%
node3   117m         2%     1810Mi          23%

kubectl top pod -n kube-system
NAME                               CPU(cores)   MEMORY(bytes)
coredns-74586cf9b6-575nl           6m           16Mi
coredns-74586cf9b6-mbn8s           5m           17Mi
etcd-node1                         49m          91Mi
kube-apiserver-node1               142m         490Mi
kube-controller-manager-node1      38m          54Mi
kube-proxy-k5lzs                   26m          19Mi
kube-proxy-rb5pf                   9m           15Mi
kube-proxy-w5zpk                   27m          16Mi
kube-scheduler-node1               7m           18Mi
metrics-server-8dfd488f5-r8pbh     8m           21Mi
tigera-operator-5fb55776df-wxbph   10m          38Mi

Next use helm to deploy the Kubernetes dashboard. Add the chart repo:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update

View the configurable values of the chart:

helm show values kubernetes-dashboard/kubernetes-dashboard

The values.yaml customization is as follows:

image:
  repository: kubernetesui/dashboard
  tag: v2.5.1
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
  - k8s.example.com
  tls:
  - secretName: example-com-tls-secret
    hosts:
    - k8s.example.com
metricsScraper:
  enabled: true

First create the secret that holds the SSL certificate for k8s.example.com:
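If no real certificate for k8s.example.com is at hand, a self-signed one can be generated first; this is a sketch whose cert.pem and key.pem are exactly the files consumed by the kubectl command that follows:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=k8s.example.com"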

kubectl create secret tls example-com-tls-secret \
  --cert=cert.pem \
  --key=key.pem \
  -n kube-system

Deploy the dashboard with helm:

helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  -n kube-system \
  -f values.yaml

Confirm that the command above deploys successfully, then create an administrator service account:

kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system
kubectl create clusterrolebinding kube-dashboard-admin-sa \
  --clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa

Create the token the cluster administrator uses to log in to the dashboard:

kubectl create token kube-dashboard-admin-sa -n kube-system --duration=87600h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlU1SlpSTS1YekNuVzE0T1k5TUdTOFFqN25URWxKckt6OUJBT0xzblBsTncifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxOTY4OTA4MjgyLCJpYXQiOjE2NTM1NDgyODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsInVpZCI6IjY0MmMwMmExLWY1YzktNDFjNy04Mjc5LWQ1ZmI3MGRjYTQ3ZSJ9fSwibmJmIjoxNjUzNTQ4MjgyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1kYXNoYm9hcmQtYWRtaW4tc2EifQ.Xqxlo2vJ9Hb6UUVIqwvc8I5bahdxKzSRSaQI_67Yt7_YEHmkkHApxUGlwJYTKF9ufww3btlCmM8PtRn5_Q1yv-HAFyTOYKo8WHZ9UCm1bT3X8V8g4GQwZIl2dwmlUmKb1unBz2-em2uThQ015bMPDE8a42DV_bOwWjljVXat0nwV14nGorC8vKLjXbohrIJ3G1pgCJvlBn99F1RelmSUSQLlolUFoxpN6MamYTElwR6FfD-AGmFXvZSbcFaqVW0oxJHV70Gjs2igOtpqHFxxPlHT8aQzlRiybPtFyBf9Ll87TmVJimT89z8wv2si2Nee8bB2jhsApLn8TJyUSlbTXA

Use the token above to log in to the Kubernetes dashboard.

References

  • Installing kubeadm
  • Creating a cluster with kubeadm
  • https://github.com/containerd/containerd
  • https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
  • https://docs.projectcalico.org/

 
