
Kubernetes 1.25.6 Binary Deployment

Binary deployment of kubelet with containerd / cri-dockerd

Contents

1. Base environment

2. Base environment configuration

2.1 Configure hosts on all nodes

2.2 Disable the firewall, selinux, dnsmasq, and swap

2.3 Configure time synchronization

2.4 Raise resource limits on all nodes

2.5 Install base packages

2.6 Upgrade the system kernel

2.7 Tune kernel parameters

2.8 Load the ipvs modules

3. Package preparation

4. Install docker and cri-dockerd

4.1 Install docker-ce

4.2 Install cri-dockerd

4.3 Install containerd

4.4 Install the crictl client tool

4.5 Install the cfssl tools

5. Generate the kubernetes cluster certificates

5.1 Generate the etcd CA certificate

5.2 Create the kubernetes component certificates

5.3 Create the kube-apiserver certificate

5.4 Create the proxy-client certificate and CA

5.5 Create the kube-controller-manager certificate and kubeconfig

5.6 Generate the kube-scheduler certificate files

5.7 Generate the kubernetes cluster administrator certificate

6. Deploy etcd

6.1 Install etcd

6.2 Configure the etcdctl client tool

7. Deploy kubernetes

7.1 Install kube-apiserver

7.2 Install kube-controller-manager

7.3 Install kube-scheduler

7.4 Deploy the kubectl tool on the master node

7.5 Deploy kubelet

7.6 Deploy kube-proxy

8. Install add-ons

8.1 Install the calico network plugin

8.2 Install the calicoctl client

8.3 Install dashboard

8.4 Install metrics-server

1. Base environment

Hostname    IP address
master1     10.66.6.2
node1       10.66.6.4
node2       10.66.6.5

Notes:

For high availability, run two master nodes behind an nginx proxy; this walkthrough lists a single master1.

The OS is Ubuntu 20.04.

2. Base environment configuration

Set up passwordless SSH between the nodes with public keys; a sketch follows the hosts list below.

2.1 Configure hosts on all nodes

10.66.6.2 master1
10.66.6.4 node1
10.66.6.5 node2
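Passwordless SSH can be set up with a short loop; a minimal sketch, assuming root logins are allowed and the three hostnames above are already in /etc/hosts on master1 (ssh-copy-id prompts for each node's password once):

# Run on master1: generate a key pair and push it to every node
ssh-keygen -t rsa -b 2048 -N '' -f /root/.ssh/id_rsa
for i in master1 node1 node2; do
ssh-copy-id root@$i
scp /etc/hosts root@$i:/etc/hosts
done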

2.2 Disable the firewall, selinux, dnsmasq, and swap

# Disable the firewall (firewalld; on a stock Ubuntu 20.04 the default may be ufw instead)
systemctl disable --now firewalld
# Disable dnsmasq
systemctl disable --now dnsmasq
# Disable postfix
systemctl disable --now postfix
# Disable NetworkManager
systemctl disable --now NetworkManager
# Disable selinux (a no-op on Ubuntu unless selinux is installed)
sed -ri 's/(^SELINUX=).*/\1disabled/' /etc/selinux/config
setenforce 0
# Disable swap
sed -ri 's@(^.*swap *swap.*0 0$)@#\1@' /etc/fstab
swapoff -a
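A quick check that the changes stuck (standard util-linux/systemd commands; swapon prints nothing once swap is off):

swapon --show
free -h | grep -i swap
systemctl is-enabled firewalld dnsmasq postfix NetworkManager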

2.3 Configure time synchronization

# Install ntpdate
apt-get install ntpdate -y
# Sync once; point this at your own NTP server if you have one
ntpdate ntp1.aliyun.com
# Add a cron job to keep the clocks in sync
crontab -e
0 */1 * * * ntpdate ntp1.aliyun.com

2.4 Raise resource limits on all nodes

cat > /etc/security/limits.conf <<EOF
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
EOF

2.5 Install base packages

apt-get install ipvsadm ipset conntrack sysstat libseccomp2 psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl -y

2.6 Upgrade the system kernel

# Check the current kernel
uname -r
# List kernels available in the repositories
sudo apt list | grep linux-generic*
# Install the HWE kernel from the repositories
apt-get install linux-generic-hwe-20.04-edge/focal-updates
# Or use the mainline kernel script: download it
wget https://raw.githubusercontent.com/pimlie/ubuntu-mainline-kernel.sh/master/ubuntu-mainline-kernel.sh
# Put the script on the executable path
install ubuntu-mainline-kernel.sh /usr/local/bin/
# Check the latest available kernel version
ubuntu-mainline-kernel.sh -c
# Once you have confirmed this is the version you want, install it
ubuntu-mainline-kernel.sh -i
# Reboot, then verify
reboot
uname -rs

2.7 Tune kernel parameters

cat >/etc/sysctl.conf<<EOF
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0 # default is 1; strict reverse-path validation can drop packets
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.ip_local_port_range = 45001 65000
net.ipv4.ip_forward=1
net.ipv4.tcp_max_tw_buckets=6000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=2
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.netfilter.nf_conntrack_max=2310720
net.ipv6.neigh.default.gc_thresh1=8192
net.ipv6.neigh.default.gc_thresh2=32768
net.ipv6.neigh.default.gc_thresh3=65536
net.core.netdev_max_backlog=16384 # per-CPU backlog queue length for network devices
net.core.rmem_max = 16777216 # max read/write buffer sizes for all protocols
net.core.wmem_max = 16777216
net.ipv4.tcp_max_syn_backlog = 8096 # first (SYN) backlog queue length
net.core.somaxconn = 32768 # second (accept) backlog queue length
fs.inotify.max_user_instances=8192 # max inotify instances per real user ID, default 128
fs.inotify.max_user_watches=524288 # max watches a single user can add, default 8192
fs.file-max=52706963
fs.nr_open=52706963
kernel.pid_max = 4194303
net.bridge.bridge-nf-call-arptables=1
vm.swappiness=0 # avoid swap; it is only used when the system is close to OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # let the OOM killer run instead of panicking
vm.max_map_count = 262144
EOF
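The net.bridge.* keys only exist after the br_netfilter module is loaded, so load it before applying the new settings (module and command names are standard):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p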

2.8 Load the ipvs modules

cat >/etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
EOF
systemctl enable --now systemd-modules-load.service
# Reboot, then verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

3. Package preparation

Download links (mostly GitHub releases):

kubernetes 1.25.6

https://dl.k8s.io/v1.25.6/kubernetes-server-linux-amd64.tar.gz

etcd

https://github.com/etcd-io/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz

docker-ce (static binaries)

https://download.docker.com/linux/static/stable/x86_64/

cri-dockerd

https://github.com/Mirantis/cri-dockerd/releases

containerd

https://github.com/containerd/containerd/releases

cfssl

https://github.com/cloudflare/cfssl/releases
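A sketch for fetching everything on the master in one pass; the release asset names below follow each project's usual naming convention and should be double-checked against the release pages:

mkdir -p /opt/pkgs && cd /opt/pkgs
wget https://dl.k8s.io/v1.25.6/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz
wget https://download.docker.com/linux/static/stable/x86_64/docker-23.0.1.tgz
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1.amd64.tgz
wget https://github.com/containerd/containerd/releases/download/v1.6.19/containerd-1.6.19-linux-amd64.tar.gz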

4. Install docker and cri-dockerd

Sections 4.1-4.2 set up docker with cri-dockerd, and 4.3 sets up standalone containerd; they are alternative runtimes, matching the two kubelet variants in 7.5.2.

4.1 Install docker-ce

tar xf docker-23.0.1.tgz
cp docker/* /usr/bin

containerd unit file:

cat > /usr/lib/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF

docker unit file:

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF

docker socket file:

cat > /usr/lib/systemd/system/docker.socket << EOF
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

Create the docker configuration file:

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Enable and start the services:

groupadd docker
systemctl enable --now containerd.service
systemctl enable --now docker.socket
systemctl enable --now docker.service
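A quick sanity check that the daemon is up and using the systemd cgroup driver configured above:

docker info | grep -i 'cgroup driver'
docker version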

4.2 Install cri-dockerd

tar xf cri-dockerd-0.3.1.amd64.tgz
cp cri-dockerd/* /usr/bin

Create the unit file:

cat > /usr/lib/systemd/system/cri-docker.service << EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=kubernetes/pause:latest
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

Create the cri-docker socket file:

cat > /usr/lib/systemd/system/cri-docker.socket << EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

Enable and start:

systemctl enable --now cri-docker.socket
systemctl enable --now cri-docker

4.3 Install containerd

tar xf containerd-1.6.19-linux-amd64.tar.gz -C /usr/local

cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF

systemctl enable --now containerd.service

Write the default configuration and restart:

mkdir /etc/containerd
/usr/local/bin/containerd config default > /etc/containerd/config.toml
systemctl restart containerd
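The generated config.toml sets SystemdCgroup = false by default, while the kubelet configuration later in this guide uses cgroupDriver: systemd; the two should agree. A minimal fix, assuming the stock containerd 1.6 default config layout:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml
systemctl restart containerd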

4.4 Install the crictl client tool

# Extract
tar xf crictl-v1.22.0-linux-amd64.tar.gz -C /usr/bin/
# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
# Test
crictl info

4.5 Install the cfssl tools

# Run on the master node
tar xf cfssl-1.6.3.tar.gz -C /usr/bin
mkdir /opt/pki/{etcd,kubernetes} -p

5. Generate the kubernetes cluster certificates

Run all of section 5 on the master node.

5.1 Generate the etcd CA certificate

mkdir /opt/pki/etcd/ -p
cd /opt/pki/etcd/
# Create the CA directory for the etcd certificates
mkdir ca
# Generate the etcd CA config and signing request files
cd ca/

Generate the config file:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# Generate the signing request file
cat > ca-csr.json <<EOF
{
  "CA":{"expiry":"87600h"},
  "CN": "etcd-cluster",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd-cluster",
      "OU": "System"
    }
  ]
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
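Optionally decode the result; cfssl writes ca.pem, ca-key.pem and ca.csr, and either cfssl-certinfo or openssl shows the subject and validity period:

cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates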

Generate the etcd server certificate:

cd /opt/pki/etcd/
cat > etcd-server-csr.json << EOF
{
  "CN": "etcd-server",
  "hosts": [
    "10.66.6.2",
    "10.66.6.3",
    "10.66.6.4",
    "10.66.6.5",
    "10.66.6.6",
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd-server",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert \
-ca=ca/ca.pem \
-ca-key=ca/ca-key.pem \
-config=ca/ca-config.json \
-profile=etcd \
etcd-server-csr.json | cfssljson -bare etcd-server

Generate the etcd client certificate:

# Generate the client signing request file
cd /opt/pki/etcd/
cat > etcd-client-csr.json << EOF
{
  "CN": "etcd-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd-client",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert \
-ca=ca/ca.pem \
-ca-key=ca/ca-key.pem \
-config=ca/ca-config.json \
-profile=etcd \
etcd-client-csr.json | cfssljson -bare etcd-client

Copy the certificates to the master and node hosts:

# $master holds the hosts that run etcd and the control plane, e.g.:
master="master1"
for i in $master;do
ssh $i "mkdir /etc/etcd/ssl -p"
scp /opt/pki/etcd/ca/ca.pem /opt/pki/etcd/{etcd-server.pem,etcd-server-key.pem,etcd-client.pem,etcd-client-key.pem} $i:/etc/etcd/ssl/
done

5.2 Create the kubernetes component certificates

5.2.1 Create the kubernetes CA

mkdir /opt/pki/kubernetes/ -p
cd /opt/pki/kubernetes/
mkdir ca
cd ca

Create the CA config file:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

Generate the CA signing request file:

cat > ca-csr.json <<EOF
{
  "CA":{"expiry":"87600h"},
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

5.3 Create the kube-apiserver certificate

mkdir /opt/pki/kubernetes/kube-apiserver -p
cd /opt/pki/kubernetes/kube-apiserver

Generate the signing request file:

cat > kube-apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "hosts": [
    "127.0.0.1",
    "10.66.6.2",
    "10.66.6.3",
    "10.66.6.4",
    "10.66.6.5",
    "10.66.6.6",
    "10.66.6.7",
    "10.200.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kube-apiserver",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert \
-ca=../ca/ca.pem \
-ca-key=../ca/ca-key.pem \
-config=../ca/ca-config.json \
-profile=kubernetes \
kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Copy the certificates to the master nodes
for i in $master;do
ssh $i "mkdir /etc/kubernetes/pki -p"
scp /opt/pki/kubernetes/ca/{ca.pem,ca-key.pem} /opt/pki/kubernetes/kube-apiserver/{kube-apiserver-key.pem,kube-apiserver.pem} $i:/etc/kubernetes/pki
done

# Copy the CA certificate to the node hosts
node="node1 node2"
for i in $node;do
ssh $i "mkdir /etc/kubernetes/pki -p"
scp /opt/pki/kubernetes/ca/ca.pem $i:/etc/kubernetes/pki
done

5.4 Create the proxy-client certificate and CA

mkdir /opt/pki/proxy-client
cd /opt/pki/proxy-client

Generate the CA signing request file:

cat > front-proxy-ca-csr.json <<EOF
{
  "CA":{"expiry":"87600h"},
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

Generate the CA:

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca

Generate the client signing request file:

cat > front-proxy-client-csr.json <<EOF
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

Generate the certificate:

cfssl gencert \
-ca=front-proxy-ca.pem \
-ca-key=front-proxy-ca-key.pem \
-config=../kubernetes/ca/ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare front-proxy-client

Copy the certificates to the hosts:

for i in $master;do
scp /opt/pki/proxy-client/{front-proxy-ca.pem,front-proxy-client.pem,front-proxy-client-key.pem} $i:/etc/kubernetes/pki
done
for i in $node;do
scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki
done

5.5 Create the kube-controller-manager certificate and kubeconfig

mkdir /opt/pki/kubernetes/kube-controller-manager
cd /opt/pki/kubernetes/kube-controller-manager

Generate the signing request file:

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert \
-ca=../ca/ca.pem \
-ca-key=../ca/ca-key.pem \
-config=../ca/ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Generate the kubeconfig:

kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

Copy the kubeconfig to the master nodes:

for i in $master;do
scp /opt/pki/kubernetes/kube-controller-manager/kube-controller-manager.kubeconfig $i:/etc/kubernetes
done

5.6 Generate the kube-scheduler certificate files

mkdir /opt/pki/kubernetes/kube-scheduler
cd /opt/pki/kubernetes/kube-scheduler

Generate the signing request file:

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert \
-ca=../ca/ca.pem \
-ca-key=../ca/ca-key.pem \
-config=../ca/ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generate the kubeconfig:

kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

Copy the kubeconfig to the master nodes:

for i in $master;do
scp /opt/pki/kubernetes/kube-scheduler/kube-scheduler.kubeconfig $i:/etc/kubernetes
done

5.7 Generate the kubernetes cluster administrator certificate

mkdir /opt/pki/kubernetes/admin
cd /opt/pki/kubernetes/admin

Generate the signing request file:

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert \
-ca=../ca/ca.pem \
-ca-key=../ca/ca-key.pem \
-config=../ca/ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin

Generate the kubeconfig:

kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig

6. Deploy etcd

6.1 Install etcd

tar xf etcd-v3.5.7-linux-amd64.tar.gz
cp etcd-v3.5.7-linux-amd64/etcd* /usr/bin/
rm -rf etcd-v3.5.7-linux-amd64

Create the configuration file:

cat > /etc/etcd/etcd.config.yml <<EOF
name: 'etcd-1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.66.6.2:2380'
listen-client-urls: 'https://10.66.6.2:2379,https://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.66.6.2:2380'
advertise-client-urls: 'https://10.66.6.2:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-1=https://10.66.6.2:2380' # list your own etcd members here
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

Generate the unit file:

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

systemctl enable --now etcd

6.2 Configure the etcdctl client tool

# Set environment variables globally
cat > /etc/profile.d/etcdctl.sh <<EOF
#!/bin/bash
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd-client.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-client-key.pem
EOF
# Load them into the current shell
source /etc/profile.d/etcdctl.sh
# Verify the cluster state
etcdctl member list
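etcdctl can also report per-endpoint health and status as a table, which is a quicker read than member list:

etcdctl endpoint health --write-out=table
etcdctl endpoint status --write-out=table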

7. Deploy kubernetes

Distribute the binaries:

tar xf kubernetes-server-linux-amd64.tar.gz
# Distribute the master components
for i in $master;do
scp kubernetes/server/bin/{kubeadm,kube-apiserver,kube-controller-manager,kube-scheduler,kube-proxy,kubelet,kubectl} $i:/usr/bin
done
# Distribute the node components
for i in $node;do
scp kubernetes/server/bin/{kube-proxy,kubelet} $i:/usr/bin
done

7.1 Install kube-apiserver

# Create the ServiceAccount signing key pair
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
# Distribute the keys to the master nodes
for i in $master;do
scp /etc/kubernetes/pki/{sa.pub,sa.key} $i:/etc/kubernetes/pki/
done

Create the service file. --etcd-servers must point at the etcd members deployed in section 6 (a single member at 10.66.6.2 in this environment):

a=`ifconfig eth0 | awk 'NR==2{print $2}'`
cat > /etc/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-apiserver \\
--v=2 \\
--logtostderr=true \\
--allow-privileged=true \\
--bind-address=$a \\
--secure-port=6443 \\
--advertise-address=$a \\
--service-cluster-ip-range=10.200.0.0/16 \\
--service-node-port-range=30000-42767 \\
--etcd-servers=https://10.66.6.2:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd-client.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-client-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF

Start the service:

systemctl enable --now kube-apiserver.service
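A hedged smoke test that the API server answers authenticated TLS requests, reusing the admin client certificate generated in 5.7 (paths as created there); it should print ok:

curl --cacert /etc/kubernetes/pki/ca.pem \
--cert /opt/pki/kubernetes/admin/admin.pem \
--key /opt/pki/kubernetes/admin/admin-key.pem \
https://127.0.0.1:6443/healthz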

7.2 Install kube-controller-manager

Generate the service file. --cluster-cidr is the pod network: it must match the calico pool and the kube-proxy clusterCIDR (10.100.0.0/16) and must not overlap the service range (10.200.0.0/16):

cat > /etc/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-controller-manager \\
--v=2 \\
--logtostderr=true \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--pod-eviction-timeout=2m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.100.0.0/16 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

# Start the service
systemctl enable --now kube-controller-manager.service

7.3 Install kube-scheduler

# Generate the service file
cat > /etc/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-scheduler \\
--v=2 \\
--logtostderr=true \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl enable --now kube-scheduler.service

7.4 Deploy the kubectl tool on the master node

mkdir /root/.kube/ -p
cp /opt/pki/kubernetes/admin/admin.kubeconfig /root/.kube/config

Verify:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

7.5 Deploy kubelet

7.5.1 Authenticate kubelets automatically with TLS Bootstrapping

Create the TLS Bootstrapping credentials:

mkdir /opt/pki/kubernetes/kubelet -p
cd /opt/pki/kubernetes/kubelet
# Generate a random bootstrap token (id.secret)
a=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c6`
b=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c16`

Generate the token Secret and RBAC bindings:

cat > bootstrap.secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-$a
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: $a
  token-secret: $b
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF

Generate the bootstrap kubeconfig and apply the bindings:

kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://10.66.6.2:6443 \
--kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
--token=$a.$b \
--kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=bootstrap-kubelet.kubeconfig
# Create the token and RBAC objects
kubectl apply -f bootstrap.secret.yaml

Distribute the bootstrap kubeconfig:

for i in $node;do
ssh $i "mkdir /etc/kubernetes -p"
scp /opt/pki/kubernetes/kubelet/bootstrap-kubelet.kubeconfig $i:/etc/kubernetes
done

7.5.2 Deploy the kubelet component

Variant A: docker (via cri-dockerd) as the container runtime:

mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p

Generate the service file:

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF

Generate the service drop-in configuration:

cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf << EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=10.66.6.2"
Environment="KUBELET_RUNTIME=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME
EOF

Variant B: containerd as the container runtime:

a=`ifconfig eth0 | awk 'NR==2{print $2}'`
mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p
# Generate the service file
cat > /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Generate the service drop-in configuration
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=$a"
Environment="KUBELET_RUNTIME=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME
EOF

Generate the kubelet configuration file:

a=`ifconfig eth0 | awk 'NR==2{print $2}'`
# Generate the configuration file
cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: $a
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start the service:

systemctl enable --now kubelet.service
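With the auto-approve bindings from 7.5.1 in place, each kubelet's bootstrap CSR should be approved within seconds; verify from the master:

kubectl get csr
kubectl get nodes -o wide
# nodes stay NotReady until the calico CNI from 8.1 is installed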

7.6 Deploy kube-proxy

mkdir /opt/pki/kubernetes/kube-proxy/ -p
cd /opt/pki/kubernetes/kube-proxy/

Generate the kubeconfig:

kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
cat >kube-proxy-secret.yml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kube-proxy
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "kube-proxy"
type: kubernetes.io/service-account-token
EOF

kubectl apply -f kube-proxy-secret.yml
JWT_TOKEN=$(kubectl -n kube-system get secret/kube-proxy \
--output=jsonpath='{.data.token}' | base64 -d)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.66.6.2:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kubernetes \
--token=${JWT_TOKEN} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=kubernetes \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context kubernetes \
--kubeconfig=kube-proxy.kubeconfig

Copy the kubeconfig to the nodes:

for i in $node;do
scp /opt/pki/kubernetes/kube-proxy/kube-proxy.kubeconfig $i:/etc/kubernetes
done

Generate the service file:

cat > /etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Generate the configuration file (bindAddress and hostnameOverride are per-node; the example shows master1):

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.66.6.2
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "10.66.6.2"
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Start the service:

systemctl enable --now kube-proxy.service

Verify the proxy mode:

curl 127.0.0.1:10249/proxyMode
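In ipvs mode the virtual servers are also visible with ipvsadm (installed in 2.5); the kubernetes service VIP 10.200.0.1 should appear with the rr scheduler:

ipvsadm -Ln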

8. Install add-ons

8.1 Install the calico network plugin

Download the manifest:

https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico-typha.yaml

Change the pool CIDR to match the cluster pod network:

- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16"

kubectl apply -f calico-typha.yaml
# Verify: the nodes should turn Ready
kubectl get node

8.2 Install the calicoctl client

mkdir /etc/calico -p
cat >/etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF

# Verify
calicoctl node status

8.3 Install dashboard

Manifest:

https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Edit the Service in the yaml to expose it as a NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort    # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001    # added
  selector:
    k8s-app: kubernetes-dashboard

# Apply
kubectl apply -f dashboard.yaml

Create the admin user:

cat >admin.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Apply it and fetch the login token:

kubectl apply -f admin.yaml
# Get the user token
kubectl describe secrets -n kubernetes-dashboard admin-user
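The dashboard is then reachable at https://<any-node-ip>:30001. On Kubernetes 1.24 and later a short-lived token can also be minted directly, which avoids parsing the describe output:

kubectl -n kubernetes-dashboard create token admin-user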

8.4 Install metrics-server

Download:

https://github.com/kubernetes-sigs/metrics-server/

Copy the front-proxy CA to the nodes:

for i in $node;do
scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki/
done

Adjust the container args, mounts, and volumes in components.yaml:

- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
volumeMounts:
- mountPath: /tmp
  name: tmp-dir
- mountPath: /etc/kubernetes/pki
  name: ca-ssl
volumes:
- emptyDir: {}
  name: tmp-dir
- name: ca-ssl
  hostPath:
    path: /etc/kubernetes/pki

kubectl apply -f components.yaml
# Verify
kubectl top node