OS | Spec | Hostname | IP | Required components |
---|---|---|---|---|
CentOS 7.9 | 2C4G | k8s-master | 192.168.60.143 | flannel-cni-plugin, flannel, coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, containerd, pause, crictl |
CentOS 7.9 | 2C4G | k8s-node01 | 192.168.60.144 | flannel-cni-plugin, flannel, kubectl, kube-proxy, containerd, pause, crictl, kubernetes-dashboard |
CentOS 7.9 | 2C4G | k8s-node02 | 192.168.60.145 | flannel-cni-plugin, flannel, kubectl, kube-proxy, containerd, pause, crictl, kubernetes-dashboard |
(Run on all three machines)
## Run on the master node: 192.168.60.143
$ hostnamectl --static set-hostname k8s-master
## Run on node01: 192.168.60.144
$ hostnamectl --static set-hostname k8s-node01
## Run on node02: 192.168.60.145
$ hostnamectl --static set-hostname k8s-node02
## After the steps above, reboot the server
$ reboot -f
## Disable SELinux (turn off the kernel security mechanism)
$ sudo sestatus && sudo setenforce 0 && sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
## Stop the firewall and disable it at boot
$ sudo systemctl stop firewalld && sudo systemctl disable firewalld && sudo systemctl status firewalld
kubeadm does not support swap, so it must be turned off:
# Turn swap off for the current session
$ sudo swapoff -a
# Turn swap off permanently (comment out the swap entry in /etc/fstab)
$ sudo sed -i '/swap/s/^/#/' /etc/fstab
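To confirm swap really is off, a quick optional check (not part of the original steps):
# Swap should report 0B and swapon should print nothing
$ free -h | grep -i swap
$ swapon --show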
(Note: the names below must match the hostnames set earlier.)
$ cat >> /etc/hosts << EOF
192.168.60.143 k8s-master
192.168.60.144 k8s-node01
192.168.60.145 k8s-node02
EOF
When using Docker (or containerd) you may occasionally see a warning related to the settings below; the following configuration takes care of it:
# The warning can be eliminated by configuring these kernel parameters
$ cat >> /etc/sysctl.conf << EOF
# Allow bridged IPv6 traffic to be processed by ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
# Allow bridged IPv4 traffic to be processed by iptables
net.bridge.bridge-nf-call-iptables = 1
# Enable IP forwarding (routing)
net.ipv4.ip_forward = 1
# Minimize swapping (note: this alone does not disable the swap partition)
vm.swappiness = 0
EOF
# Load the overlay kernel module
$ modprobe overlay
# Load the br_netfilter kernel module
$ modprobe br_netfilter
# Apply the sysctl settings
$ sysctl -p
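modprobe only loads the modules for the current boot. If you also want overlay and br_netfilter loaded automatically after a reboot, one option (an addition to the original steps, using the standard systemd modules-load mechanism) is:
# Have systemd load the modules at every boot
$ cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF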
## Download and set up the yum repository (Aliyun mirror)
$ sudo curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
## Rebuild the yum metadata cache
$ sudo yum makecache fast
## Install commonly used system utilities
$ sudo yum -y install vim lrzsz unzip wget net-tools tree bash-completion telnet
## Install the time-sync tool
$ yum -y install ntpdate
## Sync the clock against Aliyun's NTP server
$ ntpdate ntp.aliyun.com
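ntpdate only syncs the clock once. If you want to keep the three machines in sync over time, a simple option (not in the original steps; the 30-minute schedule is just an example) is a cron job:
# Re-sync against Aliyun's NTP server every 30 minutes
$ (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1") | crontab -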
(Run on all three machines)
This guide uses the containerd container runtime; images are handled with the crictl and ctr command-line tools.
As a side note, Kubernetes announced the deprecation of Docker (dockershim) as the default container runtime in v1.20 and removed it in v1.24, so pay close attention to the compatibility between your Kubernetes version and your container runtime.
The official documentation carries a notice about container runtime support (only the English docs contain it; the Chinese translation does not).
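For orientation, crictl talks to containerd through the CRI interface (and only sees the k8s.io namespace), while ctr is containerd's low-level client and needs the namespace given explicitly. A few equivalent commands used later in this guide, shown side by side (illustrative only; containerd must already be installed):
# List images as the kubelet/CRI sees them
$ crictl images
# List the same images with ctr (note the explicit k8s.io namespace)
$ ctr -n k8s.io images ls
# Pull an image with each tool
$ crictl pull nginx:latest
$ ctr -n k8s.io images pull docker.io/library/nginx:latest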
## Add the Docker repo (the containerd.io package is published in the Docker repo)
$ cat <<EOF | sudo tee /etc/yum.repos.d/docker-ce.repo
[docker]
name=docker-ce
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
## Rebuild the yum metadata cache
$ yum makecache fast
## Install containerd
## List all available containerd versions
$ yum list containerd.io --showduplicates
## Install containerd (the latest version available at the time of writing)
$ yum -y install containerd.io-1.6.33-3.1.el7.x86_64
$ mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
## Edit /etc/containerd/config.toml: change sandbox_image to a domestic (Aliyun) mirror
$ vi /etc/containerd/config.toml
1) Set sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
2) Under [plugins."io.containerd.grpc.v1.cri".registry.mirrors] (around line 153), add the following two lines (a non-interactive sed alternative is sketched after this block):
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://i9h06ghu.mirror.aliyuncs.com"]
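If you prefer not to edit the file by hand, the sandbox_image change can also be made with sed (a sketch; the two mirror lines under registry.mirrors are still easiest to add manually as shown above):
# Point the pause (sandbox) image at the Aliyun mirror
$ sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Confirm the change
$ grep sandbox_image /etc/containerd/config.toml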
## Start containerd and enable it at boot
$ systemctl start containerd && systemctl enable containerd
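A quick way to confirm containerd is running and picked up the new settings (optional check, not in the original steps):
# The service should be active (running)
$ systemctl status containerd --no-pager
# The merged config should show the Aliyun pause image
$ containerd config dump | grep sandbox_image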
(The subheadings below state which servers each step runs on.)
(Run on all three machines)
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(Run on all three machines)
## Install the required Kubernetes packages
## `yum install -y kubelet kubeadm kubectl` would install the latest versions; this guide pins 1.28.2
$ yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
## Start kubelet and enable it at boot (it will keep restarting until `kubeadm init`/`kubeadm join` has run; that is expected)
$ systemctl start kubelet && systemctl enable kubelet
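Before going further, it is worth confirming that all three machines ended up on the same version (optional check):
# All of these should report 1.28.2
$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client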
(Run on all three machines)
$ cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Pull an nginx image to verify that crictl works
$ crictl pull nginx:latest
Image is up to date for sha256:605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85
# List the nginx image
$ crictl images | grep nginx
IMAGE TAG IMAGE ID SIZE
docker.io/library/nginx latest 605c77e624ddb 56.7MB
(Run on the master node)
# Generate the default init configuration and write it to the current directory
$ kubeadm config print init-defaults > init-config.yaml
# The command may print a warning like the one below; it can be ignored, just continue: W0615 08:50:40.154637 10202 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
# Edit the config file; the fields that need changing are marked below
$ vi init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.60.143 # Change this to your master node's IP address (here 192.168.60.143)
  bindPort: 6443 # The default port is fine
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master # Change this to your master node's hostname (here k8s-master)
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd # The default path is fine; this is where the etcd container's data is stored on the host
imageRepository: registry.aliyuncs.com/google_containers # Changed to a domestic mirror; the default registry is not reachable from mainland China
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12 # The default Service network CIDR is fine (cluster-internal network)
  podSubnet: 10.244.0.0/16 # Note: this line is added manually; the Pod network CIDR must match the network plugin's configuration below
scheduler: {}
(Run on the master node)
# List the images required for initialization, according to init-config.yaml
$ kubeadm config images list --config=init-config.yaml
## Pull the images
$ kubeadm config images pull --config=init-config.yaml
## List the pulled images
$ crictl images
(Run on the master node)
(The kubeadm init configuration parameters are described here for reference only.)
(kubeadm init does not install a network plugin, so immediately after initialization the cluster has no pod networking: on the k8s-master node everything shows as "NotReady" and the CoreDNS pods cannot serve yet. If initialization fails, clean up before retrying with kubeadm reset and by removing $HOME/.kube, /etc/kubernetes/, and /var/lib/etcd/; see the sketch below.)
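For reference, the cleanup mentioned above can be run as a single sequence (only needed if kubeadm init fails and you want to start over):
# Undo kubeadm's changes and remove leftover state before retrying
$ kubeadm reset -f
$ rm -rf $HOME/.kube /etc/kubernetes/ /var/lib/etcd/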
(Run on the master node)
Installing Kubernetes with kubeadm sets up all the control-plane components for you, so there is no need to install etcd by hand.
## Initialize the cluster
## Everything needed for initialization was already configured in init-config.yaml above; just run:
$ kubeadm init --config=init-config.yaml
(Output shown on the master node)
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.60.143:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:464fc74833ffce2ec83745db47d93e323ff47255c551197c949efc8ba6bcba36
(Run on the master node)
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
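Since bash-completion was installed earlier, you can optionally enable kubectl tab completion as well (not in the original steps):
# Load kubectl completion in every new shell
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
$ source ~/.bashrc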
(Run on both worker nodes)
Simply copy the kubeadm join command (with its token) from the end of the master's init output and run it on each node; no further configuration is needed.
The token and the CA cert hash are different for every cluster initialization.
$ kubeadm join 192.168.60.143:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:464fc74833ffce2ec83745db47d93e323ff47255c551197c949efc8ba6bcba36
# If you lose the join command, generate a new one on the master
$ kubeadm token create --print-join-command
(Run on the master node)
As mentioned earlier, kubeadm init does not configure pod networking, so the master cannot yet communicate with the nodes over the pod network and every node shows as "NotReady". The nodes that joined via kubeadm join are, however, already visible on the master.
Likewise, the coredns pods sitting in Pending at this point is expected.
## List the nodes
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 44m v1.28.2
k8s-node01 NotReady <none> 25m v1.28.2
k8s-node02 NotReady <none> 25m v1.28.2
## Check the status of the pods running on the master
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66f779496c-ccj8c 0/1 Pending 0 52m <none> <none> <none> <none>
kube-system coredns-66f779496c-mvx6k 0/1 Pending 0 52m <none> <none> <none> <none>
kube-system etcd-k8s-master 1/1 Running 0 52m 192.168.60.143 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 0 52m 192.168.60.143 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 0 52m 192.168.60.143 k8s-master <none> <none>
kube-system kube-proxy-8fbwr 1/1 Running 0 33m 192.168.60.145 k8s-node02 <none> <none>
kube-system kube-proxy-h9xwc 1/1 Running 0 33m 192.168.60.144 k8s-node01 <none> <none>
kube-system kube-proxy-rzdtk 1/1 Running 0 52m 192.168.60.143 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 0 52m 192.168.60.143 k8s-master <none> <none>
(The subheadings below state which servers each step runs on.)
(Run on all three machines)
## Import the images; be sure to run these commands from the directory containing the image tarballs
$ ctr -n k8s.io i import flannel-cni-plugin-v1.1.2.tar
$ ctr -n k8s.io i import flannel.tar
# List the imported images
$ crictl images | grep flannel
docker.io/flannel/flannel-cni-plugin v1.1.2 7a2dcab94698c 8.25MB
docker.io/flannel/flannel v0.21.5 a6c0cb5dbd211 69.9MB
(Run on the master node)
$ kubectl apply -f kube-flannel.yaml
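To watch flannel come up before moving on (optional check; the namespace name matches the kube-flannel.yaml used here):
# Wait until every kube-flannel-ds pod is Running, then Ctrl-C
$ kubectl -n kube-flannel get pods -w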
(Run on both worker nodes)
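kubectl on the worker nodes needs a kubeconfig; the environment variable below points at /etc/kubernetes/admin.conf, which does not exist on a worker by default. One way to get it there (a sketch, assuming root SSH access to the master at 192.168.60.143) is to copy it over first:
# Copy the admin kubeconfig from the master to this node
$ scp root@192.168.60.143:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf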
## Configure the KUBECONFIG environment variable
$ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
## Apply it to the current shell
$ source ~/.bash_profile
(Can be run on any of the three machines)
## Check node status
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 23m v1.28.2
k8s-node01 Ready <none> 14m v1.28.2
k8s-node02 Ready <none> 14m v1.28.2
## Check the status of the pods running on the master
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel kube-flannel-ds-7rzg7 1/1 Running 0 5m13s 192.168.60.145 k8s-node02 <none> <none>
kube-flannel kube-flannel-ds-fxzg4 1/1 Running 0 5m13s 192.168.60.143 k8s-master <none> <none>
kube-flannel kube-flannel-ds-gp45f 1/1 Running 0 5m13s 192.168.60.144 k8s-node01 <none> <none>
kube-system coredns-66f779496c-ccj8c 1/1 Running 0 106m 10.244.0.2 k8s-master <none> <none>
kube-system coredns-66f779496c-mvx6k 1/1 Running 0 106m 10.244.2.2 k8s-node02 <none> <none>
kube-system etcd-k8s-master 1/1 Running 0 106m 192.168.60.143 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 0 106m 192.168.60.143 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 0 106m 192.168.60.143 k8s-master <none> <none>
kube-system kube-proxy-8fbwr 1/1 Running 0 87m 192.168.60.145 k8s-node02 <none> <none>
kube-system kube-proxy-h9xwc 1/1 Running 0 87m 192.168.60.144 k8s-node01 <none> <none>
kube-system kube-proxy-rzdtk 1/1 Running 0 106m 192.168.60.143 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 0 106m 192.168.60.143 k8s-master <none> <none>
## Check the pods in a specific namespace
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-25bzd 1/1 Running 0 23m
coredns-7ff77c879f-wp885 1/1 Running 0 23m
etcd-k8s-master 1/1 Running 0 24m
kube-apiserver-k8s-master 1/1 Running 0 24m
kube-controller-manager-k8s-master 1/1 Running 0 24m
kube-proxy-2tphl 1/1 Running 0 15m
kube-proxy-hqppj 1/1 Running 0 15m
kube-proxy-rfxw2 1/1 Running 0 23m
kube-scheduler-k8s-master 1/1 Running 0 24m
## Check all pods in every namespace
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-h727x 1/1 Running 0 77s
kube-flannel kube-flannel-ds-kbztr 1/1 Running 0 77s
kube-flannel kube-flannel-ds-nw9pr 1/1 Running 0 77s
kube-system coredns-7ff77c879f-25bzd 1/1 Running 0 24m
kube-system coredns-7ff77c879f-wp885 1/1 Running 0 24m
kube-system etcd-k8s-master 1/1 Running 0 24m
kube-system kube-apiserver-k8s-master 1/1 Running 0 24m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 24m
kube-system kube-proxy-2tphl 1/1 Running 0 15m
kube-system kube-proxy-hqppj 1/1 Running 0 15m
kube-system kube-proxy-rfxw2 1/1 Running 0 24m
kube-system kube-scheduler-k8s-master 1/1 Running 0 24m
## Check the status of the cluster components
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
(Deploy the web UI from the master node)
## Step 1: Download the resource manifest
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
## Step 2: Edit the manifest; around line 39, modify the Service it defines
$ vi recommended.yaml
spec:
  type: NodePort # newly added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31000 # choose your own NodePort for web access
  selector:
    k8s-app: kubernetes-dashboard
## Step 3: Deploy the dashboard
$ kubectl apply -f recommended.yaml
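To confirm the Service was exposed on the chosen NodePort (optional check; the kubernetes-dashboard namespace is created by recommended.yaml):
# The kubernetes-dashboard Service should show type NodePort and port 443:31000
$ kubectl get svc -n kubernetes-dashboard
# The UI is then reachable at https://<any-node-ip>:31000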
## Step 4: Create the admin-user ServiceAccount and its RBAC binding
$ cat > dashboard-adminuser.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
## Step 5: Create the resources
$ kubectl create -f dashboard-adminuser.yml
## Step 6: Get a login token for the admin-user account
## Earlier versions generated a token automatically; with v1.28.2 you create one manually:
$ kubectl create token admin-user --namespace kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6InFBcUVIQ3kxUV9YOTlteGhULUxTTHRkT1FaRU02Y3d2Vk1OcWRkaE45eE0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzIwNTE3NTMyLCJpYXQiOjE3MjA1MTM5MzIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYTBkZDY1MTgtOWZiNi00MjhjLTllNTktOTNiNWNmMDhiZTJiIn19LCJuYmYiOjE3MjA1MTM5MzIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.SoY_tcafcrEYfmVXvrwFpnB4I2DV1K8KcshRkJykmOQDIqUHsk96rovj3U5njHRGuOx0b37SlSqjVW53hBHsni2l53J4DFV9IxGzPtD_mtWcd0AZDTcWtAXa9x4CyHB-2SH5vRxaRODnVig9F88v9WvYOE-2DVr4Zv9Pw6itcPnqF_4uFEt0PYQew7AnGtqixENonG3m3baMg5r5On0qczXe2iVKHYVFpEgdIud5Y4zQJWJ5hOCHrbKhFZxaRv5E601XOrXSUsQO834_rc4LjQY4DFs2M39h5v9SMEpAMXQ67g552hWfBzFEnN4hTVQxYHBCuR6CHZkkxhgUOXCFqg
Reference: https://blog.csdn.net/weixin_73059729/article/details/139695528