For a better reading experience, see my personal blog.
| Hostname | Public IP | Private IP | OS | Spec |
|---|---|---|---|---|
| k8s-master | 119.3.168.188 | 192.168.0.194 | CentOS 7.6 | 4 cores, 16 GB |
| k8s-node1 | 121.36.55.3 | 192.168.0.130 | CentOS 7.6 | 4 cores, 16 GB |
| k8s-node2 | 124.70.19.106 | 192.168.0.245 | CentOS 7.6 | 4 cores, 16 GB |
```bash
# Set the hostname on each machine (run the matching command on each host)
hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on node1
hostnamectl set-hostname k8s-node2    # on node2

# Add hosts entries on every machine
cat >> /etc/hosts <<EOF
192.168.0.194 k8s-master
192.168.0.130 k8s-node1
192.168.0.245 k8s-node2
EOF
```
```bash
# Remove any old Docker packages first
yum remove docker docker-common container-selinux docker-selinux docker-engine
```
```bash
# Add Docker's yum repo (Aliyun mirror)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install the yum utilities
yum install -y yum-utils
# Add the Docker yum repo (equivalent to the wget above; either one is enough)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the yum cache
yum makecache fast
# List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
# Install a specific version: yum -y install docker-ce-<version>
yum -y install docker-ce-20.10.11-3.el7
# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
# Check the Docker version
docker -v
```
```bash
# Configure the Docker daemon: systemd cgroup driver, log rotation, registry mirror
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
# Restart Docker to apply
systemctl daemon-reload
systemctl enable docker && systemctl restart docker && systemctl status docker
```
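After the restart, it is worth confirming that Docker actually picked up the systemd cgroup driver. A quick sanity check (a sketch; the snippet falls back to a placeholder when Docker is not installed or not running):

```shell
# Read the active cgroup driver from `docker info`; fall back to "unknown"
# when Docker is unavailable.
driver="$(docker info 2>/dev/null | grep -i 'cgroup driver' || echo 'Cgroup Driver: unknown')"
echo "$driver"
```

Setting the driver explicitly in daemon.json matters because kubeadm warns during preflight when the kubelet and Docker disagree on the cgroup driver.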
```bash
# Install common dependencies
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Disable firewalld, switch to iptables, and let iptables see bridged traffic
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
```
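After `sysctl --system`, verify that both bridge settings actually took effect; both should print `1`. A small check (a sketch; if a key errors instead of printing, the `br_netfilter` module is not loaded):

```shell
# Print each bridge sysctl, or a hint if the key does not exist yet
keys="net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables"
for key in $keys; do
  sysctl -n "$key" 2>/dev/null || echo "$key not set (is br_netfilter loaded?)"
done
```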
```bash
# Configure the Aliyun Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.20.11 kubectl-1.20.11 kubeadm-1.20.11

# systemctl's enable/disable/mask subcommands accept --now, which also
# starts (or stops) the service immediately on activation
systemctl enable --now kubelet

# Check the installed version
kubelet --version
```
To uninstall the K8s components later, run:

```bash
# Before uninstalling, run kubeadm reset to wipe the cluster state
echo y | kubeadm reset
# Remove the management packages
yum erase -y kubelet kubectl kubeadm kubernetes-cni
```
Normally a plain `kubeadm init` would be enough, but `init` pulls its images from k8s.gcr.io, which is blocked in China, so we first need a script to fetch those images.
[init]: initialize with the specified version
[preflight]: pre-initialization checks and downloading the required Docker images
[kubelet-start]: generate the kubelet config file "/var/lib/kubelet/config.yaml"; the kubelet cannot start without it, which is why the kubelet actually fails to start before initialization.
[certificates]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki.
[kubeconfig]: generate the KubeConfig files under /etc/kubernetes; components use them to communicate with each other.
[control-plane]: install the Master components from the YAML files under /etc/kubernetes/manifests.
[etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: wait for the Master components deployed by control-plane to start.
[apiclient]: check the health of the Master component services.
[uploadconfig]: upload the configuration.
[kubelet]: configure the kubelet via a ConfigMap.
[patchnode]: record CNI information on the Node via annotations.
[mark-control-plane]: label the current node with the Master role and a NoSchedule taint, so by default no Pods are scheduled onto the Master node.
[bootstrap-token]: generate the bootstrap token; note it down, as it is needed later when adding nodes with kubeadm join.
[addons]: install the CoreDNS and kube-proxy add-ons.
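These phases are not just log labels: `kubeadm init phase` exposes each one as a subcommand, so a single phase can be re-run in isolation. A sketch (assumes kubeadm is installed and the kubeadm-config.yaml used for init is in the current directory; the command is built as a string so the snippet degrades gracefully when kubeadm is absent):

```shell
# Re-run only the certificate-generation phase of kubeadm init
cmd="kubeadm init phase certs all --config kubeadm-config.yaml"
if command -v kubeadm >/dev/null 2>&1; then
  $cmd
else
  echo "kubeadm not installed; would run: $cmd"
fi
```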
```bash
kubeadm config images list
# Output: these are the components K8s requires, but since k8s.gcr.io
# is blocked, they cannot be pulled directly with docker pull
k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
```
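The workaround below relies on the fact that the Aliyun mirror keeps the image name and tag and changes only the registry prefix, so each blocked reference maps mechanically to a pullable one. A small sketch of that renaming rule (`mirror_name` is a hypothetical helper for illustration, not part of any tool):

```shell
# Map a k8s.gcr.io reference to its Aliyun mirror equivalent by swapping
# the registry prefix; the image name and tag stay the same.
mirror_name() {
  echo "$1" | sed 's|^k8s.gcr.io/|registry.cn-hangzhou.aliyuncs.com/google_containers/|'
}
mirror_name k8s.gcr.io/kube-apiserver:v1.20.15
# → registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
```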
```bash
## Put the script wherever you like; just remember the path
cat > /root/k8s-script/pull_k8s_images.sh << "EOF"
set -o errexit
set -o nounset
set -o pipefail

## Versions to download
KUBE_VERSION=v1.20.15
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.13-0
DNS_VERSION=1.7.0

## The original (blocked) registry
GCR_URL=k8s.gcr.io

## The mirror registry to pull from instead (gotok8s also works)
DOCKERHUB_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

## The image list
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)

## Pull each image from the mirror, retag it to the k8s.gcr.io name
## kubeadm expects, then delete the mirror-tagged copy
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done
EOF
```
```bash
# Copy the script to the worker nodes (template, then the actual commands)
scp /root/k8s-script/pull_k8s_images.sh root@<node-ip>:/root/k8s-script/
scp /root/k8s-script/pull_k8s_images.sh root@121.36.55.3:/root/k8s-script/pull_k8s_images.sh
scp /root/k8s-script/pull_k8s_images.sh root@124.70.19.106:/root/k8s-script/pull_k8s_images.sh

# Run the script (on every machine)
bash /root/k8s-script/pull_k8s_images.sh
```
```bash
docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.15   46e2cd1b2594   4 months ago    99.7MB
k8s.gcr.io/kube-scheduler            v1.20.15   9155e4deabb3   4 months ago    47.3MB
k8s.gcr.io/kube-controller-manager   v1.20.15   d6296d0e06d2   4 months ago    116MB
k8s.gcr.io/kube-apiserver            v1.20.15   323f6347f5e2   4 months ago    122MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   21 months ago   253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   23 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   2 years ago     683kB
```
```bash
vim kubeadm-config.yaml
```

```yaml
# Items to modify are flagged in the comments below
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.194  # this machine's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master  # this machine's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
# virtual IP and haproxy port (optional)
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # adjust the image repository to your own situation
kind: ClusterConfiguration
kubernetesVersion: v1.20.15  # match the version used earlier; check with kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"  # add the pod subnet; this fixed value is fine
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# add the block below as-is
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```
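Before running init for real, the config file can be exercised with kubeadm's `--dry-run` flag, which prints what would be done without changing the host. A sketch (guarded so it only executes where kubeadm is installed):

```shell
# Validate kubeadm-config.yaml without touching the host
cmd="kubeadm init --config=kubeadm-config.yaml --dry-run"
if command -v kubeadm >/dev/null 2>&1; then
  $cmd
else
  echo "kubeadm not installed; would run: $cmd"
fi
```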
```bash
kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

# Normal output
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
......
```
```bash
# Run on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Push to the node{1..X} machines; create /root/.kube manually first if it does not exist
scp /etc/kubernetes/admin.conf root@121.36.55.3:/root/.kube/config
scp /etc/kubernetes/admin.conf root@124.70.19.106:/root/.kube/config
```
```bash
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   9m27s   v1.20.11
```
Back in the master's init output log there is a join command for worker nodes; run it on both worker machines.
```bash
kubeadm join <master-ip>:6443 --token xxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxx

# Normal output
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
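The bootstrap token in the join command expires after 24 hours (the ttl set in kubeadm-config.yaml). If the init log is lost or the token has expired, a fresh join command can be generated on the master with `kubeadm token create` (a guarded sketch, so it only executes where kubeadm is installed):

```shell
# Print a new, ready-to-paste join command on the master
cmd="kubeadm token create --print-join-command"
if command -v kubeadm >/dev/null 2>&1; then
  $cmd
else
  echo "kubeadm not installed; would run: $cmd"
fi
```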
```bash
[root@k8s-node2 ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   14m     v1.20.11
k8s-node1    NotReady   <none>                 3m39s   v1.20.11
k8s-node2    NotReady   <none>                 57s     v1.20.11
```
```bash
# Pull the image first; this is slow from inside China
docker pull quay.io/coreos/flannel:v0.14.0

# Get the yml file from
# https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

# Check the pods: the flannel components are up and running. By default the
# system components are installed in the kube-system namespace
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-6wmct             1/1     Running   0          51m
coredns-7f89b7bc75-nvnnr             1/1     Running   0          51m
etcd-k8s-master                      1/1     Running   0          51m
kube-apiserver-k8s-master            1/1     Running   0          51m
kube-controller-manager-k8s-master   1/1     Running   0          51m
kube-flannel-ds-dbwqc                1/1     Running   0          12m
kube-flannel-ds-pfk6t                1/1     Running   0          12m
kube-flannel-ds-q8tkd                1/1     Running   0          12m
kube-proxy-jcll5                     1/1     Running   0          40m
kube-proxy-l68cn                     1/1     Running   0          37m
kube-proxy-qwf5z                     1/1     Running   0          51m
kube-scheduler-k8s-master            1/1     Running   0          51m

# Check the nodes again: the status has changed to Ready
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   51m   v1.20.11
k8s-node1    Ready    <none>                 40m   v1.20.11
k8s-node2    Ready    <none>                 37m   v1.20.11
```
To uninstall flannel, run:

```bash
kubectl delete -f kube-flannel.yml
```