
Building a Kubernetes Cluster on Huawei Cloud (x86 Architecture)

For a better reading experience, see my personal blog.

Server preparation

Hostname     Public IP      Private IP     OS          Spec
k8s-master   119.3.168.188  192.168.0.194  CentOS 7.6  4 vCPUs, 16 GB
k8s-node1    121.36.55.3    192.168.0.130  CentOS 7.6  4 vCPUs, 16 GB
k8s-node2    124.70.19.106  192.168.0.245  CentOS 7.6  4 vCPUs, 16 GB

Initial system setup

Set the hostnames

# run the matching command on each machine
hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on node1
hostnamectl set-hostname k8s-node2    # on node2

Configure the hosts file

cat >> /etc/hosts <<EOF
192.168.0.194  k8s-master
192.168.0.130  k8s-node1
192.168.0.245  k8s-node2
EOF
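Re-running the block above appends duplicate entries every time. A minimal idempotent variant, as a sketch (the `add_host` helper name and the scratch file are my own; on a real node you would pass `/etc/hosts`):

```shell
#!/usr/bin/env bash
# add_host FILE IP NAME — append "IP NAME" to FILE only if NAME is not
# already mapped there (hypothetical helper, not from the original post)
add_host() {
  local file=$1 ip=$2 name=$3
  grep -qE "[[:space:]]${name}\$" "$file" 2>/dev/null || echo "${ip}  ${name}" >> "$file"
}

hosts=$(mktemp)   # stand-in for /etc/hosts in this demo
add_host "$hosts" 192.168.0.194 k8s-master
add_host "$hosts" 192.168.0.130 k8s-node1
add_host "$hosts" 192.168.0.245 k8s-node2
add_host "$hosts" 192.168.0.245 k8s-node2   # second call is a no-op
cat "$hosts"
```

Run against a scratch file here; the guard makes the block safe to re-run during repeated provisioning.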

Install Docker

Remove old versions (skip if Docker was never installed)

yum remove docker docker-common container-selinux docker-selinux docker-engine

Install the current version

# install required tooling
yum install -y yum-utils

# add Docker's yum repository (Aliyun mirror)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# refresh the yum cache
yum makecache fast

# list the available docker versions
yum list docker-ce --showduplicates | sort -r

# install a specific version: yum -y install docker-ce-<version>
yum -y install docker-ce-20.10.11-3.el7

# start docker and enable it at boot
systemctl enable docker && systemctl start docker

# check the installed version
docker -v

Configure daemon.json (the systemd cgroup driver here must match what the kubelet expects)

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": [
    "https://82m9ar63.mirror.aliyuncs.com"
  ]
}
EOF

# restart docker
systemctl daemon-reload
systemctl enable docker && systemctl restart docker && systemctl status docker

Install kubeadm (all three machines)

Environment configuration

# install dependency packages
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

# set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# disable swap (by default the kubelet refuses to start with swap enabled)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# disable firewalld; have iptables handle bridged traffic
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

# load br_netfilter so bridged traffic is visible to iptables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
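After `sysctl --system`, it is worth confirming the bridge settings actually took effect, since the keys do not exist at all until br_netfilter is loaded. A small check sketch (the `check_sysctl` helper is my own; it skips missing keys so it can run before the module is loaded):

```shell
#!/usr/bin/env bash
# check_sysctl KEY WANT — report whether /proc/sys reflects the desired value
check_sysctl() {
  local key=$1 want=$2 path got
  path="/proc/sys/${key//./\/}"     # sysctl keys map dots to path separators
  if [ -r "$path" ]; then
    got=$(cat "$path")
    if [ "$got" = "$want" ]; then echo "OK   $key = $got"
    else echo "WARN $key = $got (want $want)"; fi
  else
    echo "SKIP $key (key missing; is br_netfilter loaded?)"
  fi
}

check_sysctl net.bridge.bridge-nf-call-iptables 1
check_sysctl net.bridge.bridge-nf-call-ip6tables 1
```

A WARN or SKIP line here usually means `modprobe br_netfilter` followed by `sysctl --system` is still needed.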

Install kubelet, kubeadm, and kubectl

# configure the Aliyun repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# install kubelet, kubeadm, and kubectl at matching versions
yum install -y kubelet-1.20.11 kubectl-1.20.11 kubeadm-1.20.11

# the --now flag added to systemctl enable/disable/mask both enables the unit and starts (or stops) it in one step
systemctl enable --now kubelet

# check the installed version
kubelet --version

To uninstall the k8s components later, run:

# run kubeadm reset first to clear the cluster state
echo y | kubeadm reset

# then remove the management packages
yum erase -y kubelet kubectl kubeadm kubernetes-cni

Pull the required images (all three machines)

Normally a plain kubeadm init would suffice, but init pulls its images from k8s.gcr.io, which is unreachable from mainland China, so we script the downloads from a mirror instead.

What kubeadm init does

[init]: initialize for the specified version
[preflight]: pre-initialization checks; download the required Docker images
[kubelet-start]: generate the kubelet config file /var/lib/kubelet/config.yaml; without it the kubelet cannot start, which is why the kubelet fails to run before initialization
[certificates]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki
[kubeconfig]: generate the kubeconfig files under /etc/kubernetes; components use them to communicate with each other
[control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests
[etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
[wait-control-plane]: wait for the master components deployed by control-plane to start
[apiclient]: check the status of the master components
[uploadconfig]: upload the configuration
[kubelet]: configure the kubelet via a ConfigMap
[patchnode]: record CNI information on the node via annotations
[mark-control-plane]: label the current node with the master role and taint it NoSchedule, so the master is not used to run ordinary Pods by default
[bootstrap-token]: generate the token used later by kubeadm join to add nodes; note it down
[addons]: install the CoreDNS and kube-proxy add-ons

List the images to download

kubeadm config images list

# output: these are the required K8S components; because k8s.gcr.io is
# blocked, they cannot be pulled directly with docker pull
k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Note that the listed tag, v1.20.15, is a later patch release than the installed kubeadm 1.20.11; the pull script and the init config below use v1.20.15 to match this list.

Write the pull script

## pick any location you like; just remember it
mkdir -p /root/k8s-script
cat >/root/k8s-script/pull_k8s_images.sh << "EOF"
set -o errexit
set -o nounset
set -o pipefail

## versions to download
KUBE_VERSION=v1.20.15
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.13-0
DNS_VERSION=1.7.0

## the original (blocked) registry
GCR_URL=k8s.gcr.io

## the mirror registry to pull from (gotok8s also works)
DOCKERHUB_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

## image list
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)

## pull each image from the mirror, retag it to the k8s.gcr.io name that
## kubeadm expects, then delete the mirror-named copy
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done
EOF
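The retag step in the script above is just a registry-prefix swap: everything after the last `/` (the `name:tag` part) stays the same. A tiny illustration of the mapping, as pure string manipulation with no docker needed (the `gcr_name` helper is my own):

```shell
#!/usr/bin/env bash
# gcr_name MIRROR_REF — map a mirror image reference back to the
# k8s.gcr.io name kubeadm expects (hypothetical helper for illustration)
gcr_name() {
  local mirror_ref=$1
  echo "k8s.gcr.io/${mirror_ref##*/}"   # keep only "name:tag" after the last /
}

gcr_name registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
# → k8s.gcr.io/kube-apiserver:v1.20.15
```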

Copy the script to the worker nodes

# template (the target directory must already exist on the node)
scp /root/k8s-script/pull_k8s_images.sh root@<node-IP>:/root/k8s-script/

scp /root/k8s-script/pull_k8s_images.sh root@121.36.55.3:/root/k8s-script/pull_k8s_images.sh
scp /root/k8s-script/pull_k8s_images.sh root@124.70.19.106:/root/k8s-script/pull_k8s_images.sh
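With more than two workers the per-node scp lines get repetitive; a loop sketch (the node list and the `SCP` override are my own conventions — remove the `SCP=echo` line to actually copy):

```shell
#!/usr/bin/env bash
# push the pull script to every worker in one loop
NODES=("121.36.55.3" "124.70.19.106")
SRC=/root/k8s-script/pull_k8s_images.sh

push_all() {
  local ip
  for ip in "${NODES[@]}"; do
    $SCP "$SRC" "root@${ip}:${SRC}"   # $SCP is "scp" for real use
  done
}

SCP=echo   # dry run for this demo: print the commands instead of copying
push_all
```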

Run the script (on every machine)

bash /root/k8s-script/pull_k8s_images.sh

Check the downloaded images

docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.15   46e2cd1b2594   4 months ago    99.7MB
k8s.gcr.io/kube-scheduler            v1.20.15   9155e4deabb3   4 months ago    47.3MB
k8s.gcr.io/kube-controller-manager   v1.20.15   d6296d0e06d2   4 months ago    116MB
k8s.gcr.io/kube-apiserver            v1.20.15   323f6347f5e2   4 months ago    122MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   21 months ago   253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   23 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   2 years ago     683kB
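Instead of eyeballing the table, the check can be scripted by diffing the tags docker reports against the required list. A sketch (the `missing_images` function is my own; it reads `repo:tag` lines on stdin, so on a real node feed it `docker images --format '{{.Repository}}:{{.Tag}}'`):

```shell
#!/usr/bin/env bash
# missing_images — read "repo:tag" lines on stdin and print every
# required reference that is not among them
required="k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0"

missing_images() {
  local have
  have=$(mktemp)
  cat > "$have"                            # capture the list from stdin
  grep -vxF -f "$have" <<<"$required" || true   # required minus present
  rm -f "$have"
}

# sample: with only two images present, the other five are reported
printf 'k8s.gcr.io/pause:3.2\nk8s.gcr.io/etcd:3.4.13-0\n' | missing_images
```

Empty output means all seven images are in place and kubeadm init will not need the network.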

Initialize the master node (master only)

Edit the config file

vim kubeadm-config.yaml
# the items to modify are marked below
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.194     # this machine's private IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master                    # this machine's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # adjust to the mirror you actually use
kind: ClusterConfiguration
kubernetesVersion: v1.20.15          # must match the images pulled earlier; check with kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"         # add the pod subnet; this fixed value works with flannel
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# append the block below as-is to switch kube-proxy to IPVS mode
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Run the initialization command

kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

# expected output
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
......

Follow the printed instructions

# run on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# copy the kubeconfig to the node machines; create /root/.kube there first if it does not exist
scp /etc/kubernetes/admin.conf root@121.36.55.3:/root/.kube/config
scp /etc/kubernetes/admin.conf root@124.70.19.106:/root/.kube/config

Check the current node status

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   9m27s   v1.20.11

Join the worker nodes to the master (run on the workers)

The init log on the master ends with the join command for worker nodes; run it on both worker machines:

kubeadm join <master-IP>:6443 --token xxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxx

# expected output
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
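Because init was piped through `tee kubeadm-init.log`, the join command can be recovered from that file later instead of being retyped. A parsing sketch (the `extract_join` helper and the sample log content below are my own; on the master you would point it at the real kubeadm-init.log, or simply run `kubeadm token create --print-join-command` once the original token expires after its 24h TTL):

```shell
#!/usr/bin/env bash
# extract_join FILE — print the two-line join command from an init log
# as a single line (the log wraps it with a trailing backslash)
extract_join() {
  grep -A1 '^kubeadm join' "$1" | tr -d '\\' | xargs echo
}

# sample log in the shape kubeadm init produces (illustrative values)
log=$(mktemp)
cat >"$log" <<'EOF'
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.194:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234abcd
EOF

extract_join "$log"
```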

View the cluster nodes

kubectl get nodes

[root@k8s-node2 ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   14m     v1.20.11
k8s-node1    NotReady   <none>                 3m39s   v1.20.11
k8s-node2    NotReady   <none>                 57s     v1.20.11

Deploy the flannel network (master only)

Install the flannel network plugin

# pull the image first; this can be slow from inside China
docker pull quay.io/coreos/flannel:v0.14.0

Configure flannel

# download kube-flannel.yml from https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

# check the pods: the flannel components are now running. By default, system components are installed in the kube-system namespace
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-6wmct             1/1     Running   0          51m
coredns-7f89b7bc75-nvnnr             1/1     Running   0          51m
etcd-k8s-master                      1/1     Running   0          51m
kube-apiserver-k8s-master            1/1     Running   0          51m
kube-controller-manager-k8s-master   1/1     Running   0          51m
kube-flannel-ds-dbwqc                1/1     Running   0          12m
kube-flannel-ds-pfk6t                1/1     Running   0          12m
kube-flannel-ds-q8tkd                1/1     Running   0          12m
kube-proxy-jcll5                     1/1     Running   0          40m
kube-proxy-l68cn                     1/1     Running   0          37m
kube-proxy-qwf5z                     1/1     Running   0          51m
kube-scheduler-k8s-master            1/1     Running   0          51m

# check the nodes again: their status has changed to Ready
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   51m   v1.20.11
k8s-node1    Ready    <none>                 40m   v1.20.11
k8s-node2    Ready    <none>                 37m   v1.20.11
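For scripting (e.g. a wait loop after applying flannel), output in the shape of `kubectl get pod -n kube-system` can be checked mechanically: every data row's STATUS column should read Running. A sketch that parses such output from stdin (the `all_running` function is my own; on the master you would pipe it real `kubectl get pod -n kube-system` output):

```shell
#!/usr/bin/env bash
# all_running — exit 0 iff every pod row (column 3) after the header
# in "kubectl get pod"-style output reads Running
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# sample input in the same shape as the kubectl output above
sample="NAME        READY   STATUS    RESTARTS   AGE
etcd-k8s-master  1/1  Running  0  51m
kube-flannel-ds-dbwqc  1/1  Running  0  12m"
printf '%s\n' "$sample" | all_running && echo "all pods Running"
# → all pods Running
```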

To uninstall flannel, run:

kubectl delete -f kube-flannel.yml
