Installing a Kubernetes 1.14.1 + Calico Cluster on CentOS

Kubernetes version: v1.14.1
Calico version: v3.1.3
Docker version: 18.09.5
OS version: CentOS 7.4

1. Install Docker (all k8s nodes)

1) Install Docker dependency packages

yum install -y yum-utils device-mapper-persistent-data lvm2

2) Configure the Docker yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3) Refresh the yum package index

yum makecache fast 

4) List the Docker versions available in the repository

yum list docker-ce --showduplicates|sort -r 

5) Install the latest Docker

yum -y install docker-ce

Or install a specific version as needed:

yum install docker-ce-17.09.0.ce -y

To change Docker's default storage path, edit the service unit:

vi /usr/lib/systemd/system/docker.service

# Append --graph /data/docker to the existing ExecStart line, for example:
ExecStart=/usr/bin/dockerd --graph /data/docker
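
systemd caches unit files, so reload them after the edit (and restart Docker if it is already running). Once Docker is up you can confirm the new storage root; /data/docker is this guide's example path, so substitute your own:

systemctl daemon-reload
# After Docker has been started (step 6 below), verify the storage path:
docker info | grep "Docker Root Dir"
# Expected output: Docker Root Dir: /data/docker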

6) Start Docker and enable it at boot

systemctl start docker && systemctl enable docker

2. Adjust server configuration (all k8s nodes)

1) Disable the firewall

systemctl disable firewalld.service 
systemctl stop firewalld.service

2) Disable SELinux

setenforce 0
# Or disable it permanently:
vi /etc/selinux/config
# and change the following setting:
SELINUX=disabled
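
A non-interactive equivalent of the permanent edit above (a sketch that rewrites /etc/selinux/config in place):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config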

3) Disable swap

swapoff -a
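
swapoff -a only lasts until the next reboot; to keep swap off permanently (kubelet refuses to run with swap enabled), also comment out the swap entry in /etc/fstab. A sketch:

# Comment out every swap mount so it stays disabled after reboot
sed -i '/ swap / s/^/#/' /etc/fstab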

4) Set the hostname on each node

# Run the matching command on the corresponding node:
hostnamectl --static set-hostname  k8s-master
hostnamectl --static set-hostname  k8s-node-1
hostnamectl --static set-hostname  k8s-node-2

5) Add hostname/IP entries to /etc/hosts on all nodes

192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
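
A one-shot way to append these entries on every node (the IPs are this guide's example addresses; substitute your own):

cat >> /etc/hosts <<EOF
192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
EOF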

6) Set kernel parameters for Kubernetes

vi /etc/sysctl.d/k8s.conf
# Add the following settings:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0

Apply the settings:

sysctl --system
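
On a stock CentOS 7 kernel the bridge-nf-call sysctls only exist once the br_netfilter module is loaded, so if sysctl --system reports those keys as missing, load the module first (a sketch, including persistence across reboots):

modprobe br_netfilter
# Load the module automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf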

3. Install kubelet, kubeadm, and kubectl (all k8s nodes)

1) Configure the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

2) Install kubelet, kubeadm, and kubectl, then start kubelet

# Pin the versions so they match the v1.14.1 images prepared below
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
systemctl enable kubelet && systemctl start kubelet
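
Before continuing, it is worth confirming that the expected 1.14.1 binaries were installed:

kubeadm version -o short
kubelet --version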

4. Initialize the master (master node)

1) Pull the required images (because of network restrictions in mainland China, pull them from a mirror repository and re-tag them)

docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

docker tag mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.1           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.1  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.1           
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.3.10                      
docker rmi coredns/coredns:1.3.1
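
The repetitive pull/tag/rmi sequence above can be collapsed into a short loop; a sketch assuming the same mirror namespace and image list:

images=(
  kube-apiserver:v1.14.1
  kube-controller-manager:v1.14.1
  kube-scheduler:v1.14.1
  kube-proxy:v1.14.1
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi "mirrorgooglecontainers/${img}"
done
# coredns is published under its own namespace
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1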

To install a different Kubernetes version, list the images it needs with the following command and pull them in advance:

kubeadm config images list

2) Initialize the master node

kubeadm init --kubernetes-version=v1.14.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16

--kubernetes-version: specifies the Kubernetes version.
--apiserver-advertise-address: specifies which of the master's network interfaces to advertise; if omitted, kubeadm automatically picks the interface with the default gateway.
--pod-network-cidr: specifies the pod network range. Its value depends on the network plugin in use; this guide uses Calico. (Note: the Calico v3.3 manifest applied below defaults its IP pool to 192.168.0.0/16, so either pass that CIDR here or edit CALICO_IPV4POOL_CIDR in the manifest to match.)

When initialization completes, output like the following indicates success (this sample was captured from an earlier run that used a config file and Kubernetes v1.13.1):

[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209   10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0 podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

3) As instructed in the output above, run the following to grant the current user kubectl access:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
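
A quick sanity check that kubectl can now reach the API server (the master will report NotReady until a network plugin is installed in the next step):

kubectl cluster-info
kubectl get nodes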

4) Install the Calico network plugin by applying the following manifests:

kubectl apply -f  https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f  https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

5) After a few minutes, run the following command; if every pod shows Running, the installation succeeded:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-node-kkt6n                    2/2     Running   0          56m   192.168.212.46   k8s-master   <none>           <none>
kube-system   calico-node-pflcd                    2/2     Running   0          55m   192.168.212.20   k8s-node-1   <none>           <none>
kube-system   coredns-fb8b8dccf-68vlz              1/1     Running   0          58m   192.168.0.4      k8s-master   <none>           <none>
kube-system   coredns-fb8b8dccf-86vxc              1/1     Running   0          58m   192.168.0.5      k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          57m   192.168.212.46   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          57m   192.168.212.46   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          57m   192.168.212.46   k8s-master   <none>           <none>
kube-system   kube-proxy-7qhkh                     1/1     Running   0          58m   192.168.212.46   k8s-master   <none>           <none>
kube-system   kube-proxy-9d68x                     1/1     Running   0          55m   192.168.212.20   k8s-node-1   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          56m   192.168.212.46   k8s-master   <none>           <none>

5. Add slave nodes

1) Using the join command printed in step 4.2, run the following on each slave node to join it to the cluster:

kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

If the original token has expired, regenerate the token and sha256 hash with:

kubeadm token create --ttl 0
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
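
On kubeadm 1.14 you can also have kubeadm assemble the whole join command for you, instead of combining the token and hash by hand:

kubeadm token create --print-join-command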

2) Run the following to check the cluster status:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   62m   v1.14.1
k8s-node-1   Ready    <none>   59m   v1.14.1
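
The ROLES column shows <none> for workers because kubeadm only labels the master. If you want a role displayed for workers too, a purely cosmetic sketch (kubectl derives ROLES from node-role.kubernetes.io/ labels):

kubectl label node k8s-node-1 node-role.kubernetes.io/node=
kubectl label node k8s-node-2 node-role.kubernetes.io/node=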