1. Environment information
1) Freshly install three CentOS 7 virtual machines and make sure they can all ping one another:
- master:192.168.32.100
-
- node1:192.168.32.110
-
- node2:192.168.32.120
2) Disable the firewall
- [lxw@localhost yum.repos.d]$ sudo systemctl stop iptables
- [sudo] password for lxw:
- Failed to stop iptables.service: Unit iptables.service not loaded.
- [lxw@localhost yum.repos.d]$
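The failure above is expected: a stock CentOS 7 install ships firewalld rather than an iptables service. A minimal sketch (my substitution, not from the original transcript) of disabling the firewall on all three machines:
- sudo systemctl stop firewalld
- sudo systemctl disable firewalld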
3) Install basic utilities
[lxw@localhost yum.repos.d]$ sudo yum -y install net-tools wget vim ntp
4) Set the hostname on each of the three hosts
- [lxw@localhost /]$ sudo hostnamectl --static set-hostname k8s_master
- [sudo] password for lxw:
- [lxw@localhost /]$ uname -a
- Linux k8s_master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
-
- [lxw@localhost root]$ sudo hostnamectl --static set-hostname k8s_node1
- [lxw@localhost root]$ uname -a
- Linux k8s_node1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- [lxw@localhost root]$
-
- [lxw@localhost root]$ sudo hostnamectl --static set-hostname k8s_node2
- [lxw@localhost root]$ uname -a
- Linux k8s_node2 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- [lxw@localhost root]$
5) Set up /etc/hosts (do this on all three machines; use >> to append so the existing localhost entries are preserved)
- [root@k8s_master ~]# cat <<EOF >> /etc/hosts
- > 192.168.32.100 k8s_master
- > 192.168.32.110 k8s_node1
- > 192.168.32.120 k8s_node2
- > EOF
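A quick check, not in the original, that the new names resolve:
- ping -c 1 k8s_node1
- ping -c 1 k8s_node2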
2. Install Docker on every node
Note: this step can be skipped; Docker is pulled in as a dependency when Kubernetes is installed.
1) Refresh the package index
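The original gives no commands for this step; a plausible sketch is simply refreshing the yum metadata:
- sudo yum clean all
- sudo yum makecache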
2) Remove old versions
- # Remove any old Docker packages
- sudo yum remove docker \
- docker-client \
- docker-client-latest \
- docker-common \
- docker-latest \
- docker-latest-logrotate \
- docker-logrotate \
- docker-selinux \
- docker-engine-selinux \
- docker-engine
- # Remove all old Docker data
- sudo rm -rf /var/lib/docker
3) Install the required packages
- # Install dependencies
- sudo yum install -y yum-utils \
- device-mapper-persistent-data \
- lvm2
4) Configure the yum repo
- # Remove the old repo file
- sudo rm -rf /etc/yum.repos.d/docker-ce.repo
-
- # Add the repo (using the Aliyun mirror)
- sudo yum-config-manager \
- --add-repo \
- http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5) Install
- # Rebuild the yum metadata cache
- sudo yum makecache fast
-
- # Install the latest stable Docker CE
- sudo yum install -y docker-ce
6) Start Docker and enable it at boot
- # Start the Docker engine and enable it at boot
- sudo systemctl start docker
- sudo systemctl enable docker
7) Test
- [lxw@localhost yum.repos.d]$ sudo docker search centos
- NAME DESCRIPTION STARS OFFICIAL AUTOMATED
- centos The official build of CentOS. 5917 [OK]
- ansible/centos7-ansible Ansible on Centos7 128 [OK]
- jdeathe/centos-ssh OpenSSH / Supervisor / EPEL/IUS/SCL
8) Create the docker group and add the current user
- [lxw@localhost yum.repos.d]$ sudo groupadd docker
- [sudo] password for lxw:
- groupadd: group 'docker' already exists
- [lxw@localhost yum.repos.d]$ sudo usermod -aG docker lxw
- [lxw@localhost yum.repos.d]$ docker run hello-world
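Note that the group change only takes effect in a new login session, so running docker as lxw immediately may still fail with a permission error; a sketch of working around that:
- # start a shell with the new group applied (or simply log out and back in)
- newgrp docker
- docker run hello-world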
Reference: https://www.jianshu.com/p/e6b946c79542
3. Master node configuration
Installing etcd
1) Install etcd
[lxw@k8s_master root]$ sudo yum -y install etcd
2) Edit the configuration file
- [root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"
- ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
- ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
- ETCD_NAME="master"
- ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"
3) Enable and start etcd
- [root@k8s_master ~]# systemctl enable etcd
- Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
- [root@k8s_master ~]# systemctl start etcd
4) Verify etcd
- [root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health
- member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
- cluster is healthy
- [root@k8s_master ~]#
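Another quick check against the same cluster (a sketch, using the v2 etcdctl client that ships with CentOS 7):
- etcdctl -C http://k8s_master:2379 member list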
Installing Kubernetes
1) Install Kubernetes
- [root@k8s_master ~]# yum -y install kubernetes
- Loaded plugins: fastestmirror
- Loading mirror speeds from cached hostfile
- * base: mirrors.nju.edu.cn
2) Edit the apiserver configuration file (on the master)
- [root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"
-
- KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
-
- KUBE_API_PORT="--port=8080"
-
-
- KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.32.100:2379" # the master's actual IP
-
- KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
-
- KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
-
- KUBE_API_ARGS=""
- [root@k8s_master ~]#
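If you prefer not to edit the file by hand, the same settings can be applied with sed; this is only a sketch and assumes the stock /etc/kubernetes/apiserver layout shipped by the CentOS kubernetes package:
- sudo sed -i 's|^KUBE_API_ADDRESS=.*|KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"|' /etc/kubernetes/apiserver
- sudo sed -i 's|^#* *KUBE_API_PORT=.*|KUBE_API_PORT="--port=8080"|' /etc/kubernetes/apiserver
- sudo sed -i 's|^KUBE_ETCD_SERVERS=.*|KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.32.100:2379"|' /etc/kubernetes/apiserver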
3) Edit the config file (on the master)
- [root@k8s_master ~]# cat /etc/kubernetes/config | grep -v "^#"
- KUBE_LOGTOSTDERR="--logtostderr=true"
-
- KUBE_LOG_LEVEL="--v=0"
-
- KUBE_ALLOW_PRIV="--allow-privileged=false"
-
- KUBE_MASTER="--master=http://192.168.32.100:8080" # the master's actual IP
- [root@k8s_master ~]#
4) Enable the services at boot and start them
- [root@k8s_master ~]#systemctl enable kube-apiserver kube-controller-manager kube-scheduler
- [root@k8s_master ~]#systemctl start kube-apiserver kube-controller-manager kube-scheduler
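A quick sanity check (my addition) that all three control-plane services came up:
- for s in kube-apiserver kube-controller-manager kube-scheduler; do systemctl is-active $s; done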
5) Check the listening ports
- [root@k8s_master ~]# netstat -tnlp
- Active Internet connections (only servers)
- Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
- tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      29976/etcd
- tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      5758/sshd
- tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      5914/master
- tcp6       0      0 :::6443                 :::*                    LISTEN      30192/kube-apiserve
- tcp6       0      0 :::10251                :::*                    LISTEN      30194/kube-schedule
- tcp6       0      0 :::2379                 :::*                    LISTEN      29976/etcd
- tcp6       0      0 :::10252                :::*                    LISTEN      30193/kube-controll
- tcp6       0      0 :::8080                 :::*                    LISTEN      30192/kube-apiserve
- tcp6       0      0 :::22                   :::*                    LISTEN      5758/sshd
- tcp6       0      0 ::1:25                  :::*                    LISTEN      5914/master
- tcp6       0      0 :::4001                 :::*                    LISTEN      29976/etcd
- [root@k8s_master ~]#
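The insecure API port can also be probed directly; this sketch assumes port 8080 as configured above:
- curl http://192.168.32.100:8080/version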
4. Node configuration (repeat on every node)
1) Install Kubernetes
[root@k8s_node2 ~]# yum -y install kubernetes
2) Edit the configuration
- [root@k8s_node1 ~]# cat /etc/kubernetes/config | grep -v "^#"
- KUBE_LOGTOSTDERR="--logtostderr=true"
-
- KUBE_LOG_LEVEL="--v=0"
-
- KUBE_ALLOW_PRIV="--allow-privileged=false"
-
- KUBE_MASTER="--master=http://192.168.32.100:8080"
- [root@k8s_node1 ~]#
- [root@k8s_node1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"
-
- KUBELET_ADDRESS="--address=0.0.0.0"
-
-
- KUBELET_HOSTNAME="--hostname-override=192.168.32.110" # this node's IP
-
- KUBELET_API_SERVER="--api-servers=http://192.168.32.100:8080" # the master's IP
-
- KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
-
- KUBELET_ARGS=""
- [root@k8s_node1 ~]#
3) Enable at boot and start the services
- [root@k8s_node1 ~]# systemctl enable kubelet kube-proxy
- [root@k8s_node1 ~]# systemctl start kubelet kube-proxy
4) Check the ports
- [root@k8s_node1 ~]# netstat -ntlp
- Active Internet connections (only servers)
- Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
- tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 6022/sshd
- tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 6181/master
- tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 38432/kubelet
- tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 38433/kube-proxy
- tcp6 0 0 :::10255 :::* LISTEN 38432/kubelet
- tcp6 0 0 :::22 :::* LISTEN 6022/sshd
- tcp6 0 0 ::1:25 :::* LISTEN 6181/master
- tcp6 0 0 :::4194 :::* LISTEN 38432/kubelet
- tcp6 0 0 :::10250 :::* LISTEN 38432/kubelet
5) Test (from the master)
- [root@k8s_master ~]# kubectl get nodes
- NAME STATUS AGE
- 192.168.32.110 Ready 11m
-
If no nodes show up, see https://blog.csdn.net/weixin_37480442/article/details/82111564
5. Configure the network
1) Install flannel (on the master and on every node)
[root@k8s_master ~]# yum -y install flannel
2) Configure flannel (the same file on every node)
- [root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
-
- FLANNEL_ETCD_ENDPOINTS="http://192.168.32.100:2379"
-
- FLANNEL_ETCD_PREFIX="/atomic.io/network"
-
-
- [root@k8s_master ~]#
3) Write the flannel network configuration into etcd (the key must go into the master's etcd, so point etcdctl at it)
- [root@k8s_node1 ~]# etcdctl -C http://192.168.32.100:2379 mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
- {"Network":"172.17.0.0/16"}
4) Enable at boot and start flanneld
- [root@k8s_master ~]# systemctl enable flanneld
- Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
- Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
- [root@k8s_master ~]# systemctl start flanneld
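Once flanneld is running it records the subnet it leased in /run/flannel/subnet.env, which gives a quick check on the master and on each node (a sketch; the flannel0 interface appears with the default udp backend):
- cat /run/flannel/subnet.env
- ip addr show flannel0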
6. Restart the services
- # master
- for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do
-     systemctl restart $SERVICES
- done
-
- # node
- for SERVICES in kube-proxy kubelet docker flanneld; do
-     systemctl restart $SERVICES
-     systemctl enable $SERVICES
-     systemctl status $SERVICES
- done
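As a final check (my addition, not in the original): from the master the nodes should report Ready, and on each node docker0 should now hold an address inside the flannel 172.17.0.0/16 range so containers on different nodes can reach one another.
- # on the master
- kubectl get nodes
- # on each node
- ip addr show docker0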
References: https://blog.csdn.net/qq_38252499/article/details/99214276
https://www.jianshu.com/p/345e3fb797db