
Setting up a Kubernetes (k8s) cluster on CentOS

1. Environment

1) Install three fresh CentOS virtual machines and make sure they can ping one another:

  1. master:192.168.32.100
  2. node1:192.168.32.110
  3. node2:192.168.32.120
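Mutual reachability can be checked from any one of the hosts with a small ping loop. This is just a convenience sketch, assuming the three IPs listed above:

```shell
# Ping each cluster node once and report per-IP reachability.
out=$(for ip in 192.168.32.100 192.168.32.110 192.168.32.120; do
  ping -c1 -W1 "$ip" >/dev/null 2>&1 && echo "$ip reachable" || echo "$ip unreachable"
done)
echo "$out"
```

Run it on all three machines; every IP should report reachable before continuing.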

2) Disable the firewall

  [lxw@localhost yum.repos.d]$ sudo systemctl stop iptables
  [sudo] password for lxw:
  Failed to stop iptables.service: Unit iptables.service not loaded.
  [lxw@localhost yum.repos.d]$
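The "Unit iptables.service not loaded" error above is expected: CentOS 7 ships firewalld by default, and iptables.service only exists if the separate iptables-services package was installed. A hedged sketch that stops and disables whichever of the two is actually running:

```shell
# CentOS 7 normally uses firewalld, not iptables.service.
# Stop and disable whichever firewall unit is active on this host.
for svc in firewalld iptables; do
  state=$(systemctl is-active "$svc" 2>/dev/null)
  if [ "$state" = "active" ]; then
    sudo systemctl stop "$svc"
    sudo systemctl disable "$svc"
  fi
done
echo "checked: firewalld iptables"
```

In a lab setup the firewall is disabled outright; on anything production-facing you would open the specific k8s ports instead.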

3) Install base services

[lxw@localhost yum.repos.d]$ sudo yum -y install net-tools wget vim ntp

4) Set the hostname on each of the three hosts. (Note: underscores are not strictly valid in hostnames; hyphenated names such as k8s-master are safer, though the transcripts below keep the original names.)

  [lxw@localhost /]$ sudo hostnamectl --static set-hostname k8s_master
  [sudo] password for lxw:
  [lxw@localhost /]$ uname -a
  Linux k8s_master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  [lxw@localhost root]$ sudo hostnamectl --static set-hostname k8s_node1
  [lxw@localhost root]$ uname -a
  Linux k8s_node1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  [lxw@localhost root]$
  [lxw@localhost root]$ sudo hostnamectl --static set-hostname k8s_node2
  [lxw@localhost root]$ uname -a
  Linux k8s_node2 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  [lxw@localhost root]$

5) Configure /etc/hosts

  [root@k8s_master ~]# cat <<EOF > /etc/hosts
  > 192.168.32.100 k8s_master
  > 192.168.32.110 k8s_node1
  > 192.168.32.120 k8s_node2
  > EOF
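Two cautions about that heredoc: `>` overwrites /etc/hosts (dropping the default localhost entries), and the same entries are needed on all three machines. A sketch that appends only the missing lines instead; it writes to a scratch file here, so point HOSTS_FILE at /etc/hosts for real use:

```shell
# Append each "IP name" pair to HOSTS_FILE only if the name is not already there.
HOSTS_FILE=$(mktemp)
echo "127.0.0.1 localhost" > "$HOSTS_FILE"   # stand-in for the existing file content
nodes="192.168.32.100 k8s_master
192.168.32.110 k8s_node1
192.168.32.120 k8s_node2"
printf '%s\n' "$nodes" | while read -r ip name; do
  grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Appending keeps the localhost lines intact, and the check makes the script safe to re-run.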

 

2. Install Docker on each node

Note: this step can be skipped; Docker is installed as a dependency when kubernetes is installed.

1) Update the package sources

2) Remove old versions

  # Remove old versions
  sudo yum remove docker \
      docker-client \
      docker-client-latest \
      docker-common \
      docker-latest \
      docker-latest-logrotate \
      docker-logrotate \
      docker-selinux \
      docker-engine-selinux \
      docker-engine
  # Remove all old data
  sudo rm -rf /var/lib/docker

3) Install required packages

  # Install dependencies
  sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2

4) Configure the yum repository

  # Remove the old repo file
  sudo rm -rf /etc/yum.repos.d/docker-ce.repo
  # Add the repo, using the Aliyun mirror
  sudo yum-config-manager \
      --add-repo \
      http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

5) Install

  # Refresh the yum cache
  sudo yum makecache fast
  # Install the latest stable Docker
  sudo yum install -y docker-ce

6) Start Docker and enable it at boot

  # Start the Docker engine and enable it at boot
  sudo systemctl start docker
  sudo systemctl enable docker

7) Test

  [lxw@localhost yum.repos.d]$ sudo docker search centos
  NAME                      DESCRIPTION                           STARS  OFFICIAL  AUTOMATED
  centos                    The official build of CentOS.         5917   [OK]
  ansible/centos7-ansible   Ansible on Centos7                    128    [OK]
  jdeathe/centos-ssh        OpenSSH / Supervisor / EPEL/IUS/SCL

8) Create the docker group

  [lxw@localhost yum.repos.d]$ sudo groupadd docker
  [sudo] password for lxw:
  groupadd: group 'docker' already exists
  [lxw@localhost yum.repos.d]$ sudo usermod -aG docker lxw
  [lxw@localhost yum.repos.d]$ docker run hello-world

(Log out and back in, or run `newgrp docker`, before the new group membership takes effect.)

 

Reference: https://www.jianshu.com/p/e6b946c79542

3. Master node configuration

Installing etcd

1) Install etcd

[lxw@k8s_master root]$ sudo yum -y install etcd

2) Edit the configuration file

  [root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
  ETCD_NAME="master"
  ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"

3) Start etcd

  [root@k8s_master ~]# systemctl enable etcd
  Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
  [root@k8s_master ~]# systemctl start etcd

4) Verify etcd

  [root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
  cluster is healthy
  [root@k8s_master ~]#

Installing kubernetes

1) Install kubernetes

  [root@k8s_master ~]# yum -y install kubernetes
  Loaded plugins: fastestmirror
  Loading mirror speeds from cached hostfile
  * base: mirrors.nju.edu.cn

2) Edit the apiserver configuration file (master node)

  [root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"
  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
  KUBE_API_PORT="--port=8080"
  KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.32.100:2379"  # the master's actual IP
  KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
  KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
  KUBE_API_ARGS=""
  [root@k8s_master ~]#

3) Edit the config file (master node)

  [root@k8s_master ~]# cat /etc/kubernetes/config | grep -v "^#"
  KUBE_LOGTOSTDERR="--logtostderr=true"
  KUBE_LOG_LEVEL="--v=0"
  KUBE_ALLOW_PRIV="--allow-privileged=false"
  KUBE_MASTER="--master=http://192.168.32.100:8080"  # the master's actual IP
  [root@k8s_master ~]#

4) Enable at boot and start the services

  [root@k8s_master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  [root@k8s_master ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler

5) Check the listening ports

  [root@k8s_master ~]# netstat -tnlp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
  tcp        0      0 127.0.0.1:2380   0.0.0.0:*        LISTEN  29976/etcd
  tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  5758/sshd
  tcp        0      0 127.0.0.1:25     0.0.0.0:*        LISTEN  5914/master
  tcp6       0      0 :::6443          :::*             LISTEN  30192/kube-apiserve
  tcp6       0      0 :::10251         :::*             LISTEN  30194/kube-schedule
  tcp6       0      0 :::2379          :::*             LISTEN  29976/etcd
  tcp6       0      0 :::10252         :::*             LISTEN  30193/kube-controll
  tcp6       0      0 :::8080          :::*             LISTEN  30192/kube-apiserve
  tcp6       0      0 :::22            :::*             LISTEN  5758/sshd
  tcp6       0      0 ::1:25           :::*             LISTEN  5914/master
  tcp6       0      0 :::4001          :::*             LISTEN  29976/etcd
  [root@k8s_master ~]#

4. Node configuration

1) Install kubernetes

[root@k8s_node2 ~]# yum -y install kubernetes

2) Edit the configuration

  [root@k8s_node1 ~]# cat /etc/kubernetes/config | grep -v "^#"
  KUBE_LOGTOSTDERR="--logtostderr=true"
  KUBE_LOG_LEVEL="--v=0"
  KUBE_ALLOW_PRIV="--allow-privileged=false"
  KUBE_MASTER="--master=http://192.168.32.100:8080"
  [root@k8s_node1 ~]#

  [root@k8s_node1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"
  KUBELET_ADDRESS="--address=0.0.0.0"
  KUBELET_HOSTNAME="--hostname-override=192.168.32.110"  # this node's IP
  KUBELET_API_SERVER="--api-servers=http://192.168.32.100:8080"  # the master's IP
  KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
  KUBELET_ARGS=""
  [root@k8s_node1 ~]#
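The kubelet file is the only per-node difference: each node sets --hostname-override to its own IP. A small templating sketch of that step; NODE_IP and MASTER_IP are placeholders you set per machine, and it writes to a scratch file here rather than /etc/kubernetes/kubelet:

```shell
# Generate this node's kubelet config from the two IPs that vary.
NODE_IP=192.168.32.110    # this node's IP -- change on each node
MASTER_IP=192.168.32.100  # the master's IP
KUBELET_FILE=$(mktemp)    # use /etc/kubernetes/kubelet for real
cat > "$KUBELET_FILE" <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=${NODE_IP}"
KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
EOF
cat "$KUBELET_FILE"
```

Running the same script on node2 with NODE_IP=192.168.32.120 produces that node's file, which avoids hand-editing each copy.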

3) Enable at boot and start

  [root@k8s_node1 ~]# systemctl enable kubelet kube-proxy
  [root@k8s_node1 ~]# systemctl start kubelet kube-proxy

4) Check the ports

  [root@k8s_node1 ~]# netstat -ntlp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
  tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  6022/sshd
  tcp        0      0 127.0.0.1:25     0.0.0.0:*        LISTEN  6181/master
  tcp        0      0 127.0.0.1:10248  0.0.0.0:*        LISTEN  38432/kubelet
  tcp        0      0 127.0.0.1:10249  0.0.0.0:*        LISTEN  38433/kube-proxy
  tcp6       0      0 :::10255         :::*             LISTEN  38432/kubelet
  tcp6       0      0 :::22            :::*             LISTEN  6022/sshd
  tcp6       0      0 ::1:25           :::*             LISTEN  6181/master
  tcp6       0      0 :::4194          :::*             LISTEN  38432/kubelet
  tcp6       0      0 :::10250         :::*             LISTEN  38432/kubelet

5) Test

  [root@k8s_master ~]# kubectl get nodes
  NAME             STATUS    AGE
  192.168.32.110   Ready     11m

If kubectl cannot fetch resources, see https://blog.csdn.net/weixin_37480442/article/details/82111564

5. Configure the network

1) Install flannel

[root@k8s_master ~]# yum -y install flannel

2) Configure flannel

  [root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
  FLANNEL_ETCD_ENDPOINTS="http://192.168.32.100:2379"
  FLANNEL_ETCD_PREFIX="/atomic.io/network"
  [root@k8s_master ~]#

3) Configure the network range

  [root@k8s_node1 ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
  {"Network":"172.17.0.0/16"}
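flannel reads that value back from etcd at FLANNEL_ETCD_PREFIX plus /config, so a malformed JSON or CIDR breaks networking on every node. A quick local sanity check before running `etcdctl mk` (a sketch; it assumes python3 is available, while stock CentOS 7 may only ship python):

```shell
# Validate the flannel network config: must be valid JSON with a valid CIDR.
cfg='{"Network":"172.17.0.0/16"}'
net=$(echo "$cfg" | python3 -c 'import json, sys, ipaddress
print(ipaddress.ip_network(json.load(sys.stdin)["Network"]))')
echo "parsed Network: $net"
```

If either the JSON or the CIDR is invalid, python exits non-zero and nothing is written to etcd.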

4) Enable at boot

  [root@k8s_master ~]# systemctl enable flanneld
  Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
  Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
  [root@k8s_master ~]# systemctl start flanneld

6. Restart services

  # master
  for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do
      systemctl restart $SERVICES
  done

  # node
  for SERVICES in kube-proxy kubelet docker flanneld; do
      systemctl restart $SERVICES
      systemctl enable $SERVICES
      systemctl status $SERVICES
  done

 

References:
https://blog.csdn.net/qq_38252499/article/details/99214276
https://www.jianshu.com/p/345e3fb797db

 
