
k8s v1.24.2 High-Availability Deployment


This post records the process of deploying a cluster, written up after my initial study of k8s.

1. Environment Preparation

This deployment uses VMware virtual machines, each sized at 2 vCPUs and 4 GB RAM: 3 masters, 3 worker nodes, and 2 load balancers, 8 machines in total for the highly available k8s cluster. All of them run CentOS 7.6. The detailed machine list and IP address plan are as follows:

Machine IP        Role
192.168.31.183    master01
192.168.31.185    master02
192.168.31.247    master03
192.168.31.211    node01
192.168.31.117    node02
192.168.31.135    node03
192.168.31.54     loadbalance01
192.168.31.206    loadbalance02
192.168.31.200    VIP (test.k8s.local)

Note: these IPs were obtained automatically through VMware bridged networking onto the physical network. If you already have a block of consecutive IPs planned, even better; deploy in whatever way is most convenient.

2. Basic Environment Configuration

2.1. Set Hostnames

Set the hostname on every machine:

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname master01
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname master02
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname master03
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname node01
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname node02
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname node03
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname loadbalance01
[root@MiWiFi-RM1800-srv ~]# 

[root@MiWiFi-RM1800-srv ~]# hostnamectl set-hostname loadbalance02
[root@MiWiFi-RM1800-srv ~]# 
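
A quick check confirms the new name took effect (the shell prompt itself only refreshes in a new shell):

hostnamectl status | grep 'Static hostname'   # should print the new name
exec bash                                     # start a fresh shell so the prompt updates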

2.2. Configure Passwordless SSH Login

Here master01 acts as the jump host: from master01 we can log in to every other node over SSH without a password.

(1) Generate an SSH key pair:

[root@master01 ~]# 
[root@master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Hmp6qL/ryDepWDfA5eUDeiwGX91NgBx2XP5wOhGKsFo root@master01
The key's randomart image is:
+---[RSA 2048]----+
|    ..o+oo+      |
|     =o+.= .     |
|.   E + o = .    |
| + O +     *     |
|  O + o S o .    |
| . +   + . .     |
|  . ooo .        |
| + o=+.          |
|. =**=           |
+----[SHA256]-----+
[root@master01 ~]# 
[root@master01 ~]# 


(2) Distribute the public key to each of the other nodes:

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.183
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.183 (192.168.31.183)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.183's password: 
Permission denied, please try again.
root@192.168.31.183's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.183'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.185
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.185 (192.168.31.185)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.185's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.185'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.247
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.247 (192.168.31.247)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.247's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.247'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.211
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.211 (192.168.31.211)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.211's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.211'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.117
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.117 (192.168.31.117)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.117's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.117'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.135
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.135 (192.168.31.135)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.135's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.135'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.54
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.54 (192.168.31.54)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.54's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.54'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.31.206
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host '192.168.31.206 (192.168.31.206)' can't be established.
ECDSA key fingerprint is SHA256:qwIzbDzkrM4yl2g74l+/DqRoCXcUz3QVCfEK23CFg6c.
ECDSA key fingerprint is MD5:09:17:ba:5b:07:20:ac:22:48:e4:5a:6b:cc:26:60:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.31.206's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.31.206'"
and check to make sure that only the key(s) you wanted were added.

[root@master01 ~]# 
[root@master01 ~]# 
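
The eight ssh-copy-id runs above can also be collapsed into one small loop; a minimal sketch (it still prompts for each node's root password in turn):

for ip in 192.168.31.183 192.168.31.185 192.168.31.247 192.168.31.211 \
          192.168.31.117 192.168.31.135 192.168.31.54 192.168.31.206; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
done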


2.3. Edit the hosts File to Provide Local Hostname Resolution

The hosts file must be configured on every machine. Configure it on master01 first, then copy it to every other node.

(1) Configure the hosts file on master01:

[root@master01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.31.183  master01
192.168.31.185  master02
192.168.31.247  master03
192.168.31.211  node01
192.168.31.117  node02
192.168.31.135  node03
192.168.31.54   loadbalance01
192.168.31.206  loadbalance02
192.168.31.200  test.k8s.local
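
A quick resolution check on master01 (getent reads /etc/hosts; the VIP will not answer anything yet, but the name should already resolve):

getent hosts master02          # should print 192.168.31.185  master02
getent hosts test.k8s.local    # should print 192.168.31.200  test.k8s.local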


(2) Copy the hosts file to the remaining machines:

[root@master01 ~]# 
[root@master01 ~]# scp /etc/hosts root@master02:/etc/
hosts                                                                                   100%  394   426.1KB/s   00:00    
[root@master01 ~]# scp /etc/hosts root@master03:/etc/
hosts                                                                                   100%  394   225.1KB/s   00:00    
[root@master01 ~]# scp /etc/hosts root@node01:/etc/
hosts                                                                                   100%  394   392.2KB/s   00:00    
[root@master01 ~]# scp /etc/hosts root@node02:/etc/
hosts                                                                                   100%  394   393.6KB/s   00:00    
[root@master01 ~]# scp /etc/hosts root@node03:/etc/
hosts                                                                                   100%  394   395.0KB/s   00:00    
[root@master01 ~]# scp /etc/hosts root@loadbalance01:/etc/
hosts                                                                                   100%  394   422.6KB/s   00:00    
[root@master01 ~]# scp /etc/hosts root@loadbalance02:/etc/
hosts                                                                                   100%  394   408.0KB/s   00:00    
[root@master01 ~]# 
[root@master01 ~]# 
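
Now that the node names resolve, the copy can also be written as a loop; the same pattern applies to every later file distribution in this post (a minimal sketch):

for host in master02 master03 node01 node02 node03 loadbalance01 loadbalance02; do
    scp /etc/hosts root@$host:/etc/
done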


2.4. Disable the Firewall and SELinux

Disable the firewall and SELinux on all machines.

(1) Disable the firewall:

[root@master01 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@master02 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@master03 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@node01 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@node02 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@node03 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@loadbalance01 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@loadbalance02 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
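
Since master01 now has passwordless SSH to every node, the command can also be pushed out from master01 in one loop instead of logging in to each machine; a minimal sketch:

for host in master02 master03 node01 node02 node03 loadbalance01 loadbalance02; do
    ssh root@$host 'systemctl stop firewalld && systemctl disable firewalld'
done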


(2) Disable SELinux:

Temporary (must be done on every machine):

[root@master01 ~]# setenforce 0
[root@master01 ~]# getenforce
Permissive

[root@master02 ~]# setenforce 0
[root@master02 ~]# getenforce
Permissive

[root@master03 ~]# setenforce 0
[root@master03 ~]# getenforce
Permissive

[root@node01 ~]# setenforce 0
[root@node01 ~]# getenforce
Permissive

[root@node02 ~]# setenforce 0
[root@node02 ~]# getenforce
Permissive

[root@node03 ~]# setenforce 0
[root@node03 ~]# getenforce
Permissive

[root@loadbalance01 ~]# setenforce 0
[root@loadbalance01 ~]# getenforce
Permissive

[root@loadbalance02 ~]# setenforce 0
[root@loadbalance02 ~]# getenforce
Permissive


Permanent (requires a reboot, and must be done on every machine; we do not reboot yet and will reboot together with the kernel upgrade later):

[root@master01 ~]# vim /etc/selinux/config
.....
.....
SELINUX=disabled         // change the value of SELINUX from enforcing to disabled
.....
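
The same edit can be made non-interactively, which helps when repeating it across machines; a minimal sed sketch:

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config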


[root@master01 ~]# scp /etc/selinux/config root@master02:/etc/selinux/
config                                                                                  100%  542   976.1KB/s   00:00    
[root@master01 ~]# scp /etc/selinux/config root@master03:/etc/selinux/
config                                                                                  100%  542     1.1MB/s   00:00    
[root@master01 ~]# 
[root@master01 ~]# scp /etc/selinux/config root@node01:/etc/selinux/
config                                                                                  100%  542   930.1KB/s   00:00    
[root@master01 ~]# scp /etc/selinux/config root@node02:/etc/selinux/
config                                                                                  100%  542     1.0MB/s   00:00    
[root@master01 ~]# scp /etc/selinux/config root@node03:/etc/selinux/
config                                                                                  100%  542     1.0MB/s   00:00    
[root@master01 ~]# scp /etc/selinux/config root@loadbalance01:/etc/selinux/
config                                                                                  100%  542   852.9KB/s   00:00    
[root@master01 ~]# scp /etc/selinux/config root@loadbalance02:/etc/selinux/
config                                                                                  100%  542     1.0MB/s   00:00    
[root@master01 ~]# 


2.5. Time Synchronization

For a cluster, every node's clock must stay consistent; otherwise the cluster is prone to split-brain behavior.
There are two ways to synchronize time: the NTP service and the chronyd service. Here we choose chronyd, and simply synchronize every machine directly against an Internet time source (ntp.aliyun.com).

(1) Install chronyd on all nodes (only one machine's installation is shown here; installing chronyd on the other machines is identical):

[root@master01 ~]# yum -y install chrony
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bupt.edu.cn
 * extras: mirrors.bupt.edu.cn
 * updates: mirrors.bupt.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package chrony.x86_64 0:3.4-1.el7 will be installed
--> Processing Dependency: libseccomp.so.2()(64bit) for package: chrony-3.4-1.el7.x86_64
--> Running transaction check
---> Package libseccomp.x86_64 0:2.3.1-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                        Arch                       Version                         Repository                Size
==========================================================================================================================
Installing:
 chrony                         x86_64                     3.4-1.el7                       base                     251 k
Installing for dependencies:
 libseccomp                     x86_64                     2.3.1-4.el7                     base                      56 k

Transaction Summary
==========================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 307 k
Installed size: 788 k
Downloading packages:
(1/2): libseccomp-2.3.1-4.el7.x86_64.rpm                                                           |  56 kB  00:00:00     
(2/2): chrony-3.4-1.el7.x86_64.rpm                                                                 | 251 kB  00:00:01     
--------------------------------------------------------------------------------------------------------------------------
Total                                                                                     229 kB/s | 307 kB  00:00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libseccomp-2.3.1-4.el7.x86_64                                                                          1/2 
  Installing : chrony-3.4-1.el7.x86_64                                                                                2/2 
  Verifying  : libseccomp-2.3.1-4.el7.x86_64                                                                          1/2 
  Verifying  : chrony-3.4-1.el7.x86_64                                                                                2/2 

Installed:
  chrony.x86_64 0:3.4-1.el7                                                                                               

Dependency Installed:
  libseccomp.x86_64 0:2.3.1-4.el7                                                                                         

Complete!
[root@master01 ~]# 


(2) Configure chronyd on master01:

[root@master01 ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst      // comment out these four server lines
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server ntp.aliyun.com iburst              // add this line after them, pointing the time source at Alibaba Cloud's NTP server
......


(3) Copy /etc/chrony.conf from master01 to each of the other nodes:

[root@master01 ~]# scp /etc/chrony.conf root@master02:/etc/
chrony.conf                                                                             100% 1142     2.1MB/s   00:00    
[root@master01 ~]# scp /etc/chrony.conf root@master03:/etc/
chrony.conf                                                                             100% 1142     1.7MB/s   00:00    
[root@master01 ~]# scp /etc/chrony.conf root@node01:/etc/
chrony.conf                                                                             100% 1142     1.2MB/s   00:00    
[root@master01 ~]# scp /etc/chrony.conf root@node02:/etc/
chrony.conf                                                                             100% 1142     2.0MB/s   00:00    
[root@master01 ~]# scp /etc/chrony.conf root@node03:/etc/
chrony.conf                                                                             100% 1142     1.5MB/s   00:00    
[root@master01 ~]# 
[root@master01 ~]# scp /etc/chrony.conf root@loadbalance01:/etc/
chrony.conf                                                                             100% 1142     1.9MB/s   00:00    
[root@master01 ~]# scp /etc/chrony.conf root@loadbalance02:/etc/
chrony.conf                                                                             100% 1142     1.7MB/s   00:00    
[root@master01 ~]# 


(4) Start the chronyd service on every node:

[root@master01 ~]# systemctl start chronyd && systemctl enable chronyd
[root@master01 ~]# 

[root@master02 ~]# systemctl start chronyd && systemctl enable chronyd
[root@master02 ~]# 

[root@master03 ~]# systemctl start chronyd && systemctl enable chronyd
[root@master03 ~]# 

[root@node01 ~]# systemctl start chronyd && systemctl enable chronyd
[root@node01 ~]# 

[root@node02 ~]# systemctl start chronyd && systemctl enable chronyd
[root@node02 ~]# 

[root@node03 ~]# systemctl start chronyd && systemctl enable chronyd
[root@node03 ~]# 

[root@loadbalance01 ~]# systemctl start chronyd && systemctl enable chronyd
[root@loadbalance01 ~]# 

[root@loadbalance02 ~]# systemctl start chronyd && systemctl enable chronyd
[root@loadbalance02 ~]# 


Check on master01:

[root@master01 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    77    22   -209us[  +31us] +/-   28ms
[root@master01 ~]# 


Seeing the ^* marker means the node is synchronized and everything is fine.
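
For more detail than the source list, chronyc tracking reports the measured offset and drift:

chronyc tracking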

2.6. Disable Swap

If swap is enabled, k8s cluster initialization will fail with an error, so we disable swap ahead of time. Only the 6 k8s nodes need swap disabled; the load balancer machines can leave it on.

(1) Temporary:

[root@master01 ~]# swapoff -a
[root@master02 ~]# swapoff -a
[root@master03 ~]# swapoff -a
[root@node01 ~]# swapoff -a
[root@node02 ~]# swapoff -a
[root@node03 ~]# swapoff -a


(2) Permanent (this requires editing the system's /etc/fstab file, then rebooting to take effect. We do not reboot yet; we will reboot after the kernel upgrade later):

[root@master01 ~]# vim /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Jul 12 21:27:09 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=981d46e6-8dc0-4db9-8769-1c419db45ad8 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0                        // just comment out this line
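
The same edit can also be done non-interactively; a minimal sed sketch that comments out any active swap entry:

sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab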


Then do the same on every k8s node; the remaining steps are not shown here.
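
To verify that swap is really off on a node, a quick check:

free -h         # the Swap line should show all zeros
swapon -s       # prints nothing when no swap is active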

2.7. Load Required Kernel Modules

k8s relies on bridge packet forwarding, and once the calico network is deployed, k8s also needs ipvs support. So here we load the bridge netfilter module and the ipvs modules. (This is only my personal understanding and may not be complete.)

(1) Load temporarily:

[root@master01 ~]# modprobe br_netfilter
[root@master01 ~]# modprobe -- ip_vs
[root@master01 ~]# modprobe -- ip_vs_rr
[root@master01 ~]# modprobe -- ip_vs_wrr
[root@master01 ~]# modprobe -- ip_vs_sh
[root@master01 ~]# modprobe -- nf_conntrack


Check that the modules loaded successfully:

[root@master01 ~]# lsmod | grep br_net
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@master01 ~]# 
[root@master01 ~]# lsmod | grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@master01 ~]# 


Every k8s node needs these modules loaded; the loading process on the remaining machines is not shown here.

(2) Load permanently: create a file ending in .conf under /etc/modules-load.d/ and save the modules above into it; they are then loaded automatically when the system boots. We do not reboot here; we will reboot after the kernel upgrade later.

[root@master01 ~]# vim /etc/modules-load.d/k8s.conf 
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
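
The modules listed in the file can also be loaded immediately, without waiting for the reboot; a minimal sketch:

for m in $(grep -v '^#' /etc/modules-load.d/k8s.conf); do modprobe "$m"; done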


Copy this file to each of the other k8s nodes:

[root@master01 ~]# scp /etc/modules-load.d/k8s.conf root@master02:/etc/modules-load.d/
k8s.conf                                                                                100%   60    81.5KB/s   00:00    
[root@master01 ~]# scp /etc/modules-load.d/k8s.conf root@master03:/etc/modules-load.d/
k8s.conf                                                                                100%   60    83.1KB/s   00:00    
[root@master01 ~]# scp /etc/modules-load.d/k8s.conf root@node01:/etc/modules-load.d/
k8s.conf                                                                                100%   60    57.4KB/s   00:00    
[root@master01 ~]# scp /etc/modules-load.d/k8s.conf root@node02:/etc/modules-load.d/
k8s.conf                                                                                100%   60    95.7KB/s   00:00    
[root@master01 ~]# scp /etc/modules-load.d/k8s.conf root@node03:/etc/modules-load.d/
k8s.conf                                                                                100%   60    77.4KB/s   00:00    
[root@master01 ~]# 


(3) Install ipvsadm and ipset on each k8s node:

[root@master01 ~]# yum -y install ipvsadm ipset
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bupt.edu.cn
 * extras: mirrors.bupt.edu.cn
 * updates: mirrors.bupt.edu.cn
base                                                                                               | 3.6 kB  00:00:00     
docker-ce-stable                                                                                   | 3.5 kB  00:00:00     
elrepo                                                                                             | 3.0 kB  00:00:00     
extras                                                                                             | 2.9 kB  00:00:00     
updates                                                                                            | 2.9 kB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package ipset.x86_64 0:6.38-2.el7 will be updated
---> Package ipset.x86_64 0:7.1-1.el7 will be an update
--> Processing Dependency: ipset-libs(x86-64) = 7.1-1.el7 for package: ipset-7.1-1.el7.x86_64
--> Processing Dependency: libipset.so.13(LIBIPSET_4.8)(64bit) for package: ipset-7.1-1.el7.x86_64
--> Processing Dependency: libipset.so.13(LIBIPSET_2.0)(64bit) for package: ipset-7.1-1.el7.x86_64
--> Processing Dependency: libipset.so.13()(64bit) for package: ipset-7.1-1.el7.x86_64
---> Package ipvsadm.x86_64 0:1.27-8.el7 will be installed
--> Running transaction check
---> Package ipset-libs.x86_64 0:6.38-2.el7 will be updated
---> Package ipset-libs.x86_64 0:7.1-1.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                        Arch                       Version                         Repository                Size
==========================================================================================================================
Installing:
 ipvsadm                        x86_64                     1.27-8.el7                      base                      45 k
Updating:
 ipset                          x86_64                     7.1-1.el7                       base                      39 k
Updating for dependencies:
 ipset-libs                     x86_64                     7.1-1.el7                       base                      64 k

Transaction Summary
==========================================================================================================================
Install  1 Package
Upgrade  1 Package (+1 Dependent package)

Total download size: 147 k
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/3): ipvsadm-1.27-8.el7.x86_64.rpm                                                               |  45 kB  00:00:00     
(2/3): ipset-7.1-1.el7.x86_64.rpm                                                                  |  39 kB  00:00:00     
(3/3): ipset-libs-7.1-1.el7.x86_64.rpm                                                             |  64 kB  00:00:00     
--------------------------------------------------------------------------------------------------------------------------
Total                                                                                     361 kB/s | 147 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : ipset-libs-7.1-1.el7.x86_64                                                                            1/5 
  Updating   : ipset-7.1-1.el7.x86_64                                                                                 2/5 
  Installing : ipvsadm-1.27-8.el7.x86_64                                                                              3/5 
  Cleanup    : ipset-6.38-2.el7.x86_64                                                                                4/5 
  Cleanup    : ipset-libs-6.38-2.el7.x86_64                                                                           5/5 
  Verifying  : ipvsadm-1.27-8.el7.x86_64                                                                              1/5 
  Verifying  : ipset-7.1-1.el7.x86_64                                                                                 2/5 
  Verifying  : ipset-libs-7.1-1.el7.x86_64                                                                            3/5 
  Verifying  : ipset-libs-6.38-2.el7.x86_64                                                                           4/5 
  Verifying  : ipset-6.38-2.el7.x86_64                                                                                5/5 

Installed:
  ipvsadm.x86_64 0:1.27-8.el7                                                                                             

Updated:
  ipset.x86_64 0:7.1-1.el7                                                                                                

Dependency Updated:
  ipset-libs.x86_64 0:7.1-1.el7                                                                                           

Complete!
[root@master01 ~]# 


Only one machine's installation is shown here; the installation process on the other machines is not repeated.

2.8. Enable System Forwarding

k8s needs the following 3 kernel parameters set to 1, i.e. these 3 functions enabled:
net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables
net.ipv4.ip_forward

Save these 3 parameters to a configuration file so they take effect permanently.

[root@master01 ~]# vim /etc/sysctl.d/k8s-forward.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s-forward.conf                // run this command to apply the settings
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1


Copy this file to each k8s node:

[root@master01 ~]# scp /etc/sysctl.d/k8s-forward.conf root@master02:/etc/sysctl.d/
k8s-forward.conf                                                                        100%  103   176.8KB/s   00:00    
[root@master01 ~]# scp /etc/sysctl.d/k8s-forward.conf root@master03:/etc/sysctl.d/
k8s-forward.conf                                                                        100%  103   148.5KB/s   00:00    
[root@master01 ~]# scp /etc/sysctl.d/k8s-forward.conf root@node01:/etc/sysctl.d/
k8s-forward.conf                                                                        100%  103   128.1KB/s   00:00    
[root@master01 ~]# scp /etc/sysctl.d/k8s-forward.conf root@node02:/etc/sysctl.d/
k8s-forward.conf                                                                        100%  103   215.3KB/s   00:00    
[root@master01 ~]# scp /etc/sysctl.d/k8s-forward.conf root@node03:/etc/sysctl.d/
k8s-forward.conf                                                                        100%  103   184.6KB/s   00:00    
[root@master01 ~]# 


After copying, run sysctl -p /etc/sysctl.d/k8s-forward.conf on each of the remaining k8s nodes to apply the settings; the remaining steps are not shown here.
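
A minimal sketch of doing that from master01 in one loop:

for host in master02 master03 node01 node02 node03; do
    ssh root@$host sysctl -p /etc/sysctl.d/k8s-forward.conf
done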

2.9. Upgrade the System Kernel

The default kernel on CentOS is 3.10. k8s v1.24 can be deployed and will run on it in production, but many problems, i.e. sources of instability, tend to appear while using k8s. To run v1.24 stably in production on CentOS, the system kernel should be upgraded. The elrepo mainline kernel has already reached version 5.x, so we use it for this upgrade.

elrepo's home page is http://elrepo.org/tiki/HomePage and the upgrade steps there can be followed directly. However, because of network restrictions in China, downloading the kernel RPM from abroad is very slow, so we switch to the Tsinghua University open-source mirror instead. We pick the kernel-ml-5.19.5-1.el7.elrepo.x86_64.rpm kernel.

(1) Create a repo file under /etc/yum.repos.d/:

[root@master01 yum.repos.d]# vim /etc/yum.repos.d/elrepo.repo
[elrepo]
name=elrepo
baseurl=https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/
gpgcheck=0
enabled=1


(2) Install kernel-ml:

[root@master01 yum.repos.d]# yum -y install  kernel-ml
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bupt.edu.cn
 * extras: mirrors.bupt.edu.cn
 * updates: mirrors.bupt.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kernel-ml.x86_64 0:5.19.5-1.el7.elrepo will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                    Arch                    Version                                 Repository               Size
==========================================================================================================================
Installing:
 kernel-ml                  x86_64                  5.19.5-1.el7.elrepo                     elrepo                   59 M

Transaction Summary
==========================================================================================================================
Install  1 Package

Total download size: 59 M
Installed size: 276 M
Downloading packages:
kernel-ml-5.19.5-1.el7.elrepo.x86_64.rpm                                                           |  59 MB  00:01:34     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kernel-ml-5.19.5-1.el7.elrepo.x86_64                                                                   1/1 
  Verifying  : kernel-ml-5.19.5-1.el7.elrepo.x86_64                                                                   1/1 

Installed:
  kernel-ml.x86_64 0:5.19.5-1.el7.elrepo                                                                                  

Complete!
[root@master01 yum.repos.d]# 


(3) Set the new kernel as the default boot kernel:

[root@master01 yum.repos.d]# cat /boot/grub2/grub.cfg | grep menuentry                     // list the kernels available on the system
if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
  menuentry_id_option=""
export menuentry_id_option
menuentry 'CentOS Linux (5.19.5-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-957.el7.x86_64-advanced-1bf51c00-7358-43e0-9ea5-a17744d255ab' {
menuentry 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-957.el7.x86_64-advanced-1bf51c00-7358-43e0-9ea5-a17744d255ab' {
menuentry 'CentOS Linux (0-rescue-938f9c4b9e594d3bb395864ff21e1f2d) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-938f9c4b9e594d3bb395864ff21e1f2d-advanced-1bf51c00-7358-43e0-9ea5-a17744d255ab' {
[root@master01 yum.repos.d]# 


[root@master01 yum.repos.d]# grub2-set-default 'CentOS Linux (5.19.5-1.el7.elrepo.x86_64) 7 (Core)'   // make the system boot from the new kernel
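
Whether the default stuck can be confirmed from the saved grub environment; the saved_entry should name the 5.19.5 kernel:

grub2-editenv list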


(4) Reboot the system:

[root@master01 yum.repos.d]# init 6

(5) Check the current kernel version:

[root@master01 ~]# uname -a
Linux master01 5.19.5-1.el7.elrepo.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 29 08:55:53 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@master01 ~]# 

At this point the machine's kernel upgrade has succeeded. All k8s nodes need the same kernel upgrade afterwards; the detailed upgrade process is not repeated here.

3. Load Balancer Configuration

There are many ways to do load balancing; here we choose the haproxy + keepalived combination to build the load balancer.

3.1. Configure haproxy

(1) Install haproxy on both load balancer machines (loadbalance01 and loadbalance02):

[root@loadbalance01 ~]# yum -y install haproxy
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.sjtu.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirrors.ustc.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package haproxy.x86_64 0:1.5.18-9.el7_9.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                   Arch                     Version                               Repository                 Size
==========================================================================================================================
Installing:
 haproxy                   x86_64                   1.5.18-9.el7_9.1                      updates                   835 k

Transaction Summary
==========================================================================================================================
Install  1 Package

Total download size: 835 k
Installed size: 2.6 M
Downloading packages:
haproxy-1.5.18-9.el7_9.1.x86_64.rpm                                                                | 835 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : haproxy-1.5.18-9.el7_9.1.x86_64                                                                        1/1 
  Verifying  : haproxy-1.5.18-9.el7_9.1.x86_64                                                                        1/1 

Installed:
  haproxy.x86_64 0:1.5.18-9.el7_9.1                                                                                       

Complete!
[root@loadbalance01 ~]# 



[root@loadbalance02 ~]# yum -y install haproxy
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.njupt.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package haproxy.x86_64 0:1.5.18-9.el7_9.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                   Arch                     Version                               Repository                 Size
==========================================================================================================================
Installing:
 haproxy                   x86_64                   1.5.18-9.el7_9.1                      updates                   835 k

Transaction Summary
==========================================================================================================================
Install  1 Package

Total download size: 835 k
Installed size: 2.6 M
Downloading packages:
haproxy-1.5.18-9.el7_9.1.x86_64.rpm                                                                | 835 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : haproxy-1.5.18-9.el7_9.1.x86_64                                                                        1/1 
  Verifying  : haproxy-1.5.18-9.el7_9.1.x86_64                                                                        1/1 

Installed:
  haproxy.x86_64 0:1.5.18-9.el7_9.1                                                                                       

Complete!
[root@loadbalance02 ~]# 


(2) Configure haproxy on loadbalance01:

[root@loadbalance01 ~]# vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
#    option                  httplog
    option                  dontlognull
#    option http-server-close
#    option forwardfor       except 127.0.0.0/8
#    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  main *:6443
    mode                        tcp
    default_backend             k8s

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend k8s
    mode        tcp
    balance     roundrobin
    server  master01 192.168.31.183:6443 check
    server  master02 192.168.31.185:6443 check
    server  master03 192.168.31.247:6443 check
#    server  app4 127.0.0.1:5004 check
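
Before starting the service, the file can be syntax-checked so a typo does not take the proxy down; a quick sketch:

haproxy -c -f /etc/haproxy/haproxy.cfg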


(3) Copy haproxy.cfg from loadbalance01 to loadbalance02:

[root@loadbalance01 ~]# scp /etc/haproxy/haproxy.cfg root@loadbalance02:/etc/haproxy/
root@loadbalance02's password: 
haproxy.cfg                                                                             100% 3008     4.2MB/s   00:00    
[root@loadbalance01 ~]# 


(4) Start the haproxy service on both nodes:

[root@loadbalance01 ~]# systemctl start haproxy && systemctl enable haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@loadbalance01 ~]# 

[root@loadbalance02 haproxy]# systemctl restart haproxy && systemctl enable haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@loadbalance02 haproxy]# 


(5) Check the status of the haproxy service:

[root@loadbalance02 haproxy]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-08-30 18:57:18 CST; 1min 16s ago
 Main PID: 18084 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─18084 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─18086 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─18090 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Aug 30 18:57:18 loadbalance02 systemd[1]: Started HAProxy Load Balancer.
Aug 30 18:57:18 loadbalance02 haproxy-systemd-wrapper[18084]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -...-Ds
Hint: Some lines were ellipsized, use -l to show in full.
[root@loadbalance02 haproxy]# 


As you can see, the haproxy service is healthy. haproxy deployment is now complete.

3.2. Configure keepalived

keepalived complements the load balancer by providing a VIP as the access entry point for k8s, running in an active/standby pair. Under normal conditions the VIP sits on the primary node; as soon as the primary fails, the VIP automatically floats over to the standby node and keeps serving.
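
For orientation, the VRRP configuration this implies looks roughly like the sketch below. The VIP 192.168.31.200 comes from the plan above; the interface name, router id, priorities and password are assumptions to adapt, not the final configuration:

vrrp_instance VI_1 {
    state MASTER               # BACKUP on loadbalance02
    interface ens33            # assumption: replace with the actual NIC name
    virtual_router_id 51       # assumption: any value, but identical on both nodes
    priority 100               # assumption: use a lower value (e.g. 90) on loadbalance02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha       # hypothetical shared secret, same on both nodes
    }
    virtual_ipaddress {
        192.168.31.200         # the VIP (test.k8s.local)
    }
}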

(1) Install the keepalived package on both load balancer machines (loadbalance01 and loadbalance02):

[root@loadbalance01 ~]# yum -y install keepalived
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.sjtu.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirrors.ustc.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package keepalived.x86_64 0:1.3.5-19.el7 will be installed
--> Processing Dependency: ipset-libs >= 7.1 for package: keepalived-1.3.5-19.el7.x86_64
--> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-19.el7.x86_64
--> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-19.el7.x86_64
--> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-19.el7.x86_64
--> Running transaction check
---> Package ipset-libs.x86_64 0:6.38-2.el7 will be updated
--> Processing Dependency: ipset-libs(x86-64) = 6.38-2.el7 for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11()(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_1.0)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_2.0)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_3.0)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_4.5)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_4.6)(64bit) for package: ipset-6.38-2.el7.x86_64
---> Package ipset-libs.x86_64 0:7.1-1.el7 will be an update
---> Package net-snmp-agent-libs.x86_64 1:5.7.2-49.el7_9.2 will be installed
--> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64
---> Package net-snmp-libs.x86_64 1:5.7.2-49.el7_9.2 will be installed
--> Running transaction check
---> Package ipset.x86_64 0:6.38-2.el7 will be updated
---> Package ipset.x86_64 0:7.1-1.el7 will be an update
---> Package lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                         Arch               Version                                     Repository           Size
==========================================================================================================================
Installing:
 keepalived                      x86_64             1.3.5-19.el7                                base                332 k
Installing for dependencies:
 lm_sensors-libs                 x86_64             3.4.0-8.20160601gitf9185e5.el7              base                 42 k
 net-snmp-agent-libs             x86_64             1:5.7.2-49.el7_9.2                          updates             707 k
 net-snmp-libs                   x86_64             1:5.7.2-49.el7_9.2                          updates             752 k
Updating for dependencies:
 ipset                           x86_64             7.1-1.el7                                   base                 39 k
 ipset-libs                      x86_64             7.1-1.el7                                   base                 64 k

Transaction Summary
==========================================================================================================================
Install  1 Package  (+3 Dependent packages)
Upgrade             ( 2 Dependent packages)

Total download size: 1.9 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/6): ipset-7.1-1.el7.x86_64.rpm                                                                  |  39 kB  00:00:00     
(2/6): lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm                                   |  42 kB  00:00:00     
(3/6): ipset-libs-7.1-1.el7.x86_64.rpm                                                             |  64 kB  00:00:00     
(4/6): keepalived-1.3.5-19.el7.x86_64.rpm                                                          | 332 kB  00:00:00     
(5/6): net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64.rpm                                             | 707 kB  00:00:00     
(6/6): net-snmp-libs-5.7.2-49.el7_9.2.x86_64.rpm                                                   | 752 kB  00:00:01     
--------------------------------------------------------------------------------------------------------------------------
Total                                                                                     1.4 MB/s | 1.9 MB  00:00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : ipset-libs-7.1-1.el7.x86_64                                                                            1/8 
  Installing : 1:net-snmp-libs-5.7.2-49.el7_9.2.x86_64                                                                2/8 
  Installing : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                  3/8 
  Installing : 1:net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64                                                          4/8 
  Installing : keepalived-1.3.5-19.el7.x86_64                                                                         5/8 
  Updating   : ipset-7.1-1.el7.x86_64                                                                                 6/8 
  Cleanup    : ipset-6.38-2.el7.x86_64                                                                                7/8 
  Cleanup    : ipset-libs-6.38-2.el7.x86_64                                                                           8/8 
  Verifying  : 1:net-snmp-libs-5.7.2-49.el7_9.2.x86_64                                                                1/8 
  Verifying  : ipset-7.1-1.el7.x86_64                                                                                 2/8 
  Verifying  : keepalived-1.3.5-19.el7.x86_64                                                                         3/8 
  Verifying  : ipset-libs-7.1-1.el7.x86_64                                                                            4/8 
  Verifying  : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                  5/8 
  Verifying  : 1:net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64                                                          6/8 
  Verifying  : ipset-libs-6.38-2.el7.x86_64                                                                           7/8 
  Verifying  : ipset-6.38-2.el7.x86_64                                                                                8/8 

Installed:
  keepalived.x86_64 0:1.3.5-19.el7                                                                                        

Dependency Installed:
  lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7          net-snmp-agent-libs.x86_64 1:5.7.2-49.el7_9.2         
  net-snmp-libs.x86_64 1:5.7.2-49.el7_9.2                         

Dependency Updated:
  ipset.x86_64 0:7.1-1.el7                                  ipset-libs.x86_64 0:7.1-1.el7                                 

Complete!
[root@loadbalance01 ~]# 




[root@loadbalance02 ~]# yum -y install keepalived
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.njupt.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package keepalived.x86_64 0:1.3.5-19.el7 will be installed
--> Processing Dependency: ipset-libs >= 7.1 for package: keepalived-1.3.5-19.el7.x86_64
--> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-19.el7.x86_64
--> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-19.el7.x86_64
--> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-19.el7.x86_64
--> Running transaction check
---> Package ipset-libs.x86_64 0:6.38-2.el7 will be updated
--> Processing Dependency: ipset-libs(x86-64) = 6.38-2.el7 for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11()(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_1.0)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_2.0)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_3.0)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_4.5)(64bit) for package: ipset-6.38-2.el7.x86_64
--> Processing Dependency: libipset.so.11(LIBIPSET_4.6)(64bit) for package: ipset-6.38-2.el7.x86_64
---> Package ipset-libs.x86_64 0:7.1-1.el7 will be an update
---> Package net-snmp-agent-libs.x86_64 1:5.7.2-49.el7_9.2 will be installed
--> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64
---> Package net-snmp-libs.x86_64 1:5.7.2-49.el7_9.2 will be installed
--> Running transaction check
---> Package ipset.x86_64 0:6.38-2.el7 will be updated
---> Package ipset.x86_64 0:7.1-1.el7 will be an update
---> Package lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                         Arch               Version                                     Repository           Size
==========================================================================================================================
Installing:
 keepalived                      x86_64             1.3.5-19.el7                                base                332 k
Installing for dependencies:
 lm_sensors-libs                 x86_64             3.4.0-8.20160601gitf9185e5.el7              base                 42 k
 net-snmp-agent-libs             x86_64             1:5.7.2-49.el7_9.2                          updates             707 k
 net-snmp-libs                   x86_64             1:5.7.2-49.el7_9.2                          updates             752 k
Updating for dependencies:
 ipset                           x86_64             7.1-1.el7                                   base                 39 k
 ipset-libs                      x86_64             7.1-1.el7                                   base                 64 k

Transaction Summary
==========================================================================================================================
Install  1 Package  (+3 Dependent packages)
Upgrade             ( 2 Dependent packages)

Total download size: 1.9 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/6): ipset-7.1-1.el7.x86_64.rpm                                                                  |  39 kB  00:00:00     
(2/6): ipset-libs-7.1-1.el7.x86_64.rpm                                                             |  64 kB  00:00:00     
(3/6): lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm                                   |  42 kB  00:00:00     
(4/6): keepalived-1.3.5-19.el7.x86_64.rpm                                                          | 332 kB  00:00:00     
(5/6): net-snmp-libs-5.7.2-49.el7_9.2.x86_64.rpm                                                   | 752 kB  00:00:00     
(6/6): net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64.rpm                                             | 707 kB  00:00:00     
--------------------------------------------------------------------------------------------------------------------------
Total                                                                                     2.0 MB/s | 1.9 MB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : ipset-libs-7.1-1.el7.x86_64                                                                            1/8 
  Installing : 1:net-snmp-libs-5.7.2-49.el7_9.2.x86_64                                                                2/8 
  Installing : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                  3/8 
  Installing : 1:net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64                                                          4/8 
  Installing : keepalived-1.3.5-19.el7.x86_64                                                                         5/8 
  Updating   : ipset-7.1-1.el7.x86_64                                                                                 6/8 
  Cleanup    : ipset-6.38-2.el7.x86_64                                                                                7/8 
  Cleanup    : ipset-libs-6.38-2.el7.x86_64                                                                           8/8 
  Verifying  : 1:net-snmp-libs-5.7.2-49.el7_9.2.x86_64                                                                1/8 
  Verifying  : ipset-7.1-1.el7.x86_64                                                                                 2/8 
  Verifying  : keepalived-1.3.5-19.el7.x86_64                                                                         3/8 
  Verifying  : ipset-libs-7.1-1.el7.x86_64                                                                            4/8 
  Verifying  : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                  5/8 
  Verifying  : 1:net-snmp-agent-libs-5.7.2-49.el7_9.2.x86_64                                                          6/8 
  Verifying  : ipset-libs-6.38-2.el7.x86_64                                                                           7/8 
  Verifying  : ipset-6.38-2.el7.x86_64                                                                                8/8 

Installed:
  keepalived.x86_64 0:1.3.5-19.el7                                                                                        

Dependency Installed:
  lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7          net-snmp-agent-libs.x86_64 1:5.7.2-49.el7_9.2         
  net-snmp-libs.x86_64 1:5.7.2-49.el7_9.2                         

Dependency Updated:
  ipset.x86_64 0:7.1-1.el7                                  ipset-libs.x86_64 0:7.1-1.el7                                 

Complete!
[root@loadbalance02 ~]# 


(2)配置keepalived:
loadbalance01上:

[root@loadbalance01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
   router_id k8s01
#   vrrp_skip_check_adv_addr
#   vrrp_strict
   vrrp_mcast_group4 224.0.0.18
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
   script "killall -0 haproxy"
   interval 2
   weight 20
}

vrrp_instance K8S {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.31.200/24 dev ens33
    }

    track_script {
        chk_haproxy
    }
}
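
这里的 chk_haproxy 脚本用 killall -0 来检测 haproxy 进程是否存活:信号 0 并不会真正发送给进程,只做进程存在性检查,进程存在时退出码为 0,keepalived 据此给本节点优先级加上 weight(20)。可以在命令行手动验证这个检查命令(示意):

# 信号0只检测进程是否存在,不会影响进程本身
killall -0 haproxy && echo "haproxy is running" || echo "haproxy is down"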


loadbalance02上:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
   router_id k8s02
#   vrrp_skip_check_adv_addr
#   vrrp_strict
   vrrp_mcast_group4 224.0.0.18
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
   script "killall -0 haproxy"
   interval 2
   weight 20
}

vrrp_instance K8S {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.31.200/24 dev ens33
    }

    track_script {
        chk_haproxy
    }
}


(3)启动keepalived服务:

[root@loadbalance01 ~]# systemctl start keepalived && systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@loadbalance01 ~]# 

[root@loadbalance02 ~]# systemctl start keepalived && systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@loadbalance02 ~]# 


启动完成后,在loadbalance01节点上查看IP地址,会看到有VIP配置在ens33这块网卡上:

[root@loadbalance01 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e1:20:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.54/24 brd 192.168.31.255 scope global noprefixroute dynamic ens33
       valid_lft 26969sec preferred_lft 26969sec
    inet 192.168.31.200/24 scope global secondary ens33                                           //这个IP就是VIP                    
       valid_lft forever preferred_lft forever
    inet6 fe80::d3e7:1100:3607:f1a0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e1:20:91 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.70/24 brd 192.168.20.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::2681:d86f:ca5:a70f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@loadbalance01 ~]# 


到此,keepalived就配置完了。

为了验证keepalived没问题,我们做一个小测试:
正常情况下,当keepalived主节点上的haproxy服务或进程down掉后,VIP会漂移至keepalived备节点;如果没有漂移,则说明配置有问题。

在loadbalance01上停掉haproxy服务,然后查看VIP的漂移情况:

[root@loadbalance01 ~]# systemctl stop haproxy

[root@loadbalance01 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e1:20:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.54/24 brd 192.168.31.255 scope global noprefixroute dynamic ens33
       valid_lft 26717sec preferred_lft 26717sec
    inet6 fe80::d3e7:1100:3607:f1a0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e1:20:91 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.70/24 brd 192.168.20.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::2681:d86f:ca5:a70f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@loadbalance01 ~]# 

[root@loadbalance02 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fd:b5:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.206/24 brd 192.168.31.255 scope global noprefixroute dynamic ens33
       valid_lft 26841sec preferred_lft 26841sec
    inet 192.168.31.200/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c383:8583:d760:5646/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fd:b5:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.80/24 brd 192.168.20.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::8286:ed3d:b49d:79db/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@loadbalance02 ~]# 



由此可看出,当keepalived主节点上的haproxy服务停止掉后,VIP正常漂移至备节点了。

恢复haproxy,查看VIP的还原情况:

[root@loadbalance01 ~]# systemctl start haproxy

[root@loadbalance01 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e1:20:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.54/24 brd 192.168.31.255 scope global noprefixroute dynamic ens33
       valid_lft 26601sec preferred_lft 26601sec
    inet 192.168.31.200/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::d3e7:1100:3607:f1a0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e1:20:91 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.70/24 brd 192.168.20.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::2681:d86f:ca5:a70f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@loadbalance01 ~]# 

[root@loadbalance02 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fd:b5:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.206/24 brd 192.168.31.255 scope global noprefixroute dynamic ens33
       valid_lft 26727sec preferred_lft 26727sec
    inet6 fe80::c383:8583:d760:5646/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fd:b5:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.80/24 brd 192.168.20.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::8286:ed3d:b49d:79db/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@loadbalance02 ~]# 


可以看到,当keepalived的主节点上的haproxy服务启动后,VIP漂移回来了。

到此,整个负载均衡器就配置完了。
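
另外,还可以顺手验证一下 VIP 上负载均衡端口的连通性(这里假设 haproxy 监听的是 6443 端口,与后文的 controlPlaneEndpoint 一致;此时后端 apiserver 还未部署,但 TCP 端口应已可连通):

# 利用bash内置的/dev/tcp伪设备测试VIP的6443端口
timeout 2 bash -c '</dev/tcp/192.168.31.200/6443' && echo "port open" || echo "port closed"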

4、安装容器(containerd)

说到容器,我们首先会想到docker,它是目前最主流的容器管理工具,社区支持力度高,功能丰富、使用方便,被全球范围内广大的运维和开发者熟知,所以在之前的k8s版本中,默认的容器运行时都是docker。但自k8s-v1.24版本开始,k8s移除了内置的dockershim组件,不再将docker作为其默认的容器运行时,而是换成了containerd。所以对于1.24及以后的k8s版本,部署集群时的容器运行时应该向containerd靠拢。

对于1.24版本的k8s部署,如果仍使用docker作为容器运行时,则在部署集群前需要安装cri-dockerd,这个插件通过CRI接口把docker和k8s连接起来;如果使用containerd作为容器运行时,则不需要安装额外的插件就可以正常部署。

这里我们选择containerd。

对于containerd而言,可以选择二进制安装,我一般习惯用yum安装。docker的yum仓库中就有containerd包,我们可以直接把docker仓库配置好,然后在每台k8s集群节点中安装containerd就行了。

(1)配置docker仓库:(每台k8s节点上配置)
这里我们选择阿里云的官方开源镜像站。

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
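
配置完成后,可以先确认仓库已生效,并查看可安装的 containerd 版本(示意):

yum repolist | grep docker-ce                        # 确认docker-ce仓库已生效
yum list containerd.io --showduplicates | tail -5    # 查看仓库中可安装的containerd版本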

(2)安装containerd:(每台k8s节点上配置)

[root@master01 ~]# yum -y install containerd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bupt.edu.cn
 * extras: mirrors.bupt.edu.cn
 * updates: mirrors.bupt.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package containerd.io.x86_64 0:1.6.8-3.1.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.6.8-3.1.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
--> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.119.2-1.911c772.el7_8.noarch
--> Running transaction check
---> Package policycoreutils-python.x86_64 0:2.5-34.el7 will be installed
--> Processing Dependency: policycoreutils = 2.5-34.el7 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Running transaction check
---> Package audit-libs-python.x86_64 0:2.8.5-4.el7 will be installed
--> Processing Dependency: audit-libs(x86-64) = 2.8.5-4.el7 for package: audit-libs-python-2.8.5-4.el7.x86_64
---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
---> Package libcgroup.x86_64 0:0.41-21.el7 will be installed
---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
---> Package policycoreutils.x86_64 0:2.5-29.el7 will be updated
---> Package policycoreutils.x86_64 0:2.5-34.el7 will be an update
---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
--> Running transaction check
---> Package audit-libs.x86_64 0:2.8.4-4.el7 will be updated
--> Processing Dependency: audit-libs(x86-64) = 2.8.4-4.el7 for package: audit-2.8.4-4.el7.x86_64
---> Package audit-libs.x86_64 0:2.8.5-4.el7 will be an update
--> Running transaction check
---> Package audit.x86_64 0:2.8.4-4.el7 will be updated
---> Package audit.x86_64 0:2.8.5-4.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                          Arch             Version                               Repository                  Size
==========================================================================================================================
Installing:
 containerd.io                    x86_64           1.6.8-3.1.el7                         docker-ce-stable            33 M
Installing for dependencies:
 audit-libs-python                x86_64           2.8.5-4.el7                           base                        76 k
 checkpolicy                      x86_64           2.5-8.el7                             base                       295 k
 container-selinux                noarch           2:2.119.2-1.911c772.el7_8             extras                      40 k
 libcgroup                        x86_64           0.41-21.el7                           base                        66 k
 libsemanage-python               x86_64           2.5-14.el7                            base                       113 k
 policycoreutils-python           x86_64           2.5-34.el7                            base                       457 k
 python-IPy                       noarch           0.75-6.el7                            base                        32 k
 setools-libs                     x86_64           3.3.8-4.el7                           base                       620 k
Updating for dependencies:
 audit                            x86_64           2.8.5-4.el7                           base                       256 k
 audit-libs                       x86_64           2.8.5-4.el7                           base                       102 k
 policycoreutils                  x86_64           2.5-34.el7                            base                       917 k

Transaction Summary
==========================================================================================================================
Install  1 Package  (+8 Dependent packages)
Upgrade             ( 3 Dependent packages)

Total download size: 36 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/12): audit-libs-2.8.5-4.el7.x86_64.rpm                                                          | 102 kB  00:00:00     
(2/12): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm                                       |  40 kB  00:00:00     
(3/12): libcgroup-0.41-21.el7.x86_64.rpm                                                           |  66 kB  00:00:00     
(4/12): audit-2.8.5-4.el7.x86_64.rpm                                                               | 256 kB  00:00:00     
(5/12): audit-libs-python-2.8.5-4.el7.x86_64.rpm                                                   |  76 kB  00:00:00     
(6/12): checkpolicy-2.5-8.el7.x86_64.rpm                                                           | 295 kB  00:00:00     
(7/12): libsemanage-python-2.5-14.el7.x86_64.rpm                                                   | 113 kB  00:00:00     
(8/12): python-IPy-0.75-6.el7.noarch.rpm                                                           |  32 kB  00:00:00     
(9/12): policycoreutils-2.5-34.el7.x86_64.rpm                                                      | 917 kB  00:00:01     
(10/12): setools-libs-3.3.8-4.el7.x86_64.rpm                                                       | 620 kB  00:00:01     
(11/12): policycoreutils-python-2.5-34.el7.x86_64.rpm                                              | 457 kB  00:00:01     
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.6.8-3.1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.6.8-3.1.el7.x86_64.rpm is not installed
(12/12): containerd.io-1.6.8-3.1.el7.x86_64.rpm                                                    |  33 MB  00:00:22     
--------------------------------------------------------------------------------------------------------------------------
Total                                                                                     1.6 MB/s |  36 MB  00:00:22     
Retrieving key from https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : audit-libs-2.8.5-4.el7.x86_64                                                                         1/15 
  Updating   : policycoreutils-2.5-34.el7.x86_64                                                                     2/15 
  Installing : audit-libs-python-2.8.5-4.el7.x86_64                                                                  3/15 
  Installing : setools-libs-3.3.8-4.el7.x86_64                                                                       4/15 
  Installing : libcgroup-0.41-21.el7.x86_64                                                                          5/15 
  Installing : checkpolicy-2.5-8.el7.x86_64                                                                          6/15 
  Installing : python-IPy-0.75-6.el7.noarch                                                                          7/15 
  Installing : libsemanage-python-2.5-14.el7.x86_64                                                                  8/15 
  Installing : policycoreutils-python-2.5-34.el7.x86_64                                                              9/15 
  Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch                                                   10/15 
setsebool:  SELinux is disabled.
  Installing : containerd.io-1.6.8-3.1.el7.x86_64                                                                   11/15 
  Updating   : audit-2.8.5-4.el7.x86_64                                                                             12/15 
  Cleanup    : policycoreutils-2.5-29.el7.x86_64                                                                    13/15 
  Cleanup    : audit-2.8.4-4.el7.x86_64                                                                             14/15 
  Cleanup    : audit-libs-2.8.4-4.el7.x86_64                                                                        15/15 
  Verifying  : audit-libs-2.8.5-4.el7.x86_64                                                                         1/15 
  Verifying  : audit-2.8.5-4.el7.x86_64                                                                              2/15 
  Verifying  : containerd.io-1.6.8-3.1.el7.x86_64                                                                    3/15 
  Verifying  : policycoreutils-2.5-34.el7.x86_64                                                                     4/15 
  Verifying  : libsemanage-python-2.5-14.el7.x86_64                                                                  5/15 
  Verifying  : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch                                                    6/15 
  Verifying  : python-IPy-0.75-6.el7.noarch                                                                          7/15 
  Verifying  : checkpolicy-2.5-8.el7.x86_64                                                                          8/15 
  Verifying  : policycoreutils-python-2.5-34.el7.x86_64                                                              9/15 
  Verifying  : audit-libs-python-2.8.5-4.el7.x86_64                                                                 10/15 
  Verifying  : libcgroup-0.41-21.el7.x86_64                                                                         11/15 
  Verifying  : setools-libs-3.3.8-4.el7.x86_64                                                                      12/15 
  Verifying  : policycoreutils-2.5-29.el7.x86_64                                                                    13/15 
  Verifying  : audit-libs-2.8.4-4.el7.x86_64                                                                        14/15 
  Verifying  : audit-2.8.4-4.el7.x86_64                                                                             15/15 

Installed:
  containerd.io.x86_64 0:1.6.8-3.1.el7                                                                                    

Dependency Installed:
  audit-libs-python.x86_64 0:2.8.5-4.el7                          checkpolicy.x86_64 0:2.5-8.el7                         
  container-selinux.noarch 2:2.119.2-1.911c772.el7_8              libcgroup.x86_64 0:0.41-21.el7                         
  libsemanage-python.x86_64 0:2.5-14.el7                          policycoreutils-python.x86_64 0:2.5-34.el7             
  python-IPy.noarch 0:0.75-6.el7                                  setools-libs.x86_64 0:3.3.8-4.el7                      

Dependency Updated:
  audit.x86_64 0:2.8.5-4.el7         audit-libs.x86_64 0:2.8.5-4.el7         policycoreutils.x86_64 0:2.5-34.el7        

Complete!
[root@master01 ~]# 


其他节点就不演示安装过程了。

(3)在master01上生成默认的containerd配置文件:

[root@master01 ~]# containerd config default > /etc/containerd/config.toml


(4)修改配置文件参数:

[root@master01 ~]# vim /etc/containerd/config.toml 
......
......
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"                            //由于国内网络无法访问k8s.gcr.io,将pause镜像的仓库修改成阿里云的仓库。
......
......
SystemdCgroup = true                                               //此处默认为false,需要修改为true,即把cgroups驱动改为systemd。k8s-1.24默认的cgroups驱动是systemd,容器运行时的驱动需要和k8s的驱动保持一致。
......
......
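
上面两处改动也可以用 sed 一次性完成(仅作示意,修改前建议先备份;具体默认值以实际生成的 config.toml 为准):

# 先备份,再用sed批量修改sandbox_image和SystemdCgroup
cp /etc/containerd/config.toml{,.bak}
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml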


(5)将步骤4中的配置文件复制到其余各k8s节点:

[root@master01 ~]# scp /etc/containerd/config.toml root@master02:/etc/containerd/
config.toml                                                                             100% 7029     7.8MB/s   00:00    
[root@master01 ~]# scp /etc/containerd/config.toml root@master03:/etc/containerd/
config.toml                                                                             100% 7029     8.0MB/s   00:00    
[root@master01 ~]# scp /etc/containerd/config.toml root@node01:/etc/containerd/
config.toml                                                                             100% 7029     8.1MB/s   00:00    
[root@master01 ~]# scp /etc/containerd/config.toml root@node02:/etc/containerd/
config.toml                                                                             100% 7029     8.6MB/s   00:00    
[root@master01 ~]# scp /etc/containerd/config.toml root@node03:/etc/containerd/
config.toml                                                                             100% 7029     7.7MB/s   00:00    
[root@master01 ~]# 
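
节点较多时,也可以用一个简单的 for 循环批量分发(示意,假设各节点主机名已能解析且已配置免密登录):

# 批量分发containerd配置文件
for h in master02 master03 node01 node02 node03; do
  scp /etc/containerd/config.toml root@$h:/etc/containerd/
done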


(6)k8s各节点启动containerd服务:

[root@master01 ~]# systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@master01 ~]# 

[root@master02 ~]# systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@master02 ~]# 

[root@master03 ~]# systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@master03 ~]# 

[root@node01 ~]# systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@node01 ~]# 

[root@node02 ~]# systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@node02 ~]# 

[root@node03 ~]# systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@node03 ~]# 

到此,containerd就安装完了。

拉一个镜像测试一下:

[root@master01 ~]# ctr images pull docker.io/library/nginx:latest
docker.io/library/nginx:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f:    exists         |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc: exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7247f6e5c182559e2f7c010c11506802a0259958577a6e64c31b5b8f7cb0b286:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:ca1981974b581a41cc58598a6b51580d317ac61590be75a8a63fa479e53890da:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:d4019c921e20447eea3c9658bd0780a7e3771641bf29b85f222ec3f54c11a84f:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7cb804d746d48520f1c0322fcda93249b96b4ed0bbd7f9912b2eb21bd8da6b43:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:e7a561826262f279acf3a671b2d5684a86a8dbc48dc88e4cb65305ba4b08cae1:    exists         |++++++++++++++++++++++++++++++++++++++| 
config-sha256:2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763:   exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7a6db449b51b92eac5c81cdbd82917785343f1664b2be57b22337b0a40c5b29d:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 288.8s                                                                   total:  53.2 M (188.5 KiB/s)                                     
unpacking linux/amd64 sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f...
done: 2.066439248s	
[root@master01 ~]# 


[root@master01 ~]# ctr images ls
REF                            TYPE                                                      DIGEST                                                                  SIZE     PLATFORMS                                                                                               LABELS 
docker.io/library/nginx:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f 54.1 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -      
[root@master01 ~]# 


可以看到nginx镜像成功地从docker hub上拉取下来了。到此,containerd容器部署成功,功能也正常。
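
需要注意的是,ctr 默认操作的是 default 命名空间,而 k8s 通过 CRI 使用的是 k8s.io 命名空间,两边的镜像列表相互独立:

# 查看k8s实际使用的k8s.io命名空间中的镜像(集群初始化后这里才会出现k8s组件镜像)
ctr -n k8s.io images ls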

5、部署k8s集群

5.1、安装kubeadm、kubectl、kubelet

这里我们选择通过k8s官方自带的kubeadm工具来部署k8s集群。选择阿里云开源镜像站来安装这3个工具。

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

[root@master01 ~]# yum -y install kubeadm-1.24.2 kubectl-1.24.2 kubelet-1.24.2 --nogpgcheck

在每台k8s节点上都配置好阿里云的镜像仓库,然后安装kubeadm、kubectl、kubelet这三个工具。此处不再演示后面机器安装过程。
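
安装完成后,可以在各节点上简单验证一下版本(示意):

kubeadm version -o short     # 期望输出 v1.24.2
kubelet --version            # 期望输出 Kubernetes v1.24.2
kubectl version --client     # 查看kubectl客户端版本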

5.2、启动kubelet服务

在每台k8s机器上启动kubelet服务。注意:在执行kubeadm init/join之前,kubelet因为缺少配置会反复重启,属正常现象,集群初始化后会自动恢复。

[root@master01 ~]# systemctl start kubelet && systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master01 ~]# 

[root@master02 ~]# systemctl start kubelet && systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master02 ~]# 

[root@master03 ~]# systemctl start kubelet && systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master03 ~]# 

[root@node01 ~]# systemctl start kubelet && systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# 

[root@node02 ~]# systemctl start kubelet && systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 ~]# 

[root@node03 ~]# systemctl start kubelet && systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node03 ~]# 


5.3、配置crictl命令

在上一步安装完k8s的这3个工具之后,cri-tools工具包也作为依赖包一起安装上去了。其中crictl命令就是由cri-tools工具包提供的。
但是运行crictl命令时,提示警告:

[root@master01 ~]# crictl version
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory" 
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  1.6.8
RuntimeApiVersion:  v1
[root@master01 ~]# 


这是因为默认的crictl工具的CRI接口指向的是dockershim,而在1.24版本中已经没有了dockershim接口,所以我们需要配置一下,将接口修改为containerd的接口。

[root@master01 ~]# vim /etc/crictl.yaml                                //在/etc/下新建一个crictl.yaml文件
runtime-endpoint: unix:///var/run/dockershim.sock                      //这是官方提供的默认接口配置,将这里修改为unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/dockershim.sock                        //同上,修改接口配置
timeout: 2
debug: true                                                            //把debug功能关闭掉,即修改为false
pull-image-on-create: false


上述配置修改完成后如下:
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false


运行crictl命令:

[root@master01 ~]# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  1.6.8
RuntimeApiVersion:  v1
[root@master01 ~]# 


可以看到,没有警告了。
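
配置生效后,可以用几个常用的 crictl 子命令做个简单验证(此时集群尚未初始化,列表为空属正常现象):

crictl images     # 列出CRI运行时中的镜像
crictl ps -a      # 列出所有容器
crictl pods       # 列出Pod沙箱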

接下来将这个文件复制到其他k8s节点上的/etc/下:

[root@master01 ~]# scp /etc/crictl.yaml root@master02:/etc/
crictl.yaml                                                                             100%  172   246.0KB/s   00:00    
[root@master01 ~]# scp /etc/crictl.yaml root@master03:/etc/
crictl.yaml                                                                             100%  172   203.6KB/s   00:00    
[root@master01 ~]# scp /etc/crictl.yaml root@node01:/etc/
crictl.yaml                                                                             100%  172   249.8KB/s   00:00    
[root@master01 ~]# scp /etc/crictl.yaml root@node02:/etc/
crictl.yaml                                                                             100%  172   282.2KB/s   00:00    
[root@master01 ~]# scp /etc/crictl.yaml root@node03:/etc/
crictl.yaml                                                                             100%  172   191.5KB/s   00:00    
[root@master01 ~]# 


到此,crictl命令就配置完成了,其余节点可以正常使用此命令了。

5.4、拉取k8s组件镜像

在初始化时,kubeadm会从k8s.gcr.io这个站点上拉取镜像。如果提前把这些组件镜像都拉取下来,初始化时就不需要再拉取了;不提前拉取也没关系,初始化时会自动拉取,这个看个人选择。这里我们选择提前拉取。

另外:k8s.gcr.io这个站点是谷歌官方镜像站点,在国内网络环境中是无法拉取到的,需要通过科学上网的方式来拉取镜像。这里我选择通过阿里云提供的谷歌镜像仓库来拉取镜像。阿里云谷歌镜像仓库:registry.cn-hangzhou.aliyuncs.com/google_containers

在每台k8s节点上拉取组件镜像,选择1.24.2的版本:

[root@master01 ~]# kubeadm config images list --kubernetes-version 1.24.2          //查看1.24.2版本的所有组件镜像及其版本
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@master01 ~]# 


[root@master01 ~]# kubeadm config images pull --kubernetes-version 1.24.2 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers                                              //拉取1.24.2版本的所有组件镜像
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@master01 ~]# 

[root@master02 ~]# kubeadm config images pull --kubernetes-version 1.24.2 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@master02 ~]# 

[root@master03 ~]# kubeadm config images pull --kubernetes-version 1.24.2 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@master03 ~]# 

[root@node01 ~]# kubeadm config images pull --kubernetes-version 1.24.2 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@node01 ~]# 

[root@node02 ~]# kubeadm config images pull --kubernetes-version 1.24.2 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@node02 ~]# 

[root@node03 ~]# kubeadm config images pull --kubernetes-version 1.24.2 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@node03 ~]# 
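
同样地,也可以在 master01 上通过 ssh 批量触发其余节点拉取镜像,避免逐台登录(示意,假设已配置免密登录):

for h in master02 master03 node01 node02 node03; do
  ssh root@$h "kubeadm config images pull --kubernetes-version 1.24.2 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers"
done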



5.5、生成并配置k8s初始化配置文件

使用kubeadm初始化k8s集群有两种方式:

  • 通过kubeadm init命令指定参数来初始化
  • 通过生成k8s配置文件来初始化集群

这里我们选择生成配置文件来初始化集群。

(1)这里以master01作为初始化节点,生成k8s默认配置文件;

[root@master01 ~]# kubeadm config print init-defaults --component-configs KubeProxyConfiguration,KubeletConfiguration > /root/init.yaml
[root@master01 ~]# 
[root@master01 ~]# ls 
anaconda-ks.cfg  init.yaml
[root@master01 ~]# 


说明:使用 kubeadm config print init-defaults > /root/init.yaml也可以,只是上面那条命令输出的配置信息更详细而已。

(2)配置init.yaml文件:

[root@master01 ~]# vim init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.183                        //此处修改为初始化节点的IP地址,即哪台节点初始化就填写哪台节点的IP.
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock   //此处修改容器运行时的sock,如果是containerd的话默认无需修改。
  imagePullPolicy: IfNotPresent                           //默认无需修改。如果本地有镜像不想使用仓库的话可以修改为Never。
  name: master01                                          //此处一般情况下可修改为初始化节点的主机名。我这里就是如此。
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers     //此处修改仓库为阿里云的仓库地址,如果是外网环境则无需修改
kind: ClusterConfiguration
kubernetesVersion: 1.24.2                                                //此处修改k8s的版本为1.24.2
controlPlaneEndpoint: test.k8s.local:6443                                //此处需要新增这个参数,将其值设置为VIP:6443或者域名:6443,两者都可以,我这里使用的是VIP对应的域名。
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.1.0.0/16                                             //service网段,如有自己的网络规划可修改
  podSubnet: 172.16.0.0/16                                               //pod网段,如有自己的网络规划可修改,注意需与后续CNI网络插件的网段配置保持一致
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocal:
  bridgeInterface: ""
  interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"                                                                 //此处默认不修改也行,我这里修改为了ipvs模式。
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  forwardHealthCheckVip: false
  networkName: ""
  rootHnsEndpointName: ""
  sourceVip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd                                                      //此处设置cgroups的值为systemd,与容器运行时containerd的cgroups保持一致。k8s-1.24版本默认就是systemd,所以此处无需修改
clusterDNS:
- 10.1.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s


说明:本人也是初学者,对其中很多参数并不完全理解,只修改了我认为需要修改的部分。之前在网上查到的一些资料中还涉及修改证书相关的部分,这里我没有改动。我个人认为除上述需要修改的部分外,其余保持默认即可。
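
另外,在正式初始化之前,可以先用干跑模式检查一遍配置,kubeadm 只做预检和渲染,不会真正改动系统(示意):

kubeadm init --config init.yaml --dry-run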

5.6、初始化k8s集群

上述准备工作都准备好后,接下来开始初始化k8s集群。

在初始化节点上操作,我这里就在master01上操作:

[root@master01 ~]# 
[root@master01 ~]# kubeadm init --config init.yaml --upload-certs
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01 test.k8s.local] and IPs [10.1.0.1 192.168.31.183]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.31.183 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.31.183 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.536906 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
861bfee64f09ab362d5dcfa9275138b12269235a3d38348023563889fe8d960d
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681 \
	--control-plane --certificate-key 861bfee64f09ab362d5dcfa9275138b12269235a3d38348023563889fe8d960d

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681 
[root@master01 ~]# 


From the initialization log above:
Once you see "Your Kubernetes control-plane has initialized successfully!", initialization on master01 has succeeded.

Next, following the hints in the log, configure kubectl on master01:

[root@master01 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@master01 ~]# 
[root@master01 ~]# 
[root@master01 ~]# pwd
/root
[root@master01 ~]# 
[root@master01 ~]# mkdir -p .kube
[root@master01 ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@master01 ~]# ll .kube/
total 8
-rw------- 1 root root 5638 Aug 31 14:47 config
[root@master01 ~]# 


Configure the environment variable:

[root@master01 ~]# vim .bashrc 
......
......
export KUBECONFIG=/etc/kubernetes/admin.conf                            //append this line at the end of the file

[root@master01 ~]# source .bashrc                                       //make it take effect
[root@master01 ~]# 


Once configured, check the node status with kubectl:

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
master01   NotReady   control-plane   9m36s   v1.24.2
[root@master01 ~]# 


As shown, master01 is currently in the NotReady state. Why? Because no network plugin has been installed yet. Once the network plugin is in place, master01 will move from NotReady to Ready.
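
Two read-only commands make the cause visible (nothing to change yet): without a CNI plugin the coredns pods stay Pending, and the node's Ready condition spells out the reason:

[root@master01 ~]# kubectl get pods -n kube-system -o wide
[root@master01 ~]# kubectl describe node master01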

5.7、Installing the container network plugin

Here we choose Calico as the network.

The official Calico site is here: https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
Just follow the steps in the official documentation.

(1) Download the calico.yaml file, i.e. the network manifest, from the official site:

[root@VM-12-14-centos ~]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico-typha.yaml -o calico.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  233k  100  233k    0     0  21950      0  0:00:10  0:00:10 --:--:-- 21350
[root@VM-12-14-centos ~]# 
[root@VM-12-14-centos ~]# ls calico.yaml 
calico.yaml
[root@VM-12-14-centos ~]# 


Because of my local network conditions the official URL was unreachable, so I downloaded the file from a public-cloud host of mine, pulled it to my local machine, and then uploaded it to master01.

(2) Review/modify the network manifest:

[root@master01 ~]# vim calico.yaml 
......
......
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
......
......


Note: Calico's default pool is 192.168.0.0/16. For the CIDR, the official docs explain it as follows:
(screenshot of the CIDR note from the Calico docs)
In short: if your pod network really is 192.168.0.0/16, nothing needs changing; if the cluster was deployed with kubeadm on a different network, it still needs no change, because Calico auto-detects the CIDR at deploy time. Only if the cluster was initialized with some other platform or tool do you have to edit it.

We therefore change nothing here either; per that explanation, Calico will auto-detect our pod network as 172.16.0.0/16 during deployment.

If you do need to customize it, edit this manifest to suit; I keep everything at its defaults.
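
For reference only: if you did want to pin the pool explicitly to this cluster's pod network (172.16.0.0/16 in our init.yaml), you would uncomment and edit exactly those two lines — a sketch, not something this deployment needs:

            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/16"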

(3) Apply the network manifest:

[root@master01 ~]# kubectl apply -f /root/calico.yaml 
poddisruptionbudget.policy/calico-kube-controllers created
poddisruptionbudget.policy/calico-typha created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
service/calico-typha created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
deployment.apps/calico-typha created
[root@master01 ~]# 


With that, the network plugin is deployed. master01 will now pull the Calico images and start the pods; once everything is up, check the node status with kubectl:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   93m   v1.24.2
[root@master01 ~]# 


master01's status has now changed to Ready.
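
If the node lingers in NotReady for a while, the Calico rollout is most likely still pulling images; you can wait on it explicitly (a sketch, using the object names created by calico.yaml above):

[root@master01 ~]# kubectl rollout status daemonset/calico-node -n kube-system
[root@master01 ~]# kubectl get pods -n kube-system -o wide           //calico-* and coredns pods should all reach Running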

5.8、Adding the remaining control-plane nodes to the cluster

Two control-plane nodes remain, master02 and master03; now add them both to the cluster.

[root@master02 ~]# kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681 \
> --control-plane --certificate-key 861bfee64f09ab362d5dcfa9275138b12269235a3d38348023563889fe8d960d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master02 test.k8s.local] and IPs [10.1.0.1 192.168.31.185]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master02] and IPs [192.168.31.185 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master02] and IPs [192.168.31.185 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master02 ~]# 
[root@master02 ~]# mkdir -p .kube
[root@master02 ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@master02 ~]# ll .kube/config 
-rw------- 1 root root 5638 Aug 31 16:22 .kube/config
[root@master02 ~]# 
[root@master02 ~]# kubectl get nodes
NAME       STATUS     ROLES           AGE    VERSION
master01   Ready      control-plane   104m   v1.24.2
master02   NotReady   control-plane   66s    v1.24.2






[root@master03 ~]# kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681 \
> --control-plane --certificate-key e27b2c2001d6f8276c5452515acf65cdd54127627eae029c8600e692f5cb9434
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master03] and IPs [192.168.31.247 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master03] and IPs [192.168.31.247 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master03 test.k8s.local] and IPs [10.1.0.1 192.168.31.247]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master03 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master03 ~]# 
[root@master03 ~]# mkdir -p .kube
[root@master03 ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@master03 ~]# ll .kube/config 
-rw------- 1 root root 5642 Aug 31 17:57 .kube/config
[root@master03 ~]# 


Note: when adding master03, the previously uploaded certificates had expired because of the timeout (they are deleted after two hours), so I ran "kubeadm init phase upload-certs --upload-certs" on an already-initialized control-plane node to re-upload them, replaced the certificate key in the original join command with the new one, and only then could master03 join the cluster.
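
If you run into the same timeout, both halves of the join command can be regenerated on any working control-plane node; the printed values simply replace the old token and certificate key:

[root@master01 ~]# kubeadm token create --print-join-command              //prints a fresh worker join command
[root@master01 ~]# kubeadm init phase upload-certs --upload-certs         //re-uploads the certs and prints a fresh certificate key

For a control-plane join, append --control-plane --certificate-key <new-key> to the printed command.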

With that, all control-plane nodes have been added.
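
As a quick verification of the stacked-etcd topology, each control-plane node should now be running its own etcd static pod — a sketch, assuming the component label kubeadm puts on its static pods:

[root@master01 ~]# kubectl get pods -n kube-system -l component=etcd -o wide      //expect etcd-master01/02/03, all Running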

5.9、Adding the worker nodes to the cluster

Add node01, node02, and node03 to the cluster.

[root@node01 ~]# kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.




[root@node02 ~]# kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.




[root@node03 ~]# kubeadm join test.k8s.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:e5808eb8dff8994dac64135375842ab5aea3c979901b325be559ae0b23002681
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


Check the status of all nodes from a control-plane node:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   3h31m   v1.24.2
master02   Ready    control-plane   108m    v1.24.2
master03   Ready    control-plane   12m     v1.24.2
node01     Ready    <none>          3m55s   v1.24.2
node02     Ready    <none>          3m12s   v1.24.2
node03     Ready    <none>          2m53s   v1.24.2
[root@master01 ~]# 


With that, all worker nodes have joined the cluster.

By default the worker nodes cannot run kubectl. For convenience we configure kubectl on the workers as well; this step is optional.

Copy the whole /root/.kube directory from master01 to the three worker nodes:

[root@master01 ~]# scp -r .kube root@node01:/root/
config                                                                                  100% 5638     3.9MB/s   00:00    
1ef8d4ac67cfe377cb785b047f880eaa                                                        100%  471   275.0KB/s   00:00    
5df247d6fae725450d1a7ee91b226aa3                                                        100% 4232     3.4MB/s   00:00    
a32d123dc92e912912d8fb2245c1ca14                                                        100% 1153   928.7KB/s   00:00    
fbcd7d7a8c57f448f8bdce522bdb52f5                                                        100% 1324     1.5MB/s   00:00    
470b73fdf54ff009c4672d4597baf7b0                                                        100% 2531     2.5MB/s   00:00    
a56404c52bc11a79e37cdf3cdf28eab8                                                        100%  925     1.0MB/s   00:00    
802d5446bbc98ecad76d460ad3779cfe                                                        100%  659   726.0KB/s   00:00    
b37b933cae84962917e65a9d268d2193                                                        100%  650   696.3KB/s   00:00    
f14c93da8741734aa6e7564f1e70633b                                                        100%  636   745.4KB/s   00:00    
593f58602047370c87c89d1fced9f50b                                                        100%  623   695.0KB/s   00:00    
3346ed91eaed45fc058b13325792f3ff                                                        100%  796     1.0MB/s   00:00    
743223a773f82cbc08bb47c8b6227bed                                                        100%  772   851.0KB/s   00:00    
b85b8b75ebdd81ed7e9954e5b3255543                                                        100%  843   722.7KB/s   00:00    
5933660946488900043010001319fa6d                                                        100% 1038     1.5MB/s   00:00    
0ff88ea1d3029770832841dc65b995a8                                                        100% 1249   755.2KB/s   00:00    
fafafa2f3233d7352594a94c438cb5c4                                                        100% 1611     1.3MB/s   00:00    
a345a621db82d2be90dd1f9214a64119                                                        100%  536   495.4KB/s   00:00    
479486fab926eca5569777289e46f5d8                                                        100%  636   558.8KB/s   00:00    
adcffc60aa284ab300431765a7d0b2bd                                                        100% 1153     1.1MB/s   00:00    
87bcd940e05514802e8fe41150682ff0                                                        100%  871   725.6KB/s   00:00    
676403b75b8d08e100c473696e6540c3                                                        100%  843   936.6KB/s   00:00    
9e3592efeea035176e6c27b91f7acd4f                                                        100%  987   783.6KB/s   00:00    
4aaf5895ed430f3e5f1ee0a1bf692283                                                        100%  838   813.3KB/s   00:00    
7e39391da245d0659ab51e9ccf3fa767                                                        100%  631   657.3KB/s   00:00    
f3769aebc450e8ff25482ee0fdb8afde                                                        100%  641   827.0KB/s   00:00    
4aa1c59d82b2536dd0f69e3dd4dce0c9                                                        100%  637   742.9KB/s   00:00    
2d0a5180bafe99b457acd5680b55372d                                                        100% 1079     1.3MB/s   00:00    
4a19e97af0f5967927bbf130a510267c                                                        100%  789   452.7KB/s   00:00    
3c53b0f4682ef619eaeb7f9ee1b3396b                                                        100% 6535     1.8MB/s   00:00    
d14624dc4e6329a5a78d4ecb6203c4b8                                                        100%  984   827.9KB/s   00:00    
790bca15e979cc785e59f5a808d9c53a                                                        100%  642   581.5KB/s   00:00    
10e839af326ca6d661db1ec8359d6f05                                                        100%  838   924.3KB/s   00:00    
638673934fe86293f2499d98b5b65837                                                        100% 2887KB 117.6MB/s   00:00    
servergroups.json                                                                       100% 4015     1.0MB/s   00:00    
serverresources.json                                                                    100%  819   891.4KB/s   00:00    
serverresources.json                                                                    100%  819     1.3MB/s   00:00    
serverresources.json                                                                    100%  990   789.7KB/s   00:00    
serverresources.json                                                                    100% 2196     2.0MB/s   00:00    
serverresources.json                                                                    100%  591   899.0KB/s   00:00    
serverresources.json                                                                    100%  325   469.1KB/s   00:00    
serverresources.json                                                                    100%  316   432.0KB/s   00:00    
serverresources.json                                                                    100% 1276     2.3MB/s   00:00    
serverresources.json                                                                    100%  302   349.7KB/s   00:00    
serverresources.json                                                                    100%  307   425.0KB/s   00:00    
serverresources.json                                                                    100%  289   262.2KB/s   00:00    
serverresources.json                                                                    100%  462   548.7KB/s   00:00    
serverresources.json                                                                    100%  704     1.1MB/s   00:00    
serverresources.json                                                                    100%  438   626.9KB/s   00:00    
serverresources.json                                                                    100%  745   884.7KB/s   00:00    
serverresources.json                                                                    100%  509   502.5KB/s   00:00    
serverresources.json                                                                    100%  509   709.7KB/s   00:00    
serverresources.json                                                                    100%  504   828.4KB/s   00:00    
serverresources.json                                                                    100%  504   606.8KB/s   00:00    
serverresources.json                                                                    100%  915     1.1MB/s   00:00    
serverresources.json                                                                    100%  202   254.6KB/s   00:00    
serverresources.json                                                                    100%  302   475.5KB/s   00:00    
serverresources.json                                                                    100%  297   320.7KB/s   00:00    
serverresources.json                                                                    100%  537   793.9KB/s   00:00    
serverresources.json                                                                    100%  653     1.0MB/s   00:00    
serverresources.json                                                                    100%  303   535.6KB/s   00:00    
serverresources.json                                                                    100%  308   514.6KB/s   00:00    
serverresources.json                                                                    100%  455   635.6KB/s   00:00    
serverresources.json                                                                    100% 6221     9.5MB/s   00:00    
serverresources.json                                                                    100%  650   909.7KB/s   00:00    
[root@master01 ~]# 
[root@master01 ~]# scp -r .kube root@node02:/root/
config                                                                                  100% 5638     5.5MB/s   00:00    
1ef8d4ac67cfe377cb785b047f880eaa                                                        100%  471   404.0KB/s   00:00    
5df247d6fae725450d1a7ee91b226aa3                                                        100% 4232     4.8MB/s   00:00    
a32d123dc92e912912d8fb2245c1ca14                                                        100% 1153     1.6MB/s   00:00    
fbcd7d7a8c57f448f8bdce522bdb52f5                                                        100% 1324     1.9MB/s   00:00    
470b73fdf54ff009c4672d4597baf7b0                                                        100% 2531     3.5MB/s   00:00    
a56404c52bc11a79e37cdf3cdf28eab8                                                        100%  925     1.2MB/s   00:00    
802d5446bbc98ecad76d460ad3779cfe                                                        100%  659   649.4KB/s   00:00    
b37b933cae84962917e65a9d268d2193                                                        100%  650   804.5KB/s   00:00    
f14c93da8741734aa6e7564f1e70633b                                                        100%  636   529.7KB/s   00:00    
593f58602047370c87c89d1fced9f50b                                                        100%  623     1.0MB/s   00:00    
3346ed91eaed45fc058b13325792f3ff                                                        100%  796     1.3MB/s   00:00    
743223a773f82cbc08bb47c8b6227bed                                                        100%  772     1.2MB/s   00:00    
b85b8b75ebdd81ed7e9954e5b3255543                                                        100%  843     1.5MB/s   00:00    
5933660946488900043010001319fa6d                                                        100% 1038     1.5MB/s   00:00    
0ff88ea1d3029770832841dc65b995a8                                                        100% 1249     2.0MB/s   00:00    
fafafa2f3233d7352594a94c438cb5c4                                                        100% 1611     2.2MB/s   00:00    
a345a621db82d2be90dd1f9214a64119                                                        100%  536   791.3KB/s   00:00    
479486fab926eca5569777289e46f5d8                                                        100%  636   793.5KB/s   00:00    
adcffc60aa284ab300431765a7d0b2bd                                                        100% 1153   414.4KB/s   00:00    
87bcd940e05514802e8fe41150682ff0                                                        100%  871     1.0MB/s   00:00    
676403b75b8d08e100c473696e6540c3                                                        100%  843     1.2MB/s   00:00    
9e3592efeea035176e6c27b91f7acd4f                                                        100%  987     1.5MB/s   00:00    
4aaf5895ed430f3e5f1ee0a1bf692283                                                        100%  838     1.3MB/s   00:00    
7e39391da245d0659ab51e9ccf3fa767                                                        100%  631   871.3KB/s   00:00    
f3769aebc450e8ff25482ee0fdb8afde                                                        100%  641   795.7KB/s   00:00    
4aa1c59d82b2536dd0f69e3dd4dce0c9                                                        100%  637   391.2KB/s   00:00    
2d0a5180bafe99b457acd5680b55372d                                                        100% 1079     1.3MB/s   00:00    
4a19e97af0f5967927bbf130a510267c                                                        100%  789     1.2MB/s   00:00    
3c53b0f4682ef619eaeb7f9ee1b3396b                                                        100% 6535     6.1MB/s   00:00    
d14624dc4e6329a5a78d4ecb6203c4b8                                                        100%  984   308.5KB/s   00:00    
790bca15e979cc785e59f5a808d9c53a                                                        100%  642   572.5KB/s   00:00    
10e839af326ca6d661db1ec8359d6f05                                                        100%  838   884.0KB/s   00:00    
638673934fe86293f2499d98b5b65837                                                        100% 2887KB  93.8MB/s   00:00    
servergroups.json                                                                       100% 4015     4.2MB/s   00:00    
serverresources.json                                                                    100%  819   753.9KB/s   00:00    
serverresources.json                                                                    100%  819     1.0MB/s   00:00    
serverresources.json                                                                    100%  990     1.2MB/s   00:00    
serverresources.json                                                                    100% 2196     2.5MB/s   00:00    
serverresources.json                                                                    100%  591   737.0KB/s   00:00    
serverresources.json                                                                    100%  325   384.4KB/s   00:00    
serverresources.json                                                                    100%  316   388.1KB/s   00:00    
serverresources.json                                                                    100% 1276     1.1MB/s   00:00    
serverresources.json                                                                    100%  302   281.3KB/s   00:00    
serverresources.json                                                                    100%  307   497.4KB/s   00:00    
serverresources.json                                                                    100%  289   496.4KB/s   00:00    
serverresources.json                                                                    100%  462   796.1KB/s   00:00    
serverresources.json                                                                    100%  704     1.1MB/s   00:00    
serverresources.json                                                                    100%  438   564.2KB/s   00:00    
serverresources.json                                                                    100%  745   752.4KB/s   00:00    
serverresources.json                                                                    100%  509   584.6KB/s   00:00    
serverresources.json                                                                    100%  509   741.8KB/s   00:00    
serverresources.json                                                                    100%  504   852.9KB/s   00:00    
serverresources.json                                                                    100%  504   728.2KB/s   00:00    
serverresources.json                                                                    100%  915     1.2MB/s   00:00    
serverresources.json                                                                    100%  202   198.9KB/s   00:00    
serverresources.json                                                                    100%  302   356.9KB/s   00:00    
serverresources.json                                                                    100%  297   331.5KB/s   00:00    
serverresources.json                                                                    100%  537   590.9KB/s   00:00    
serverresources.json                                                                    100%  653   685.8KB/s   00:00    
serverresources.json                                                                    100%  303   326.6KB/s   00:00    
serverresources.json                                                                    100%  308   358.5KB/s   00:00    
serverresources.json                                                                    100%  455   568.6KB/s   00:00    
serverresources.json                                                                    100% 6221     5.3MB/s   00:00    
serverresources.json                                                                    100%  650   792.4KB/s   00:00    
[root@master01 ~]# scp -r .kube root@node03:/root/
config                                                                                  100% 5638     5.5MB/s   00:00    
1ef8d4ac67cfe377cb785b047f880eaa                                                        100%  471   464.3KB/s   00:00    
5df247d6fae725450d1a7ee91b226aa3                                                        100% 4232     5.7MB/s   00:00    
a32d123dc92e912912d8fb2245c1ca14                                                        100% 1153     1.4MB/s   00:00    
fbcd7d7a8c57f448f8bdce522bdb52f5                                                        100% 1324     1.9MB/s   00:00    
470b73fdf54ff009c4672d4597baf7b0                                                        100% 2531     3.6MB/s   00:00    
a56404c52bc11a79e37cdf3cdf28eab8                                                        100%  925     1.5MB/s   00:00    
802d5446bbc98ecad76d460ad3779cfe                                                        100%  659   384.7KB/s   00:00    
b37b933cae84962917e65a9d268d2193                                                        100%  650   830.0KB/s   00:00    
f14c93da8741734aa6e7564f1e70633b                                                        100%  636     1.1MB/s   00:00    
593f58602047370c87c89d1fced9f50b                                                        100%  623   668.6KB/s   00:00    
3346ed91eaed45fc058b13325792f3ff                                                        100%  796     1.1MB/s   00:00    
743223a773f82cbc08bb47c8b6227bed                                                        100%  772     1.0MB/s   00:00    
b85b8b75ebdd81ed7e9954e5b3255543                                                        100%  843   964.7KB/s   00:00    
5933660946488900043010001319fa6d                                                        100% 1038     1.3MB/s   00:00    
0ff88ea1d3029770832841dc65b995a8                                                        100% 1249     1.7MB/s   00:00    
fafafa2f3233d7352594a94c438cb5c4                                                        100% 1611     2.1MB/s   00:00    
a345a621db82d2be90dd1f9214a64119                                                        100%  536   950.4KB/s   00:00    
479486fab926eca5569777289e46f5d8                                                        100%  636   546.1KB/s   00:00    
adcffc60aa284ab300431765a7d0b2bd                                                        100% 1153   350.4KB/s   00:00    
87bcd940e05514802e8fe41150682ff0                                                        100%  871   930.7KB/s   00:00    
676403b75b8d08e100c473696e6540c3                                                        100%  843     1.1MB/s   00:00    
9e3592efeea035176e6c27b91f7acd4f                                                        100%  987     1.6MB/s   00:00    
4aaf5895ed430f3e5f1ee0a1bf692283                                                        100%  838     1.2MB/s   00:00    
7e39391da245d0659ab51e9ccf3fa767                                                        100%  631   826.9KB/s   00:00    
f3769aebc450e8ff25482ee0fdb8afde                                                        100%  641   946.4KB/s   00:00    
4aa1c59d82b2536dd0f69e3dd4dce0c9                                                        100%  637   915.5KB/s   00:00    
2d0a5180bafe99b457acd5680b55372d                                                        100% 1079     1.6MB/s   00:00    
4a19e97af0f5967927bbf130a510267c                                                        100%  789     1.1MB/s   00:00    
3c53b0f4682ef619eaeb7f9ee1b3396b                                                        100% 6535     7.7MB/s   00:00    
d14624dc4e6329a5a78d4ecb6203c4b8                                                        100%  984     1.7MB/s   00:00    
790bca15e979cc785e59f5a808d9c53a                                                        100%  642   971.0KB/s   00:00    
10e839af326ca6d661db1ec8359d6f05                                                        100%  838     1.1MB/s   00:00    
638673934fe86293f2499d98b5b65837                                                        100% 2887KB  86.5MB/s   00:00    
servergroups.json                                                                       100% 4015   976.3KB/s   00:00    
serverresources.json                                                                    100%  819     1.0MB/s   00:00    
serverresources.json                                                                    100%  819     1.2MB/s   00:00    
serverresources.json                                                                    100%  990     1.5MB/s   00:00    
serverresources.json                                                                    100% 2196     2.0MB/s   00:00    
serverresources.json                                                                    100%  591   580.2KB/s   00:00    
serverresources.json                                                                    100%  325   157.4KB/s   00:00    
serverresources.json                                                                    100%  316   330.3KB/s   00:00    
serverresources.json                                                                    100% 1276     1.9MB/s   00:00    
serverresources.json                                                                    100%  302   264.3KB/s   00:00    
serverresources.json                                                                    100%  307   492.5KB/s   00:00    
serverresources.json                                                                    100%  289   224.2KB/s   00:00    
serverresources.json                                                                    100%  462   577.7KB/s   00:00    
serverresources.json                                                                    100%  704   766.7KB/s   00:00    
serverresources.json                                                                    100%  438   537.4KB/s   00:00    
serverresources.json                                                                    100%  745   846.1KB/s   00:00    
serverresources.json                                                                    100%  509   459.1KB/s   00:00    
serverresources.json                                                                    100%  509   646.6KB/s   00:00    
serverresources.json                                                                    100%  504   558.4KB/s   00:00    
serverresources.json                                                                    100%  504   798.4KB/s   00:00    
serverresources.json                                                                    100%  915     1.2MB/s   00:00    
serverresources.json                                                                    100%  202   203.1KB/s   00:00    
serverresources.json                                                                    100%  302   454.4KB/s   00:00    
serverresources.json                                                                    100%  297   418.5KB/s   00:00    
serverresources.json                                                                    100%  537   591.1KB/s   00:00    
serverresources.json                                                                    100%  653   710.6KB/s   00:00    
serverresources.json                                                                    100%  303   357.6KB/s   00:00    
serverresources.json                                                                    100%  308   393.6KB/s   00:00    
serverresources.json                                                                    100%  455   148.3KB/s   00:00    
serverresources.json                                                                    100% 6221     8.3MB/s   00:00    
serverresources.json                                                                    100%  650   908.9KB/s   00:00    
[root@master01 ~]# 
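
Note that this copies the whole directory, including kubectl's local discovery cache, which each node would rebuild anyway; copying only the config file is enough. A sketch, relying on the hostnames and SSH keys already set up for the nodes:

[root@master01 ~]# for n in node01 node02 node03; do ssh root@$n 'mkdir -p /root/.kube'; scp /root/.kube/config root@$n:/root/.kube/config; done

Also bear in mind that admin.conf carries cluster-admin rights, so handing it to worker nodes is fine for a lab like this but not something to do in production.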


Run kubectl on node01:

[root@node01 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   3h36m   v1.24.2
master02   Ready    control-plane   113m    v1.24.2
master03   Ready    control-plane   18m     v1.24.2
node01     Ready    <none>          9m9s    v1.24.2
node02     Ready    <none>          8m26s   v1.24.2
node03     Ready    <none>          8m7s    v1.24.2
[root@node01 ~]# 


At this point a test-grade highly available k8s environment is fully deployed. As for the k8s web UI, I will not deploy it here; I personally prefer the command line.

5.10、Running a pod to test the cluster

Run an nginx pod as a test:

(1) Create a namespace named testpod:

[root@master01 ~]# kubectl create namespace testpod
namespace/testpod created
[root@master01 ~]# 

  • 1
  • 2
  • 3
  • 4

(2) Create an nginx controller and let it create the pod:

Create a yaml file under /root/ and use it to create the controller and pod:

[root@master01 ~]# vim /root/nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          hostPort: 20080


[root@master01 ~]# kubectl apply -f ./nginx.yaml                      
deployment.apps/nginx-deployment created
[root@master01 ~]# 
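
One detail worth noticing before reading the output below: the manifest sets no namespace, so the Deployment lands in default rather than the testpod namespace created in step (1) — the describe output below indeed shows Namespace: default. To actually place it in testpod, add the namespace to the metadata (or pass -n testpod to kubectl apply):

metadata:
  name: nginx-deployment
  namespace: testpod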


[root@master01 ~]# kubectl describe deployment                       //inspect the controller we just created
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Wed, 31 Aug 2022 18:52:19 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    20080/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-66455f9788 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  45s   deployment-controller  Scaled up replica set nginx-deployment-66455f9788 to 1
[root@master01 ~]# 



[root@master01 ~]# kubectl get pods                                           //list pods in the default namespace
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66455f9788-kj494   1/1     Running   0          82s
[root@master01 ~]# 


[root@master01 ~]# kubectl describe pod nginx-deployment-66455f9788-kj494       //show the details of a specific pod
Name:         nginx-deployment-66455f9788-kj494
Namespace:    default
Priority:     0
Node:         node02/192.168.31.117
Start Time:   Wed, 31 Aug 2022 18:52:19 +0800
Labels:       app=nginx
              pod-template-hash=66455f9788
Annotations:  cni.projectcalico.org/containerID: d2c93f5951740348b849f10c4f04ea0a3323b19e42f8650e8722e915434f8ad9
              cni.projectcalico.org/podIP: 172.16.140.65/32
              cni.projectcalico.org/podIPs: 172.16.140.65/32
Status:       Running
IP:           172.16.140.65
IPs:
  IP:           172.16.140.65
Controlled By:  ReplicaSet/nginx-deployment-66455f9788
Containers:
  nginx:
    Container ID:   containerd://9ec095b2b47fcd39b5131b23cd874a6c12f8bda17c731c291fc0a00fad4d68c1
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f
    Port:           80/TCP
    Host Port:      20080/TCP
    State:          Running
      Started:      Wed, 31 Aug 2022 18:52:25 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9n87 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-c9n87:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  107s  default-scheduler  Successfully assigned default/nginx-deployment-66455f9788-kj494 to node02
  Normal  Pulling    107s  kubelet            Pulling image "nginx:latest"
  Normal  Pulled     101s  kubelet            Successfully pulled image "nginx:latest" in 5.418997516s
  Normal  Created    101s  kubelet            Created container nginx
  Normal  Started    101s  kubelet            Started container nginx
[root@master01 ~]# 


From the output above, this pod runs on node02, with node02's port 20080 mapped to the pod's port 80. Access the pod in a browser:
(screenshot: the default nginx welcome page, served via node02:20080)
The nginx pod was accessed successfully.
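
The same check works from the command line — a sketch, where 192.168.31.117 is node02's IP taken from the describe output above:

[root@master01 ~]# curl -I http://192.168.31.117:20080        //expect HTTP/1.1 200 OK from nginx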

This confirms the HA k8s cluster works; from here you can go on to explore k8s's more advanced features.
