
(High Availability) VIII. k8s (Part 1)

I. k8s Basics

(1) k8s Core Components

  • etcd: stores the state of the entire cluster
  • apiserver: the single entry point for all resource operations
  • controller manager: maintains the cluster state (self-healing)
  • scheduler: responsible for resource scheduling
  • kubelet: manages the container lifecycle on each node, as well as volumes (CVI) and networking (CNI)
  • Container runtime: manages images and actually runs Pods and containers (CRI)
  • kube-proxy: provides in-cluster service discovery and load balancing for Services
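
On a running kubeadm cluster the components above can be checked quickly; a minimal sketch (kubectl access is configured later in this article):

kubectl get pod -n kube-system -o wide   # apiserver, controller-manager, scheduler, etcd and kube-proxy run as pods
systemctl status kubelet                 # the kubelet runs as a host service on every node
systemctl status docker                  # so does the container runtime (Docker in this setup)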

(2) Architecture

  • A k8s cluster consists of the kubelet node agent and the Master components, all built on distributed storage
  • Core layer: exposes APIs for building higher-level applications, and provides a pluggable application execution environment internally
  • Application layer: deployment and routing
  • Management layer: system metrics, automation, and policy management
  • Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
  • Ecosystem

(3) Deploying a k8s Cluster

  • Docs: https://v1-23.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  • 192.168.147.100 hosts the Harbor registry
  • 101, 102, and 103 form the cluster
  • swap must be disabled
[root@k8s4 ~]# swapoff -a

[root@k8s4 ~]# vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Jan 13 03:19:40 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=59fa3a88-0fb4-438a-91a4-12ff299b54ef /                       xfs     defaults        0 0
UUID=0843c96e-506a-4f44-ba5a-1988a432c511 /boot                   xfs     defaults        0 0
UUID=689004f1-5602-484c-97f1-3bd6252b6676 swap                    swap    defaults        0 0
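
The swap entry in /etc/fstab must also be commented out, otherwise swap comes back after a reboot (exactly the problem hit later during kubeadm init). A one-liner that does it on every node, as a sketch:

swapoff -a                                    # disable swap for the running system
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab  # comment out the swap entry so it stays off after reboot
free -m                                       # the Swap line should now show 0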

  • Synchronize the clocks
[root@k8s1 harbor]# vim /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp1.aliyun.com iburst

[root@k8s2 ~]# systemctl restart chronyd.service
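
After restarting chronyd it is worth confirming that every node is actually syncing against ntp1.aliyun.com:

chronyc sources -v   # the aliyun server should be marked '^*' (currently selected) once synced
timedatectl          # should report the system clock as synchronized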

  • Allow iptables to see bridged traffic (on all nodes)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
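
A quick check that the module is loaded and both sysctls took effect:

lsmod | grep br_netfilter                                                        # the module must be loaded
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1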
  • Set up passwordless SSH
[root@k8s2 ~]# ssh-keygen
[root@k8s2 ~]# ssh-copy-id k8s3
[root@k8s2 ~]# ssh-copy-id k8s4
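
A quick check that key-based login works:

ssh k8s3 hostname   # should print the remote hostname without asking for a password
ssh k8s4 hostname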

  • Deploy Docker on all nodes (a daemon configuration sketch follows after the repo config below)

  • Configure the Aliyun k8s yum repository

[root@k8s2 alexw.com]# cat /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
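
The Docker installation itself is not shown above; as a sketch, the daemon configuration matching this setup would look roughly like the following. kubeadm 1.23 expects the systemd cgroup driver by default, and if the local Harbor at alexw.com uses a self-signed certificate its CA also has to be trusted on every node (or the registry added to insecure-registries) — adjust to your environment:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i 'cgroup driver'   # should report: Cgroup Driver: systemd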
  • Install version 1.23 (run on all nodes)
[root@k8s2 alexw.com]# yum install kubeadm-1.23.15-0 kubelet-1.23.15-0 kubectl-1.23.15-0

[root@k8s2 alexw.com]# systemctl enable --now kubelet
  • View logs
journalctl -l -u kube-apiserver
journalctl -l -u kube-controller-manager
journalctl -l -u kube-scheduler
journalctl -l -u kubelet
journalctl -l -u kube-proxy
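
Note that on a kubeadm cluster only the kubelet runs as a systemd unit; the apiserver, controller-manager, scheduler and kube-proxy run as pods, so journalctl will usually show nothing for them and their logs are read through the container layer instead, for example:

kubectl -n kube-system logs kube-apiserver-k8s2   # once kubectl access is configured
docker ps | grep kube-apiserver                   # or go through the runtime directly
docker logs <container-id>                        # <container-id> is the ID printed by the previous command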
  • Create the cluster
# View the default configuration
[root@k8s2 alexw.com]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}



# List the required images
[root@k8s2 alexw.com]# kubeadm config images list
I0120 21:30:09.770147   43521 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.23
registry.k8s.io/kube-apiserver:v1.23.16
registry.k8s.io/kube-controller-manager:v1.23.16
registry.k8s.io/kube-scheduler:v1.23.16
registry.k8s.io/kube-proxy:v1.23.16
registry.k8s.io/pause:3.6
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6

## Pull from the Aliyun mirror
[root@k8s2 alexw.com]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
I0120 21:34:05.877928   43657 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.23

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6


## Localize the images (retag and push them to the local Harbor)
[root@k8s2 alexw.com]# docker images | grep google | awk '{print $1":"$2}' | awk -F/ '{system("docker tag "$0" alexw.com/k8s/"$3"")}'

[root@k8s2 alexw.com]# docker images | grep k8s

[root@k8s2 alexw.com]# docker images | grep k8s | awk '{system("docker push "$1":"$2"")}'


## Use the local Harbor registry
[root@k8s2 alexw.com]# kubeadm config images list --image-repository alexw.com/k8s --kubernetes-version 1.23.16
alexw.com/k8s/kube-apiserver:v1.23.16
alexw.com/k8s/kube-controller-manager:v1.23.16
alexw.com/k8s/kube-scheduler:v1.23.16
alexw.com/k8s/kube-proxy:v1.23.16
alexw.com/k8s/pause:3.6
alexw.com/k8s/etcd:3.5.6-0
alexw.com/k8s/coredns:v1.8.6

# Initialization fails
[root@k8s2 alexw.com]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository alexw.com/k8s --kubernetes-version 1.23.16
[init] Using Kubernetes version: v1.23.16
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
        [ERROR Mem]: the system RAM (972 MB) is less than the minimum 1700 MB
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
## Solution
Shut the VM down, add CPU and memory
Reboot

# Initialization fails again
[root@k8s3 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository alexw.com/k8s --kubernetes-version 1.23.16
[init] Using Kubernetes version: v1.23.16
[preflight] Running pre-flight checks
        [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

## Solution
echo "1" > /proc/sys/net/ipv4/ip_forward
service network restart
reboot


# Fails again
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248:                                connect: connection refused.
## Solution
kubelet did not start properly
Because of the reboot, the swap partition was mounted again
[root@k8s2 ~]# swapoff -a
[root@k8s2 ~]# vi /etc/fstab

# Comment out this line
#UUID=689004f1-5602-484c-97f1-3bd6252b6676 swap                    swap    defaults        0 0

[root@k8s2 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1819         220        1011           9         587        1390
Swap:             0           0           0

[root@k8s2 ~]# vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0

[root@k8s2 ~]# sysctl -p /etc/sysctl.d/k8s.conf

# Start kubelet
[root@k8s2 ~]# systemctl start kubelet
[root@k8s2 ~]# systemctl stop kubelet

# Initialization succeeds this time
[root@k8s3 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository alexw.com/k8s --kubernetes-version 1.23.16
  • Configure kubectl for the root user (admin kubeconfig)
[root@k8s2 ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
[root@k8s2 ~]# cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
export KUBECONFIG=/etc/kubernetes/admin.conf

# Check pod status
[root@k8s2 ~]# kubectl get pod -A
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-d5d849484-drqzn        0/1     Pending   0          3m41s
kube-system   coredns-d5d849484-hg8qp        0/1     Pending   0          3m41s
kube-system   etcd-k8s2                      1/1     Running   0          3m59s
kube-system   kube-apiserver-k8s2            1/1     Running   0          3m57s
kube-system   kube-controller-manager-k8s2   1/1     Running   0          3m57s
kube-system   kube-proxy-9g656               1/1     Running   0          3m41s
kube-system   kube-scheduler-k8s2            1/1     Running   0          3m57s
  • Install the network add-on
## https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Install the flannel add-on
[root@k8s2 ~]# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
[root@k8s2 ~]# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2

[root@k8s2 ~]# docker tag rancher/mirrored-flannelcni-flannel:v0.20.2 alexw.com/rancher/mirrored-flannelcni-flannel:v0.20.2
[root@k8s2 ~]# docker tag rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0  alexw.com/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0

[root@k8s2 ~]# docker push alexw.com/rancher/mirrored-flannelcni-flannel:v0.20.2
[root@k8s2 ~]# docker push alexw.com/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0

# Edit the manifest
[root@k8s2 ~]# vim kube-flannel.yml

        #image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2

        #image: docker.io/flannel/flannel:v0.20.2
        image: rancher/mirrored-flannelcni-flannel:v0.20.2

        #image: docker.io/flannel/flannel:v0.20.2
        image: rancher/mirrored-flannelcni-flannel:v0.20.2


[root@k8s2 ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

# Check pod status; flannel did not start properly
[root@k8s2 ~]# kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS                  RESTARTS   AGE
kube-flannel   kube-flannel-ds-d9q6z          0/1     Init:ImagePullBackOff   0          8m3s
kube-system    coredns-d5d849484-drqzn        0/1     Pending                 0          42m
kube-system    coredns-d5d849484-hg8qp        0/1     Pending                 0          42m
kube-system    etcd-k8s2                      1/1     Running                 0          42m
kube-system    kube-apiserver-k8s2            1/1     Running                 0          42m
kube-system    kube-controller-manager-k8s2   1/1     Running                 0          42m
kube-system    kube-proxy-9g656               1/1     Running                 0          42m
kube-system    kube-scheduler-k8s2            1/1     Running                 0          42m


## Solution
The image tag in the manifest does not match what is in the Harbor registry; fix it and re-apply
        #image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0


[root@k8s2 ~]# kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-84fcz          1/1     Running   0          10s
kube-system    coredns-d5d849484-drqzn        1/1     Running   0          47m
kube-system    coredns-d5d849484-hg8qp        1/1     Running   0          47m
kube-system    etcd-k8s2                      1/1     Running   0          48m
kube-system    kube-apiserver-k8s2            1/1     Running   0          48m
kube-system    kube-controller-manager-k8s2   1/1     Running   0          48m
kube-system    kube-proxy-9g656               1/1     Running   0          47m
kube-system    kube-scheduler-k8s2            1/1     Running   0          48m
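
Once the flannel DaemonSet is Running, the pod network can be sanity-checked on the master (assuming the default VXLAN backend):

kubectl -n kube-flannel get ds kube-flannel-ds   # DESIRED/READY should match the number of nodes
ip -d link show flannel.1                        # the VXLAN device created by flannel
cat /run/flannel/subnet.env                      # the pod subnet assigned to this node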

  • Join the slave nodes
# Run the join command on each slave node to join the cluster
kubeadm join 192.168.147.101:6443 --token 39iru2.6d29d60b2lgf4ugl \
        --discovery-token-ca-cert-hash sha256:026aa67f08d60b3c7853562ba931b8f9461a465f2c6a537d6ddfbe41ec7b9772


# Node status on the master is wrong
[root@k8s2 ~]# kubectl get node -A
NAME   STATUS     ROLES                  AGE     VERSION
k8s2   Ready      control-plane,master   60m     v1.23.15
k8s3   NotReady   <none>                 7m26s   v1.23.15
k8s4   NotReady   <none>                 3m59s   v1.23.15
## Solution
### The required images were not pulled successfully
[root@k8s3 ~]# docker images
REPOSITORY                 TAG        IMAGE ID       CREATED         SIZE
alexw.com/k8s/kube-proxy   v1.23.16   28204678d22a   2 days ago      111MB
alexw.com/k8s/pause        3.6        6270bb605e12   17 months ago   683kB
[root@k8s3 ~]# docker pull rancher/mirrored-flannelcni-flannel:v0.20.2    
[root@k8s3 ~]# docker pull alexw.com/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0   

# Check the status on the master again
[root@k8s2 ~]# kubectl get node -A
NAME   STATUS   ROLES                  AGE     VERSION
k8s2   Ready    control-plane,master   65m     v1.23.15
k8s3   Ready    <none>                 12m     v1.23.15
k8s4   Ready    <none>                 8m50s   v1.23.15

  • Configure kubectl tab completion
[root@k8s2 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s2 ~]# source ~/.bashrc
  • Recreate the join token after it expires
# Set the TTL to 72 hours
[root@k8s2 ~]# kubeadm token create --print-join-command --ttl 72h
kubeadm join 192.168.147.101:6443 --token eysozn.4eydjpdtxkoxkdgg --discovery-token-ca-cert-hash sha256:026aa67f08d60b3c7853562ba931b8f9461a465f2c6a537d6ddfbe41ec7b9772


II. Configuring Pods and Deployments

  • Pods are created in the default namespace unless one is specified (a quick illustration follows)
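
A quick illustration of this default-namespace behaviour:

kubectl get ns                   # list all namespaces
kubectl get pod                  # equivalent to: kubectl get pod -n default
kubectl get pod -n kube-system   # system pods live in their own namespace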

## Error
The connection to the server localhost:8080 was refused
### Solution
[root@k8s2 ~]# scp /etc/kubernetes/admin.conf k8s4:/etc/kubernetes/

[root@k8s4 ~]# mkdir -p $HOME/.kube
[root@k8s4 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s4 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

# The full image path is required
[root@k8s3 ~]# kubectl run demo1 --image=alexw.com/library/myapp:v1
[root@k8s3 ~]# kubectl describe pod demo1

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/demo1 to k8s3
  Normal  Pulling    5s    kubelet            Pulling image "alexw.com/library/myapp:v1"
  Normal  Pulled     5s    kubelet            Successfully pulled image "alexw.com/library/myapp:v1" in 98.393171ms
  Normal  Created    5s    kubelet            Created container demo1
  Normal  Started    4s    kubelet            Started container demo1

[root@k8s3 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
demo1   1/1     Running   0          11s

# Create three replicas
[root@k8s3 ~]# kubectl create deployment myapp --image=alexw.com/library/myapp:v1 --replicas=3
deployment.apps/myapp created
[root@k8s3 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
demo1                   1/1     Running   0          5m51s
myapp-79fdcd5ff-f4bvm   1/1     Running   0          7s
myapp-79fdcd5ff-gb9ng   1/1     Running   0          7s
myapp-79fdcd5ff-k526l   1/1     Running   0          7s


# Deleted replicas are automatically recreated
[root@k8s3 ~]# kubectl get deployments.apps
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
myapp   3/3     3            3           68s
[root@k8s3 ~]# kubectl delete pod myapp-79fdcd5ff-f4bvm
pod "myapp-79fdcd5ff-f4bvm" deleted
[root@k8s3 ~]# kubectl get deployments.apps
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
myapp   3/3     3            3           100s
[root@k8s3 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
demo1                   1/1     Running   0          7m29s
myapp-79fdcd5ff-gb9ng   1/1     Running   0          105s
myapp-79fdcd5ff-k526l   1/1     Running   0          105s
myapp-79fdcd5ff-rwqdc   1/1     Running   0          7s


# Pods are scheduled across different nodes
[root@k8s3 ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
demo1                   1/1     Running   0          8m21s   10.244.1.4   k8s3   <none>           <none>
myapp-79fdcd5ff-gb9ng   1/1     Running   0          2m37s   10.244.2.4   k8s4   <none>           <none>
myapp-79fdcd5ff-k526l   1/1     Running   0          2m37s   10.244.1.5   k8s3   <none>           <none>
myapp-79fdcd5ff-rwqdc   1/1     Running   0          59s     10.244.2.6   k8s4   <none>           <none>


# Pods on other nodes can be operated on directly
[root@k8s3 ~]# kubectl exec myapp-79fdcd5ff-rwqdc -- ls /usr/share/nginx/html
50x.html
index.html

# Scale the deployment
[root@k8s3 ~]# kubectl scale deployment myapp --replicas=6
deployment.apps/myapp scaled
[root@k8s3 ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
demo1                   1/1     Running   0          19m   10.244.1.4   k8s3   <none>           <none>
myapp-79fdcd5ff-gb9ng   1/1     Running   0          14m   10.244.2.4   k8s4   <none>           <none>
myapp-79fdcd5ff-k526l   1/1     Running   0          14m   10.244.1.5   k8s3   <none>           <none>
myapp-79fdcd5ff-m68qn   1/1     Running   0          6s    10.244.1.7   k8s3   <none>           <none>
myapp-79fdcd5ff-ncxv5   1/1     Running   0          6s    10.244.1.6   k8s3   <none>           <none>
myapp-79fdcd5ff-qzqkg   1/1     Running   0          6s    10.244.2.7   k8s4   <none>           <none>
myapp-79fdcd5ff-rwqdc   1/1     Running   0          12m   10.244.2.6   k8s4   <none>           <none>


# Expose the deployment: --port=80 is the Service (ClusterIP) port, --target-port=80 is the container port
[root@k8s3 ~]# kubectl expose deployment myapp --port=80 --target-port=80
service/myapp exposed
[root@k8s3 ~]# kubectl describe svc myapp
Name:              myapp
Namespace:         default
Labels:            app=myapp
Annotations:       <none>
Selector:          app=myapp
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.107.222.140
IPs:               10.107.222.140
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.5:80,10.244.1.6:80,10.244.1.7:80 + 3 more...
Session Affinity:  None
Events:            <none>

# The service load-balances across the pods
[root@k8s3 ~]# curl 10.107.222.140/hostname.html
myapp-79fdcd5ff-gb9ng
[root@k8s3 ~]# curl 10.107.222.140/hostname.html
myapp-79fdcd5ff-ncxv5
[root@k8s3 ~]# curl 10.107.222.140/hostname.html
myapp-79fdcd5ff-gb9ng
[root@k8s3 ~]# curl 10.107.222.140/hostname.html

# Edit the svc
[root@k8s3 ~]# kubectl edit svc myapp

  sessionAffinity: None
  type: NodePort  # change this to NodePort

[root@k8s3 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        106m
myapp        NodePort    10.107.222.140   <none>        80:31532/TCP   9m25s

[root@k8s3 ~]# curl -I 192.168.147.103:31532
HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Sat, 21 Jan 2023 10:46:20 GMT
Content-Type: text/html
Content-Length: 65
Last-Modified: Fri, 02 Mar 2018 03:39:12 GMT
Connection: keep-alive
ETag: "5a98c760-41"
Accept-Ranges: bytes


# Update the application; the full image path is required here as well
[root@k8s3 ~]# kubectl set image deployment/myapp myapp=alexw.com/library/myapp:v2
deployment.apps/myapp image updated

# Roll back to a previous revision
[root@k8s3 ~]# kubectl rollout undo deployment myapp --to-revision=1
deployment.apps/myapp rolled back
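
Everything above was done imperatively; the same Deployment and Service can also be written declaratively, which is easier to keep in version control. A minimal sketch using the same image and the NodePort type chosen above (the file name myapp.yaml is arbitrary):

cat > myapp.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: alexw.com/library/myapp:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80          # Service port
    targetPort: 80    # container port
EOF
kubectl apply -f myapp.yaml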



III. Upgrading k8s

  • Current version v1.23.16, upgrading to 1.24
  • 1.23: kubelet → dockershim → docker daemon → containerd → pod
  • 1.24: kubelet → containerd → pod
  • From 1.24 on, k8s no longer supports Docker directly (dockershim is removed), hence the cri-dockerd shim installed below
# Install cri-dockerd
[root@k8s2 ~]# yum install -y cri-dockerd-0.2.5-3.el7.x86_64.rpm

# Edit cri-docker.service
[root@k8s2 ~]# vim /usr/lib/systemd/system/cri-docker.service

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=alexw.com/k8s/pause:3.6  # modify this line

# Start cri-docker
[root@k8s2 ~]# systemctl enable --now cri-docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/cri-docker.service to /usr/lib/systemd/system/cri-docker.service

# Update the k8s packages
[root@k8s2 ~]# yum install -y kubeadm-1.24.0-0
[root@k8s2 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s2 ~]# kubeadm upgrade plan

Upgrade to the latest version in the v1.23 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.23.16   v1.24.0
kube-controller-manager   v1.23.16   v1.24.0
kube-scheduler            v1.23.16   v1.24.0
kube-proxy                v1.23.16   v1.24.0
CoreDNS                   v1.8.6     v1.8.6
etcd                      3.5.6-0    3.5.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.24.0

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

# The upgrade fails: pause:3.7, etcd:3.5.3-0 and coredns:v1.8.6 are required
[root@k8s2 ~]# kubeadm upgrade apply v1.24.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0121 04:50:07.595167   92768 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.0"
[upgrade/versions] Cluster version: v1.23.16
[upgrade/versions] kubeadm version: v1.24.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/kube-apiserver:v1.24.0: output: time="2023-01-21T04:50:30-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/kube-controller-manager:v1.24.0: output: time="2023-01-21T04:50:31-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/kube-scheduler:v1.24.0: output: time="2023-01-21T04:50:31-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/kube-proxy:v1.24.0: output: time="2023-01-21T04:50:31-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/pause:3.7: output: time="2023-01-21T04:50:31-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/etcd:3.5.3-0: output: time="2023-01-21T04:50:31-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image alexw.com/k8s/coredns:v1.8.6: output: time="2023-01-21T04:50:31-08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

[root@k8s2 ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.7
[root@k8s2 ~]# docker pull registry.aliyuncs.com/google_containers/etcd:3.5.3-0
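
The article stops here; as a sketch (not output from the original environment), the likely continuation follows the same tag-and-push pattern used earlier plus the official dockershim-to-cri-dockerd migration steps:

# retag the freshly pulled images into the local Harbor project and push them
docker tag registry.aliyuncs.com/google_containers/pause:3.7    alexw.com/k8s/pause:3.7
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.3-0 alexw.com/k8s/etcd:3.5.3-0
docker push alexw.com/k8s/pause:3.7
docker push alexw.com/k8s/etcd:3.5.3-0

# the ImagePull errors above come from the old dockershim socket; point the node's CRI socket
# at cri-dockerd (installed above) before retrying
kubectl annotate node k8s2 --overwrite kubeadm.alpha.kubernetes.io/cri-socket=unix:///var/run/cri-dockerd.sock
kubeadm upgrade apply v1.24.0

# finally upgrade kubelet/kubectl on each node and restart the kubelet
yum install -y kubelet-1.24.0-0 kubectl-1.24.0-0
systemctl daemon-reload
systemctl restart kubelet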

