Offline installation of a Kubernetes 1.19.4 cluster on Linux, with NFS storage
1. Environment overview
kubernetes-1.19.4 cluster deployment plan:

| No. | Server spec | IP address | OS | Role |
| --- | --- | --- | --- | --- |
| 1 | CPU: 2c, RAM: 4G | 192.168.217.16 | CentOS 7.6 | k8s master node |
| 2 | CPU: 2c | 192.168.217.17 | CentOS 7.6 | k8s worker node |
| 3 | CPU: 2c | 192.168.217.18 | CentOS 7.6 | k8s worker node |
All three servers are virtual machines, networked in NAT mode.
Offline installation package (includes the Docker environment):
Link: https://pan.baidu.com/s/19PTj1VwpvaSxYlhbFuqP6w?pwd=k8ss
Extraction code: k8ss
2. Hostnames and the hosts file
For name resolution and networking, the hostnames are changed as follows (how to change a hostname is not covered here):
```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.217.16 master
192.168.217.17 slave1
192.168.217.18 slave2
```
Edit the hosts file with the mappings above. Because the network is in NAT mode, the three servers form their own subnet; the three NIC configuration files look roughly like this:
```
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="ens33"
UUID="d4876b9f-42d8-446c-b0ae-546e812bc954"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.217.16
NETMASK=255.255.255.0
GATEWAY=192.168.217.2
DNS1=192.168.217.16
```
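The three name mappings from the hosts file can be staged in a file first and then appended on every node; a small sketch (the file name hosts.append is arbitrary, and the actual append is commented out because it needs root):

```shell
# Stage the three cluster name mappings.
cat > hosts.append <<'EOF'
192.168.217.16 master
192.168.217.17 slave1
192.168.217.18 slave2
EOF
# cat hosts.append >> /etc/hosts    # run as root on all three nodes
```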
3. Disable NetworkManager
Since the network service is in use, stop and disable NetworkManager to prevent conflicts:
systemctl stop NetworkManager && systemctl disable NetworkManager
4. Time server
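The original leaves this section without commands. A minimal sketch, assuming chrony (the CentOS 7 default) with the master as the internal time source; the package would have to come from the offline repository, since the nodes have no internet access:

```shell
# All commands are illustrative and need root; run on each node.
#   yum install -y chrony                                      # all three nodes
#   echo 'server 192.168.217.16 iburst' >> /etc/chrony.conf    # slave1/slave2 only
#   systemctl enable chronyd && systemctl restart chronyd
#   chronyc sources -v                                         # verify the source
```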
5. Kernel parameters
This step is mandatory: k8s checks these three parameters during installation and at runtime. Do this on all three servers:
vim /etc/sysctl.conf
```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```
Write these parameters into the sysctl.conf file, then run `sysctl -p` to apply them. (Note: before that command, load the br_netfilter kernel module first with `modprobe br_netfilter`.)
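The whole kernel-parameter step can be scripted; below is a sketch that writes the settings to a local file so it can be dry-run (on a real node, write to /etc/sysctl.d/k8s.conf instead and run the commented root commands):

```shell
# Persist the three parameters (local file here for a dry run).
OUT=./k8s-sysctl.conf
cat > "$OUT" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# modprobe br_netfilter              # must run first, or sysctl will error
# sysctl -p /etc/sysctl.d/k8s.conf   # apply
```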
6. Firewall, SELinux, swap, and kernel
On all three servers: disable the firewall, SELinux, and the swap mount, and upgrade the kernel (to the 5.16 series).
Disable the firewall:
systemctl disable firewalld && systemctl stop firewalld
Disable SELinux temporarily: setenforce 0
Disable SELinux permanently: edit /etc/selinux/config and set SELINUX=disabled
For unmounting swap, see my post "KVM virtual machine management, part 2 (VM disk optimization; CentOS dropping into dracut mode with '/dev/centos/swap does not exist', and how to recover)": https://blog.csdn.net/alwaysbefine/article/details/124831650. This step is easy to overlook — remove swap the way that post describes, or the server may fail to boot afterwards.
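For illustration, the fstab half of the swap removal can be dry-run on a sample file; on a real node you would run `swapoff -a` and apply the same sed to /etc/fstab (and still follow the linked post for the boot-related caveats):

```shell
# Sample fstab standing in for /etc/fstab so the edit can be tested safely.
cat > fstab.sample <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
# Comment out any uncommented swap entry.
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]])@#\1@' fstab.sample
grep swap fstab.sample    # the swap line should now begin with '#'
```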
7. Kernel upgrade
k8s runs more stably on a recent kernel. Upgrade it as follows:
```
rpm -ivh kernel-ml-5.16.9-1.el7.elrepo.x86_64.rpm
grub2-set-default "CentOS Linux (5.16.9-1.el7.elrepo.x86_64) 7 (Core)"
grub2-editenv list    ## check the default boot entry
```
After finishing steps 6 and 7 on all three nodes, reboot the servers together.
8. Passwordless SSH trust between the servers
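The post names this step without showing commands; one common way, sketched under the assumption of root logins and the hostnames from the hosts file (the key path ./id_rsa_k8s is arbitrary):

```shell
# Generate a passphrase-less key pair; the distribution loop is commented
# because ssh-copy-id prompts for each node's root password.
ssh-keygen -t rsa -N '' -f ./id_rsa_k8s -q
# for h in master slave1 slave2; do
#   ssh-copy-id -i ./id_rsa_k8s.pub root@$h
# done
```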
9. Deploying the Docker environment
Note: the Docker version is docker-ce 19.03.9, which is matched to Kubernetes 1.19.4.
1. Cluster planning: the no_proxy variable
Because of the cluster network planning, a new environment variable must be defined in /etc/profile on all three servers:
export no_proxy=localhost,127.0.0.1,dev.cnn,192.168.217.16,default.svc.cluster.local,svc.cluster.local,cluster.local,10.96.0.1,10.96.0.0/12,10.244.0.0/16
2. Installing the base k8s components
Unpack k8s.tar.gz from the k8s-1.19.4-offline folder, then configure the unpacked directory as a local yum repository.
Unpack conntrack.tar.gz from the same folder and install everything in it with `rpm -ivh *` — this is a hard dependency of k8s.
After checking that the local repository works, install the components:
yum install -y kubeadm-1.19.4 kubelet-1.19.4 kubectl-1.19.4
Enable and start the service on all three nodes:
systemctl enable kubelet && systemctl start kubelet
A green (active, running) status means the service is healthy:
```
[root@master opt]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2022-07-01 18:52:58 CST; 5h 44min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1091 (kubelet)
   Memory: 152.8M
   CGroup: /system.slice/kubelet.service
           └─1091 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --h...

Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.844741 1091 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889281 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName:...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889367 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName:...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889414 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-tok...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889451 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-cer...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889488 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-tok...
Jul 01 18:59:46 master kubelet[1091]: W0701 18:59:46.669731 1091 pod_container_deletor.go:79] Container "3b5bb41530363d16e2478900afd45d91dbe5f9260cf8d0ac398a8d29da0a...s containers
Jul 01 18:59:46 master kubelet[1091]: W0701 18:59:46.674402 1091 pod_container_deletor.go:79] Container "3c46b2fa0a198e044fdd27507e17a14944dcee9f657be06d1e5812b16383...s containers
Jul 01 23:37:01 master kubelet[1091]: I0701 23:37:01.462295 1091 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 07ee94a447d5bed0408914de8...4794cb7ae2d9
Jul 01 23:37:01 master kubelet[1091]: I0701 23:37:01.462890 1091 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a5996702878a2fac2c793c22b...1b5fb16772e6
Hint: Some lines were ellipsized, use -l to show in full.
```

3. Importing the images
Unpack master-images.tar.gz from the k8s-1.19.4-offline folder on the .16 server, then batch-import: for i in master-images/*;do docker load < $i;done
Unpack slave1-images.tar.gz from the same folder on the .17 server, then batch-import: for i in slave1-images/*;do docker load < $i;done
Unpack slave2-images.tar.gz from the same folder on the .18 server, then batch-import: for i in slave2-images/*;do docker load < $i;done
4. Preparing kubeadm and its configuration
Unpack kubeadm.zip from the k8s-1.19.4-offline folder on all three servers, then move the executable kubeadm-1.19.3 into /usr/bin/ and rename it to kubeadm.
Edit the kubeadm.conf file; the key parts to change are:
```yaml
localAPIEndpoint:
  advertiseAddress: 192.168.217.16
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: zsk.cnn
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
```
5. Cluster initialization
Initialize the cluster with:
kubeadm init --config kubeadm.conf
If initialization fails, reset with `kubeadm reset`; don't try to delete the environment files by hand and redo the init. The node-join command is printed at the end of the init output; copy it and run it unchanged on the other nodes. If a join fails, likewise run `kubeadm reset` on that node to restore the environment, then join again.
Note: the init command runs on the master node only. When it completes, copy the join command from its output and run it on the other two nodes.
At this point the nodes are NotReady; run `kubectl apply -f kube-flannel.yml` on the master and the cluster becomes Ready.
6. Installing kubernetes-dashboard (this runs on the master node only; the other two nodes do nothing here)
The content of the dashboard.yml file:
```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```

```
mkdir /etc/kubernetes/pki/dashboard/
cd /etc/kubernetes/pki/dashboard/
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -subj "/CN=zsk.cnn" -out tls.csr
openssl x509 -req -days 3650 -in tls.csr -CA ../ca.crt -CAkey ../ca.key -CAcreateserial -out tls.crt
kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/pki/dashboard/ -n kube-system

# install the dashboard with kubectl
kubectl apply -f dashboard.yml
```

The output:
secret/kubernetes-dashboard-certs created
Bind the cluster role:
kubectl create clusterrolebinding default --clusterrole=cluster-admin --serviceaccount=kube-system:default --namespace=kube-system
The following output indicates success:
clusterrolebinding.rbac.authorization.k8s.io/default created
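The certificate commands above can be exercised end-to-end with a throwaway CA standing in for the cluster's /etc/kubernetes/pki/ca.crt and ca.key — a dry run to confirm the openssl flags before touching the real PKI directory:

```shell
# Throwaway CA standing in for the cluster CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=throwaway-ca" -days 1 -out ca.crt
# Same three steps as in the section above.
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -subj "/CN=zsk.cnn" -out tls.csr
openssl x509 -req -days 3650 -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt
# The subject should report CN = zsk.cnn.
openssl x509 -in tls.crt -noout -subject
```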
7. Installing ingress
Create ingress-nginx-values.yaml:
```yaml
controller:
  name: controller
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller
    tag: "v0.50.0"
    pullPolicy: IfNotPresent
  config:
    map-hash-bucket-size: "1024"
    proxy-body-size: "100m"
    ssl-protocols: "TLSv1.2 TLSv1.3"
    enable-modsecurity: "true"
    enable-owasp-modsecurity-crs: "true"
    error-log-level: "warn"
  modsecurity:
    config:
      enabled: true
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  kind: DaemonSet
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 200Mi
defaultBackend:
  enabled: true
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend
    tag: "1.4"
    pullPolicy: IfNotPresent
```

Now unpack ingress-nginx-3.25.0.tgz from the k8s-1.19.4-offline\helms directory, keep the values file above in the same directory, and run the command below. Note that the helm binary must be on the PATH first — just move the helm file into /usr/local/bin/.
helm install ingress-nginx -f ingress-nginx-values.yaml ingress-nginx -n ingress-nginx --create-namespace
The image used by ingress:
```
[root@master YAML]# docker images --digests
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller <none> sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 435df390f367 16 months ago 279MB
```
Note: check that the digest starts with 3dd; if it does not, correct the digest. ingress depends on two images, both of which are placed in the ingress directory.
8. Deploying the dashboard ingress
vim dash-ingress.yaml
The hosts field of this file must be set; I use the domain dash.master.com here (it goes into the hosts file shortly):
```yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  tls:
  - hosts:
    - dash.master.com
    secretName: kubernetes-dashboard-certs
  rules:
  - host: dash.master.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
```
Run the install command:
kubectl apply -f dash-ingress.yaml
9. Getting the login token
Run `kubectl describe sa default -n kube-system`; the output:

```
[root@localhost software]# kubectl describe sa default -n kube-system
Name:                default
Namespace:           kube-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-srkj8
Tokens:              default-token-srkj8
Events:              <none>
```

Then describe that secret, using the token name from the output above — `kubectl describe secrets default-token-srkj8 -n kube-system`:

```
[root@localhost software]# kubectl describe secrets default-token-srkj8 -n kube-system
Name:         default-token-srkj8
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 34ed4707-ebe7-4699-85d1-09b20f3d0cae

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkdOMndKQ2FTUzd3c2ZhakVfSFFRekxFLXNQZGhUdUpVdGJyNFpsSTJmMkEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXNya2o4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNGVkNDcwNy1lYmU3LTQ2OTktODVkMS0wOWIyMGYzZDBjYWUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.dGeBBZg-DzZJ7aUf-5FsNcm5x3JaGBMKMMaAa92-98PV7U-5pZQTcCvvw0Bi6nEGTFi8g_a6NQ3Tw43quPJV5FLMFgH9mQMnJXRtjjKomLjd4_GwYpK7cPaFuzwJWLqAXiddnEZmnyLj6D3qy5wc3QR5rgiQQ3QgrXKCZzXoYrlPg9dNUz3XqEgtxDlBYMFe43Gn9e8Xw7NOgydqKv0Qhxqjltx_nGJFw2fXIdoVBQQM1uC1BU37XqJJrh0wficXw57aB338W9ena38454V8pxWs2gYAlsOcCPJDAQb_tZA1e9JoHFWIwZ5VP_YHZC3MGTiVdjws6i8EpcPRM3QFkQ
ca.crt:     1070 bytes
namespace:  11 bytes
```

Or fetch the secret in one step:

```
kubectl describe secrets $(kubectl describe sa default -n kube-system | grep Mountable | awk 'NR == 2 {next} {print $3}') -n kube-system
```
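The grep/awk pipeline is fragile; a jsonpath-based alternative is sketched below (hedged: the kubectl lines are comments since they need a live cluster — the executable line only demonstrates the base64 round-trip that makes the final decode step necessary, because Secret data is base64-encoded):

```shell
# kubectl -n kube-system get secret \
#   $(kubectl -n kube-system get sa default -o jsonpath='{.secrets[0].name}') \
#   -o jsonpath='{.data.token}' | base64 -d
printf 'demo-token' | base64 | base64 -d    # encode/decode round trip
```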

At this stage of the installation, the three nodes are using the following images.
The master node:
```
[root@master opt]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
bitnami/kubectl 1.17.13-debian-10-r21 7022735edf5f 19 months ago 129MB
kubernetesui/metrics-scraper v1.0.6 48d79e554db6 20 months ago 34.5MB
quay.io/coreos/flannel v0.13.0 e708f4bb69e3 20 months ago 57.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.19.3 cdef7632a242 20 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.19.3 9b60aca1d818 20 months ago 111MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.19.3 a301be0cd44b 20 months ago 119MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.19.3 aaefbfa906bd 20 months ago 45.7MB
kubernetesui/dashboard v2.0.4 46d0a29c3f61 22 months ago 225MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 22 months ago 253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 2 years ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 2 years ago 683kB
registry.c7n.gzinfo/choerodon-tools/kubectl v1.15.2 2fad3003d792 2 years ago 52.5MB
```
The slave1 node:
```
[root@slave1 ~]# docker images --digests
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller none <none> ae1739386d6a 7 months ago 285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller <none> sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 435df390f367 17 months ago 279MB
jettech/kube-webhook-certgen v1.5.1 sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 a013daf8730d 19 months ago 44.7MB
kubernetesui/metrics-scraper v1.0.6 <none> 48d79e554db6 20 months ago 34.5MB
quay.io/coreos/flannel v0.13.0 <none> e708f4bb69e3 20 months ago 57.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.19.3 <none> cdef7632a242 20 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.19.3 <none> aaefbfa906bd 20 months ago 45.7MB
kubernetesui/dashboard v2.0.4 <none> 46d0a29c3f61 22 months ago 225MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.13-0 <none> 0369cf4303ff 22 months ago 253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 <none> bfe3a36ebd25 2 years ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 <none> 80d28bedfe5d 2 years ago 683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend 1.4 <none> 846921f0fe
```
The slave2 node:
```
[root@slave2 ~]# docker images --digests
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller v0.50.0 <none> ae1739386d6a 7 months ago 285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller <none> sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 435df390f367 17 months ago 279MB
jettech/kube-webhook-certgen v1.5.1 <none> a013daf8730d 19 months ago 44.7MB
quay.io/coreos/flannel v0.13.0 <none> e708f4bb69e3 20 months ago 57.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.19.3 <none> cdef7632a242 20 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.19.3 <none> aaefbfa906bd 20 months ago 45.7MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 <none> bfe3a36ebd25 2 years ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 <none> 80d28bedfe5d 2 years ago 683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend 1.4 <none> 846921f0fe0e 4 years ago 4.84MB
```
Of these images, kubernetesui/metrics-scraper is the dashboard's metrics-collection plugin and kubernetesui/dashboard is the main dashboard image. The digest of registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller should be 3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a.
10. Installing the NFS provisioner
(1) Install the NFS service — on all three nodes:
```
yum install nfs-utils rpcbind -y
systemctl enable nfs rpcbind && systemctl start nfs rpcbind
```
(2) Edit the NFS exports file:
```
[root@master ~]# cat /etc/exports
/data/k8s 10.244.0.0/16(rw,no_root_squash,no_subtree_check) 192.168.217.16(rw,no_root_squash,no_subtree_check) 192.168.217.0/24(rw,no_root_squash,no_subtree_check)
```
(3) Create the export directory and give it 777 permissions — on the master node only:
```
mkdir -p /data/k8s
chmod -Rf 777 /data/k8s
```
(4) Verify, from slave1 or slave2:
```
systemctl restart nfs rpcbind
showmount -e master
```
The correct output looks like:
```
[root@master ~]# showmount -e master
Export list for master:
/data/k8s 192.168.217.0/24,10.244.0.0/16
```
(5) Install with helm (nfs-client-provisioner-0.1.1.tgz is the offline helm chart):
helm install nfs-client-provisioner ./nfs-client-provisioner-0.1.1.tgz --set rbac.create=true --set persistence.enabled=true --set storageClass.name=nfs-provisioner --set persistence.nfsServer=192.168.217.16 --set persistence.nfsPath=/data/k8s --version 0.1.1 --namespace kube-system
(6) Set the default StorageClass:
kubectl patch storageclass nfs-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
11. Validating the NFS provisioner by installing a Redis cluster
1. Create the PVC:
helm install redis ./redis-persistentvolumeclaim-0.1.0.tgz --set accessModes={ReadWriteOnce} --set requests.storage=256Mi --set storageClassName=nfs-provisioner --create-namespace --version 0.1.0 --namespace kube-system
Now list the PVCs; the one named redis exists. At this point the NFS provisioner is essentially proven working, since everything is Bound:
```
[root@master ~]# k get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
kube-system redis Bound pvc-751a32b6-8706-477b-8cad-d71e8e9f3ab8 256Mi RWO nfs-provisioner 26m
kube-system redis-data-redis-test-master-0 Bound pvc-f9193155-776c-42f4-a3f5-71e75f16416f 8Gi RWO nfs-provisioner 22m
kube-system redis-data-redis-test-replicas-0 Bound pvc-d5ea7d10-2ffa-402e-b3f1-8573a195ad6f 8Gi RWO nfs-provisioner 22m
kube-system redis-data-redis-test-replicas-1 Bound pvc-04203f8a-5907-48ce-9fc2-013e94313c3c 8Gi RWO nfs-provisioner 7m40s
kube-system redis-data-redis-test-replicas-2 Bound pvc-e1693689-b01b-4855-ab1c-b8f843be4e2e 8Gi RWO nfs-provisioner 6m41s
```
2. Install Redis:
helm install redis-test ./redis-16.4.1.tgz --set persistence.enabled=true --set persistence.existingClaim=redis --set service.enabled=true --version 0.2.5 --namespace kube-system
The output of this command:
```
NAME: redis-test
LAST DEPLOYED: Sat Jul 2 10:36:09 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 16.4.1
APP VERSION: 6.2.6

** Please be patient while the chart is being deployed **

Redis™ can be accessed on the following DNS names from within your cluster:

    redis-test-master.kube-system.svc.cluster.local for read/write operations (port 6379)
    redis-test-replicas.kube-system.svc.cluster.local for read-only operations (port 6379)

To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace kube-system redis-test -o jsonpath="{.data.redis-password}" | base64 --decode)

To connect to your Redis™ server:

1. Run a Redis™ pod that you can use as a client:

    kubectl run --namespace kube-system redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image registry.hand-china.com/tools/redis:6.2.6-debian-10-r120 --command -- sleep infinity

   Use the following command to attach to the pod:

    kubectl exec --tty -i redis-client \
    --namespace kube-system -- bash

2. Connect using the Redis™ CLI:
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-test-master
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-test-replicas

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace kube-system svc/redis-test-master : &
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p
```

The images needed here are registry.hand-china.com_tools_redis_6.2.6-debian-10-r120.tar and redis4.0.11; both ended up scheduled onto slave1 and slave2.