

Offline installation of a Kubernetes 1.19.4 cluster on Linux with NFS storage (kubeadm method)


1. Environment overview

Deployment plan for the kubernetes-1.19.4 cluster:

| # | Server spec | IP address | OS | Role |
|---|-------------|------------|----|------|
| 1 | CPU: 2c, RAM: 4G, disk: 200G | 192.168.217.16 | CentOS 7.6 | k8s master node, NFS server |
| 2 | CPU: 2c, RAM: 4G, disk: 200G | 192.168.217.17 | CentOS 7.6 | k8s worker node |
| 3 | CPU: 2c, RAM: 4G, disk: 200G | 192.168.217.18 | CentOS 7.6 | k8s worker node |

All three servers are virtual machines using NAT networking.

Offline installation package (includes the Docker environment):

Link: https://pan.baidu.com/s/19PTj1VwpvaSxYlhbFuqP6w?pwd=k8ss
Extraction code: k8ss
 

2. Hostnames and network configuration

Edit /etc/hosts on all three servers so the names resolve as below (changing the hostnames themselves is not covered here):

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.217.16 master
192.168.217.17 slave1
192.168.217.18 slave2

Because the VMs use NAT networking, the three servers form their own subnet. Each NIC configuration file looks roughly like this (master shown; adjust IPADDR per host):

TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="ens33"
UUID="d4876b9f-42d8-446c-b0ae-546e812bc954"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.217.16
NETMASK=255.255.255.0
GATEWAY=192.168.217.2
DNS1=192.168.217.16

3. Disable NetworkManager

With the network service enabled, stop NetworkManager to prevent conflicts:

systemctl stop NetworkManager && systemctl disable NetworkManager

4. Time server

Setting up and configuring an NTP time server is covered in my earlier post: https://blog.csdn.net/alwaysbefine/article/details/109055169

5. Kernel parameters

This step is mandatory: k8s checks these three parameters during installation and operation. Do this on all three servers:

vim /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Write these parameters into sysctl.conf, then run `sysctl -p` to apply them. (Important: before running sysctl -p, load the bridge netfilter kernel module with `modprobe br_netfilter`, otherwise the two bridge keys do not exist yet.)
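The same step can be sketched as a small script. This is a minimal sketch, assuming a local file `99-k8s.conf` is written first so it can be inspected; on the nodes the file would go under /etc/sysctl.d/ (or be appended to /etc/sysctl.conf as above).

```shell
# Generate the sysctl drop-in locally first so it can be reviewed:
cat > 99-k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# On each node (as root), load br_netfilter BEFORE applying, otherwise
# the two bridge keys fail with "No such file or directory":
#   modprobe br_netfilter
#   sysctl -p 99-k8s.conf
```

A drop-in under /etc/sysctl.d/ survives package upgrades of /etc/sysctl.conf, which is why it is sketched that way here.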

6. Firewall, SELinux, swap

On all three servers, disable the firewall and SELinux, unmount swap, and upgrade the kernel (next section).

Disable the firewall:

systemctl disable firewalld && systemctl stop firewalld

Disable SELinux temporarily: setenforce 0

Disable SELinux permanently: edit /etc/selinux/config and set SELINUX=disabled

Removing swap is covered in my earlier post: https://blog.csdn.net/alwaysbefine/article/details/124831650 . This step is easy to get wrong -- follow the post closely when removing swap, otherwise the server may fail to boot.
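The core of the swap removal can be sketched as below; this is a demo against a sample fstab line (the stock CentOS 7 `centos-swap` LV is an assumption here), so the sed can be checked before touching the real file. On the node itself, apply the same sed to /etc/fstab as root and run `swapoff -a`.

```shell
# Build a two-line sample fstab to try the edit on:
printf '%s\n' \
  'UUID=abcd-1234 /                       xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap           swap defaults 0 0' > fstab.sample
# Comment out any uncommented line whose fields include "swap":
sed -i '/ swap / s/^[^#]/#&/' fstab.sample
cat fstab.sample
```

The root filesystem line is untouched; only the swap mount is commented out, which is what keeps swap off across reboots.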

7. Kernel upgrade

k8s runs more stably on a newer kernel. Upgrade as follows:

rpm -ivh kernel-ml-5.16.9-1.el7.elrepo.x86_64.rpm
grub2-set-default "CentOS Linux (5.16.9-1.el7.elrepo.x86_64) 7 (Core)"
grub2-editenv list   ## check the default boot entry

Once steps 6 and 7 are both complete, reboot the servers. Do this on all three nodes.

8. Passwordless SSH between the servers

See my earlier post on SSH key-based login: https://blog.csdn.net/alwaysbefine/article/details/123451448

9. Docker deployment

Offline Docker deployment is covered in my earlier post: https://blog.csdn.net/alwaysbefine/article/details/110310112

Note that the Docker version is ce-19.03.9, which matches k8s 1.19.4.




Deploying the k8s cluster

1. Cluster planning

Per the cluster plan, set a new environment variable in /etc/profile on all three servers:

export no_proxy=localhost,127.0.0.1,dev.cnn,192.168.217.16,default.svc.cluster.local,svc.cluster.local,cluster.local,10.96.0.1,10.96.0.0/12,10.244.0.0/16

2. Installing the base k8s components

Unpack k8s.tar.gz from the k8s-1.19.4-offline folder, then mount the unpacked directory as a local yum repository.

Unpack conntrack.tar.gz from the same folder and install it with `rpm -ivh *` -- it is a hard dependency of k8s.

After verifying the local repository, install:

yum install -y kubeadm-1.19.4 kubelet-1.19.4 kubectl-1.19.4

Enable and start the service on all three nodes:

systemctl enable kubelet && systemctl start kubelet

The service is healthy when its status shows active (running):

[root@master opt]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2022-07-01 18:52:58 CST; 5h 44min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1091 (kubelet)
   Memory: 152.8M
   CGroup: /system.slice/kubelet.service
           └─1091 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --h...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.844741 1091 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889281 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName:...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889367 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName:...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889414 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-tok...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889451 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-cer...
Jul 01 18:59:45 master kubelet[1091]: I0701 18:59:45.889488 1091 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-tok...
Jul 01 18:59:46 master kubelet[1091]: W0701 18:59:46.669731 1091 pod_container_deletor.go:79] Container "3b5bb41530363d16e2478900afd45d91dbe5f9260cf8d0ac398a8d29da0a...s containers
Jul 01 18:59:46 master kubelet[1091]: W0701 18:59:46.674402 1091 pod_container_deletor.go:79] Container "3c46b2fa0a198e044fdd27507e17a14944dcee9f657be06d1e5812b16383...s containers
Jul 01 23:37:01 master kubelet[1091]: I0701 23:37:01.462295 1091 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 07ee94a447d5bed0408914de8...4794cb7ae2d9
Jul 01 23:37:01 master kubelet[1091]: I0701 23:37:01.462890 1091 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a5996702878a2fac2c793c22b...1b5fb16772e6
Hint: Some lines were ellipsized, use -l to show in full.

3. Importing the images

Unpack master-images.tar.gz from the k8s-1.19.4-offline folder on the .16 server, then bulk-import: for i in master-images/*; do docker load < $i; done

Unpack slave1-images.tar.gz on the .17 server, then bulk-import: for i in slave1-images/*; do docker load < $i; done

Unpack slave2-images.tar.gz on the .18 server, then bulk-import: for i in slave2-images/*; do docker load < $i; done
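A demo of the import loop with dummy archives (the .tar file names below are hypothetical). Note that a loop written as `for i in $(ls master-images)` yields bare file names without the directory prefix, so `docker load` would not find them unless you cd into the directory first; globbing the path keeps the prefix intact. Replace the echo with the real `docker load < "$f"` on the node.

```shell
# Stage two dummy image archives:
mkdir -p master-images
: > master-images/kube-apiserver.tar
: > master-images/kube-proxy.tar
count=0
for f in master-images/*.tar; do
  echo "docker load < $f"   # on the node: docker load < "$f"
  count=$((count + 1))
done
echo "imported $count archives"
```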

4. kubeadm binary and configuration

On all three servers, unpack kubeadm.zip from the k8s-1.19.4-offline folder, move the executable kubeadm-1.19.3 to /usr/bin/, and rename it to kubeadm.

Edit kubeadm.conf; the key parts to change are:

localAPIEndpoint:
  advertiseAddress: 192.168.217.16
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: zsk.cnn
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

5. Cluster initialization

Initialize the cluster with:

kubeadm init --config kubeadm.conf

If initialization fails, reset with `kubeadm reset`; do not delete the environment files by hand and redo the initialization. The node-join command is printed at the end of the init output -- copy it and run it unchanged on the other nodes to join them. If a join fails, run `kubeadm reset` on that node to restore its environment and join again.

Note: this command runs on the master node only. When it completes successfully, copy the printed join command and execute it on the other two nodes.

At this point the cluster nodes will show NotReady. On the master, run `kubectl apply -f kube-flannel.yml` and the cluster will become Ready.

6. Installing kubernetes-dashboard (master node only; do not run this on the other two nodes)

The contents of dashboard.yml:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Generate a certificate for the dashboard, create the secret, and apply the manifest:

mkdir /etc/kubernetes/pki/dashboard/
cd /etc/kubernetes/pki/dashboard/
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -subj "/CN=zsk.cnn" -out tls.csr
openssl x509 -req -days 3650 -in tls.csr -CA ../ca.crt -CAkey ../ca.key -CAcreateserial -out tls.crt
kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/pki/dashboard/ -n kube-system
# install the dashboard
kubectl apply -f dashboard.yml

The output should be:

secret/kubernetes-dashboard-certs created

Bind the cluster role:

kubectl create clusterrolebinding default --clusterrole=cluster-admin --serviceaccount=kube-system:default --namespace=kube-system

The correct output is:

clusterrolebinding.rbac.authorization.k8s.io/default created

7. Installing ingress

vim ingress-nginx-values.yaml

controller:
  name: controller
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller
    tag: "v0.50.0"
    pullPolicy: IfNotPresent
  config:
    map-hash-bucket-size: "1024"
    proxy-body-size: "100m"
    ssl-protocols: "TLSv1.2 TLSv1.3"
    enable-modsecurity: "true"
    enable-owasp-modsecurity-crs: "true"
    error-log-level: "warn"
  modsecurity:
    config:
      enabled: true
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  kind: DaemonSet
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 200Mi
defaultBackend:
  enabled: true
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend
    tag: "1.4"
    pullPolicy: IfNotPresent

Now unpack ingress-nginx-3.25.0.tgz from the k8s-1.19.4-offline\helms directory into the same directory as the values file above, then run the command below. Note that the helm binary must be on the PATH first -- simply move the helm file to /usr/local/bin/.

helm install ingress-nginx -f ingress-nginx-values.yaml ingress-nginx -n ingress-nginx --create-namespace

The image ingress uses is:

[root@master YAML]# docker images --digests
REPOSITORY                                                                     TAG      DIGEST                                                                    IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   <none>   sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a   435df390f367   16 months ago   279MB

Important: check that the digest starts with 3dd; if it does not, fix the digest by re-importing the image. Ingress depends on two images, both of which are in the ingress directory.

8. Deploying the dashboard ingress

vim dash-ingress.yaml

The hosts field in this file must be set; I use the domain dash.master.com here, which will also need an /etc/hosts entry later.

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  tls:
    - hosts:
      - dash.master.com
      secretName: kubernetes-dashboard-certs
  rules:
    - host: dash.master.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 443

Apply it:

kubectl apply -f dash-ingress.yaml

9. Getting a token

First look up the service account's mountable secret:

[root@localhost software]# kubectl describe sa default -n kube-system
Name:                default
Namespace:           kube-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-srkj8
Tokens:              default-token-srkj8
Events:              <none>

Then describe that secret to get the token:

[root@localhost software]# kubectl describe secrets default-token-srkj8 -n kube-system
Name:         default-token-srkj8
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 34ed4707-ebe7-4699-85d1-09b20f3d0cae
Type:  kubernetes.io/service-account-token
Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkdOMndKQ2FTUzd3c2ZhakVfSFFRekxFLXNQZGhUdUpVdGJyNFpsSTJmMkEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXNya2o4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNGVkNDcwNy1lYmU3LTQ2OTktODVkMS0wOWIyMGYzZDBjYWUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.dGeBBZg-DzZJ7aUf-5FsNcm5x3JaGBMKMMaAa92-98PV7U-5pZQTcCvvw0Bi6nEGTFi8g_a6NQ3Tw43quPJV5FLMFgH9mQMnJXRtjjKomLjd4_GwYpK7cPaFuzwJWLqAXiddnEZmnyLj6D3qy5wc3QR5rgiQQ3QgrXKCZzXoYrlPg9dNUz3XqEgtxDlBYMFe43Gn9e8Xw7NOgydqKv0Qhxqjltx_nGJFw2fXIdoVBQQM1uC1BU37XqJJrh0wficXw57aB338W9ena38454V8pxWs2gYAlsOcCPJDAQb_tZA1e9JoHFWIwZ5VP_YHZC3MGTiVdjws6i8EpcPRM3QFkQ
ca.crt:     1070 bytes
namespace:  11 bytes

The two steps can be combined into one command:

kubectl describe secrets $(kubectl describe sa default -n kube-system | grep Mountable | awk '{print $3}') -n kube-system
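The text-handling half of that combined command can be sketched and checked offline against a captured sample line, so the field extraction is verified before it is wired up to kubectl:

```shell
# Sample line as printed by `kubectl describe sa default -n kube-system`:
sample='Mountable secrets:   default-token-srkj8'
# Field 3 of the "Mountable" line is the secret name:
secret=$(printf '%s\n' "$sample" | awk '/Mountable/ {print $3}')
echo "$secret"
# On the master, the extracted name feeds the second command:
#   kubectl describe secrets "$secret" -n kube-system
```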

By this stage, the images in use on the three nodes are as follows.

master node:

[root@master opt]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
bitnami/kubectl 1.17.13-debian-10-r21 7022735edf5f 19 months ago 129MB
kubernetesui/metrics-scraper v1.0.6 48d79e554db6 20 months ago 34.5MB
quay.io/coreos/flannel v0.13.0 e708f4bb69e3 20 months ago 57.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.19.3 cdef7632a242 20 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.19.3 9b60aca1d818 20 months ago 111MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.19.3 a301be0cd44b 20 months ago 119MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.19.3 aaefbfa906bd 20 months ago 45.7MB
kubernetesui/dashboard v2.0.4 46d0a29c3f61 22 months ago 225MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 22 months ago 253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 2 years ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 2 years ago 683kB
registry.c7n.gzinfo/choerodon-tools/kubectl v1.15.2 2fad3003d792 2 years ago 52.5MB

slave1 node:

[root@slave1 ~]# docker images --digests
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller none <none> ae1739386d6a 7 months ago 285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller <none> sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 435df390f367 17 months ago 279MB
jettech/kube-webhook-certgen v1.5.1 sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 a013daf8730d 19 months ago 44.7MB
kubernetesui/metrics-scraper v1.0.6 <none> 48d79e554db6 20 months ago 34.5MB
quay.io/coreos/flannel v0.13.0 <none> e708f4bb69e3 20 months ago 57.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.19.3 <none> cdef7632a242 20 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.19.3 <none> aaefbfa906bd 20 months ago 45.7MB
kubernetesui/dashboard v2.0.4 <none> 46d0a29c3f61 22 months ago 225MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.13-0 <none> 0369cf4303ff 22 months ago 253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 <none> bfe3a36ebd25 2 years ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 <none> 80d28bedfe5d 2 years ago 683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend 1.4 <none> 846921f0fe

slave2 node:

[root@slave2 ~]# docker images --digests
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller v0.50.0 <none> ae1739386d6a 7 months ago 285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller <none> sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 435df390f367 17 months ago 279MB
jettech/kube-webhook-certgen v1.5.1 <none> a013daf8730d 19 months ago 44.7MB
quay.io/coreos/flannel v0.13.0 <none> e708f4bb69e3 20 months ago 57.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.19.3 <none> cdef7632a242 20 months ago 118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.19.3 <none> aaefbfa906bd 20 months ago 45.7MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.7.0 <none> bfe3a36ebd25 2 years ago 45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 <none> 80d28bedfe5d 2 years ago 683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend 1.4 <none> 846921f0fe0e 4 years ago 4.84MB

Of these, kubernetesui/metrics-scraper is the dashboard's metrics-collection plugin, kubernetesui/dashboard is the main image, and the digest of registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller should be 3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a.

10. Installing the NFS plugin

(1) Install the NFS service on all three nodes:

yum install nfs-utils rpcbind -y
systemctl enable nfs rpcbind && systemctl start nfs rpcbind

(2) Edit the NFS exports file:

[root@master ~]# cat /etc/exports
/data/k8s 10.244.0.0/16(rw,no_root_squash,no_subtree_check) 192.168.217.16(rw,no_root_squash,no_subtree_check) 192.168.217.0/24(rw,no_root_squash,no_subtree_check)

(3) Create the export directory and give it 777 permissions (master node only; the other nodes do not need this):

mkdir -p /data/k8s
chmod -Rf 777 /data/k8s

(4) Verify from slave1 or slave2:

systemctl restart nfs rpcbind
showmount -e master

Correct output:

[root@master ~]# showmount -e master
Export list for master:
/data/k8s 192.168.217.0/24,10.244.0.0/16

(5) Install with helm (nfs-client-provisioner-0.1.1.tgz is the offline helm chart):

helm install nfs-client-provisioner ./nfs-client-provisioner-0.1.1.tgz --set rbac.create=true --set persistence.enabled=true --set storageClass.name=nfs-provisioner --set persistence.nfsServer=192.168.217.16 --set persistence.nfsPath=/data/k8s --version 0.1.1 --namespace kube-system

(6) Set the default StorageClass:

kubectl patch storageclass nfs-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
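A quick way to confirm the default StorageClass took effect is a throwaway PVC that names no storageClassName; it should bind via nfs-provisioner. This is a minimal sketch, and the claim name `nfs-check` is hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-check   # hypothetical test claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```

Apply it with `kubectl apply -f`, check that `kubectl get pvc` shows it Bound, then delete it.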

11. Validating the NFS plugin with a redis cluster

1. Create the PVC:

helm install redis ./redis-persistentvolumeclaim-0.1.0.tgz --set accessModes={ReadWriteOnce} --set requests.storage=256Mi --set storageClassName=nfs-provisioner --create-namespace --version 0.1.0 --namespace kube-system

Now check the PVCs; the new one is named redis. Once the PVCs show Bound, the NFS plugin is essentially confirmed working (k here is an alias for kubectl):

[root@master ~]# k get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
kube-system redis Bound pvc-751a32b6-8706-477b-8cad-d71e8e9f3ab8 256Mi RWO nfs-provisioner 26m
kube-system redis-data-redis-test-master-0 Bound pvc-f9193155-776c-42f4-a3f5-71e75f16416f 8Gi RWO nfs-provisioner 22m
kube-system redis-data-redis-test-replicas-0 Bound pvc-d5ea7d10-2ffa-402e-b3f1-8573a195ad6f 8Gi RWO nfs-provisioner 22m
kube-system redis-data-redis-test-replicas-1 Bound pvc-04203f8a-5907-48ce-9fc2-013e94313c3c 8Gi RWO nfs-provisioner 7m40s
kube-system redis-data-redis-test-replicas-2 Bound pvc-e1693689-b01b-4855-ab1c-b8f843be4e2e 8Gi RWO nfs-provisioner 6m41s
 

2. Install redis:

helm install redis-test ./redis-16.4.1.tgz --set persistence.enabled=true --set persistence.existingClaim=redis --set service.enabled=true --version 0.2.5 --namespace kube-system

The output of this command:

NAME: redis-test
LAST DEPLOYED: Sat Jul 2 10:36:09 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 16.4.1
APP VERSION: 6.2.6
** Please be patient while the chart is being deployed **
Redis&trade; can be accessed on the following DNS names from within your cluster:
redis-test-master.kube-system.svc.cluster.local for read/write operations (port 6379)
redis-test-replicas.kube-system.svc.cluster.local for read-only operations (port 6379)
To get your password run:
export REDIS_PASSWORD=$(kubectl get secret --namespace kube-system redis-test -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis&trade; server:
1. Run a Redis&trade; pod that you can use as a client:
kubectl run --namespace kube-system redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image registry.hand-china.com/tools/redis:6.2.6-debian-10-r120 --command -- sleep infinity
Use the following command to attach to the pod:
kubectl exec --tty -i redis-client \
--namespace kube-system -- bash
2. Connect using the Redis&trade; CLI:
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-test-master
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-test-replicas
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace kube-system svc/redis-test-master : &
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p

The images needed here are registry.hand-china.com_tools_redis_6.2.6-debian-10-r120.tar and redis 4.0.11, both of which were scheduled onto slave1 and slave2.

闽ICP备14008679号