(1) Hosts
Table 1. Hosts
| Host | Role | Version | IP | Notes |
| --- | --- | --- | --- | --- |
| master1 | K8s master node | 1.26.0 | 192.168.204.190 | |
| node1 | K8s worker node | 1.26.0 | 192.168.204.191 | |
(2) Connect via Termius
(3) Inspect the cluster from the master node
- 1) List the nodes
- kubectl get node
-
- 2) List the nodes with details
- kubectl get node -o wide
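As a quick health check, the STATUS column of `kubectl get node` can be scanned for anything other than Ready. A minimal sketch using canned sample lines (illustrative, not captured from this cluster; on a live cluster pipe in `kubectl get node --no-headers` instead):

```shell
# Count nodes whose STATUS column is not "Ready".
sample='master1   Ready   control-plane   10d   v1.26.0
node1   Ready   <none>   10d   v1.26.0'
not_ready=$(printf '%s\n' "$sample" | awk '$2 != "Ready" {n++} END {print n+0}')
echo "NotReady nodes: $not_ready"
```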
(1) Reference
https://github.com/helm/helm/tags
The Helm version must be compatible with the K8s cluster.
(2) Version choice
The cluster runs Kubernetes 1.26.0, with which Helm 3.11.x is compatible,
so version 3.11.0 is chosen.
(3) Download
https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
Transfer the tarball to the server with Termius over SFTP.
(4) Extract
- tar -zxvf helm-v3.11.0-linux-amd64.tar.gz
- mv linux-amd64/helm /usr/local/bin/helm
- helm version
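Before unpacking, the tarball can be verified against its published SHA-256 checksum (get.helm.sh serves a checksum file next to each release tarball). The files below are local stand-ins used only to show the mechanics:

```shell
# Create a stand-in file and checksum file (in practice, download the
# published checksum file from get.helm.sh next to the tarball).
tmp=$(mktemp)
printf 'helm tarball stand-in' > "$tmp"
sha256sum "$tmp" > "$tmp.sha256"
# Verify the file against the checksum before extracting.
if sha256sum -c "$tmp.sha256" >/dev/null; then result="checksum OK"; else result="checksum MISMATCH"; fi
echo "$result"
rm -f "$tmp" "$tmp.sha256"
```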
(5) Command completion
source <(helm completion bash)
(1) Reference
https://docs.kubesphere.com.cn/v4.0/02-quickstart/01-install-ks-core/
Installation is supported on v1.26.x.
(2) Install
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-0.4.0.tgz
Full installation output:
- [root@master1 opt]# helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-0.4.0.tgz
- Release "ks-core" does not exist. Installing it now.
- NAME: ks-core
- LAST DEPLOYED: Wed May 22 11:57:23 2024
- NAMESPACE: kubesphere-system
- STATUS: deployed
- REVISION: 1
- TEST SUITE: None
- NOTES:
- Please wait for several seconds for KubeSphere deployment to complete.
-
- 1. Make sure KubeSphere components are running:
-
- kubectl get pods -n kubesphere-system
-
- 2. Then you should be able to visit the console NodePort:
-
- Console: http://192.168.204.190:30880
-
- 3. To login to your KubeSphere console:
-
- Account: admin
- Password: "P@88w0rd"
- NOTE: Please change the default password after login.
-
- For more details, please visit https://kubesphere.io.
(3) Check the pods
kubectl get pods -n kubesphere-system
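Waiting for KubeSphere to come up usually means re-running the command above until every pod is Running. A small helper that makes that decision from `kubectl get pods --no-headers` style lines (the sample lines are illustrative):

```shell
# Succeed only when every pod's STATUS is Running or Completed.
all_ready() {
  awk '$3 != "Running" && $3 != "Completed" {bad++} END {exit bad+0}'
}
sample='ks-apiserver-6b4f   1/1   Running   0   3m
ks-console-7c9d   1/1   Running   0   3m'
if printf '%s\n' "$sample" | all_ready; then state=ready; else state=waiting; fi
echo "$state"
```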
(4) Access
http://192.168.204.190:30880
(5) Enter the initial account and password
- Account: admin
- Password: P@88w0rd
(6) Change the password
(7) Enter the system
(8) Cluster management
(9) Extension center
(10) Search the marketplace
Keyword: "CI/CD"
(1) Reference
KubeSphere extension marketplace
https://kubesphere.com.cn/marketplace/extensions/devops/
Alternative installation method:
https://www.kubesphere.io/zh/docs/v3.4/quick-start/minimal-kubesphere-on-k8s/
(2) Sync the cloud account
(3) Install
Version requirements for KubeSphere DevOps:
- Kubernetes >= 1.19.0-0
-
- KubeSphere >= 4.0.0-0
(4) Next
(5) Start the installation
Installing
Succeeded
(6) Next
(7) Select the cluster
(8) Confirm
(9) Installation succeeded
(10) View the cluster
(1) Reference
KubeSphere extension marketplace
https://kubesphere.com.cn/marketplace/extensions/zadig/
Version requirements for Zadig:
- Kubernetes 1.16.0 - 1.26.0
-
- KubeSphere >= 4.0.0-0
(2) Zadig homepage
https://koderover.com/zadig
(3) Script-based installation
https://docs.koderover.com/zadig/Zadig%20v2.0.0/install/helm-deploy/
(4) Installing Zadig from KubeSphere
Install with the official script:
- curl -LO https://github.com/koderover/zadig/releases/download/v2.0.0/install_quickstart.sh
- chmod +x ./install_quickstart.sh
(5) Declare the variables
- export IP=192.168.204.190
- export PORT=30090
(6) Install
./install_quickstart.sh
The installation takes roughly 10 minutes.
(1) Error
(2) Root-cause analysis
Check the log:
2024-05-22T12:18:54.721997147+08:00 client.go:482: [debug] Ignoring delete failure for "zadig-post-upgrade" batch/v1, Kind=Job: jobs.batch "zadig-post-upgrade" not found
(3) Fix
Related issue:
https://github.com/koderover/zadig/issues/2417
Uninstall first,
then reinstall:
Next
Start the installation
Installing
Still failing:
- 2024-05-22T12:39:41.303302680+08:00 upgrade.go:442: [debug] warning: Upgrade "zadig" failed: post-upgrade hooks failed: 1 error occurred:
- 2024-05-22T12:39:41.303305360+08:00 * job failed: BackoffLimitExceeded
- 2024-05-22T12:39:41.303306374+08:00
- 2024-05-22T12:39:41.303307165+08:00
- 2024-05-22T12:39:41.327312921+08:00 Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred:
- 2024-05-22T12:39:41.327345989+08:00 * job failed: BackoffLimitExceeded
- 2024-05-22T12:39:41.327352567+08:00
- 2024-05-22T12:39:41.327356487+08:00
- 2024-05-22T12:39:41.327363940+08:00 helm.go:84: [debug] post-upgrade hooks failed: 1 error occurred:
- 2024-05-22T12:39:41.327368417+08:00 * job failed: BackoffLimitExceeded
- 2024-05-22T12:39:41.327371965+08:00
- 2024-05-22T12:39:41.327375169+08:00
- 2024-05-22T12:39:41.327380602+08:00 UPGRADE FAILED
At this point, fall back to the script installation.
Inspecting the script shows it bundles Helm 3.6.1, which conflicts with the Helm already installed on master1,
so deploy from node1 instead.
If the error shows a timeout, retry when the network connection is better:
Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition
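When the failure is a transient network timeout, wrapping the script in a simple retry loop can help. A dry-run sketch, with `echo` standing in for the real script (replace it with ./install_quickstart.sh to run for real):

```shell
# Retry up to 3 times, sleeping between attempts; the echo stand-in
# always succeeds, so the loop exits on the first attempt here.
attempt=1
until echo "./install_quickstart.sh (attempt $attempt)"; do
  attempt=$((attempt+1))
  [ "$attempt" -gt 3 ] && break
  sleep 60
done
```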
(1) Check
The Calico network plugin:
kubectl -n kube-system get po -owide | grep calico-node
Two pod CIDR blocks have been allocated, one to master1 and one to node1:
kubectl get ipamblocks
(2) Inspect the blocks
- kubectl get ipamblocks 10-244-137-64-26 -oyaml
- kubectl get ipamblocks 10-244-166-128-26 -oyaml
(3) Check the routing tables
On master1, traffic destined for 10.244.166.128/26 is forwarded through the tunl0 interface to 192.168.204.191, i.e. node1; pod IPs local to master1 are routed directly to the corresponding calico interfaces.
route -n
On node1 there are similar entries: traffic destined for 10.244.137.64/26 is forwarded through tunl0 to 192.168.204.190, i.e. master1.
route -n
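The cross-node entries can be picked out of `route -n` mechanically: any row whose interface is tunl0 and whose gateway is set is a route to the other node's pod CIDR block. A sketch over sample rows mimicking the tables above (on a live node, pipe in `route -n` instead):

```shell
# Print "destination via gateway" for routes that go through tunl0
# to another node (gateway set, i.e. not 0.0.0.0).
sample='10.244.166.128  192.168.204.191 255.255.255.192 UG 0 0 0 tunl0
10.244.137.64   0.0.0.0         255.255.255.192 U  0 0 0 *'
cross_node=$(printf '%s\n' "$sample" | awk '$NF == "tunl0" && $2 != "0.0.0.0" {print $1 " via " $2}')
echo "$cross_node"
```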
(1) Check
Check Docker disk usage on the master1 node:
- docker system df
-
- docker system df -v
Check Docker disk usage on the node1 node the same way.
(2) Clean up
On master1, remove only stopped containers:
docker container prune
On node1, remove only stopped containers the same way.
(1) Delete pods
Syntax:
kubectl delete pod <your-pod-name> -n <name-space> --force --grace-period=0
Delete:
kubectl delete --all pods -n zadig --force --grace-period=0
kubectl delete --all pods -n argocd --force --grace-period=0
kubectl delete --all pods -n extension-devops --force --grace-period=0
kubectl delete --all pods -n extension-zadig --force --grace-period=0
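The four commands above can be collapsed into one loop, echoed here as a dry run (drop the `echo` to execute for real):

```shell
# Build the force-delete command for each affected namespace.
cmds=$(for ns in zadig argocd extension-devops extension-zadig; do
  echo "kubectl delete --all pods -n $ns --force --grace-period=0"
done)
printf '%s\n' "$cmds"
```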
(2) Delete PVs and PVCs
- kubectl patch pv xxx -p '{"metadata":{"finalizers":null}}'
- kubectl patch pvc xxx -p '{"metadata":{"finalizers":null}}'
(1) Error
The kubesphere-devops-system namespace is stuck in Terminating.
(2) Root-cause analysis
Pick a Terminating namespace and inspect its finalizers:
kubectl get namespace kubesphere-devops-system -o yaml
The output shows:
- spec:
-   finalizers:
-     - kubernetes
(3) Fix
Export the namespace as JSON to a file:
kubectl get namespace kubesphere-devops-system -o json > tmp.json
Edit tmp.json and delete the finalizers field:
- 25 "spec": {          # delete from this line
- 26   "finalizers": [
- 27     "kubernetes"
- 28   ]
- 29 },                 # down to this line
Start the proxy (this command blocks the current terminal):
kubectl proxy
Open a new terminal window and run:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/kubesphere-devops-system/finalize
Confirm that the Terminating namespace has been deleted.
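The manual tmp.json edit can also be scripted. A sketch that assumes python3 is available on the node; a locally created stand-in is used here in place of the real `kubectl get namespace ... -o json` export:

```shell
# Stand-in for: kubectl get namespace kubesphere-devops-system -o json > tmp.json
cat > tmp.json <<'EOF'
{"apiVersion": "v1", "kind": "Namespace",
 "metadata": {"name": "kubesphere-devops-system"},
 "spec": {"finalizers": ["kubernetes"]}}
EOF
# Remove spec.finalizers in place (same effect as deleting the lines by hand).
python3 - <<'EOF'
import json
with open("tmp.json") as f:
    ns = json.load(f)
ns.get("spec", {}).pop("finalizers", None)
with open("tmp.json", "w") as f:
    json.dump(ns, f, indent=2)
EOF
if grep -q finalizers tmp.json; then state="still present"; else state="finalizers removed"; fi
echo "$state"
rm -f tmp.json
```

The edited file is then PUT to the /finalize endpoint exactly as shown above.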
(1) Check the status
It now shows as completed.
(2) View the YAML
- kind: Job
- apiVersion: batch/v1
- metadata:
-   name: devops-post-delete
-   namespace: extension-devops
-   labels:
-     controller-uid: 0e83e553-482f-4755-834a-9c0f07d4c5b9
-     job-name: devops-post-delete
-   annotations:
-     batch.kubernetes.io/job-tracking: ''
-     helm.sh/hook: post-delete
-     helm.sh/hook-delete-policy: 'before-hook-creation,hook-succeeded'
-     helm.sh/hook-weight: '1'
-     revisions: >-
-       {"1":{"status":"completed","succeed":1,"desire":1,"uid":"0e83e553-482f-4755-834a-9c0f07d4c5b9","start-time":"2024-05-22T15:52:49+08:00","completion-time":"2024-05-22T16:22:50+08:00"}}
- spec:
-   parallelism: 1
-   completions: 1
-   backoffLimit: 6
-   selector:
-     matchLabels:
-       controller-uid: 0e83e553-482f-4755-834a-9c0f07d4c5b9
-   template:
-     metadata:
-       creationTimestamp: null
-       labels:
-         controller-uid: 0e83e553-482f-4755-834a-9c0f07d4c5b9
-         job-name: devops-post-delete
-     spec:
-       containers:
-         - name: post-delete-job
-           image: 'kubesphere/kubectl:v1.27.4'
-           command:
-             - /bin/bash
-             - '-c'
-             - |
-               if kubectl get ns argocd; then
-                 kubectl delete ns argocd
-               fi
-               if kubectl get ns kubesphere-devops-system; then
-                 kubectl delete ns kubesphere-devops-system
-               fi
-               if kubectl get ns kubesphere-devops-worker; then
-                 kubectl delete ns kubesphere-devops-worker
-               fi
-           resources: {}
-           terminationMessagePath: /dev/termination-log
-           terminationMessagePolicy: File
-           imagePullPolicy: IfNotPresent
-       restartPolicy: Never
-       terminationGracePeriodSeconds: 30
-       dnsPolicy: ClusterFirst
-       securityContext: {}
-       schedulerName: default-scheduler
-   completionMode: NonIndexed
-   suspend: false
(1) Help
- [root@node1 ~]# ctr images --help
- NAME:
- ctr images - manage images
-
- USAGE:
- ctr images command [command options] [arguments...]
-
- COMMANDS:
- check check existing images to ensure all content is available locally
- export export images
- import import images
- list, ls list images known to containerd
- mount mount an image to a target path
- unmount unmount the image from the target
- pull pull an image from a remote
- push push an image to a remote
- delete, del, remove, rm remove one or more images by reference
- tag tag an image
- label set and clear labels for an image
- convert convert an image
-
- OPTIONS:
- --help, -h show help
(2) Pull
ctr images pull ghcr.io/dexidp/dex:v2.30.2
(3) View
- crictl images
- or
- ctr images list
- or
- ctr i ls -q
(4) Export
ctr image export dev.tar.gz ghcr.io/dexidp/dex:v2.30.2
(5) Delete
- 1) Query
- ctr image list | grep ghcr.io/dexidp/dex:v2.30.2
-
- 2) Delete
- ctr image delete ghcr.io/dexidp/dex:v2.30.2
(6) Import
- 1) Import
- ctr image import dev.tar.gz
-
- 2) Query
- ctr image list | grep ghcr.io/dexidp/dex:v2.30.2
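Chaining the export and import steps above moves an image between nodes without a registry. A dry-run sketch (echoed commands only) using this cluster's node1 address:

```shell
# Plan: export on master1, copy to node1, import there.
img=ghcr.io/dexidp/dex:v2.30.2
plan=$(
  echo "ctr images export dex.tar $img"
  echo "scp dex.tar root@192.168.204.191:/opt/"
  echo "ssh root@192.168.204.191 ctr images import /opt/dex.tar"
)
printf '%s\n' "$plan"
```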