
Continuing to use the k8s cluster when Rancher is broken or has been deleted

For reference, the Rancher server in this environment was originally started with:
 docker run -d  --privileged=true  --restart=unless-stopped -p 80:80 -p 8443:443 -v /opt/service/rancher:/var/lib/rancher harbor.jettech.com/rancher/rancher:v2.3.6

Cleanup

[root@localhost ~]# cat clean.sh
#!/bin/bash
# Uninstall Rancher 2.x
KUBE_SVC='
kubelet
kube-scheduler
kube-proxy
kube-controller-manager
kube-apiserver
'
for kube_svc in ${KUBE_SVC};
do
  # Stop the service
  if [[ `systemctl is-active ${kube_svc}` == 'active' ]]; then
    systemctl stop ${kube_svc}
  fi
  # Disable the service at boot
  if [[ `systemctl is-enabled ${kube_svc}` == 'enabled' ]]; then
    systemctl disable ${kube_svc}
  fi
done
# Stop all containers
docker stop $(docker ps -aq)
# Remove all containers
docker rm -f $(docker ps -qa)
# Remove all container volumes
docker volume rm $(docker volume ls -q)
# Unmount kubelet / rancher mounts
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher;
do
  umount $mount;
done
# Back up directories
mv /etc/kubernetes /etc/kubernetes-bak-$(date +"%Y%m%d%H%M")
mv /var/lib/etcd /var/lib/etcd-bak-$(date +"%Y%m%d%H%M")
mv /var/lib/rancher /var/lib/rancher-bak-$(date +"%Y%m%d%H%M")
mv /opt/rke /opt/rke-bak-$(date +"%Y%m%d%H%M")
# Remove leftover paths
rm -rf /etc/ceph \
       /etc/cni \
       /opt/cni \
       /run/secrets/kubernetes.io \
       /run/calico \
       /run/flannel \
       /var/lib/calico \
       /var/lib/cni \
       /var/lib/kubelet \
       /var/log/containers \
       /var/log/kube-audit \
       /var/log/pods \
       /var/run/calico
# Clean up network interfaces
no_del_net_inter='
lo
docker0
eth
ens
bond
'
network_interface=`ls /sys/class/net`
for net_inter in $network_interface;
do
  if ! echo "${no_del_net_inter}" | grep -qE ${net_inter:0:3}; then
    ip link delete $net_inter
  fi
done
# Kill leftover processes listening on these ports
port_list='
80
443
6443
2376
2379
2380
8472
9099
10250
10254
'
for port in $port_list;
do
  pid=`netstat -atlnup | grep $port | awk '{print $7}' | awk -F '/' '{print $1}' | grep -v - | sort -rnk2 | uniq`
  if [[ -n $pid ]]; then
    kill -9 $pid
  fi
done
kube_pid=`ps -ef | grep -v grep | grep kube | awk '{print $2}'`
if [[ -n $kube_pid ]]; then
  kill -9 $kube_pid
fi
# Flush iptables
## Note: if the node carries custom iptables rules, run the following with care
sudo iptables --flush
sudo iptables --flush --table nat
sudo iptables --flush --table filter
sudo iptables --table nat --delete-chain
sudo iptables --table filter --delete-chain
systemctl restart docker
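The script is meant to be run as root on every node that is being wiped, for example:

bash clean.sh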

Prerequisites

1. First make sure the kubeconfig through which Rancher manages the k8s cluster still exists; check ~/.kube/config to verify:

[root@localhost ~]# kubectl config get-contexts
CURRENT   NAME                   CLUSTER                AUTHINFO   NAMESPACE
*         jettech                jettech                jettech
          jettech-172.16.10.87   jettech-172.16.10.87   jettech
[root@localhost ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.16.10.87:8443/k8s/clusters/c-t5kwm
  name: jettech                 # this cluster entry goes through Rancher
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.16.10.87:6443
  name: jettech-172.16.10.87    # this entry talks to the k8s apiserver directly; kubectl can use it without Rancher
contexts:
- context:
    cluster: jettech
    user: jettech
  name: jettech
- context:
    cluster: jettech-172.16.10.87
    user: jettech
  name: jettech-172.16.10.87
current-context: jettech
kind: Config
preferences: {}
users:
- name: jettech
  user:
    token: kubeconfig-user-9hzqx.c-t5kwm:4kvkcffhpgsrn4bg4mqv8gppf7h9h7x89zn77kkmfdrrznnjr8tmb9


The jettech entry is the one created first (it goes through Rancher); the jettech-172.16.10.87 entry exists because, when creating the cluster in Rancher, we enabled the option that allows the cluster to be operated in the original, direct way (the authorized cluster endpoint) even when Rancher is unavailable.
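A quick way to confirm that the direct endpoint still works without going through Rancher (the context name is taken from the output above):

kubectl --context=jettech-172.16.10.87 get ns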

2. Switch to the other context

[root@localhost ~]# kubectl config use-context jettech-172.16.10.87
Switched to context "jettech-172.16.10.87".

3. Use kubectl to operate the cluster

[root@localhost ~]# kubectl get nodes
NAME           STATUS     ROLES                      AGE   VERSION
172.16.10.15   NotReady   worker                     49d   v1.17.4
172.16.10.87   NotReady   controlplane,etcd,worker   49d   v1.17.4

If Rancher is completely unrecoverable

Once kubectl works again, import this cluster into a newly deployed Rancher installation. After the import, recreate the projects and move the existing namespaces into them. For the exact import commands, see the official Rancher documentation.
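The import itself boils down to applying the registration manifest generated by the new Rancher; a rough sketch (the URL is a placeholder, take the real one from the new Rancher UI):

kubectl apply -f https://<new-rancher-host>:8443/v3/import/<registration-token>.yaml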

If the cluster itself were also unreachable, you could not simply "build a cluster, create a user, bind the user to the cluster and grant it permissions": that approach fails because kubectl no longer works at all and there is no usable kubeconfig.

In a TLS-enabled cluster, every interaction with the cluster requires authentication; kubeconfig (i.e. certificates) and tokens are the two simplest and most common authentication methods.

Take kubectl as an example of how kubeconfig is used. kubectl is just an executable written in Go; given a suitable kubeconfig it can be used on any node of the cluster. By default kubectl looks for a file named config under $HOME/.kube; you can also point it at another kubeconfig file via the KUBECONFIG environment variable or the --kubeconfig flag.

In short, a kubeconfig is simply the configuration used to access a cluster.
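For reference, the three ways of pointing kubectl at a kubeconfig mentioned above look like this (the file path is just an example):

kubectl --kubeconfig /opt/tembak/kube_config_cluster.yml get nodes        # explicit flag
export KUBECONFIG=/opt/tembak/kube_config_cluster.yml; kubectl get nodes  # environment variable
kubectl get nodes                                                         # falls back to $HOME/.kube/config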

1. The certificates of a cluster created by Rancher live in:

[root@localhost ~]# ls /etc/kubernetes/ssl/
certs  kube-ca-key.pem  kubecfg-kube-scheduler.yaml  kube-proxy-key.pem
kube-apiserver-key.pem  kube-ca.pem  kube-controller-manager-key.pem  kube-proxy.pem
kube-apiserver.pem  kubecfg-kube-apiserver-proxy-client.yaml  kube-controller-manager.pem  kube-scheduler-key.pem
kube-apiserver-proxy-client-key.pem  kubecfg-kube-apiserver-requestheader-ca.yaml  kube-etcd-172-16-10-87-key.pem  kube-scheduler.pem
kube-apiserver-proxy-client.pem  kubecfg-kube-controller-manager.yaml  kube-etcd-172-16-10-87.pem  kube-service-account-token-key.pem
kube-apiserver-requestheader-ca-key.pem  kubecfg-kube-node.yaml  kube-node-key.pem  kube-service-account-token.pem
kube-apiserver-requestheader-ca.pem  kubecfg-kube-proxy.yaml  kube-node.pem

What we need here is the cluster root CA:

[root@localhost ~]# ls /etc/kubernetes/ssl/{kube-ca-key.pem,kube-ca.pem}
/etc/kubernetes/ssl/kube-ca-key.pem  /etc/kubernetes/ssl/kube-ca.pem
# CA key:            /etc/kubernetes/ssl/kube-ca-key.pem
# CA cert (pem/crt): /etc/kubernetes/ssl/kube-ca.pem
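An optional sanity check on the CA certificate before using it to sign anything:

openssl x509 -in /etc/kubernetes/ssl/kube-ca.pem -noout -subject -dates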

Creating a new user

Creating a new k8s user roughly breaks down into the following steps:

  • Generate the user's private key
  • Generate a certificate signing request (CSR) from that key
  • Sign the CSR with the k8s API server's CA to produce the user's certificate
  • Configure kubectl config:
    • kubectl config set-cluster            // cluster entry
    • kubectl config set-credentials NAME   // user entry
    • kubectl config set-context            // context entry
    • kubectl config use-context            // switch context

The concrete steps:

(1) The user certificate; here I use my own user name, wubo:

[root@localhost aaa]# (umask 077; openssl genrsa -out wubo.key 2048)   # create the user's private key
[root@localhost aaa]# openssl req -new -key wubo.key -out wubo.csr -subj "/O=wubo/CN=wubo"   # create the CSR; -subj sets group and user: O is the group name, CN is the user name
[root@localhost aaa]# openssl x509 -req -in wubo.csr -CA /etc/kubernetes/ssl/kube-ca.pem -CAkey /etc/kubernetes/ssl/kube-ca-key.pem -CAcreateserial -out wubo.crt -days 365   # sign the user certificate with the k8s CA
[root@localhost aaa]# ls
wubo.crt  wubo.csr  wubo.key
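An optional check that the new certificate really chains to the cluster CA and carries the expected identity:

openssl verify -CAfile /etc/kubernetes/ssl/kube-ca.pem wubo.crt
openssl x509 -in wubo.crt -noout -subject   # should show O=wubo, CN=wubo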

(2) Configure kubectl config

[root@localhost ~]# export KUBE_APISERVER="https://172.16.10.87:6443"
# Set the cluster entry
# This step can be skipped; if skipped, the existing cluster entry in the default config file is used
[root@localhost aaa]# kubectl config set-cluster wubo-cluster --certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=config
Cluster "wubo-cluster" set.
# Set the client credentials; the user is wubo, matching the CN of the certificate created with openssl
[root@localhost aaa]# kubectl config set-credentials wubo --certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --client-certificate=wubo.crt --client-key=wubo.key --embed-certs=true --kubeconfig=config
User "wubo" set.
# Set the context, i.e. associate the cluster [wubo-cluster] with the user [wubo]
[root@localhost aaa]# kubectl config set-context wubo@wubo-cluster --cluster=wubo-cluster --user=wubo
# You can now switch to the new user (it has no permissions yet); make it the default context
[root@localhost aaa]# kubectl config use-context wubo@wubo-cluster
Switched to context "wubo@wubo-cluster".
# Check
[root@localhost aaa]# kubectl config get-contexts
CURRENT   NAME                   CLUSTER                AUTHINFO   NAMESPACE
          jettech                jettech                jettech
          jettech-172.16.10.87   jettech-172.16.10.87   jettech
*         wubo@wubo-cluster      wubo-cluster           wubo

  • kubectl config set-cluster was not strictly necessary here, because we keep using the same cluster and its default config file.
  • kubectl config set-cluster --kubeconfig=/PATH/TO/SOMEFILE: --kubeconfig=/PATH/TO/SOMEFILE writes the entries to a new config file; without this option they are added to ~/.kube/config, and use-context can then switch between the different users that manage the cluster.
  • A context is simply "which user manages which cluster", i.e. the combination of a user and a cluster.

Parameter notes:

  • wubo-cluster — the cluster name
  • --certificate-authority=/etc/kubernetes/ssl/kube-ca.pem — the CA that issued the cluster's certificates
  • --embed-certs=true --server=${KUBE_APISERVER} — embed the CA into the kubeconfig and point at the cluster's apiserver address
  • --kubeconfig=config — write the generated entries into this kubeconfig file
  • When setting the client credentials, --client-certificate is the user's certificate (here wubo.crt) and --client-key is the user's key (here wubo.key); what the user is actually allowed to do is decided later by RBAC.

The generated kubeconfig is saved to the ~/.kube/config file; it describes clusters, users and contexts.
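At this point a quick check shows that the new context is active but has no rights yet (the Forbidden error disappears once the RBAC bindings below are created):

kubectl config view --minify   # shows only the active wubo@wubo-cluster context
kubectl get pods -n default    # expected to fail with a Forbidden error for user "wubo"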

Cluster parameters

This part describes the cluster to be accessed. set-cluster registers the cluster entry (the name, wubo-cluster above, is only a label; what actually matters is the apiserver that --server points to). --certificate-authority sets the cluster's CA certificate; --embed-certs=true writes that certificate into the kubeconfig; --server is the address of the cluster's kube-apiserver.

The generated kubeconfig is saved to the ~/.kube/config file.

User parameters

This part sets the user information, mainly the user certificate. In the example above the user name is wubo, the certificate is wubo.crt and the private key is wubo.key. Note that the client certificate must be signed by the cluster CA, otherwise the cluster will not accept it. Here we use certificate authentication, but token authentication is also possible; for example, the kubelet's TLS bootstrapping mechanism uses tokens. Since kubectl above uses certificate authentication, no token field is needed.
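For comparison, a token-based credential entry would be created like this (a minimal sketch; the token value is an illustrative placeholder):

kubectl config set-credentials wubo-token --token=kubeconfig-user-xxxxx:0000000000   # placeholder token, not a real credential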

Context parameters

Several cluster entries and user entries can be defined side by side; a context is what ties one cluster entry to one user entry. In the generic example from the docs, a context named kubernetes accesses the default namespace of the kubernetes cluster with the admin user's credentials; --namespace can be added to select a different namespace.

Finally, kubectl config use-context wubo@wubo-cluster makes that context the active configuration. If several contexts are configured, you can reach different cluster environments simply by switching between context names.

Notes

A user in a kubeconfig still has to be authorized (e.g. via RBAC). In the example above the certificate's O field (the group) is wubo and the CN (the user) is also wubo; binding that group or user to the cluster-admin ClusterRole (the pattern used by the predefined cluster-admin ClusterRoleBinding) grants the permission to call the kube-apiserver APIs.

Kubernetes supports several authorization mechanisms; the most common one is RBAC, role-based access control.

  • a user belongs to one or more roles
  • a role owns certain operations (verbs)
  • those operations are bound to certain resources

So the end result is: a given user can perform certain operations on certain resources. Below we take the wubo user created above as an example; the commands grant it cluster-wide rights, and a narrower, namespace-scoped variant (only viewing pods and exec-ing into them in the default namespace) is sketched right after the command block.

# Create a cluster role wubo-clusterrole with every verb on every resource, in every namespace
[root@localhost aaa]# kubectl create clusterrole wubo-clusterrole --verb="*" --resource="*"
clusterrole.rbac.authorization.k8s.io/wubo-clusterrole created
# Bind the wubo user to that cluster role
[root@localhost aaa]# kubectl create clusterrolebinding wubo-admin-cluseter --clusterrole=wubo-clusterrole --user=wubo
clusterrolebinding.rbac.authorization.k8s.io/wubo-admin-cluseter created
# The system also ships a cluster-admin role that can be used directly:
kubectl create clusterrolebinding wubo-admin-cluseter --clusterrole=cluster-admin --user=wubo
[root@localhost aaa]# kubectl get clusterrole | grep wubo
wubo-clusterrole          2m14s
[root@localhost aaa]# kubectl get clusterrolebinding | grep wubo
curl-wubo-admin-binding   35d
sa-wubo-cluster-admin     34d
wubo-admin-cluseter       61s
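If you only want the narrower permission mentioned above (view pods and exec into them in the default namespace) instead of cluster-wide rights, a minimal sketch looks like this; the role and binding names are made up for illustration:

kubectl create role pod-viewer --verb=get,list,watch --resource=pods -n default
kubectl create role pod-execer --verb=create --resource=pods/exec -n default
kubectl create rolebinding wubo-pod-viewer --role=pod-viewer --user=wubo -n default
kubectl create rolebinding wubo-pod-execer --role=pod-execer --user=wubo -n default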

A few quick notes:

  • Besides Role and RoleBinding there are ClusterRole and ClusterRoleBinding; the difference is scope — the Cluster* variants apply to the whole cluster.
  • A RoleBinding can bind either a Role or a ClusterRole; when it binds a ClusterRole, the ClusterRole is effectively downgraded to namespace scope (see the example below).
  • The cluster ships with many built-in Roles/RoleBindings and ClusterRoles/ClusterRoleBindings; reading their YAML is a good way to learn from them.
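A quick illustration of the second point, using the built-in view ClusterRole (the binding name is arbitrary):

kubectl create rolebinding wubo-view-default --clusterrole=view --user=wubo -n default   # view rights limited to the default namespace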


Cleanup:

kubectl config delete-context wubo@wubo-cluster

A universal method

When a single-node Rancher goes down you can no longer log in to the UI to operate the K8S cluster. In that case you can temporarily generate a cluster credentials file and use it to operate the cluster.

By default, when we use Rancher we do not configure anything at all on the Kubernetes master hosts, so once Rancher crashes the master nodes cannot run any commands against the cluster. What then?

Solution
Here is one way to solve the problem described above.

As you know, the "custom" clusters that Rancher Server creates through the UI are provisioned by RKE under the hood, so RKE (https://docs.rancher.cn/rke/) is able to take over a "custom" cluster created by Rancher Server; alternatively you can skip RKE and just use the kubectl tool.

(1) The RKE approach

Creating and managing a Kubernetes cluster with RKE relies on 3 files:

cluster.yml: the RKE cluster configuration file

kube_config_cluster.yml: contains the credentials that grant full access to the cluster

cluster.rkestate: the Kubernetes cluster state file, which also contains the credentials that grant full access to the cluster

So as long as these 3 files can be recovered from the downstream cluster, the RKE binary can keep managing it. Below we look at how to take over a "custom" cluster created by Rancher Server with RKE, and how to use RKE to extend the cluster with new nodes.
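As a rough sketch, once the three files have been recovered into one working directory (and an rke binary matching the cluster's Kubernetes version is available):

./rke up --config cluster.yml   # reconciles the cluster; to add a node, first add it under "nodes:" in cluster.yml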

This procedure was only tested against Rancher v2.4.x and v2.5.x; other versions may behave differently.

For a clearer demonstration, the walkthrough starts from a "custom" cluster created in Rancher Server, then takes that cluster over with RKE, and finally, to prove that RKE really is in control, adds a node through RKE.

Simulation: the Rancher Server service goes down

If only the Rancher Server service is down, the workloads running in the cluster keep serving traffic normally.

Operate on a master host (or on any node that carries the controlplane role).

1. Create a directory to use as the working directory for recovering the cluster

mkdir /opt/tembak
cd /opt/tembak

2. Recover the downstream cluster's kube_config_cluster.yml file by running the following on a controlplane node:

# The file we need lives in /etc/kubernetes/ssl
[root@localhost ~]# cat /etc/kubernetes/ssl/kubecfg-kube-node.yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
    server: "https://127.0.0.1:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-node-local"
  name: "local"
current-context: "local"
users:
- name: "kube-node-local"
  user:
    client-certificate: /etc/kubernetes/ssl/kube-node.pem
    client-key: /etc/kubernetes/ssl/kube-node-key.pem
[root@localhost aaa]# docker run --rm --net=host -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro --entrypoint bash harbor.jettech.com/rancher/rancher-agent:v2.3.6 -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"' > kube_config_cluster.yml
[root@localhost aaa]# docker run --rm --net=host -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro --entrypoint bash harbor.jettech.com/rancher/rancher-agent:v2.3.6 -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"'
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQWFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEl4TVRJd01UQTFNell4TTFvWERUTXhNVEV5T1RBMU16WXhNMW93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5mV3NDNFJJV1NNCjlvS2dPYmF0RmZpMWN5bDcva25BSEpiRm5JTlNhS1hsVjY1dWV3VjlVK3Z2dWVKSU8vamt4cFg5MUk0U1N0RFgKdUdYMGZpbExQWGZYaEJYNHBwblVPaE4wekFsNCt6Ym5Ud1VsYW50TmhFdG0vanJXUjhLQkdWRy9NWjZBTERtbwpYVDVPOCs2eXVINDFxVkFLYnEycGY1SVVZNUpTTENZNzhCNzhPVFRGNXlrN2YvWERCcENLdHBXemtScG5Wd0pCCnlkOEJoRVFrbk04SFJBYzlJc0YzSnNKOHlodXZldWxFN1YvWUJyWGVzcmV3M1IvejhDakZJV0NjMnplWXpKOE8KQ2hmWGlVbmpIZmg0azlsWDNpcytVTWFrTml1ekFGS0VETzlEMDFzUllObGgzMm1EY0V4bEdwMm1BQWZFRWlnNwozUmd1OEkzb2xOa0NBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSkY3UjUydHBnZzBlZnV3NGI5VXBzZElNaEhEZkloc0tnc0wKSTBrV2tvTmh4dXZGT3QwblA1Z0pmemk4SzdNMHB4M2dsSEo2MmxTZ2YwbGpGTWxaV3pMSzdFblVuNUw1dUxsSwphQ3V0dFRadzA0NndkM09uWUdtd0tMazRINXI1WWcxUUdia3UyUG5FeVgvbTM3dVNPZUNhd2R4K2JFdnpjN09WCndIY3lKS2RldkhzS2xrd213SXFhOUpvcG44MUR4TkJ6YS9oa1JUdVR2WjBrZjNmVUxGN2ttcHYxZEVyS1JzMHgKMVViZW1FQTJBZUh5QWdFb0o3YzdxbkdSbXZNUTVUOGZXVjdvVXNsTEY2Mm4ydEQxUTRsYTg5QlZBRGtOMnpVNgpNeWdyMHVtM1dSaU5EZmRnZUNxVUR2cVBDTFBHdHlia2o4M0hwdDdMb2NpSzE0eW9OTUU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: "https://127.0.0.1:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-admin-local"
  name: "local"
current-context: "local"
users:
- name: "kube-admin-local"
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJRHF3VkY2dVRkUEl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHlNVEV5TURFd05UTTJNVE5hRncwek1URXhNamt3TmpVeU16WmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXgrSm9OTFdtVnltMExEZ1pUcjBLRXcvSmJ5NkMKcDUvUWN5dXovQ0xTS2t2Z0p5ckZPanZIZkJqVmR4MlZxQWNncDFkWHVldzFrTFBlSVp2SUNwcUNlUVovUldHYwpWcmZjWjZNQ3FaU2ZRVFJFZUtjRDV1Q0FPWDVoUVI2cndNWGZJMnBiZ1FISjJaSlVIWXBpbnBsQVh4bG5qQ0NxCnhjaVNVMUlIby9IcnYvQTA5N2RRU1dXS3hRN2VRbmlVS3U3YUZPRVdvdFV2b2dvTWpYQkRHb2wyNVo1dCswei8KNEI4TmlJNWxFb3lYakI5REx5VlJsdjJyNFRKMFNZZUVxT002aHREMkloZmEyclRiSitLei9JRTRrSjZUUGtCZApaSlZ6KzBJckpuc3J3UHQ2dHZuMXo4dlhnQk9SaXdsR0YvK0djMTl4QUowdTMwZThVU00rNXdGMDVRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFCZk9tZWRiVFhFVUwwTzFHdWZnMVNqVFhab3RVSGdpSHBkazFaZnFFVkFtcUMxeApaNzZNdkV2Qy9SRk1xZnRFZUdYS0NnLzl5U2tOSkNibnpualBWcHdmNUJrVDlCWkRLckEyTnMvd0FObHlKRFVqCnFobjk1VkpON1lWbGRFbHpUZTA2blo1dWk1ZkczTHkrQkxXL0pWci9nZVV2eWxTL0J2WEwrZ1VXSkxoZmpWOWsKeEtFbWdjT01QNFlKcUF3UGl6amlCdEJPZ09CdGpaeXhOZ2t0VmhpTExCcWIydXNVOUl5NTJ0SVlGVGRVYUlCYgpraFlDZjMzSlRoNk42bjFnMy91WllnVURsRVU0SjJ3QjJxVFlRa1RrK2I4M3gwUDVQK2N0QitRcnA4dld5UUhGClgvdTFkeGpXZld6L1dKSHcwZUhPLzB4dWlCS1RWMW1zTWZqZkFZOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeCtKb05MV21WeW0wTERnWlRyMEtFdy9KYnk2Q3A1L1FjeXV6L0NMU0trdmdKeXJGCk9qdkhmQmpWZHgyVnFBY2dwMWRYdWV3MWtMUGVJWnZJQ3BxQ2VRWi9SV0djVnJmY1o2TUNxWlNmUVRSRWVLY0QKNXVDQU9YNWhRUjZyd01YZkkycGJnUUhKMlpKVUhZcGlucGxBWHhsbmpDQ3F4Y2lTVTFJSG8vSHJ2L0EwOTdkUQpTV1dLeFE3ZVFuaVVLdTdhRk9FV290VXZvZ29NalhCREdvbDI1WjV0KzB6LzRCOE5pSTVsRW95WGpCOURMeVZSCmx2MnI0VEowU1llRXFPTTZodEQySWhmYTJyVGJKK0t6L0lFNGtKNlRQa0JkWkpWeiswSXJKbnNyd1B0NnR2bjEKejh2WGdCT1Jpd2xHRi8rR2MxOXhBSjB1MzBlOFVTTSs1d0YwNVFJREFRQUJBb0lCQUV4aXpIbmdOVW80Q0wraApUS0tYZ1lNWlZGeGx4TTUwTjMvYjRyTm5Sek9jdlhPYVY3YlNZNENjS08rVlliek54SC9PMUJxY0Z6aE9WSVE1CmVTLzhMZ0k4Sm1VSVVXdWVaZDlCSDJKWkJxY3ZaejlJYkNoT0FSSjNwb2p4UktldHRvRmRRc3pCTnpjclFYUHMKajVXV2NWQW1jRGpQdnhOSWZBclZYVkFjd29BZGlibUUwVzVzMmFqTkE5MllMbE4vZFRFZTd3dW0rQWRCZWNTTwpERHcwcEx0a1U1UlZBTk9uaVNpdXpQNU1IQWp6RGxidWZkMmJoTFFCeEJiWVJnMVdoMkVtV1kvcnNyQ25BRTFoCkVLUm0wb0JJTjREbUxhcXNQM2RBTnN1Q0lxN0ZpNklGNkdUbEN5cUJnbnU3S2xHL3JXbEFSZ0lOVWRKczdlUWMKZ0VzckliMENnWUVBN1RhNkFxNWFhbXpLem5hRm8zSWpnaGx3VlE5aGlwY1cydDdRRFV6c2YzNVpoMFhNc25CUwppVnVwbTlGVm9vZ2RqQW8xVktNS080RzJHaTJWMjc1a2VPeXFzWFlaRmFsM2k0S2djc2gveFdqQjZqWW5YcDZXCkE1bWtFdmNFdGRRVXJTQUw0bUFxNVVpOHlaZjNLTmZLVmdjc3psUzFYYVlEaVJWL0J4a2JBVnNDZ1lFQTE3YmQKVWZ6NzVUSXB2cGNOUGNhTGFTWUlWejZ4cnMvMHVpRkY3MlRxSHFUTmRrdVpkTXNhME5DUCtIcEFDbXRJNm9kRgpYd2MyK0V0aW9pcEttVm83VThZNlVOVXlJS2x3UHUzcEFucUhxN0M0eTN6S2FuMThsS1I0V0h0cmovM09SdjcxCnZPUkxKOWZoL1RqV054VG91dDFmNU4rYy94Y3pTbFZZUkhFSzlyOENnWUVBeTEwa29SSEtyL3l1N2N3TWkvQloKWXJyZVkvMzR1TEVKUmdESlN1M012d3lhUW05anF3TENyOEdtcWRBUVkzUGdLT1BEanRqcjk5SWZSVmdaWnJkVwpPWmxrU1JtZkxjUUltZEVXTHZHWElLM0x1VGhPRGo5VkNxY1lVNjMwR3RKRUc1d2l0Q09RQXR1V0JocERLWCsrCmxudzJQSG5BdHhXUmFGL0dkRlpnb1lzQ2dZQXdQWFRCSVJJejcwUG1tMkVhcjR2OXQ4T2x2eDk5T0lSQ0c2N0kKR29sQTBSb2hta1ozRi9TblBmejBWR0o5OGdBY2NxUFEzSXd1ZXExVUZxRVlLbFdhSm5wa0dVbGNoSWZWaXQ3UQo3eFhvRDExRUpHUWY3SEF2elpnY01YMmNkZVhyZXBqNTVSUHBsUjIwdzBFa2tFaDdnWVl3YU5Gek9uejk0cGdhCnRpejlnUUtCZ1FDY3NrWlFuMVg4MlNCY0ZPUDJNN0pWN0NpaW9VVm5LRXZZS3RhMmhHbDlVWWlvQkRzd0FxVzUKdFJIYVRGUi9NNzlRd21RTktnNVRUZm45eGVHakJtT1I4RnhhaVVKV09EWGpvZ2FEZzAyQ2RyQUxpbW1UV3k4UAp3cTZQYzZHVGtTN2dYTVlEYnFxc1o0RWs2OWxvT3ZBNVF2WTB4c0I0Sm03WUFjWEJHL2hvNEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

1. kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml: this kubeconfig carries the cluster connection information.

1.1 View the cluster information through this kubeconfig

[root@localhost ~]# kubectl config --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
    server: https://127.0.0.1:6443
  name: local
contexts:
- context:
    cluster: local
    user: kube-node-local
  name: local
current-context: local
kind: Config
preferences: {}
users:
- name: kube-node-local
  user:
    client-certificate: /etc/kubernetes/ssl/kube-node.pem
    client-key: /etc/kubernetes/ssl/kube-node-key.pem
[root@localhost ~]# kubectl config --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get-contexts
CURRENT   NAME    CLUSTER   AUTHINFO          NAMESPACE
*         local   local     kube-node-local
[root@localhost ~]# kubectl config --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get-clusters
NAME
local

1.2 List the configmaps in the kube-system namespace; in other words, ordinary operations against the cluster work:

[root@localhost ~]# kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system
NAME                                 DATA   AGE
calico-config                        4      50d
coredns                              1      50d
coredns-autoscaler                   1      50d
extension-apiserver-authentication   6      50d
full-cluster-state                   1      50d
rke-coredns-addon                    1      50d
rke-ingress-controller               1      50d
rke-metrics-addon                    1      50d
rke-network-plugin                   1      50d

1.3 Building on 1.2, fetch the full-cluster-state configmap as JSON (-o json):

[root@localhost ~]# kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap full-cluster-state -o json  -n kube-system

1.4 Rewrite the server address to the local address 127.0.0.1:

kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap   -n kube-system full-cluster-state   -o json | jq -r .data.\"full-cluster-state\" | jq   -r .currentState.certificatesBundle.\"kube-admin\".config | sed   -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"
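If you redirect the output of the command above into a file (kube_config_cluster.yml is the name used elsewhere in this article), a quick sanity check looks like:

kubectl --kubeconfig kube_config_cluster.yml get nodes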

1.5 Step 1.4 is actually already sufficient. Step 1.5 only runs the same extraction inside a freshly started rancher-agent container that mounts the k8s cluster's ssl directory to find the kubeconfig:

docker run --rm --net=host -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro  --entrypoint bash harbor.jettech.com/rancher/rancher-agent:v2.3.6 -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap   -n kube-system full-cluster-state   -o json | jq -r .data.\"full-cluster-state\" | jq   -r .currentState.certificatesBundle.\"kube-admin\".config | sed   -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"'

(2) The kubectl approach: copy the kube_config_cluster.yml generated above into place:

[root@jettoloader ~]# cp kube_config_cluster.yml ~/.kube/config
[root@localhost aaa]# kubectl get nodes
NAME           STATUS     ROLES                      AGE   VERSION
172.16.10.15   NotReady   worker                     50d   v1.17.4
172.16.10.87   NotReady   controlplane,etcd,worker   50d   v1.17.4

Next, simply import this cluster into Rancher as an external cluster. A few things to watch when importing:

1. In Rancher create a new cluster --> choose the Import option --> then download the registration manifest (for internal/air-gapped networks):

[root@localhost ~]# wget --no-check-certificate  https://172.16.10.87:8443/v3/import/qdb2r7whtc9zcprltckr6bvdzd7kvtmdc8lz92g8x6fcjjmr88c66b.yaml

and modify

image: rancher/rancher-agent:v2.3.6   # change this to the image in your internal private registry

The import page offers no operation for adding new nodes.

Log in and create an API key

In Rancher 1.x authentication was not enabled by default: after starting the rancher/server container, anyone could reach the API / UI without credentials. In Rancher 2.0 authentication is enabled with a managed default username and password. After logging in we receive a bearer token, which we can use to change the password. After changing the password we create an API key for the remaining requests. The API key is also a bearer token, intended for automation.

Below is how to add a new node from the command line.

Log in

# Rancher username and password
[root@localhost ~]# LOGINRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3-public/localProviders/local?action=login' -H 'content-type: application/json' --data-binary '{"username":"admin","password":"123456aA"}' --insecure)
[root@localhost ~]# echo $LOGINRESPONSE
{"authProvider":"local","baseType":"token","clusterId":null,"created":"2022-01-21T02:45:55Z","createdTS":1642733155000,"creatorId":null,"current":false,"description":"","enabled":true,"expired":false,"expiresAt":"","groupPrincipals":null,"id":"token-wtpqv","isDerived":false,"labels":{"authn.management.cattle.io/kind":"session","authn.management.cattle.io/token-userId":"user-9hzqx","cattle.io/creator":"norman"},"lastUpdateTime":"","links":{"self":"https://172.16.10.87:8443/v3-public/tokens/token-wtpqv"},"name":"token-wtpqv","token":"token-wtpqv:dspwbm4p889gnx5482wwhszm2rngmkflpxlcj9rmjtkdjxst7qj4x5","ttl":57600000,"type":"token","userId":"user-9hzqx","userPrincipal":"map[displayName:Default Admin loginName:admin me:true metadata:map[creationTimestamp:\u003cnil\u003e name:local://user-9hzqx] principalType:user provider:local]","uuid":"4150599c-7a64-11ec-b842-0242ac110002"}
# Extract the token
[root@localhost ~]# LOGINTOKEN=$(echo $LOGINRESPONSE | jq -r .token)
[root@localhost ~]# echo $LOGINTOKEN
token-wtpqv:dspwbm4p889gnx5482wwhszm2rngmkflpxlcj9rmjtkdjxst7qj4x5

Change the password (in this example the new password becomes 123456789):

[root@localhost ~]# curl -s 'https://172.16.10.87:8443/v3/users?action=changepassword' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"currentPassword":"123456aA","newPassword":"123456789"}' --insecure

Create an API key

[root@localhost ~]# APIRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/token' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"type":"token","description":"automation"}' --insecure)
[root@localhost ~]# echo $APIRESPONSE
{"authProvider":"local","baseType":"token","clusterId":null,"created":"2022-01-21T02:53:57Z","createdTS":1642733637000,"creatorId":null,"current":false,"description":"","enabled":true,"expired":false,"expiresAt":"","groupPrincipals":null,"id":"token-ks7fs","isDerived":true,"labels":{"authn.management.cattle.io/token-userId":"user-9hzqx","cattle.io/creator":"norman"},"lastUpdateTime":"","links":{"remove":"https://172.16.10.87:8443/v3/tokens/token-ks7fs","self":"https://172.16.10.87:8443/v3/tokens/token-ks7fs","update":"https://172.16.10.87:8443/v3/tokens/token-ks7fs"},"name":"token-ks7fs","token":"token-ks7fs:c4hnkt5mlrvjtll4w9bfpwqhgt22r82xtlsl7tqmjqvsn9v6pqszzd","ttl":0,"type":"token","userId":"user-9hzqx","userPrincipal":"map[displayName:Default Admin loginName:admin me:true metadata:map[creationTimestamp:\u003cnil\u003e name:local://user-9hzqx] principalType:user provider:local]","uuid":"6102c7d9-7a65-11ec-b842-0242ac110002"}
[root@localhost ~]# APITOKEN=$(echo $APIRESPONSE | jq -r .token)
[root@localhost ~]# echo $APITOKEN
token-ks7fs:c4hnkt5mlrvjtll4w9bfpwqhgt22r82xtlsl7tqmjqvsn9v6pqszzd

Create a cluster

Once the API key has been generated, you can start creating clusters. When creating a cluster you have 3 options:

› Launch a cloud cluster (Google Kubernetes Engine / GKE)

› Create a cluster (with our own Kubernetes installer, Rancher Kubernetes Engine)

› Import an existing cluster (if you already have a Kubernetes cluster, you can import it by supplying its kubeconfig file)

In this article we create a cluster using Rancher Kubernetes Engine (rke). When creating a cluster you can either have new nodes created for you right away (through a cloud provider such as DigitalOcean / Amazon) or use existing nodes and let Rancher reach them over SSH. The method discussed here (adding nodes by running a docker run command) only becomes available after the cluster has been created.

You can create the cluster (your new cluster) with the following command. As you can see, only the parameter ignoreDockerVersion is set (ignore Docker versions not supported by Kubernetes); everything else uses the defaults, which we will look at in later articles. Until then you can explore the configurable options through the UI.

[root@localhost ~]# CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"cluster","nodes":[],"rancherKubernetesEngineConfig":{"ignoreDockerVersion":true},"name":"wubo"}' --insecure)
[root@localhost ~]# echo $CLUSTERRESPONSE
{"actions":{"backupEtcd":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=backupEtcd","enableMonitoring":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=enableMonitoring","exportYaml":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=exportYaml","generateKubeconfig":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=generateKubeconfig","importYaml":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=importYaml","restoreFromEtcdBackup":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=restoreFromEtcdBackup","rotateCertificates":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=rotateCertificates","runSecurityScan":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=runSecurityScan","saveAsTemplate":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=saveAsTemplate"},"annotations":{},"appliedEnableNetworkPolicy":false,"baseType":"cluster","clusterTemplateId":null,"clusterTemplateRevisionId":null,"conditions":[{"status":"True","type":"Pending"},{"status":"Unknown","type":"Provisioned"},{"status":"Unknown","type":"Waiting"}],"created":"2022-01-21T02:56:46Z","createdTS":1642733806000,"creatorId":"user-9hzqx","defaultClusterRoleForProjectMembers":null,"defaultPodSecurityPolicyTemplateId":null,"dockerRootDir":"/var/lib/docker","enableClusterAlerting":false,"enableClusterMonitoring":false,"enableNetworkPolicy":false,"id":"c-4dpq4","internal":false,"istioEnabled":false,"labels":{"cattle.io/creator":"norman"},"links":{"apiServices":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/apiservices","clusterAlertGroups":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusteralertgroups","clusterAlertRules":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusteralertrules","clusterAlerts":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusteralerts","clusterCatalogs":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clustercatalogs","clusterLoggings":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterloggings","clusterMonitorGraphs":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clustermonitorgraphs","clusterRegistrationTokens":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterregistrationtokens","clusterRoleTemplateBindings":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterroletemplatebindings","clusterScans":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterscans","etcdBackups":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/etcdbackups","namespaces":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/namespaces","nodePools":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/nodepools","nodes":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/nodes","notifiers":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/notifiers","persistentVolumes":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/persistentvolumes","projects":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/projects","remove":"https://172.16.10.87:8443/v3/clusters/c-4dpq4","self":"https://172.16.10.87:8443/v3/clusters/c-4dpq4","shell":"wss://172.16.10.87:8443/v3/clusters/c-4dpq4?shell=true","storageClasses":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/storageclasses","subscribe":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/subscribe","templates":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/templates","tokens":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/tokens","update":"https://172.16.10.87:8443/v3/clusters/c-4dpq4"},"name":"wubo","rancherKubernetesEngineConfig":{"addonJobTimeout":30,"ignoreDockerVersion":true,"kubernetesVersion":"v1.17.4-rancher1-2","services":{"etcd":{"backupConfig":{"enabled":true,"intervalHours":12,"retention":6,"s3BackupConfig":null,"safeTimestamp":false,"type":"/v3/schemas/backupConfig"},"creation":"12h","extraArgs":{"election-timeout":"5000","heartbeat-interval":"500"},"gid":0,"retention":"72h","snapshot":false,"type":"/v3/schemas/etcdService","uid":0},"type":"/v3/schemas/rkeConfigServices"},"sshAgentAuth":false,"type":"/v3/schemas/rancherKubernetesEngineConfig"},"state":"provisioning","transitioning":"yes","transitioningMessage":"","type":"cluster","uuid":"c5b129a2-7a65-11ec-b842-0242ac110002","windowsPreferedCluster":false}
# Cluster ID
[root@localhost ~]# CLUSTERID=$(echo $CLUSTERRESPONSE | jq -r .id)
[root@localhost ~]# echo $CLUSTERID
c-96t4g

Note that extra parameters can be passed here, e.g. whether the image registry is private (and if so its address, user and password), the network plugin, the usable NodePort range, and so on.

With extra parameters: a private registry

[root@localhost ~]# cat args
{
  "name": "wubo",
  "type": "cluster",
  "nodes": [],
  "rancherKubernetesEngineConfig": {
    "ignoreDockerVersion": "true",
    "private_registries": {
      "is_default": "true",
      "password": "Harbor12345",
      "url": "harbor.jettech.com",
      "user": "admin"
    }
  }
}
[root@localhost ~]# CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary "$(cat args)" --insecure)
[root@localhost ~]# CLUSTERID=$(echo $CLUSTERRESPONSE | jq -r .id)
[root@localhost ~]# echo $CLUSTERID
c-t6tdv

This is the cluster template Rancher generates when creating a cluster; you can use it as a reference when converting the settings into JSON.

#
# Cluster Config
#
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
name: wubo
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
  #
  # # Currently only the nginx ingress controller is supported
  # # Set `provider: none` to disable the ingress controller
  # # node_selector can be used to run the ingress controller on specific nodes, e.g.:
  #   provider: nginx
  #   node_selector:
  #     app: ingress
  #
  ingress:
    provider: nginx
  kubernetes_version: v1.17.4-rancher1-2
  monitoring:
    provider: metrics-server
  #
  # # If you run calico on AWS
  #
  #   network:
  #     plugin: calico
  #     calico_network_provider:
  #       cloud_provider: aws
  #
  # # To pick the flannel network interface
  #
  #   network:
  #     plugin: flannel
  #     flannel_network_provider:
  #       iface: eth1
  #
  # # To pick the flannel interface used by the canal network plugin
  #
  #   network:
  #     plugin: canal
  #     canal_network_provider:
  #       iface: eth1
  #
  network:
    mtu: 0
    options:
      flannel_backend_type: vxlan
    plugin: canal
  private_registries:
  - is_default: true
    password: Harbor12345
    url: harbor.jettech.com
    user: admin
  #
  # # Custom service arguments, Linux only
  #   services:
  #     kube-api:
  #       service_cluster_ip_range: 10.43.0.0/16
  #       extra_args:
  #         watch-cache: true
  #     kube-controller:
  #       cluster_cidr: 10.42.0.0/16
  #       service_cluster_ip_range: 10.43.0.0/16
  #       extra_args:
  #         # Per-node subnet size (CIDR mask length); default 24 = 254 usable IPs; 23 = 510; 22 = 1022
  #         node-cidr-mask-size: 24
  #         # How often the controller checks node status; default 5s
  #         node-monitor-period: '5s'
  #         # After node communication fails, how long Kubernetes waits before marking the node NotReady; must be N times the kubelet's nodeStatusUpdateFrequency (default 10s), where N is the allowed number of status-sync retries; default 40s
  #         node-monitor-grace-period: '20s'
  #         # After communication keeps failing for this long, Kubernetes marks the node unhealthy; default 1m0s
  #         node-startup-grace-period: '30s'
  #         # After the node stays unreachable for this long, Kubernetes starts evicting its Pods; default 5m0s
  #         pod-eviction-timeout: '1m'
  #     kubelet:
  #       cluster_domain: cluster.local
  #       cluster_dns_server: 10.43.0.10
  #       # Extra arguments
  #       extra_args:
  #         # Burst when talking to the apiserver; default 10
  #         kube-api-burst: '30'
  #         # QPS when talking to the apiserver; default 5
  #         kube-api-qps: '15'
  #         # Maximum number of Pods per node
  #         max-pods: '250'
  #         # How often secrets and configmaps are synced to Pods; default one minute
  #         sync-frequency: '3s'
  #         # By default the kubelet pulls one image at a time; set to false to pull images in parallel (requires the overlay2 storage driver and a matching Docker download concurrency)
  #         serialize-image-pulls: false
  #         # Maximum burst of image pulls; only effective when registry-qps is greater than 0 (default 10); if registry-qps is 0 the rate is unlimited (default qps 5)
  #         registry-burst: '10'
  #         registry-qps: '0'
  #         # The following settings configure node resource reservation and limits
  #         cgroups-per-qos: 'true'
  #         cgroup-driver: cgroupfs
  #         # The next two parameters state how much to reserve for system/kube services; used only for scheduling, not enforced
  #         system-reserved: 'memory=300Mi'
  #         kube-reserved: 'memory=2Gi'
  #         enforce-node-allocatable: 'pods'
  #         # Hard eviction thresholds: when available resources fall below these values, Pods are force-killed without waiting for graceful exit
  #         eviction-hard: 'memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%'
  #         # Soft eviction thresholds
  #         ## The next four parameters work together: when available resources drop below these values but stay above the hard thresholds, wait for eviction-soft-grace-period;
  #         ## during the wait the condition is re-checked every 10s; if the last check still exceeds the soft threshold, eviction starts: the Pod first receives a stop signal and is given eviction-max-pod-grace-period to exit;
  #         ## after eviction-max-pod-grace-period, Pods that have not exited are force-killed
  #         eviction-soft: 'memory.available<500Mi,nodefs.available<50%,imagefs.available<50%,nodefs.inodesFree<10%'
  #         eviction-soft-grace-period: 'memory.available=1m30s'
  #         eviction-max-pod-grace-period: '30'
  #         ## A node under eviction pressure is unschedulable; this is the transition period applied after the node returns to normal
  #         eviction-pressure-transition-period: '5m0s'
  #       extra_binds:
  #         - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
  #         - "/etc/iscsi:/etc/iscsi"
  #         - "/sbin/iscsiadm:/sbin/iscsiadm"
  #     etcd:
  #       # Set the space quota to $((4*1024*1024*1024)); default 2G, maximum 8G
  #       extra_args:
  #         quota-backend-bytes: '4294967296'
  #         auto-compaction-retention: 240 # (hours)
  #     kubeproxy:
  #       extra_args:
  #         # iptables is used for forwarding by default
  #         proxy-mode: "" # set to `ipvs` to enable IPVS
  #
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_node_port_range: 30000-32767
    kubelet:
      cluster_domain: jettech.com
  ssh_agent_auth: false
windows_prefered_cluster: false

After running these commands you should see your new cluster in the UI. Since no nodes have been added yet, the cluster state will be "Waiting for node to be provisioned or waiting for a valid configuration".

Assemble the docker run command to start rancher/agent

The last part of adding a node is starting the rancher/agent container, which joins the node to the cluster. For that we need:

› The agent image coupled to the Rancher version

› The node roles (etcd and/or controlplane and/or worker)

› An address at which the rancher/server container can be reached

› The cluster token the agent uses to join the cluster

› The checksum of the CA certificate

The agent image can be retrieved from the settings endpoint of the API:

[root@localhost ~]# AGENTIMAGE=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/agent-image --insecure | jq -r .value)
[root@localhost ~]# echo $AGENTIMAGE
rancher/rancher-agent:v2.3.6

The node roles are up to you (in this example we will use all three):

ROLEFLAGS="--etcd --controlplane --worker"

The address at which the rancher/server container can be reached should be self-explanatory; rancher/agent will connect to this endpoint.

[root@localhost ~]# RANCHERSERVER="https://172.16.10.87:8443"

The cluster token can be retrieved from the cluster we created. We saved the cluster id in CLUSTERID and can now use it to generate a token.

[root@localhost ~]# AGENTTOKEN=$(curl -s 'https://172.16.10.87:8443/v3/clusterregistrationtoken' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"clusterRegistrationToken","clusterId":"'$CLUSTERID'"}' --insecure | jq -r .token)
[root@localhost ~]# echo $AGENTTOKEN
48v2k7gw2vng7zwh9n9lj4m78vplf2h4s4zqhtglzm74gv6pgxrfmh

The CA certificate is also stored in the API and can be retrieved as shown below; piping it through sha256sum produces the checksum we need in order to join the cluster.

[root@localhost ~]# CACHECKSUM=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/cacerts --insecure | jq -r .value | sha256sum | awk '{ print $1 }')
[root@localhost ~]# echo $CACHECKSUM
c1bfa78ba60bad860216f7757c2a045c94b6d5191ff4268add37dede20265330

All the data needed to join the cluster is now available; we only have to assemble the command.

[root@localhost ~]# AGENTCOMMAND="docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host $AGENTIMAGE $ROLEFLAGS --server $RANCHERSERVER --token $AGENTTOKEN --ca-checksum $CACHECKSUM"
# Run the assembled command
[root@localhost ~]#
Unable to find image 'rancher/rancher-agent:v2.3.6' locally
v2.3.6: Pulling from rancher/rancher-agent
Digest: sha256:4913a649dcad32fd0a48ab6442192f441b573f76e22db316468690f269ac5d00
Status: Downloaded newer image for rancher/rancher-agent:v2.3.6

The final command (echo $AGENTCOMMAND) should look something like this:

docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.6 --server https://172.16.10.87:8443 --token qf7tsj22qjhq5px2x25m29r2b57lrgzmpdmzzcbcqz2kcg6vkfr42z --ca-checksum c1bfa78ba60bad860216f7757c2a045c94b6d5191ff4268add37dede20265330 --node-name 172.16.10.87 --etcd --controlplane --worker --label diskType=dev --taints NoDiskType=NoDev:NoSchedule
[root@localhost ~]# echo $AGENTCOMMAND
docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host harbor.jettech.com/rancher/rancher-agent:v2.3.6 --etcd --controlplane --worker --server https://172.16.10.87:8443 --token vnsg4c7hbvzkt9t44z6pkxzk2r95z678jps9fhmdctwlph7bkhqpws --ca-checksum 275e0ea1c549581e0b612c77269f3cab25e4d6375512ff358d9cd01b757220b1
[root@localhost ~]# docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host harbor.jettech.com/rancher/rancher-agent:v2.3.6 --etcd --controlplane --worker --server https://172.16.10.87:8443 --token vnsg4c7hbvzkt9t44z6pkxzk2r95z678jps9fhmdctwlph7bkhqpws --ca-checksum 275e0ea1c549581e0b612c77269f3cab25e4d6375512ff358d9cd01b757220b1
421c030ad7bd034a02e98baaa41dc211cb0d140e1f7e258d43e6d64622ce05aa

After running this command on the node, you should see it join the cluster and get provisioned by Rancher.
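A simple way to confirm this from a controlplane node:

kubectl get nodes -w
# and on the new node, follow the agent's registration progress, e.g.:
# docker logs -f <container id printed by the docker run above>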

Pro tip: these tokens can also be used directly for basic authentication, for example:

curl -u $APITOKEN  https://172.16.10.87:8443/v3/settings --insecure

This is a first step towards automating Rancher 2.0 Tech Preview 2.

Rancher 2.0 Tech Preview 3 is about to be released.

Everything above, put together:

[root@localhost work]# cat wubo
# 1. Log in
LOGINRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3-public/localProviders/local?action=login' -H 'content-type: application/json' --data-binary '{"username":"admin","password":"123456aA"}' --insecure)
# Login token
LOGINTOKEN=$(echo $LOGINRESPONSE | jq -r .token)
# 2. Change the password
curl -s 'https://172.16.10.87:8443/v3/users?action=changepassword' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"currentPassword":"123456aA","newPassword":"wuqi413l"}' --insecure
# 3. Create an API key
APIRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/token' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"type":"token","description":"automation"}' --insecure)
APITOKEN=$(echo $APIRESPONSE | jq -r .token)
# 4. Create the cluster
CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"cluster","nodes":[],"rancherKubernetesEngineConfig":{"ignoreDockerVersion":true},"name":"wubo"}' --insecure)
CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary "$(cat args)" --insecure)
CLUSTERID=$(echo $CLUSTERRESPONSE | jq -r .id)
# 5. The agent image can be retrieved from the settings endpoint of the API:
AGENTIMAGE=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/agent-image --insecure | jq -r .value)
AGENTIMAGE=harbor.jettech.com/rancher/rancher-agent:v2.3.6
# 6. The node roles are up to you (here we use all three)
ROLEFLAGS="--etcd --controlplane --worker"
# 7. The address at which the rancher/server container can be reached; rancher/agent connects to this endpoint
RANCHERSERVER="https://172.16.10.87:8443"
# 8. The cluster token is retrieved from the created cluster; CLUSTERID holds the cluster id, which is used to generate a token
AGENTTOKEN=$(curl -s 'https://172.16.10.87:8443/v3/clusterregistrationtoken' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"clusterRegistrationToken","clusterId":"'$CLUSTERID'"}' --insecure | jq -r .token)
# 9. The CA certificate is also stored in the API; retrieve it and pipe it through sha256sum to get the checksum needed to join the cluster
CACHECKSUM=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/cacerts --insecure | jq -r .value | sha256sum | awk '{ print $1 }')
# 10. All the data needed to join the cluster is now available; assemble the command
AGENTCOMMAND="docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host $AGENTIMAGE $ROLEFLAGS --server $RANCHERSERVER --token $AGENTTOKEN --ca-checksum $CACHECKSUM"

Creating a custom cluster with scripts:

1. Create the custom cluster

Copy and save the following as a script, change the first few variables (api_url, api_token, cluster_name), then run it.

[root@localhost work]# cat ~/wubo/netranchercluster.sh
#!/bin/bash
api_url='https://xxx.domain.com'
api_token='token-5zgl2:tcj5nvfq67rf55r7xxxxxxxxxxx429xrwd4zx'
cluster_name=''
kubernetes_Version='v1.13.5-rancher1-2'
network_plugin='canal'
quota_backend_bytes=${quota_backend_bytes:-6442450944}
auto_compaction_retention=${auto_compaction_retention:-240}
ingress_provider=${ingress_provider:-nginx}
ignoreDocker_Version=${ignoreDocker_Version:-true}
monitoring_provider=${monitoring_provider:-metrics-server}
service_NodePort_Range=${service_NodePort_Range:-'30000-32767'}
create_Cluster=true
add_Node=true
create_cluster_data()
{
cat <<EOF
{
  "amazonElasticContainerServiceConfig": null,
  "azureKubernetesServiceConfig": null,
  "dockerRootDir": "/var/lib/docker",
  "enableClusterAlerting": false,
  "enableClusterMonitoring": false,
  "googleKubernetesEngineConfig": null,
  "localClusterAuthEndpoint": {
    "enabled": true,
    "type": "/v3/schemas/localClusterAuthEndpoint"
  },
  "name": "$cluster_name",
  "rancherKubernetesEngineConfig": {
    "addonJobTimeout": 30,
    "addonsInclude": [
      "https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yml"
    ],
    "authentication": {
      "strategy": "x509|webhook",
      "type": "/v3/schemas/authnConfig"
    },
    "authorization": {
      "type": "/v3/schemas/authzConfig"
    },
    "bastionHost": {
      "sshAgentAuth": false,
      "type": "/v3/schemas/bastionHost"
    },
    "cloudProvider": {
      "type": "/v3/schemas/cloudProvider"
    },
    "ignoreDockerVersion": "$ignoreDocker_Version",
    "ingress": {
      "provider": "$ingress_provider",
      "type": "/v3/schemas/ingressConfig"
    },
    "kubernetesVersion": "$kubernetes_Version",
    "monitoring": {
      "provider": "$monitoring_provider",
      "type": "/v3/schemas/monitoringConfig"
    },
    "network": {
      "options": {
        "flannel_backend_type": "vxlan"
      },
      "plugin": "$network_plugin",
      "type": "/v3/schemas/networkConfig"
    },
    "restore": {
      "restore": false,
      "type": "/v3/schemas/restoreConfig"
    },
    "services": {
      "etcd": {
        "backupConfig": {
          "enabled": true,
          "intervalHours": 12,
          "retention": 6,
          "s3BackupConfig": null,
          "type": "/v3/schemas/backupConfig"
        },
        "creation": "12h",
        "extraArgs": {
          "auto-compaction-retention": "$auto_compaction_retention",
          "election-timeout": "5000",
          "heartbeat-interval": "500",
          "quota-backend-bytes": "$quota_backend_bytes"
        },
        "retention": "72h",
        "snapshot": false,
        "type": "/v3/schemas/etcdService"
      },
      "kubeApi": {
        "alwaysPullImages": false,
        "podSecurityPolicy": false,
        "serviceNodePortRange": "$service_NodePort_Range",
        "type": "/v3/schemas/kubeAPIService"
      },
      "kubeController": {
        "extraArgs": {
          "node-monitor-grace-period": "20s",
          "node-monitor-period": "5s",
          "node-startup-grace-period": "30s",
          "pod-eviction-timeout": "1m"
        },
        "type": "/v3/schemas/kubeControllerService"
      },
      "kubelet": {
        "extraArgs": {
          "eviction-hard": "memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%",
          "kube-api-burst": "30",
          "kube-api-qps": "15",
          "kube-reserved": "memory=250Mi",
          "max-open-files": "2000000",
          "max-pods": "250",
          "network-plugin-mtu": "1500",
          "pod-infra-container-image": "rancher/pause:3.1",
          "registry-burst": "10",
          "registry-qps": "0",
          "serialize-image-pulls": "false",
          "sync-frequency": "3s",
          "system-reserved": "memory=250Mi"
        },
        "failSwapOn": false,
        "type": "/v3/schemas/kubeletService"
      },
      "kubeproxy": {
        "type": "/v3/schemas/kubeproxyService"
      },
      "scheduler": {
        "type": "/v3/schemas/schedulerService"
      },
      "type": "/v3/schemas/rkeConfigServices"
    },
    "sshAgentAuth": false,
    "type": "/v3/schemas/rancherKubernetesEngineConfig"
  }
}
EOF
}
curl -k -X POST \
  -H "Authorization: Bearer ${api_token}" \
  -H "Content-Type: application/json" \
  -d "$(create_cluster_data)" $api_url/v3/clusters

My version:

[root@localhost rancher]# cat ranchercluster-init.sh
#!/bin/bash
api_url='https://172.16.10.87:8443'
api_token='token-m5ngj:s8lkcxrlfdj5lccthw2w958hdvrcmpxnqrtrsh74rsksjtgjn69nkb'
cluster_name='wubo'
create_cluster_data()
{
cat <<EOF
{
  "name": "wubo",
  "type": "cluster",
  "nodes": [],
  "rancherKubernetesEngineConfig": {
    "ignoreDockerVersion": "true",
    "private_registries": {
      "is_default": "true",
      "password": "Harbor12345",
      "url": "harbor.jettech.com",
      "user": "admin"
    }
  }
}
EOF
}
curl -k -X POST \
  -H "Authorization: Bearer ${api_token}" \
  -H "Content-Type: application/json" \
  -d "$(create_cluster_data)" $api_url/v3/clusters

2. Generate the registration command (shown only for illustration; step 3 is the one you actually use)

Copy and save the following as a script, change the first few variables (api_url, api_token, cluster_name), then run it.

[root@localhost rancher]# cat b.sh
#!/bin/bash
api_url='https://172.16.10.87:8443'
api_token='token-m5ngj:s8lkcxrlfdj5lccthw2w958hdvrcmpxnqrtrsh74rsksjtgjn69nkb'
cluster_name='wubo'
# Get the cluster ID
cluster_ID=$( curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters | jq -r ".data[] | select(.name == \"$cluster_name\") | .id" )
# Generate the registration command
create_token_data()
{
cat <<EOF
{
  "clusterId": "$cluster_ID"
}
EOF
}
curl -k -X POST \
  -H "Authorization: Bearer ${api_token}" \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d "$(create_token_data)" $api_url/v3/clusterregistrationtokens

3. Get the node registration command

Copy and save the following as a script, change the first few variables (api_url, api_token, cluster_name), then run it.

[root@localhost rancher]# cat get.sh
#!/bin/bash
api_url='https://172.16.10.87:8443'
api_token='token-m5ngj:s8lkcxrlfdj5lccthw2w958hdvrcmpxnqrtrsh74rsksjtgjn69nkb'
cluster_name='wubo'
cluster_ID=$( curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters | jq -r ".data[] | select(.name == \"$cluster_name\") | .id" )
# nodeCommand
echo "================"
curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters/${cluster_ID}/clusterregistrationtokens | jq -r .data[].nodeCommand
# command
echo "================"
curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters/${cluster_ID}/clusterregistrationtokens | jq -r .data[].command
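Typical usage is then simply:

bash get.sh
# copy the printed nodeCommand and execute it on each node you want to add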

Reference: "API - 使用脚本创建自定义集群", Rancher v2.4.4 中文文档, 书栈网 · BookStack
