docker run -d --privileged=true --restart=unless-stopped -p 80:80 -p 8443:443 -v /opt/service/rancher:/var/lib/rancher harbor.jettech.com/rancher/rancher:v2.3.6
Cleanup
- [root@localhost ~]# cat clean.sh
- #!/bin/bash
-
- # Uninstall Rancher 2.x
-
- KUBE_SVC='
- kubelet
- kube-scheduler
- kube-proxy
- kube-controller-manager
- kube-apiserver
- '
- for kube_svc in ${KUBE_SVC};
- do
- # Stop the service
- if [[ `systemctl is-active ${kube_svc}` == 'active' ]]; then
- systemctl stop ${kube_svc}
- fi
- # Disable the service at boot
- if [[ `systemctl is-enabled ${kube_svc}` == 'enabled' ]]; then
- systemctl disable ${kube_svc}
- fi
- done
- # Stop all containers
- docker stop $(docker ps -aq)
- # Remove all containers
- docker rm -f $(docker ps -qa)
- # Remove all container volumes
- docker volume rm $(docker volume ls -q)
- # Unmount the kubelet/rancher mounts
- for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher;
- do
- umount $mount;
- done
- # Back up directories
- mv /etc/kubernetes /etc/kubernetes-bak-$(date +"%Y%m%d%H%M")
- mv /var/lib/etcd /var/lib/etcd-bak-$(date +"%Y%m%d%H%M")
- mv /var/lib/rancher /var/lib/rancher-bak-$(date +"%Y%m%d%H%M")
- mv /opt/rke /opt/rke-bak-$(date +"%Y%m%d%H%M")
- # Remove leftover paths
- rm -rf /etc/ceph \
- /etc/cni \
- /opt/cni \
- /run/secrets/kubernetes.io \
- /run/calico \
- /run/flannel \
- /var/lib/calico \
- /var/lib/cni \
- /var/lib/kubelet \
- /var/log/containers \
- /var/log/kube-audit \
- /var/log/pods \
- /var/run/calico
- # Clean up network interfaces
- no_del_net_inter='
- lo
- docker0
- eth
- ens
- bond
- '
- network_interface=`ls /sys/class/net`
- for net_inter in $network_interface;
- do
- if ! echo "${no_del_net_inter}" | grep -qE ${net_inter:0:3}; then
- ip link delete $net_inter
- fi
- done
- # Kill leftover processes
- port_list='
- 80
- 443
- 6443
- 2376
- 2379
- 2380
- 8472
- 9099
- 10250
- 10254
- '
- for port in $port_list;
- do
- pid=`netstat -atlnup | grep $port | awk '{print $7}' | awk -F '/' '{print $1}' | grep -v - | sort -rnk2 | uniq`
- if [[ -n $pid ]]; then
- kill -9 $pid
- fi
- done
- kube_pid=`ps -ef | grep -v grep | grep kube | awk '{print $2}'`
- if [[ -n $kube_pid ]]; then
- kill -9 $kube_pid
- fi
- # Flush the iptables tables
- ## Note: if this node has custom iptables rules, run the following commands with caution
- sudo iptables --flush
- sudo iptables --flush --table nat
- sudo iptables --flush --table filter
- sudo iptables --table nat --delete-chain
- sudo iptables --table filter --delete-chain
- systemctl restart docker
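After the script finishes, a quick sanity check helps confirm nothing survived before the node is re-provisioned. This is a sketch only; the port list mirrors a few of the ports from the script above:

```shell
# Report any leftover kube processes or listeners on key cluster ports.
leftover=0
if ps -ef | grep -q '[k]ube'; then        # [k]ube avoids matching grep itself
  echo "kube processes still running"; leftover=1
fi
for port in 6443 2379 2380 10250; do
  if ss -tln 2>/dev/null | grep -q ":${port} "; then
    echo "port ${port} still listening"; leftover=1
  fi
done
if [ "$leftover" -eq 0 ]; then
  echo "node looks clean"
fi
```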
Prerequisites
1. First, make sure the kubeconfig that Rancher uses to manage the k8s cluster still exists; verify by inspecting ~/.kube/config.
- [root@localhost ~]# kubectl config get-contexts
- CURRENT NAME CLUSTER AUTHINFO NAMESPACE
- * jettech jettech jettech
- jettech-172.16.10.87 jettech-172.16.10.87 jettech
- [root@localhost ~]# kubectl config view
- apiVersion: v1
- clusters:
- - cluster:
- certificate-authority-data: DATA+OMITTED
- server: https://172.16.10.87:8443/k8s/clusters/c-t5kwm
- name: jettech # this cluster entry goes through Rancher
- - cluster:
- certificate-authority-data: DATA+OMITTED
- server: https://172.16.10.87:6443
- name: jettech-172.16.10.87 # this entry talks to the k8s apiserver directly; kubectl works against it even without Rancher
- contexts:
- - context:
- cluster: jettech
- user: jettech
- name: jettech
- - context:
- cluster: jettech-172.16.10.87
- user: jettech
- name: jettech-172.16.10.87
- current-context: jettech
- kind: Config
- preferences: {}
- users:
- - name: jettech
- user:
- token: kubeconfig-user-9hzqx.c-t5kwm:4kvkcffhpgsrn4bg4mqv8gppf7h9h7x89zn77kkmfdrrznnjr8tmb9
The jettech context was created first; the jettech-172.16.10.87 context exists because when creating the cluster in Rancher we enabled the option (the authorized cluster endpoint) that lets the cluster be operated the original way even when Rancher is unavailable.
2. Switch to a different context
- [root@localhost ~]# kubectl config use-context jettech-172.16.10.87
- Switched to context "jettech-172.16.10.87".
3. Use kubectl to operate the cluster
- [root@localhost ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- 172.16.10.15 NotReady worker 49d v1.17.4
- 172.16.10.87 NotReady controlplane,etcd,worker 49d v1.17.4
-
Once kubectl works, import this cluster into the newly built Rancher platform. After the import completes, recreate the projects and move the old namespaces into them. For the exact import commands, see the Rancher official docs.
When no cluster access works at all, the approach of building a cluster entry yourself, creating a user, binding the user to the cluster, authorizing the user, and so on is not possible, because kubectl itself no longer works: there is no usable kubeconfig.
In a cluster with TLS enabled, every interaction with the cluster requires authentication; kubeconfig (i.e. certificates) and tokens are the two simplest and most common authentication methods.
Take kubectl as the example for configuring a kubeconfig. kubectl is just an executable written in Go; give it a suitable kubeconfig and it can be used from any node in the cluster. By default kubectl looks for a file named config under $HOME/.kube; you can also point it at another kubeconfig by setting the KUBECONFIG environment variable or by passing --kubeconfig.
In short, a kubeconfig is the configuration made for accessing a cluster.
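The lookup order just described can be sketched as a tiny shell function. This is illustrative only; it mimics kubectl's precedence rules, it is not kubectl's actual code:

```shell
# resolve_kubeconfig [flag_value]
# Precedence: explicit --kubeconfig value > $KUBECONFIG > $HOME/.kube/config
resolve_kubeconfig() {
  if [ -n "$1" ]; then
    echo "$1"                      # --kubeconfig wins
  elif [ -n "$KUBECONFIG" ]; then
    echo "$KUBECONFIG"             # then the environment variable
  else
    echo "$HOME/.kube/config"      # then the default location
  fi
}

resolve_kubeconfig /tmp/other.cfg  # prints /tmp/other.cfg
```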
1. The certificates of a cluster created by Rancher live under:
- [root@localhost ~]# ls /etc/kubernetes/ssl/
- certs kube-ca-key.pem kubecfg-kube-scheduler.yaml kube-proxy-key.pem
- kube-apiserver-key.pem kube-ca.pem kube-controller-manager-key.pem kube-proxy.pem
- kube-apiserver.pem kubecfg-kube-apiserver-proxy-client.yaml kube-controller-manager.pem kube-scheduler-key.pem
- kube-apiserver-proxy-client-key.pem kubecfg-kube-apiserver-requestheader-ca.yaml kube-etcd-172-16-10-87-key.pem kube-scheduler.pem
- kube-apiserver-proxy-client.pem kubecfg-kube-controller-manager.yaml kube-etcd-172-16-10-87.pem kube-service-account-token-key.pem
- kube-apiserver-requestheader-ca-key.pem kubecfg-kube-node.yaml kube-node-key.pem kube-service-account-token.pem
- kube-apiserver-requestheader-ca.pem kubecfg-kube-proxy.yaml kube-node.pem
What we need here is the cluster root CA:
- [root@localhost ~]# ls /etc/kubernetes/ssl/{kube-ca-key.pem,kube-ca.pem}
- /etc/kubernetes/ssl/kube-ca-key.pem /etc/kubernetes/ssl/kube-ca.pem
-
-
- CA key: /etc/kubernetes/ssl/kube-ca-key.pem
- CA cert (pem/crt): /etc/kubernetes/ssl/kube-ca.pem
Creating a new k8s user breaks down into roughly these steps: issue a client certificate, build a kubeconfig around it, and grant it permissions via RBAC. Step by step:
(1) The user certificate; here I use my own username: wubo
- [root@localhost aaa]# (umask 077; openssl genrsa -out wubo.key 2048) # create the user's private key
-
-
- [root@localhost aaa]# openssl req -new -key wubo.key -out wubo.csr -subj "/O=wubo/CN=wubo" # create the certificate signing request; -subj sets group and user: O is the group name, CN is the username
-
-
- [root@localhost aaa]# openssl x509 -req -in wubo.csr -CA /etc/kubernetes/ssl/kube-ca.pem -CAkey /etc/kubernetes/ssl/kube-ca-key.pem -CAcreateserial -out wubo.crt -days 365 # sign the user certificate with the cluster's CA
-
- [root@localhost aaa]# ls
- wubo.crt wubo.csr wubo.key
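Before wiring the new certificate into a kubeconfig, it is worth checking that it really chains back to the cluster CA. The sketch below is self-contained: it generates a throwaway CA standing in for /etc/kubernetes/ssl/kube-ca.pem, repeats the three signing steps from above, then verifies the result:

```shell
# Throwaway working dir; kube-ca.pem/kube-ca-key.pem here are stand-ins for
# the real cluster CA, so the sketch can be run anywhere.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout kube-ca-key.pem -out kube-ca.pem -subj "/CN=kube-ca"

# Same three steps as above
(umask 077; openssl genrsa -out wubo.key 2048)
openssl req -new -key wubo.key -out wubo.csr -subj "/O=wubo/CN=wubo"
openssl x509 -req -in wubo.csr -CA kube-ca.pem -CAkey kube-ca-key.pem \
  -CAcreateserial -out wubo.crt -days 365

# Sanity checks: the cert chains to the CA and carries the expected subject
openssl verify -CAfile kube-ca.pem wubo.crt        # prints: wubo.crt: OK
openssl x509 -in wubo.crt -noout -subject          # subject shows O=wubo, CN=wubo
```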
(2) Configure the kubectl config
- [root@localhost ~]# export KUBE_APISERVER="https://172.16.10.87:6443"
-
- # Set the cluster parameters
- This step is optional; if skipped, kubectl keeps using the current cluster and its default config file
- [root@localhost aaa]# kubectl config set-cluster wubo-cluster --certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=config
- Cluster "wubo-cluster" set.
-
- # Set the client credentials; the user wubo matches the CN of the certificate created with openssl
- [root@localhost aaa]# kubectl config set-credentials wubo --certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --client-certificate=wubo.crt --client-key=wubo.key --embed-certs=true --kubeconfig=config
- User "wubo" set.
-
- # Set the context, i.e. tie the cluster [wubo-cluster] and the user [wubo] together
- [root@localhost aaa]# kubectl config set-context wubo@wubo-cluster --cluster=wubo-cluster --user=wubo
-
- # Now we can switch to the new user (it has no permissions yet); make it the default context
- [root@localhost aaa]# kubectl config use-context wubo@wubo-cluster
- Switched to context "wubo@wubo-cluster".
-
-
- Check:
- [root@localhost aaa]# kubectl config get-contexts
- CURRENT NAME CLUSTER AUTHINFO NAMESPACE
- jettech jettech jettech
- jettech-172.16.10.87 jettech-172.16.10.87 jettech
- * wubo@wubo-cluster wubo-cluster wubo
Parameter notes:
wubo-cluster ## the cluster entry's name
--certificate-authority=/etc/kubernetes/ssl/kube-ca.pem ## the CA certificate of the cluster
--embed-certs=true ## embed the certificate data in the kubeconfig instead of referencing file paths
--server=${KUBE_APISERVER} ## the cluster apiserver address
--kubeconfig=config ## write the generated entries into this kubeconfig file rather than the default one
When setting the client credentials:
--client-certificate: the user's certificate, here wubo.crt
--client-key: the user's private key, here wubo.key
The generated kubeconfig is saved to the file named by --kubeconfig (by default ~/.kube/config); the file describes clusters, users, and contexts.
The first command records the cluster to be accessed. set-cluster registers the cluster, here named wubo-cluster; the name is just a label, the real target is the apiserver that --server points to. --certificate-authority sets that cluster's CA certificate, --embed-certs=true writes the certificate data into the kubeconfig itself, and --server is the kube-apiserver address.
The second command records the user's information, chiefly the client certificate. The username above is wubo, the certificate wubo.crt, and the private key wubo.key. Note that the client certificate must first be signed by the cluster CA, or the cluster will not accept it. Here we use certificate authentication; token authentication also works, e.g. the bootstrapping in kubelet's TLS Bootstrap mechanism uses tokens. Since kubectl above authenticates with a certificate, no token field is needed.
Multiple cluster entries and user entries can be defined side by side; a context ties a cluster entry to a user entry. The context above is named wubo@wubo-cluster, with cluster wubo-cluster and user wubo, meaning the wubo credentials are used to access the default namespace of the wubo-cluster cluster; add --namespace to target a different namespace.
Finally, kubectl config use-context wubo@wubo-cluster selects the context named wubo@wubo-cluster as the active configuration. With several contexts configured, you can reach different cluster environments just by switching between their names.
Keep in mind that a kubeconfig only works if its user has been authorized (e.g. via RBAC). In the example above, the certificate's O field (the group) is wubo and the CN (the user) is also wubo; neither is covered by kube-apiserver's predefined cluster-admin ClusterRoleBinding (which binds the system:masters group), so the user still has to be bound to a role that grants the needed API permissions, as done below.
k8s supports several authorization mechanisms; the most common is RBAC, role-based access control.
The end result is: a given user can perform given operations on given resources. Taking the wubo user created above as an example, let us give it permission in the default namespace to view pods and exec into them.
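The commands that follow actually grant cluster-wide permissions on everything. If you only want the narrower goal just stated (viewing pods and exec'ing into them in default), a namespaced Role and RoleBinding suffice. A sketch — the object names wubo-pod-viewer / wubo-pod-viewer-binding are made up here; the resources and verbs are standard RBAC:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: wubo-pod-viewer          # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]              # exec into a pod is a create on pods/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: wubo-pod-viewer-binding  # hypothetical name
  namespace: default
subjects:
- kind: User
  name: wubo                     # must match the certificate CN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: wubo-pod-viewer
  apiGroup: rbac.authorization.k8s.io
```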
- Create the cluster role wubo-clusterrole: all verbs on all resources, across all namespaces
- [root@localhost aaa]# kubectl create clusterrole wubo-clusterrole --verb="*" --resource="*"
- clusterrole.rbac.authorization.k8s.io/wubo-clusterrole created
-
-
-
- Bind the wubo-clusterrole cluster role to the wubo user
- [root@localhost aaa]# kubectl create clusterrolebinding wubo-admin-cluseter --clusterrole=wubo-clusterrole --user=wubo
- clusterrolebinding.rbac.authorization.k8s.io/wubo-admin-cluseter created
-
- The system also ships a built-in cluster-admin role that can be used directly:
- kubectl create clusterrolebinding wubo-admin-cluseter --clusterrole=cluster-admin --user=wubo
-
-
-
- [root@localhost aaa]# kubectl get clusterrole | grep wubo
- wubo-clusterrole 2m14s
- [root@localhost aaa]# kubectl get clusterrolebinding | grep wubo
- curl-wubo-admin-binding 35d
- sa-wubo-cluster-admin 34d
- wubo-admin-cluseter 61s
Cleanup: to remove the context created above:
kubectl config delete-context wubo@wubo-cluster
A universal method
After a single-node Rancher goes down, you can no longer log in to the UI to operate the K8S cluster; in that case you can generate a temporary cluster credentials file and use it to operate the cluster directly.
By default with Rancher we do not configure anything at all on the kubernetes master host, so once Rancher dies and the master node cannot run commands against the cluster, what then?
Solution
Here is a way out of the problem above:
As you know, a "custom" cluster created through the Rancher Server UI is implemented with RKE under the hood, so RKE (https://docs.rancher.cn/rke/) is able to take over clusters created by Rancher Server; alternatively, the kubectl tool alone can be used without RKE.
(1) The RKE approach
Creating and managing Kubernetes clusters with RKE relies on 3 files:
cluster.yml: the RKE cluster configuration file
kube_config_cluster.yml: contains credentials with full permissions on the cluster
cluster.rkestate: the Kubernetes cluster state file, which also contains credentials with full permissions on the cluster
So as long as these 3 files can be recovered from the downstream cluster, the RKE binary can continue to manage it. Below is a detailed walkthrough of how RKE can take over a "custom" cluster created by Rancher Server and then add nodes to it.
This was only tested against Rancher v2.4.x and v2.5.x; other versions may not behave the same.
For a clearer demonstration, we start by creating a "custom" cluster from Rancher Server, then take it over with RKE, and finally, to confirm that RKE really controls the cluster, add a node through RKE.
Simulation: the Rancher Server goes down
If only the Rancher Server is down, the services in the cluster continue to run normally, and nodes with the controlplane role can still be operated.
1. Create a directory as the working directory for recovering the cluster
- mkdir /opt/tembak
- cd /opt/tembak
2. Recover the downstream cluster's kube_config_cluster.yml file; run the following on a controlplane node:
- The file /etc/kubernetes/ssl/kubecfg-kube-node.yaml is what we need:
- [root@localhost ~]# cat /etc/kubernetes/ssl/kubecfg-kube-node.yaml
- apiVersion: v1
- kind: Config
- clusters:
- - cluster:
- api-version: v1
- certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
- server: "https://127.0.0.1:6443"
- name: "local"
- contexts:
- - context:
- cluster: "local"
- user: "kube-node-local"
- name: "local"
- current-context: "local"
- users:
- - name: "kube-node-local"
- user:
- client-certificate: /etc/kubernetes/ssl/kube-node.pem
- client-key: /etc/kubernetes/ssl/kube-node-key.pem
-
-
-
-
- [root@localhost aaa]# docker run --rm --net=host -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro --entrypoint bash harbor.jettech.com/rancher/rancher-agent:v2.3.6 -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"'>kube_config_cluster.yml
-
-
-
- [root@localhost aaa]# docker run --rm --net=host -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro --entrypoint bash harbor.jettech.com/rancher/rancher-agent:v2.3.6 -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"'
- apiVersion: v1
- kind: Config
- clusters:
- - cluster:
- api-version: v1
- certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQWFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEl4TVRJd01UQTFNell4TTFvWERUTXhNVEV5T1RBMU16WXhNMW93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5mV3NDNFJJV1NNCjlvS2dPYmF0RmZpMWN5bDcva25BSEpiRm5JTlNhS1hsVjY1dWV3VjlVK3Z2dWVKSU8vamt4cFg5MUk0U1N0RFgKdUdYMGZpbExQWGZYaEJYNHBwblVPaE4wekFsNCt6Ym5Ud1VsYW50TmhFdG0vanJXUjhLQkdWRy9NWjZBTERtbwpYVDVPOCs2eXVINDFxVkFLYnEycGY1SVVZNUpTTENZNzhCNzhPVFRGNXlrN2YvWERCcENLdHBXemtScG5Wd0pCCnlkOEJoRVFrbk04SFJBYzlJc0YzSnNKOHlodXZldWxFN1YvWUJyWGVzcmV3M1IvejhDakZJV0NjMnplWXpKOE8KQ2hmWGlVbmpIZmg0azlsWDNpcytVTWFrTml1ekFGS0VETzlEMDFzUllObGgzMm1EY0V4bEdwMm1BQWZFRWlnNwozUmd1OEkzb2xOa0NBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSkY3UjUydHBnZzBlZnV3NGI5VXBzZElNaEhEZkloc0tnc0wKSTBrV2tvTmh4dXZGT3QwblA1Z0pmemk4SzdNMHB4M2dsSEo2MmxTZ2YwbGpGTWxaV3pMSzdFblVuNUw1dUxsSwphQ3V0dFRadzA0NndkM09uWUdtd0tMazRINXI1WWcxUUdia3UyUG5FeVgvbTM3dVNPZUNhd2R4K2JFdnpjN09WCndIY3lKS2RldkhzS2xrd213SXFhOUpvcG44MUR4TkJ6YS9oa1JUdVR2WjBrZjNmVUxGN2ttcHYxZEVyS1JzMHgKMVViZW1FQTJBZUh5QWdFb0o3YzdxbkdSbXZNUTVUOGZXVjdvVXNsTEY2Mm4ydEQxUTRsYTg5QlZBRGtOMnpVNgpNeWdyMHVtM1dSaU5EZmRnZUNxVUR2cVBDTFBHdHlia2o4M0hwdDdMb2NpSzE0eW9OTUU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- server: "https://127.0.0.1:6443"
- name: "local"
- contexts:
- - context:
- cluster: "local"
- user: "kube-admin-local"
- name: "local"
- current-context: "local"
- users:
- - name: "kube-admin-local"
- user:
- client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJRHF3VkY2dVRkUEl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHlNVEV5TURFd05UTTJNVE5hRncwek1URXhNamt3TmpVeU16WmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXgrSm9OTFdtVnltMExEZ1pUcjBLRXcvSmJ5NkMKcDUvUWN5dXovQ0xTS2t2Z0p5ckZPanZIZkJqVmR4MlZxQWNncDFkWHVldzFrTFBlSVp2SUNwcUNlUVovUldHYwpWcmZjWjZNQ3FaU2ZRVFJFZUtjRDV1Q0FPWDVoUVI2cndNWGZJMnBiZ1FISjJaSlVIWXBpbnBsQVh4bG5qQ0NxCnhjaVNVMUlIby9IcnYvQTA5N2RRU1dXS3hRN2VRbmlVS3U3YUZPRVdvdFV2b2dvTWpYQkRHb2wyNVo1dCswei8KNEI4TmlJNWxFb3lYakI5REx5VlJsdjJyNFRKMFNZZUVxT002aHREMkloZmEyclRiSitLei9JRTRrSjZUUGtCZApaSlZ6KzBJckpuc3J3UHQ2dHZuMXo4dlhnQk9SaXdsR0YvK0djMTl4QUowdTMwZThVU00rNXdGMDVRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFCZk9tZWRiVFhFVUwwTzFHdWZnMVNqVFhab3RVSGdpSHBkazFaZnFFVkFtcUMxeApaNzZNdkV2Qy9SRk1xZnRFZUdYS0NnLzl5U2tOSkNibnpualBWcHdmNUJrVDlCWkRLckEyTnMvd0FObHlKRFVqCnFobjk1VkpON1lWbGRFbHpUZTA2blo1dWk1ZkczTHkrQkxXL0pWci9nZVV2eWxTL0J2WEwrZ1VXSkxoZmpWOWsKeEtFbWdjT01QNFlKcUF3UGl6amlCdEJPZ09CdGpaeXhOZ2t0VmhpTExCcWIydXNVOUl5NTJ0SVlGVGRVYUlCYgpraFlDZjMzSlRoNk42bjFnMy91WllnVURsRVU0SjJ3QjJxVFlRa1RrK2I4M3gwUDVQK2N0QitRcnA4dld5UUhGClgvdTFkeGpXZld6L1dKSHcwZUhPLzB4dWlCS1RWMW1zTWZqZkFZOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
- client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeCtKb05MV21WeW0wTERnWlRyMEtFdy9KYnk2Q3A1L1FjeXV6L0NMU0trdmdKeXJGCk9qdkhmQmpWZHgyVnFBY2dwMWRYdWV3MWtMUGVJWnZJQ3BxQ2VRWi9SV0djVnJmY1o2TUNxWlNmUVRSRWVLY0QKNXVDQU9YNWhRUjZyd01YZkkycGJnUUhKMlpKVUhZcGlucGxBWHhsbmpDQ3F4Y2lTVTFJSG8vSHJ2L0EwOTdkUQpTV1dLeFE3ZVFuaVVLdTdhRk9FV290VXZvZ29NalhCREdvbDI1WjV0KzB6LzRCOE5pSTVsRW95WGpCOURMeVZSCmx2MnI0VEowU1llRXFPTTZodEQySWhmYTJyVGJKK0t6L0lFNGtKNlRQa0JkWkpWeiswSXJKbnNyd1B0NnR2bjEKejh2WGdCT1Jpd2xHRi8rR2MxOXhBSjB1MzBlOFVTTSs1d0YwNVFJREFRQUJBb0lCQUV4aXpIbmdOVW80Q0wraApUS0tYZ1lNWlZGeGx4TTUwTjMvYjRyTm5Sek9jdlhPYVY3YlNZNENjS08rVlliek54SC9PMUJxY0Z6aE9WSVE1CmVTLzhMZ0k4Sm1VSVVXdWVaZDlCSDJKWkJxY3ZaejlJYkNoT0FSSjNwb2p4UktldHRvRmRRc3pCTnpjclFYUHMKajVXV2NWQW1jRGpQdnhOSWZBclZYVkFjd29BZGlibUUwVzVzMmFqTkE5MllMbE4vZFRFZTd3dW0rQWRCZWNTTwpERHcwcEx0a1U1UlZBTk9uaVNpdXpQNU1IQWp6RGxidWZkMmJoTFFCeEJiWVJnMVdoMkVtV1kvcnNyQ25BRTFoCkVLUm0wb0JJTjREbUxhcXNQM2RBTnN1Q0lxN0ZpNklGNkdUbEN5cUJnbnU3S2xHL3JXbEFSZ0lOVWRKczdlUWMKZ0VzckliMENnWUVBN1RhNkFxNWFhbXpLem5hRm8zSWpnaGx3VlE5aGlwY1cydDdRRFV6c2YzNVpoMFhNc25CUwppVnVwbTlGVm9vZ2RqQW8xVktNS080RzJHaTJWMjc1a2VPeXFzWFlaRmFsM2k0S2djc2gveFdqQjZqWW5YcDZXCkE1bWtFdmNFdGRRVXJTQUw0bUFxNVVpOHlaZjNLTmZLVmdjc3psUzFYYVlEaVJWL0J4a2JBVnNDZ1lFQTE3YmQKVWZ6NzVUSXB2cGNOUGNhTGFTWUlWejZ4cnMvMHVpRkY3MlRxSHFUTmRrdVpkTXNhME5DUCtIcEFDbXRJNm9kRgpYd2MyK0V0aW9pcEttVm83VThZNlVOVXlJS2x3UHUzcEFucUhxN0M0eTN6S2FuMThsS1I0V0h0cmovM09SdjcxCnZPUkxKOWZoL1RqV054VG91dDFmNU4rYy94Y3pTbFZZUkhFSzlyOENnWUVBeTEwa29SSEtyL3l1N2N3TWkvQloKWXJyZVkvMzR1TEVKUmdESlN1M012d3lhUW05anF3TENyOEdtcWRBUVkzUGdLT1BEanRqcjk5SWZSVmdaWnJkVwpPWmxrU1JtZkxjUUltZEVXTHZHWElLM0x1VGhPRGo5VkNxY1lVNjMwR3RKRUc1d2l0Q09RQXR1V0JocERLWCsrCmxudzJQSG5BdHhXUmFGL0dkRlpnb1lzQ2dZQXdQWFRCSVJJejcwUG1tMkVhcjR2OXQ4T2x2eDk5T0lSQ0c2N0kKR29sQTBSb2hta1ozRi9TblBmejBWR0o5OGdBY2NxUFEzSXd1ZXExVUZxRVlLbFdhSm5wa0dVbGNoSWZWaXQ3UQo3eFhvRDExRUpHUWY3SEF2elpnY01YMmNkZVhyZXBqNTVSUHBsUjIwdzBFa2tFaDdnWVl3YU5Gek9uejk0cGdhCnRpejlnUUtCZ1FDY3NrWlFuMVg4MlNCY
0ZPUDJNN0pWN0NpaW9VVm5LRXZZS3RhMmhHbDlVWWlvQkRzd0FxVzUKdFJIYVRGUi9NNzlRd21RTktnNVRUZm45eGVHakJtT1I4RnhhaVVKV09EWGpvZ2FEZzAyQ2RyQUxpbW1UV3k4UAp3cTZQYzZHVGtTN2dYTVlEYnFxc1o0RWs2OWxvT3ZBNVF2WTB4c0I0Sm03WUFjWEJHL2hvNEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
1. kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml: this supplies the cluster's kubeconfig
1.1 View the cluster information through this kubeconfig
- [root@localhost ~]# kubectl config --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml view
- apiVersion: v1
- clusters:
- - cluster:
- certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
- server: https://127.0.0.1:6443
- name: local
- contexts:
- - context:
- cluster: local
- user: kube-node-local
- name: local
- current-context: local
- kind: Config
- preferences: {}
- users:
- - name: kube-node-local
- user:
- client-certificate: /etc/kubernetes/ssl/kube-node.pem
- client-key: /etc/kubernetes/ssl/kube-node-key.pem
- [root@localhost ~]# kubectl config --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get-contexts
- CURRENT NAME CLUSTER AUTHINFO NAMESPACE
- * local local kube-node-local
- [root@localhost ~]# kubectl config --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get-clusters
- NAME
- local
1.2 Fetch the configmaps in the kube-system namespace, i.e. operate the cluster as usual
- [root@localhost ~]# kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system
- NAME DATA AGE
- calico-config 4 50d
- coredns 1 50d
- coredns-autoscaler 1 50d
- extension-apiserver-authentication 6 50d
- full-cluster-state 1 50d
- rke-coredns-addon 1 50d
- rke-ingress-controller 1 50d
- rke-metrics-addon 1 50d
- rke-network-plugin 1 50d
1.3 Building on 1.2, fetch the full-cluster-state configmap; -o json displays it as JSON
[root@localhost ~]# kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap full-cluster-state -o json -n kube-system
1.4 Rewrite the server address to the local IP 127.0.0.1
kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"
1.5 Step 1.4 is actually already enough. Step 1.5 just performs the same kubeconfig lookup from a freshly started rancher-agent container that mounts the k8s cluster's ssl directory.
docker run --rm --net=host -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro --entrypoint bash harbor.jettech.com/rancher/rancher-agent:v2.3.6 -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"'
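To see what the jq/sed pipeline in steps 1.4/1.5 actually does, here is the same extraction run against a mock full-cluster-state ConfigMap built locally (the real one naturally contains the full kube-admin kubeconfig with certificates):

```shell
# Build a mock of the nested structure: the ConfigMap's data."full-cluster-state"
# is a JSON string whose ...kube-admin.config field is a YAML kubeconfig string.
cfg='apiVersion: v1
clusters:
- cluster:
    server: "https://172.16.10.87:6443"'
inner=$(jq -n --arg c "$cfg" '{currentState:{certificatesBundle:{"kube-admin":{config:$c}}}}')
mock=$(jq -n --arg s "$inner" '{data:{"full-cluster-state":$s}}')

# Same pipeline as above: unwrap the two JSON layers, then point the
# server entry at the local apiserver.
echo "$mock" \
  | jq -r '.data."full-cluster-state"' \
  | jq -r '.currentState.certificatesBundle."kube-admin".config' \
  | sed -e '/^[[:space:]]*server:/ s_:.*_: "https://127.0.0.1:6443"_'
```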
(2) The kubectl approach: copy the kube_config_cluster.yml generated above into place:
[root@jettoloader ~]# cp kube_config_cluster.yml ~/.kube/config
- [root@localhost aaa]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- 172.16.10.15 NotReady worker 50d v1.17.4
- 172.16.10.87 NotReady controlplane,etcd,worker 50d v1.17.4
The next step is simply to import this as an external cluster in Rancher. A few things to watch when importing:
1. In Rancher create a new cluster --> choose the import option ---> then download the manifest. [If on an internal network:]
[root@localhost ~]# wget --no-check-certificate https://172.16.10.87:8443/v3/import/qdb2r7whtc9zcprltckr6bvdzd7kvtmdc8lz92g8x6fcjjmr88c66b.yaml
Change
- image: rancher/rancher-agent:v2.3.6
- to your internal private registry
The import flow's page offers no operation for adding new nodes.
Log in and create an API key
In Rancher 1.x authentication was not enabled by default: after starting the rancher/server container, the API/UI could be reached without any credentials. In Rancher 2.0 authentication is enabled with a default admin username and password. After logging in we obtain a bearer token, which we can use to change the password. After changing the password we create an API key for the remaining requests. The API key is also a bearer token, created here for automation purposes.
Below we add a new node using commands.
Log in
- with the Rancher username and password
- [root@localhost ~]# LOGINRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3-public/localProviders/local?action=login' -H 'content-type: application/json' --data-binary '{"username":"admin","password":"123456aA"}' --insecure)
-
- [root@localhost ~]# echo $LOGINRESPONSE
- {"authProvider":"local","baseType":"token","clusterId":null,"created":"2022-01-21T02:45:55Z","createdTS":1642733155000,"creatorId":null,"current":false,"description":"","enabled":true,"expired":false,"expiresAt":"","groupPrincipals":null,"id":"token-wtpqv","isDerived":false,"labels":{"authn.management.cattle.io/kind":"session","authn.management.cattle.io/token-userId":"user-9hzqx","cattle.io/creator":"norman"},"lastUpdateTime":"","links":{"self":"https://172.16.10.87:8443/v3-public/tokens/token-wtpqv"},"name":"token-wtpqv","token":"token-wtpqv:dspwbm4p889gnx5482wwhszm2rngmkflpxlcj9rmjtkdjxst7qj4x5","ttl":57600000,"type":"token","userId":"user-9hzqx","userPrincipal":"map[displayName:Default Admin loginName:admin me:true metadata:map[creationTimestamp:\u003cnil\u003e name:local://user-9hzqx] principalType:user provider:local]","uuid":"4150599c-7a64-11ec-b842-0242ac110002"}
-
-
- Extract the token
- [root@localhost ~]# LOGINTOKEN=$(echo $LOGINRESPONSE | jq -r .token)
- [root@localhost ~]# echo $LOGINTOKEN
- token-wtpqv:dspwbm4p889gnx5482wwhszm2rngmkflpxlcj9rmjtkdjxst7qj4x5
Change the password (here from 123456aA to 123456789)
[root@localhost ~]# curl -s 'https://172.16.10.87:8443/v3/users?action=changepassword' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"currentPassword":"123456aA","newPassword":"123456789"}' --insecure
Create an API key
- [root@localhost ~]# APIRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/token' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"type":"token","description":"automation"}' --insecure)
- [root@localhost ~]# echo $APIRESPONSE
- {"authProvider":"local","baseType":"token","clusterId":null,"created":"2022-01-21T02:53:57Z","createdTS":1642733637000,"creatorId":null,"current":false,"description":"","enabled":true,"expired":false,"expiresAt":"","groupPrincipals":null,"id":"token-ks7fs","isDerived":true,"labels":{"authn.management.cattle.io/token-userId":"user-9hzqx","cattle.io/creator":"norman"},"lastUpdateTime":"","links":{"remove":"https://172.16.10.87:8443/v3/tokens/token-ks7fs","self":"https://172.16.10.87:8443/v3/tokens/token-ks7fs","update":"https://172.16.10.87:8443/v3/tokens/token-ks7fs"},"name":"token-ks7fs","token":"token-ks7fs:c4hnkt5mlrvjtll4w9bfpwqhgt22r82xtlsl7tqmjqvsn9v6pqszzd","ttl":0,"type":"token","userId":"user-9hzqx","userPrincipal":"map[displayName:Default Admin loginName:admin me:true metadata:map[creationTimestamp:\u003cnil\u003e name:local://user-9hzqx] principalType:user provider:local]","uuid":"6102c7d9-7a65-11ec-b842-0242ac110002"}
- [root@localhost ~]# APITOKEN=$(echo $APIRESPONSE | jq -r .token)
- [root@localhost ~]# echo $APITOKEN
- token-ks7fs:c4hnkt5mlrvjtll4w9bfpwqhgt22r82xtlsl7tqmjqvsn9v6pqszzd
Create a cluster
With the API key generated, we can start creating a cluster. There are 3 options:
› Launch a cloud cluster (e.g. Google Kubernetes Engine/GKE)
› Create a cluster (with Rancher's own Kubernetes installer, Rancher Kubernetes Engine)
› Import an existing cluster (if you already have a Kubernetes cluster, import it by supplying its kubeconfig file)
For this article we create a cluster with Rancher Kubernetes Engine (RKE). When creating a cluster you can either have new nodes created on the spot (provisioned from a cloud provider such as DigitalOcean/Amazon) or use existing nodes and let Rancher connect to them over SSH. The method discussed here (adding nodes by running a docker run command) only becomes available after the cluster has been created.
You can create the cluster (named wubo here) with the following command. As you can see, only the ignoreDockerVersion parameter is included (ignore Docker versions unsupported by Kubernetes); everything else takes defaults, which we will discuss in later articles. Until then, you can explore the configurable options through the UI.
- [root@localhost ~]# CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"cluster","nodes":[],"rancherKubernetesEngineConfig":{"ignoreDockerVersion":true},"name":"wubo"}' --insecure)
-
-
-
- [root@localhost ~]# echo $CLUSTERRESPONSE
- {"actions":{"backupEtcd":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=backupEtcd","enableMonitoring":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=enableMonitoring","exportYaml":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=exportYaml","generateKubeconfig":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=generateKubeconfig","importYaml":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=importYaml","restoreFromEtcdBackup":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=restoreFromEtcdBackup","rotateCertificates":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=rotateCertificates","runSecurityScan":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=runSecurityScan","saveAsTemplate":"https://172.16.10.87:8443/v3/clusters/c-4dpq4?action=saveAsTemplate"},"annotations":{},"appliedEnableNetworkPolicy":false,"baseType":"cluster","clusterTemplateId":null,"clusterTemplateRevisionId":null,"conditions":[{"status":"True","type":"Pending"},{"status":"Unknown","type":"Provisioned"},{"status":"Unknown","type":"Waiting"}],"created":"2022-01-21T02:56:46Z","createdTS":1642733806000,"creatorId":"user-9hzqx","defaultClusterRoleForProjectMembers":null,"defaultPodSecurityPolicyTemplateId":null,"dockerRootDir":"/var/lib/docker","enableClusterAlerting":false,"enableClusterMonitoring":false,"enableNetworkPolicy":false,"id":"c-4dpq4","internal":false,"istioEnabled":false,"labels":{"cattle.io/creator":"norman"},"links":{"apiServices":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/apiservices","clusterAlertGroups":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusteralertgroups","clusterAlertRules":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusteralertrules","clusterAlerts":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusteralerts","clusterCatalogs":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clustercatalogs","clusterLoggings":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterloggings","clusterMonitorGraphs":"https://172.16.10.87
:8443/v3/clusters/c-4dpq4/clustermonitorgraphs","clusterRegistrationTokens":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterregistrationtokens","clusterRoleTemplateBindings":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterroletemplatebindings","clusterScans":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/clusterscans","etcdBackups":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/etcdbackups","namespaces":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/namespaces","nodePools":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/nodepools","nodes":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/nodes","notifiers":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/notifiers","persistentVolumes":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/persistentvolumes","projects":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/projects","remove":"https://172.16.10.87:8443/v3/clusters/c-4dpq4","self":"https://172.16.10.87:8443/v3/clusters/c-4dpq4","shell":"wss://172.16.10.87:8443/v3/clusters/c-4dpq4?shell=true","storageClasses":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/storageclasses","subscribe":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/subscribe","templates":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/templates","tokens":"https://172.16.10.87:8443/v3/clusters/c-4dpq4/tokens","update":"https://172.16.10.87:8443/v3/clusters/c-4dpq4"},"name":"wubo","rancherKubernetesEngineConfig":{"addonJobTimeout":30,"ignoreDockerVersion":true,"kubernetesVersion":"v1.17.4-rancher1-2","services":{"etcd":{"backupConfig":{"enabled":true,"intervalHours":12,"retention":6,"s3BackupConfig":null,"safeTimestamp":false,"type":"/v3/schemas/backupConfig"},"creation":"12h","extraArgs":{"election-timeout":"5000","heartbeat-interval":"500"},"gid":0,"retention":"72h","snapshot":false,"type":"/v3/schemas/etcdService","uid":0},"type":"/v3/schemas/rkeConfigServices"},"sshAgentAuth":false,"type":"/v3/schemas/rancherKubernetesEngineConfig"},"state":"provisioning","transitioning":"yes","transitioningMessage
":"","type":"cluster","uuid":"c5b129a2-7a65-11ec-b842-0242ac110002","windowsPreferedCluster":false}
-
-
- The cluster id
- [root@localhost ~]# CLUSTERID=$(echo $CLUSTERRESPONSE | jq -r .id)
- [root@localhost ~]# echo $CLUSTERID
- c-4dpq4
Note that more parameters can be passed here, e.g. whether the registry is private (with its address, user, and password), the network plugin, the usable port range, and so on.
Adding parameters: a private registry
- [root@localhost ~]# cat args
- {
- "name": "wubo",
- "type": "cluster",
- "nodes": [],
- "rancherKubernetesEngineConfig": {
- "ignoreDockerVersion": "true",
- "private_registries": {
- "is_default": "true",
- "password": "Harbor12345",
- "url": "harbor.jettech.com",
- "user": "admin"
- }
- }
- }
-
- [root@localhost ~]# CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary "$(cat args)" --insecure)
- [root@localhost ~]# CLUSTERID=$(echo $CLUSTERRESPONSE | jq -r .id)
- [root@localhost ~]# echo $CLUSTERID
- c-t6tdv
This is the template Rancher uses when creating a cluster; you can refer to it and convert it into the JSON payload.
- #
- # Cluster Config
- #
- docker_root_dir: /var/lib/docker
- enable_cluster_alerting: false
- enable_cluster_monitoring: false
- enable_network_policy: false
- local_cluster_auth_endpoint:
- enabled: true
- name: wubo
- #
- # Rancher Config
- #
- rancher_kubernetes_engine_config:
- addon_job_timeout: 30
- authentication:
- strategy: x509
- ignore_docker_version: true
- #
- # # Currently only the nginx ingress controller is supported
- # # Set `provider: none` to disable the ingress controller
- # # node_selector can restrict the ingress controller to certain nodes, e.g.:
- # provider: nginx
- # node_selector:
- # app: ingress
- #
- ingress:
- provider: nginx
- kubernetes_version: v1.17.4-rancher1-2
- monitoring:
- provider: metrics-server
- #
- # # If you use calico on AWS
- #
- # network:
- # plugin: calico
- # calico_network_provider:
- # cloud_provider: aws
- #
- # # Specify the flannel network interface
- #
- # network:
- # plugin: flannel
- # flannel_network_provider:
- # iface: eth1
- #
- # # Specify the flannel interface for the canal network plugin
- #
- # network:
- # plugin: canal
- # canal_network_provider:
- # iface: eth1
- #
- network:
- mtu: 0
- options:
- flannel_backend_type: vxlan
- plugin: canal
- private_registries:
- - is_default: true
- password: Harbor12345
- url: harbor.jettech.com
- user: admin
- #
- # # Custom service parameters, Linux only
- # services:
- # kube-api:
- # service_cluster_ip_range: 10.43.0.0/16
- # extra_args:
- # watch-cache: true
- # kube-controller:
- # cluster_cidr: 10.42.0.0/16
- # service_cluster_ip_range: 10.43.0.0/16
- # extra_args:
- # # Change the per-node subnet size (CIDR mask length); the default 24 gives 254 usable IPs, 23 gives 510, 22 gives 1022
- # node-cidr-mask-size: 24
- # # How often the controller contacts each node to check it is reachable; default 5s
- # node-monitor-period: '5s'
- # # After node communication fails, how long kubernetes waits before marking the node NotReady. Must be N times the kubelet's nodeStatusUpdateFrequency (default 10s), where N is the number of retries the kubelet gets to sync node status; default 40s.
- # node-monitor-grace-period: '20s'
- # # How long a newly started node may stay unresponsive before kubernetes marks it unhealthy; default 1m0s.
- # node-startup-grace-period: '30s'
- # # After the node stays unreachable this much longer, kubernetes starts evicting its Pods; default 5m0s.
- # pod-eviction-timeout: '1m'
- # kubelet:
- # cluster_domain: cluster.local
- # cluster_dns_server: 10.43.0.10
- # # Extra arguments
- # extra_args:
- # # Burst when talking to the apiserver; default 10
- # kube-api-burst: '30'
- # # QPS when talking to the apiserver; default 5
- # kube-api-qps: '15'
- # # Change the maximum number of Pods per node
- # max-pods: '250'
- # # How long secrets and configmaps take to sync into Pods; default one minute
- # sync-frequency: '3s'
- # # By default kubelet pulls one image at a time; set this to false to pull several in parallel, provided the storage driver is overlay2 and Docker's download concurrency is raised to match
- # serialize-image-pulls: false
- # # Maximum concurrency for image pulls; registry-burst must not exceed registry-qps and only applies when registry-qps is greater than 0 (default 10). registry-qps of 0 means unlimited (default 5).
- # registry-burst: '10'
- # registry-qps: '0'
- # # The following settings configure node resource reservation and limits
- # cgroups-per-qos: 'true'
- # cgroup-driver: cgroupfs
- # # The next two parameters declare how much to reserve for system and kube services; used only for scheduling, not actually enforced
- # system-reserved: 'memory=300Mi'
- # kube-reserved: 'memory=2Gi'
- # enforce-node-allocatable: 'pods'
- # # Hard eviction thresholds: when available resources on the node fall below these values, forced eviction starts. Forced eviction kills Pods outright instead of waiting for them to exit.
- # eviction-hard: 'memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%'
- # # Soft eviction thresholds
- # ## The next four parameters work together: when available resources fall below these values but stay above the hard thresholds, kubelet waits for eviction-soft-grace-period;
- # ## it re-checks every 10s, and if the last check still trips the soft threshold, eviction begins. Soft eviction does not kill Pods immediately: it first sends the Pod a stop signal and waits for eviction-max-pod-grace-period;
- # ## after eviction-max-pod-grace-period, any Pod that has not exited is force-killed
- # eviction-soft: 'memory.available<500Mi,nodefs.available<50%,imagefs.available<50%,nodefs.inodesFree<10%'
- # eviction-soft-grace-period: 'memory.available=1m30s'
- # eviction-max-pod-grace-period: '30'
- # ## How long to wait after the node returns to normal before taking it out of the eviction-pressure (unschedulable) state
- # eviction-pressure-transition-period: '5m0s'
- # extra_binds:
- # - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
- # - "/etc/iscsi:/etc/iscsi"
- # - "/sbin/iscsiadm:/sbin/iscsiadm"
- # etcd:
- # # 修改空间配额为$((4*1024*1024*1024)),默认2G,最大8G
- # extra_args:
- # quota-backend-bytes: '4294967296'
- # auto-compaction-retention: 240 #(单位小时)
- # kubeproxy:
- # extra_args:
- # # 默认使用iptables进行数据转发
- # proxy-mode: "" # 如果要启用ipvs,则此处设置为`ipvs`
- #
- services:
-   etcd:
-     backup_config:
-       enabled: true
-       interval_hours: 12
-       retention: 6
-       safe_timestamp: false
-     creation: 12h
-     extra_args:
-       election-timeout: 5000
-       heartbeat-interval: 500
-     gid: 0
-     retention: 72h
-     snapshot: false
-     uid: 0
-   kube_api:
-     always_pull_images: false
-     pod_security_policy: false
-     service_node_port_range: 30000-32767
-   kubelet:
-     cluster_domain: jettech.com
- ssh_agent_auth: false
- windows_prefered_cluster: false
After running these commands, you should see your new cluster in the UI. Since no nodes have been added yet, its state will be "Waiting for node configuration, or waiting for a valid configuration".
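While the cluster sits in this waiting state, its progress can also be followed through the API. Below is a minimal polling sketch; the API call is stubbed out so the example runs offline, with the live curl shown in a comment (it assumes the $APITOKEN and $CLUSTERID variables from the steps above):

```shell
#!/bin/bash
# Poll until the cluster reaches the "active" state.
get_cluster_state() {
  # Live version (uncomment; assumes $APITOKEN and $CLUSTERID are set):
  # curl -s -H "Authorization: Bearer $APITOKEN" \
  #   "https://172.16.10.87:8443/v3/clusters/$CLUSTERID" --insecure | jq -r .state
  echo "active"  # offline stub standing in for the API response
}
STATE=$(get_cluster_state)
while [ "$STATE" != "active" ]; do
  sleep 10
  STATE=$(get_cluster_state)
done
echo "cluster state: $STATE"
```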
Assembling the docker run command to launch rancher/agent
The last part of adding a node is launching the rancher/agent container, which adds the node to the cluster. For this we need:
›the agent image tied to the Rancher version
›the role(s) of the node (etcd and/or controlplane and/or worker)
›the address at which the rancher/server container can be reached
›the cluster token the agent uses to join the cluster
›the checksum of the CA certificate
The agent image can be retrieved from the API's settings endpoint:
- [root@localhost ~]# AGENTIMAGE=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/agent-image --insecure | jq -r .value)
- [root@localhost ~]# echo $AGENTIMAGE
- rancher/rancher-agent:v2.3.6
The node's roles are up to you (in this example we use all three):
ROLEFLAGS="--etcd --controlplane --worker"
The address at which the rancher/server container can be reached should be self-explanatory; rancher/agent will connect to this endpoint.
[root@localhost ~]# RANCHERSERVER="https://172.16.10.87:8443"
The cluster token can be retrieved from the cluster we created. We saved its ID in CLUSTERID and can use that to generate a token.
- [root@localhost ~]# AGENTTOKEN=$(curl -s 'https://172.16.10.87:8443/v3/clusterregistrationtoken' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"clusterRegistrationToken","clusterId":"'$CLUSTERID'"}' --insecure | jq -r .token)
- [root@localhost ~]# echo $AGENTTOKEN
- 48v2k7gw2vng7zwh9n9lj4m78vplf2h4s4zqhtglzm74gv6pgxrfmh
The generated CA certificate is also stored in the API and can be retrieved as shown below; piping it through sha256sum yields the checksum we need to join the cluster.
- [root@localhost ~]# CACHECKSUM=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/cacerts --insecure | jq -r .value | sha256sum | awk '{ print $1 }')
- [root@localhost ~]# echo $CACHECKSUM
- c1bfa78ba60bad860216f7757c2a045c94b6d5191ff4268add37dede20265330
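The checksum step itself can be exercised offline. In the sketch below, CACERT is an illustrative stand-in for the `jq -r .value` output of the cacerts setting, not a real certificate:

```shell
#!/bin/bash
# Mirror of the checksum pipeline, fed with a placeholder certificate.
CACERT='-----BEGIN CERTIFICATE-----
illustrative-certificate-body
-----END CERTIFICATE-----'
CACHECKSUM=$(echo "$CACERT" | sha256sum | awk '{ print $1 }')
echo "$CACHECKSUM"
```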
All the data needed to join the cluster is now available; all that remains is assembling the command.
- [root@localhost ~]# AGENTCOMMAND="docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host $AGENTIMAGE $ROLEFLAGS --server $RANCHERSERVER --token $AGENTTOKEN --ca-checksum $CACHECKSUM"
-
-
- Run the command above:
- [root@localhost ~]#
-
- Unable to find image 'rancher/rancher-agent:v2.3.6' locally
- v2.3.6: Pulling from rancher/rancher-agent
- Digest: sha256:4913a649dcad32fd0a48ab6442192f441b573f76e22db316468690f269ac5d00
- Status: Downloaded newer image for rancher/rancher-agent:v2.3.6
The last command (echo $AGENTCOMMAND) should look something like this:
- docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.6 --server https://172.16.10.87:8443 --token qf7tsj22qjhq5px2x25m29r2b57lrgzmpdmzzcbcqz2kcg6vkfr42z --ca-checksum c1bfa78ba60bad860216f7757c2a045c94b6d5191ff4268add37dede20265330 --node-name 172.16.10.87 --etcd --controlplane --worker --label diskType=dev --taints NoDiskType=NoDev:NoSchedule
-
-
- [root@localhost ~]# echo $AGENTCOMMAND
- docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host harbor.jettech.com/rancher/rancher-agent:v2.3.6 --etcd --controlplane --worker --server https://172.16.10.87:8443 --token vnsg4c7hbvzkt9t44z6pkxzk2r95z678jps9fhmdctwlph7bkhqpws --ca-checksum 275e0ea1c549581e0b612c77269f3cab25e4d6375512ff358d9cd01b757220b1
-
- [root@localhost ~]# docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host harbor.jettech.com/rancher/rancher-agent:v2.3.6 --etcd --controlplane --worker --server https://172.16.10.87:8443 --token vnsg4c7hbvzkt9t44z6pkxzk2r95z678jps9fhmdctwlph7bkhqpws --ca-checksum 275e0ea1c549581e0b612c77269f3cab25e4d6375512ff358d9cd01b757220b1
- 421c030ad7bd034a02e98baaa41dc211cb0d140e1f7e258d43e6d64622ce05aa
After running this command on the node, you should see it join the cluster and get provisioned by Rancher.
Protip: these tokens can also be used directly for basic authentication, for example:
curl -u $APITOKEN https://172.16.10.87:8443/v3/settings --insecure
This is the first step in automating Rancher 2.0 Tech Preview 2.
Rancher 2.0 Tech Preview 3 is coming soon.
The steps above, consolidated:
- [root@localhost work]# cat wubo
- 1. Log in
- LOGINRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3-public/localProviders/local?action=login' -H 'content-type: application/json' --data-binary '{"username":"admin","password":"123456aA"}' --insecure)
- Login token
- LOGINTOKEN=$(echo $LOGINRESPONSE | jq -r .token)
-
-
- 2. Change the password
- curl -s 'https://172.16.10.87:8443/v3/users?action=changepassword' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"currentPassword":"123456aA","newPassword":"wuqi413l"}' --insecure
-
- 3. Create an API key
- APIRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/token' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"type":"token","description":"automation"}' --insecure)
- APITOKEN=$(echo $APIRESPONSE | jq -r .token)
-
-
- 4. Create the cluster
- CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"cluster","nodes":[],"rancherKubernetesEngineConfig":{"ignoreDockerVersion":true},"name":"wubo"}' --insecure)
-
- CLUSTERRESPONSE=$(curl -s 'https://172.16.10.87:8443/v3/cluster' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary "$(cat args)" --insecure)
-
- CLUSTERID=$(echo $CLUSTERRESPONSE | jq -r .id)
-
- 5. The agent image can be retrieved from the API's settings endpoint:
- AGENTIMAGE=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/agent-image --insecure | jq -r .value)
- AGENTIMAGE=harbor.jettech.com/rancher/rancher-agent:v2.3.6
-
-
- 6. The node's roles are up to you (here we use all three):
- ROLEFLAGS="--etcd --controlplane --worker"
-
- 7. The address at which the rancher/server container can be reached should be self-explanatory; rancher/agent connects to this endpoint.
- RANCHERSERVER="https://172.16.10.87:8443"
-
- 8. The cluster token can be retrieved from the created cluster. We saved its ID in CLUSTERID and can use that to generate a token.
- AGENTTOKEN=$(curl -s 'https://172.16.10.87:8443/v3/clusterregistrationtoken' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"clusterRegistrationToken","clusterId":"'$CLUSTERID'"}' --insecure | jq -r .token)
-
- 9. The generated CA certificate is also stored in the API; retrieving it and piping it through sha256sum yields the checksum needed to join the cluster.
- CACHECKSUM=$(curl -s -H "Authorization: Bearer $APITOKEN" https://172.16.10.87:8443/v3/settings/cacerts --insecure | jq -r .value | sha256sum | awk '{ print $1 }')
-
- 10. All the data needed to join the cluster is now available; all that remains is assembling the command.
- AGENTCOMMAND="docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host $AGENTIMAGE $ROLEFLAGS --server $RANCHERSERVER --token $AGENTTOKEN --ca-checksum $CACHECKSUM"
1. Create a custom cluster
Copy and save the following as a script file, edit api_url, api_token, and cluster_name at the top, then run the script.
- [root@localhost work]# cat ~/wubo/netranchercluster.sh
- #!/bin/bash
- api_url='https://xxx.domain.com'
- api_token='token-5zgl2:tcj5nvfq67rf55r7xxxxxxxxxxx429xrwd4zx'
- cluster_name=''
- kubernetes_Version='v1.13.5-rancher1-2'
- network_plugin='canal'
- quota_backend_bytes=${quota_backend_bytes:-6442450944}
- auto_compaction_retention=${auto_compaction_retention:-240}
- ingress_provider=${ingress_provider:-nginx}
- ignoreDocker_Version=${ignoreDocker_Version:-true}
- monitoring_provider=${monitoring_provider:-metrics-server}
- service_NodePort_Range=${service_NodePort_Range:-'30000-32767'}
- create_Cluster=true
- add_Node=true
- create_cluster_data()
- {
- cat <<EOF
- {
- "amazonElasticContainerServiceConfig": null,
- "azureKubernetesServiceConfig": null,
- "dockerRootDir": "/var/lib/docker",
- "enableClusterAlerting": false,
- "enableClusterMonitoring": false,
- "googleKubernetesEngineConfig": null,
- "localClusterAuthEndpoint": {
- "enabled": true,
- "type": "/v3/schemas/localClusterAuthEndpoint"
- },
- "name": "$cluster_name",
- "rancherKubernetesEngineConfig": {
- "addonJobTimeout": 30,
- "addonsInclude":[ "https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yml"
- ],
- "authentication": {
- "strategy": "x509|webhook",
- "type": "/v3/schemas/authnConfig"
- },
- "authorization": {
- "type": "/v3/schemas/authzConfig"
- },
- "bastionHost": {
- "sshAgentAuth": false,
- "type": "/v3/schemas/bastionHost"
- },
- "cloudProvider": {
- "type": "/v3/schemas/cloudProvider"
- },
- "ignoreDockerVersion": "$ignoreDocker_Version",
- "ingress": {
- "provider": "$ingress_provider",
- "type": "/v3/schemas/ingressConfig"
- },
- "kubernetesVersion": "$kubernetes_Version",
- "monitoring": {
- "provider": "$monitoring_provider",
- "type": "/v3/schemas/monitoringConfig"
- },
- "network": {
- "options": {
- "flannel_backend_type": "vxlan"
- },
- "plugin": "$network_plugin",
- "type": "/v3/schemas/networkConfig"
- },
- "restore": {
- "restore": false,
- "type": "/v3/schemas/restoreConfig"
- },
- "services": {
- "etcd": {
- "backupConfig": {
- "enabled": true,
- "intervalHours": 12,
- "retention": 6,
- "s3BackupConfig": null,
- "type": "/v3/schemas/backupConfig"
- },
- "creation": "12h",
- "extraArgs": {
- "auto-compaction-retention": "$auto_compaction_retention",
- "election-timeout": "5000",
- "heartbeat-interval": "500",
- "quota-backend-bytes": "$quota_backend_bytes"
- },
- "retention": "72h",
- "snapshot": false,
- "type": "/v3/schemas/etcdService"
- },
- "kubeApi": {
- "alwaysPullImages": false,
- "podSecurityPolicy": false,
- "serviceNodePortRange": "$service_NodePort_Range",
- "type": "/v3/schemas/kubeAPIService"
- },
- "kubeController": {
- "extraArgs": {
- "node-monitor-grace-period": "20s",
- "node-monitor-period": "5s",
- "node-startup-grace-period": "30s",
- "pod-eviction-timeout": "1m"
- },
- "type": "/v3/schemas/kubeControllerService"
- },
- "kubelet": {
- "extraArgs": {
- "eviction-hard": "memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%",
- "kube-api-burst": "30",
- "kube-api-qps": "15",
- "kube-reserved": "memory=250Mi",
- "max-open-files": "2000000",
- "max-pods": "250",
- "network-plugin-mtu": "1500",
- "pod-infra-container-image": "rancher/pause:3.1",
- "registry-burst": "10",
- "registry-qps": "0",
- "serialize-image-pulls": "false",
- "sync-frequency": "3s",
- "system-reserved": "memory=250Mi"
- },
- "failSwapOn": false,
- "type": "/v3/schemas/kubeletService"
- },
- "kubeproxy": {
- "type": "/v3/schemas/kubeproxyService"
- },
- "scheduler": {
- "type": "/v3/schemas/schedulerService"
- },
- "type": "/v3/schemas/rkeConfigServices"
- },
- "sshAgentAuth": false,
- "type": "/v3/schemas/rancherKubernetesEngineConfig"
- }
- }
- EOF
- }
- curl -k -X POST \
- -H "Authorization: Bearer ${api_token}" \
- -H "Content-Type: application/json" \
- -d "$(create_cluster_data)" $api_url/v3/clusters
- [root@localhost rancher]# cat ranchercluster-init.sh
- #!/bin/bash
- api_url='https://172.16.10.87:8443'
- api_token='token-m5ngj:s8lkcxrlfdj5lccthw2w958hdvrcmpxnqrtrsh74rsksjtgjn69nkb'
- cluster_name='wubo'
-
- create_cluster_data()
- {
- cat <<EOF
- {
- "name": "wubo",
- "type": "cluster",
- "nodes": [],
- "rancherKubernetesEngineConfig": {
- "ignoreDockerVersion": "true",
- "private_registries": {
- "is_default": "true",
- "password": "Harbor12345",
- "url": "harbor.jettech.com",
- "user": "admin"
- }
- }
- }
- EOF
- }
- curl -k -X POST \
- -H "Authorization: Bearer ${api_token}" \
- -H "Content-Type: application/json" \
- -d "$(create_cluster_data)" $api_url/v3/clusters
Copy and save the following as a script file, edit api_url, api_token, and cluster_name at the top, then run the script.
- [root@localhost rancher]# cat b.sh
- #!/bin/bash
- api_url='https://172.16.10.87:8443'
- api_token='token-m5ngj:s8lkcxrlfdj5lccthw2w958hdvrcmpxnqrtrsh74rsksjtgjn69nkb'
- cluster_name='wubo'
- # Get the cluster ID
- cluster_ID=$( curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters | jq -r ".data[] | select(.name == \"$cluster_name\") | .id" )
- # Generate the registration command
- create_token_data()
- {
- cat <<EOF
- {
- "clusterId": "$cluster_ID"
- }
- EOF
- }
- curl -k -X POST \
- -H "Authorization: Bearer ${api_token}" \
- -H 'Accept: application/json' \
- -H 'Content-Type: application/json' \
- -d "$(create_token_data)" $api_url/v3/clusterregistrationtokens
Copy and save the following as a script file, edit api_url, api_token, and cluster_name at the top, then run the script.
- [root@localhost rancher]# cat get.sh
- #!/bin/bash
- api_url='https://172.16.10.87:8443'
- api_token='token-m5ngj:s8lkcxrlfdj5lccthw2w958hdvrcmpxnqrtrsh74rsksjtgjn69nkb'
- cluster_name='wubo'
- cluster_ID=$( curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters | jq -r ".data[] | select(.name == \"$cluster_name\") | .id" )
- # nodeCommand
- echo "================"
- curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters/${cluster_ID}/clusterregistrationtokens | jq -r .data[].nodeCommand
- # command
- echo "================"
- curl -s -k -H "Authorization: Bearer ${api_token}" $api_url/v3/clusters/${cluster_ID}/clusterregistrationtokens | jq -r .data[].command
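Note that the nodeCommand printed by this script typically comes back without role flags; for a custom cluster, each node appends the roles it should take before running the command. A sketch, where NODE_CMD is an illustrative stand-in for the API output:

```shell
#!/bin/bash
# NODE_CMD stands in for the nodeCommand value returned by the API
# (token and checksum values here are placeholders).
NODE_CMD='sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.6 --server https://172.16.10.87:8443 --token abc123 --ca-checksum def456'
# Append the roles this particular node should take.
ROLEFLAGS="--etcd --controlplane --worker"
FULL_CMD="$NODE_CMD $ROLEFLAGS"
echo "$FULL_CMD"
```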