# List services in the current (default) namespace
kubectl get svc
# Output:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d17h
# List services in all namespaces
kubectl get svc -A
# Output:
NAMESPACE                         NAME                                 TYPE           CLUSTER-IP        EXTERNAL-IP      PORT(S)                      AGE
cattle-fleet-system               gitjob                               ClusterIP      10.111.178.163    <none>           80/TCP                       18h
cattle-provisioning-capi-system   capi-webhook-service                 ClusterIP      10.109.173.131    <none>           443/TCP                      4d16h
cattle-system                     rancher                              ClusterIP      10.108.22.229     <none>           80/TCP,443/TCP               18h
cattle-system                     rancher-webhook                      ClusterIP      10.110.157.18     <none>           443/TCP                      18h
cert-manager                      cert-manager                         ClusterIP      10.97.116.89      <none>           9402/TCP                     4d17h
cert-manager                      cert-manager-webhook                 ClusterIP      10.99.135.71      <none>           443/TCP                      4d17h
default                           kubernetes                           ClusterIP      10.96.0.1         <none>           443/TCP                      4d17h
ingress-nginx                     ingress-nginx-controller             LoadBalancer   10.102.90.59      192.168.21.157   80:31759/TCP,443:32560/TCP   4d17h
ingress-nginx                     ingress-nginx-controller-admission   ClusterIP      10.110.70.3       <none>           443/TCP                      4d17h
kube-system                       kube-dns                             ClusterIP      10.96.0.10        <none>           53/UDP,53/TCP,9153/TCP       4d17h
# List Ingress resources in all namespaces
kubectl get ingress -A
# Output:
NAMESPACE       NAME      CLASS    HOSTS              ADDRESS   PORTS     AGE
cattle-system   rancher   <none>   rancher.xxx.com              80, 443   18h
# Describe the Ingress to see how it is configured
kubectl describe ingress rancher -n cattle-system
# Output:
Name:             rancher
Labels:           app=rancher
                  app.kubernetes.io/managed-by=Helm
                  chart=rancher-2.8.5
                  heritage=Helm
                  release=rancher
Namespace:        cattle-system
Address:
Ingress Class:    <none>
Default backend:  <default>
TLS:
  tls-rancher-ingress terminates rancher.xxx.com
Rules:
  Host             Path  Backends
  ----             ----  --------
  rancher.xxx.com
                   /     rancher:80 (10.244.167.180:80,10.244.167.181:80,10.244.167.182:80)
Annotations:      meta.helm.sh/release-name: rancher
                  meta.helm.sh/release-namespace: cattle-system
                  nginx.ingress.kubernetes.io/proxy-connect-timeout: 30
                  nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
                  nginx.ingress.kubernetes.io/proxy-send-timeout: 1800
Events:           <none>
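At this point the Ingress has no class assigned. As an extra diagnostic step (not in the original write-up), it can help to see which IngressClass objects already exist and whether any of them is marked as default; the output depends entirely on how ingress-nginx was installed in your cluster:
# List IngressClass objects and check for a default
kubectl get ingressclass
kubectl get ingressclass -o yaml | grep -i is-default-class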
View the related files first (optional, but worth a look: if something fails you can still restore the original).
# Create/edit the IngressClass
vim nginx-ingress-set-default.yml
# File contents:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx-ingress
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
# Apply it
kubectl apply -f nginx-ingress-set-default.yml
# ingressclass.networking.k8s.io/nginx-ingress created
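As an optional sanity check (not part of the original steps), you can confirm that the new class exists and carries the default annotation:
# Look for ingressclass.kubernetes.io/is-default-class: "true" under metadata.annotations
kubectl get ingressclass nginx-ingress -o yaml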
# Check again: the existing Ingress still shows CLASS <none>, because the
# is-default-class annotation only applies to Ingresses created after it is set
kubectl get ingress -A
# NAMESPACE       NAME      CLASS    HOSTS              ADDRESS   PORTS     AGE
# cattle-system   rancher   <none>   rancher.xxx.com              80, 443   18h
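Rather than exporting and editing the Ingress YAML as in the following steps, the same change can in principle be made with a one-line patch. This is an alternative sketch I did not use here; the class name nginx-ingress is the IngressClass created above:
kubectl patch ingress rancher -n cattle-system --type merge -p '{"spec":{"ingressClassName":"nginx-ingress"}}'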
# Tidy up /tmp and delete redundant files
cd /tmp/
ls
kubectl get ingress/rancher -n cattle-system
# CLASS is still <none> and ADDRESS is empty, so dump the full definition
kubectl get ingress/rancher -n cattle-system -o yaml
# Output:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: rancher
    meta.helm.sh/release-namespace: cattle-system
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
  creationTimestamp: "2024-07-29T08:54:02Z"
  generation: 1
  labels:
    app: rancher
    app.kubernetes.io/managed-by: Helm
    chart: rancher-2.8.5
    heritage: Helm
    release: rancher
  name: rancher
  namespace: cattle-system
  resourceVersion: "1370584"
  uid: b1fe8833-6c77-4969-aa11-e2be56d719c5
spec:
  # a line will be added here (see the edited file below)
  rules:
  - host: rancher.xxx.com
    http:
      paths:
      - backend:
          service:
            name: rancher
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - rancher.xxx.com
    secretName: tls-rancher-ingress
status:
  loadBalancer: {}
kubectl get ingress/rancher -n cattle-system -o yaml > /tmp/test.yaml
[...]
  uid: b1fe8833-6c77-4969-aa11-e2be56d719c5
spec:
  # add ingressClassName: nginx-ingress
  ingressClassName: nginx-ingress
  rules:
  - host: rancher.xxx.com
    http:
      paths:
      - backend:
          service:
            name: rancher
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[...]
kubectl apply -f test.yaml -n cattle-system
# This reported an error for me, but the check command further down shows the Ingress was
# updated anyway, and the Rancher UI connected successfully when tested.
kubectl apply -f test.yaml -n cattle-system
# Error from server (Conflict): error when applying patch:
# {"metadata":{"generation":1,"resourceVersion":"1370584"}}
# to:
# Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
# Name: "rancher", Namespace: "cattle-system"
# for: "test.yaml": error when patching "test.yaml": Operation cannot be fulfilled on ingresses.networking.k8s.io "rancher": the object has been modified; please apply your changes to the latest version and try again
# Check
kubectl get ingress -A
# NAMESPACE       NAME      CLASS           HOSTS             ADDRESS          PORTS     AGE
# cattle-system   rancher   nginx-ingress   rancher.xxx.com   192.168.21.157   80, 443   18h
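To double-check from the command line (optional; rancher.xxx.com and 192.168.21.157 are the placeholder hostname and LoadBalancer IP from the outputs above):
# Confirm the class on the Ingress, then hit the controller directly
kubectl get ingress rancher -n cattle-system -o jsonpath='{.spec.ingressClassName}'
curl -kI https://rancher.xxx.com --resolve rancher.xxx.com:443:192.168.21.157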
From the output and error messages above, two things happened here.
The warning on the first kubectl apply -f test.yaml:
Warning: resource ingresses/rancher is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply.
This warning means you are applying a resource that does not yet have the kubectl.kubernetes.io/last-applied-configuration annotation (here the Ingress was created by Helm, which does not set it). kubectl apply relies on that annotation to work out which fields it previously managed. When the annotation is missing, kubectl apply adds it automatically and warns you.
The error on the subsequent kubectl apply -f test.yaml -n cattle-system:
Error from server (Conflict): error when applying patch: ...
the object has been modified; please apply your changes to the latest version and try again
This error means the live object has changed since the configuration was exported. The exported test.yaml still contains the metadata.generation and metadata.resourceVersion captured at export time, so they are sent with the patch, and the API server's optimistic-concurrency check rejects the update because the stored resourceVersion no longer matches (the first apply already bumped it). Kubernetes blocks the request to keep conflicting changes from silently overwriting each other.
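For instance, you can compare the resourceVersion baked into test.yaml with the one on the live object to see the mismatch; this is just an illustrative check, not part of the original steps:
grep resourceVersion /tmp/test.yaml
kubectl get ingress rancher -n cattle-system -o jsonpath='{.metadata.resourceVersion}'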
When using kubectl apply, make sure you are working from the latest configuration. If the resource was edited by hand or modified through some other path, re-export the current configuration before applying again.
If you are sure the file is current, or you simply want to overwrite the resource, you can use kubectl replace -f test.yaml -n cattle-system instead of kubectl apply; replace swaps out the existing object rather than computing a patch against it.
If you hit the conflict error and want to resolve it, fetch the current state of the resource first and then apply your configuration again:
kubectl get ingress rancher -o yaml -n cattle-system > test.yaml
kubectl apply -f test.yaml -n cattle-system
Check that the resource was applied correctly:
kubectl get ingress -A
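A related tip (not from the original steps): the conflict can be avoided entirely by removing the server-managed fields (metadata.resourceVersion, metadata.generation, metadata.uid, metadata.creationTimestamp, status) from the exported YAML before applying it, or by editing the live object directly so the API server always works against the latest version:
# Opens the live Ingress in $EDITOR; no stale resourceVersion is involved
kubectl edit ingress rancher -n cattle-system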