All of the following steps are for a cluster that cannot reach docker.io. If the cluster can connect to docker.io, there is no need to prepare image packages; just pull the images from docker.io directly.
When a pod deployed in Kubernetes makes requests to the outside, the external service wants to know which pod is calling, to keep data safe. In front of the external service, ABAC policies restrict which pod environments are allowed. For example, suppose there are two Kubernetes environments, a sandbox test environment and a sandbox production environment. If the ABAC policy only allows calls from the sandbox test environment, then whenever a pod calls an external HTTP interface it must carry the unique identifier of its current Kubernetes environment.
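The decision on the external side can be sketched before diving into the setup: the sidecar (configured later in this document) attaches X-POD-NAME and X-SPACE headers to each outbound request, and the external ABAC layer allows or denies based on them. A minimal shell sketch of that check; `allow_request` is a hypothetical helper, and the `FUNCTION` value matches the X-SPACE header used later in this document:

```shell
# Hypothetical sketch of the check an external ABAC layer might perform
# on the X-SPACE header that the Envoy filter attaches to each request.
allow_request() {
  space="$1"
  if [ "$space" = "FUNCTION" ]; then
    echo "allow"
  else
    echo "deny"
  fi
}

allow_request "FUNCTION"    # sandbox test traffic
allow_request "PRODUCTION"  # any other environment is rejected
```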
istio-1.20.0-linux-amd64.tar.gz
docker load -i pilot.tar
docker load -i proxyv2.tar
docker tag 4bfb4a2c6118 registry-cnp-dev.inspurcloud.cn/istio/pilot:1.20.0
docker tag df8296c1ff53 registry-cnp-dev.inspurcloud.cn/istio/proxyv2:1.20.0
docker push registry-cnp-dev.inspurcloud.cn/istio/pilot:1.20.0
docker push registry-cnp-dev.inspurcloud.cn/istio/proxyv2:1.20.0
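The load/tag/push sequence above can be scripted for both images. A sketch that only prints the commands (the registry prefix is the one used in this document; the image IDs from `docker load` will differ per environment, so if a loaded image keeps its original repo:tag, the name can be used as the tag source instead of the ID):

```shell
# Sketch: emit the retag/push commands for each Istio image.
# Adjust REGISTRY to the Harbor address of the current environment.
REGISTRY="registry-cnp-dev.inspurcloud.cn/istio"
TAG="1.20.0"
for img in pilot proxyv2; do
  echo "docker tag docker.io/istio/${img}:${TAG} ${REGISTRY}/${img}:${TAG}"
  echo "docker push ${REGISTRY}/${img}:${TAG}"
done
```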
tar -xvf istio-1.20.0-linux-amd64.tar.gz
cd istio-1.20.0/manifests/profiles/
vim demo.yaml
#### Under spec, add hub and tag; hub is the current environment's registry domain followed by /istio
spec:
  hub: xxxxxx/istio
  tag: 1.20.0
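As an alternative to editing demo.yaml in place (a sketch, using the same placeholder registry as above), the hub and tag can be put into a small IstioOperator overlay file and passed to istioctl:

```yaml
# overlay.yaml — equivalent to the demo.yaml edit above
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  hub: xxxxxx/istio   # current environment's registry domain + /istio
  tag: 1.20.0
```

Installing with `istioctl install -f overlay.yaml -y` is then equivalent to the `--set profile=demo` command below combined with the demo.yaml edit.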
istioctl install --set profile=demo -y
After running the install command, open another SSH window
kubectl get deploy -n istio-system
#### Result
root@master01:~# kubectl get deploy -n istio-system
NAME READY UP-TO-DATE AVAILABLE AGE
istiod 0/1 1 0 69s
kubectl edit deploy -n istio-system istiod
#### Under spec.template.spec, add the following to schedule the pod onto node master01
nodeSelector:
  kubernetes.io/hostname: master01
### Check that spec.template.spec.containers.image is correct; if not, change it to the current environment's registry domain (the Harbor address the images were pushed to during image preparation)
image: docker.io/istio/pilot:1.20.0 # defaults to docker.io; change to xxxxxxx
### When done editing, press Esc, then :wq to save
Repeat step 4 to check the status; once istiod's READY shows 1/1, it has started successfully
root@master01:~# kubectl get deploy -n istio-system
NAME READY UP-TO-DATE AVAILABLE AGE
istio-egressgateway 0/1 1 0 2m22s
istio-ingressgateway 0/1 1 0 2m22s
istiod 1/1 1 1 7m28s
Two more deployments have appeared. Following step 5 above, apply the same changes to istio-egressgateway and istio-ingressgateway.
Note: replace istiod in the command with the name of the deployment you are editing.
For example: kubectl edit deploy -n istio-system istio-ingressgateway
Once all three deployments have started successfully, you can deploy the EnvoyFilter.
root@master01:~# kubectl get deploy -n istio-system
NAME READY UP-TO-DATE AVAILABLE AGE
istio-egressgateway 1/1 1 1 9m2s
istio-ingressgateway 1/1 1 1 9m2s
istiod 1/1 1 1 14m
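The repeat-until-ready check can be sketched as a small parsing step over the `kubectl get deploy` output. Here the output is stubbed with the sample above so the logic runs anywhere; in the cluster, replace the stub with the real `kubectl get deploy -n istio-system` output:

```shell
# Sketch: report which deployments are not yet fully ready.
# The output below is stubbed from the sample in this document.
output='NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
istio-egressgateway    1/1     1            1           9m2s
istio-ingressgateway   1/1     1            1           9m2s
istiod                 1/1     1            1           14m'

# Split the READY column (e.g. "0/1") and flag rows where ready != desired.
not_ready=$(echo "$output" | awk 'NR>1 { split($2, a, "/"); if (a[1] != a[2]) print $1 }')
if [ -z "$not_ready" ]; then
  echo "all ready"
else
  echo "waiting on: $not_ready"
fi
```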
vim envoyfilter.yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-envoy-filter
  namespace: ibp-test
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
            subFilter:
              name: "envoy.filters.http.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: add-pod-header
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              local authority = request_handle:headers():get("host")
              -- dots are escaped (%.) so the pattern matches literal dots only
              local ip_pattern = "(%d+%.%d+%.%d+%.%d+)"
              local ip = string.match(authority, ip_pattern)
              if ip == "xxxxx" or ip == "xxxxx" then
                local pod_name = os.getenv("POD_NAME") or "unknown_pod"
                request_handle:headers():add("X-POD-NAME", pod_name)
                request_handle:headers():add("X-SPACE", "FUNCTION")
              end
            end
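What the Lua above does per request can be illustrated outside the mesh: it extracts a dotted IPv4 address from the Host header value (which may carry a port suffix) and only adds the headers when it matches one of the configured addresses. A shell analogue of the matching step:

```shell
# Shell analogue of the Lua string.match above: pull a dotted IPv4
# address out of a Host header value such as "10.1.2.3:8080".
host="10.1.2.3:8080"
ip=$(echo "$host" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+')
echo "$ip"
```

A plain hostname such as example.com yields no match, so the headers would not be added for name-based destinations.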
kubectl label namespace ibp-test istio-injection=enabled
kubectl apply -f envoyfilter.yaml
kubectl get envoyfilter -n ibp-test
Note: existing pods must be restarted (for example with kubectl rollout restart deployment -n ibp-test) before this takes effect!
The EnvoyFilter only intercepts endpoints that are treated as HTTP traffic, so the relevant ports must additionally be declared as HTTP (Istio infers the protocol from the http- prefix of the port name) for the interception to take effect
apiVersion: v1
kind: Service
metadata:
  name: envoy-filter-open-ports
  namespace: ibp-test
spec:
  ports:
  - name: http-9234
    protocol: TCP
    port: 9234
    targetPort: 9234
  - name: http-9001
    protocol: TCP
    port: 9001
    targetPort: 9001
  - name: http-20040
    protocol: TCP
    port: 7082
    targetPort: 7082
  type: ClusterIP
kubectl create -f envoy-filter-open-ports.yaml
istioctl uninstall -y --purge
Q: When starting a pod, it still pulls the image from docker.io to start the initContainer
A: Configure the deployment as follows, overriding the sidecar image per pod with the sidecar.istio.io/proxyImage annotation
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      annotations:
        "sidecar.istio.io/proxyImage": docker.io/istio/proxyv2:1.20.0 # change to the actual image registry here
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kong/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80