
Installing Istio 1.17.8: fixing "Processing resources for Egress gateways, Ingress gateways. Waiting for Deployment..."

Background

Following the commands on the official Istio site, I installed the then-latest release, 1.22.0. My Kubernetes cluster does not support Istio 1.22, so the ingress gateway would not install. A closer read of the Istio release notes shows that Istio 1.22 requires Kubernetes 1.27 or later. Rather than upgrade Kubernetes, I decided to swap out the Istio version.

# istioctl version
client version: 1.22.0
control plane version: 1.22.0
data plane version: none
# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:32:02Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}

1. Installing Istio 1.17.8

Checking Istio's supported releases, 1.17.8 is the last update of the 1.17 line, and it happens to fit my current Kubernetes version.

1.1 Download

Release Istio 1.17.8 · istio/istio · GitHub

1.2 Upload and extract

# ll
-rw-r--r-- 1 root root 27127663 Jun 5 16:25 istio-1.17.8-linux-amd64.tar.gz

1.3 Add environment variables

# cat /etc/profile
...
export PATH=/usr/local/openssh-8.5p1/sbin:/usr/local/openssh-8.5p1/bin:$PATH
export KUBECONFIG=/etc/kubernetes/admin.conf
export PATH=/home/root/k8s/istio_test/install/istio-1.17.8/bin/:$PATH
#export PATH=/home/root/k8s/istio_test/install/istio-1.22.0/bin:$PATH

 source /etc/profile
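The order of PATH entries matters here: the shell resolves istioctl left to right, so prepending the 1.17.8 bin directory shadows any other copy on the PATH. A quick sketch using the paths from the profile above:

```shell
# Prepend the Istio 1.17.8 bin directory; the shell searches PATH left
# to right, so this copy of istioctl now wins over any older one.
PATH="/home/root/k8s/istio_test/install/istio-1.17.8/bin:$PATH"

# The first PATH component is the directory that wins the lookup.
echo "${PATH%%:*}"   # prints /home/root/k8s/istio_test/install/istio-1.17.8/bin
```

This is why the commented-out 1.22.0 line above is enough to switch versions without uninstalling anything.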

# istioctl version
client version: 1.17.8
control plane version: 1.22.0
data plane version: 0

1.4 Install

# istioctl install --set profile=demo -y
WARNING: Istio control planes installed: 1.22.0.
WARNING: An older installed version of Istio has been detected. Running this command will overwrite it.
✔ Istio core installed
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod ^C
[foot@host-10-19-83-151 istio-1.17.8]$ istioctl install --set profile=demo -y -n bookinfo
WARNING: Istio control planes installed: 1.22.0.
WARNING: An older installed version of Istio has been detected. Running this command will overwrite it.
✔ Istio core installed
✔ Istiod installed
- Processing resources for Egress gateways, Ingress gateways. Waiting for Deployment/istio-system/istio-egressgateway, Deployment/istio-system/istio-ingressga...
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete Making this installation the default for injection and validation.
Thank you for installing Istio 1.17. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/hMHGiwZHPU7UQRWe9

Check the Pods:

# kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-deployment-64d5f7665c-56cpz 1/1 Running 0 22d
ingress-nginx nginx-ingress-controller-7cfc988f46-cszsd 1/1 Running 0 22d
istio-system istio-egressgateway-85df6b84b7-hjlbx 1/1 Running 0 92s
istio-system istio-ingressgateway-6bb8fb6549-wcgqq 1/1 Running 0 92s
istio-system istiod-8d74787f-bckft 1/1 Running 0 2m37s

 

Check the version again:

# istioctl version
client version: 1.17.8
control plane version: 1.17.8
data plane version: 1.17.8 (2 proxies)

2. Deploying the demo

2.1.1 Preparation

Create a namespace for the sample services.

kubectl create namespace bookinfo

Label the namespace so that Istio automatically injects the Envoy sidecar proxy container when applications are deployed (this takes effect on Pods only):

kubectl label namespace bookinfo istio-injection=enabled

There are several ways to enable sidecar injection; one of them is to label the namespace, so that every Pod deployed into it automatically gets a sidecar.
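The same label can also be declared on the Namespace object itself instead of being applied with kubectl label; a minimal sketch using the standard Istio label from the command above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    istio-injection: enabled   # tells istiod to inject the Envoy sidecar into new Pods here
```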

The demo is the official Bookinfo sample application.

File location: https://github.com/whuanle/istio_book/tree/main/3

After cloning the repository, open the 3 directory and run the deployment commands:

# ll
total 68
-rw-rw-r-- 1 root root 598 Jun 5 15:14 details_deploy.yaml
-rw-rw-r-- 1 root root 108 Jun 5 15:14 details_sa.yaml
-rw-rw-r-- 1 root root 190 Jun 5 15:14 details_svc.yaml
-rw-rw-r-- 1 root root 278 Jun 5 15:14 ingress_gateway.yaml
-rw-rw-r-- 1 root root 754 Jun 5 15:14 productpage_deploy.yaml
-rw-rw-r-- 1 root root 116 Jun 5 15:14 productpage_sa.yaml
-rw-rw-r-- 1 root root 206 Jun 5 15:14 productpage_svc.yaml
-rw-rw-r-- 1 root root 227 Jun 5 15:14 productpage_tmpsvc.yaml
-rw-rw-r-- 1 root root 466 Jun 5 15:14 productpage_vs.yaml
-rw-rw-r-- 1 root root 598 Jun 5 15:14 ratings_deploy.yaml
-rw-rw-r-- 1 root root 108 Jun 5 15:14 ratings_sa.yaml
-rw-rw-r-- 1 root root 190 Jun 5 15:14 ratings_svc.yaml
-rw-rw-r-- 1 root root 108 Jun 5 15:14 reviews_sa.yaml
-rw-rw-r-- 1 root root 190 Jun 5 15:14 reviews_svc.yaml
-rw-rw-r-- 1 root root 913 Jun 5 15:14 reviews_v1_deploy.yaml
-rw-rw-r-- 1 root root 913 Jun 5 15:14 reviews_v2_deploy.yaml
-rw-rw-r-- 1 root root 913 Jun 5 15:14 reviews_v3_deploy.yaml

Since they cannot all be applied in one go, apply the YAML files one at a time.

2.1.2 details

Details: the application that stores the book information.

1. Deploy the details application with a Deployment.

2. Configure a Kubernetes Service for details.

3. Create a ServiceAccount for details.

$ kubectl apply -f *.yaml
error: Unexpected args: [details_sa.yaml details_svc.yaml ingress_gateway.yaml productpage_deploy.yaml productpage_sa.yaml productpage_svc.yaml productpage_tmpsvc.yaml productpage_vs.yaml ratings_deploy.yaml ratings_sa.yaml ratings_svc.yaml reviews_sa.yaml reviews_svc.yaml reviews_v1_deploy.yaml reviews_v2_deploy.yaml reviews_v3_deploy.yaml]
See 'kubectl apply -h' for help and examples
# kubectl apply -f details_*.yaml
error: Unexpected args: [details_sa.yaml details_svc.yaml]
See 'kubectl apply -h' for help and examples
# kubectl apply -f details_deploy.yaml -n bookinfo
deployment.apps/details-v1 created
# kubectl apply -f details_sa.yaml -n bookinfo
serviceaccount/bookinfo-details created
# kubectl apply -f details_svc.yaml -n bookinfo
service/details created
# kubectl get po -n bookinfo
NAME READY STATUS RESTARTS AGE
details-v1-698b5d8c98-bnd52 1/1 Running 0 2m27s
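The two errors above come from shell globbing, not from kubectl: the shell expands *.yaml into a list of file names, and kubectl apply accepts only one path per -f flag, so every name after the first arrives as an unexpected argument. A small sketch of the pitfall and a loop-based workaround (the kubectl call is echoed rather than run here):

```shell
# In a directory with several YAML files, *.yaml expands to many words:
cd "$(mktemp -d)"
touch details_deploy.yaml details_sa.yaml details_svc.yaml

set -- *.yaml          # simulate what the shell hands to kubectl
echo "$# files: $*"    # prints: 3 files: details_deploy.yaml details_sa.yaml details_svc.yaml

# kubectl sees only the first name after -f; the rest become "Unexpected args".
# A loop applies the files one by one instead:
for f in *.yaml; do
  echo "would run: kubectl apply -f $f -n bookinfo"
done
```

Alternatively, pointing -f at the directory itself (kubectl apply -f . -n bookinfo) applies every manifest in it in a single command.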

2.1.3 ratings

Provides the star-rating data for each review.

# kubectl apply -f ratings_deploy.yaml -n bookinfo
deployment.apps/ratings-v1 created
# kubectl apply -f ratings_svc.yaml -n bookinfo
service/ratings created
# kubectl apply -f ratings_sa.yaml -n bookinfo
serviceaccount/bookinfo-ratings created
# kubectl get po -n bookinfo
NAME READY STATUS RESTARTS AGE
details-v1-698b5d8c98-bnd52 1/1 Running 0 13m
ratings-v1-5967f59c58-st7xr 1/1 Running 0 4m48s

2.1.4 reviews

Provides the book review content.

# kubectl apply -f reviews_svc.yaml -n bookinfo
service/reviews created
# kubectl apply -f reviews_sa.yaml -n bookinfo
serviceaccount/bookinfo-reviews unchanged
# kubectl apply -f reviews_v1_deploy.yaml -n bookinfo
deployment.apps/reviews-v1 created
[foot@host-10-19-83-151 3]$ kubectl apply -f reviews_v2_deploy.yaml -n bookinfo
deployment.apps/reviews-v2 created
[foot@host-10-19-83-151 3]$ kubectl apply -f reviews_v3_deploy.yaml -n bookinfo
deployment.apps/reviews-v3 created
# kubectl get po -n bookinfo
NAME READY STATUS RESTARTS AGE
details-v1-698b5d8c98-bnd52 1/1 Running 0 22m
ratings-v1-5967f59c58-st7xr 1/1 Running 0 13m
reviews-v1-9c6bb6658-dc5s9 1/1 Running 0 6m15s
reviews-v2-8454bb78d8-fghsh 1/1 Running 0 6m8s
reviews-v3-6dc9897554-zpdl2 1/1 Running 0 6m3s

2.1.5 productpage

The web front end.

# kubectl apply -f productpage_deploy.yaml -n bookinfo
deployment.apps/productpage-v1 created
# kubectl apply -f productpage_svc.yaml -n bookinfo
service/productpage created
# kubectl apply -f productpage_sa.yaml -n bookinfo
serviceaccount/bookinfo-productpage created
# kubectl get po -n bookinfo
NAME READY STATUS RESTARTS AGE
details-v1-698b5d8c98-bnd52 1/1 Running 0 23m
productpage-v1-bf4b489d8-z9wxp 1/1 Running 0 46s
ratings-v1-5967f59c58-st7xr 1/1 Running 0 15m
reviews-v1-9c6bb6658-dc5s9 1/1 Running 0 7m56s
reviews-v2-8454bb78d8-fghsh 1/1 Running 0 7m49s
reviews-v3-6dc9897554-zpdl2 1/1 Running 0 7m44s

productpage is the aggregating service through which users browse book information.

2.1.6 Checks

2.1.6.1 get all

# kubectl get all -n bookinfo
NAME READY STATUS RESTARTS AGE
pod/details-v1-698b5d8c98-bnd52 1/1 Running 0 25m
pod/productpage-v1-bf4b489d8-z9wxp 1/1 Running 0 2m4s
pod/ratings-v1-5967f59c58-st7xr 1/1 Running 0 16m
pod/reviews-v1-9c6bb6658-dc5s9 1/1 Running 0 9m14s
pod/reviews-v2-8454bb78d8-fghsh 1/1 Running 0 9m7s
pod/reviews-v3-6dc9897554-zpdl2 1/1 Running 0 9m2s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/details ClusterIP 10.102.62.72 <none> 9080/TCP 24m
service/productpage ClusterIP 10.96.210.213 <none> 9080/TCP 117s
service/ratings ClusterIP 10.103.80.45 <none> 9080/TCP 16m
service/reviews ClusterIP 10.102.163.207 <none> 9080/TCP 9m40s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/details-v1 1/1 1 1 25m
deployment.apps/productpage-v1 1/1 1 1 2m5s
deployment.apps/ratings-v1 1/1 1 1 16m
deployment.apps/reviews-v1 1/1 1 1 9m14s
deployment.apps/reviews-v2 1/1 1 1 9m7s
deployment.apps/reviews-v3 1/1 1 1 9m2s
NAME DESIRED CURRENT READY AGE
replicaset.apps/details-v1-698b5d8c98 1 1 1 25m
replicaset.apps/productpage-v1-bf4b489d8 1 1 1 2m5s
replicaset.apps/ratings-v1-5967f59c58 1 1 1 16m
replicaset.apps/reviews-v1-9c6bb6658 1 1 1 9m14s
replicaset.apps/reviews-v2-8454bb78d8 1 1 1 9m7s
replicaset.apps/reviews-v3-6dc9897554 1 1 1 9m2s

2.1.6.2 curl productpage

$ curl 10.96.210.213:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css">
<!-- Optional theme -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap-theme.min.css">
</head>
<body>
<p>
<h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3>
</p>
<table class="table table-condensed table-bordered table-hover"><tr><th>name</th><td>http://details:9080</td></tr><tr><th>endpoint</th><td>details</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://details:9080</td><td>details</td><td></td></tr><tr><td>http://reviews:9080</td><td>reviews</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://ratings:9080</td><td>ratings</td><td></td></tr></table></td></tr></table></td></tr></table>
<p>
<h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester
</h4>
</p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p>
<!-- Latest compiled and minified JavaScript -->
<script src="static/jquery.min.js"></script>
<!-- Latest compiled and minified JavaScript -->
<script src="static/bootstrap/js/bootstrap.min.js"></script>
</body>
</html>

Seeing this chunk of HTML means the deployment succeeded. Opening the same address in a browser, however, does not work.

2.1.7 Temporary access

Make a copy of the productpage Service manifest and change its type to NodePort (productpage_tmpsvc.yaml):

# kubectl apply -f productpage_tmpsvc.yaml -n bookinfo
service/productpagetmp created
# kubectl get svc -n book
No resources found in book namespace.
# kubectl get svc -n bookinfo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.102.62.72 <none> 9080/TCP 30m
productpage ClusterIP 10.96.210.213 <none> 9080/TCP 7m14s
productpagetmp NodePort 10.111.122.78 <none> 9080:31680/TCP 17s
ratings ClusterIP 10.103.80.45 <none> 9080/TCP 21m
reviews ClusterIP 10.102.163.207 <none> 9080/TCP 14m

In a browser, open <VM IP>:31680.

Then access it with the context path appended.

Just keep refreshing: the requests are served by the three versions of the reviews Pods in turn.

2.1.8 Configure the Gateway

hosts defines the externally exposed entry points; you can bind domain names, IPs, and so on.

Here * is used, meaning any request may enter this gateway.
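Outside of a demo you would usually narrow this down. A sketch of the same server block bound to a single domain (the host name is a made-up example):

```yaml
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "bookinfo.example.com"   # only requests carrying this Host header enter the gateway
```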

# kubectl apply -f ingress_gateway.yaml -n bookinfo
gateway.networking.istio.io/bookinfo-gateway created


2.1.9 Deploy the VirtualService

Although the Istio Gateway has been created, we still cannot reach the microservices through it. We also need an Istio VirtualService to bind the Gateway to the corresponding Kubernetes Service; only then does traffic actually flow to the Pods.

Traffic does not really pass through the Service; the VirtualService merely uses the Service to discover the Pods.

# kubectl apply -f productpage_vs.yaml
virtualservice.networking.istio.io/bookinfo created

2.1.10 Verify the deployment

If the command returns a title, the Bookinfo application is working.

# kubectl exec "$(kubectl get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -n bookinfo -c ratings -- curl -S productpage:9080/productpage | grep -o "<title>.*</title>"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5294 100 5294 0 0 4086 0 0:00:01 0:00:01 --:--:-- 4088
<title>Simple Bookstore App</title>

3. Exposing the application externally

3.1 Create the Istio ingress Gateway

If no namespace is set, the gateway resources end up in the default namespace. To change that, add a namespace field as shown below.

cat bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: bookinfo
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
3.2 Apply the ingress Gateway

# kubectl apply -f bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
# istioctl analyze
Info [IST0102] (Namespace default) The namespace is not enabled for Istio injection. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.
# kubectl label namespace default istio-injection=disabled
namespace/default labeled
# kubectl apply -f bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway unchanged
virtualservice.networking.istio.io/bookinfo unchanged
# istioctl analyze
✔ No validation issues found when analyzing namespace: default.

3.3 Determine the ingress IP and ports

# kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.104.41.124 <pending> 15021:32515/TCP,80:31183/TCP,443:32582/TCP,31400:30302/TCP,15443:32277/TCP 41h

If EXTERNAL-IP has a value, your environment has an external load balancer that can serve as the ingress gateway. If EXTERNAL-IP is <none> (or stays <pending>), your environment provides no external load balancer for ingress traffic. In that case, you can reach the gateway through the Service's node ports instead.

To configure your own external load balancer, see https://www.cnblogs.com/yinzhengjie/p/17811466.html

No external load balancer was found in this environment, so a node is used in its place.

# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
# echo $INGRESS_PORT
31183
# export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
# echo $SECURE_INGRESS_PORT
32582
# export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
# echo $INGRESS_HOST
xx.xx.xx.xx
# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# echo $GATEWAY_URL
xx.xx.xx.xx:31183

3.4 External verification
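The original screenshots for this step are not reproduced here; in shell terms the verification amounts to hitting the gateway from outside the cluster. A sketch with the values gathered above (the node IP is the one that appears in the traffic-generation step in section 4.1.5; the curl line is shown for reference only):

```shell
# Assemble the gateway URL from the node IP and the http2 NodePort
# (values taken from this walkthrough).
INGRESS_HOST=10.19.83.151
INGRESS_PORT=31183
GATEWAY_URL="$INGRESS_HOST:$INGRESS_PORT"
echo "http://$GATEWAY_URL/productpage"   # prints http://10.19.83.151:31183/productpage

# To verify from outside the cluster (not run here):
#   curl -s -o /dev/null -w '%{http_code}\n' "http://$GATEWAY_URL/productpage"
```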

4. Viewing the dashboards

Istio integrates with several telemetry applications. Telemetry helps you understand the structure of the service mesh, display the network topology, and analyze the mesh's health.

4.1 Kiali and its addons

4.1.1 Install Kiali

# kubectl apply -f addons/
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

The kubectl rollout status command tracks the progress of a Deployment's rollout until it completes or times out.

If a rollout is in progress, the command keeps reporting its progress until it finishes or times out. If the rollout has already completed, it prints "deployment "kiali" successfully rolled out".

Note that this command only applies to workloads managed by a Deployment. For other controllers (a StatefulSet, for example), use the corresponding command to check rollout status.

# kubectl rollout status deployment/kiali -n istio-system
deployment "kiali" successfully rolled out

4.1.2 Change the kiali Service to NodePort

# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.102.208.36 <none> 3000/TCP 179m
istio-egressgateway ClusterIP 10.100.225.194 <none> 80/TCP,443/TCP 2d
istio-ingressgateway LoadBalancer 10.104.41.124 <pending> 15021:32515/TCP,80:31183/TCP,443:32582/TCP,31400:30302/TCP,15443:32277/TCP 2d
istiod ClusterIP 10.104.226.207 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 3d7h
jaeger-collector ClusterIP 10.96.191.105 <none> 14268/TCP,14250/TCP,9411/TCP 179m
kiali ClusterIP 10.98.246.20 <none> 20001/TCP,9090/TCP 179m
prometheus ClusterIP 10.110.173.109 <none> 9090/TCP 179m
tracing ClusterIP 10.106.222.123 <none> 80/TCP,16685/TCP 179m
zipkin ClusterIP 10.101.205.116 <none> 9411/TCP 179m

4.1.3 kubectl apply -f kiali.yaml

# kubectl apply -f kiali.yaml
serviceaccount/kiali unchanged
configmap/kiali unchanged
clusterrole.rbac.authorization.k8s.io/kiali-viewer unchanged
clusterrole.rbac.authorization.k8s.io/kiali unchanged
clusterrolebinding.rbac.authorization.k8s.io/kiali unchanged
role.rbac.authorization.k8s.io/kiali-controlplane unchanged
rolebinding.rbac.authorization.k8s.io/kiali-controlplane unchanged
service/kiali configured
deployment.apps/kiali unchanged
# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.102.208.36 <none> 3000/TCP 3h2m
istio-egressgateway ClusterIP 10.100.225.194 <none> 80/TCP,443/TCP 2d
istio-ingressgateway LoadBalancer 10.104.41.124 <pending> 15021:32515/TCP,80:31183/TCP,443:32582/TCP,31400:30302/TCP,15443:32277/TCP 2d
istiod ClusterIP 10.104.226.207 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 3d7h
jaeger-collector ClusterIP 10.96.191.105 <none> 14268/TCP,14250/TCP,9411/TCP 3h2m
kiali NodePort 10.98.246.20 <none> 20001:30853/TCP,9090:32008/TCP 3h2m
prometheus ClusterIP 10.110.173.109 <none> 9090/TCP 3h2m
tracing ClusterIP 10.106.222.123 <none> 80/TCP,16685/TCP 3h2m
zipkin ClusterIP 10.101.205.116 <none> 9411/TCP

4.1.4 Access the Kiali page

4.1.5 Generate traffic

To see trace data, you must send requests to the service. How many depends on Istio's sampling rate, which is set when Istio is installed; the default is 1%. You need to send at least 100 requests before the first trace becomes visible. Send 100 requests to the productpage service with the following command:

for i in `seq 1 100`; do curl -s -o /dev/null http://$GATEWAY_URL/productpage; done

for i in `seq 1 100`; do curl -s -o /dev/null http://10.19.83.151:31183/productpage; done
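At the default 1% sampling rate, only around one of these 100 requests will produce a trace. If every request should be traced, the rate can be raised at install time; a sketch of an IstioOperator overlay setting it to 100% (field path taken from the Istio mesh configuration reference; verify it against your 1.17 install):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      tracing:
        sampling: 100.0   # percentage of requests traced (default 1.0)
```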

4.1.6 Access the Kiali page again

5. Uninstalling

To delete the Bookinfo sample application and its configuration, see the Bookinfo cleanup instructions.

The Istio uninstaller removes the RBAC permissions and all resources in the istio-system namespace hierarchically, layer by layer. Errors about resources that no longer exist can be safely ignored; they have simply already been deleted.

 

$ kubectl delete -f samples/addons
$ istioctl uninstall -y --purge

The istio-system namespace is not removed by default. When you no longer need it, remove it with:

 

$ kubectl delete namespace istio-system

The label that tells Istio to inject the Envoy sidecar proxy automatically is also left in place by default. When you no longer need it, remove it with:

 

$ kubectl label namespace default istio-injection-
