How To Install and Use Istio With Kubernetes

Introduction

A service mesh is an infrastructure layer that allows you to manage communication between your application’s microservices. As more developers work with microservices, service meshes have evolved to make that work easier and more effective by consolidating common management and administrative tasks in a distributed setup.

Using a service mesh like Istio can simplify tasks like service discovery, routing and traffic configuration, encryption and authentication/authorization, and monitoring and telemetry. Istio, in particular, is designed to work without major changes to pre-existing service code. When working with Kubernetes, for example, it is possible to add service mesh capabilities to applications running in your cluster by building out Istio-specific objects that work with existing application resources.

In this tutorial, you will install Istio using the Helm package manager for Kubernetes. You will then use Istio to expose a demo Node.js application to external traffic by creating Gateway and Virtual Service resources. Finally, you will access the Grafana telemetry addon to visualize your application traffic data.

Prerequisites

To complete this tutorial, you will need:

  • A Kubernetes cluster and the kubectl command-line tool configured to communicate with it (see the note on sizing below)
  • The Helm package manager installed and configured for your cluster
  • Docker installed, along with a Docker Hub account

Note: We highly recommend a cluster with at least 8GB of available memory and 4vCPUs for this setup. This tutorial will use three of DigitalOcean’s standard 4GB/2vCPU Droplets as nodes.

Step 1 — Packaging the Application

To use our demo application with Kubernetes, we will need to clone the code and package it so that the kubelet agent can pull the image.

Our first step will be to clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker, which explains how to build an image for a Node.js application and how to create a container using this image. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

To get started, clone the nodejs-image-demo repository into a directory called istio_project:

  • git clone https://github.com/do-community/nodejs-image-demo.git istio_project

Navigate to the istio_project directory:

  • cd istio_project

This directory contains files and folders for a shark information application that offers users basic information about sharks. In addition to the application files, the directory contains a Dockerfile with instructions for building a Docker image with the application code. For more information about the instructions in the Dockerfile, see Step 3 of How To Build a Node.js Application with Docker.

To test that the application code and Dockerfile work as expected, you can build and tag the image using the docker build command, and then use the image to run a demo container. Using the -t flag with docker build will allow you to tag the image with your Docker Hub username so that you can push it to Docker Hub once you’ve tested it.

Build the image with the following command:

  • docker build -t your_dockerhub_username/node-demo .

The . in the command specifies that the build context is the current directory. We’ve named the image node-demo, but you are free to name it something else.

Once the build process is complete, you can list your images with docker images:

  • docker images

You will see the following output confirming the image build:

Output
REPOSITORY                          TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-demo   latest      37f1c2939dbf   5 seconds ago   77.6MB
node                                10-alpine   9dfa73010b19   2 days ago      75.3MB

Next, you’ll use docker run to create a container based on this image. We will include three flags with this command:

  • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.

  • -d: This runs the container in the background.

  • --name: This allows us to give the container a customized name.

Run the following command to create and run the container:

  • docker run --name node-demo -p 80:8080 -d your_dockerhub_username/node-demo

Inspect your running containers with docker ps:

  • docker ps

You will see output confirming that your application container is running:

Output
CONTAINER ID   IMAGE                               COMMAND                  CREATED         STATUS         PORTS                  NAMES
49a67bafc325   your_dockerhub_username/node-demo   "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds   0.0.0.0:80->8080/tcp   node-demo

You can now visit your server IP to test your setup: http://your_server_ip. Your application will display the following landing page:

Now that you have tested the application, you can stop the running container. Use docker ps again to get your CONTAINER ID:

  • docker ps

Output
CONTAINER ID   IMAGE                               COMMAND                  CREATED              STATUS              PORTS                  NAMES
49a67bafc325   your_dockerhub_username/node-demo   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:80->8080/tcp   node-demo

Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:

  • docker stop 49a67bafc325

Now that you have tested the image, you can push it to Docker Hub. First, log in to the Docker Hub account you created in the prerequisites:

  • docker login -u your_dockerhub_username

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user’s home directory with your Docker Hub credentials.

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

  • docker push your_dockerhub_username/node-demo

You now have an application image that you can pull to run your application with Kubernetes and Istio. Next, you can move on to installing Istio with Helm.

Step 2 — Installing Istio with Helm

Although Istio offers different installation methods, the documentation recommends using Helm to maximize flexibility in managing configuration options. We will install Istio with Helm and ensure that the Grafana addon is enabled so that we can visualize traffic data for our application.

First, add the Istio release repository:

  • helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.1.7/charts/

This will enable you to use the Helm charts in the repository to install Istio.

Check that you have the repo:

  • helm repo list

You should see the istio.io repo listed:

Output
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
istio.io   https://storage.googleapis.com/istio-release/releases/1.1.7/charts/

Next, install Istio’s Custom Resource Definitions (CRDs) with the istio-init chart using the helm install command:

  • helm install --name istio-init --namespace istio-system istio.io/istio-init

Output
NAME:   istio-init
LAST DEPLOYED: Fri Jun 7 17:13:32 2019
NAMESPACE: istio-system
STATUS: DEPLOYED
...

This command commits 53 CRDs to the kube-apiserver, making them available for use in the Istio mesh. It also creates a namespace for the Istio objects called istio-system and uses the --name option to name the Helm release istio-init. A release in Helm refers to a particular deployment of a chart with specific configuration options enabled.

To check that all of the required CRDs have been committed, run the following command:

  • kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l

This should output the number 53.

You can now install the istio chart. To ensure that the Grafana telemetry addon is installed with the chart, we will use the --set grafana.enabled=true configuration option with our helm install command. We will also specify our desired configuration profile: the default profile. Istio has a number of configuration profiles to choose from when installing with Helm that allow you to customize the Istio control plane and data plane sidecars. The default profile is recommended for production deployments, and we’ll use it to familiarize ourselves with the configuration options that we would use when moving to production.

Run the following helm install command to install the chart:

  • helm install --name istio --namespace istio-system --set grafana.enabled=true istio.io/istio

Output
NAME:   istio
LAST DEPLOYED: Fri Jun 7 17:18:33 2019
NAMESPACE: istio-system
STATUS: DEPLOYED
...

Again, we’re installing our Istio objects into the istio-system namespace and naming the release — in this case, istio.

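As an aside, if you end up customizing more than one or two chart options, the same settings can live in a Helm values file instead of repeated --set flags. A minimal sketch, assuming a hypothetical file named istio-values.yaml that reproduces the option used above:

# istio-values.yaml (hypothetical filename)
# Equivalent to passing --set grafana.enabled=true on the command line
grafana:
  enabled: true

You would then pass the file to Helm with helm install --name istio --namespace istio-system -f istio-values.yaml istio.io/istio.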

We can verify that the Service objects we expect for the default profile have been created with the following command:

  • kubectl get svc -n istio-system

The Services we would expect to see here include istio-citadel, istio-galley, istio-ingressgateway, istio-pilot, istio-policy, istio-sidecar-injector, istio-telemetry, and prometheus. We would also expect to see the grafana Service, since we enabled this addon during installation:

Output
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                  AGE
grafana                  ClusterIP      10.245.85.162    <none>            3000/TCP                                 3m26s
istio-citadel            ClusterIP      10.245.135.45    <none>            8060/TCP,15014/TCP                       3m25s
istio-galley             ClusterIP      10.245.46.245    <none>            443/TCP,15014/TCP,9901/TCP               3m26s
istio-ingressgateway     LoadBalancer   10.245.171.39    174.138.125.110   15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   3m26s
istio-pilot              ClusterIP      10.245.56.97     <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP   3m26s
istio-policy             ClusterIP      10.245.206.189   <none>            9091/TCP,15004/TCP,15014/TCP             3m26s
istio-sidecar-injector   ClusterIP      10.245.223.99    <none>            443/TCP                                  3m25s
istio-telemetry          ClusterIP      10.245.5.215     <none>            9091/TCP,15004/TCP,15014/TCP,42422/TCP   3m26s
prometheus               ClusterIP      10.245.100.132   <none>            9090/TCP                                 3m26s

We can also check for the corresponding Istio Pods with the following command:

  • kubectl get pods -n istio-system

The Pods corresponding to these services should have a STATUS of Running, indicating that the Pods are bound to nodes and that the containers associated with the Pods are running:

Output
NAME                                     READY   STATUS      RESTARTS   AGE
grafana-67c69bb567-t8qrg                 1/1     Running     0          4m25s
istio-citadel-fc966574d-v5rg5            1/1     Running     0          4m25s
istio-galley-cf776876f-5wc4x             1/1     Running     0          4m25s
istio-ingressgateway-7f497cc68b-c5w64    1/1     Running     0          4m25s
istio-init-crd-10-bxglc                  0/1     Completed   0          9m29s
istio-init-crd-11-dv5lz                  0/1     Completed   0          9m29s
istio-pilot-785694f946-m5wp2             2/2     Running     0          4m25s
istio-policy-79cff99c7c-q4z5x            2/2     Running     1          4m25s
istio-sidecar-injector-c8ddbb99c-czvwq   1/1     Running     0          4m24s
istio-telemetry-578b6f967c-zk56d         2/2     Running     1          4m25s
prometheus-d8d46c5b5-k5wmg               1/1     Running     0          4m25s

The READY field indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

Note: If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

  • kubectl describe pods your_pod -n pod_namespace

  • kubectl logs your_pod -n pod_namespace

The final step in the Istio installation will be enabling the creation of Envoy proxies, which will be deployed as sidecars to services running in the mesh.

Sidecars are typically used to add an extra layer of functionality in existing container environments. Istio’s mesh architecture relies on communication between Envoy sidecars, which comprise the data plane of the mesh, and the components of the control plane. In order for the mesh to work, we need to ensure that each Pod in the mesh will also run an Envoy sidecar.

There are two ways of accomplishing this goal: manual sidecar injection and automatic sidecar injection. We’ll enable automatic sidecar injection by labeling the namespace in which we will create our application objects with the label istio-injection=enabled. This will ensure that the MutatingAdmissionWebhook controller can intercept requests to the kube-apiserver and perform a specific action — in this case, ensuring that all of our application Pods start with a sidecar.

We’ll use the default namespace to create our application objects, so we’ll apply the istio-injection=enabled label to that namespace with the following command:

  • kubectl label namespace default istio-injection=enabled

We can verify that the command worked as intended by running:

  • kubectl get namespace -L istio-injection

You will see the following output:

Output
NAME              STATUS   AGE   ISTIO-INJECTION
default           Active   47m   enabled
istio-system      Active   16m
kube-node-lease   Active   47m
kube-public       Active   47m
kube-system       Active   47m
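
As a side note, istio-injection=enabled is an ordinary Kubernetes label, so if you manage cluster state declaratively you could set it in a Namespace manifest and apply it with kubectl apply instead of running kubectl label. A minimal sketch of the equivalent manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # the same label the kubectl label command applies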

With Istio installed and configured, we can move on to creating our application Service and Deployment objects.

Step 3 — Creating Application Objects

With the Istio mesh in place and configured to inject sidecar Pods, we can create an application manifest with specifications for our Service and Deployment objects. Specifications in a Kubernetes manifest describe each object’s desired state.

Our application Service will ensure that the Pods running our containers remain accessible in a dynamic environment, as individual Pods are created and destroyed, while our Deployment will describe the desired state of our Pods.

Open a file called node-app.yaml with nano or your favorite editor:

  • nano node-app.yaml

First, add the following code to define the nodejs application Service:

~/istio_project/node-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs
  labels:
    app: nodejs
spec:
  selector:
    app: nodejs
  ports:
  - name: http
    port: 8080

This Service definition includes a selector that will match Pods with the corresponding app: nodejs label. We’ve also specified that the Service will target port 8080 on any Pod with the matching label.

We are also naming the Service port, in compliance with Istio’s requirements for Pods and Services. The http value is one of the values Istio will accept for the name field.

Next, below the Service, add the following specifications for the application Deployment. Be sure to replace the image listed under the containers specification with the image you created and pushed to Docker Hub in Step 1:

~/istio_project/node-app.yaml
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
        version: v1
    spec:
      containers:
      - name: nodejs
        image: your_dockerhub_username/node-demo
        ports:
        - containerPort: 8080

The specifications for this Deployment include the number of replicas (in this case, 1), as well as a selector that defines which Pods the Deployment will manage. In this case, it will manage Pods with the app: nodejs label.

The template field contains values that do the following:

  • Apply the app: nodejs label to the Pods managed by the Deployment. Istio recommends adding the app label to Deployment specifications to provide contextual information for Istio’s metrics and telemetry.

  • Apply a version label to specify the version of the application that corresponds to this Deployment. As with the app label, Istio recommends including the version label to provide contextual information.

  • Define the specifications for the containers the Pods will run, including the container name and the image. The image here is the image you created in Step 1 and pushed to Docker Hub. The container specifications also include a containerPort configuration to point to the port each container will listen on. If ports remain unlisted here, they will bypass the Istio proxy. Note that this port, 8080, corresponds to the targeted port named in the Service definition.

Save and close the file when you are finished editing.

With this file in place, we can move on to editing the file that will contain definitions for Gateway and Virtual Service objects, which control how traffic enters the mesh and how it is routed once there.

Step 4 — Creating Istio Objects

To control access to a cluster and routing to Services, Kubernetes uses Ingress Resources and Controllers. Ingress Resources define rules for HTTP and HTTPS routing to cluster Services, while Controllers load balance incoming traffic and route it to the correct Services.

For more information about using Ingress Resources and Controllers, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

Istio uses a different set of objects to achieve similar ends, though with some important differences. Instead of using a Controller to load balance traffic, the Istio mesh uses a Gateway, which functions as a load balancer that handles incoming and outgoing HTTP/TCP connections. The Gateway then allows for monitoring and routing rules to be applied to traffic entering the mesh. Specifically, the configuration that determines traffic routing is defined as a Virtual Service. Each Virtual Service includes routing rules that match criteria with a specific protocol and destination.

Though Kubernetes Ingress Resources/Controllers and Istio Gateways/Virtual Services have some functional similarities, the structure of the mesh introduces important differences. Kubernetes Ingress Resources and Controllers offer operators some routing options, for example, but Gateways and Virtual Services make a more robust set of functionalities available since they enable traffic to enter the mesh. In other words, the limited application layer capabilities that Kubernetes Ingress Controllers and Resources make available to cluster operators do not include the functionalities — including advanced routing, tracing, and telemetry — provided by the sidecars in the Istio service mesh.

To allow external traffic into our mesh and configure routing to our Node app, we will need to create an Istio Gateway and Virtual Service. Open a file called node-istio.yaml for the manifest:

  • nano node-istio.yaml

First, add the definition for the Gateway object:

~/istio_project/node-istio.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nodejs-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

In addition to specifying a name for the Gateway in the metadata field, we’ve included the following specifications:

  • A selector that will match this resource with the default Istio IngressGateway controller that was enabled with the configuration profile we selected when installing Istio.

  • A servers specification that defines the port to expose for ingress and the hosts exposed by the Gateway. In this case, we are specifying all hosts with an asterisk (*) since we are not working with a specific secured domain; a sketch of a domain-specific variant follows this list.

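If you were serving the application on a domain you control rather than testing by IP, you would list that domain under hosts instead of the wildcard. A sketch of how the servers section might look in that case, with example.com standing in for a hypothetical domain (the rest of the Gateway would be unchanged):

servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "example.com"   # hypothetical domain; replace with one you control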

Below the Gateway definition, add specifications for the Virtual Service:

~/istio_project/node-istio.yaml
...
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nodejs
spec:
  hosts:
  - "*"
  gateways:
  - nodejs-gateway
  http:
  - route:
    - destination:
        host: nodejs

In addition to providing a name for this Virtual Service, we’re also including specifications for this resource that include:

  • A hosts field that specifies the destination host. In this case, we’re again using a wildcard value (*) to enable quick access to the application in the browser, since we’re not working with a domain.

  • A gateways field that specifies the Gateway through which external requests will be allowed. In this case, it’s our nodejs-gateway Gateway.

  • The http field that specifies how HTTP traffic will be routed.

  • A destination field that indicates where the request will be routed. In this case, it will be routed to the nodejs service, which implicitly expands to the Service’s Fully Qualified Domain Name (FQDN) in a Kubernetes environment: nodejs.default.svc.cluster.local. It’s important to note, though, that the FQDN will be based on the namespace where the rule is defined, not the Service, so be sure to use the FQDN in this field when your application Service and Virtual Service are in different namespaces. To learn about Kubernetes Domain Name System (DNS) more generally, see An Introduction to the Kubernetes DNS Service. A sketch of this cross-namespace case follows below.

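As mentioned in the last item above, here is a sketch of how the route would look in the cross-namespace case, with the destination spelled out as the Service's FQDN:

http:
- route:
  - destination:
      host: nodejs.default.svc.cluster.local   # full FQDN, required when the Virtual Service lives in a different namespace than the Service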

Save and close the file when you are finished editing.

With your yaml files in place, you can create your application Service and Deployment, as well as the Gateway and Virtual Service objects that will enable access to your application.

Step 5 — Creating Application Resources and Enabling Telemetry Access

Once you have created your application Service and Deployment objects, along with a Gateway and Virtual Service, you will be able to generate some requests to your application and look at the associated data in your Istio Grafana dashboards. First, however, you will need to configure Istio to expose the Grafana addon so that you can access the dashboards in your browser.

We will enable Grafana access with HTTP, but when you are working in production or in sensitive environments, it is strongly recommended that you enable access with HTTPS.

Because we set the --set grafana.enabled=true configuration option when installing Istio in Step 2, we have a Grafana Service and Pod in our istio-system namespace, which we confirmed in that Step.

With those resources already in place, our next step will be to create a manifest for a Gateway and Virtual Service so that we can expose the Grafana addon.

Open the file for the manifest:

  • nano node-grafana.yaml

Add the following code to the file to create a Gateway and Virtual Service to expose and route traffic to the Grafana Service:

~/istio_project/node-grafana.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15031
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - port: 15031
    route:
    - destination:
        host: grafana
        port:
          number: 3000

Our Grafana Gateway and Virtual Service specifications are similar to those we defined for our application Gateway and Virtual Service in Step 4. There are a few differences, however:

  • Grafana will be exposed on the http-grafana named port (port 15031), and it will run on port 3000 on the host.

  • The Gateway and Virtual Service are both defined in the istio-system namespace.

  • The host in this Virtual Service is the grafana Service in the istio-system namespace. Since we are defining this rule in the same namespace that the Grafana Service is running in, FQDN expansion will again work without conflict.

Note: Because our current MeshPolicy is configured to run TLS in permissive mode, we do not need to apply a Destination Rule to our manifest. If you selected a different profile with your Istio installation, then you will need to add a Destination Rule to disable mutual TLS when enabling access to Grafana with HTTP. For more information on how to do this, you can refer to the official Istio documentation on enabling access to telemetry addons with HTTP.

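For reference, such a Destination Rule for the grafana Service would look something like the following sketch. This is only needed if your chosen profile enforces strict mutual TLS, and the exact fields may vary between Istio versions:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: grafana
  namespace: istio-system
spec:
  host: grafana
  trafficPolicy:
    tls:
      mode: DISABLE   # disable mutual TLS so Grafana can be reached over plain HTTP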

Save and close the file when you are finished editing.

Create your Grafana resources with the following command:

  • kubectl apply -f node-grafana.yaml

The kubectl apply command allows you to apply a particular configuration to an object in the process of creating or updating it. In our case, we are applying the configuration we specified in the node-grafana.yaml file to our Gateway and Virtual Service objects in the process of creating them.

You can take a look at the Gateway in the istio-system namespace with the following command:

  • kubectl get gateway -n istio-system

You will see the following output:

Output
NAME              AGE
grafana-gateway   47s

You can do the same thing for the Virtual Service:

  • kubectl get virtualservice -n istio-system

Output
NAME         GATEWAYS            HOSTS   AGE
grafana-vs   [grafana-gateway]   [*]     74s

With these resources created, we should be able to access our Grafana dashboards in the browser. Before we do that, however, let’s create our application Service and Deployment, along with our application Gateway and Virtual Service, and check that we can access our application in the browser.

Create the application Service and Deployment with the following command:

  • kubectl apply -f node-app.yaml

Wait a few seconds, and then check your application Pods with the following command:

  • kubectl get pods

Output
NAME                      READY   STATUS    RESTARTS   AGE
nodejs-7759fb549f-kmb7x   2/2     Running   0          40s

Your application containers are running, as you can see in the STATUS column, but why does the READY column list 2/2 if the application manifest from Step 3 only specified 1 replica?

This second container is the Envoy sidecar, which you can inspect with the following command. Be sure to replace the pod listed here with the NAME of your own nodejs Pod:

  • kubectl describe pod nodejs-7759fb549f-kmb7x

Output
Name:           nodejs-7759fb549f-kmb7x
Namespace:      default
...
Containers:
  nodejs:
  ...
  istio-proxy:
    Container ID:  docker://f840d5a576536164d80911c46f6de41d5bc5af5152890c3aed429a1ee29af10b
    Image:         docker.io/istio/proxyv2:1.1.7
    Image ID:      docker-pullable://istio/proxyv2@sha256:e6f039115c7d5ef9c8f6b049866fbf9b6f5e2255d3a733bb8756b36927749822
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
    ...

Next, create your application Gateway and Virtual Service:

  • kubectl apply -f node-istio.yaml

You can inspect the Gateway with the following command:

  • kubectl get gateway

Output
NAME             AGE
nodejs-gateway   7s

And the Virtual Service:

  • kubectl get virtualservice

Output
NAME     GATEWAYS           HOSTS   AGE
nodejs   [nodejs-gateway]   [*]     28s

We are now ready to test access to the application. To do this, we will need the external IP associated with our istio-ingressgateway Service, which is a LoadBalancer Service type.

Get the external IP for the istio-ingressgateway Service with the following command:

  • kubectl get svc -n istio-system

You will see output like the following:

Output
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP         PORT(S)                                  AGE
grafana                  ClusterIP      10.245.85.162    <none>              3000/TCP                                 42m
istio-citadel            ClusterIP      10.245.135.45    <none>              8060/TCP,15014/TCP                       42m
istio-galley             ClusterIP      10.245.46.245    <none>              443/TCP,15014/TCP,9901/TCP               42m
istio-ingressgateway     LoadBalancer   10.245.171.39    ingressgateway_ip   15020:30707/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30285/TCP,15030:31668/TCP,15031:32297/TCP,15032:30853/TCP,15443:30406/TCP   42m
istio-pilot              ClusterIP      10.245.56.97     <none>              15010/TCP,15011/TCP,8080/TCP,15014/TCP   42m
istio-policy             ClusterIP      10.245.206.189   <none>              9091/TCP,15004/TCP,15014/TCP             42m
istio-sidecar-injector   ClusterIP      10.245.223.99    <none>              443/TCP                                  42m
istio-telemetry          ClusterIP      10.245.5.215     <none>              9091/TCP,15004/TCP,15014/TCP,42422/TCP   42m
prometheus               ClusterIP      10.245.100.132   <none>              9090/TCP                                 42m

The istio-ingressgateway should be the only Service with the TYPE LoadBalancer, and the only Service with an external IP.

Navigate to this external IP in your browser: http://ingressgateway_ip.

You should see the following landing page:

Next, generate some load to the site by clicking refresh five or six times.

You can now check the Grafana dashboard to look at traffic data.

In your browser, navigate to the following address, again using your istio-ingressgateway external IP and the port you defined in your Grafana Gateway manifest: http://ingressgateway_ip:15031.

You will see the following landing page:

Clicking on Home at the top of the page will bring you to a page with an istio folder. To get a list of dropdown options, click on the istio folder icon:

From this list of options, click on Istio Service Dashboard.

This will bring you to a landing page with another dropdown menu:

Select nodejs.default.svc.cluster.local from the list of available options.

You will now be able to look at traffic data for that service:

You now have a functioning Node.js application running in an Istio service mesh with Grafana enabled and configured for external access.

Conclusion

In this tutorial, you installed Istio using the Helm package manager and used it to expose a Node.js application Service using Gateway and Virtual Service objects. You also configured Gateway and Virtual Service objects to expose the Grafana telemetry addon, in order to look at traffic data for your application.

As you move toward production, you will want to take steps like securing your application Gateway with HTTPS and ensuring that access to your Grafana Service is also secure.

You can also explore other telemetry-related tasks, including collecting and processing metrics, logs, and trace spans.

Translated from: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-istio-with-kubernetes
