Microservices and containerization make deploying and managing complex applications significantly harder. Helm, an open-source project that began as a Kubernetes sub-project, is the package manager for Kubernetes applications — roughly the apt-get / yum of Kubernetes. It was started by Deis, a company later acquired by Microsoft. By packaging software, Helm supports versioning and release management and greatly reduces the complexity of deploying and managing applications on Kubernetes.
As businesses move to containers and microservice architectures, a huge monolith is broken down into many services. This removes the monolith's complexity, lets each microservice be deployed and scaled independently, and enables agile development with fast iteration and deployment. But everything has two sides: while microservices bring many conveniences, splitting an application into many components dramatically increases the number of services. For Kubernetes orchestration, every component has its own resource files and can be deployed and scaled independently, which creates a number of challenges when using Kubernetes to orchestrate applications:
Managing, editing, and updating large numbers of Kubernetes configuration files
Deploying a complex Kubernetes application made up of many configuration files
Sharing and reusing Kubernetes configurations and applications
Parameterizing configuration templates to support multiple environments
Managing application releases: rollback, diff, and viewing release history
Controlling individual stages within a deployment cycle
Post-release verification
Helm is the Kubernetes package manager: it makes installing, managing, and uninstalling Kubernetes applications quick and convenient, much like yum or apt-get on a Linux system.
Its main design goals are to:
Create new chart packages
Package charts into compressed .tgz archives
Upload charts to a chart repository, or download charts from one
Install and uninstall charts in a Kubernetes cluster
Manage the release lifecycle of applications installed with Helm
What problem does Helm actually solve? Why does Kubernetes need Helm?
Kubernetes is very good at organizing and orchestrating containers, but it lacks a higher-level application packaging tool — and that is exactly what Helm provides.
For example, to deploy a MySQL service, Kubernetes needs the following objects:
① To make MySQL reachable from outside, a mysql Service;
② To define the MySQL password, a Secret;
③ MySQL needs persistent data storage, so a PVC as well;
④ To keep the MySQL backend running, a Deployment that ties the objects above together.
All of these objects can be defined and deployed with YAML files, but that only works for a single service. When an application consists of several, or even dozens, of such services, and the dependencies between them also have to be considered, organizing and managing the application this way quickly becomes tedious. Helm was created precisely to remove this heavy deployment burden from Kubernetes users.
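As a minimal, hypothetical sketch of the manual approach described above (the name my-mysql and the password value are made up for illustration), just the Secret and the Service already require two manifests; the PVC and Deployment would be two more of similar size, and a chart bundles all of them while exposing only the values that actually change:

apiVersion: v1
kind: Secret
metadata:
  name: my-mysql            # hypothetical name
type: Opaque
stringData:
  mysql-root-password: changeme   # example value only
---
apiVersion: v1
kind: Service
metadata:
  name: my-mysql
spec:
  selector:
    app: my-mysql
  ports:
    - port: 3306
      targetPort: 3306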
Helm (v2) consists mainly of the Helm client, the Tiller server, and chart repositories:
helm: the client, written in Go. It manages the local chart repository and charts, and interacts with the Tiller server: sending charts and installing, querying, and uninstalling releases.
Tiller: the server, normally running inside the Kubernetes cluster. It receives charts and configuration from helm, merges them to generate a release, and carries out the deployment.
Put simply: the Helm client manages charts; the Tiller server manages releases.
The Helm client is the user-facing command-line tool.
It is mainly responsible for:
Local chart development
Managing chart repositories
Interacting with the Tiller server
Sending charts to be installed
Querying release information
Requesting upgrades or uninstalls of existing releases
The Tiller server is deployed inside the Kubernetes cluster and interacts with the Helm client and the Kubernetes API server.
It is mainly responsible for:
Listening for requests from the Helm client
Building a release from a chart and its configuration
Installing charts into the Kubernetes cluster and tracking the resulting releases
Upgrading or uninstalling charts by interacting with Kubernetes
In short: the client manages charts, while the server manages releases.
Helm packages Kubernetes resources (deployments, services, ingresses, and so on) into a chart, and charts are stored in chart repositories, which are used to store and share them. Helm makes releases configurable, supports versioning of the released application configuration, and simplifies version control, packaging, publishing, deleting, and updating applications on Kubernetes.
This section walks through installing the Helm client, the Tiller server, and a local chart repository.
1. What is a chart?
A chart is a Helm package. For example, deploying nginx needs a Deployment and a Service YAML; together these YAML manifests form a Helm package, and in the Kubernetes world such a package is called a chart.
2. The values.yaml file
values.yaml supplies values to the files in the chart's templates, which is how installations are customized. A chart developer writes the templates; a chart user only needs to edit values.yaml.
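As a hedged illustration (the field names image.repository, image.tag, and replicaCount are common conventions, not fields required by Helm), a chart user edits values.yaml while the chart developer's template references those values through .Values:

# values.yaml (what the chart user edits)
image:
  repository: nginx
  tag: "1.18"
replicaCount: 2

# templates/deployment.yaml (what the chart developer writes)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-nginx
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-nginx
    spec:
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"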
3. Helm in a nutshell
Helm packages Kubernetes resources into a chart, manages charts and the dependencies between them, and distributes them through chart repositories. Releases are configurable: the published configuration comes from values.yaml. When a chart version is updated, Helm supports rolling updates and one-command rollback. It is not necessarily suitable for production use unless you are able to author and maintain your own charts. Helm is a project in the Kubernetes ecosystem.
4. The relationship between repository, release, and chart
repository: a repository for publishing and storing charts. The Helm client uses HTTP to fetch the repository's chart index file and chart archives.
chart: a Helm package containing the image references, dependencies, and resource definitions needed to run an application, possibly including service definitions for the Kubernetes cluster.
release: an instance of a chart running in a Kubernetes cluster. The same chart can be installed many times in one cluster; each install creates a new release.
chart ---> values supplied through values.yaml ---> release instance
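A hedged sketch of that flow on the command line (stable/mysql is only an example chart; my-values.yaml and my-db are hypothetical names):

helm fetch stable/mysql --untar                       # download the chart locally
helm inspect values stable/mysql > my-values.yaml     # dump the chart's default values
vim my-values.yaml                                    # customize the values
helm install stable/mysql -f my-values.yaml --name my-db   # chart + values -> release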
Helm official website:
https://helm.sh/
Helm 2 official documentation:
https://v2.helm.sh/docs/
Helm 2 downloads:
https://storage.googleapis.com/kubernetes-helm/
Helm GitHub:
https://github.com/helm/helm/releases
https://github.com/helm/helm/tags
Official Helm chart hubs:
https://hub.helm.sh/charts
https://hub.kubeapps.com/
5. Install the Helm client (run on the k8s master01 node)
Download the Helm version that suits your platform from the download URL above.
[root@k8s-master01 yaml]# mkdir helm && cd helm
[root@k8s-master01 helm]# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
[root@k8s-master01 helm]# tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
[root@k8s-master01 helm]# cp linux-amd64/helm /usr/local/bin
[root@k8s-master01 helm]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
The error shows that the Tiller server side is not installed yet.
Tiller is Helm's server. It can be installed in several ways, for example locally or as a pod inside the Kubernetes cluster. Option 1: install it with a YAML file.
Create a Service Account with cluster-admin permissions for Tiller
Tiller is Helm's server-side component. It is installed in the Kubernetes cluster, and the Helm client interacts with it to deploy applications using Helm charts.
Helm manages Kubernetes cluster resources, so the tiller component installed in the kube-system namespace needs the necessary permissions.
Therefore we need to:
Create a ServiceAccount named tiller
Create a ClusterRoleBinding that grants the tiller ServiceAccount cluster-admin permissions.
Both the ServiceAccount and the ClusterRoleBinding go into a single YAML file.
Check the available API versions:
[root@k8s-master01 helm]# kubectl api-versions | grep rbac.authorization.k8s.io
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
Create a file named tiller-rbac.yaml:
[root@k8s-master01 helm]# vim tiller-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Apply the YAML file with kubectl:
[root@k8s-master01 helm]# kubectl apply -f tiller-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Check the ServiceAccount that was created:
[root@k8s-master01 helm]# kubectl get sa -n kube-system | grep tiller
tiller    1    7s
When helm init initializes the cluster, it uses the gcr.io/kubernetes-helm/tiller:<version> image, which has to be pulled in advance; the image tag matches the Helm version.
Tiller image download, option 1: copy the tiller image archive to every k8s node and load it manually with docker load -i. The image archive is available from this Baidu netdisk link:
https://pan.baidu.com/s/1Z_yuava-8W65mn5tla7cFw   extraction code: bd2n
Import the image on the node:
[root@k8s-node01 ~]# docker load -i tiler_2_13_1.tar.gz
3fc64803ca2d: Loading layer [==================================================>]  4.463MB/4.463MB
79395a173ae6: Loading layer [==================================================>]  6.006MB/6.006MB
c33cd2d4c63e: Loading layer [==================================================>]  37.16MB/37.16MB
d727bd750bf2: Loading layer [==================================================>]  36.89MB/36.89MB
Loaded image: gcr.io/kubernetes-helm/tiller:v2.13.1
Tiller image download, option 2: search for tiller_v2.13.1 on https://hub.docker.com/explore/
Pull the image on the nodes and retag it so it matches the image name used in the YAML:
docker pull hekai/gcr.io_kubernetes-helm_tiller_v2.13.1
docker tag hekai/gcr.io_kubernetes-helm_tiller_v2.13.1 gcr.io/kubernetes-helm/tiller:v2.13.1
[root@k8s-master01 helm]# vim tiller.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: helm
      name: tiller
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      serviceAccount: tiller
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
...
Apply the YAML file with kubectl:
[root@k8s-master01 helm]# kubectl apply -f tiller.yaml
deployment.apps/tiller-deploy created
service/tiller-deploy created
Check whether the tiller pod is up:
[root@k8s-master01 helm]# kubectl get pods -n kube-system | grep tiller
tiller-deploy-7bd89687c8-qvhqq   1/1   Running   0   26s
Verify the helm and tiller versions:
[root@k8s-master01 helm]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
The output above shows that the client and server versions match, so both sides are installed.
Option 2: install and deploy Tiller with helm init
Run helm init from the console. The command sets up the chart repositories and local configuration, and deploys Tiller into the Kubernetes cluster.
The default chart repository is https://kubernetes-charts.storage.googleapis.com/index.yaml
The default tiller image is gcr.io/kubernetes-helm/tiller:v2.13.1
Inside mainland China these addresses are not directly reachable because of the firewall, so we point Helm at alternative repositories and images. Install the Helm server side with:
[root@k8s-master01 helm]# helm init --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

[root@k8s-master01 helm]# cat /root/.helm/repository/repositories.yaml
apiVersion: v1
generated: 2020-07-02T12:06:28.679177481+08:00
repositories:
- caFile: ""
  cache: /root/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: ""
  url: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
  username: ""
- caFile: ""
  cache: /root/.helm/repository/cache/local-index.yaml
  certFile: ""
  keyFile: ""
  name: local
  password: ""
  url: http://127.0.0.1:8879/charts
  username: ""
If you want to customize the installation, the following flags are useful:
--canary-image: install from the canary branch, i.e. the project's master branch
--tiller-image: install a specific image; by default it matches the Helm version
--kube-context: install into the specified Kubernetes cluster
--tiller-namespace: install into the specified namespace, kube-system by default
Wait a moment and then run the following command; output like this means the installation succeeded:
Verify the helm and tiller versions:
[root@k8s-master01 helm]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Granting access
In the steps above we deployed the resources Tiller needs into the Kubernetes cluster, but the tiller-deploy Deployment has no authorized ServiceAccount defined, so its requests to the apiserver are rejected. Grant it access with the following commands:
[root@k8s-master01 helm]# kubectl create serviceaccount --namespace kube-system tiller
[root@k8s-master01 helm]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
[root@k8s-master01 helm]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Removing Tiller from the Kubernetes cluster
Tiller stores its data in ConfigMap resources, so deleting or reinstalling it does not lose release data.
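As a hedged aside, in Helm 2 those release records live in the Tiller namespace as ConfigMaps labeled OWNER=TILLER, so you can inspect them with something like:

kubectl get configmap -n kube-system -l OWNER=TILLER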
There are two ways to delete Tiller:
(1) kubectl delete deployment tiller-deploy -n kube-system
(2) helm reset
If this fails for some reason, force the removal with:
helm reset --force
Helm commands and usage
[root@k8s-master01 helm]# helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

    $ helm init

This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.

Common actions from this point include:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment:
  $HELM_HOME           set an alternative location for Helm files. By default, these are stored in ~/.helm
  $HELM_HOST           set an alternative Tiller host. The format is host:port
  $HELM_NO_PLUGINS     disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
  $TILLER_NAMESPACE    set an alternative Tiller namespace (default "kube-system")
  $KUBECONFIG          set an alternative Kubernetes configuration file (default "~/.kube/config")
  $HELM_TLS_CA_CERT    path to TLS CA certificate used to verify the Helm client and Tiller server certificates (default "$HELM_HOME/ca.pem")
  $HELM_TLS_CERT       path to TLS client certificate file for authenticating to Tiller (default "$HELM_HOME/cert.pem")
  $HELM_TLS_KEY        path to TLS client key file for authenticating to Tiller (default "$HELM_HOME/key.pem")
  $HELM_TLS_ENABLE     enable TLS connection between Helm and Tiller (default "false")
  $HELM_TLS_VERIFY     enable TLS connection between Helm and Tiller and verify Tiller server certificate (default "false")
  $HELM_TLS_HOSTNAME   the hostname or IP address used to verify the Tiller server certificate (default "127.0.0.1")
  $HELM_KEY_PASSPHRASE set HELM_KEY_PASSPHRASE to the passphrase of your PGP private key. If set, you will not be prompted for the passphrase while signing helm charts

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  delete      given a release name, delete the release from Kubernetes
  dependency  manage a chart's dependencies
  fetch       download a chart from a repository and (optionally) unpack it in local directory
  get         download a named release
  help        Help about any command
  history     fetch release history
  home        displays the location of HELM_HOME
  init        initialize Helm on both client and server
  inspect     inspect a chart
  install     install a chart archive
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstalls Tiller from a cluster
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  serve       start a local http web server
  status      displays the status of the named release
  template    locally render templates
  test        test a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client/server version information

Flags:
      --debug                           enable verbose output
  -h, --help                            help for helm
      --home string                     location of your Helm config. Overrides $HELM_HOME (default "/root/.helm")
      --host string                     address of Tiller. Overrides $HELM_HOST
      --kube-context string             name of the kubeconfig context to use
      --kubeconfig string               absolute path to the kubeconfig file to use
      --tiller-connection-timeout int   the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
      --tiller-namespace string         namespace of Tiller (default "kube-system")

Use "helm [command] --help" for more information about a command.
=======================
Helm commands explained:
- helm search:  search for charts
- helm fetch:   download charts to a local directory
- helm install: install charts
- helm list:    list releases of charts

Usage: helm [command]

Available commands:
  completion  generate an autocompletion script for the given shell (bash or zsh)
  create      create a new chart
  delete      delete the release with the given name
  dependency  manage a chart's dependencies
  fetch       download a chart to a local directory (add --untar to unpack it)
  get         download a named release
  history     show the history of a release
  home        display Helm's home directory
  init        initialize Helm on the client and server
  inspect     show detailed information about a chart
  install     install a chart
  lint        check a chart for possible issues
  list        list releases
  package     package a chart directory into an archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstall Tiller
  rollback    roll a release back to a previous revision
  search      search charts by keyword
  serve       start a local HTTP server
  status      show status information of a release
  template    render templates locally
  test        test a release
  upgrade     upgrade a release
  verify      verify a chart's signature and validity
  version     print the client/server version information
=======================
Common Helm commands by category

Charts:
helm search      find available charts
helm inspect     view basic information about a chart
helm install     deploy a release to Kubernetes from the specified chart
helm create      create your own chart
helm package     package a chart, usually into a compressed archive

Release:
helm list                           list deployed releases
helm delete [RELEASE]               delete a release (not physically removed; history is kept for auditing)
helm status [RELEASE]               show information about a release, even one deleted with helm delete
helm upgrade                        upgrade a release
helm rollback [RELEASE] [REVISION]  roll a release back to the given revision
helm get values [RELEASE]           show the configuration values of a release
helm ls --deleted                   list deleted releases

Repo:
helm repo list
helm repo add [RepoName] [RepoUrl]
helm repo update
=======================
Common Helm command examples

Check the version:
helm version
List the installed charts:
helm list
Search for nginx charts:
helm search nginx
Download nginx charts:
helm fetch apphub/nginx           (keep the archive)
helm fetch apphub/nginx --untar   (unpack it)
Show detailed package information:
helm inspect chart
Install a chart:
helm install --name nginx --namespace prod bitnami/nginx
Check the status of a release:
helm status nginx
Delete a release:
helm delete --purge nginx
List the configured chart repositories:
helm repo list
Add chart repositories:
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add --username admin --password password myps https://harbor.pt1.cn/chartrepo/charts
Update the chart repositories:
helm repo update
Create a chart:
helm create helm_charts
Check the chart syntax:
helm lint
Package the chart:
cd helm_charts && helm package ./
Render the generated YAML files:
helm template helm_charts-0.1.1.tgz
Update the image:
helm upgrade --set image.tag='v201908' test update myharbor/study-api-en-oral
Roll back a release:
helm rollback [RELEASE] 2
Customize package options:
Show the supported options:
helm inspect values stable/mysql
Customize the password and persistent storage:
helm install --name db-mysql --set mysqlRootPassword=anoyi stable/mysql
Script to publish to a private Harbor repository:
request_url='https://harbor.qing.cn/api/chartrepo/charts/charts'
user_name='admin'
password='password'
chart_file='helm_charts-0.1.3.tgz'
curl -i -u "$user_name:$password" -k -X POST "${request_url}" \
-H "accept: application/json" \
-H "Content-Type: multipart/form-data" \
-F "chart=@${chart_file};type=application/x-compressed"
echo $result
Set up helm command auto-completion to make helm easier to use:
[root@k8s-master01 helm]# source <(helm completion bash)
[root@k8s-master01 helm]# echo "source <(helm completion bash)" >> ~/.bashrc
Charts are Helm's packages, and they live in chart repositories. The official Kubernetes repository hosts a collection of charts; the default repository name is stable. When a chart is installed into the cluster, Helm first fetches the chart from the repository and then creates a release.
Run helm search to see which charts can currently be installed.
[root@k8s-master01 helm]# helm search
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
stable/acs-engine-autoscaler    2.1.3           2.1.1         Scales worker nodes within agent pools
stable/aerospike                0.1.7           v3.14.1.2     A Helm chart for Aerospike in Kubernetes
stable/anchore-engine           0.1.3           0.1.6         Anchore container analysis and policy evaluation engine s...
stable/artifactory              7.0.3           5.8.4         Universal Repository Manager supporting all major packagi...
stable/artifactory-ha           0.1.0           5.8.4         Universal Repository Manager supporting all major packagi...
stable/aws-cluster-autoscaler   0.3.2                         Scales worker nodes within autoscaling groups.
stable/bitcoind                 0.1.0           0.15.1        Bitcoin is an innovative payment network and a new kind o...
stable/buildkite                0.2.1           3             Agent for Buildkite
stable/centrifugo               2.0.0           1.7.3         Centrifugo is a real-time messaging server.
.......
Where do these charts come from?
As mentioned earlier, Helm manages charts the way yum manages software packages.
yum packages live in repositories, and Helm has repositories too.
Helm is configured with two repositories by default: stable and local.
stable is the official repository; local is a local repository for the charts you develop yourself.
Run helm repo list to view them.
[root@k8s-master01 helm]# helm repo list
NAME    URL
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local   http://127.0.0.1:8879/charts
Run helm repo update to refresh the chart repositories.
The repository update sometimes reports that it cannot connect:
[root@k8s-master01 helm]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
        Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp 172.217.160.112:443: connect: connection refused
Update Complete. ⎈ Happy Helming!⎈
Run helm repo remove to delete the default stable repository.
Remove the default stable repo:
[root@k8s-master01 helm]# helm repo remove stable
"stable" has been removed from your repositories
Helm uses the official source by default, which is too slow to reach from inside China.
Run helm repo add to add domestic mirrors instead.
Add the Alibaba Cloud and Microsoft Azure chart repositories:
[root@k8s-master01 helm]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@k8s-master01 helm]# helm repo add apphub https://apphub.aliyuncs.com
[root@k8s-master01 helm]# helm repo add azure http://mirror.azure.cn/kubernetes/charts
List the chart repositories:
[root@k8s-master01 helm]# helm repo list
NAME    URL
local   http://127.0.0.1:8879/charts
apphub  https://apphub.aliyuncs.com
azure   http://mirror.azure.cn/kubernetes/charts
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Update the repositories again:
[root@k8s-master01 helm]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "apphub" chart repository
...Successfully got an update from the "azure" chart repository
Update Complete. ⎈ Happy Helming!⎈
Like yum, helm also supports keyword search.
Run helm search <keyword>.
[root@k8s-master01 helm]# helm search mysql
NAME                               CHART VERSION   APP VERSION   DESCRIPTION
apphub/mysql                       6.8.0           8.0.19        Chart to create a Highly available MySQL cluster
apphub/mysqldump                   2.6.0           2.4.1         A Helm chart to help backup MySQL databases using mysqldump
apphub/mysqlha                     1.0.0           5.7.13        MySQL cluster with a single master and zero or more slave...
apphub/prometheus-mysql-exporter   0.5.2           v0.11.0       A Helm chart for prometheus mysql exporter with cloudsqlp...
azure/mysql                        1.6.4           5.7.30        Fast, reliable, scalable, and easy to use open-source rel...
azure/mysqldump                    2.6.0           2.4.1         A Helm chart to help backup MySQL databases using mysqldump
azure/prometheus-mysql-exporter    0.6.0           v0.11.0       A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/mysql                       0.3.5                         Fast, reliable, scalable, and easy to use open-source rel...
apphub/percona                     1.2.0           5.7.17        free, fully compatible, enhanced, open source drop-in rep...
apphub/percona-xtradb-cluster      1.0.3           5.7.19        free, fully compatible, enhanced, open source drop-in rep...
apphub/phpmyadmin                  4.2.12          5.0.1         phpMyAdmin is an mysql administration frontend
azure/percona                      1.2.1           5.7.26        free, fully compatible, enhanced, open source drop-in rep...
azure/percona-xtradb-cluster       1.0.4           5.7.19        free, fully compatible, enhanced, open source drop-in rep...
azure/phpmyadmin                   4.3.5           5.0.1         DEPRECATED phpMyAdmin is an mysql administration frontend
stable/percona                     0.3.0                         free, fully compatible, enhanced, open source drop-in rep...
stable/percona-xtradb-cluster      0.0.2           5.7.19        free, fully compatible, enhanced, open source drop-in rep...
apphub/mariadb                     7.3.9           10.3.22       Fast, reliable, scalable, and easy to use open-source rel...
apphub/mariadb-galera              0.8.1           10.4.12       MariaDB Galera is a multi-master database cluster solutio...
azure/gcloud-sqlproxy              0.6.1           1.11          DEPRECATED Google Cloud SQL Proxy
azure/mariadb                      7.3.14          10.3.22       DEPRECATED Fast, reliable, scalable, and easy to use open...
stable/gcloud-sqlproxy             0.2.3                         Google Cloud SQL Proxy
stable/mariadb                     2.1.6           10.1.31       Fast, reliable, scalable, and easy to use open-source rel...
Everything that matches the keyword, including the DESCRIPTION field, shows up in the result list.
Installing a chart is just as simple: pick the MySQL chart you want from the list above.
Run helm install to install MySQL.
[root@k8s-master01 helm]# helm install apphub/mysql -n mysql-pro
NAME:   mysql-pro
LAST DEPLOYED: Thu Jul  2 14:57:03 2020
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME              DATA  AGE
mysql-pro-master  1     0s
mysql-pro-slave   1     0s

==> v1/Pod(related)
NAME                READY  STATUS   RESTARTS  AGE
mysql-pro-master-0  0/1    Pending  0         0s
mysql-pro-slave-0   0/1    Pending  0         0s

==> v1/Secret
NAME       TYPE    DATA  AGE
mysql-pro  Opaque  2     0s

==> v1/Service
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
mysql-pro        ClusterIP  10.97.248.71   <none>       3306/TCP  0s
mysql-pro-slave  ClusterIP  10.102.133.28  <none>       3306/TCP  0s

==> v1/StatefulSet
NAME              READY  AGE
mysql-pro-master  0/1    0s
mysql-pro-slave   0/1    0s

NOTES:
Please be patient while the chart is being deployed

Tip:
  Watch the deployment status using the command: kubectl get pods -w --namespace kube-system

Services:
  echo Master: mysql-pro.kube-system.svc.cluster.local:3306
  echo Slave:  mysql-pro-slave.kube-system.svc.cluster.local:3306

Administrator credentials:
  echo Username: root
  echo Password : $(kubectl get secret --namespace kube-system mysql-pro -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:
  1. Run a pod that you can use as a client:
      kubectl run mysql-pro-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.19-debian-10-r0 --namespace kube-system --command -- bash
  2. To connect to master service (read/write):
      mysql -h mysql-pro.kube-system.svc.cluster.local -uroot -p my_database
  3. To connect to slave service (read-only):
      mysql -h mysql-pro-slave.kube-system.svc.cluster.local -uroot -p my_database

To upgrade this helm chart:
  1. Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:
      ROOT_PASSWORD=$(kubectl get secret --namespace kube-system mysql-pro -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
      helm upgrade mysql-pro bitnami/mysql --set root.password=$ROOT_PASSWORD

The output above falls into three parts:
① A description of this deployment: NAME is the release name — if -n is not given, Helm generates a random one. NAMESPACE is the namespace the release is deployed into, default by default; it can be set with --namespace. STATUS is DEPLOYED, meaning the chart has been deployed to the cluster.
② The resources this release contains: ConfigMap, Pod, Secret, Service, and StatefulSet, all named mysql-pro, following the ReleaseName-ChartName naming pattern.
③ The NOTES section explains how to use the release: how to reach the Service, how to obtain the database password, how to connect to the database, and so on.
Run helm status to check the release status (the output is the same as above):
[root@k8s-master01 helm]# helm status mysql-pro
LAST DEPLOYED: Thu Jul  2 15:04:45 2020
NAMESPACE: kube-system
STATUS: DEPLOYED
Run kubectl get to inspect the individual objects that make up this release:
[root@k8s-master01 helm]# kubectl get ConfigMap -n kube-system | grep mysql-pro
mysql-pro-master   1   50s
mysql-pro-slave    1   50s
mysql-pro.v1       1   50s
[root@k8s-master01 helm]# kubectl get pod -n kube-system | grep mysql-pro
mysql-pro-master-0   0/1   Pending   0   76s
mysql-pro-slave-0    0/1   Pending   0   76s
[root@k8s-master01 helm]# kubectl describe pod mysql-pro-master-0 -n kube-system
......
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling        default-scheduler  running "VolumeBinding" filter plugin for pod "mysql-pro-master-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling        default-scheduler  running "VolumeBinding" filter plugin for pod "mysql-pro-master-0": pod has unbound immediate PersistentVolumeClaims
[root@k8s-master01 helm]# kubectl get Secret -n kube-system | grep mysql-pro
mysql-pro   Opaque   2   2m45s
[root@k8s-master01 helm]# kubectl get service -n kube-system | grep mysql-pro
mysql-pro         ClusterIP   10.97.248.71    <none>   3306/TCP   3m10s
mysql-pro-slave   ClusterIP   10.102.133.28   <none>   3306/TCP   3m10s
[root@k8s-master01 helm]# kubectl get StatefulSet -n kube-system
NAME               READY   AGE
mysql-pro-master   0/1     3m26s
mysql-pro-slave    0/1     3m26s
Because no PersistentVolumes or PersistentVolumeClaims have been prepared yet, this release is not usable for the moment.
Run helm list to view deployed releases (and helm hist <release> for the deployment history of a specific release):
[root@k8s-master01 helm]# helm list
NAME       REVISION   UPDATED                    STATUS     CHART         APP VERSION   NAMESPACE
mysql-pro  1          Thu Jul  2 14:57:03 2020   DEPLOYED   mysql-6.8.0   8.0.19        kube-system
Run helm delete to delete a release.
Run helm delete --purge to remove a release completely.
Delete the release:
[root@k8s-master01 helm]# helm delete mysql-pro
release "mysql-pro" deleted
Confirm whether the release is gone:
[root@k8s-master01 helm]# helm ls --all mysql-pro
NAME       REVISION   UPDATED                    STATUS    CHART         APP VERSION   NAMESPACE
mysql-pro  1          Thu Jul  2 14:57:03 2020   DELETED   mysql-6.8.0   8.0.19        kube-system
The deletion above is not complete; to remove it entirely, run:
[root@k8s-master01 helm]# helm del --purge mysql-pro
release "mysql-pro" deleted
Confirm again:
[root@k8s-master01 helm]# helm ls --all mysql-pro
[root@k8s-master01 helm]# helm hist mysql-pro
Error: release: "mysql-pro" not found
A chart is Helm's application packaging format. A chart is a set of files describing the resources Kubernetes needs to deploy an application, such as Service, Deployment, PersistentVolumeClaim, Secret, ConfigMap, and so on.
A single chart can be very simple and deploy just one service, such as Memcached; it can also be complex and deploy a whole application including HTTP servers, databases, message middleware, caches, and more.
A chart lays these files out in a predefined directory structure; usually the whole chart is packed into a tar archive and tagged with a version so Helm can deploy it.
Take the MySQL chart from earlier as an example.
Once a chart has been installed, its tar archive can be found under /root/.helm/cache/archive. Unpack it and you can see the mysql directory structure, containing the various YAML files.
[root@k8s-master01 helm]# cd /root/.helm/cache/archive/
[root@k8s-master01 archive]# ll
-rw-r--r-- 1 root root 17860 Jul  2 12:39 mysql-6.8.0.tgz
[root@k8s-master01 archive]# tar -zxvf mysql-6.8.0.tgz
[root@k8s-master01 archive]# tree mysql
mysql
├── Chart.yaml
├── ci
│   └── values-production.yaml
├── files
│   └── docker-entrypoint-initdb.d
│       └── README.md
├── README.md
├── templates
│   ├── _helpers.tpl
│   ├── initialization-configmap.yaml
│   ├── master-configmap.yaml
│   ├── master-statefulset.yaml
│   ├── master-svc.yaml
│   ├── NOTES.txt
│   ├── secrets.yaml
│   ├── servicemonitor.yaml
│   ├── slave-configmap.yaml
│   ├── slave-statefulset.yaml
│   └── slave-svc.yaml
├── values-production.yaml
└── values.yaml

4 directories, 17 files
A chart repository stores and shares packaged charts. The official chart repository is maintained by Kubernetes Charts, and Helm also allows you to create private chart repositories.
A chart repository is an HTTP server that can serve an index.yaml file and packaged chart files; to share a chart, upload the chart file to a chart repository. Any HTTP server that can serve YAML and tar files can act as a chart repository: a Google Cloud Storage (GCS) bucket, an Amazon S3 bucket, GitHub Pages, or your own web server.
A chart repository consists of chart packages plus an index.yaml file that indexes every chart in the repository. Below is what a local chart repository looks like.
[root@k8s-master01 helm]# cat /root/.helm/repository/local/index.yaml
apiVersion: v1
entries: {}
generated: 2020-07-02T12:06:29.172607551+08:00
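For reference, once a chart has been added, each entry in index.yaml takes roughly the following shape (a hedged sketch; the field values below are illustrative, not taken from the repository above):

apiVersion: v1
entries:
  mychart:
    - name: mychart
      version: 0.1.0
      appVersion: "1.0"
      description: A Helm chart for Kubernetes
      created: 2020-07-02T16:00:00.000000000+08:00
      digest: <sha256 of the chart archive>
      urls:
        - http://192.168.56.10:8879/charts/mychart-0.1.0.tgz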
There are several ways to create a chart repository; here we create a local one as an example:
[root@k8s-master01 data]# helm serve &
[1] 20630
[root@k8s-master01 data]# Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
By default the server only listens on 127.0.0.1. To bind it to another network interface, use:
[root@k8s-master01 data]# helm serve --address 192.168.56.10:8879 &
[2] 25589
[root@k8s-master01 data]# Regenerating index. This may take a moment.
Now serving you on 192.168.56.10:8879
To use a specific directory as the storage directory of the Helm repository, add the --repo-path flag:
# Create the chart repository directory
[root@k8s-master01 helm]# mkdir -p /root/yaml/helm/data/
[root@k8s-master01 data]# helm serve --address 192.168.56.10:8879 --repo-path /root/yaml/helm/data/ --url http://192.168.56.10:8879/charts/ &
[1] 27156
[root@k8s-master01 data]# Regenerating index. This may take a moment.
Now serving you on 192.168.56.10:8879
[root@k8s-master01 data]# helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
[root@k8s-master01 helm]# helm repo add local-IP http://192.168.56.10:8879/charts/
"local-IP" has been added to your repositories
[root@k8s-master01 helm]# helm repo list
NAME      URL
local     http://127.0.0.1:8879
apphub    https://apphub.aliyuncs.com
azure     http://mirror.azure.cn/kubernetes/charts
stable    https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local-IP  http://192.168.56.10:8879/charts/
Managing the chart repository
The steps above created a local chart repository; next we look at how to maintain charts in it. Charts must use a correct version format that follows SemVer 2.
Once the chart directory exists, package the chart, move the archive into a newly created directory, and record the chart's metadata in index.yaml with the helm repo index command.
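For context, the version that the SemVer 2 rule applies to lives in the chart's Chart.yaml; a minimal, hypothetical example looks like this (helm create generates something very similar):

apiVersion: v1          # chart API version used by Helm 2
name: mychart
version: 0.1.0          # chart version, must follow SemVer 2
appVersion: "1.0"       # version of the application the chart deploys
description: A Helm chart for Kubernetes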
[root@k8s-master01 helm]# helm create mychart
Creating mychart
[root@k8s-master01 helm]# helm package mychart
Successfully packaged chart and saved it to: /root/yaml/helm/mychart-0.1.0.tgz
[root@k8s-master01 helm]# mkdir fantastic-charts
[root@k8s-master01 helm]# mv mychart-0.1.0.tgz fantastic-charts/
[root@k8s-master01 helm]# helm repo index fantastic-charts/ --url=http://192.168.56.10:8879/charts/
Register the chart repository with the Helm client using helm repo add so its charts can be used:
[root@k8s-master01 helm]# helm repo add fantastic-charts http://192.168.56.10:8879/charts/
"fantastic-charts" has been added to your repositories
Check whether the chart made it into the repository:
[root@k8s-master01 helm]# helm repo list
NAME              URL
local             http://127.0.0.1:8879
apphub            https://apphub.aliyuncs.com
azure             http://mirror.azure.cn/kubernetes/charts
stable            https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local-IP          http://192.168.56.10:8879/charts/
fantastic-charts  http://192.168.56.10:8879/charts/
Search for the uploaded chart:
[root@k8s-master01 helm]# helm search mychart -l
NAME           CHART VERSION   APP VERSION   DESCRIPTION
local/mychart  0.1.0           1.0           A Helm chart for Kubernetes
Look at the Helm directory structure:
[root@k8s-master01 helm]# tree /root/.helm/
/root/.helm/
├── cache
│   └── archive
│       ├── mysql
│       │   ├── Chart.yaml
│       │   ├── ci
│       │   │   └── values-production.yaml
│       │   ├── files
│       │   │   └── docker-entrypoint-initdb.d
│       │   │       └── README.md
│       │   ├── README.md
│       │   ├── templates
│       │   │   ├── _helpers.tpl
│       │   │   ├── initialization-configmap.yaml
│       │   │   ├── master-configmap.yaml
│       │   │   ├── master-statefulset.yaml
│       │   │   ├── master-svc.yaml
│       │   │   ├── NOTES.txt
│       │   │   ├── secrets.yaml
│       │   │   ├── servicemonitor.yaml
│       │   │   ├── slave-configmap.yaml
│       │   │   ├── slave-statefulset.yaml
│       │   │   └── slave-svc.yaml
│       │   ├── values-production.yaml
│       │   └── values.yaml
│       └── mysql-6.8.0.tgz
├── plugins
├── repository
│   ├── cache
│   │   ├── apphub-index.yaml
│   │   ├── azure-index.yaml
│   │   ├── fantastic-charts-index.yaml
│   │   ├── local-index.yaml -> /root/.helm/repository/local/index.yaml
│   │   ├── local-IP-index.yaml
│   │   └── stable-index.yaml
│   ├── local
│   │   ├── index.yaml
│   │   └── mychart-0.1.0.tgz
│   └── repositories.yaml
└── starters

12 directories, 27 files
[root@k8s-master01 helm]# cat /root/.helm/repository/cache/apphub-index.yaml | less
[root@k8s-master01 helm]# cat /root/.helm/repository/cache/azure-index.yaml
[root@k8s-master01 helm]# cat /root/.helm/repository/cache/stable-index.yaml
Example 1: deploying an ingress Nginx application with Helm
Create the namespace:
[root@k8s-master01 ~]# kubectl create ns ingress-nginx
namespace/ingress-nginx created
Search for the application to deploy:
[root@k8s-master01 helm]# helm search nginx-ingress
NAME                              CHART VERSION   APP VERSION   DESCRIPTION
apphub/nginx-ingress              1.30.3          0.28.0        An nginx Ingress controller that uses ConfigMap to store ...
apphub/nginx-ingress-controller   5.3.4           0.29.0        Chart for the nginx Ingress controller
azure/nginx-ingress               1.39.1          0.32.0        An nginx Ingress controller that uses ConfigMap to store ...
stable/nginx-ingress              1.39.1          0.32.0        An nginx Ingress controller that uses ConfigMap to store ...
apphub/nginx-lego                 0.3.1                         Chart for nginx-ingress-controller and kube-lego
azure/nginx-lego                  0.3.1                         Chart for nginx-ingress-controller and kube-lego
stable/nginx-lego                 0.3.1                         Chart for nginx-ingress-controller and kube-lego
Install the nginx-ingress chart:
[root@k8s-master01 helm]# helm install apphub/nginx-ingress --set rbac.create=true -n nginx-ingress --namespace ingress-nginx
Check the application pods (they fail because the images hosted abroad cannot be pulled):
[root@k8s-master01 helm]# kubectl get pod
NAME                                             READY   STATUS             RESTARTS   AGE
nginx-ingress-controller-5895796f95-hfk87        0/1     ImagePullBackOff   0          5m27s
nginx-ingress-default-backend-7c868597f4-wm5rp   0/1     ImagePullBackOff   0          5m27s
Check which image the controller pod needs (nginx-ingress-controller-5895796f95-hfk87):
[root@k8s-master01 helm]# kubectl describe pod nginx-ingress-controller-5895796f95-hfk87 | grep image
Pull the image on the nodes (pulling from Docker Hub is slow, so pull from the Alibaba Cloud registry first and retag it with docker tag):
[root@k8s-node01 k8s-images]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
[root@k8s-node01 k8s-images]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
[root@k8s-node02 k8s-images]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
[root@k8s-node02 k8s-images]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
Check which image the default backend pod needs (nginx-ingress-default-backend-7c868597f4-wm5rp):
[root@k8s-master01 helm]# kubectl describe pod nginx-ingress-default-backend-7c868597f4-wm5rp | grep image
Pull the image on the nodes (again via the Alibaba Cloud registry, then retag with docker tag):
[root@k8s-node01 k8s-images]# docker pull registry.cn-hangzhou.aliyuncs.com/lusifeng/defaultbackend-amd64:1.5
[root@k8s-node01 k8s-images]# docker tag registry.cn-hangzhou.aliyuncs.com/lusifeng/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5
[root@k8s-node02 k8s-images]# docker pull registry.cn-hangzhou.aliyuncs.com/lusifeng/defaultbackend-amd64:1.5
[root@k8s-node02 k8s-images]# docker tag registry.cn-hangzhou.aliyuncs.com/lusifeng/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5
Once the images are pulled and retagged, the application pods run normally:
[root@k8s-master01 helm]# kubectl get pod -o wide
NAME                                             READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5895796f95-hfk87        1/1     Running   0          25m   10.244.2.78   k8s-node02
nginx-ingress-default-backend-7c868597f4-wm5rp   1/1     Running   0          25m   10.244.2.75   k8s-node02
[root@k8s-master01 helm]# kubectl get deploy
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller        1/1     1            1           28m
nginx-ingress-default-backend   1/1     1            1           28m
The SVC still has a small problem:
[root@k8s-master01 helm]# kubectl get svc
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.100.184.216   <pending>     80:30792/TCP,443:31994/TCP   28m
nginx-ingress-default-backend   ClusterIP      10.101.4.146     <none>        80/TCP                       28m
You can check whether it is installed with helm ls:
[root@k8s-master01 helm]# helm ls
NAME           REVISION   UPDATED                   STATUS     CHART                  APP VERSION   NAMESPACE
nginx-ingress  1          Sun Jun  7 00:26:30 2020  DEPLOYED   nginx-ingress-1.39.1   0.32.0        mon
We find that the service stays in pending state and cannot serve external traffic. This is because Helm deploys the controller Service as a LoadBalancer by default, a type that needs platform support, for example on AWS, GCE, or Alibaba Cloud. A self-hosted cluster cannot use it directly. Instead, we can set externalIPs to use an internal IP, and then put a unified load balancer in front of it to reach the outside world.
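Equivalent to the --set flag used in the reinstall below, this could also go into a hypothetical values file (a sketch; ingress-values.yaml is a made-up file name and 192.168.56.88 is just the example address used in this walkthrough):

# ingress-values.yaml (hypothetical file)
controller:
  service:
    type: LoadBalancer
    externalIPs:
      - 192.168.56.88
# install with:
# helm install apphub/nginx-ingress -f ingress-values.yaml -n nginx-ingress --namespace ingress-nginx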
The deployment we just made can be removed with helm delete.
Delete the nginx-ingress release:
[root@k8s-master01 helm]# helm delete nginx-ingress
release "nginx-ingress" deleted
[root@k8s-master01 helm]# helm install stable/nginx-ingress --name nginx-ingress
Error: a release named nginx-ingress already exists.
Run: helm ls --all nginx-ingress; to check the status of the release
Or run: helm del --purge nginx-ingress; to delete it
[root@k8s-master01 helm]# helm ls --all nginx-ingress
NAME           REVISION   UPDATED                   STATUS    CHART                  APP VERSION   NAMESPACE
nginx-ingress  1          Sun Jun  7 00:26:30 2020  DELETED   nginx-ingress-1.39.1   0.32.0        mon
[root@k8s-master01 helm]# helm del --purge nginx-ingress
release "nginx-ingress" deleted
[root@k8s-master01 helm]# helm ls --all nginx-ingress
Reinstall the nginx-ingress chart:
[root@k8s-master01 helm]# helm install apphub/nginx-ingress --set controller.service.externalIPs[0]=192.168.56.88 -n nginx-ingress --namespace ingress-nginx
[root@k8s-master01 helm]# kubectl get deployments.apps
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller        1/1     1            1           56s
nginx-ingress-default-backend   1/1     1            1           56s
[root@k8s-master01 helm]# kubectl get svc     (the external IP is now 192.168.56.88)
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.98.177.150   192.168.56.88   80:30985/TCP,443:31203/TCP   62s
nginx-ingress-default-backend   ClusterIP      10.106.41.197   <none>          80/TCP                       61s
[root@k8s-master01 helm]# curl 10.98.177.150
default backend - 404
[root@k8s-master01 helm]# curl 10.106.41.197
default backend - 404
[root@k8s-master01 helm]# curl 192.168.56.88     (not reachable in this lab setup; in a real environment 192.168.56.88 is accessible and forwarded to the internal Cluster-IP)
default backend - 404
Reference: deploying ingress-nginx with Helm
https://blog.csdn.net/u010039418/article/details/99894421
Since there are no services inside the K8s cluster yet, the requests land on nginx-ingress's default backend.
Let's create an nginx web server as a test.
Pull the nginx:1.18 image on the nodes:
[root@k8s-node01 ~]# docker pull nginx:1.18
[root@k8s-node02 ~]# docker pull nginx:1.18
Edit nginx-web.yaml on the master node:
[root@k8s-master01 helm]# vim nginx-web.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: ingress-nginx
  labels:
    app: web
spec:
  hostNetwork: false
  containers:
    - name: nginx
      image: nginx:1.18
---
kind: Service
apiVersion: v1
metadata:
  name: webservice
  namespace: ingress-nginx
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 10080
      targetPort: 80
[root@k8s-master01 helm]# kubectl apply -f nginx-web.yaml
pod/nginx created
[root@k8s-master01 helm]# kubectl get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.98.177.150   192.168.56.88   80:30985/TCP,443:31203/TCP   15m
nginx-ingress-default-backend   ClusterIP      10.106.41.197   <none>          80/TCP                       15m
webservice                      ClusterIP      10.97.67.174    <none>          10080/TCP                    86s
[root@k8s-master01 helm]# kubectl get pod -o wide
NAME                                             READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
nginx                                            1/1     Running   0          23m   10.244.2.96   k8s-node02   <none>           <none>
nginx-ingress-controller-5f866545d4-jvxtb        1/1     Running   0          36m   10.244.2.95   k8s-node02   <none>           <none>
nginx-ingress-default-backend-69b685fc96-cnppk   1/1     Running   0          36m   10.244.2.94   k8s-node02   <none>           <none>
To make testing easier, exec into the nginx pod and change the home page of the test web service:
[root@k8s-master01 helm]# kubectl exec -ti nginx -n ingress-nginx -- /bin/bash
root@nginx:/# echo "This is my webservice!" > /usr/share/nginx/html/index.html
root@nginx:/# cat /usr/share/nginx/html/index.html
This is my webservice!
root@nginx:/# exit
exit
To make sure the service works, test it from inside the cluster through the webservice ClusterIP:
[root@k8s-master01 helm]# kubectl get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.98.177.150   192.168.56.88   80:30985/TCP,443:31203/TCP   19m
nginx-ingress-default-backend   ClusterIP      10.106.41.197   <none>          80/TCP                       19m
webservice                      ClusterIP      10.97.67.174    <none>          10080/TCP                    6m48s
[root@k8s-master01 helm]# curl 10.97.67.174:10080
This is my webservice!
The service works, but for now it is only reachable inside the cluster; it cannot be accessed from outside.
Next we expose it outside the cluster through ingress-nginx.
First decide the externally visible domain name for the service; here we assume it is webservice.com.
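The article does not show the Ingress manifest itself, so here is a hedged sketch of what a matching test-ingress could look like (the resource name and the extensions/v1beta1 API version are assumptions based on the kubectl output below); apply it with kubectl apply -f and the ingress appears as shown next:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: ingress-nginx
spec:
  rules:
    - host: webservice.com
      http:
        paths:
          - path: /
            backend:
              serviceName: webservice
              servicePort: 10080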
[root@k8s-master01 helm]# kubectl get ingress --all-namespaces
NAMESPACE       NAME           CLASS    HOSTS            ADDRESS   PORTS   AGE
ingress-nginx   test-ingress   <none>   webservice.com             80      10s
Once it is created, try accessing it from outside the Kubernetes cluster.
[root@CentOS-7-2 ~]# curl -H "Host: webservice.com" http://192.168.56.88:80
This is my webservice!
So with ingress-nginx, the service inside the cluster can also be reached from outside the cluster.
At this point you can look at the configuration file inside ingress-nginx:
[root@k8s-master01 helm]# kubectl exec -ti nginx-ingress-controller-5f866545d4-jvxtb -n ingress-nginx -- cat /etc/nginx/nginx.conf
...
        upstream ingress-nginx-webservice-10080 {
                # Load balance algorithm; empty for round robin, which is the default
                least_conn;
                keepalive 32;
                server 10.244.2.96:80 max_fails=0 fail_timeout=0;
        }
...
        ## start server webservice.com
        server {
                server_name webservice.com ;
                listen 80;
                listen [::]:80;
                ...
                location / {
                        port_in_redirect off;
                        set $proxy_upstream_name "ingress-nginx-webservice-10080";
                        ...
                }
                ...
        }
        ## end server webservice.com
As you can see, ingress-nginx has already generated the forwarding rules for webservice automatically; there is nothing to add by hand. Now check which pod the backend IP 10.244.2.96 belongs to.
[root@k8s-master01 helm]# kubectl get pods --all-namespaces -o wide | grep 10.244.2.96
ingress-nginx   nginx   1/1   Running   0   25m   10.244.2.96   k8s-node02   <none>   <none>
It is exactly the nginx service we started.
With that, a service inside the cluster has been exposed to the outside.
Next, a classic three-tier application, WordPress, consisting of MySQL, PHP and Apache.
Search for the application to deploy:
[root@k8s-master01 helm]# helm search wordpress
NAME               CHART VERSION   APP VERSION   DESCRIPTION
apphub/wordpress   8.1.3           5.3.2         Web publishing platform for building blogs and websites.
azure/wordpress    9.0.3           5.3.2         DEPRECATED Web publishing platform for building blogs and...
stable/wordpress   0.8.8           4.9.4         Web publishing platform for building blogs and websites.
Install the application:
[root@k8s-master01 helm]# helm install --name wordpress-test --set "persistence.enabled=false,mariadb.persistence.enabled=false" apphub/wordpress
NAME:   wordpress-test
LAST DEPLOYED: Thu Jul  2 15:45:25 2020
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                          DATA  AGE
wordpress-test-mariadb        1     1s
wordpress-test-mariadb-tests  1     1s

==> v1/Deployment
NAME            READY  UP-TO-DATE  AVAILABLE  AGE
wordpress-test  0/1    1           0          1s

==> v1/Pod(related)
NAME                             READY  STATUS             RESTARTS  AGE
wordpress-test-5495b7fc87-2k4c6  0/1    ContainerCreating  0         0s
wordpress-test-mariadb-0         0/1    Pending            0         0s

==> v1/Secret
NAME                    TYPE    DATA  AGE
wordpress-test          Opaque  1     1s
wordpress-test-mariadb  Opaque  2     1s

==> v1/Service
NAME                    TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
wordpress-test          LoadBalancer  10.100.184.216  <pending>    80:31977/TCP,443:31916/TCP  1s
wordpress-test-mariadb  ClusterIP     10.100.156.107  <none>       3306/TCP                    1s

==> v1/StatefulSet
NAME                    READY  AGE
wordpress-test-mariadb  0/1    1s

NOTES:
1. Get the WordPress URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace kube-system -w wordpress-test'
  export SERVICE_IP=$(kubectl get svc --namespace kube-system wordpress-test --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo "WordPress URL: http://$SERVICE_IP/"
  echo "WordPress Admin URL: http://$SERVICE_IP/admin"

2. Login with the following credentials to see your blog

  echo Username: user
  echo Password: $(kubectl get secret --namespace kube-system wordpress-test -o jsonpath="{.data.wordpress-password}" | base64 --decode)
Check the application pods (they fail because the images hosted abroad cannot be pulled):
[root@k8s-master01 helm]# kubectl get pod -o wide | grep wordpress
wordpress-test-78db4789f8-xlhmr   0/1   ContainerCreating   0   95s          k8s-node02
wordpress-test-mariadb-0          0/1   Pending             0   94s
Check which image the wordpress pod needs (wordpress-test-78db4789f8-xlhmr):
[root@k8s-master01 helm]# kubectl describe pod wordpress-test-5495b7fc87-2k4c6 | grep image
  Normal  Pulling  11m  kubelet, localhost.localdomain  Pulling image "docker.io/bitnami/wordpress:5.3.2-debian-10-r0"
Pull the image on the nodes (Docker Hub is slow and no suitable copy was found in the Alibaba Cloud registry):
[root@k8s-node01 k8s-images]# docker pull docker.io/bitnami/wordpress:5.3.2-debian-10-r0
[root@k8s-node02 k8s-images]# docker pull docker.io/bitnami/wordpress:5.3.2-debian-10-r0
Check the mariadb pod (wordpress-test-mariadb-0):
[root@k8s-master01 helm]# kubectl describe pod wordpress-test-mariadb-0
......
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling        default-scheduler  running "VolumeBinding" filter plugin for pod "wordpress-test-mariadb-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling        default-scheduler  running "VolumeBinding" filter plugin for pod "wordpress-test-mariadb-0": pod has unbound immediate PersistentVolumeClaims
Both the image problem and the PersistentVolumeClaims problem above need to be resolved.
[root@k8s-master01 helm]# kubectl get deploy -o wide | grep wordpress
wordpress-test   0/1   1   0   13m   wordpress   docker.io/bitnami/wordpress:5.3.2-debian-10-r0   app=wordpress-test,release=wordpress-test
[root@k8s-master01 helm]# kubectl get svc -o wide | grep wordpress
wordpress-test           LoadBalancer   10.96.19.254     <pending>   80:30953/TCP,443:31387/TCP   13m   app.kubernetes.io/instance=wordpress-test,app.kubernetes.io/name=wordpress
wordpress-test-mariadb   ClusterIP      10.110.43.128    <none>      3306/TCP                     13m   app=mariadb,component=master,release=wordpress-test
[root@k8s-master01 helm]# kubectl get svc -o wide | grep wordpress
wordpress-test           LoadBalancer   10.100.184.216   <pending>   80:31977/TCP,443:31916/TCP   14m   app=wordpress-test
wordpress-test-mariadb   ClusterIP      10.100.156.107   <none>      3306/TCP                     14m   app=mariadb,component=master,release=wordpress-test
[root@k8s-master01 helm]# kubectl get secret --namespace default wordpress-test-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode
Open the page in a browser and log in with the username and password to see the freshly built WordPress blog.
Upgrading an application
When a new chart version is released, or you want to change the configuration of an existing release, use helm upgrade, for example:
[root@k8s-master01 helm]# helm upgrade wordpress-test --set "persistence.enabled=true,mariadb.persistence.enabled=true" stable/wordpress
The examples above already show Helm's powerful release version management: "helm list -a" shows which releases exist, "helm hist" shows the historical revisions of a specific release, and "helm get --revision" shows the concrete application configuration of a particular historical revision of a release. Even deleted releases keep their records, and Helm can quickly roll back to any previously published revision of a deleted release. Kubernetes itself has no native support for this kind of version management.
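A short, hedged sketch of that workflow, reusing the wordpress-test release name from above (the revision numbers are illustrative):

helm list -a                           # all releases, including deleted ones
helm hist wordpress-test               # revisions of this release
helm get --revision 1 wordpress-test   # full rendered configuration of revision 1
helm rollback wordpress-test 1         # roll back to revision 1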