
kubernetes 1.24.2实战与源码

第1章 准备工作

1.1 关于Kubernetes的介绍与核心对象概念 

关于Kubernetes的介绍与核心对象概念-阿里云开发者社区

k8s架构

 

 

核心对象

 

 

使用kubeadm 10分钟部署k8s集群

使用 KuboardSpray 安装kubernetes_v1.23.1 | Kuboard

k8s-上部署第一个应用程序

Deployment基本概念

给应用添加service,执行扩容和滚动更新

 安装Kuboard在页面上熟悉k8s集群

kubernetes 1.24.2安装kuboard v3 

static pod 安装 kuboard

安装命令

  1. curl -fsSL https://addons.kuboard.cn/kuboard/kuboard-static-pod.sh -o kuboard.sh
  2. sh kuboard.sh

阅读k8s源码的准备工作

vscode

下载k8s 1.24.2源码

k8s组件代码仓库地址

第2章 创建pod时kubectl的执行流程和它的设计模式

 

2.1 使用kubectl部署一个简单的nginx-pod

从创建pod开始看流程和源码

Kubernetes源码分析一叶知秋(一)kubectl中Pod的创建流程 - 掘金

编写一个创建nginx pod的yaml

使用kubectl部署这个pod

2.2 命令行解析工具cobra的使用

cobra中的主要概念

cobra 中有三个重要的概念,分别是 commands、arguments 和 flags。其中 commands 代表执行动作,arguments 是执行参数,flags 是这些动作的标识符。执行命令行程序时的一般格式为:
APPNAME COMMAND ARG --FLAG
比如下面的例子:

  1. # server是 commands,port 是 flag
  2. hugo server --port=1313
  3. # clone 是 commands,URL 是 arguments,bare 是 flag
  4. git clone URL --bare
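为了更直观地体会 commands、arguments 和 flags 的关系,下面给出一个最小的 cobra 示意程序(echo 子命令、--upper 标志等都是假设的演示内容,并非 kubectl 源码):

package main

import (
    "fmt"
    "strings"

    "github.com/spf13/cobra"
)

func main() {
    var upper bool // --upper 是 flag

    // echo 是 command,后面的位置参数是 arguments
    echoCmd := &cobra.Command{
        Use:   "echo [strings...]",
        Short: "把参数原样打印出来",
        Args:  cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            out := strings.Join(args, " ")
            if upper {
                out = strings.ToUpper(out)
            }
            fmt.Println(out)
        },
    }
    echoCmd.Flags().BoolVar(&upper, "upper", false, "是否转成大写输出")

    rootCmd := &cobra.Command{Use: "demo"}
    rootCmd.AddCommand(echoCmd)
    if err := rootCmd.Execute(); err != nil {
        fmt.Println(err)
    }
}

运行 go run main.go echo hello world --upper 时,echo 是 command,hello world 是 arguments,--upper 是 flag。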

kubectl create命令的执行入口在cmd目录下各个组件对应的目录中

代码位置/home/gopath/src/k8s.io/kubernetes-1.24.2/cmd/kubectl/kubectl.go

  1. package main
  2. import (
  3. "k8s.io/component-base/cli"
  4. "k8s.io/kubectl/pkg/cmd"
  5. "k8s.io/kubectl/pkg/cmd/util"
  6. // Import to initialize client auth plugins.
  7. _ "k8s.io/client-go/plugin/pkg/client/auth"
  8. )
  9. func main() {
  10. command := cmd.NewDefaultKubectlCommand()
  11. if err := cli.RunNoErrOutput(command); err != nil {
  12. // Pretty-print the error and exit with an error.
  13. util.CheckErr(err)
  14. }
  15. }

 rand.Seed设置随机数

调用kubectl库的cmd包创建command对象

command := cmd.NewDefaultKubectlCommand()

D:\Workspace\Go\src\k8s.io\kubernetes@v0.24.2\staging\src\k8s.io\kubectl\pkg\cmd\cmd.go

staging\src\k8s.io\kubectl\pkg\cmd\cmd.go

github.com/spf13/cobra

Cobra主要提供的功能如下

*   简易的子命令行模式,如 app server, app fetch等等
*   完全兼容posix命令行模式
*   嵌套子命令subcommand
*   支持全局,局部,串联flags
*   使用Cobra很容易的生成应用程序和命令,使用cobra create appname和cobra add cmdname
*   如果命令输入错误,将提供智能建议,如 app srver,将提示srver没有,是否是app server
*   自动生成commands和flags的帮助信息
*   自动生成详细的help信息,如app help
*   自动识别-h,--help帮助flag
*   自动生成应用程序在bash下命令自动完成功能
*   自动生成应用程序的man手册
*   命令行别名
*   自定义help和usage信息
*   可选的紧密集成的[viper](http://github.com/spf13/viper) apps

创建cobra应用

  1. go install github.com/spf13/cobra-cli@latest
  2. mkdir my_cobra
  3. cd my_cobra
  4. // 打开my_cobra项目,执行go mod init后可以看到相关的文件
  5. go mod init github.com/spf13/my_cobra
  6. find
  7. go run main.go
  8. // 修改root.go
  9. // Uncomment the following line if your bare application
  10. // has an action associated with it:
  11. // Run: func(cmd *cobra.Command, args []string) { },
  12. Run: func(cmd *cobra.Command, args []string) {
  13. fmt.Println("my_cobra")
  14. },
  15. // 编译运行后打印
  16. go run main.go
  17. [root@k8s-worker02 my_cobra]# go run main.go
  18. # github.com/spf13/my_cobra/cmd
  19. cmd/root.go:29:10: undefined: fmt
  20. [root@k8s-worker02 my_cobra]# go run main.go
  21. my_cobra
  22. // 用cobra程序生成应用程序框架
  23. cobra-cli init
  24. // 除了init生成应用程序框架,还可以通过cobra-cli add命令生成子命令的代码文件,比如下面的命令会添加两个子命令image和container相关的代码文件:
  25. cobra-cli add image
  26. cobra-cli add container
  27. [root@k8s-worker02 my_cobra]# find
  28. .
  29. ./go.mod
  30. ./main.go
  31. ./cmd
  32. ./cmd/root.go
  33. ./cmd/image.go
  34. ./LICENSE
  35. ./go.sum
  36. [root@k8s-worker02 my_cobra]# cobra-cli add container
  37. container created at /home/gopath/src/my_cobra
  38. [root@k8s-worker02 my_cobra]# go run main.go image
  39. image called
  40. [root@k8s-worker02 my_cobra]# go run main.go container
  41. container called

可以看出执行的是对应xxxCmd下的Run方法

  1. // containerCmd represents the container command
  2. var containerCmd = &cobra.Command{
  3. Use: "container",
  4. Short: "A brief description of your command",
  5. Long: `A longer description that spans multiple lines and likely contains examples
  6. and usage of using your command. For example:
  7. Cobra is a CLI library for Go that empowers applications.
  8. This application is a tool to generate the needed files
  9. to quickly create a Cobra application.`,
  10. Run: func(cmd *cobra.Command, args []string) {
  11. fmt.Println("container called")
  12. },
  13. }

复制cmd/container.go为cmd/version.go,添加version信息

  1. package cmd
  2. import (
  3. "fmt"
  4. "github.com/spf13/cobra"
  5. )
  6. // versionCmd represents the version command
  7. var versionCmd = &cobra.Command{
  8. Use: "version",
  9. Short: "A brief description of your command",
  10. Long: `A longer description that spans multiple lines and likely contains examples
  11. and usage of using your command. For example:
  12. Cobra is a CLI library for Go that empowers applications.
  13. This application is a tool to generate the needed files
  14. to quickly create a Cobra application.`,
  15. Run: func(cmd *cobra.Command, args []string) {
  16. fmt.Println("my_cobra version is v1.0")
  17. },
  18. }
  19. func init() {
  20. rootCmd.AddCommand(versionCmd)
  21. }

 

设置一个MinimumNArgs的验证

新增一个cmd/times.go

  1. package cmd
  2. import (
  3. "fmt"
  4. "strings"
  5. "github.com/spf13/cobra"
  6. )
  7. // timesCmd represents the times command
  8. var echoTimes int
  9. var timesCmd = &cobra.Command{
  10. Use: "times [string to echo]",
  11. Short: "Echo anything to the screen more times",
  12. Long: `echo things multiple times back to the user by providing a count and a string`,
  13. Args: cobra.MinimumNArgs(1),
  14. Run: func(cmd *cobra.Command, args []string) {
  15. for i := 0; i < echoTimes; i++ {
  16. fmt.Println("Echo: " + strings.Join(args, " "))
  17. }
  18. },
  19. }
  20. func init() {
  21. rootCmd.AddCommand(timesCmd)
  22. timesCmd.Flags().IntVarP(&echoTimes, "times", "t", 1, "times to echo the input")
  23. }

因为我们为timesCmd命令设置了Args: cobra.MinimumNArgs(1),所以必须为times子命令传入一个参数,不然times子命令会报错:

go run main.go times -t=4 k8s

[root@k8s-worker02 my_cobra]# go run main.go times -t=4 k8s
Echo: k8s
Echo: k8s
Echo: k8s
Echo: k8s

修改rootCmd

  1. PersistentPreRun: func(cmd *cobra.Command, args []string) {
  2. fmt.Printf("[step_1]PersistentPreRun with args: %v\n", args)
  3. },
  4. PreRun: func(cmd *cobra.Command, args []string) {
  5. fmt.Printf("[step_2]PreRun with args: %v\n", args)
  6. },
  7. Run: func(cmd *cobra.Command, args []string) {
  8. fmt.Printf("[step_3]my_cobra version is v1.0: %v\n", args")
  9. },
  10. PostRun: func(cmd *cobra.Command, args []string) {
  11. fmt.Printf("[step_4]PostRun with args: %v\n", args)
  12. },
  13. PersistentPostRun: func(cmd *cobra.Command, args []string) {
  14. fmt.Printf("[step_5]PersistentPostRun with args: %v\n", args)
  15. },

[root@k8s-worker02 my_cobra]# go run main.go 
[step_1]PersistentPreRun with args: []
[step_2]PreRun with args: []
[step_3]my_cobra version is v1.0: []
[step_4]PostRun with args: []
[step_5]PersistentPostRun with args: []

2.3 kubectl命令行设置pprof抓取火焰图

cmd调用入口

D:\Workspace\Go\src\k8s.io\kubernetes@v0.24.2\staging\src\k8s.io\kubectl\pkg\cmd\cmd.go

  1. // NewDefaultKubectlCommand creates the `kubectl` command with default arguments
  2. func NewDefaultKubectlCommand() *cobra.Command {
  3. return NewDefaultKubectlCommandWithArgs(KubectlOptions{
  4. PluginHandler: NewDefaultPluginHandler(plugin.ValidPluginFilenamePrefixes),
  5. Arguments: os.Args,
  6. ConfigFlags: defaultConfigFlags,
  7. IOStreams: genericclioptions.IOStreams{In: os.Stdin, Out: os.Stdout, ErrOut: os.Stderr},
  8. })
  9. }

底层函数NewKubectlCommand解析

func NewKubectlCommand(o KubectlOptions) *cobra.Command {}

使用cobra创建rootCmd

  1. // Parent command to which all subcommands are added.
  2. cmds := &cobra.Command{
  3. Use: "kubectl",
  4. Short: i18n.T("kubectl controls the Kubernetes cluster manager"),
  5. Long: templates.LongDesc(`
  6. kubectl controls the Kubernetes cluster manager.
  7. Find more information at:
  8. https://kubernetes.io/docs/reference/kubectl/overview/`),
  9. Run: runHelp,
  10. // Hook before and after Run initialize and write profiles to disk,
  11. // respectively.
  12. PersistentPreRunE: func(*cobra.Command, []string) error {
  13. rest.SetDefaultWarningHandler(warningHandler)
  14. return initProfiling()
  15. },
  16. PersistentPostRunE: func(*cobra.Command, []string) error {
  17. if err := flushProfiling(); err != nil {
  18. return err
  19. }
  20. if warningsAsErrors {
  21. count := warningHandler.WarningCount()
  22. switch count {
  23. case 0:
  24. // no warnings
  25. case 1:
  26. return fmt.Errorf("%d warning received", count)
  27. default:
  28. return fmt.Errorf("%d warnings received", count)
  29. }
  30. }
  31. return nil
  32. },
  33. }

配合后面的addProfilingFlags(flags)添加pprof的flag

在persistentPreRunE设置pprof采集相关指令

代码位置

staging\src\k8s.io\kubectl\pkg\cmd\profiling.go

这里提供了两个命令行选项

--profile代表pprof统计哪类指标,可以是cpu,block等

--profile-output代表输出的pprof结果文件

initProfiling代码

  1. func addProfilingFlags(flags *pflag.FlagSet) {
  2. flags.StringVar(&profileName, "profile", "none", "Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)")
  3. flags.StringVar(&profileOutput, "profile-output", "profile.pprof", "Name of the file to write the profile to")
  4. }
  5. func initProfiling() error {
  6. var (
  7. f *os.File
  8. err error
  9. )
  10. switch profileName {
  11. case "none":
  12. return nil
  13. case "cpu":
  14. f, err = os.Create(profileOutput)
  15. if err != nil {
  16. return err
  17. }
  18. err = pprof.StartCPUProfile(f)
  19. if err != nil {
  20. return err
  21. }
  22. // Block and mutex profiles need a call to Set{Block,Mutex}ProfileRate to
  23. // output anything. We choose to sample all events.
  24. case "block":
  25. runtime.SetBlockProfileRate(1)
  26. case "mutex":
  27. runtime.SetMutexProfileFraction(1)
  28. default:
  29. // Check the profile name is valid.
  30. if profile := pprof.Lookup(profileName); profile == nil {
  31. return fmt.Errorf("unknown profile '%s'", profileName)
  32. }
  33. }
  34. // If the command is interrupted before the end (ctrl-c), flush the
  35. // profiling files
  36. c := make(chan os.Signal, 1)
  37. signal.Notify(c, os.Interrupt)
  38. go func() {
  39. <-c
  40. f.Close()
  41. flushProfiling()
  42. os.Exit(0)
  43. }()
  44. return nil
  45. }

并且在PersistentPostRunE中设置了pprof统计结果落盘

  1. PersistentPostRunE: func(*cobra.Command, []string) error {
  2. if err := flushProfiling(); err != nil {
  3. return err
  4. }
  5. if warningsAsErrors {
  6. count := warningHandler.WarningCount()
  7. switch count {
  8. case 0:
  9. // no warnings
  10. case 1:
  11. return fmt.Errorf("%d warning received", count)
  12. default:
  13. return fmt.Errorf("%d warnings received", count)
  14. }
  15. }
  16. return nil
  17. },

对应执行的flushProfiling

  1. func flushProfiling() error {
  2. switch profileName {
  3. case "none":
  4. return nil
  5. case "cpu":
  6. pprof.StopCPUProfile()
  7. case "heap":
  8. runtime.GC()
  9. fallthrough
  10. default:
  11. profile := pprof.Lookup(profileName)
  12. if profile == nil {
  13. return nil
  14. }
  15. f, err := os.Create(profileOutput)
  16. if err != nil {
  17. return err
  18. }
  19. defer f.Close()
  20. profile.WriteTo(f, 0)
  21. }
  22. return nil
  23. }

执行采集pprof cpu的kubectl命令

  1. # 执行命令
  2. kubectl get node --profile=cpu --profile-output=cpu.pprof
  3. # 查看结果文件
  4. ll cpu.pprof
  5. # 生成svg
  6. go tool pprof -svg cpu.pprof > kubectl_get_node_cpu.svg
  7. kubectl get node --profile=goroutine --profile-output=goroutine.pprof
  8. go tool pprof -text goroutine.pprof

cpu火焰图svg结果

2.4 kubectl命令行设置7大命令分组

kubectl架构图

用cmd工厂函数f创建7大分组命令

基础初级命令Basic Commands(Beginner)

基础中级命令Basic Commands(Intermediate)

部署命令Deploy Commands

集群管理分组 Cluster Management Commands

故障排查和调试Troubleshooting and Debugging Commands

高级命令Advanced Commands

设置命令Settings Commands

设置全局参数(persistent flags)

  1. flags := cmds.PersistentFlags()
  2. addProfilingFlags(flags)
  3. flags.BoolVar(&warningsAsErrors, "warnings-as-errors", warningsAsErrors, "Treat warnings received from the server as errors and exit with a non-zero exit code")

设置kubeconfig相关的命令行

  1. kubeConfigFlags := o.ConfigFlags
  2. if kubeConfigFlags == nil {
  3. kubeConfigFlags = defaultConfigFlags
  4. }
  5. kubeConfigFlags.AddFlags(flags)
  6. matchVersionKubeConfigFlags := cmdutil.NewMatchVersionFlags(kubeConfigFlags)
  7. matchVersionKubeConfigFlags.AddFlags(flags)

设置cmd工厂函数f,主要是封装了与kube-apiserver交互的客户端

后面的子命令都使用这个f创建

	f := cmdutil.NewFactory(matchVersionKubeConfigFlags)

创建proxy子命令

  1. proxyCmd := proxy.NewCmdProxy(f, o.IOStreams)
  2. proxyCmd.PreRun = func(cmd *cobra.Command, args []string) {
  3. kubeConfigFlags.WrapConfigFn = nil
  4. }

创建7大分组命令

1.基础初级命令Basic Commands (Beginner)

代码

  1. {
  2. Message: "Basic Commands (Beginner):",
  3. Commands: []*cobra.Command{
  4. create.NewCmdCreate(f, o.IOStreams),
  5. expose.NewCmdExposeService(f, o.IOStreams),
  6. run.NewCmdRun(f, o.IOStreams),
  7. set.NewCmdSet(f, o.IOStreams),
  8. },
  9. },

命令行使用kubectl

对应的输出

 释义

create代表创建资源

expose将一种资源暴露成service

run运行一个镜像

set在对象上设置一些功能

2.基础中级命令Basic Commands(Intermediate)

  1. {
  2. Message: "Basic Commands (Intermediate):",
  3. Commands: []*cobra.Command{
  4. explain.NewCmdExplain("kubectl", f, o.IOStreams),
  5. getCmd,
  6. edit.NewCmdEdit(f, o.IOStreams),
  7. delete.NewCmdDelete(f, o.IOStreams),
  8. },
  9. },

打印的help效果

释义

explain获取资源的文档

get显示资源

edit编辑资源

delete删除资源

3.部署命令Deploy Commands

  1. {
  2. Message: "Deploy Commands:",
  3. Commands: []*cobra.Command{
  4. rollout.NewCmdRollout(f, o.IOStreams),
  5. scale.NewCmdScale(f, o.IOStreams),
  6. autoscale.NewCmdAutoscale(f, o.IOStreams),
  7. },
  8. },

释义

rollout滚动更新

scale扩缩容

autoscale自动扩缩容

4.集群管理分组Cluster Management Commands:

  1. {
  2. Message: "Cluster Management Commands:",
  3. Commands: []*cobra.Command{
  4. certificates.NewCmdCertificate(f, o.IOStreams),
  5. clusterinfo.NewCmdClusterInfo(f, o.IOStreams),
  6. top.NewCmdTop(f, o.IOStreams),
  7. drain.NewCmdCordon(f, o.IOStreams),
  8. drain.NewCmdUncordon(f, o.IOStreams),
  9. drain.NewCmdDrain(f, o.IOStreams),
  10. taint.NewCmdTaint(f, o.IOStreams),
  11. },
  12. },

释义

certificate管理证书

cluster-info展示集群信息

top展示资源消耗top

cordon将节点标记为不可用

uncordon将节点标记为可用

drain驱逐pod

taint设置节点污点

5.故障排查和调试Troubleshooting and Debugging Commands

  1. {
  2. Message: "Troubleshooting and Debugging Commands:",
  3. Commands: []*cobra.Command{
  4. describe.NewCmdDescribe("kubectl", f, o.IOStreams),
  5. logs.NewCmdLogs(f, o.IOStreams),
  6. attach.NewCmdAttach(f, o.IOStreams),
  7. cmdexec.NewCmdExec(f, o.IOStreams),
  8. portforward.NewCmdPortForward(f, o.IOStreams),
  9. proxyCmd,
  10. cp.NewCmdCp(f, o.IOStreams),
  11. auth.NewCmdAuth(f, o.IOStreams),
  12. debug.NewCmdDebug(f, o.IOStreams),
  13. },
  14. },

 输出

 释义

describe展示资源详情

logs打印pod中容器日志

attach进入容器

exec在容器中执行命令

port-forward端口转发

proxy运行代理

cp拷贝文件

auth检查鉴权

debug创建调试会话,用于排查工作负载和节点的问题

6.高级命令Advanced Commands

代码

  1. {
  2. Message: "Advanced Commands:",
  3. Commands: []*cobra.Command{
  4. diff.NewCmdDiff(f, o.IOStreams),
  5. apply.NewCmdApply("kubectl", f, o.IOStreams),
  6. patch.NewCmdPatch(f, o.IOStreams),
  7. replace.NewCmdReplace(f, o.IOStreams),
  8. wait.NewCmdWait(f, o.IOStreams),
  9. kustomize.NewCmdKustomize(o.IOStreams),
  10. },
  11. },

输出

 释义

diff对比当前和应该运行的版本

apply应用变更或配置

patch更新资源的字段

replace替换资源

wait等待资源的特定状态

kustomize从目录或远程url构建kustomization目标

7.设置命令Settings Commands

代码

  1. {
  2. Message: "Settings Commands:",
  3. Commands: []*cobra.Command{
  4. label.NewCmdLabel(f, o.IOStreams),
  5. annotate.NewCmdAnnotate("kubectl", f, o.IOStreams),
  6. completion.NewCmdCompletion(o.IOStreams.Out, ""),
  7. },
  8. },

输出

释义

label打标签

annotate更新注释

completion在shell上设置补全

本节重点总结

设置cmd工厂函数f,主要是封装了与kube-apiserver交互的客户端

用cmd工厂函数f创建7大分组命令

2.5 create命令执行流程

kubectl create架构图

 create流程

NewCmdCreate调用cobra的Run函数

调用RunCreate构建resourceBuilder对象

调用visit方法创建资源

底层使用RESTClient和kube-apiserver通信

create的流程NewCmdCreate

代码入口staging\src\k8s.io\kubectl\pkg\cmd\create\create.go

创建Create选项对象

o := NewCreateOptions(ioStreams)

初始化cmd

  1. cmd := &cobra.Command{
  2. Use: "create -f FILENAME",
  3. DisableFlagsInUseLine: true,
  4. Short: i18n.T("Create a resource from a file or from stdin"),
  5. Long: createLong,
  6. Example: createExample,
  7. Run: func(cmd *cobra.Command, args []string) {
  8. if cmdutil.IsFilenameSliceEmpty(o.FilenameOptions.Filenames, o.FilenameOptions.Kustomize) {
  9. ioStreams.ErrOut.Write([]byte("Error: must specify one of -f and -k\n\n"))
  10. defaultRunFunc := cmdutil.DefaultSubCommandRun(ioStreams.ErrOut)
  11. defaultRunFunc(cmd, args)
  12. return
  13. }
  14. cmdutil.CheckErr(o.Complete(f, cmd))
  15. cmdutil.CheckErr(o.ValidateArgs(cmd, args))
  16. cmdutil.CheckErr(o.RunCreate(f, cmd))
  17. },
  18. }

 设置选项

具体绑定到o的各个字段上

  1. // bind flag structs
  2. o.RecordFlags.AddFlags(cmd)
  3. usage := "to use to create the resource"
  4. cmdutil.AddFilenameOptionFlags(cmd, &o.FilenameOptions, usage)
  5. cmdutil.AddValidateFlags(cmd)
  6. cmd.Flags().BoolVar(&o.EditBeforeCreate, "edit", o.EditBeforeCreate, "Edit the API resource before creating")
  7. cmd.Flags().Bool("windows-line-endings", runtime.GOOS == "windows",
  8. "Only relevant if --edit=true. Defaults to the line ending native to your platform.")
  9. cmdutil.AddApplyAnnotationFlags(cmd)
  10. cmdutil.AddDryRunFlag(cmd)
  11. cmdutil.AddLabelSelectorFlagVar(cmd, &o.Selector)
  12. cmd.Flags().StringVar(&o.Raw, "raw", o.Raw, "Raw URI to POST to the server. Uses the transport specified by the kubeconfig file.")
  13. cmdutil.AddFieldManagerFlagVar(cmd, &o.fieldManager, "kubectl-create")
  14. o.PrintFlags.AddFlags(cmd)

绑定创建子命令

  1. // create subcommands
  2. cmd.AddCommand(NewCmdCreateNamespace(f, ioStreams))
  3. cmd.AddCommand(NewCmdCreateQuota(f, ioStreams))
  4. cmd.AddCommand(NewCmdCreateSecret(f, ioStreams))
  5. cmd.AddCommand(NewCmdCreateConfigMap(f, ioStreams))
  6. cmd.AddCommand(NewCmdCreateServiceAccount(f, ioStreams))
  7. cmd.AddCommand(NewCmdCreateService(f, ioStreams))
  8. cmd.AddCommand(NewCmdCreateDeployment(f, ioStreams))
  9. cmd.AddCommand(NewCmdCreateClusterRole(f, ioStreams))
  10. cmd.AddCommand(NewCmdCreateClusterRoleBinding(f, ioStreams))
  11. cmd.AddCommand(NewCmdCreateRole(f, ioStreams))
  12. cmd.AddCommand(NewCmdCreateRoleBinding(f, ioStreams))
  13. cmd.AddCommand(NewCmdCreatePodDisruptionBudget(f, ioStreams))
  14. cmd.AddCommand(NewCmdCreatePriorityClass(f, ioStreams))
  15. cmd.AddCommand(NewCmdCreateJob(f, ioStreams))
  16. cmd.AddCommand(NewCmdCreateCronJob(f, ioStreams))
  17. cmd.AddCommand(NewCmdCreateIngress(f, ioStreams))
  18. cmd.AddCommand(NewCmdCreateToken(f, ioStreams))

核心的cmd.Run函数

校验文件参数

  1. if cmdutil.IsFilenameSliceEmpty(o.FilenameOptions.Filenames, o.FilenameOptions.Kustomize) {
  2. ioStreams.ErrOut.Write([]byte("Error: must specify one of -f and -k\n\n"))
  3. defaultRunFunc := cmdutil.DefaultSubCommandRun(ioStreams.ErrOut)
  4. defaultRunFunc(cmd, args)
  5. return
  6. }

完善并填充所需字段

			cmdutil.CheckErr(o.Complete(f, cmd))

校验参数

			cmdutil.CheckErr(o.ValidateArgs(cmd, args))

核心的RunCreate

			cmdutil.CheckErr(o.RunCreate(f, cmd))

如果配置了apiserver的raw-uri就直接发送请求

  1. if len(o.Raw) > 0 {
  2. restClient, err := f.RESTClient()
  3. if err != nil {
  4. return err
  5. }
  6. return rawhttp.RawPost(restClient, o.IOStreams, o.Raw, o.FilenameOptions.Filenames[0])
  7. }

如果配置了创建前edit就执行RunEditOnCreate

  1. if o.EditBeforeCreate {
  2. return RunEditOnCreate(f, o.PrintFlags, o.RecordFlags, o.IOStreams, cmd, &o.FilenameOptions, o.fieldManager)
  3. }

根据--validate参数决定是否开启schema校验

--validate=true: 表示在创建之前按照schema校验一下提交的配置

接着获取命名空间相关配置

  1. cmdNamespace, enforceNamespace, err := f.ToRawKubeConfigLoader().Namespace()
  2. if err != nil {
  3. return err
  4. }

构建builder对象,建造者模式

  1. r := f.NewBuilder().
  2. Unstructured().
  3. Schema(schema).
  4. ContinueOnError().
  5. NamespaceParam(cmdNamespace).DefaultNamespace().
  6. FilenameParam(enforceNamespace, &o.FilenameOptions).
  7. LabelSelectorParam(o.Selector).
  8. Flatten().
  9. Do()
  10. err = r.Err()
  11. if err != nil {
  12. return err
  13. }

FilenameParam读取配置文件

除了支持简单的本地文件,也支持标准输入和http/https协议访问的文件,保存为Visitor

代码位置 staging\src\k8s.io\cli-runtime\pkg\resource\builder.go

  1. // FilenameParam groups input in two categories: URLs and files (files, directories, STDIN)
  2. // If enforceNamespace is false, namespaces in the specs will be allowed to
  3. // override the default namespace. If it is true, namespaces that don't match
  4. // will cause an error.
  5. // If ContinueOnError() is set prior to this method, objects on the path that are not
  6. // recognized will be ignored (but logged at V(2)).
  7. func (b *Builder) FilenameParam(enforceNamespace bool, filenameOptions *FilenameOptions) *Builder {
  8. if errs := filenameOptions.validate(); len(errs) > 0 {
  9. b.errs = append(b.errs, errs...)
  10. return b
  11. }
  12. recursive := filenameOptions.Recursive
  13. paths := filenameOptions.Filenames
  14. for _, s := range paths {
  15. switch {
  16. case s == "-":
  17. b.Stdin()
  18. case strings.Index(s, "http://") == 0 || strings.Index(s, "https://") == 0:
  19. url, err := url.Parse(s)
  20. if err != nil {
  21. b.errs = append(b.errs, fmt.Errorf("the URL passed to filename %q is not valid: %v", s, err))
  22. continue
  23. }
  24. b.URL(defaultHttpGetAttempts, url)
  25. default:
  26. matches, err := expandIfFilePattern(s)
  27. if err != nil {
  28. b.errs = append(b.errs, err)
  29. continue
  30. }
  31. if !recursive && len(matches) == 1 {
  32. b.singleItemImplied = true
  33. }
  34. b.Path(recursive, matches...)
  35. }
  36. }
  37. if filenameOptions.Kustomize != "" {
  38. b.paths = append(
  39. b.paths,
  40. &KustomizeVisitor{
  41. mapper: b.mapper,
  42. dirPath: filenameOptions.Kustomize,
  43. schema: b.schema,
  44. fSys: filesys.MakeFsOnDisk(),
  45. })
  46. }
  47. if enforceNamespace {
  48. b.RequireNamespace()
  49. }
  50. return b
  51. }

调用visit函数创建资源

  1. err = r.Visit(func(info *resource.Info, err error) error {
  2. if err != nil {
  3. return err
  4. }
  5. if err := util.CreateOrUpdateAnnotation(cmdutil.GetFlagBool(cmd, cmdutil.ApplyAnnotationsFlag), info.Object, scheme.DefaultJSONEncoder()); err != nil {
  6. return cmdutil.AddSourceToErr("creating", info.Source, err)
  7. }
  8. if err := o.Recorder.Record(info.Object); err != nil {
  9. klog.V(4).Infof("error recording current command: %v", err)
  10. }
  11. if o.DryRunStrategy != cmdutil.DryRunClient {
  12. if o.DryRunStrategy == cmdutil.DryRunServer {
  13. if err := o.DryRunVerifier.HasSupport(info.Mapping.GroupVersionKind); err != nil {
  14. return cmdutil.AddSourceToErr("creating", info.Source, err)
  15. }
  16. }
  17. obj, err := resource.
  18. NewHelper(info.Client, info.Mapping).
  19. DryRun(o.DryRunStrategy == cmdutil.DryRunServer).
  20. WithFieldManager(o.fieldManager).
  21. WithFieldValidation(o.ValidationDirective).
  22. Create(info.Namespace, true, info.Object)
  23. if err != nil {
  24. return cmdutil.AddSourceToErr("creating", info.Source, err)
  25. }
  26. info.Refresh(obj, true)
  27. }

追踪Create函数,底层调用createResource创建资源

代码位置D:\Workspace\Go\src\k8s.io\kubernetes@v0.24.2\staging\src\k8s.io\cli-runtime\pkg\resource\helper.go

  1. func (m *Helper) createResource(c RESTClient, resource, namespace string, obj runtime.Object, options *metav1.CreateOptions) (runtime.Object, error) {
  2. return c.Post().
  3. NamespaceIfScoped(namespace, m.NamespaceScoped).
  4. Resource(resource).
  5. VersionedParams(options, metav1.ParameterCodec).
  6. Body(obj).
  7. Do(context.TODO()).
  8. Get()
  9. }

底层使用RESTClient的Post方法

代码位置staging\src\k8s.io\cli-runtime\pkg\resource\interfaces.go

  1. // RESTClient is a client helper for dealing with RESTful resources
  2. // in a generic way.
  3. type RESTClient interface {
  4. Get() *rest.Request
  5. Post() *rest.Request
  6. Patch(types.PatchType) *rest.Request
  7. Delete() *rest.Request
  8. Put() *rest.Request
  9. }

本节重点总结

1.NewCmdCreate调用cobra的Run函数

2.调用RunCreate构建resourceBuilder对象

3.调用Visit方法创建资源

4.底层使用RESTClient和kube-apiserver通信

2.6 createCmd中的builder建造者设计模式

本节重点总结

设计模式之建造者模式

优点

缺点

kubectl中的建造者模式

设计模式之建造者模式

建造者(Builder)模式:指将一个复杂对象的构造与它的表示分离

使同样的构建过程可以创建不同的对象,这样的设计模式被称为建造者模式

它是将一个复杂的对象分解为多个简单的对象,然后一步一步构建而成

它将变与不变相分离,即产品的组成部分是不变的,但每一部分是可以灵活选择的。

更多用来针对复杂对象的创建

优点

封装性好,构建和表示分离

扩展性好,各个具体的建造者相互分离,有利于系统的解耦

客户端不必知道产品内部组成的细节,建造者可以对创建过程逐步细化,而不对其他模块产生任何影响,便于控制细节风险。

缺点

产品的组成部分必须相同,这限制了其使用范围

如果产品的内部变化复杂,建造者也要同步修改,后期维护成本较大。
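为了直观理解上面的定义和优缺点,下面给出一个与kubectl无关的最小建造者模式示意(Report、ReportBuilder等类型名均为假设):

package main

import "fmt"

// Report 是要构建的复杂对象
type Report struct {
    Title  string
    Author string
    Pages  []string
}

// ReportBuilder 负责分步骤构建 Report
type ReportBuilder struct {
    r Report
}

// NewReportBuilder 返回建造者对象的指针
func NewReportBuilder() *ReportBuilder {
    return &ReportBuilder{}
}

// 每个设置方法都返回建造者指针,从而支持链式调用
func (b *ReportBuilder) Title(t string) *ReportBuilder  { b.r.Title = t; return b }
func (b *ReportBuilder) Author(a string) *ReportBuilder { b.r.Author = a; return b }
func (b *ReportBuilder) AddPage(p string) *ReportBuilder {
    b.r.Pages = append(b.r.Pages, p)
    return b
}

// Build 返回最终构建好的对象,类比 kubectl Builder 的 Do()
func (b *ReportBuilder) Build() Report { return b.r }

func main() {
    report := NewReportBuilder().
        Title("weekly").
        Author("tom").
        AddPage("page-1").
        Build()
    fmt.Printf("%+v\n", report)
}

可以看到:New开头的方法返回建造者指针,每个设置方法也返回建造者指针以支持链式调用,最后由Build()产出最终对象。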

kubectl中的建造者模式

kubectl中的Builder对象

特点1 针对复杂对象的创建,字段非常多

特点2 New开头的方法返回要创建对象的指针

特点3 所有的方法都返回的是建造者对象的指针

特点1 针对复杂对象的创建,字段非常多

kubectl中的Builder对象,可以看到字段非常多

如果使用Init函数构造,参数会非常多

而且参数是不固定的,即可以根据用户传入的参数情况构造不同对象

代码位置staging\src\k8s.io\cli-runtime\pkg\resource\builder.go

  1. // Builder provides convenience functions for taking arguments and parameters
  2. // from the command line and converting them to a list of resources to iterate
  3. // over using the Visitor interface.
  4. type Builder struct {
  5. categoryExpanderFn CategoryExpanderFunc
  6. // mapper is set explicitly by resource builders
  7. mapper *mapper
  8. // clientConfigFn is a function to produce a client, *if* you need one
  9. clientConfigFn ClientConfigFunc
  10. restMapperFn RESTMapperFunc
  11. // objectTyper is statically determinant per-command invocation based on your internal or unstructured choice
  12. // it does not ever need to rely upon discovery.
  13. objectTyper runtime.ObjectTyper
  14. // codecFactory describes which codecs you want to use
  15. negotiatedSerializer runtime.NegotiatedSerializer
  16. // local indicates that we cannot make server calls
  17. local bool
  18. errs []error
  19. paths []Visitor
  20. stream bool
  21. stdinInUse bool
  22. dir bool
  23. labelSelector *string
  24. fieldSelector *string
  25. selectAll bool
  26. limitChunks int64
  27. requestTransforms []RequestTransform
  28. resources []string
  29. subresource string
  30. namespace string
  31. allNamespace bool
  32. names []string
  33. resourceTuples []resourceTuple
  34. defaultNamespace bool
  35. requireNamespace bool
  36. flatten bool
  37. latest bool
  38. requireObject bool
  39. singleResourceType bool
  40. continueOnError bool
  41. singleItemImplied bool
  42. schema ContentValidator
  43. // fakeClientFn is used for testing
  44. fakeClientFn FakeClientFunc
  45. }

特点2 New开头的方法返回要创建对象的指针

  1. func NewBuilder(restClientGetter RESTClientGetter) *Builder {
  2. categoryExpanderFn := func() (restmapper.CategoryExpander, error) {
  3. discoveryClient, err := restClientGetter.ToDiscoveryClient()
  4. if err != nil {
  5. return nil, err
  6. }
  7. return restmapper.NewDiscoveryCategoryExpander(discoveryClient), err
  8. }
  9. return newBuilder(
  10. restClientGetter.ToRESTConfig,
  11. restClientGetter.ToRESTMapper,
  12. (&cachingCategoryExpanderFunc{delegate: categoryExpanderFn}).ToCategoryExpander,
  13. )
  14. }

特点3 所有的方法都返回的是建造者对象的指针

staging\src\k8s.io\kubectl\pkg\cmd\create\create.go

  1. r := f.NewBuilder().
  2. Unstructured().
  3. Schema(schema).
  4. ContinueOnError().
  5. NamespaceParam(cmdNamespace).DefaultNamespace().
  6. FilenameParam(enforceNamespace, &o.FilenameOptions).
  7. LabelSelectorParam(o.Selector).
  8. Flatten().
  9. Do()

调用时看着像链式调用,链上的每个方法都返回这个要建造对象的指针

  1. func (b *Builder) Schema(schema ContentValidator) *Builder {
  2. b.schema = schema
  3. return b
  4. }
  5. func (b *Builder) ContinueOnError() *Builder {
  6. b.continueOnError = true
  7. return b
  8. }

看起来就是设置构造对象的各种属性

2.7 createCmd中的visitor访问者设计模式


本节重点总结:

visitor访问者模式简介

kubectl中的visitor应用

visitor访问者模式简介

访问者模式(Visitor Pattern)是一种将数据结构与数据操作分离的设计模式,

指封装一些作用于某种数据结构中的各元素的操作,

可以在不改变数据结构的前提下定义作用于这些元素的新操作,

属于行为型设计模式。
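下面用一个与kubectl无关的最小示意来说明这种模式(Info、ListVisitor等类型名均为假设,风格上模仿cli-runtime的Visitor/VisitorFunc):

package main

import "fmt"

// Info 是被访问的数据元素,类比 kubectl 中的 resource.Info
type Info struct {
    Name string
}

// VisitorFunc 是作用在元素上的操作
type VisitorFunc func(*Info) error

// Visitor 负责遍历数据结构,并把每个元素交给 VisitorFunc 处理
type Visitor interface {
    Visit(VisitorFunc) error
}

// ListVisitor 持有一组 Info,实现 Visitor 接口
type ListVisitor struct {
    items []*Info
}

func (l *ListVisitor) Visit(fn VisitorFunc) error {
    for _, item := range l.items {
        if err := fn(item); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    v := &ListVisitor{items: []*Info{{Name: "pod-a"}, {Name: "pod-b"}}}
    // 在不修改数据结构的前提下,新的操作只需要换一个 VisitorFunc
    _ = v.Visit(func(info *Info) error {
        fmt.Println("visit:", info.Name)
        return nil
    })
}

数据结构(ListVisitor持有的Info列表)保持不变,想增加新的操作时只需要再写一个VisitorFunc。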

kubectl中的访问者模式

在kubectl中多个Visitor是来访问一个数据结构的不同部分。

这种情况下,数据结构有点像一个数据库,而各个Visitor会成为一个个小应用。

访问者模式主要适用于以下应用场景:

(1)数据结构稳定,作用于数据结构稳定的操作经常变化的场景。

(2)需要数据结构与数据作分离的场景。

(3)需要对不同数据类型(元素)进行操作,而不使用分支判断具体类型的场景。

访问者模式的优点

(1)解耦了数据结构与数据操作,使得操作集合可以独立变化。

(2)可以通过扩展访问者角色,实现对数据集的不同操作,程序扩展性更好。

(3)元素具体类型并非单一,访问者均可操作。

(4)各角色职责分离,符合单一职责原则。

访问者模式的缺点

(1)无法增加元素类型:若系统数据结构对象易于变化,

经常有新的数据对象增加进来,

则访问者类必须增加对应元素类型的操作,违背了开闭原则。

(2)具体元素变更困难:具体元素增加属性、删除属性等操作,

会导致对应的访问者类需要进行相应的修改,

尤其当有大量访问类时,修改范围太大。

(3)违背依赖倒置原则:为了达到“区别对待”,

访问者角色依赖的具体元素类型,而不是抽象。

kubectl中访问者模式

在kubectl中多个Visitor是来访问一个数据结构的不同部分

这种情况下,数据结构有点像一个数据库,而各个Visitor会成为一个个小应用。

Visitor接口和VisitorFunc定义

位置在kubernetes/staging/src/k8s.io/cli-runtime/pkg/resource/interfaces.go

  1. // Visitor lets clients walk a list of resources.
  2. type Visitor interface {
  3. Visit(VisitorFunc) error
  4. }
  5. // VisitorFunc implements the Visitor interface for a matching function.
  6. // If there was a problem walking a list of resources, the incoming error
  7. // will describe the problem and the function can decide how to handle that error.
  8. // A nil returned indicates to accept an error to continue loops even when errors happen.
  9. // This is useful for ignoring certain kinds of errors or aggregating errors in some way.
  10. type VisitorFunc func(*Info, error) error

result的Visit方法

  1. func (r *Result) Visit(fn VisitorFunc) error {
  2. if r.err != nil {
  3. return r.err
  4. }
  5. err := r.visitor.Visit(fn)
  6. return utilerrors.FilterOut(err, r.ignoreErrors...)
  7. }

具体的visitor的visit方法定义,参数都是一个VisitorFunc的fn

  1. // Visit in a FileVisitor is just taking care of opening/closing files
  2. func (v *FileVisitor) Visit(fn VisitorFunc) error {
  3. var f *os.File
  4. if v.Path == constSTDINstr {
  5. f = os.Stdin
  6. } else {
  7. var err error
  8. f, err = os.Open(v.Path)
  9. if err != nil {
  10. return err
  11. }
  12. defer f.Close()
  13. }
  14. // TODO: Consider adding a flag to force to UTF16, apparently some
  15. // Windows tools don't write the BOM
  16. utf16bom := unicode.BOMOverride(unicode.UTF8.NewDecoder())
  17. v.StreamVisitor.Reader = transform.NewReader(f, utf16bom)
  18. return v.StreamVisitor.Visit(fn)
  19. }

kubectl create中 通过Builder模式创建visitor并执行的过程

FilenameParam解析 -f文件参数 创建一个visitor

位置kubernetes/staging/src/k8s.io/cli-runtime/pkg/resource/builder.go

validate校验-f参数

  1. func (o *FilenameOptions) validate() []error {
  2. var errs []error
  3. if len(o.Filenames) > 0 && len(o.Kustomize) > 0 {
  4. errs = append(errs, fmt.Errorf("only one of -f or -k can be specified"))
  5. }
  6. if len(o.Kustomize) > 0 && o.Recursive {
  7. errs = append(errs, fmt.Errorf("the -k flag can't be used with -f or -R"))
  8. }
  9. return errs
  10. }

-k代表使用Kustomize配置

如果-f -k都存在报错only one of -f or -k can be specified

  1. kubectl create -f rule.yaml -k rule.yaml
  2. error: only one of -f or -k can be specified

-k不支持递归 -R

  1. kubectl create -k rule.yaml -R
  2. error: the -k flag can't be used with -f or -R

调用path解析文件

  1. recursive := filenameOptions.Recursive
  2. paths := filenameOptions.Filenames
  3. for _, s := range paths {
  4. switch {
  5. case s == "-":
  6. b.Stdin()
  7. case strings.Index(s, "http://") == 0 || strings.Index(s, "https://") == 0:
  8. url, err := url.Parse(s)
  9. if err != nil {
  10. b.errs = append(b.errs, fmt.Errorf("the URL passed to filename %q is not valid: %v", s, err))
  11. continue
  12. }
  13. b.URL(defaultHttpGetAttempts, url)
  14. default:
  15. matches, err := expandIfFilePattern(s)
  16. if err != nil {
  17. b.errs = append(b.errs, err)
  18. continue
  19. }
  20. if !recursive && len(matches) == 1 {
  21. b.singleItemImplied = true
  22. }
  23. b.Path(recursive, matches...)
  24. }
  25. }

遍历-f传入的paths

如果是-代表从标准输入传入

如果是http开头的代表从远端http接口读取,调用b.URL

默认是文件,调用b.Path解析

b.Path调用ExpandPathsToFileVisitors生成visitor

  1. // ExpandPathsToFileVisitors will return a slice of FileVisitors that will handle files from the provided path.
  2. // After FileVisitors open the files, they will pass an io.Reader to a StreamVisitor to do the reading. (stdin
  3. // is also taken care of). Paths argument also accepts a single file, and will return a single visitor
  4. func ExpandPathsToFileVisitors(mapper *mapper, paths string, recursive bool, extensions []string, schema ContentValidator) ([]Visitor, error) {
  5. var visitors []Visitor
  6. err := filepath.Walk(paths, func(path string, fi os.FileInfo, err error) error {
  7. if err != nil {
  8. return err
  9. }
  10. if fi.IsDir() {
  11. if path != paths && !recursive {
  12. return filepath.SkipDir
  13. }
  14. return nil
  15. }
  16. // Don't check extension if the filepath was passed explicitly
  17. if path != paths && ignoreFile(path, extensions) {
  18. return nil
  19. }
  20. visitor := &FileVisitor{
  21. Path: path,
  22. StreamVisitor: NewStreamVisitor(nil, mapper, path, schema),
  23. }
  24. visitors = append(visitors, visitor)
  25. return nil
  26. })
  27. if err != nil {
  28. return nil, err
  29. }
  30. return visitors, nil
  31. }

底层调用的StreamVisitor,把对应的方法注册到visitor中

位置D:\Workspace\Go\kubernetes\staging\src\k8s.io\cli-runtime\pkg\resource\visitor.go

  1. // Visit implements Visitor over a stream. StreamVisitor is able to distinct multiple resources in one stream.
  2. func (v *StreamVisitor) Visit(fn VisitorFunc) error {
  3. d := yaml.NewYAMLOrJSONDecoder(v.Reader, 4096)
  4. for {
  5. ext := runtime.RawExtension{}
  6. if err := d.Decode(&ext); err != nil {
  7. if err == io.EOF {
  8. return nil
  9. }
  10. return fmt.Errorf("error parsing %s: %v", v.Source, err)
  11. }
  12. // TODO: This needs to be able to handle object in other encodings and schemas.
  13. ext.Raw = bytes.TrimSpace(ext.Raw)
  14. if len(ext.Raw) == 0 || bytes.Equal(ext.Raw, []byte("null")) {
  15. continue
  16. }
  17. if err := ValidateSchema(ext.Raw, v.Schema); err != nil {
  18. return fmt.Errorf("error validating %q: %v", v.Source, err)
  19. }
  20. info, err := v.infoForData(ext.Raw, v.Source)
  21. if err != nil {
  22. if fnErr := fn(info, err); fnErr != nil {
  23. return fnErr
  24. }
  25. continue
  26. }
  27. if err := fn(info, nil); err != nil {
  28. return err
  29. }
  30. }
  31. }

用YAMLOrJSONDecoder解析文件

ValidateSchema会解析文件中字段进行校验,比如我们把spec故意写成aspec

  1. kubectl apply -f rule.yaml
  2. error: error validating "rule.yaml": error validating data: [ValidationError(PrometheusRule): Unknown field "aspec" in

infoForData将解析结果转换为Info对象

创建Info对象,其中Object字段就是k8s的对象

位置staging\src\k8s.io\cli-runtime\pkg\resource\mapper.go

m.decoder.Decode解析出object和gvk对象

其中object代表就是k8s的对象

gvk是Group/Version/Kind的缩写

  1. // InfoForData creates an Info object for the given data. An error is returned
  2. // if any of the decoding or client lookup steps fail. Name and namespace will be
  3. // set into Info if the mapping's MetadataAccessor can retrieve them.
  4. func (m *mapper) infoForData(data []byte, source string) (*Info, error) {
  5. obj, gvk, err := m.decoder.Decode(data, nil, nil)
  6. if err != nil {
  7. return nil, fmt.Errorf("unable to decode %q: %v", source, err)
  8. }
  9. name, _ := metadataAccessor.Name(obj)
  10. namespace, _ := metadataAccessor.Namespace(obj)
  11. resourceVersion, _ := metadataAccessor.ResourceVersion(obj)
  12. ret := &Info{
  13. Source: source,
  14. Namespace: namespace,
  15. Name: name,
  16. ResourceVersion: resourceVersion,
  17. Object: obj,
  18. }
  19. if m.localFn == nil || !m.localFn() {
  20. restMapper, err := m.restMapperFn()
  21. if err != nil {
  22. return nil, err
  23. }
  24. mapping, err := restMapper.RESTMapping(gvk.GroupKind(), gvk.Version)
  25. if err != nil {
  26. if _, ok := err.(*meta.NoKindMatchError); ok {
  27. return nil, fmt.Errorf("resource mapping not found for name: %q namespace: %q from %q: %v\nensure CRDs are installed first",
  28. name, namespace, source, err)
  29. }
  30. return nil, fmt.Errorf("unable to recognize %q: %v", source, err)
  31. }
  32. ret.Mapping = mapping
  33. client, err := m.clientFn(gvk.GroupVersion())
  34. if err != nil {
  35. return nil, fmt.Errorf("unable to connect to a server to handle %q: %v", mapping.Resource, err)
  36. }
  37. ret.Client = client
  38. }
  39. return ret, nil
  40. }
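上面代码中解析出的gvk(Group/Version/Kind)可以借助apimachinery的schema.GroupVersionKind直观理解,下面是一个简单示意:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // Deployment 的 GVK:Group=apps,Version=v1,Kind=Deployment
    gvk := schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}
    fmt.Println(gvk.String())       // apps/v1, Kind=Deployment
    fmt.Println(gvk.GroupVersion()) // apps/v1

    // Pod 等核心资源的 Group 为空,对应 yaml 中 apiVersion 里的 "v1"
    podGVK := schema.GroupVersionKind{Group: "", Version: "v1", Kind: "Pod"}
    fmt.Println(podGVK.GroupVersion()) // v1
}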

k8s对象object讲解

Object k8s对象

文档地址https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/

位置staging\src\k8s.io\apimachinery\pkg\runtime\interfaces.go

  1. // Object interface must be supported by all API types registered with Scheme. Since objects in a scheme are
  2. // expected to be serialized to the wire, the interface an Object must provide to the Scheme allows
  3. // serializers to set the kind, version, and group the object is represented as. An Object may choose
  4. // to return a no-op ObjectKindAccessor in cases where it is not expected to be serialized.
  5. type Object interface {
  6. GetObjectKind() schema.ObjectKind
  7. DeepCopyObject() Object
  8. }

作用

Kubernetes对象是持久化的实体

Kubernetes使用这些实体去表示整个集群的状态。特别地,它们描述了如下信息:

哪些容器化应用在运行(以及在哪些节点上)

可以被应用使用的资源

关于应用运行时表现的策略,比如重启策略、升级策略,以及容错策略

操作Kubernetes对象,无论是创建、修改,或者删除,需要使用Kubernetes API

期望状态

Kubernetes对象是“目标性记录”:一旦创建对象,Kubernetes系统将持续工作以确保对象存在

通过创建对象,本质上是在告知Kubernetes系统,所需要的集群工作负载看起来是什么样子的,这就是Kubernetes集群的期望状态(Desired State)

对象规约(Spec)与状态(Status)

几乎每个Kubernetes对象都包含两个嵌套的对象字段,它们负责管理对象的配置:对象spec(规约)和对象status(状态)

对于具有spec的对象,你必须在创建时设置其内容,描述你希望对象所具有的特征:期望状态(Desired State)。

status描述了对象的当前状态(Current State),它是由Kubernetes系统和组件设置并更新的。在任何时刻,Kubernetes控制平面都一直积极地管理着对象的实际状态,以使之与期望状态相匹配。

yaml中的必须字段

在想要创建的Kubernetes对象对应的.yaml文件中,需要配置如下的字段:

apiVersion - 创建该对象所使用的Kubernetes API的版本

kind - 想要创建的对象的类别

metadata-帮助唯一性标识对象的一些数据,包括一个name字符串、UID和可选的namespace
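结合上面的字段说明,下面用client-go给出一个创建nginx Pod的最小示意(kubeconfig路径、命名空间、镜像版本等均为假设值,仅用来演示apiVersion/kind/metadata/spec与Go结构体的对应关系):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // 假设使用默认的 kubeconfig 路径
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // metadata 对应 ObjectMeta,spec 对应 PodSpec
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "nginx-pod", Namespace: "default"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {Name: "nginx", Image: "nginx:1.21"},
            },
        },
    }

    created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created pod:", created.Name)
}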

Do中创建一批visitor

  1. // Do returns a Result object with a Visitor for the resources identified by the Builder.
  2. // The visitor will respect the error behavior specified by ContinueOnError. Note that stream
  3. // inputs are consumed by the first execution - use Infos() or Object() on the Result to capture a list
  4. // for further iteration.
  5. func (b *Builder) Do() *Result {
  6. r := b.visitorResult()
  7. r.mapper = b.Mapper()
  8. if r.err != nil {
  9. return r
  10. }
  11. if b.flatten {
  12. r.visitor = NewFlattenListVisitor(r.visitor, b.objectTyper, b.mapper)
  13. }
  14. helpers := []VisitorFunc{}
  15. if b.defaultNamespace {
  16. helpers = append(helpers, SetNamespace(b.namespace))
  17. }
  18. if b.requireNamespace {
  19. helpers = append(helpers, RequireNamespace(b.namespace))
  20. }
  21. helpers = append(helpers, FilterNamespace)
  22. if b.requireObject {
  23. helpers = append(helpers, RetrieveLazy)
  24. }
  25. if b.continueOnError {
  26. r.visitor = ContinueOnErrorVisitor{Visitor: r.visitor}
  27. }
  28. r.visitor = NewDecoratedVisitor(r.visitor, helpers...)
  29. return r
  30. }

helpers代表一批VisitorFunc

比如校验namespace的RequireNamespace

  1. // RequireNamespace will either set a namespace if none is provided on the
  2. // Info object, or if the namespace is set and does not match the provided
  3. // value, returns an error. This is intended to guard against administrators
  4. // accidentally operating on resources outside their namespace.
  5. func RequireNamespace(namespace string) VisitorFunc {
  6. return func(info *Info, err error) error {
  7. if err != nil {
  8. return err
  9. }
  10. if !info.Namespaced() {
  11. return nil
  12. }
  13. if len(info.Namespace) == 0 {
  14. info.Namespace = namespace
  15. UpdateObjectNamespace(info, nil)
  16. return nil
  17. }
  18. if info.Namespace != namespace {
  19. return fmt.Errorf("the namespace from the provided object %q does not match the namespace %q. You must pass '--namespace=%s' to perform this operation.", info.Namespace, namespace, info.Namespace)
  20. }
  21. return nil
  22. }
  23. }

创建带装饰器的visitor DecoratedVisitor

  1. if b.continueOnError {
  2. r.visitor = ContinueOnErrorVisitor{Visitor: r.visitor}
  3. }
  4. r.visitor = NewDecoratedVisitor(r.visitor, helpers...)

对应的visit方法

  1. // Visit implements Visitor
  2. func (v DecoratedVisitor) Visit(fn VisitorFunc) error {
  3. return v.visitor.Visit(func(info *Info, err error) error {
  4. if err != nil {
  5. return err
  6. }
  7. for i := range v.decorators {
  8. if err := v.decorators[i](info, nil); err != nil {
  9. return err
  10. }
  11. }
  12. return fn(info, nil)
  13. })
  14. }

visitor的调用

Visitor调用链分析

外层调用result.Visit方法,内部的func

  1. err = r.Visit(func(info *resource.Info, err error) error {
  2. if err != nil {
  3. return err
  4. }
  5. if err := util.CreateOrUpdateAnnotation(cmdutil.GetFlagBool(cmd, cmdutil.ApplyAnnotationsFlag), info.Object, scheme.DefaultJSONEncoder()); err != nil {
  6. return cmdutil.AddSourceToErr("creating", info.Source, err)
  7. }
  8. if err := o.Recorder.Record(info.Object); err != nil {
  9. klog.V(4).Infof("error recording current command: %v", err)
  10. }
  11. if o.DryRunStrategy != cmdutil.DryRunClient {
  12. if o.DryRunStrategy == cmdutil.DryRunServer {
  13. if err := o.DryRunVerifier.HasSupport(info.Mapping.GroupVersionKind); err != nil {
  14. return cmdutil.AddSourceToErr("creating", info.Source, err)
  15. }
  16. }
  17. obj, err := resource.
  18. NewHelper(info.Client, info.Mapping).
  19. DryRun(o.DryRunStrategy == cmdutil.DryRunServer).
  20. WithFieldManager(o.fieldManager).
  21. WithFieldValidation(o.ValidationDirective).
  22. Create(info.Namespace, true, info.Object)
  23. if err != nil {
  24. return cmdutil.AddSourceToErr("creating", info.Source, err)
  25. }
  26. info.Refresh(obj, true)
  27. }
  28. count++
  29. return o.PrintObj(info.Object)
  30. })

visitor接口中的调用方法

  1. // Visit implements the Visitor interface on the items described in the Builder.
  2. // Note that some visitor sources are not traversable more than once, or may
  3. // return different results. If you wish to operate on the same set of resources
  4. // multiple times, use the Infos() method.
  5. func (r *Result) Visit(fn VisitorFunc) error {
  6. if r.err != nil {
  7. return r.err
  8. }
  9. err := r.visitor.Visit(fn)
  10. return utilerrors.FilterOut(err, r.ignoreErrors...)
  11. }

最终的调用就是前面注册的各个visitor的Visit方法

外层VisitorFunc分析

如果出错就返回错误

DryRunStrategy代表试运行策略

默认为None代表不试运行

client代表客户端试运行,不发送请求到server

server代表服务端试运行,请求会发送到服务端处理,但不会真正持久化变更

最终调用,Create创建资源,然后调用o.PrintObj(info.Object)打印结果

  1. func(info *resource.Info, err error) error {
  2. if err != nil {
  3. return err
  4. }
  5. if err := util.CreateOrUpdateAnnotation(cmdutil.GetFlagBool(cmd, cmdutil.ApplyAnnotationsFlag), info.Object, scheme.DefaultJSONEncoder()); err != nil {
  6. return cmdutil.AddSourceToErr("creating", info.Source, err)
  7. }
  8. if err := o.Recorder.Record(info.Object); err != nil {
  9. klog.V(4).Infof("error recording current command: %v", err)
  10. }
  11. if o.DryRunStrategy != cmdutil.DryRunClient {
  12. if o.DryRunStrategy == cmdutil.DryRunServer {
  13. if err := o.DryRunVerifier.HasSupport(info.Mapping.GroupVersionKind); err != nil {
  14. return cmdutil.AddSourceToErr("creating", info.Source, err)
  15. }
  16. }
  17. obj, err := resource.
  18. NewHelper(info.Client, info.Mapping).
  19. DryRun(o.DryRunStrategy == cmdutil.DryRunServer).
  20. WithFieldManager(o.fieldManager).
  21. WithFieldValidation(o.ValidationDirective).
  22. Create(info.Namespace, true, info.Object)
  23. if err != nil {
  24. return cmdutil.AddSourceToErr("creating", info.Source, err)
  25. }
  26. info.Refresh(obj, true)
  27. }
  28. count++
  29. return o.PrintObj(info.Object)
  30. }

2.8 kubectl功能和对象总结

kubectl的职责

主要的工作是处理用户提交的内容(包括命令行参数、yaml文件等)

然后把用户提交的这些内容组织成数据结构体

最后把其发送给API Server

kubectl的代码原理

cobra从命令行和yaml文件中获取信息

通过Builder模式把其转成一系列的资源

最后用Visitor模式来迭代处理这些Resources,实现各类资源对象的解析和校验

用RESTClient将Object发送到kube-apiserver

kubectl架构图

 create流程

kubectl中的核心对象

RESTClient 和k8s-api通信的restful-client

位置D:\Workspace\Go\kubernetes\staging\src\k8s.io\cli-runtime\pkg\resource\interfaces.go

  1. type RESTClientGetter interface {
  2. ToRESTConfig() (*rest.Config, error)
  3. ToDiscoveryClient() (discovery.CachedDiscoveryInterface, error)
  4. ToRESTMapper() (meta.RESTMapper, error)
  5. }

 Object k8s对象

文档地址https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/

staging\src\k8s.io\cli-runtime\pkg\resource\interfaces.go

第3章 apiserver中的权限相关

3.1 apiserver启动主流程分析

本节重点总结:

apiserver启动流程

CreateServerChain创建3个server

CreateKubeAPIServer创建kubeAPIServer代表API核心服务,包括常见的Pod/Deployment/Service

createAPIExtensionsServer创建apiExtensionsServer代表API扩展服务,主要针对CRD

createAggregatorServer创建aggregatorServer代表处理metrics的服务

apiserver启动流程

入口地址

位置D:\Workspace\Go\kubernetes\cmd\kube-apiserver\apiserver.go

初始化apiserver的cmd并执行

  1. func main() {
  2. command := app.NewAPIServerCommand()
  3. code := cli.Run(command)
  4. os.Exit(code)
  5. }

newCmd执行流程

之前我们说过cobra的几个func执行顺序

  1. // The *Run functions are executed in the following order:
  2. // * PersistentPreRun()
  3. // * PreRun()
  4. // * Run()
  5. // * PostRun()
  6. // * PersistentPostRun()
  7. // All functions get the same args, the arguments after the command name.
  8. //

PersistentPreRunE准备

设置WarningHandler

  1. PersistentPreRunE: func(*cobra.Command, []string) error {
  2. // silence client-go warnings.
  3. // kube-apiserver loopback clients should not log self-issued warnings.
  4. rest.SetDefaultWarningHandler(rest.NoWarnings{})
  5. return nil
  6. },

runE解析 准备工作

打印版本信息

  1. verflag.PrintAndExitIfRequested()
  2. ...
  3. // PrintAndExitIfRequested will check if the -version flag was passed
  4. // and, if so, print the version and exit.
  5. func PrintAndExitIfRequested() {
  6. if *versionFlag == VersionRaw {
  7. fmt.Printf("%#v\n", version.Get())
  8. os.Exit(0)
  9. } else if *versionFlag == VersionTrue {
  10. fmt.Printf("%s %s\n", programName, version.Get())
  11. os.Exit(0)
  12. }
  13. }

打印命令行参数

  1. // PrintFlags logs the flags in the flagset
  2. func PrintFlags(flags *pflag.FlagSet) {
  3. flags.VisitAll(func(flag *pflag.Flag) {
  4. klog.V(1).Infof("FLAG: --%s=%q", flag.Name, flag.Value)
  5. })
  6. }

检查不安全的端口

delete this check after insecure flags removed in v1.24

Complete设置默认值

  1. // set default options
  2. completedOptions, err := Complete(s)
  3. if err != nil {
  4. return err
  5. }

检查命令行参数

  1. // validate options
  2. if errs := completedOptions.Validate(); len(errs) != 0 {
  3. return utilerrors.NewAggregate(errs)
  4. }
  5. // 代码位置 cmd\kube-apiserver\app\options\validation.go
  6. // Validate checks ServerRunOptions and return a slice of found errs.
  7. func (s *ServerRunOptions) Validate() []error {
  8. var errs []error
  9. if s.MasterCount <= 0 {
  10. errs = append(errs, fmt.Errorf("--apiserver-count should be a positive number, but value '%d' provided", s.MasterCount))
  11. }
  12. errs = append(errs, s.Etcd.Validate()...)
  13. errs = append(errs, validateClusterIPFlags(s)...)
  14. errs = append(errs, validateServiceNodePort(s)...)
  15. errs = append(errs, validateAPIPriorityAndFairness(s)...)
  16. errs = append(errs, s.SecureServing.Validate()...)
  17. errs = append(errs, s.Authentication.Validate()...)
  18. errs = append(errs, s.Authorization.Validate()...)
  19. errs = append(errs, s.Audit.Validate()...)
  20. errs = append(errs, s.Admission.Validate()...)
  21. errs = append(errs, s.APIEnablement.Validate(legacyscheme.Scheme, apiextensionsapiserver.Scheme, aggregatorscheme.Scheme)...)
  22. errs = append(errs, validateTokenRequest(s)...)
  23. errs = append(errs, s.Metrics.Validate()...)
  24. errs = append(errs, validateAPIServerIdentity(s)...)
  25. return errs
  26. }

举一个例子,比如这个校验etcd的src\k8s.io\apiserver\pkg\server\options\etcd.go

  1. func (s *EtcdOptions) Validate() []error {
  2. if s == nil {
  3. return nil
  4. }
  5. allErrors := []error{}
  6. if len(s.StorageConfig.Transport.ServerList) == 0 {
  7. allErrors = append(allErrors, fmt.Errorf("--etcd-servers must be specified"))
  8. }
  9. if s.StorageConfig.Type != storagebackend.StorageTypeUnset && !storageTypes.Has(s.StorageConfig.Type) {
  10. allErrors = append(allErrors, fmt.Errorf("--storage-backend invalid, allowed values: %s. If not specified, it will default to 'etcd3'", strings.Join(storageTypes.List(), ", ")))
  11. }
  12. for _, override := range s.EtcdServersOverrides {
  13. tokens := strings.Split(override, "#")
  14. if len(tokens) != 2 {
  15. allErrors = append(allErrors, fmt.Errorf("--etcd-servers-overrides invalid, must be of format: group/resource#servers, where servers are URLs, semicolon separated"))
  16. continue
  17. }
  18. apiresource := strings.Split(tokens[0], "/")
  19. if len(apiresource) != 2 {
  20. allErrors = append(allErrors, fmt.Errorf("--etcd-servers-overrides invalid, must be of format: group/resource#servers, where servers are URLs, semicolon separated"))
  21. continue
  22. }
  23. }
  24. return allErrors
  25. }
可以用下面的命令查看集群中kube-apiserver进程的etcd相关启动参数:

  1. kubectl get pod -n kube-system
  2. ps -ef |grep apiserver
  3. ps -ef |grep apiserver |grep etcd

真正的Run函数

Run(completeOptions, genericapiserver.SetupSignalHandler())

completeOptions代表completedServerRunOptions(对ServerRunOptions做了补全)

第二个参数是stopCh

在底层的Run函数定义上可以看到第二个参数类型是一个只读的stop chan:stopCh <-chan struct{}

对应的genericapiserver.SetupSignalHandler()解析

  1. var onlyOneSignalHandler = make(chan struct{})
  2. var shutdownHandler chan os.Signal
  3. // SetupSignalHandler registered for SIGTERM and SIGINT. A stop channel is returned
  4. // which is closed on one of these signals. If a second signal is caught, the program
  5. // is terminated with exit code 1.
  6. // Only one of SetupSignalContext and SetupSignalHandler should be called, and only can
  7. // be called once.
  8. func SetupSignalHandler() <-chan struct{} {
  9. return SetupSignalContext().Done()
  10. }
  11. // SetupSignalContext is same as SetupSignalHandler, but a context.Context is returned.
  12. // Only one of SetupSignalContext and SetupSignalHandler should be called, and only can
  13. // be called once.
  14. func SetupSignalContext() context.Context {
  15. close(onlyOneSignalHandler) // panics when called twice
  16. shutdownHandler = make(chan os.Signal, 2)
  17. ctx, cancel := context.WithCancel(context.Background())
  18. signal.Notify(shutdownHandler, shutdownSignals...)
  19. go func() {
  20. <-shutdownHandler
  21. cancel()
  22. <-shutdownHandler
  23. os.Exit(1) // second signal. Exit directly.
  24. }()
  25. return ctx
  26. }

从上面可以看出这是一个context的Done方法返回,就是一个<-chan struct{}
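下面是一个脱离apiserver的小示意,演示把context的Done()直接当作stopCh使用(这里用标准库的signal.NotifyContext简化信号处理,仅为示意,并非apiserver的原始实现):

package main

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func run(stopCh <-chan struct{}) {
    for {
        select {
        case <-stopCh:
            fmt.Println("received stop signal, exiting")
            return
        case <-time.After(time.Second):
            fmt.Println("working...")
        }
    }
}

func main() {
    // NotifyContext 在收到 SIGINT/SIGTERM 时 cancel ctx
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    // ctx.Done() 返回的就是一个 <-chan struct{},可以直接作为 stopCh 传入
    run(ctx.Done())
}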

CreateServerChain创建3个server

CreateKubeAPIServer创建 kubeAPIServer代表API核心服务,包括常见的Pod/Deployment/Service

createAPIExtensionsServer创建 apiExtensionsServer 代表API扩展服务,主要针对CRD

createAggregatorServer创建aggregatorServer代表处理metrics的服务

然后运行

这一小节先简单过一下运行的流程,后面再慢慢看细节

  1. // Run runs the specified APIServer. This should never exit.
  2. func Run(completeOptions completedServerRunOptions, stopCh <-chan struct{}) error {
  3. // To help debugging, immediately log version
  4. klog.Infof("Version: %+v", version.Get())
  5. klog.InfoS("Golang settings", "GOGC", os.Getenv("GOGC"), "GOMAXPROCS", os.Getenv("GOMAXPROCS"), "GOTRACEBACK", os.Getenv("GOTRACEBACK"))
  6. server, err := CreateServerChain(completeOptions, stopCh)
  7. if err != nil {
  8. return err
  9. }
  10. prepared, err := server.PrepareRun()
  11. if err != nil {
  12. return err
  13. }
  14. return prepared.Run(stopCh)
  15. }

3.2 API核心服务通用配置genericConfig的准备工作

本节重点总结

API核心服务需要的通用配置工作中的准备工作

创建和节点通信的结构体proxyTransport,使用缓存长连接来提高效率

创建clientset

初始化etcd存储

CreateKubeAPIServerConfig创建所需配置解析

D:\Workspace\Go\src\github.com\kubernetes\kubernetes\cmd\kube-apiserver\app\server.go

创建和节点通信的结构体proxyTransport,使用缓存长连接提高效率

proxyTransport := CreateProxyTransport()

http.transport功能简介

transport的主要功能其实就是缓存了长连接

用于大量http请求场景下的连接复用

减少发送请求时TCP(TLS)连接建立的时间损耗
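为了说明transport复用长连接的效果,下面给出一个简单的http.Transport配置示意(连接数、超时等参数均为假设值,并非apiserver的真实配置):

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Transport 内部维护空闲连接池,后续请求可以复用 TCP(TLS) 连接
    transport := &http.Transport{
        MaxIdleConns:        100,              // 全局最大空闲连接数(假设值)
        MaxIdleConnsPerHost: 10,               // 每个 host 的最大空闲连接数(假设值)
        IdleConnTimeout:     90 * time.Second, // 空闲连接保留时间
    }
    client := &http.Client{Transport: transport, Timeout: 5 * time.Second}

    // 对同一个地址连续发多次请求,底层会复用同一条连接,省去握手开销
    for i := 0; i < 3; i++ {
        resp, err := client.Get("http://127.0.0.1:8080/healthz")
        if err != nil {
            fmt.Println("request error:", err)
            continue
        }
        resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }
}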

创建通用配置genericConfig

	genericConfig, versionedInformers, serviceResolver, pluginInitializers, admissionPostStartHook, storageFactory, err := buildGenericConfig(s.ServerRunOptions, proxyTransport)

下面是众多的ApplyTo分析

众多ApplyTo分析,并且有对应的AddFlags标记命令行参数

先创建genericConfig

genericConfig = genericapiserver.NewConfig(legacyscheme.Codecs)

以检查https配置的ApplyTo分析

  1. if lastErr = s.SecureServing.ApplyTo(&genericConfig.SecureServing, &genericConfig.LoopbackClientConfig); lastErr != nil {
  2. return
  3. }

底层调用SecureServingOptions的ApplyTo,有对应的AddFlags方法标记命令行参数,位置在

staging\src\k8s.io\apiserver\pkg\server\options\serving.go

  1. func (s *SecureServingOptions) AddFlags(fs *pflag.FlagSet) {
  2. if s == nil {
  3. return
  4. }
  5. fs.IPVar(&s.BindAddress, "bind-address", s.BindAddress, ""+
  6. "The IP address on which to listen for the --secure-port port. The "+
  7. "associated interface(s) must be reachable by the rest of the cluster, and by CLI/web "+
  8. "clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.")
  9. desc := "The port on which to serve HTTPS with authentication and authorization."
  10. if s.Required {
  11. desc += " It cannot be switched off with 0."
  12. } else {
  13. desc += " If 0, don't serve HTTPS at all."
  14. }
  15. fs.IntVar(&s.BindPort, "secure-port", s.BindPort, desc)

初始化etcd存储

创建存储工厂配置

  1. storageFactoryConfig := kubeapiserver.NewStorageFactoryConfig()
  2. storageFactoryConfig.APIResourceConfig = genericConfig.MergedResourceConfig
  3. completedStorageFactoryConfig, err := storageFactoryConfig.Complete(s.Etcd)
  4. if err != nil {
  5. lastErr = err
  6. return
  7. }

初始化存储工厂

  1. storageFactory, lastErr = completedStorageFactoryConfig.New()
  2. if lastErr != nil {
  3. return
  4. }

将存储工厂应用到服务端运行对象中,后期可以通过RESTOptionsGetter获取操作Etcd的句柄

  1. if lastErr = s.Etcd.ApplyWithStorageFactoryTo(storageFactory, genericConfig); lastErr != nil {
  2. return
  3. }
  4. func (s *EtcdOptions) ApplyWithStorageFactoryTo(factory serverstorage.StorageFactory, c *server.Config) error {
  5. if err := s.addEtcdHealthEndpoint(c); err != nil {
  6. return err
  7. }
  8. // use the StorageObjectCountTracker interface instance from server.Config
  9. s.StorageConfig.StorageObjectCountTracker = c.StorageObjectCountTracker
  10. c.RESTOptionsGetter = &StorageFactoryRestOptionsFactory{Options: *s, StorageFactory: factory}
  11. return nil
  12. }

addEtcdHealthEndpoint创建etcd的健康检测

  1. func (s *EtcdOptions) addEtcdHealthEndpoint(c *server.Config) error {
  2. healthCheck, err := storagefactory.CreateHealthCheck(s.StorageConfig)
  3. if err != nil {
  4. return err
  5. }
  6. c.AddHealthChecks(healthz.NamedCheck("etcd", func(r *http.Request) error {
  7. return healthCheck()
  8. }))
  9. if s.EncryptionProviderConfigFilepath != "" {
  10. kmsPluginHealthzChecks, err := encryptionconfig.GetKMSPluginHealthzCheckers(s.EncryptionProviderConfigFilepath)
  11. if err != nil {
  12. return err
  13. }
  14. c.AddHealthChecks(kmsPluginHealthzChecks...)
  15. }
  16. return nil
  17. }

从CreateHealthCheck得知,只支持etcdV3的接口

  1. // CreateHealthCheck creates a healthcheck function based on given config.
  2. func CreateHealthCheck(c storagebackend.Config) (func() error, error) {
  3. switch c.Type {
  4. case storagebackend.StorageTypeETCD2:
  5. return nil, fmt.Errorf("%s is no longer a supported storage backend", c.Type)
  6. case storagebackend.StorageTypeUnset, storagebackend.StorageTypeETCD3:
  7. return newETCD3HealthCheck(c)
  8. default:
  9. return nil, fmt.Errorf("unknown storage type: %s", c.Type)
  10. }
  11. }

设置使用protobufs用来内部交互,并且禁用压缩功能

因为内部网络速度快,没必要为了节省带宽而将cpu浪费在压缩和解压上

  1. // Use protobufs for self-communication.
  2. // Since not every generic apiserver has to support protobufs, we
  3. // cannot default to it in generic apiserver and need to explicitly
  4. // set it in kube-apiserver.
  5. genericConfig.LoopbackClientConfig.ContentConfig.ContentType = "application/vnd.kubernetes.protobuf"
  6. // Disable compression for self-communication, since we are going to be
  7. // on a fast local network
  8. genericConfig.LoopbackClientConfig.DisableCompression = true

创建clientset

  1. kubeClientConfig := genericConfig.LoopbackClientConfig
  2. clientgoExternalClient, err := clientgoclientset.NewForConfig(kubeClientConfig)
  3. if err != nil {
  4. lastErr = fmt.Errorf("failed to create real external clientset: %v", err)
  5. return
  6. }
  7. versionedInformers = clientgoinformers.NewSharedInformerFactory(clientgoExternalClient, 10*time.Minute)

versionedInformers代表client-go的informer工厂对象,用于对k8s对象做ListAndWatch
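下面是一个用SharedInformerFactory对Pod做ListAndWatch的最小示意(kubeconfig路径、resync周期、事件处理逻辑均为假设,与apiserver内部的用法并不完全相同):

package main

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // 假设的 kubeconfig 路径
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // 与 versionedInformers 类似:10 分钟 resync 一次
    factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
    podInformer := factory.Core().V1().Pods().Informer()
    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            pod := obj.(*corev1.Pod)
            fmt.Println("pod added:", pod.Namespace, pod.Name)
        },
    })

    stopCh := make(chan struct{})
    defer close(stopCh)
    factory.Start(stopCh)            // 启动所有 informer,开始 ListAndWatch
    factory.WaitForCacheSync(stopCh) // 等待本地缓存同步完成
    select {}                        // 阻塞,持续接收事件
}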

3.3 API核心服务的Authentication认证

Authentication的目的

验证你是谁 确认“你是不是你”

包括多种方式,如 Client Certificates、Password and Plain Tokens、Bootstrap Tokens、JWT Tokens 等

Kubernetes使用身份认证插件利用下面的策略来认证API请求的身份

-客户端证书

-持有者令牌(Bearer Token)

-身份认证代理(Proxy)

-HTTP基本认证机制

union认证的规则

-如果某一个认证方法报错就返回,说明认证没过

-如果某一个认证方法报ok,说明认证过了,直接return了,无需再运行其他认证了

-如果所有的认证方法都没报ok,则认证没过
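上面三条 union 认证规则可以用一段简化的示意代码来模拟(authFunc、unionAuthenticate 等类型和函数名都是演示用的假设,并非 apiserver 源码):

package main

import (
	"errors"
	"fmt"
)

// authFunc 模拟一个认证方法:返回用户名、是否认证通过、错误
type authFunc func(token string) (user string, ok bool, err error)

// unionAuthenticate 依次执行认证方法:某个方法报错且要求遇错即停时直接失败;
// 某个方法返回 ok 则立即通过;全部没报 ok 则认证没过
func unionAuthenticate(token string, failOnError bool, handlers ...authFunc) (string, bool, error) {
	var errs []error
	for _, h := range handlers {
		user, ok, err := h(token)
		if err != nil {
			if failOnError {
				return "", false, err
			}
			errs = append(errs, err)
			continue
		}
		if ok {
			return user, true, nil
		}
	}
	if len(errs) > 0 {
		return "", false, fmt.Errorf("all authenticators failed: %v", errs)
	}
	return "", false, errors.New("authentication not passed")
}

func main() {
	byStaticToken := func(t string) (string, bool, error) {
		if t == "token-a" {
			return "admin", true, nil
		}
		return "", false, nil // 不表态,交给下一个认证方法
	}
	byWebhook := func(t string) (string, bool, error) {
		return "", false, errors.New("webhook unreachable")
	}

	user, ok, err := unionAuthenticate("token-a", false, byWebhook, byStaticToken)
	fmt.Println(user, ok, err) // admin true <nil>
}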

验证你是谁 确认“你是不是你”

包括多种方式,如 Client Certificates、Password and Plain Tokens、Bootstrap Tokens、JWT Tokens 等

文档地址https://kubernetes.io/zh/docs/reference/access-authn-authz/authentication/

所有Kubernetes集群都有两类用户:由Kubernetes管理的服务账号和普通用户

所以认证要围绕这两类用户展开

身份认证策略

Kubernetes使用身份认证插件利用客户端证书、持有者令牌(Bearer Token)、身份认证代理(Proxy)或者HTTP基本认证机制来认证API请求的身份

Http请求发给API服务器时,插件会将以下属性关联到请求本身:

- 用户名:用来辨识最终用户的字符串。常见的值可以是kube-admin或tom@example.com。

- 用户ID:用来辨识最终用户的字符串。旨在比用户名有更好的一致性和唯一性。

- 用户组:取值为一组字符串,其中各个字符串用来表明用户是某个命名的用户逻辑集合的成员。常见的值可能是system:masters或者devops-team等。

- 附加字段:一组额外的键-值映射,键是字符串,值是一组字符串;用来保存鉴权组件可能认为有用的额外信息

你可以同时启用多种身份认证方法,并且你通常会至少使用两种方法:

-针对服务账号使用服务账号令牌

-至少另外一种方法对用户的身份进行认证

当集群中启用了多个身份认证模块时,第一个成功地对请求完成身份认证的模块会直接做出评估决定。API服务器并不保证身份认证模块的运行顺序。

对于所有通过身份认证的用户,system:authenticated组都会被添加到其组列表中。

其他身份认证协议(LDAP、SAML、Kerberos、X509 的替代模式等)都可以通过使用一个身份认证代理或身份认证webhook来实现。

代码解读

D:\Workspace\Go\src\github.com\kubernetes\kubernetes\cmd\kube-apiserver\app\server.go

之前构建server之前生成通用配置buildGenericConfig里

  1. // Authentication.ApplyTo requires already applied OpenAPIConfig and EgressSelector if present
  2. if lastErr = s.Authentication.ApplyTo(&genericConfig.Authentication, genericConfig.SecureServing, genericConfig.EgressSelector, genericConfig.OpenAPIConfig, genericConfig.OpenAPIV3Config, clientgoExternalClient, versionedInformers); lastErr != nil {
  3. return
  4. }

真正的Authentication初始化

D:\Workspace\Go\src\github.com\kubernetes\kubernetes\pkg\kubeapiserver\options\authentication.go

	authInfo.Authenticator, openAPIConfig.SecurityDefinitions, err = authenticatorConfig.New()

New代码、创建认证实例,支持多种认证方式:请求Header认证、Auth文件认证、CA证书认证、Bearer token认证

D:\Workspace\Go\src\github.com\kubernetes\kubernetes\pkg\kubeapiserver\authenticator\config.go

核心变量1 tokenAuthenticators []authenticator.Token代表Bearer token认证

  1. // Token checks a string value against a backing authentication store and
  2. // returns a Response or an error if the token could not be checked.
  3. type Token interface {
  4. AuthenticateToken(ctx context.Context, token string) (*Response, bool, error)
  5. }

不断添加到数组中,最终创建union对象,最终调用unionAuthTokenHandler.AuthenticateToken

  1. // Union the token authenticators
  2. tokenAuth := tokenunion.New(tokenAuthenticators...)
  3. // AuthenticateToken authenticates the token using a chain of authenticator.Token objects.
  4. func (authHandler *unionAuthTokenHandler) AuthenticateToken(ctx context.Context, token string) (*authenticator.Response, bool, error) {
  5. var errlist []error
  6. for _, currAuthRequestHandler := range authHandler.Handlers {
  7. info, ok, err := currAuthRequestHandler.AuthenticateToken(ctx, token)
  8. if err != nil {
  9. if authHandler.FailOnError {
  10. return info, ok, err
  11. }
  12. errlist = append(errlist, err)
  13. continue
  14. }
  15. if ok {
  16. return info, ok, err
  17. }
  18. }
  19. return nil, false, utilerrors.NewAggregate(errlist)
  20. }
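也可以按照 authenticator.Token 接口自己实现一个 Token 认证器并加入这条链,下面是一个最小示意(静态 token 表是演示用的假设;import 路径假定使用 k8s.io/apiserver 模块):

package main

import (
	"context"
	"fmt"

	"k8s.io/apiserver/pkg/authentication/authenticator"
	"k8s.io/apiserver/pkg/authentication/user"
)

// staticTokenAuthenticator 实现了 authenticator.Token 接口,仅用于演示
type staticTokenAuthenticator struct {
	tokens map[string]string // token -> username
}

func (a *staticTokenAuthenticator) AuthenticateToken(ctx context.Context, token string) (*authenticator.Response, bool, error) {
	name, ok := a.tokens[token]
	if !ok {
		// 返回 false 表示不表态,交给链上的下一个 Token 认证器
		return nil, false, nil
	}
	return &authenticator.Response{User: &user.DefaultInfo{Name: name}}, true, nil
}

func main() {
	var auth authenticator.Token = &staticTokenAuthenticator{tokens: map[string]string{"abc": "tom"}}
	resp, ok, err := auth.AuthenticateToken(context.Background(), "abc")
	fmt.Println(resp.User.GetName(), ok, err) // tom true <nil>
}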

核心变量2 authenticator.Request代表用户认证的接口,其中AuthenticateRequest是对应得认证方法

  1. // Request attempts to extract authentication information from a request and
  2. // returns a Response or an error if the request could not be checked.
  3. type Request interface {
  4. AuthenticateRequest(req *http.Request) (*Response, bool, error)
  5. }

然后不断添加到切片中,比如x509认证

  1. // X509 methods
  2. if config.ClientCAContentProvider != nil {
  3. certAuth := x509.NewDynamic(config.ClientCAContentProvider.VerifyOptions, x509.CommonNameUserConversion)
  4. authenticators = append(authenticators, certAuth)
  5. }

把上面的unionAuthTokenHandler也加入到链中

		authenticators = append(authenticators, bearertoken.New(tokenAuth), websocket.NewProtocolAuthenticator(tokenAuth))

最后创建一个union对象unionAuthRequestHandler

authenticator := union.New(authenticators...)

最终调用的unionAuthRequestHandler.AuthenticateRequest方法会遍历各认证方法进行认证

  1. // AuthenticateRequest authenticates the request using a chain of authenticator.Request objects.
  2. func (authHandler *unionAuthRequestHandler) AuthenticateRequest(req *http.Request) (*authenticator.Response, bool, error) {
  3. var errlist []error
  4. for _, currAuthRequestHandler := range authHandler.Handlers {
  5. resp, ok, err := currAuthRequestHandler.AuthenticateRequest(req)
  6. if err != nil {
  7. if authHandler.FailOnError {
  8. return resp, ok, err
  9. }
  10. errlist = append(errlist, err)
  11. continue
  12. }
  13. if ok {
  14. return resp, ok, err
  15. }
  16. }
  17. return nil, false, utilerrors.NewAggregate(errlist)
  18. }

代码解读:

-如果某一个认证方法报错就返回,说明认证没过

-如果某一个认证方法报ok,说明认证过了,直接return了,无需再运行其他认证了

-如果所有的认证方法都没报ok,则认证没过

本节重点总结:

Authentication的目的

Kubernetes使用身份认证插件利用下面的策略来认证API请求的身份

-客户端证书

-持有者令牌(Bearer Token)

-身份认证代理(Proxy)

-HTTP基本认证机制

union认证的规则

-如果某一个认证方法报错就返回,说明认证没过

-如果某一个认证方法报ok,说明认证过了,直接return了,无需再运行其他认证了

-如果所有的认证方法都没报ok,则认证没过

3.4 API核心服务的Authorization鉴权

Authorization鉴权

确认“你是不是有权力做这件事”。怎样判定是否有权利,通过配置策略

4种鉴权模块

鉴权执行链unionAuthzHandler

Authorization鉴权相关


 Authorization鉴权,确认“你是不是有权力做这件事”。怎样判定是否有权利,通过配置策略。

Kubernetes使用API服务器对API请求进行鉴权

它根据所有策略评估所有请求属性来决定允许或拒绝请求。

一个API请求的所有部分都必须被某些策略允许才能继续。这意味着默认情况下拒绝权限。

当系统配置了多个鉴权模块时,Kubernetes将按顺序使用每个模块。如果任何鉴权模块批准或拒绝请求,则立即返回该决定,并且不会再与其他鉴权模块协商。如果所有模块对请求都没有意见,则拒绝该请求,被拒绝的响应返回HTTP状态码403。

文档地址https://kubernetes.io/zh/docs/reference/access-authn-authz/authorization/

4种鉴权模块

文档地址https://kubernetes.io/zh/docs/reference/access-authn-authz/authorization/#authorization-modules

Node - 一个专用鉴权组件,根据调度到kubelet上运行的Pod为kubelet授予权限。了解有关使用节点鉴权模式的更多信息,请参阅节点鉴权。

ABAC-基于属性的访问控制(ABAC)定义了一种访问控制范型,通过使用将属性组合在一起的策略,将访问权限授予用户。策略可以使用任何类型的属性(用户属性、资源属性、对象、环境属性等)。要了解有关ABAC模式更多信息,请参阅ABAC模式。

RBAC-基于角色的访问控制(RBAC)是一种基于企业内个人用户的角色管理对计算机或网络资源的访问的方法。在此上下文中,权限是单个用户执行特定任务的能力,例如查看、创建或修改文件。要了解有关使用RBAC模式更多信息,请参阅RBAC模式。

- 被启用之后,RBAC(基于角色的访问控制)使用rbac.authorization.k8s.io API组来驱动鉴权决策,从而允许管理员通过Kubernetes API动态配置权限策略。

- 要启用RBAC,请使用--authorization-mode=RBAC启动API服务器。

Webhook-Webhook是一个HTTP回调:发生某些事情时调用的HTTP POST,通过HTTP POST进行简单的事件通知。实现Webhook的Web应用程序会在发生某些事情时将消息发布到URL。要了解有关使用Webhook模式的更多信息,请参阅Webhook模式。

代码解析

入口还在buildGenericConfig D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-apiserver\app\server.go

	genericConfig.Authorization.Authorizer, genericConfig.RuleResolver, err = BuildAuthorizer(s, genericConfig.EgressSelector, versionedInformers)

还是通过New构造,位置D:\Workspace\Go\src\k8s.io\kubernetes\pkg\kubeapiserver\authorizer\config.go

authorizationConfig.New()

构造函数New分析

核心变量1 authorizers

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\authorization\authorizer\interfaces.go

  1. // Authorizer makes an authorization decision based on information gained by making
  2. // zero or more calls to methods of the Attributes interface. It returns nil when an action is
  3. // authorized, otherwise it returns an error.
  4. type Authorizer interface {
  5. Authorize(ctx context.Context, a Attributes) (authorized Decision, reason string, err error)
  6. }

鉴权的接口,有对应的Authorize执行鉴权操作,返回参数如下

Decision代表鉴权结果,有三种取值:

- 拒绝 DecisionDeny

- 通过 DecisionAllow

- 未表态 DecisionNoOpinion

reason代表拒绝的原因
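下面是一个实现 authorizer.Authorizer 接口的最小示意:只放行对 pods 的 get 请求,其余一律不表态(DecisionNoOpinion)。这条规则是演示用的假设,并非源码中某个内置鉴权器:

package main

import (
	"context"
	"fmt"

	"k8s.io/apiserver/pkg/authentication/user"
	"k8s.io/apiserver/pkg/authorization/authorizer"
)

// podReadOnlyAuthorizer 只允许对 pods 的 get 请求,其余请求不表态
type podReadOnlyAuthorizer struct{}

func (podReadOnlyAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
	if a.IsResourceRequest() && a.GetResource() == "pods" && a.GetVerb() == "get" {
		return authorizer.DecisionAllow, "read-only pods allowed", nil
	}
	return authorizer.DecisionNoOpinion, "", nil
}

func main() {
	attrs := authorizer.AttributesRecord{
		User:            &user.DefaultInfo{Name: "tom"},
		Verb:            "get",
		Resource:        "pods",
		ResourceRequest: true,
	}
	d, reason, err := podReadOnlyAuthorizer{}.Authorize(context.Background(), attrs)
	fmt.Println(d == authorizer.DecisionAllow, reason, err) // true read-only pods allowed <nil>
}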

核心变量2 ruleResolvers

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\authorization\authorizer\interfaces.go

  1. // RuleResolver provides a mechanism for resolving the list of rules that apply to a given user within a namespace.
  2. type RuleResolver interface {
  3. // RulesFor get the list of cluster wide rules, the list of rules in the specific namespace, incomplete status and errors.
  4. RulesFor(user user.Info, namespace string) ([]ResourceRuleInfo, []NonResourceRuleInfo, bool, error)
  5. }

获取rule的接口,有对应的RulesFor执行获取rule操作,返回参数如下

[]ResourceRuleInfo代表资源型的rule

[]NonResourceRuleInfo代表非资源型的如nonResourceURLs:["/metrics"]

遍历鉴权模块判断,向上述切片中append

  1. for _, authorizationMode := range config.AuthorizationModes {
  2. // Keep cases in sync with constant list in k8s.io/kubernetes/pkg/kubeapiserver/authorizer/modes/modes.go.
  3. switch authorizationMode {
  4. case modes.ModeNode:
  5. node.RegisterMetrics()
  6. graph := node.NewGraph()
  7. node.AddGraphEventHandlers(
  8. graph,
  9. config.VersionedInformerFactory.Core().V1().Nodes(),
  10. config.VersionedInformerFactory.Core().V1().Pods(),
  11. config.VersionedInformerFactory.Core().V1().PersistentVolumes(),
  12. config.VersionedInformerFactory.Storage().V1().VolumeAttachments(),
  13. )
  14. nodeAuthorizer := node.NewAuthorizer(graph, nodeidentifier.NewDefaultNodeIdentifier(), bootstrappolicy.NodeRules())
  15. authorizers = append(authorizers, nodeAuthorizer)
  16. ruleResolvers = append(ruleResolvers, nodeAuthorizer)
  17. case modes.ModeAlwaysAllow:
  18. alwaysAllowAuthorizer := authorizerfactory.NewAlwaysAllowAuthorizer()
  19. authorizers = append(authorizers, alwaysAllowAuthorizer)
  20. ruleResolvers = append(ruleResolvers, alwaysAllowAuthorizer)
  21. case modes.ModeAlwaysDeny:
  22. alwaysDenyAuthorizer := authorizerfactory.NewAlwaysDenyAuthorizer()
  23. authorizers = append(authorizers, alwaysDenyAuthorizer)
  24. ruleResolvers = append(ruleResolvers, alwaysDenyAuthorizer)
  25. case modes.ModeABAC:
  26. abacAuthorizer, err := abac.NewFromFile(config.PolicyFile)
  27. if err != nil {
  28. return nil, nil, err
  29. }
  30. authorizers = append(authorizers, abacAuthorizer)
  31. ruleResolvers = append(ruleResolvers, abacAuthorizer)
  32. case modes.ModeWebhook:
  33. if config.WebhookRetryBackoff == nil {
  34. return nil, nil, errors.New("retry backoff parameters for authorization webhook has not been specified")
  35. }
  36. clientConfig, err := webhookutil.LoadKubeconfig(config.WebhookConfigFile, config.CustomDial)
  37. if err != nil {
  38. return nil, nil, err
  39. }
  40. webhookAuthorizer, err := webhook.New(clientConfig,
  41. config.WebhookVersion,
  42. config.WebhookCacheAuthorizedTTL,
  43. config.WebhookCacheUnauthorizedTTL,
  44. *config.WebhookRetryBackoff,
  45. )
  46. if err != nil {
  47. return nil, nil, err
  48. }
  49. authorizers = append(authorizers, webhookAuthorizer)
  50. ruleResolvers = append(ruleResolvers, webhookAuthorizer)
  51. case modes.ModeRBAC:
  52. rbacAuthorizer := rbac.New(
  53. &rbac.RoleGetter{Lister: config.VersionedInformerFactory.Rbac().V1().Roles().Lister()},
  54. &rbac.RoleBindingLister{Lister: config.VersionedInformerFactory.Rbac().V1().RoleBindings().Lister()},
  55. &rbac.ClusterRoleGetter{Lister: config.VersionedInformerFactory.Rbac().V1().ClusterRoles().Lister()},
  56. &rbac.ClusterRoleBindingLister{Lister: config.VersionedInformerFactory.Rbac().V1().ClusterRoleBindings().Lister()},
  57. )
  58. authorizers = append(authorizers, rbacAuthorizer)
  59. ruleResolvers = append(ruleResolvers, rbacAuthorizer)
  60. default:
  61. return nil, nil, fmt.Errorf("unknown authorization mode %s specified", authorizationMode)
  62. }
  63. }

最后返回两个对象的union对象,跟authentication一样

	return union.New(authorizers...), union.NewRuleResolvers(ruleResolvers...), nil

authorizers的union unionAuthzHandler

位置D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\authorization\union\union.go

  1. // New returns an authorizer that authorizes against a chain of authorizer.Authorizer objects
  2. func New(authorizationHandlers ...authorizer.Authorizer) authorizer.Authorizer {
  3. return unionAuthzHandler(authorizationHandlers)
  4. }
  5. // Authorizes against a chain of authorizer.Authorizer objects and returns nil if successful and returns error if unsuccessful
  6. func (authzHandler unionAuthzHandler) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
  7. var (
  8. errlist []error
  9. reasonlist []string
  10. )
  11. for _, currAuthzHandler := range authzHandler {
  12. decision, reason, err := currAuthzHandler.Authorize(ctx, a)
  13. if err != nil {
  14. errlist = append(errlist, err)
  15. }
  16. if len(reason) != 0 {
  17. reasonlist = append(reasonlist, reason)
  18. }
  19. switch decision {
  20. case authorizer.DecisionAllow, authorizer.DecisionDeny:
  21. return decision, reason, err
  22. case authorizer.DecisionNoOpinion:
  23. // continue to the next authorizer
  24. }
  25. }
  26. return authorizer.DecisionNoOpinion, strings.Join(reasonlist, "\n"), utilerrors.NewAggregate(errlist)
  27. }

unionAuthzHandler的鉴权执行方法Authorize同样是遍历执行内部各个Authorizer的Authorize方法

如果任一方法的鉴权结果decision为通过或者拒绝,就直接返回

否则代表不表态,继续执行下一个Authorize方法

ruleResolvers的union unionAuthzRulesHandler

位置D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\authorization\union\union.go

  1. // unionAuthzRulesHandler authorizer against a chain of authorizer.RuleResolver
  2. type unionAuthzRulesHandler []authorizer.RuleResolver
  3. // NewRuleResolvers returns an authorizer that authorizes against a chain of authorizer.Authorizer objects
  4. func NewRuleResolvers(authorizationHandlers ...authorizer.RuleResolver) authorizer.RuleResolver {
  5. return unionAuthzRulesHandler(authorizationHandlers)
  6. }
  7. // RulesFor against a chain of authorizer.RuleResolver objects and returns nil if successful and returns error if unsuccessful
  8. func (authzHandler unionAuthzRulesHandler) RulesFor(user user.Info, namespace string) ([]authorizer.ResourceRuleInfo, []authorizer.NonResourceRuleInfo, bool, error) {
  9. var (
  10. errList []error
  11. resourceRulesList []authorizer.ResourceRuleInfo
  12. nonResourceRulesList []authorizer.NonResourceRuleInfo
  13. )
  14. incompleteStatus := false
  15. for _, currAuthzHandler := range authzHandler {
  16. resourceRules, nonResourceRules, incomplete, err := currAuthzHandler.RulesFor(user, namespace)
  17. if incomplete {
  18. incompleteStatus = true
  19. }
  20. if err != nil {
  21. errList = append(errList, err)
  22. }
  23. if len(resourceRules) > 0 {
  24. resourceRulesList = append(resourceRulesList, resourceRules...)
  25. }
  26. if len(nonResourceRules) > 0 {
  27. nonResourceRulesList = append(nonResourceRulesList, nonResourceRules...)
  28. }
  29. }
  30. return resourceRulesList, nonResourceRulesList, incompleteStatus, utilerrors.NewAggregate(errList)
  31. }

unionAuthzRulesHandler的执行方法RulesFor中遍历内部的authzHandler

执行他们的RulesFor方法获取resourceRules和nonResourceRules

并将结果追加到resourceRulesList和nonResourceRulesList中返回

本节重点总结:

Authorization鉴权的目的

4种鉴权模块

鉴权执行链unionAuthzHandler

3.5 node类型的Authorization鉴权

Authorization鉴权

本节重点总结:

节点鉴权是一种特殊用途的鉴权模式,专门对kubelet发出的API请求进行鉴权

4种规则解读

- 如果不是node的请求则拒绝

- 如果nodeName没找到则拒绝

- 如果请求的是configmap、pod、pv、pvc、secret,需要校验

  - 如果动作不是get这类读操作,拒绝

  - 如果请求的资源和节点没关系,拒绝

- 如果请求其他资源,需要按照定义好的rule匹配

节点鉴权

文档地址https://kubernetes.io/zh/docs/reference/access-authn-authz/node/

节点鉴权是一种特殊用途的鉴权模式,专门对kubelet发出的API请求进行鉴权。

概述

节点鉴权器允许kubelet执行API操作,包括:

读取操作:

services

endpoints

nodes

pods

secrets、configmaps、pvcs以及绑定到kubelet节点的与pod相关的持久卷

写入操作:

节点和节点状态(启用NodeRestriction准入插件以限制kubelet只能修改自己的节点)

Pod和Pod状态(启用NodeRestriction准入插件以限制kubelet只能修改绑定到自身的Pod)

鉴权相关操作:

对于基于TLS的启动引导过程中使用的certificatesigningrequests API的读/写权限

为委派的身份验证/授权检查创建tokenreviews和subjectaccessreviews的能力

源码解读

位置D:\Workspace\Go\src\k8s.io\kubernetes\plugin\pkg\auth\authorizer\node\node_authorizer.go

  1. func (r *NodeAuthorizer) Authorize(ctx context.Context, attrs authorizer.Attributes) (authorizer.Decision, string, error) {
  2. nodeName, isNode := r.identifier.NodeIdentity(attrs.GetUser())
  3. if !isNode {
  4. // reject requests from non-nodes
  5. return authorizer.DecisionNoOpinion, "", nil
  6. }
  7. if len(nodeName) == 0 {
  8. // reject requests from unidentifiable nodes
  9. klog.V(2).Infof("NODE DENY: unknown node for user %q", attrs.GetUser().GetName())
  10. return authorizer.DecisionNoOpinion, fmt.Sprintf("unknown node for user %q", attrs.GetUser().GetName()), nil
  11. }
  12. // subdivide access to specific resources
  13. if attrs.IsResourceRequest() {
  14. requestResource := schema.GroupResource{Group: attrs.GetAPIGroup(), Resource: attrs.GetResource()}
  15. switch requestResource {
  16. case secretResource:
  17. return r.authorizeReadNamespacedObject(nodeName, secretVertexType, attrs)
  18. case configMapResource:
  19. return r.authorizeReadNamespacedObject(nodeName, configMapVertexType, attrs)
  20. case pvcResource:
  21. if attrs.GetSubresource() == "status" {
  22. return r.authorizeStatusUpdate(nodeName, pvcVertexType, attrs)
  23. }
  24. return r.authorizeGet(nodeName, pvcVertexType, attrs)
  25. case pvResource:
  26. return r.authorizeGet(nodeName, pvVertexType, attrs)
  27. case vaResource:
  28. return r.authorizeGet(nodeName, vaVertexType, attrs)
  29. case svcAcctResource:
  30. return r.authorizeCreateToken(nodeName, serviceAccountVertexType, attrs)
  31. case leaseResource:
  32. return r.authorizeLease(nodeName, attrs)
  33. case csiNodeResource:
  34. return r.authorizeCSINode(nodeName, attrs)
  35. }
  36. }
  37. // Access to other resources is not subdivided, so just evaluate against the statically defined node rules
  38. if rbac.RulesAllow(attrs, r.nodeRules...) {
  39. return authorizer.DecisionAllow, "", nil
  40. }
  41. return authorizer.DecisionNoOpinion, "", nil
  42. }

规则解读

  1. // NodeAuthorizer authorizes requests from kubelets, with the following logic:
  2. // 1. If a request is not from a node (NodeIdentity() returns isNode=false), reject
  3. // 2. If a specific node cannot be identified (NodeIdentity() returns nodeName=""), reject
  4. // 3. If a request is for a secret, configmap, persistent volume or persistent volume claim, reject unless the verb is get, and the requested object is related to the requesting node:
  5. // node <- configmap
  6. // node <- pod
  7. // node <- pod <- secret
  8. // node <- pod <- configmap
  9. // node <- pod <- pvc
  10. // node <- pod <- pvc <- pv
  11. // node <- pod <- pvc <- pv <- secret
  12. // 4. For other resources, authorize all nodes uniformly using statically defined rules

前两条规则很好理解

规则3解读

第三条:如果请求的资源是secret、configmap、persistent volume或persistent volume claim,需要验证动作是否是get

以secretResource为例,调用authorizeReadNamespacedObject方法

  1. case secretResource:
  2. return r.authorizeReadNamespacedObject(nodeName, secretVertexType, attrs)

authorizeReadNamespacedObject验证namespace级别对象的方法

authorizeReadNamespacedObject方法是装饰方法,先校验资源是否是namespace级别的,再调用底层的authorize方法

  1. // authorizeReadNamespacedObject authorizes "get", "list" and "watch" requests to single objects of a
  2. // specified types if they are related to the specified node.
  3. func (r *NodeAuthorizer) authorizeReadNamespacedObject(nodeName string, startingType vertexType, attrs authorizer.Attributes) (authorizer.Decision, string, error) {
  4. switch attrs.GetVerb() {
  5. case "get", "list", "watch":
  6. //ok
  7. default:
  8. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  9. return authorizer.DecisionNoOpinion, "can only read resources of this type", nil
  10. }
  11. if len(attrs.GetSubresource()) > 0 {
  12. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  13. return authorizer.DecisionNoOpinion, "cannot read subresource", nil
  14. }
  15. if len(attrs.GetNamespace()) == 0 {
  16. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  17. return authorizer.DecisionNoOpinion, "can only read namespaced object of this type", nil
  18. }
  19. return r.authorize(nodeName, startingType, attrs)
  20. }

解读

- DecisionNoOpinion代表不表态,如果只有一个Authorizer,意味着拒绝

- 如果动作不是get、list、watch这类读操作就拒绝

- 如果请求包含子资源(subresource)就拒绝

- 如果请求参数中没有namespace就拒绝

- 然后调用底层的authorize

node底层的authorize方法

- 如果资源的名称没找到就拒绝

- hasPathFrom代表判断资源是否和节点有关系

- 如果没关系就拒绝

  1. func (r *NodeAuthorizer) authorize(nodeName string, startingType vertexType, attrs authorizer.Attributes) (authorizer.Decision, string, error) {
  2. if len(attrs.GetName()) == 0 {
  3. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  4. return authorizer.DecisionNoOpinion, "No Object name found", nil
  5. }
  6. ok, err := r.hasPathFrom(nodeName, startingType, attrs.GetNamespace(), attrs.GetName())
  7. if err != nil {
  8. klog.V(2).InfoS("NODE DENY", "err", err)
  9. return authorizer.DecisionNoOpinion, fmt.Sprintf("no relationship found between node '%s' and this object", nodeName), nil
  10. }
  11. if !ok {
  12. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  13. return authorizer.DecisionNoOpinion, fmt.Sprintf("no relationship found between node '%s' and this object", nodeName), nil
  14. }
  15. return authorizer.DecisionAllow, "", nil
  16. }

pvResource使用的authorizeGet解析

如果动作不是get就拒绝

如果含有subresource就拒绝

然后调用底层的authorize

  1. // authorizeGet authorizes "get" requests to objects of the specified type if they are related to the specified node
  2. func (r *NodeAuthorizer) authorizeGet(nodeName string, startingType vertexType, attrs authorizer.Attributes) (authorizer.Decision, string, error) {
  3. if attrs.GetVerb() != "get" {
  4. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  5. return authorizer.DecisionNoOpinion, "can only get individual resources of this type", nil
  6. }
  7. if len(attrs.GetSubresource()) > 0 {
  8. klog.V(2).Infof("NODE DENY: '%s' %#v", nodeName, attrs)
  9. return authorizer.DecisionNoOpinion, "cannot get subresource", nil
  10. }
  11. return r.authorize(nodeName, startingType, attrs)
  12. }

规则4解读

规则4代表如果node请求其他资源,就按照静态定义好的node规则(nodeRules)进行鉴权

  1. // Access to other resources is not subdivided, so just evaluate against the statically defined node rules
  2. if rbac.RulesAllow(attrs, r.nodeRules...) {
  3. return authorizer.DecisionAllow, "", nil
  4. }
  5. return authorizer.DecisionNoOpinion, "", nil

底层调用rbac.RulesAllow,它会遍历规则并逐条调用RuleAllows进行匹配

D:\Workspace\Go\src\k8s.io\kubernetes\plugin\pkg\auth\authorizer\rbac\rbac.go

  1. func RuleAllows(requestAttributes authorizer.Attributes, rule *rbacv1.PolicyRule) bool {
  2. if requestAttributes.IsResourceRequest() {
  3. combinedResource := requestAttributes.GetResource()
  4. if len(requestAttributes.GetSubresource()) > 0 {
  5. combinedResource = requestAttributes.GetResource() + "/" + requestAttributes.GetSubresource()
  6. }
  7. return rbacv1helpers.VerbMatches(rule, requestAttributes.GetVerb()) &&
  8. rbacv1helpers.APIGroupMatches(rule, requestAttributes.GetAPIGroup()) &&
  9. rbacv1helpers.ResourceMatches(rule, combinedResource, requestAttributes.GetSubresource()) &&
  10. rbacv1helpers.ResourceNameMatches(rule, requestAttributes.GetName())
  11. }
  12. return rbacv1helpers.VerbMatches(rule, requestAttributes.GetVerb()) &&
  13. rbacv1helpers.NonResourceURLMatches(rule, requestAttributes.GetPath())
  14. }

最底层是两个Matches

VerbMatches

代表如果动作是*那就放行

如果请求的动作和rule的动作一致就放行

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\apis\rbac\v1\evaluation_helpers.go

  1. func VerbMatches(rule *rbacv1.PolicyRule, requestedVerb string) bool {
  2. for _, ruleVerb := range rule.Verbs {
  3. if ruleVerb == rbacv1.VerbAll {
  4. return true
  5. }
  6. if ruleVerb == requestedVerb {
  7. return true
  8. }
  9. }
  10. return false
  11. }

NonResourceURLMatches

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\apis\rbac\v1\evaluation_helpers.go

如果规则中的nonResourceURL是*(NonResourceAll)就放行


 如果请求的url和rule定义的url一致就放行

如果rule中定义的url末尾有*代表前缀通配,那么要判断请求的url是否以去掉*之后的前缀开头

  1. func NonResourceURLMatches(rule *rbacv1.PolicyRule, requestedURL string) bool {
  2. for _, ruleURL := range rule.NonResourceURLs {
  3. if ruleURL == rbacv1.NonResourceAll {
  4. return true
  5. }
  6. if ruleURL == requestedURL {
  7. return true
  8. }
  9. if strings.HasSuffix(ruleURL, "*") && strings.HasPrefix(requestedURL, strings.TrimRight(ruleURL, "*")) {
  10. return true
  11. }
  12. }
  13. return false
  14. }
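结合这两个 Matches 函数,可以在 kubernetes 源码树的 pkg/apis/rbac/v1 包内放一个小测试,直观看一下匹配效果(规则内容为假设):

package v1 // 假设放在 pkg/apis/rbac/v1 下的一个 _test.go 文件中

import (
	"fmt"
	"testing"

	rbacv1 "k8s.io/api/rbac/v1"
)

func TestMatchesDemo(t *testing.T) {
	// 假设的规则:允许 get/list 动作,非资源 URL 允许 /metrics 和 /healthz*
	rule := &rbacv1.PolicyRule{
		Verbs:           []string{"get", "list"},
		NonResourceURLs: []string{"/metrics", "/healthz*"},
	}

	fmt.Println(VerbMatches(rule, "get"))                     // true
	fmt.Println(VerbMatches(rule, "delete"))                  // false
	fmt.Println(NonResourceURLMatches(rule, "/metrics"))      // true:完全相等
	fmt.Println(NonResourceURLMatches(rule, "/healthz/etcd")) // true:命中 /healthz* 前缀通配
	fmt.Println(NonResourceURLMatches(rule, "/version"))      // false
}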

node的rule在哪里

位置D:\Workspace\Go\src\k8s.io\kubernetes\plugin\pkg\auth\authorizer\rbac\bootstrappolicy\policy.go

  1. // NodeRules returns node policy rules, it is slice of rbacv1.PolicyRule.
  2. func NodeRules() []rbacv1.PolicyRule {
  3. nodePolicyRules := []rbacv1.PolicyRule{
  4. // Needed to check API access. These creates are non-mutating
  5. rbacv1helpers.NewRule("create").Groups(authenticationGroup).Resources("tokenreviews").RuleOrDie(),
  6. rbacv1helpers.NewRule("create").Groups(authorizationGroup).Resources("subjectaccessreviews", "localsubjectaccessreviews").RuleOrDie(),
  7. // Needed to build serviceLister, to populate env vars for services
  8. rbacv1helpers.NewRule(Read...).Groups(legacyGroup).Resources("services").RuleOrDie(),
  9. // Nodes can register Node API objects and report status.
  10. // Use the NodeRestriction admission plugin to limit a node to creating/updating its own API object.
  11. rbacv1helpers.NewRule("create", "get", "list", "watch").Groups(legacyGroup).Resources("nodes").RuleOrDie(),
  12. rbacv1helpers.NewRule("update", "patch").Groups(legacyGroup).Resources("nodes/status").RuleOrDie(),
  13. rbacv1helpers.NewRule("update", "patch").Groups(legacyGroup).Resources("nodes").RuleOrDie(),
  14. // TODO: restrict to the bound node as creator in the NodeRestrictions admission plugin
  15. rbacv1helpers.NewRule("create", "update", "patch").Groups(legacyGroup).Resources("events").RuleOrDie(),
  16. // TODO: restrict to pods scheduled on the bound node once field selectors are supported by list/watch authorization
  17. rbacv1helpers.NewRule(Read...).Groups(legacyGroup).Resources("pods").RuleOrDie(),
  18. // Needed for the node to create/delete mirror pods.
  19. // Use the NodeRestriction admission plugin to limit a node to creating/deleting mirror pods bound to itself.
  20. rbacv1helpers.NewRule("create", "delete").Groups(legacyGroup).Resources("pods").RuleOrDie(),
  21. // Needed for the node to report status of pods it is running.
  22. // Use the NodeRestriction admission plugin to limit a node to updating status of pods bound to itself.
  23. rbacv1helpers.NewRule("update", "patch").Groups(legacyGroup).Resources("pods/status").RuleOrDie(),
  24. // Needed for the node to create pod evictions.
  25. // Use the NodeRestriction admission plugin to limit a node to creating evictions for pods bound to itself.
  26. rbacv1helpers.NewRule("create").Groups(legacyGroup).Resources("pods/eviction").RuleOrDie(),
  27. // Needed for imagepullsecrets, rbd/ceph and secret volumes, and secrets in envs
  28. // Needed for configmap volume and envs
  29. // Use the Node authorization mode to limit a node to get secrets/configmaps referenced by pods bound to itself.
  30. rbacv1helpers.NewRule("get", "list", "watch").Groups(legacyGroup).Resources("secrets", "configmaps").RuleOrDie(),
  31. // Needed for persistent volumes
  32. // Use the Node authorization mode to limit a node to get pv/pvc objects referenced by pods bound to itself.
  33. rbacv1helpers.NewRule("get").Groups(legacyGroup).Resources("persistentvolumeclaims", "persistentvolumes").RuleOrDie(),
  34. // TODO: add to the Node authorizer and restrict to endpoints referenced by pods or PVs bound to the node
  35. // Needed for glusterfs volumes
  36. rbacv1helpers.NewRule("get").Groups(legacyGroup).Resources("endpoints").RuleOrDie(),
  37. // Used to create a certificatesigningrequest for a node-specific client certificate, and watch
  38. // for it to be signed. This allows the kubelet to rotate it's own certificate.
  39. rbacv1helpers.NewRule("create", "get", "list", "watch").Groups(certificatesGroup).Resources("certificatesigningrequests").RuleOrDie(),
  40. // Leases
  41. rbacv1helpers.NewRule("get", "create", "update", "patch", "delete").Groups("coordination.k8s.io").Resources("leases").RuleOrDie(),
  42. // CSI
  43. rbacv1helpers.NewRule("get").Groups(storageGroup).Resources("volumeattachments").RuleOrDie(),
  44. // Use the Node authorization to limit a node to create tokens for service accounts running on that node
  45. // Use the NodeRestriction admission plugin to limit a node to create tokens bound to pods on that node
  46. rbacv1helpers.NewRule("create").Groups(legacyGroup).Resources("serviceaccounts/token").RuleOrDie(),
  47. }
  48. // Use the Node authorization mode to limit a node to update status of pvc objects referenced by pods bound to itself.
  49. // Use the NodeRestriction admission plugin to limit a node to just update the status stanza.
  50. pvcStatusPolicyRule := rbacv1helpers.NewRule("get", "update", "patch").Groups(legacyGroup).Resources("persistentvolumeclaims/status").RuleOrDie()
  51. nodePolicyRules = append(nodePolicyRules, pvcStatusPolicyRule)
  52. // CSI
  53. csiDriverRule := rbacv1helpers.NewRule("get", "watch", "list").Groups("storage.k8s.io").Resources("csidrivers").RuleOrDie()
  54. nodePolicyRules = append(nodePolicyRules, csiDriverRule)
  55. csiNodeInfoRule := rbacv1helpers.NewRule("get", "create", "update", "patch", "delete").Groups("storage.k8s.io").Resources("csinodes").RuleOrDie()
  56. nodePolicyRules = append(nodePolicyRules, csiNodeInfoRule)
  57. // RuntimeClass
  58. nodePolicyRules = append(nodePolicyRules, rbacv1helpers.NewRule("get", "list", "watch").Groups("node.k8s.io").Resources("runtimeclasses").RuleOrDie())
  59. return nodePolicyRules
  60. }

以endpoint为例,代表node可以访问core apigroup中的endpoint资源,用get方法

				rbacv1helpers.NewRule("get").Groups(legacyGroup).Resources("nodes").RuleOrDie(),

本节重点总结 同上

3.6 rbac类型Authorization鉴权

role、clusterrole中的rules规则

- 资源对象

- 非资源对象

- apiGroups

- verb动作

rbac鉴权的代码逻辑

- 通过informer获取clusterRoleBindings列表,根据user匹配subject,通过informer获取clusterRoleBindings的rules,遍历调用visit进行rule匹配

- 通过informer获取RoleBindings列表,根据user和namespace匹配subject,通过informer获取RoleBindings的rules,遍历调用visit进行rule匹配

本节重点总结

rbac模型四种对象的关系

- role、clusterrole

- rolebinding、clusterrolebinding

role、clusterrole中的rules规则

- 资源对象

- 非资源对象

- apiGroups

- verb动作

rbac鉴权的代码逻辑

- 通过informer获取clusterRoleBindings列表,根据user匹配subject,再通过informer获取clusterRoleBinding对应的rules,遍历调用visit进行rule匹配

- 通过informer获取RoleBindings列表,根据user和namespace匹配subject,再通过informer获取RoleBinding对应的rules,遍历调用visit进行rule匹配

rbac鉴权模型

文档地址https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/
简介
基于角色(Role)的访问控制(RBAC)是一种基于组织中用户的角色来调节控制对计算机或网络资源的访问的方法。

RBAC 鉴权机制使用rbac.authorization.k8s.io API组来驱动鉴权决定,允许你通过 Kubernetes API动态配置策略。
看文档介绍

源码解读

入口D:\Workspace\Go\src\k8s.io\kubernetes\pkg\kubeapiserver\authorizer\config.go

  1. case modes.ModeRBAC:
  2. rbacAuthorizer := rbac.New(
  3. &rbac.RoleGetter{Lister: config.VersionedInformerFactory.Rbac().V1().Roles().Lister()},
  4. &rbac.RoleBindingLister{Lister: config.VersionedInformerFactory.Rbac().V1().RoleBindings().Lister()},
  5. &rbac.ClusterRoleGetter{Lister: config.VersionedInformerFactory.Rbac().V1().ClusterRoles().Lister()},
  6. &rbac.ClusterRoleBindingLister{Lister: config.VersionedInformerFactory.Rbac().V1().ClusterRoleBindings().Lister()},
  7. )
  8. authorizers = append(authorizers, rbacAuthorizer)
  9. ruleResolvers = append(ruleResolvers, rbacAuthorizer)

rbac.New 传入Role、ClusterRole、RoleBinding 和ClusterRoleBinding 4种对象的Getter

  1. func New(roles rbacregistryvalidation.RoleGetter, roleBindings rbacregistryvalidation.RoleBindingLister, clusterRoles rbacregistryvalidation.ClusterRoleGetter, clusterRoleBindings rbacregistryvalidation.ClusterRoleBindingLister) *RBACAuthorizer {
  2. authorizer := &RBACAuthorizer{
  3. authorizationRuleResolver: rbacregistryvalidation.NewDefaultRuleResolver(
  4. roles, roleBindings, clusterRoles, clusterRoleBindings,
  5. ),
  6. }
  7. return authorizer
  8. }

构建DefaultRuleResolver,并用DefaultRuleResolver构建RBACAuthorizer

RBACAuthorizer的Authorize解析

核心判断点在ruleCheckingVisitor的allowed标志位,如果为true,则通过,否则就不通过

  1. func (r *RBACAuthorizer) Authorize(ctx context.Context, requestAttributes authorizer.Attributes) (authorizer.Decision, string, error) {
  2. ruleCheckingVisitor := &authorizingVisitor{requestAttributes: requestAttributes}
  3. r.authorizationRuleResolver.VisitRulesFor(requestAttributes.GetUser(), requestAttributes.GetNamespace(), ruleCheckingVisitor.visit)
  4. if ruleCheckingVisitor.allowed {
  5. return authorizer.DecisionAllow, ruleCheckingVisitor.reason, nil
  6. }
  7. // Build a detailed log of the denial.
  8. // Make the whole block conditional so we don't do a lot of string-building we won't use.
  9. if klogV := klog.V(5); klogV.Enabled() {
  10. var operation string
  11. if requestAttributes.IsResourceRequest() {
  12. b := &bytes.Buffer{}
  13. b.WriteString(`"`)
  14. b.WriteString(requestAttributes.GetVerb())
  15. b.WriteString(`" resource "`)
  16. b.WriteString(requestAttributes.GetResource())
  17. if len(requestAttributes.GetAPIGroup()) > 0 {
  18. b.WriteString(`.`)
  19. b.WriteString(requestAttributes.GetAPIGroup())
  20. }
  21. if len(requestAttributes.GetSubresource()) > 0 {
  22. b.WriteString(`/`)
  23. b.WriteString(requestAttributes.GetSubresource())
  24. }
  25. b.WriteString(`"`)
  26. if len(requestAttributes.GetName()) > 0 {
  27. b.WriteString(` named "`)
  28. b.WriteString(requestAttributes.GetName())
  29. b.WriteString(`"`)
  30. }
  31. operation = b.String()
  32. } else {
  33. operation = fmt.Sprintf("%q nonResourceURL %q", requestAttributes.GetVerb(), requestAttributes.GetPath())
  34. }
  35. var scope string
  36. if ns := requestAttributes.GetNamespace(); len(ns) > 0 {
  37. scope = fmt.Sprintf("in namespace %q", ns)
  38. } else {
  39. scope = "cluster-wide"
  40. }
  41. klogV.Infof("RBAC: no rules authorize user %q with groups %q to %s %s", requestAttributes.GetUser().GetName(), requestAttributes.GetUser().GetGroups(), operation, scope)
  42. }
  43. reason := ""
  44. if len(ruleCheckingVisitor.errors) > 0 {
  45. reason = fmt.Sprintf("RBAC: %v", utilerrors.NewAggregate(ruleCheckingVisitor.errors))
  46. }
  47. return authorizer.DecisionNoOpinion, reason, nil
  48. }

这个allowed标志位只有在visit方法中才会被设置,条件是RuleAllows=true

  1. func (v *authorizingVisitor) visit(source fmt.Stringer, rule *rbacv1.PolicyRule, err error) bool {
  2. if rule != nil && RuleAllows(v.requestAttributes, rule) {
  3. v.allowed = true
  4. v.reason = fmt.Sprintf("RBAC: allowed by %s", source.String())
  5. return false
  6. }
  7. if err != nil {
  8. v.errors = append(v.errors, err)
  9. }
  10. return true
  11. }

VisitRulesFor调用visit方法校验每一条rule

	r.authorizationRuleResolver.VisitRulesFor(requestAttributes.GetUser(), requestAttributes.GetNamespace(), ruleCheckingVisitor.visit)

先校验clusterRoleBinding

- 具体流程:先用informer获取clusterRoleBindings,如果出错就校验失败,因为传给visitor的rule为nil,意味着allowed不会被设置为true

  1. if clusterRoleBindings, err := r.clusterRoleBindingLister.ListClusterRoleBindings(); err != nil {
  2. if !visitor(nil, nil, err) {
  3. return
  4. }
  5. }

先根据传入的user对象对比subject主体

  1. for _, clusterRoleBinding := range clusterRoleBindings {
  2. subjectIndex, applies := appliesTo(user, clusterRoleBinding.Subjects, "")
  3. if !applies {
  4. continue
  5. }

appliesToUser对比函数,根据user类型判断:

- 如果是User就直接对比用户名

- 如果是Group就判断user所属的groups里是否包含该组名

- 如果是ServiceAccount就要用serviceaccount.MatchesUsername对比

  1. func appliesToUser(user user.Info, subject rbacv1.Subject, namespace string) bool {
  2. switch subject.Kind {
  3. case rbacv1.UserKind:
  4. return user.GetName() == subject.Name
  5. case rbacv1.GroupKind:
  6. return has(user.GetGroups(), subject.Name)
  7. case rbacv1.ServiceAccountKind:
  8. // default the namespace to namespace we're working in if its available. This allows rolebindings that reference
  9. // SAs in th local namespace to avoid having to qualify them.
  10. saNamespace := namespace
  11. if len(subject.Namespace) > 0 {
  12. saNamespace = subject.Namespace
  13. }
  14. if len(saNamespace) == 0 {
  15. return false
  16. }
  17. // use a more efficient comparison for RBAC checking
  18. return serviceaccount.MatchesUsername(saNamespace, subject.Name, user.GetName())
  19. default:
  20. return false
  21. }
  22. }

serviceaccount.MatchesUsername对比:按照serviceaccount的全名system:serviceaccount:<namespace>:<name>逐段进行对比

  1. // MatchesUsername checks whether the provided username matches the namespace and name without
  2. // allocating. Use this when checking a service account namespace and name against a known string.
  3. func MatchesUsername(namespace, name string, username string) bool {
  4. if !strings.HasPrefix(username, ServiceAccountUsernamePrefix) {
  5. return false
  6. }
  7. username = username[len(ServiceAccountUsernamePrefix):]
  8. if !strings.HasPrefix(username, namespace) {
  9. return false
  10. }
  11. username = username[len(namespace):]
  12. if !strings.HasPrefix(username, ServiceAccountUsernameSeparator) {
  13. return false
  14. }
  15. username = username[len(ServiceAccountUsernameSeparator):]
  16. return username == name
  17. }
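用一个小例子看 MatchesUsername 的逐段对比效果(namespace 和 serviceaccount 名字均为示例):

package main

import (
	"fmt"

	"k8s.io/apiserver/pkg/authentication/serviceaccount"
)

func main() {
	// serviceaccount 的完整用户名形如 system:serviceaccount:<namespace>:<name>
	fmt.Println(serviceaccount.MatchesUsername("kube-system", "default",
		"system:serviceaccount:kube-system:default")) // true
	fmt.Println(serviceaccount.MatchesUsername("default", "builder",
		"system:serviceaccount:kube-system:builder")) // false:namespace 不匹配
}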

再根据clusterRoleBinding.RoleRef从informer获取rules

			rules, err := r.GetRoleReferenceRules(clusterRoleBinding.RoleRef, "")

遍历rule,传入找到的clusterRoleBinding,调用visit进行比对

  1. sourceDescriber.binding = clusterRoleBinding
  2. sourceDescriber.subject = &clusterRoleBinding.Subjects[subjectIndex]
  3. for i := range rules {
  4. if !visitor(sourceDescriber, &rules[i], nil) {
  5. return
  6. }
  7. }

资源型

- 对比request和rule的verb是否一致(完整的RuleAllows还会对比apiGroup、resource、resourceName)

  1. func VerbMatches(rule *rbacv1.PolicyRule, requestedVerb string) bool {
  2. for _, ruleVerb := range rule.Verbs {
  3. if ruleVerb == rbacv1.VerbAll {
  4. return true
  5. }
  6. if ruleVerb == requestedVerb {
  7. return true
  8. }
  9. }
  10. return false
  11. }

非资源型的
对比request和rule的url和verb

  1. func NonResourceURLMatches(rule *rbacv1.PolicyRule, requestedURL string) bool {
  2. for _, ruleURL := range rule.NonResourceURLs {
  3. if ruleURL == rbacv1.NonResourceAll {
  4. return true
  5. }
  6. if ruleURL == requestedURL {
  7. return true
  8. }
  9. if strings.HasSuffix(ruleURL, "*") && strings.HasPrefix(requestedURL, strings.TrimRight(ruleURL, "*")) {
  10. return true
  11. }
  12. }
  13. return false
  14. }

再校验RoleBinding

通过informer获取roleBinding列表

  1. if roleBindings, err := r.roleBindingLister.ListRoleBindings(namespace); err != nil {
  2. if !visitor(nil, nil, err) {
  3. return
  4. }

遍历对比subject

- 同样调用appliesTo判断subject

  1. for _, roleBinding := range roleBindings {
  2. subjectIndex, applies := appliesTo(user, roleBinding.Subjects, namespace)
  3. if !applies {
  4. continue
  5. }

根据informer获取匹配到的roleBinding的rules对象

  1. rules, err := r.GetRoleReferenceRules(roleBinding.RoleRef, namespace)
  2. if err != nil {
  3. if !visitor(nil, nil, err) {
  4. return
  5. }
  6. continue
  7. }

调用visit方法遍历rule进行匹配

- 如果匹配中了,allowed置为true

  1. sourceDescriber.binding = roleBinding
  2. sourceDescriber.subject = &roleBinding.Subjects[subjectIndex]
  3. for i := range rules {
  4. if !visitor(sourceDescriber, &rules[i], nil) {
  5. return
  6. }
  7. }

3.7 audit审计功能说明和源码阅读

audit审计的总结

Auditing
Kubernetes 审计 (Auditing) 功能提供了与安全相关的、按时间顺序排列的记录集,记录每个用户、使用 Kubernetes API 的应用以及控制面自身引发的活动

审计功能使得集群管理员能处理以下问题

发生了什么?
什么时候发生的?
谁触发的?
活动发生在哪个(些) 对象上?
在哪观察到的?
它从哪触发的?
活动的后续处理行为是什么?

审计策略的级别由粗到细粒度增长:

- None - 符合这条规则的日志将不会记录。

- Metadata - 记录请求的元数据(请求的用户、时间戳、资源、动词等等),但是不记录请求或者响应的消息体。

- Request - 记录事件的元数据和请求的消息体,但是不记录响应的消息体。这不适用于非资源类型的请求。

- RequestResponse - 记录事件的元数据、请求和响应的消息体。这不适用于非资源类型的请求。

审计功能介绍
随文档学习
文档地址https://kubernetes.io/zh/docs/tasks/debug-application-cluster/audit/

audit源码阅读

入口位置D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-apiserver\app\server.go

buildGenericConfig

  1. lastErr = s.Audit.ApplyTo(genericConfig)
  2. if lastErr != nil {
  3. return
  4. }

1.从配置的 --audit-policy-file加载audit策略
你可以使用--audit-policy-file 标志将包含策略的文件传递给 kube-apiserver
如果不设置该标志,则不记录事件
rules 字段必须在审计策略文件中提供。没有(0条)规则的策略将被视为非法配置

  1. // 1. Build policy evaluator
  2. evaluator, err := o.newPolicyRuleEvaluator()
  3. if err != nil {
  4. return err
  5. }

2.从配置的 --audit-log-path设置logBackend

  1. // 2. Build log backend
  2. var logBackend audit.Backend
  3. w, err := o.LogOptions.getWriter()
  4. if err != nil {
  5. return err
  6. }
  7. if w != nil {
  8. if evaluator == nil {
  9. klog.V(2).Info("No audit policy file provided, no events will be recorded for log backend")
  10. } else {
  11. logBackend = o.LogOptions.newBackend(w)
  12. }
  13. }

如果后端--audit-log-path="-"代表记录到标准输出

  1. func (o *AuditLogOptions) getWriter() (io.Writer, error) {
  2. if !o.enabled() {
  3. return nil, nil
  4. }
  5. if o.Path == "-" {
  6. return os.Stdout, nil
  7. }
  8. if err := o.ensureLogFile(); err != nil {
  9. return nil, fmt.Errorf("ensureLogFile: %w", err)
  10. }
  11. return &lumberjack.Logger{
  12. Filename: o.Path,
  13. MaxAge: o.MaxAge,
  14. MaxBackups: o.MaxBackups,
  15. MaxSize: o.MaxSize,
  16. Compress: o.Compress,
  17. }, nil
  18. }

ensureLogFile 会尝试打开一下log文件,验证日志文件是否可用

底层使用 https://github.com/natefinch/lumberjack,是个带有滚动功能的日志库
获取到日志writer对象后校验下有没有evaluator
- 如果没有evaluator,打印条提示日志
- 如果有evaluator,就用w构建logBackend

  1. if w != nil {
  2. if evaluator == nil {
  3. klog.V(2).Info("No audit policy file provided, no events will be recorded for log backend")
  4. } else {
  5. logBackend = o.LogOptions.newBackend(w)
  6. }
  7. }

3. 根据配置构建webhook后端

  1. // 3. Build webhook backend
  2. var webhookBackend audit.Backend
  3. if o.WebhookOptions.enabled() {
  4. if evaluator == nil {
  5. klog.V(2).Info("No audit policy file provided, no events will be recorded for webhook backend")
  6. } else {
  7. if c.EgressSelector != nil {
  8. var egressDialer utilnet.DialFunc
  9. egressDialer, err = c.EgressSelector.Lookup(egressselector.ControlPlane.AsNetworkContext())
  10. if err != nil {
  11. return err
  12. }
  13. webhookBackend, err = o.WebhookOptions.newUntruncatedBackend(egressDialer)
  14. } else {
  15. webhookBackend, err = o.WebhookOptions.newUntruncatedBackend(nil)
  16. }
  17. if err != nil {
  18. return err
  19. }
  20. }
  21. }

4. 如果有webhook就把它封装为dynamicBackend

  1. // 4. Apply dynamic options.
  2. var dynamicBackend audit.Backend
  3. if webhookBackend != nil {
  4. // if only webhook is enabled wrap it in the truncate options
  5. dynamicBackend = o.WebhookOptions.TruncateOptions.wrapBackend(webhookBackend, groupVersion)
  6. }

5. 设置审计的策略计算对象evaluator

  1. // 5. Set the policy rule evaluator
  2. c.AuditPolicyRuleEvaluator = evaluator

6. 把logBackend和dynamicBackend做union

  1. // 6. Join the log backend with the webhooks
  2. c.AuditBackend = appendBackend(logBackend, dynamicBackend)
  3. func appendBackend(existing, newBackend audit.Backend) audit.Backend {
  4. if existing == nil {
  5. return newBackend
  6. }
  7. if newBackend == nil {
  8. return existing
  9. }
  10. return audit.Union(existing, newBackend)
  11. }

7.最终的运行方法

backend接口方法

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\audit\types.go

  1. type Sink interface {
  2. // ProcessEvents handles events. Per audit ID it might be that ProcessEvents is called up to three times.
  3. // Errors might be logged by the sink itself. If an error should be fatal, leading to an internal
  4. // error, ProcessEvents is supposed to panic. The event must not be mutated and is reused by the caller
  5. // after the call returns, i.e. the sink has to make a deepcopy to keep a copy around if necessary.
  6. // Returns true on success, may return false on error.
  7. ProcessEvents(events ...*auditinternal.Event) bool
  8. }
  9. type Backend interface {
  10. Sink
  11. // Run will initialize the backend. It must not block, but may run go routines in the background. If
  12. // stopCh is closed, it is supposed to stop them. Run will be called before the first call to ProcessEvents.
  13. Run(stopCh <-chan struct{}) error
  14. // Shutdown will synchronously shut down the backend while making sure that all pending
  15. // events are delivered. It can be assumed that this method is called after
  16. // the stopCh channel passed to the Run method has been closed.
  17. Shutdown()
  18. // Returns the backend PluginName.
  19. String() string
  20. }
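照着上面的 Sink/Backend 接口,可以写一个把事件打印到标准输出的演示后端(仅为示意,并非 apiserver 内置的 log backend;import 路径假定使用 k8s.io/apiserver 模块):

package main

import (
	"fmt"

	auditinternal "k8s.io/apiserver/pkg/apis/audit"
	"k8s.io/apiserver/pkg/audit"
)

// stdoutBackend 是演示用的审计后端:把事件的关键信息打印到标准输出
type stdoutBackend struct{}

func (stdoutBackend) ProcessEvents(events ...*auditinternal.Event) bool {
	for _, ev := range events {
		fmt.Printf("audit: stage=%s verb=%s uri=%s user=%s\n",
			ev.Stage, ev.Verb, ev.RequestURI, ev.User.Username)
	}
	return true
}

func (stdoutBackend) Run(stopCh <-chan struct{}) error { return nil } // 无后台协程
func (stdoutBackend) Shutdown()                        {}
func (stdoutBackend) String() string                   { return "stdout-demo" }

// 编译期断言:stdoutBackend 实现了 audit.Backend 接口
var _ audit.Backend = stdoutBackend{}

func main() {
	ev := &auditinternal.Event{Stage: auditinternal.StageResponseComplete, Verb: "get", RequestURI: "/api/v1/pods"}
	ev.User.Username = "tom"
	stdoutBackend{}.ProcessEvents(ev)
}

apiserver 内置的 log、webhook 后端就是实现了同一组接口,下面看 log backend 的 ProcessEvents。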

最终调用audit的ProcessEvents方法,以log举例,位置D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\plugin\pkg\audit\log\backend.go

  1. func (b *backend) ProcessEvents(events ...*auditinternal.Event) bool {
  2. success := true
  3. for _, ev := range events {
  4. success = b.logEvent(ev) && success
  5. }
  6. return success
  7. }
  8. func (b *backend) logEvent(ev *auditinternal.Event) bool {
  9. line := ""
  10. switch b.format {
  11. case FormatLegacy:
  12. line = audit.EventString(ev) + "\n"
  13. case FormatJson:
  14. bs, err := runtime.Encode(b.encoder, ev)
  15. if err != nil {
  16. audit.HandlePluginError(PluginName, err, ev)
  17. return false
  18. }
  19. line = string(bs[:])
  20. default:
  21. audit.HandlePluginError(PluginName, fmt.Errorf("log format %q is not in list of known formats (%s)",
  22. b.format, strings.Join(AllowedFormats, ",")), ev)
  23. return false
  24. }
  25. if _, err := fmt.Fprint(b.out, line); err != nil {
  26. audit.HandlePluginError(PluginName, err, ev)
  27. return false
  28. }
  29. return true
  30. }

8. http侧调用的handler

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\endpoints\filters\audit.go

  1. // WithAudit decorates a http.Handler with audit logging information for all the
  2. // requests coming to the server. Audit level is decided according to requests'
  3. // attributes and audit policy. Logs are emitted to the audit sink to
  4. // process events. If sink or audit policy is nil, no decoration takes place.
  5. func WithAudit(handler http.Handler, sink audit.Sink, policy audit.PolicyRuleEvaluator, longRunningCheck request.LongRunningRequestCheck) http.Handler {
  6. if sink == nil || policy == nil {
  7. return handler
  8. }
  9. return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
  10. auditContext, err := evaluatePolicyAndCreateAuditEvent(req, policy)
  11. if err != nil {
  12. utilruntime.HandleError(fmt.Errorf("failed to create audit event: %v", err))
  13. responsewriters.InternalError(w, req, errors.New("failed to create audit event"))
  14. return
  15. }
  16. ev := auditContext.Event
  17. if ev == nil || req.Context() == nil {
  18. handler.ServeHTTP(w, req)
  19. return
  20. }
  21. req = req.WithContext(audit.WithAuditContext(req.Context(), auditContext))
  22. ctx := req.Context()
  23. omitStages := auditContext.RequestAuditConfig.OmitStages
  24. ev.Stage = auditinternal.StageRequestReceived
  25. if processed := processAuditEvent(ctx, sink, ev, omitStages); !processed {
  26. audit.ApiserverAuditDroppedCounter.WithContext(ctx).Inc()
  27. responsewriters.InternalError(w, req, errors.New("failed to store audit event"))
  28. return
  29. }
  30. // intercept the status code
  31. var longRunningSink audit.Sink
  32. if longRunningCheck != nil {
  33. ri, _ := request.RequestInfoFrom(ctx)
  34. if longRunningCheck(req, ri) {
  35. longRunningSink = sink
  36. }
  37. }
  38. respWriter := decorateResponseWriter(ctx, w, ev, longRunningSink, omitStages)
  39. // send audit event when we leave this func, either via a panic or cleanly. In the case of long
  40. // running requests, this will be the second audit event.
  41. defer func() {
  42. if r := recover(); r != nil {
  43. defer panic(r)
  44. ev.Stage = auditinternal.StagePanic
  45. ev.ResponseStatus = &metav1.Status{
  46. Code: http.StatusInternalServerError,
  47. Status: metav1.StatusFailure,
  48. Reason: metav1.StatusReasonInternalError,
  49. Message: fmt.Sprintf("APIServer panic'd: %v", r),
  50. }
  51. processAuditEvent(ctx, sink, ev, omitStages)
  52. return
  53. }
  54. // if no StageResponseStarted event was sent b/c neither a status code nor a body was sent, fake it here
  55. // But Audit-Id http header will only be sent when http.ResponseWriter.WriteHeader is called.
  56. fakedSuccessStatus := &metav1.Status{
  57. Code: http.StatusOK,
  58. Status: metav1.StatusSuccess,
  59. Message: "Connection closed early",
  60. }
  61. if ev.ResponseStatus == nil && longRunningSink != nil {
  62. ev.ResponseStatus = fakedSuccessStatus
  63. ev.Stage = auditinternal.StageResponseStarted
  64. processAuditEvent(ctx, longRunningSink, ev, omitStages)
  65. }
  66. ev.Stage = auditinternal.StageResponseComplete
  67. if ev.ResponseStatus == nil {
  68. ev.ResponseStatus = fakedSuccessStatus
  69. }
  70. processAuditEvent(ctx, sink, ev, omitStages)
  71. }()
  72. handler.ServeHTTP(respWriter, req)
  73. })
  74. }
tail -f /var/log/audit/audit.log

3.8 admission准入控制器功能和源码解读

什么是准入控制插件

- 准入控制器是一段代码,它会在请求通过认证和授权之后、对象被持久化之前拦截到达 API服务器的请求

- 准入控制过程分为两个阶段。第一阶段,运行变更准入控制器。第二阶段,运行验证准入控制器。

- 控制器需要编译进 kube-apiserver 二进制文件,并且只能由集群管理员配置。

- 如果任何一个阶段的任何控制器拒绝了该请求,则整个请求将立即被拒绝,并向终端用户返回一个错误。

本节重点总结

准入控制插件的作用

        - 开启高级特性

什么是准入控制插件

文档地址 https://kubernetes.io/zh/docs/reference/access-authn-authz/admission-controllers/

- 准入控制器是一段代码,它会在请求通过认证和授权之后、对象被持久化之前拦截到达 API 服务器的请求

- 准入控制过程分为两个阶段。第一阶段,运行变更准入控制器。第二阶段,运行验证准入控制器。

- 控制器需要编译进 kube-apiserver 二进制文件,并且只能由集群管理员配置。

- 如果任何一个阶段的任何控制器拒绝了该请求,则整个请求将立即被拒绝,并向终端用户返回一个错误。
为什么需要准入控制器?
- Kubernetes 的许多高级功能都要求启用一个准入控制器,以便正确地支持该特性。

- 因此,没有正确配置准入控制器的 Kubernetes API 服务器是不完整的,它无法支持你期望的所有特性。 

按照是否可以修改对象分类
- 准入控制器可以执行“验证(Validating)”和/或“变更(Mutating)” 操作

- 变更(mutating)控制器可以修改被其接受的对象;验证(validating)控制器则不行

按照静态动态分类
- 静态的就是固定的单一功能,如AlwaysPullImages 修改每一个新创建的 Pod 的镜像拉取策略为 Always

- 动态的如有两个特殊的控制器:MutatingAdmissionWebhook 和 ValidatingAdmissionWebhook。

- 它们根据API 中的配置,分别执行变更和验证准入控制 webhook。

- 相当于可以调用外部的http请求准入控制插件
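为了直观感受插件的注册与验证流程,下面是一个验证型准入插件的最小示意(插件名 DenyDemoNamespace 和"拒绝在 demo 命名空间创建对象"的逻辑都是演示用的假设,并非内置插件):

package main

import (
	"context"
	"fmt"
	"io"

	"k8s.io/apiserver/pkg/admission"
)

const pluginName = "DenyDemoNamespace" // 演示用的假设插件名

// Register 按照 apiserver 的惯例把工厂函数注册进 admission.Plugins
func Register(plugins *admission.Plugins) {
	plugins.Register(pluginName, func(config io.Reader) (admission.Interface, error) {
		return &denyDemoNamespace{Handler: admission.NewHandler(admission.Create)}, nil
	})
}

// denyDemoNamespace 是一个验证型(Validating)准入插件
type denyDemoNamespace struct {
	*admission.Handler
}

var _ admission.ValidationInterface = &denyDemoNamespace{}

func (d *denyDemoNamespace) Validate(ctx context.Context, a admission.Attributes, o admission.ObjectInterfaces) error {
	if a.GetNamespace() == "demo" {
		return admission.NewForbidden(a, fmt.Errorf("creating objects in namespace %q is not allowed", a.GetNamespace()))
	}
	return nil
}

func main() {
	plugins := admission.NewPlugins()
	Register(plugins)
	fmt.Println(plugins.Registered()) // [DenyDemoNamespace]
}

真实插件的注册方式与此一致,见下文的 RegisterAllAdmissionPlugins。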

源码阅读

入口在D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-apiserver\app\server.go

	pluginInitializers, admissionPostStartHook, err = admissionConfig.New(proxyTransport, genericConfig.EgressSelector, serviceResolver, genericConfig.TracerProvider)

admissionConfig.New初始化准入控制器的配置
New函数设置准入所需的插件和webhook准入的启动钩子函数

  1. // New sets up the plugins and admission start hooks needed for admission
  2. func (c *Config) New(proxyTransport *http.Transport, egressSelector *egressselector.EgressSelector, serviceResolver webhook.ServiceResolver, tp *trace.TracerProvider) ([]admission.PluginInitializer, genericapiserver.PostStartHookFunc, error) {
  3. webhookAuthResolverWrapper := webhook.NewDefaultAuthenticationInfoResolverWrapper(proxyTransport, egressSelector, c.LoopbackClientConfig, tp)
  4. webhookPluginInitializer := webhookinit.NewPluginInitializer(webhookAuthResolverWrapper, serviceResolver)
  5. var cloudConfig []byte
  6. if c.CloudConfigFile != "" {
  7. var err error
  8. cloudConfig, err = ioutil.ReadFile(c.CloudConfigFile)
  9. if err != nil {
  10. klog.Fatalf("Error reading from cloud configuration file %s: %#v", c.CloudConfigFile, err)
  11. }
  12. }
  13. clientset, err := kubernetes.NewForConfig(c.LoopbackClientConfig)
  14. if err != nil {
  15. return nil, nil, err
  16. }
  17. discoveryClient := cacheddiscovery.NewMemCacheClient(clientset.Discovery())
  18. discoveryRESTMapper := restmapper.NewDeferredDiscoveryRESTMapper(discoveryClient)
  19. kubePluginInitializer := NewPluginInitializer(
  20. cloudConfig,
  21. discoveryRESTMapper,
  22. quotainstall.NewQuotaConfigurationForAdmission(),
  23. )
  24. admissionPostStartHook := func(context genericapiserver.PostStartHookContext) error {
  25. discoveryRESTMapper.Reset()
  26. go utilwait.Until(discoveryRESTMapper.Reset, 30*time.Second, context.StopCh)
  27. return nil
  28. }
  29. return []admission.PluginInitializer{webhookPluginInitializer, kubePluginInitializer}, admissionPostStartHook, nil
  30. }

其中用到的准入初始化接口为 PluginInitializer,位置D:\Workspace\Go\src\k8s.io\kubernetes\pkg\kubeapiserver\admission\initializer.go

同时含有对应的Initialize方法,作用是提供初始化的数据

  1. // PluginInitializer is used for initialization of the Kubernetes specific admission plugins.
  2. type PluginInitializer struct {
  3. cloudConfig []byte
  4. restMapper meta.RESTMapper
  5. quotaConfiguration quota.Configuration
  6. }// Initialize checks the initialization interfaces implemented by each plugin
  7. // and provide the appropriate initialization data
  8. func (i *PluginInitializer) Initialize(plugin admission.Interface) {
  9. if wants, ok := plugin.(WantsCloudConfig); ok {
  10. wants.SetCloudConfig(i.cloudConfig)
  11. }
  12. if wants, ok := plugin.(WantsRESTMapper); ok {
  13. wants.SetRESTMapper(i.restMapper)
  14. }
  15. if wants, ok := plugin.(initializer.WantsQuotaConfiguration); ok {
  16. wants.SetQuotaConfiguration(i.quotaConfiguration)
  17. }
  18. }

同时还初始化了quota配额的准入

  1. kubePluginInitializer := NewPluginInitializer(
  2. cloudConfig,
  3. discoveryRESTMapper,
  4. quotainstall.NewQuotaConfigurationForAdmission(),
  5. )

生成一个admissionPostStartHook启动钩子,每30秒重置一下discoveryRESTMapper,即重置其内部缓存的Discovery信息

  1. admissionPostStartHook := func(context genericapiserver.PostStartHookContext) error {
  2. discoveryRESTMapper.Reset()
  3. go utilwait.Until(discoveryRESTMapper.Reset, 30*time.Second, context.StopCh)
  4. return nil
  5. }

s.Admission.ApplyTo 初始化准入控制

  1. err = s.Admission.ApplyTo(
  2. genericConfig,
  3. versionedInformers,
  4. kubeClientConfig,
  5. utilfeature.DefaultFeatureGate,
  6. pluginInitializers...)

根据传入的控制器列表和推荐的插件列表,计算出开启的和关闭的插件

  1. if a.PluginNames != nil {
  2. // pass PluginNames to generic AdmissionOptions
  3. a.GenericAdmission.EnablePlugins, a.GenericAdmission.DisablePlugins = computePluginNames(a.PluginNames, a.GenericAdmission.RecommendedPluginOrder)
  4. }

PluginNames代表--admission-control传入的
a.GenericAdmission.RecommendedPluginOrder代表官方定义的全部插件AllOrderedPlugins,位置D:\Workspace\Go\src\k8s.io\kubernetes\pkg\kubeapiserver\options\plugins.go

使用computePluginNames算差集得到开启的和关闭的

  1. // explicitly disable all plugins that are not in the enabled list
  2. func computePluginNames(explicitlyEnabled []string, all []string) (enabled []string, disabled []string) {
  3. return explicitlyEnabled, sets.NewString(all...).Difference(sets.NewString(explicitlyEnabled...)).List()
  4. }
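computePluginNames 实际上是一个集合差运算,下面的小例子演示其效果(插件名列表是截取的示例):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

// 与源码相同的逻辑:显式开启的保持不变,其余全部视为关闭
func computePluginNames(explicitlyEnabled []string, all []string) (enabled []string, disabled []string) {
	return explicitlyEnabled, sets.NewString(all...).Difference(sets.NewString(explicitlyEnabled...)).List()
}

func main() {
	all := []string{"AlwaysPullImages", "LimitRanger", "NamespaceLifecycle", "ServiceAccount"}
	enabled, disabled := computePluginNames([]string{"NamespaceLifecycle", "ServiceAccount"}, all)
	fmt.Println(enabled)  // [NamespaceLifecycle ServiceAccount]
	fmt.Println(disabled) // [AlwaysPullImages LimitRanger]
}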

底层的ApplyTo分析

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\options\admission.go

func (a *AdmissionOptions) ApplyTo(){}

根据传入关闭的、传入开启的、推荐的等插件列表计算真正要开启的列表

  1. // enabledPluginNames makes use of RecommendedPluginOrder, DefaultOffPlugins,
  2. // EnablePlugins, DisablePlugins fields
  3. // to prepare a list of ordered plugin names that are enabled.
  4. func (a *AdmissionOptions) enabledPluginNames() []string {
  5. allOffPlugins := append(a.DefaultOffPlugins.List(), a.DisablePlugins...)
  6. disabledPlugins := sets.NewString(allOffPlugins...)
  7. enabledPlugins := sets.NewString(a.EnablePlugins...)
  8. disabledPlugins = disabledPlugins.Difference(enabledPlugins)
  9. orderedPlugins := []string{}
  10. for _, plugin := range a.RecommendedPluginOrder {
  11. if !disabledPlugins.Has(plugin) {
  12. orderedPlugins = append(orderedPlugins, plugin)
  13. }
  14. }
  15. return orderedPlugins
  16. }

根据配置文件读取配置 admission-control-config-file

  1. pluginsConfigProvider, err := admission.ReadAdmissionConfiguration(pluginNames, a.ConfigFile, configScheme)
  2. if err != nil {
  3. return fmt.Errorf("failed to read plugin config: %v", err)
  4. }

初始化genericInitializer

  1. clientset, err := kubernetes.NewForConfig(kubeAPIServerClientConfig)
  2. if err != nil {
  3. return err
  4. }
  5. genericInitializer := initializer.New(clientset, informers, c.Authorization.Authorizer, features)
  6. initializersChain := admission.PluginInitializers{}
  7. pluginInitializers = append(pluginInitializers, genericInitializer)
  8. initializersChain = append(initializersChain, pluginInitializers...)

NewFromPlugins初始化所有启用的准入插件,并把它们组装成准入链

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\admission\plugins.go

遍历插件,调用InitPlugin初始化对应的实例

  1. for _, pluginName := range pluginNames {
  2. pluginConfig, err := configProvider.ConfigFor(pluginName)
  3. if err != nil {
  4. return nil, err
  5. }
  6. plugin, err := ps.InitPlugin(pluginName, pluginConfig, pluginInitializer)
  7. if err != nil {
  8. return nil, err
  9. }
  10. if plugin != nil {
  11. if decorator != nil {
  12. handlers = append(handlers, decorator.Decorate(plugin, pluginName))
  13. } else {
  14. handlers = append(handlers, plugin)
  15. }
  16. if _, ok := plugin.(MutationInterface); ok {
  17. mutationPlugins = append(mutationPlugins, pluginName)
  18. }
  19. if _, ok := plugin.(ValidationInterface); ok {
  20. validationPlugins = append(validationPlugins, pluginName)
  21. }
  22. }
  23. }

InitPlugin

- 调用getPlugin从Plugins获取plugin实例

  1. // InitPlugin creates an instance of the named interface.
  2. func (ps *Plugins) InitPlugin(name string, config io.Reader, pluginInitializer PluginInitializer) (Interface, error) {
  3. if name == "" {
  4. klog.Info("No admission plugin specified.")
  5. return nil, nil
  6. }
  7. plugin, found, err := ps.getPlugin(name, config)
  8. if err != nil {
  9. return nil, fmt.Errorf("couldn't init admission plugin %q: %v", name, err)
  10. }
  11. if !found {
  12. return nil, fmt.Errorf("unknown admission plugin: %s", name)
  13. }
  14. pluginInitializer.Initialize(plugin)
  15. // ensure that plugins have been properly initialized
  16. if err := ValidateInitialization(plugin); err != nil {
  17. return nil, fmt.Errorf("failed to initialize admission plugin %q: %v", name, err)
  18. }
  19. return plugin, nil
  20. }

getPlugin

  1. // getPlugin creates an instance of the named plugin. It returns `false` if the
  2. // the name is not known. The error is returned only when the named provider was
  3. // known but failed to initialize. The config parameter specifies the io.Reader
  4. // handler of the configuration file for the cloud provider, or nil for no configuration.
  5. func (ps *Plugins) getPlugin(name string, config io.Reader) (Interface, bool, error) {
  6. ps.lock.Lock()
  7. defer ps.lock.Unlock()
  8. f, found := ps.registry[name]
  9. if !found {
  10. return nil, false, nil
  11. }
  12. config1, config2, err := splitStream(config)
  13. if err != nil {
  14. return nil, true, err
  15. }
  16. if !PluginEnabledFn(name, config1) {
  17. return nil, true, nil
  18. }
  19. ret, err := f(config2)
  20. return ret, true, err
  21. }

其中最关键的就是去ps.registry map中获取插件,对应的value就是工厂函数

  1. // Factory is a function that returns an Interface for admission decisions.
  2. // The config parameter provides an io.Reader handler to the factory in
  3. // order to load specific configurations. If no configuration is provided
  4. // the parameter is nil.
  5. type Factory func(config io.Reader) (Interface, error)
  6. type Plugins struct {
  7. lock sync.Mutex
  8. registry map[string]Factory
  9. }
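The registration side is just a guarded write into this map. Below is a hedged sketch of registering and listing a custom plugin through the same Plugins type; the plugin name and behavior are invented purely for illustration, this is not one of the built-in admission plugins:

```go
package main

import (
	"context"
	"fmt"
	"io"

	"k8s.io/apiserver/pkg/admission"
)

const pluginName = "HelloAdmit" // hypothetical plugin name

// helloPlugin is a toy validating plugin that only demonstrates the
// registration mechanics; it does not inspect real objects.
type helloPlugin struct {
	*admission.Handler // provides Handles(), satisfying admission.Interface
}

func (h *helloPlugin) Validate(ctx context.Context, a admission.Attributes, o admission.ObjectInterfaces) error {
	fmt.Printf("validating %s on %s\n", a.GetOperation(), a.GetResource().Resource)
	return nil
}

func main() {
	plugins := admission.NewPlugins()
	// the factory closure is what ends up in ps.registry[pluginName]
	plugins.Register(pluginName, func(config io.Reader) (admission.Interface, error) {
		return &helloPlugin{Handler: admission.NewHandler(admission.Create, admission.Update)}, nil
	})
	fmt.Println("registered plugins:", plugins.Registered())
}
```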

追踪可以发现这些工厂函数是在 RegisterAllAdmissionPlugins被注册的

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\kubeapiserver\options\plugins.go

  1. // RegisterAllAdmissionPlugins registers all admission plugins.
  2. // The order of registration is irrelevant, see AllOrderedPlugins for execution order.
  3. func RegisterAllAdmissionPlugins(plugins *admission.Plugins) {
  4. admit.Register(plugins) // DEPRECATED as no real meaning
  5. alwayspullimages.Register(plugins)
  6. antiaffinity.Register(plugins)
  7. defaulttolerationseconds.Register(plugins)
  8. defaultingressclass.Register(plugins)
  9. denyserviceexternalips.Register(plugins)
  10. deny.Register(plugins) // DEPRECATED as no real meaning
  11. eventratelimit.Register(plugins)
  12. extendedresourcetoleration.Register(plugins)
  13. gc.Register(plugins)
  14. imagepolicy.Register(plugins)
  15. limitranger.Register(plugins)
  16. autoprovision.Register(plugins)
  17. exists.Register(plugins)
  18. noderestriction.Register(plugins)
  19. nodetaint.Register(plugins)
  20. label.Register(plugins) // DEPRECATED, future PVs should not rely on labels for zone topology
  21. podnodeselector.Register(plugins)
  22. podtolerationrestriction.Register(plugins)
  23. runtimeclass.Register(plugins)
  24. resourcequota.Register(plugins)
  25. podsecurity.Register(plugins)
  26. podsecuritypolicy.Register(plugins)
  27. podpriority.Register(plugins)
  28. scdeny.Register(plugins)
  29. serviceaccount.Register(plugins)
  30. setdefault.Register(plugins)
  31. resize.Register(plugins)
  32. storageobjectinuseprotection.Register(plugins)
  33. certapproval.Register(plugins)
  34. certsigning.Register(plugins)
  35. certsubjectrestriction.Register(plugins)
  36. }
ps -ef |grep apiserver |grep admission-control

以alwayspullimages.Register(plugins)为例
- 那么对应的工厂函数为

D:\Workspace\Go\src\k8s.io\kubernetes\plugin\pkg\admission\alwayspullimages\admission.go

  1. // Register registers a plugin
  2. func Register(plugins *admission.Plugins) {
  3. plugins.Register(PluginName, func(config io.Reader) (admission.Interface, error) {
  4. return NewAlwaysPullImages(), nil
  5. })
  6. }

Which in turn initializes an AlwaysPullImages handler

  1. // NewAlwaysPullImages creates a new always pull images admission control handler
  2. func NewAlwaysPullImages() *AlwaysPullImages {
  3. return &AlwaysPullImages{
  4. Handler: admission.NewHandler(admission.Create, admission.Update),
  5. }
  6. }

The plugin therefore implements the mutating admission method Admit, which modifies the object

  1. // Admit makes an admission decision based on the request attributes
  2. func (a *AlwaysPullImages) Admit(ctx context.Context, attributes admission.Attributes, o admission.ObjectInterfaces) (err error) {
  3. // Ignore all calls to subresources or resources other than pods.
  4. if shouldIgnore(attributes) {
  5. return nil
  6. }
  7. pod, ok := attributes.GetObject().(*api.Pod)
  8. if !ok {
  9. return apierrors.NewBadRequest("Resource was marked with kind Pod but was unable to be converted")
  10. }
  11. pods.VisitContainersWithPath(&pod.Spec, field.NewPath("spec"), func(c *api.Container, _ *field.Path) bool {
  12. c.ImagePullPolicy = api.PullAlways
  13. return true
  14. })
  15. return nil
  16. }

The code above changes the pod's ImagePullPolicy to api.PullAlways

对应的文档地址 https://kubernetes.io/zh/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages
。该准入控制器会修改每一个新创建的 Pod 的镜像拉取策略为 Always

。这在多租户集群中是有用的,这样用户就可以放心,他们的私有镜像只能被那些有凭证的人使用

。如果没有这个准入控制器,一旦镜像被拉取到节点上,任何用户的 Pod 都可以通过已了解到的镜像的名称(假设 Pod 被调度到正确的节点上)来使用它,而不需要对镜像进行任何授权检查

。当启用这个准入控制器时,总是在启动容器之前拉取镜像,这意味着需要有效的凭证

。同时对应还有校验的方法Validate

  1. // Validate makes sure that all containers are set to always pull images
  2. func (*AlwaysPullImages) Validate(ctx context.Context, attributes admission.Attributes, o admission.ObjectInterfaces) (err error) {
  3. if shouldIgnore(attributes) {
  4. return nil
  5. }
  6. pod, ok := attributes.GetObject().(*api.Pod)
  7. if !ok {
  8. return apierrors.NewBadRequest("Resource was marked with kind Pod but was unable to be converted")
  9. }
  10. var allErrs []error
  11. pods.VisitContainersWithPath(&pod.Spec, field.NewPath("spec"), func(c *api.Container, p *field.Path) bool {
  12. if c.ImagePullPolicy != api.PullAlways {
  13. allErrs = append(allErrs, admission.NewForbidden(attributes,
  14. field.NotSupported(p.Child("imagePullPolicy"), c.ImagePullPolicy, []string{string(api.PullAlways)}),
  15. ))
  16. }
  17. return true
  18. })
  19. if len(allErrs) > 0 {
  20. return utilerrors.NewAggregate(allErrs)
  21. }
  22. return nil
  23. }

第4章 自定义准入控制器,完成nginx sidecar的注入

4.1定义准入控制器需求分析

编写一个准入控制器,实现自动注入nginx sidecar pod
- 编写准入控制器,并运行
- 最终的效果就是指定命名空间下的应用pod都会被注入一个简单的nginx sidecar
istio 自动注入envoy 说明

 https://istio.io/latest/img/service-mesh.svg
- 现在非常火热的的 Service Mesh 应用istio 就是通过k8s apiserver的 mutating webhooks 来自动将Envoy这个 sidecar 容器注入到 Pod 中去的,相关文档https://istio.io/docs/setup/kubernetes/sidecar-injection/。

- To use all of Istio's features, pods in the mesh must run an Istio sidecar proxy.

- 当在 pod 的命名空间中启用时,自动注入会在 pod 创建时使用准入控制器注入代理配置,最后你的pod旁边有envoy 运行了
流程说明
- 检查集群中是否启用了admission webhook 控制器,并根据需要进行配置。
- 编写mutating webhook代码
  。启动tls-http server
  。实现/mutate方法
      。当用户调用create/update 方法创建/更新 pod时
      。apiserver调用这个mutating webhook,修改其中的方法,添加nginx sidecar容器
      。返回给apiserver,达到注入的目的

- 创建证书完成ca签名
- 创建MutatingWebhookConfiguration

- 部署服务验证注入结果

4.2 检查k8s集群准入配置和其他准备工作

什么是准入控制插件
- k8s集群检查操作
- 新建项目 kube-mutating-webhook-inject-pod,准备工作

本节重点总结:

- k8s集群检查操作
- 新建项目 kube-mutating-webhook-inject-pod,准备工作
k8s集群检查操作
检查k8s集群否启用了准入注册 API:
- Run `kubectl api-versions | grep admission`.

- 如果有下面的结果说明已启用

  1. kubectl api-versions |grep admission
  2. admissionregistration.k8s.io/v1

检查 apiserver 中启用了MutatingAdmissionWebhook和ValidatingAdmissionWebhook两个准入控制插件
- Versions 1.20 and later enable both by default; they are among the defaults covered by the --enable-admission-plugins flag

  1. /usr/local/bin/kube-apiserver -h | grep enable-admission-plugins
  2. --enable-admission-plugins strings   admission plugins that should be enabled in addition to default enabled ones. Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run.

编写 webhook
-  新建项目kube-mutating-webhook-inject-pod

go mod init kube-mutating-webhook-inject-pod

注入sidecar容器的配置文件设计
- 因为要注入一个容器,需要定义容器的相关配置所以复用k8s pod中container段的yaml

- 同时要挂载注入容器的配置,所以要复用k8s pod 中volumes的yaml
- 新建config.yaml如下

  1. containers:
  2. - name: sidecar-nginx
  3.   image: nginx:1.12.2
  4.   imagePullPolicy: IfNotPresent
  5.   ports:
  6.   - containerPort: 80
  7.   volumeMounts:
  8.   - name: nginx-conf
  9.     mountPath: /etc/nginx
  10. volumes:
  11. - name: nginx-conf
  12.   configMap:
  13.     name: nginx-configmap

对应的go代码
新建 pkg/webhook.go

  1. package main
  2. import (
  3. corev1 "k8s.io/api/core/v1"
  4. )
  5. type Config struct {
  6. Containers []corev1.Container `yaml:"containers"`
  7. Volumes []corev1.Volume `yaml:"volumes"`
  8. }

解析配置文件的函数

  1. func loadConfig(configFile string) (*Config, error) {
  2. data, err := ioutil.ReadFile(configFile)
  3. if err != nil {
  4. return nil, err
  5. }
  6. glog.Infof("New configuration: sha256sum %x", sha256.Sum256(data))
  7. var cfg Config
  8. if err := yaml.Unmarshal(data, &cfg); err != nil {
  9. return nil, err
  10. }
  11. return &cfg, nil
  12. }

编写webhook server的配置

  1. // Webhook Server options
  2. type webHookSvrOptions struct {
  3. port int // 监听https的端口
  4. certFile string // https x509 证书路径
  5. keyFile string // https x509 证书私钥路径
  6. sidecarCfgFile string // 注入sidecar容器的配置文件路径
  7. }

在main中通过命令行传入默认值并解析

  1. package main
  2. import (
  3. "flag"
  4. "github.com/golang/glog"
  5. )
  6. func main() {
  7. var runOption webHookSvrOptions
  8. // get command line parameters
  9. flag.IntVar(&runOption.port, "port", 8443, "Webhook server port.")
  10. flag.StringVar(&runOption.certFile, "tlsCertFile", "/etc/webhook/certs/cert.pem", "File containing the x509 Certificate for HTTPS.")
  11. flag.StringVar(&runOption.keyFile, "tlsKeyFile", "/etc/webhook/certs/key.pem", "File containing the x509 private key to --tlsCertFile.")
  12. //flag.StringVar(&runOption.sidecarCfgFile, "sidecarCfgFile", "/etc/webhook/config/sidecarconfig.yaml", "File containing the mutation configuration.")
  13. flag.StringVar(&runOption.sidecarCfgFile, "sidecarCfgFile", "config.yaml", "File containing the mutation configuration.")
  14. flag.Parse()
  15. sidecarConfig, err := loadConfig(runOption.sidecarCfgFile)
  16. if err != nil {
  17. glog.Errorf("Failed to load configuration: %v", err)
  18. return
  19. }
  20. glog.Infof("[sidecarConfig:%v]", sidecarConfig)
  21. }

加载tls x509证书

  1. pair, err := tls.LoadX509KeyPair(runOption.certFile, runOption.keyFile)
  2. if err != nil {
  3. glog.Errorf("Failed to load key pair: %v", err)
  4. return
  5. }

定义webhookhttp server,并构造

        webhook.go中

  1. type webhookServer struct {
  2. sidecarConfig *Config // 注入sidecar容器的配置
  3. server *http.Server // http serer
  4. }

main中

  1. webhooksvr := &webhookServer{
  2. sidecarConfig: sidecarConfig,
  3. server: &http.Server{
  4. Addr: fmt.Sprintf(":%v", runOption.port),
  5. TLSConfig: &tls.Config{Certificates: []tls.Certificate{pair}},
  6. },
  7. }

webhookServer的mutate handler并关联path
        webhook.go

  1. func (ws *webhookServer) serveMutate(w http.ResponseWriter, r *http.Request) {
  2. }

main.go

  1. mux := http.NewServeMux()
  2. mux.HandleFunc("/mutate", webhooksvr.serveMutate)
  3. webhooksvr.server.Handler = mux
  4. // start webhook server in new goroutine
  5. go func() {
  6. if err := webhooksvr.server.ListenAndServeTLS("", ""); err != nil {
  7. glog.Errorf("Failed to listen and serve webhook server: %v", err)
  8. }
  9. }()

        意思是请求/mutate 由webhooksvr.serveMutate处理

main中监听退出信号

  1. // listen for OS shutdown signal
  2. signalChan := make(chan os.Signal, 1)
  3. signal.Notify(signalChan, syscall.SIGINT, syscall.SIGTERM)
  4. <-signalChan
  5. glog.Infof("Got 0S shutdown signal, shutting down webhook server gracefully...")
  6. webhooksvr.server.Shutdown(context.Background())

This section implements the mutating admission webhook part of the overall flow

代码仓库地址GitHub - yunixiangfeng/k8s-exercise

k8s-exercise/kube-mutating-webhook-inject-pod at main · yunixiangfeng/k8s-exercise · GitHub

4.3 注入sidecar的mutatePod注入函数编写

什么是准入控制插件
serveMutate编写

- 准入控制请求参数校验

- 根据annotation标签判断是否需要注入sidecar

- mutatePod 注入函数编写

- 生成注入容器和volume的patch函数

serveMutate编写
普通校验请求
。 serveMutate方法
。 body是否为空
。 req header的Content-Type是否为application/json

  1. // webhookServer的mutate handler
  2. func (ws *webhookServer) serveMutate(w http.ResponseWriter, r *http.Request) {
  3. var body []byte
  4. if r.Body !=nil {
  5. if data,err := ioutil.ReadAll(r.Body); err == nil {
  6. body = data
  7. }
  8. }
  9. if len(body) == 0 {
  10. glog.Error("empty body")
  11. http.Error(w, "empty body", http.StatusBadRequest)
  12. return
  13. }
  14. // verify the content type is accurate
  15. contentType := r.Header.Get("Content-Type")
  16. if contentType !="application/json" {
  17. glog.Errorf("Content-Type=%s, expect application/json", contentType)
  18. http.Error(w, "invalid Content-Type, expect `application/json`", http.StatusUnsupportedMediaType)
  19. return
  20. }
  21. }

准入控制请求参数校验
- 构造准入控制的审查对象包括请求和响应
- 然后使用UniversalDeserializer解析传入的申请
- 如果出错就设置响应为报错的信息
- 没出错就调用mutatePod生成响应

  1. // 构造准入控制器的响应
  2. var admissionResponse *v1beta1.AdmissionResponse
  3. // 构造准入控制的审查对象 包括请求和响应
  4. // 然后使用UniversalDeserializer解析传入的申请
  5. // 如果出错就设置响应为报错的信息
  6. // 没出错就调用mutatePod生成响应
  7. ar := v1beta1.AdmissionReview{}
  8. if _, _, err := deserializer.Decode(body, nil, &ar); err != nil {
  9. glog.Errorf("Can't decode body: %v", err)
  10. admissionResponse = &v1beta1.AdmissionResponse{
  11. Result: &metav1.Status{
  12. Message: err.Error(),
  13. },
  14. }
  15. } else {
  16. admissionResponse = ws.mutatePod(&ar)
  17. }

解析器使用UniversalDeserializer

D:\Workspace\Go\pkg\mod\k8s.io\apimachinery@v0.24.2\pkg\runtime\serializer\codec_factory.go

  1. import (
  2. "crypto/sha256"
  3. "io/ioutil"
  4. "net/http"
  5. "github.com/golang/glog"
  6. "gopkg.in/yaml.v2"
  7. "k8s.io/api/admission/v1beta1"
  8. corev1 "k8s.io/api/core/v1"
  9. metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  10. "k8s.io/apimachinery/pkg/runtime"
  11. "k8s.io/apimachinery/pkg/runtime/serializer"
  12. )
  13. var (
  14. runtimeScheme = runtime.NewScheme()
  15. codecs = serializer.NewCodecFactory(runtimeScheme)
  16. deserializer = codecs.UniversalDeserializer()
  17. // (https://github .com/kubernetes/kubernetes/issues/57982)
  18. defaulter = runtime.ObjectDefaulter(runtimeScheme)
  19. )

写入响应
- 构造最终响应对象admissionReview

- 给response赋值
- json解析后用 w.write写入

  1. //构造最终响应对象 admissionReview
  2. // 给response贼值
  3. //json解析后用 w.write写入
  4. admissionReview := v1beta1.AdmissionReview{}
  5. if admissionResponse != nil {
  6. admissionReview.Response = admissionResponse
  7. if ar.Request != nil {
  8. admissionReview.Response.UID = ar.Request.UID
  9. }
  10. }
  11. resp, err := json.Marshal(admissionReview)
  12. if err != nil {
  13. glog.Errorf("Can't encode response: %v", err)
  14. http.Error(w, fmt.Sprintf("could not encode response: %v", err),
  15. http.StatusInternalServerError)
  16. }
  17. glog.Infof("Ready to write reponse ...")
  18. if _, err := w.Write(resp); err != nil {
  19. glog.Errorf("Can't write response: %v", err)
  20. http.Error(w, fmt.Sprintf("could not write response: %v", err), http.StatusInternalServerError)
  21. }

mutatePod注入函数编写
- 将请求中的对象解析为pod,如果出错就返回

  1. func (ws *webhookServer) mutatePod(ar *v1beta1.AdmissionReview) *v1beta1.AdmissionResponse {
  2. // 将请求中的对象解析为pod,如果出错就返回
  3. req := ar.Request
  4. var pod corev1.Pod
  5. if err := json.Unmarshal(req.Object.Raw, &pod); err != nil {
  6. glog.Errorf("Could not unmarshal raw object: %v", err)
  7. return &v1beta1.AdmissionResponse{
  8. Result: &metav1.Status{
  9. Message: err.Error(),
  10. },
  11. }
  12. }
  13. }

是否需要注入判断

  1. // 是否需要注入判断
  2. if !mutationRequired(ignoredNamespaces, &pod.ObjectMeta) {
  3. glog.Infof("Skipping mutation for %s/%s due to policy check", pod.Namespace, pod.Name)
  4. return &v1beta1.AdmissionResponser{
  5. Allowed: true,
  6. }
  7. }

mutationRequired判断函数, 判断这个pod资源要不要注入

1.如果pod在高权限的ns中,不注入
2.如果pod annotations中标记为已注入就不再注入了

3.如果pod annotations中配置不愿意注入就不注入

  1. // 判断这个pod资源要不要注入
  2. // 1.如果pod在高权限的ns中,不注入
  3. // 2.如果pod annotations中标记为已注入就不再注入了
  4. // 3.如果pod annotations中配置不愿意注入就不注入
  5. func mutationRequired(ignoredList []string, metadata *metav1.ObjectMeta) bool {
  6. // skip special kubernete system namespaces
  7. for _, namespace := range ignoredList {
  8. if metadata.Namespace == namespace {
  9. glog.Infof("skip mutation for %v for it's in special namespace:%v", metadata.Name, metadata.Namespace)
  10. return false
  11. }
  12. }
  13. annotations := metadata.GetAnnotations()
  14. if annotations == nil {
  15. annotations = map[string]string{}
  16. }
  17. // 如果 annotation中 标记为已注入就不再注入了
  18. status := annotations[admissionWebhookAnnotationStatusKey]
  19. if strings.ToLower(status) == "injected" {
  20. return false
  21. }
  22. // if the pod does not explicitly ask for injection, skip it
  23. switch strings.ToLower(annotations[admissionWebhookAnnotationInjectKey]) {
  24. default:
  25. return false
  26. case "true":
  27. return true
  28. }
  29. }

相关的常量定义

  1. const (
  2. // whether this pod asks for injection; "true" means inject
  3. admissionWebhookAnnotationInjectKey = "sidecar-injector-webhook.xiaoyi/need_inject"
  4. // 代表判断pod已经注入过的标志 = injected代表已经注入了,就不再注入
  5. admissionWebhookAnnotationStatusKey = "sidecar-injector-webhook.xiaoyi/status"
  6. )
  7. // 为了安全,不给这两个ns中的pod注入 sidecar
  8. var ignoredNamespaces = []string{
  9. metav1.NamespaceSystem,
  10. metav1.NamespacePublic,
  11. }
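A quick sanity-check sketch of the decision logic. It is meant to sit in the same package as webhook.go (so it can see mutationRequired, the constants and metav1); the namespaces and expected values are illustrative, and it assumes the corrected `case "true"` branch above:

```go
// exampleMutationRequired is a table-style walkthrough of mutationRequired;
// it is an illustration only, not part of the webhook itself.
func exampleMutationRequired() {
	cases := []struct {
		ns          string
		annotations map[string]string
		want        bool
	}{
		{"kube-system", nil, false}, // protected namespace, never injected
		{"default", map[string]string{admissionWebhookAnnotationStatusKey: "injected"}, false}, // already injected
		{"default", map[string]string{admissionWebhookAnnotationInjectKey: "true"}, true},      // explicitly requested
		{"default", nil, false}, // nothing requested
	}
	for _, c := range cases {
		meta := &metav1.ObjectMeta{Name: "demo", Namespace: c.ns, Annotations: c.annotations}
		fmt.Printf("ns=%s got=%v want=%v\n", c.ns, mutationRequired(ignoredNamespaces, meta), c.want)
	}
}
```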

添加默认的配置
https://github.com/kubernetes/kubernetes/pull/58025

  1. defaulter = runtime.ObjectDefaulter(runtimeScheme)
  2. func applyDefaultsWorkaround(containers []corev1.Container, volumes []corev1.Volume) {
  3. defaulter.Default(&corev1.Pod{
  4. Spec: corev1.PodSpec{
  5. Containers: containers,
  6. Volumes:volumes,
  7. },
  8. })
  9. }

Define the patchOperation struct

  1. type patchOperation struct {
  2. Op string `json:"op"` // 动作
  3. Path string `json:"path"` // 操作的path
  4. Value interface{} `json:"value,omitempty"` //值
  5. }
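These structs serialize to standard JSON Patch (RFC 6902) operations, which is what the AdmissionResponse later declares via PatchTypeJSONPatch. A short sketch of what such a patch looks like on the wire; the values are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type patchOperation struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

func main() {
	patch := []patchOperation{
		// append a container to an existing containers list
		{Op: "add", Path: "/spec/containers/-", Value: map[string]string{"name": "sidecar-nginx", "image": "nginx:1.12.2"}},
		// create the annotations map with the injected marker
		{Op: "add", Path: "/metadata/annotations", Value: map[string]string{"sidecar-injector-webhook.xiaoyi/status": "injected"}},
	}
	out, _ := json.Marshal(patch)
	fmt.Println(string(out))
	// [{"op":"add","path":"/spec/containers/-","value":{...}},{"op":"add","path":"/metadata/annotations","value":{...}}]
}
```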

生成容器端的patch函数

  1. // build the patch for adding containers
  2. // if the target list is empty, the first element is added as a whole slice; otherwise the path gets a trailing /- to append
  3. func addContainer(target, added []corev1.Container, basePath string) (patch []patchOperation) {
  4. first := len(target) == 0
  5. var value interface{}
  6. for _, add := range added {
  7. value = add
  8. path := basePath
  9. if first {
  10. first = false
  11. value =[]corev1.Container{add}
  12. } else {
  13. path = path +"/-"
  14. }
  15. patch = append(patch, patchOperation{
  16. Op: "add",
  17. Path: path,
  18. Value: value,
  19. })
  20. }
  21. return patch
  22. }

生成添加volume的patch函数

  1. func addVolume(target, added []corev1.Volume, basePath string) (patch []patchOperation) {
  2. first := len(target) == 0
  3. var value interface{}
  4. for _, add := range added {
  5. value = add
  6. path := basePath
  7. if first {
  8. first = false
  9. value =[]corev1.Volume{add}
  10. } else {
  11. path = path +"/-"
  12. }
  13. patch = append(patch, patchOperation{
  14. Op: "add",
  15. Path: path,
  16. Value: value,
  17. })
  18. }
  19. return patch
  20. }

更新annotation的patch

  1. func updateAnnotation(target map[string]string, added map[string]string) (patch []patchOperation) {
  2. for key, value := range added {
  3. if target == nil || target[key] == "" {
  4. target = map[string]string{}
  5. patch = append(patch, patchOperation{
  6. Op: "add",
  7. Path: "/metadata/annotations",
  8. Value: map[string]string{
  9. key: value,
  10. },
  11. })
  12. } else {
  13. patch = append(patch, patchOperation{
  14. Op: "replace",
  15. Path: "/metadata/annotations/" + key,
  16. Value: value,
  17. })
  18. }
  19. }
  20. return patch
  21. }
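One caveat worth hedging: in the `replace` branch the annotation key is appended to the path verbatim, but JSON Patch paths are JSON Pointers (RFC 6901), so any "/" or "~" inside the key must be escaped as "~1" and "~0". Keys such as sidecar-injector-webhook.xiaoyi/status contain a "/", so a stricter version could escape them with a small helper like the sketch below (the helper name is mine, not from the original code):

```go
// escapeJSONPointer escapes a map key for use inside a JSON Patch path
// (RFC 6901): "~" becomes "~0" and "/" becomes "~1".
func escapeJSONPointer(s string) string {
	s = strings.ReplaceAll(s, "~", "~0")
	return strings.ReplaceAll(s, "/", "~1")
}

// In updateAnnotation's replace branch one could then write (sketch):
//   Path: "/metadata/annotations/" + escapeJSONPointer(key),
```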

最终的patch调用

  1. func createPatch(pod *corev1.Pod, sidecarConfig *Config, annotations map[string]string) ([]byte,
  2. error) {
  3. var patch []patchOperation
  4. patch = append(patch, addContainer(pod.Spec.Containers, sidecarConfig.Containers, "/spec/containers")...)
  5. patch = append(patch,addVolume(pod.Spec.Volumes, sidecarConfig.Volumes, "/spec/volumes")...)
  6. patch = append(patch, updateAnnotation(pod.Annotations, annotations)...)
  7. return json.Marshal(patch)
  8. }

调用patch 生成patch option
- mutatePod方法中

  1. annotations := map[string]string{admissionWebhookAnnotationStatusKey: "injected"}
  2. patchBytes, err := createPatch(&pod, ws.sidecarConfig, annotations)
  3. if err != nil {
  4. return &v1beta1.AdmissionResponse{
  5. Result: &metav1.Status{
  6. Message: err.Error(),
  7. },
  8. }
  9. }
  10. glog.Infof("AdmissionResponse: patch=%v\n", string(patchBytes))
  11. return &v1beta1.AdmissionResponse{
  12. Allowed: true,
  13. Patch: patchBytes,
  14. PatchType: func() *v1beta1.PatchType {
  15. pt := v1beta1.PatchTypeJSONPatch
  16. return &pt
  17. }(),
  18. }
  19. return nil
  20. }
These lines run inside mutatePod, just before createPatch is called:

  1. // Workaround: https://github.com/kubernetes/kubernetes/issues/57982
  2. glog.Infof("[before applyDefaultsWorkaround][ws.sidecarConfig.Containers:%+v][ws.sidecarConfig.Volumes:%+v]", ws.sidecarConfig.Containers[0], ws.sidecarConfig.Volumes[0])
  3. applyDefaultsWorkaround(ws.sidecarConfig.Containers, ws.sidecarConfig.Volumes)
  4. glog.Infof("[after applyDefaultsWorkaround][ws.sidecarConfig.Containers:%+v][ws.sidecarConfig.Volumes:%+v]", ws.sidecarConfig.Containers[0], ws.sidecarConfig.Volumes[0])
  5. // 这里构造一个本次已注入sidecar的annotations
  6. annotations := map[string]string{admissionWebhookAnnotationStatusKey: "injected"}

本节重点总结:

Writing serveMutate
。Validate the admission request parameters
。Use annotations to decide whether the sidecar should be injected
。Write the mutatePod injection function
。Generate the patch functions for the injected container and volume

4.4 搭镜像部署并运行注入sidecar验证

本节重点总结:

创建ca证书,通过csr让apiserver签名
获取审批后的证书,用它创建MutatingWebhookConfiguration

编译打镜像
makefile

  1. IMAGE_NAME ?= sidecar-injector
  2. PWD := $(shell pwd)
  3. BASE_DIR := $(shell basename $(PWD))
  4. export GOPATH ?= $(GOPATH_DEFAULT)
  5. IMAGE_TAG ?= $(shell date +v%Y%m%d)-$(shell git describe --match=$(git rev-parse --short=8 HEAD) --tags --always --dirty)
  6. build:
  7. @echo "Building the $(IMAGE_NAME) binary..."
  8. @CGO_ENABLED=0 go build -o $(IMAGE_NAME) ./pkg/
  9. build-linux:
  10. @echo "Building the $(IMAGE_NAME) binary for Docker (linux)..."
  11. @GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o $(IMAGE_NAME) ./pkg/
  12. #################################
  13. # image section
  14. #################################
  15. image: build-image
  16. build-image: build-linux
  17. @echo "Building the docker image: $(IMAGE_NAME)..."
  18. @docker build -t $(IMAGE_NAME) -f Dockerfile .
  19. .PHONY: all build image

dockerfile

  1. FROM alpine:latest
  2. # set environment variables
  3. ENV SIDECAR_INJECTOR=/usr/local/bin/sidecar-injector \
  4. USER_UID=1001 \
  5. USER_NAME=sidecar-injector
  6. COPY sidecar-injector /usr/local/bin/sidecar-injector
  7. # set entrypoint
  8. ENTRYPOINT ["/usr/local/bin/sidecar-injector"]
  9. # switch to non-root user
  10. USER ${USER_UID}

打包代码kube-mutating-webhook-inject-pod.zip

拷贝到k8s集群节点,

打镜像
运行make build-image


将镜像导出并传输到其他节点导入

  1. docker save sidecar-injector > a.tar
  2. scp a.tar k8s-worker02:~
  3. ctr --namespace k8s.io images import a.tar

部署
创建ns nginx-injection,最终部署到这个ns中的容器会被注入nginx sidecar

kubectl create ns nginx-injection

创建ns sidecar-injector,我们的这个mutate webhook服务运行的ns

kubectl create ns sidecar-injector

创建ca证书,并让apiserver签名
01生成证书签名请求配置文件csr.conf

  1. cat <<EOF > csr.conf
  2. [req]
  3. req_extensions = v3_req
  4. distinguished_name = req_distinguished_name
  5. [req_distinguished_name]
  6. [ v3_req ]
  7. basicConstraints = CA:FALSE
  8. keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  9. extendedKeyUsage = serverAuth
  10. subjectAltName = @alt_names
  11. [alt_names]
  12. DNS.1 = sidecar-injector-webhook-svc
  13. DNS.2 = sidecar-injector-webhook-svc.sidecar-injector
  14. DNS.3 = sidecar-injector-webhook-svc.sidecar-injector.svc
  15. EOF

02 Generate an RSA private key with openssl genrsa

openssl genrsa -out server-key.pem 2048

03 Generate the certificate signing request file

openssl req -new -key server-key.pem -subj "/CN=sidecar-injector-webhook-svc.sidecar-injector.svc" -out server.csr -config csr.conf

删除之前的csr请求

kubectl delete csr sidecar-injector-webhook-svc.sidecar-injector

Create the CertificateSigningRequest (note: the certificates.k8s.io/v1beta1 CSR API used below was removed in Kubernetes 1.22; on 1.24 you need certificates.k8s.io/v1 with an explicit signerName, or sign the serving certificate with your own CA)

  1. cat <<EOF | kubectl create -f -
  2. apiVersion: certificates.k8s.io/v1beta1
  3. kind: CertificateSigningRequest
  4. metadata:
  5. name: sidecar-injector-webhook-svc.sidecar-injector
  6. spec:
  7. groups:
  8. - system:authenticated
  9. request: $(< server.csr base64 | tr -d '\n')
  10. usages:
  11. - digital signature
  12. - key encipherment
  13. - server auth
  14. EOF

检查csr

 kubectl get csr

审批csr

  1. kubectl certificate approve sidecar-injector-webhook-svc.sidecar-injector
  2. certificatesigningrequest.certificates.k8s.io/sidecar-injector-webhook-svc.sidecar-injector approved

获取签名后的证书

  1. serverCert=$(kubectl get csr sidecar-injector-webhook-svc.sidecar-injector -o jsonpath='{ .status.certificate}')
  2. echo "${serverCert}" | openssl base64 -d -A -out server-cert.pem

使用证书创建secret

  1. kubectl create secret generic sidecar-injector-webhook-certs \
  2. --from-file=key.pem=server-key.pem \
  3. --from-file=cert.pem=server-cert.pem \
  4. --dry-run=client -o yaml |
  5. kubectl -n sidecar-injector apply -f -

检查证书

  1. kubectl get secret -n sidecar-injector
  2. NAME TYPE DATA AGE
  3. default-token-hvgnl kubernetes.io/service-account-token 3 25m
  4. sidecar-injector-webhook-certs Opaque 2 25m

获取CA_BUNDLE并替换 mutatingwebhook中的CA_BUNDLE占位

  1. CA_BUNDLE=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
  2. if [ -z "${CA_BUNDLE}" ]; then
  3. CA_BUNDLE=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.ca\.crt}")
  4. fi

替换

cat deploy/mutating_webhook.yaml | sed -e "s|\${CA_BUNDLE}|${CA_BUNDLE}|g" >  deploy/mutatingwebhook-ca-bundle.yaml

检查结果

cat deploy/mutatingwebhook-ca-bundle.yaml

上述两个步骤可以直接运行脚本
脚本如下

  1. chmod +x ./deploy/*.sh
  2. ./deploy/webhook-create-signed-cert.sh \
  3. --service sidecar-injector-webhook-svc \
  4. --secret sidecar-injector-webhook-certs \
  5. --namespace sidecar-injector
  6. cat deploy/mutating_webhook.yaml | \
  7. deploy/webhook-patch-ca-bundle.sh > \
  8. deploy/mutatingwebhook-ca-bundle.yaml

Here we reuse the certificate-signing-request script from the Istio project: it sends the request to the apiserver, fetches the signed certificate, and uses the result to create the required secret object.
部署yaml

01先部署sidecar-injector
部署

  1. kubectl create -f deploy/inject_configmap.yaml
  2. kubectl create -f deploy/inject_deployment.yaml
  3. kubectl create -f deploy/inject_service.yaml

检查

  1. kubectl get pod -n sidecar-injector
  2. kubectl get svc -n sidecar-injector

02 部署 mutatingwebhook

kubectl create -f deploy/mutatingwebhook-ca-bundle.yaml

检查

kubectl get MutatingWebhookConfiguration -A

03 部署nginx-sidecar 运行所需的configmap

kubectl create -f deploy/nginx_configmap.yaml

04 Create a namespace and label it nginx-sidecar-injection=enabled

  1. kubectl create ns nginx-injection
  2. kubectl label namespace nginx-injection nginx-sidecar-injection=enabled

The label nginx-sidecar-injection=enabled matches the namespace selector in the MutatingWebhookConfiguration

  1. namespaceSelector:
  2. matchLabels:
  3. nginx-sidecar-injection: enabled

检查标签结果,最终部署到这里的pod都判断是否要注入sidecar

kubectl get ns -L nginx-sidecar-injection

 05 向nginx-injection中部署一个pod
The annotation sidecar-injector-webhook.nginx.sidecar/need_inject: "true" indicates that injection is required

  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4. namespace: nginx-injection
  5. name: test-alpine-inject01
  6. labels:
  7. role: myrole
  8. annotations:
  9. sidecar-injector-webhook.nginx.sidecar/need_inject: "true"
  10. spec:
  11. containers:
  12. - image: alpine
  13. command:
  14. - /bin/sh
  15. - "-c"
  16. - "sleep 60m"
  17. imagePullPolicy: IfNotPresent
  18. name: alpine
  19. restartPolicy: Always

部署

kubectl create -f test_sleep_deployment.yaml

Check the result: the test-alpine-inject01 pod now has the nginx sidecar injected; curl the pod IP on port 80 and you get a response from the nginx sidecar

  1. kubectl get pod -n nginx-injection -o wide
  2. curl pod_ip

 06 观察sidecar-injector的日志

apiserver过来访问 sidecar-injector,然后经过判断后给该pod 注入了sidecar

07 部署一个不需要注入sidecar的pod 

The annotation sidecar-injector-webhook.nginx.sidecar/need_inject: "false" explicitly states that no injection is wanted

  1. apiVersion: v1
  2. kind: Pod
  3. metadata:
  4. namespace: nginx-injection
  5. name: test-alpine-inject02
  6. labels:
  7. role: myrole
  8. annotations:
  9. sidecar-injector-webhook.nginx.sidecar/need_inject: "false"
  10. spec:
  11. containers:
  12. - image: alpine
  13. command:
  14. - /bin/sh
  15. - "-c"
  16. - "sleep 60m"
  17. imagePullPolicy: IfNotPresent
  18. name: alpine
  19. restartPolicy: Always

Observe the deployment result: test-alpine-inject02 runs only a single container

 观察sidecar-injector的日志,可以看到[skip mutation][reason=pod not need]

第5章 API核心服务的处理流程

5.1 API核心server的启动流程

本节重点总结 :

通用的GenericApiServerNew函数

apiserver核心服务的初始化

最终的apiserver启动流程

通用的GenericApiServerNew函数

之前我们分析了使用buildGenericConfig构建api核心服务的配置
然后回到CreateServerChain函数中,位置

D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-apiserver\app\server.go

发现会调用三个server的create函数,传入对应的配置初始化

。 apiExtensionsServer: the API extension server, mainly for CRDs
。 kubeAPIServer: the core API server, serving common resources such as Pod/Deployment/Service

。 aggregatorServer: the API aggregation server, mainly for metrics

代码如下

  1. apiExtensionsServer, err := createAPIExtensionsServer(apiExtensionsConfig, genericapiserver.NewEmptyDelegateWithCustomHandler(notFoundHandler))
  2. if err != nil {
  3. return nil, err
  4. }
  5. kubeAPIServer, err := CreateKubeAPIServer(kubeAPIServerConfig, apiExtensionsServer.GenericAPIServer)
  6. if err != nil {
  7. return nil, err
  8. }
  9. // aggregator comes last in the chain
  10. aggregatorConfig, err := createAggregatorConfig(*kubeAPIServerConfig.GenericConfig, completedOptions.ServerRunOptions, kubeAPIServerConfig.ExtraConfig.VersionedInformers, serviceResolver, kubeAPIServerConfig.ExtraConfig.ProxyTransport, pluginInitializer)
  11. if err != nil {
  12. return nil, err
  13. }
  14. aggregatorServer, err := createAggregatorServer(aggregatorConfig, kubeAPIServer.GenericAPIServer, apiExtensionsServer.Informers)
  15. if err != nil {
  16. // we don't need special handling for innerStopCh because the aggregator server doesn't create any go routines
  17. return nil, err
  18. }

对应三个server的create函数都会调用 completedConfig的New

。 because all three servers embed a GenericAPIServer
。 e.g. kubeAPIServer:
s, err := c.GenericConfig.New("kube-apiserver", delegationTarget)
。 and apiExtensionsServer:
genericServer, err := c.GenericConfig.New("apiextensions-apiserver", delegationTarget)
。 and createAggregatorServer:
genericServer, err := c.GenericConfig.New("kube-aggregator", delegationTarget)

completedConfig的New生成通用的GenericAPIServer

位置

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\config.go

New 创建一个新的服务器,它在逻辑上将处理链与传入的服务器组合在一起。
name用于区分日志记录。
初始化handler

  1. handlerChainBuilder := func(handler http.Handler) http.Handler {
  2. return c.BuildHandlerChainFunc(handler, c.Config)
  3. }
  4. apiServerHandler := NewAPIServerHandler(name, c.Serializer, handlerChainBuilder, delegationTarget.UnprotectedHandler())
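BuildHandlerChainFunc is simply a function that wraps the innermost REST handler with filters (authn, authz, audit, and so on). A toy sketch of the same wrapping pattern with plain net/http; the logging filter here is invented for illustration and is not one of the real generic filters:

```go
package main

import (
	"fmt"
	"net/http"
)

// withRequestLogging is a stand-in for the generic filters that the real
// handler chain applies around the API handler.
func withRequestLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("verb=%s path=%s\n", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	apiHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// handlerChainBuilder in config.go plays the same role: it takes the
	// innermost handler and returns it wrapped in the filter chain.
	chain := withRequestLogging(apiHandler)
	http.ListenAndServe(":8080", chain) // illustrative only
}
```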

用各种参数实例化一个GenericAPIServer

  1. s := &GenericAPIServer{
  2. discoveryAddresses: c.DiscoveryAddresses,
  3. LoopbackClientConfig: c.LoopbackClientConfig,
  4. legacyAPIGroupPrefixes: c.LegacyAPIGroupPrefixes,
  5. admissionControl: c.AdmissionControl,
  6. Serializer: c.Serializer,
  7. AuditBackend: c.AuditBackend,
  8. Authorizer: c.Authorization.Authorizer,
  9. delegationTarget: delegationTarget,
  10. EquivalentResourceRegistry: c.EquivalentResourceRegistry,
  11. HandlerChainWaitGroup: c.HandlerChainWaitGroup,
  12. Handler: apiServerHandler,
  13. listedPathProvider: apiServerHandler,
  14. minRequestTimeout: time.Duration(c.MinRequestTimeout) * time.Second,
  15. ShutdownTimeout: c.RequestTimeout,
  16. ShutdownDelayDuration: c.ShutdownDelayDuration,
  17. SecureServingInfo: c.SecureServing,
  18. ExternalAddress: c.ExternalAddress,
  19. openAPIConfig: c.OpenAPIConfig,
  20. openAPIV3Config: c.OpenAPIV3Config,
  21. skipOpenAPIInstallation: c.SkipOpenAPIInstallation,
  22. postStartHooks: map[string]postStartHookEntry{},
  23. preShutdownHooks: map[string]preShutdownHookEntry{},
  24. disabledPostStartHooks: c.DisabledPostStartHooks,
  25. healthzChecks: c.HealthzChecks,
  26. livezChecks: c.LivezChecks,
  27. readyzChecks: c.ReadyzChecks,
  28. livezGracePeriod: c.LivezGracePeriod,
  29. DiscoveryGroupManager: discovery.NewRootAPIsHandler(c.DiscoveryAddresses, c.Serializer),
  30. maxRequestBodyBytes: c.MaxRequestBodyBytes,
  31. livezClock: clock.RealClock{},
  32. lifecycleSignals: c.lifecycleSignals,
  33. ShutdownSendRetryAfter: c.ShutdownSendRetryAfter,
  34. APIServerID: c.APIServerID,
  35. StorageVersionManager: c.StorageVersionManager,
  36. Version: c.Version,
  37. muxAndDiscoveryCompleteSignals: map[string]<-chan struct{}{},
  38. }

添加钩子函数

先从传入的server中获取

  1. // first add poststarthooks from delegated targets
  2. for k, v := range delegationTarget.PostStartHooks() {
  3. s.postStartHooks[k] = v
  4. }
  5. for k, v := range delegationTarget.PreShutdownHooks() {
  6. s.preShutdownHooks[k] = v
  7. }

Then take the hooks preconfigured in completedConfig

  1. // add poststarthooks that were preconfigured. Using the add method will give us an error if the same name has already been registered.
  2. for name, preconfiguredPostStartHook := range c.PostStartHooks {
  3. if err := s.AddPostStartHook(name, preconfiguredPostStartHook.hook); err != nil {
  4. return nil, err
  5. }
  6. }
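For reference, registering your own post-start hook on a GenericAPIServer follows the same pattern; a hedged sketch with an invented hook name (not a hook that exists in kube-apiserver):

```go
import (
	genericapiserver "k8s.io/apiserver/pkg/server"
	"k8s.io/klog/v2"
)

// registerDemoHook adds a hypothetical post-start hook to an already
// constructed GenericAPIServer; the hook runs once the server is up.
func registerDemoHook(s *genericapiserver.GenericAPIServer) error {
	return s.AddPostStartHook("demo-warm-cache", func(ctx genericapiserver.PostStartHookContext) error {
		klog.Info("demo hook: server started, warming caches")
		go func() {
			<-ctx.StopCh // closed when the server begins shutting down
			klog.Info("demo hook: server is shutting down")
		}()
		return nil
	})
}
```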

比如在之前生成GenericConfig配置的admissionPostStartHook准入控制hook,位置

D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-apiserver\app\server.go

  1. if err := config.GenericConfig.AddPostStartHook("start-kube-apiserver-admission-initializer", admissionPostStartHook); err != nil {
  2. return nil, nil, nil, err
  3. }

The corresponding hook function is built in the admission config's New, at

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\kubeapiserver\admission\config.go

  1. admissionPostStartHook := func(context genericapiserver.PostStartHookContext) error {
  2. discoveryRESTMapper.Reset()
  3. go utilwait.Until(discoveryRESTMapper.Reset, 30*time.Second, context.StopCh)
  4. return nil
  5. }

注册generic-apiserver-start-informers的hook

  1. genericApiServerHookName := "generic-apiserver-start-informers"
  2. if c.SharedInformerFactory != nil {
  3. if !s.isPostStartHookRegistered(genericApiServerHookName) {
  4. err := s.AddPostStartHook(genericApiServerHookName, func(context PostStartHookContext) error {
  5. c.SharedInformerFactory.Start(context.StopCh)
  6. return nil
  7. })
  8. if err != nil {
  9. return nil, err
  10. }
  11. }
  12. // TODO: Once we get rid of /healthz consider changing this to post-start-hook.
  13. err := s.AddReadyzChecks(healthz.NewInformerSyncHealthz(c.SharedInformerFactory))
  14. if err != nil {
  15. return nil, err
  16. }
  17. }

注册apiserver中的限流策略 hook
具体的内容在限流那章节中讲解

  1. const priorityAndFairnessConfigConsumerHookName = "priority-and-fairness-config-consumer"
  2. if s.isPostStartHookRegistered(priorityAndFairnessConfigConsumerHookName) {
  3. } else if c.FlowControl != nil {
  4. err := s.AddPostStartHook(priorityAndFairnessConfigConsumerHookName, func(context PostStartHookContext) error {
  5. go c.FlowControl.MaintainObservations(context.StopCh)
  6. go c.FlowControl.Run(context.StopCh)
  7. return nil
  8. })
  9. if err != nil {
  10. return nil, err
  11. }
  12. // TODO(yue9944882): plumb pre-shutdown-hook for request-management system?
  13. } else {
  14. klog.V(3).Infof("Not requested to run hook %s", priorityAndFairnessConfigConsumerHookName)
  15. }
  16. // Add PostStartHooks for maintaining the watermarks for the Priority-and-Fairness and the Max-in-Flight filters.
  17. if c.FlowControl != nil {
  18. const priorityAndFairnessFilterHookName = "priority-and-fairness-filter"
  19. if !s.isPostStartHookRegistered(priorityAndFairnessFilterHookName) {
  20. err := s.AddPostStartHook(priorityAndFairnessFilterHookName, func(context PostStartHookContext) error {
  21. genericfilters.StartPriorityAndFairnessWatermarkMaintenance(context.StopCh)
  22. return nil
  23. })
  24. if err != nil {
  25. return nil, err
  26. }
  27. }
  28. } else {
  29. const maxInFlightFilterHookName = "max-in-flight-filter"
  30. if !s.isPostStartHookRegistered(maxInFlightFilterHookName) {
  31. err := s.AddPostStartHook(maxInFlightFilterHookName, func(context PostStartHookContext) error {
  32. genericfilters.StartMaxInFlightWatermarkMaintenance(context.StopCh)
  33. return nil
  34. })
  35. if err != nil {
  36. return nil, err
  37. }
  38. }
  39. }

添加健康检查

  1. for _, delegateCheck := range delegationTarget.HealthzChecks() {
  2. skip := false
  3. for _, existingCheck := range c.HealthzChecks {
  4. if existingCheck.Name() == delegateCheck.Name() {
  5. skip = true
  6. break
  7. }
  8. }
  9. if skip {
  10. continue
  11. }
  12. s.AddHealthChecks(delegateCheck)
  13. }

通过设置liveness 容忍度为0,要求立即发现传入的server不可用

位置

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\healthz.go

  1. // AddHealthChecks adds HealthCheck(s) to health endpoints (healthz, livez, readyz) but
  2. // configures the liveness grace period to be zero, which means we expect this health check
  3. // to immediately indicate that the apiserver is unhealthy.
  4. func (s *GenericAPIServer) AddHealthChecks(checks ...healthz.HealthChecker) error {
  5. // we opt for a delay of zero here, because this entrypoint adds generic health checks
  6. // and not health checks which are specifically related to kube-apiserver boot-sequences.
  7. return s.addHealthChecks(0, checks...)
  8. }
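healthz.HealthChecker is a small interface (Name() and Check(*http.Request) error). A hedged sketch of a custom check that could be passed to AddHealthChecks; the dependency being probed is fictional:

```go
import (
	"fmt"
	"net/http"

	"k8s.io/apiserver/pkg/server/healthz"
)

// cacheSyncedCheck is a toy HealthChecker: it reports healthy once a
// hypothetical "synced" function returns true.
type cacheSyncedCheck struct {
	synced func() bool
}

func (c *cacheSyncedCheck) Name() string { return "demo-cache-synced" }

func (c *cacheSyncedCheck) Check(_ *http.Request) error {
	if !c.synced() {
		return fmt.Errorf("demo cache not synced yet")
	}
	return nil
}

// compile-time assertion that the interface is satisfied
var _ healthz.HealthChecker = &cacheSyncedCheck{}
```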

初始化api路由的installAPI
。添加/和/index.html的路由规则

  1. if c.EnableIndex {
  2. routes.Index{}.Install(s.listedPathProvider, s.Handler.NonGoRestfulMux)
  3. }

添加/debug/pprof 分析的路由规则,用于性能分析

  1. if c.EnableProfiling {
  2. routes.Profiling{}.Install(s.Handler.NonGoRestfulMux)
  3. if c.EnableContentionProfiling {
  4. goruntime.SetBlockProfileRate(1)
  5. }
  6. // so far, only logging related endpoints are considered valid to add for these debug flags.
  7. routes.DebugFlags{}.Install(s.Handler.NonGoRestfulMux, "v", routes.StringFlagPutHandler(logs.GlogSetter))
  8. }

添加/metrics 指标监控的路由规则 

  1. if c.EnableMetrics {
  2. if c.EnableProfiling {
  3. routes.MetricsWithReset{}.Install(s.Handler.NonGoRestfulMux)
  4. } else {
  5. routes.DefaultMetrics{}.Install(s.Handler.NonGoRestfulMux)
  6. }
  7. }

添加/version 版本信息的路由规则
routes.Version{Version: c.Version}.Install(s.Handler.GoRestfulContainer)
Enable service discovery
if c.EnableDiscovery {
s.Handler.GoRestfulContainer.Add(s.DiscoveryGroupManager.WebService())
}

apiserver 核心服务的初始化

位置D:\Workspace\Go\src\k8s.io\kubernetes\pkg\controlplane\instance.go

  1. // New returns a new instance of Master from the given config.
  2. // Certain config fields will be set to a default value if unset.
  3. // Certain config fields must be specified, including:
  4. // KubeletClientConfig
  5. func (c completedConfig) New(delegationTarget genericapiserver.DelegationTarget) (*Instance, error) { ... }

上面提到的初始化通用的server

	s, err := c.GenericConfig.New("kube-apiserver", delegationTarget)

并且用通用配置实例化master 实例

  1. m := &Instance{
  2. GenericAPIServer: s,
  3. ClusterAuthenticationInfo: c.ExtraConfig.ClusterAuthenticationInfo,
  4. }

注册核心资源的api 

  1. // install legacy rest storage
  2. if err := m.InstallLegacyAPI(&c, c.GenericConfig.RESTOptionsGetter); err != nil {
  3. return nil, err
  4. }

注册api

  1. if err := m.InstallAPIs(c.ExtraConfig.APIResourceConfigSource, c.GenericConfig.RESTOptionsGetter, restStorageProviders...); err != nil {
  2. return nil, err
  3. }

最终的apiserver启动流程
回到Run函数通过CreateServerChain拿到创建的3个server,执行run即可

位置D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-apiserver\app\server.go

  1. // Run runs the specified APIServer. This should never exit.
  2. func Run(completeOptions completedServerRunOptions, stopCh <-chan struct{}) error {
  3. // To help debugging, immediately log version
  4. klog.Infof("Version: %+v", version.Get())
  5. klog.InfoS("Golang settings", "GOGC", os.Getenv("GOGC"), "GOMAXPROCS", os.Getenv("GOMAXPROCS"), "GOTRACEBACK", os.Getenv("GOTRACEBACK"))
  6. server, err := CreateServerChain(completeOptions, stopCh)
  7. if err != nil {
  8. return err
  9. }
  10. prepared, err := server.PrepareRun()
  11. if err != nil {
  12. return err
  13. }
  14. return prepared.Run(stopCh)
  15. }

preparedGenericAPIServer中的Run 

位置D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\genericapiserver.go 

	stoppedCh, listenerStoppedCh, err := s.NonBlockingRun(stopHttpServerCh, shutdownTimeout)

调用preparedGenericAPIServer的NonBlockingRun

  1. if s.SecureServingInfo != nil && s.Handler != nil {
  2. var err error
  3. stoppedCh, listenerStoppedCh, err = s.SecureServingInfo.Serve(s.Handler, shutdownTimeout, internalStopCh)
  4. if err != nil {
  5. close(internalStopCh)
  6. close(auditStopCh)
  7. return nil, nil, err
  8. }
  9. }

最终调用Serve运行secure http server,
位置D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\secure_serving.go

  1. // Serve runs the secure http server. It fails only if certificates cannot be loaded or the initial listen call fails.
  2. // The actual server loop (stoppable by closing stopCh) runs in a go routine, i.e. Serve does not block.
  3. // It returns a stoppedCh that is closed when all non-hijacked active requests have been processed.
  4. // It returns a listenerStoppedCh that is closed when the underlying http Server has stopped listening.
  5. func (s *SecureServingInfo) Serve(handler http.Handler, shutdownTimeout time.Duration, stopCh <-chan struct{}) (<-chan struct{}, <-chan struct{}, error) {
  6. if s.Listener == nil {
  7. return nil, nil, fmt.Errorf("listener must not be nil")
  8. }
  9. tlsConfig, err := s.tlsConfig(stopCh)
  10. if err != nil {
  11. return nil, nil, err
  12. }
  13. secureServer := &http.Server{
  14. Addr: s.Listener.Addr().String(),
  15. Handler: handler,
  16. MaxHeaderBytes: 1 << 20,
  17. TLSConfig: tlsConfig,
  18. IdleTimeout: 90 * time.Second, // matches http.DefaultTransport keep-alive timeout
  19. ReadHeaderTimeout: 32 * time.Second, // just shy of requestTimeoutUpperBound
  20. }
  21. // At least 99% of serialized resources in surveyed clusters were smaller than 256kb.
  22. // This should be big enough to accommodate most API POST requests in a single frame,
  23. // and small enough to allow a per connection buffer of this size multiplied by `MaxConcurrentStreams`.
  24. const resourceBody99Percentile = 256 * 1024
  25. http2Options := &http2.Server{
  26. IdleTimeout: 90 * time.Second, // matches http.DefaultTransport keep-alive timeout
  27. }
  28. // shrink the per-stream buffer and max framesize from the 1MB default while still accommodating most API POST requests in a single frame
  29. http2Options.MaxUploadBufferPerStream = resourceBody99Percentile
  30. http2Options.MaxReadFrameSize = resourceBody99Percentile
  31. // use the overridden concurrent streams setting or make the default of 250 explicit so we can size MaxUploadBufferPerConnection appropriately
  32. if s.HTTP2MaxStreamsPerConnection > 0 {
  33. http2Options.MaxConcurrentStreams = uint32(s.HTTP2MaxStreamsPerConnection)
  34. } else {
  35. http2Options.MaxConcurrentStreams = 250
  36. }
  37. // increase the connection buffer size from the 1MB default to handle the specified number of concurrent streams
  38. http2Options.MaxUploadBufferPerConnection = http2Options.MaxUploadBufferPerStream * int32(http2Options.MaxConcurrentStreams)
  39. if !s.DisableHTTP2 {
  40. // apply settings to the server
  41. if err := http2.ConfigureServer(secureServer, http2Options); err != nil {
  42. return nil, nil, fmt.Errorf("error configuring http2: %v", err)
  43. }
  44. }
  45. // use tlsHandshakeErrorWriter to handle messages of tls handshake error
  46. tlsErrorWriter := &tlsHandshakeErrorWriter{os.Stderr}
  47. tlsErrorLogger := log.New(tlsErrorWriter, "", 0)
  48. secureServer.ErrorLog = tlsErrorLogger
  49. klog.Infof("Serving securely on %s", secureServer.Addr)
  50. return RunServer(secureServer, s.Listener, shutdownTimeout, stopCh)
  51. }

5.2 scheme和RESTStorage的初始化

本节重点总结

Scheme 定义了资源序列化和反序列化的方法以及资源类型和版本的对应关系;

这里我们可以理解成一张纪录表

所有的k8s资源必须要注册到scheme表中才可以使用

RESTStorage defines how a resource is created, read, updated and deleted (CRUD) and how it talks to the underlying storage
- 各个资源创建的restStore 塞入restStorageMap中

- map的key是 资源/子资源的名称, value是对应的restStore

InstallLegacyAPl

。上节课讲到apiserver 核心服务初始化的时候会创建restStorage

并用restStorage初始化核心服务 

入口地址D:\Workspace\Go\src\k8s.io\kubernetes\pkg\controlplane\instance.go

  1. // InstallLegacyAPI will install the legacy APIs for the restStorageProviders if they are enabled.
  2. func (m *Instance) InstallLegacyAPI(c *completedConfig, restOptionsGetter generic.RESTOptionsGetter) error {
  3. legacyRESTStorageProvider := corerest.LegacyRESTStorageProvider{
  4. StorageFactory: c.ExtraConfig.StorageFactory,
  5. ProxyTransport: c.ExtraConfig.ProxyTransport,
  6. KubeletClientConfig: c.ExtraConfig.KubeletClientConfig,
  7. EventTTL: c.ExtraConfig.EventTTL,
  8. ServiceIPRange: c.ExtraConfig.ServiceIPRange,
  9. SecondaryServiceIPRange: c.ExtraConfig.SecondaryServiceIPRange,
  10. ServiceNodePortRange: c.ExtraConfig.ServiceNodePortRange,
  11. LoopbackClientConfig: c.GenericConfig.LoopbackClientConfig,
  12. ServiceAccountIssuer: c.ExtraConfig.ServiceAccountIssuer,
  13. ExtendExpiration: c.ExtraConfig.ExtendExpiration,
  14. ServiceAccountMaxExpiration: c.ExtraConfig.ServiceAccountMaxExpiration,
  15. APIAudiences: c.GenericConfig.Authentication.APIAudiences,
  16. }
  17. legacyRESTStorage, apiGroupInfo, err := legacyRESTStorageProvider.NewLegacyRESTStorage(c.ExtraConfig.APIResourceConfigSource, restOptionsGetter)
  18. if err != nil {
  19. return fmt.Errorf("error building core storage: %v", err)
  20. }
  21. if len(apiGroupInfo.VersionedResourcesStorageMap) == 0 { // if all core storage is disabled, return.
  22. return nil
  23. }
  24. controllerName := "bootstrap-controller"
  25. coreClient := corev1client.NewForConfigOrDie(c.GenericConfig.LoopbackClientConfig)
  26. bootstrapController, err := c.NewBootstrapController(legacyRESTStorage, coreClient, coreClient, coreClient, coreClient.RESTClient())
  27. if err != nil {
  28. return fmt.Errorf("error creating bootstrap controller: %v", err)
  29. }
  30. m.GenericAPIServer.AddPostStartHookOrDie(controllerName, bootstrapController.PostStartHook)
  31. m.GenericAPIServer.AddPreShutdownHookOrDie(controllerName, bootstrapController.PreShutdownHook)
  32. if err := m.GenericAPIServer.InstallLegacyAPIGroup(genericapiserver.DefaultLegacyAPIPrefix, &apiGroupInfo); err != nil {
  33. return fmt.Errorf("error in registering group versions: %v", err)
  34. }
  35. return nil
  36. }

NewLegacyRESTStorage分析 

位置 D:\Workspace\Go\src\k8s.io\kubernetes\pkg\registry\core\rest\storage_core.go 

  1. func (c LegacyRESTStorageProvider) NewLegacyRESTStorage(apiResourceConfigSource serverstorage.APIResourceConfigSource, restOptionsGetter generic.RESTOptionsGetter) (LegacyRESTStorage, genericapiserver.APIGroupInfo, error) {
  2. apiGroupInfo := genericapiserver.APIGroupInfo{
  3. PrioritizedVersions: legacyscheme.Scheme.PrioritizedVersionsForGroup(""),
  4. VersionedResourcesStorageMap: map[string]map[string]rest.Storage{},
  5. Scheme: legacyscheme.Scheme,
  6. ParameterCodec: legacyscheme.ParameterCodec,
  7. NegotiatedSerializer: legacyscheme.Codecs,
  8. }

·legacyscheme.Scheme是k8s的重要结构体Scheme 的默认实例

Scheme和k8s的资源 

位置D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apimachinery\pkg\runtime\scheme.go

Scheme 定义了资源序列化和反序列化的方法以及资源类型和版本的对应关系,这里我们可以理解成一张记录表

k8s的资源

运维人员在创建资源的时候,可能只关注kind (如deployment,本能的忽略分组和版本信息)

但是k8s的资源定位中只说deployment是不准确的
因为k8s系统支持多个Group,每个Group支持多个Version,每个Version支持多个Resource

其中部分资源同时会拥有自己的子资源(即SubResource)。例如,Deployment资源拥有Status子资源
The full form of group / version / resource / subresource is <group>/<version>/<resource>/<subresource>
Taking the common Deployment resource as an example, its full form is apps/v1/deployments/status

。 apps is the resource group
。 v1 is the version
。 deployments is the resource
。 status is the subresource

为了方便资源管理和有序迭代,资源有Group (组)和Version (版本)的概念

 

Group:被称为资源组,在Kubernetes API Server中也可称其为APIGroup。
Version:被称为资源版本,在Kubernetes API Server中也可称其为APIVersions。
Resource:被称为资源,在Kubernetes API Server中也可称其为APIResource。.
Kind:资源种类,描述Resource的种类,与Resource为同一级别。 

什么是Scheme

k8s系统拥有众多资源,每一种资源就是一个资源类型
这些资源类型需要有统一的注册、存储、查询、管理等机制
目前k8s系统中的所有资源类型都已注册到Scheme资源注册表中,其是一个内存型的资源注册表,拥有如下特点:
。支持注册多种资源类型,包括内部版本和外部版本。
。支持多种版本转换机制。
。支持不同资源的序列化/反序列化机制。

Scheme资源注册表支持两种资源类型 (Type)的注册

分别是UnversionedType和KnownType资源类型,分别介绍如下
UnversionedType: 无版本资源类型
这是一个早期Kubernetes系统中的概念,它主要应用于某些没有版本的资源类型
该类型的资源对象并不需要进行转换
在目前的Kubernetes发行版本中,无版本类型已被弱化,几乎所有的资源对象都拥有版本

但在metav1元数据中还有部分类型,它们既属于meta.k8s.io/v1又属于UnversionedType无版本资源类型,例如:
o metav1.Status
o metav1.APIVersions
o metav1.APIGroupList
o metav1.APIGroup
o metav1.APIResourceList

KnownType: the resource type most commonly used in Kubernetes today
Also called a "versioned resource type". In the Scheme registry, UnversionedType objects are registered with the scheme.AddUnversionedTypes method

KnownType objects are registered with the scheme.AddKnownTypes method

Scheme结构体定义

代码位置

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apimachinery\pkg\runtime\scheme.go

  1. s := &Scheme{
  2. gvkToType: map[schema.GroupVersionKind]reflect.Type{},
  3. typeToGVK: map[reflect.Type][]schema.GroupVersionKind{},
  4. unversionedTypes: map[reflect.Type]schema.GroupVersionKind{},
  5. unversionedKinds: map[string]reflect.Type{},
  6. fieldLabelConversionFuncs: map[schema.GroupVersionKind]FieldLabelConversionFunc{},
  7. defaulterFuncs: map[reflect.Type]func(interface{}){},
  8. versionPriority: map[string][]string{},
  9. schemeName: naming.GetNameFromCallsite(internalPackages...),
  10. }

 具体定义如下

。gvkToType:存储GVK与Type的映射关系
· typeToGVK:存储Type与GVK的映射关系,一个Type会对应一个或多个GVK。
· unVersionedTypes: 存储UnversionedType与GVK的映射关系。
。unversionedKinds:存储Kind (资源种类)名称与UnversionedType的映射关系
Scheme资源注册表通过Go语言的map结构实现映射关系
这些映射关系可以实现高效的正向和反向检索,从Scheme资源注册表中检索某个GVK的Type,它的时间复杂度O(1)

如何使用Scheme

获取scheme对象

var Scheme = runtime.NewScheme()

定义注册方法AddToScheme
通过runtime.NewScheme实例化一个新的Scheme资源注册表。注册资源类型到Scheme资源注册表有两种方式:

。 KnownType objects are registered with the scheme.AddKnownTypes method.

。 UnversionedType objects are registered with the scheme.AddUnversionedTypes method

实例代码

  1. func init() {
  2. metav1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"})
  3. AddToScheme(Scheme)
  4. }

获取解码对象 

  1. var Codecs = serializer.NewCodecFactory(Scheme)
  2. var ParameterCodec = runtime.NewParameterCodec(Scheme)
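Putting the pieces together, here is a hedged sketch of registering the built-in core/v1 types with a fresh Scheme and round-tripping an object through the codec factory; the GroupVersion is the real core/v1, everything else (object names, printed output) is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
)

func main() {
	scheme := runtime.NewScheme()
	// register core/v1 types (ConfigMap among them) into the registry
	if err := corev1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	codecs := serializer.NewCodecFactory(scheme)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Data:       map[string]string{"k": "v"},
	}
	// look up the GVKs the scheme knows for this Go type (typeToGVK)
	gvks, _, _ := scheme.ObjectKinds(cm)
	fmt.Println(gvks) // e.g. [/v1, Kind=ConfigMap]

	// encode with the codec for core/v1, then decode it back
	encoder := codecs.LegacyCodec(corev1.SchemeGroupVersion)
	data, _ := runtime.Encode(encoder, cm)
	obj, _ := runtime.Decode(codecs.UniversalDeserializer(), data)
	fmt.Printf("%T\n", obj) // *v1.ConfigMap
}
```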

实际举例
比如我们之前写的 webhook-mutation的准入控制器注入sidecar。

runtimeScheme代表初始化这个注册表。

codecs和deserializer 是解码编码相关的对象

最后可以调用deserializer.Decode解码参数为 v1beta1.AdmissionReview资源

接着回到NewLegacyRESTStorage分析

创建api group info对象

这里就是用了我们上面提到的scheme 

  1. apiGroupInfo := genericapiserver.APIGroupInfo{
  2. PrioritizedVersions: legacyscheme.Scheme.PrioritizedVersionsForGroup(""),
  3. VersionedResourcesStorageMap: map[string]map[string]rest.Storage{},
  4. Scheme: legacyscheme.Scheme,
  5. ParameterCodec: legacyscheme.ParameterCodec,
  6. NegotiatedSerializer: legacyscheme.Codecs,
  7. }

创建LegacyRESTStorage

    restStorage := LegacyRESTStorage{}

使用各种资源的NewREST创建RESTStorage,以configmap为例

  1. configMapStorage, err := configmapstore.NewREST(restOptionsGetter)
  2. if err != nil {
  3. return LegacyRESTStorage{}, genericapiserver.APIGroupInfo{}, err
  4. }

RESTStorage defines how a resource is created, read, updated and deleted (CRUD) and how it talks to the underlying storage

configmap's NewREST

位置D:\Workspace\Go\src\k8s.io\kubernetes\pkg\registry\core\configmap\storage\storage.go

  1. // REST implements a RESTStorage for ConfigMap
  2. type REST struct {
  3. *genericregistry.Store
  4. }
  5. // NewREST returns a RESTStorage object that will work with ConfigMap objects.
  6. func NewREST(optsGetter generic.RESTOptionsGetter) (*REST, error) {
  7. store := &genericregistry.Store{
  8. NewFunc: func() runtime.Object { return &api.ConfigMap{} },
  9. NewListFunc: func() runtime.Object { return &api.ConfigMapList{} },
  10. PredicateFunc: configmap.Matcher,
  11. DefaultQualifiedResource: api.Resource("configmaps"),
  12. CreateStrategy: configmap.Strategy,
  13. UpdateStrategy: configmap.Strategy,
  14. DeleteStrategy: configmap.Strategy,
  15. TableConvertor: printerstorage.TableConvertor{TableGenerator: printers.NewTableGenerator().With(printersinternal.AddHandlers)},
  16. }
  17. options := &generic.StoreOptions{
  18. RESTOptions: optsGetter,
  19. AttrFunc: configmap.GetAttrs,
  20. TriggerFunc: map[string]storage.IndexerFunc{"metadata.name": configmap.NameTriggerFunc},
  21. }
  22. if err := store.CompleteWithOptions(options); err != nil {
  23. return nil, err
  24. }
  25. return &REST{store}, nil
  26. }

· NewFunc代表get一个对象时的方法
。NewListFunc代表list对象时的方法
· PredicateFunc 返回与提供的标签对应的匹配器和字段。如果object 匹配给定的字段和标签选择器则返回真
。DefaultQualifiedResource 是资源的复数名称。
· CreateStrategy 代表创建的策略
· UpdateStrategy代表更新的策略
· DeleteStrategy 代表删除的策略
。TableConvertor 代表输出为表格的方法
。 options are the store options, validated with store.CompleteWithOptions(options)

  1. // CompleteWithOptions updates the store with the provided options and
  2. // defaults common fields.
  3. func (e *Store) CompleteWithOptions(options *generic.StoreOptions) error {
  4. if e.DefaultQualifiedResource.Empty() {
  5. return fmt.Errorf("store %#v must have a non-empty qualified resource", e)
  6. }
  7. if e.NewFunc == nil {
  8. return fmt.Errorf("store for %s must have NewFunc set", e.DefaultQualifiedResource.String())
  9. }
  10. if e.NewListFunc == nil {
  11. return fmt.Errorf("store for %s must have NewListFunc set", e.DefaultQualifiedResource.String())
  12. }
  13. if (e.KeyRootFunc == nil) != (e.KeyFunc == nil) {
  14. return fmt.Errorf("store for %s must set both KeyRootFunc and KeyFunc or neither", e.DefaultQualifiedResource.String())
  15. }
  16. if e.TableConvertor == nil {
  17. return fmt.Errorf("store for %s must set TableConvertor; rest.NewDefaultTableConvertor(e.DefaultQualifiedResource) can be used to output just name/creation time", e.DefaultQualifiedResource.String())
  18. }

pod的newStore

位置D:\Workspace\Go\src\k8s.io\kubernetes\pkg\registry\core\pod\storage\storage.go

  1. // NewStorage returns a RESTStorage object that will work against pods.
  2. func NewStorage(optsGetter generic.RESTOptionsGetter, k client.ConnectionInfoGetter, proxyTransport http.RoundTripper, podDisruptionBudgetClient policyclient.PodDisruptionBudgetsGetter) (PodStorage, error) {
  3. store := &genericregistry.Store{
  4. NewFunc: func() runtime.Object { return &api.Pod{} },
  5. NewListFunc: func() runtime.Object { return &api.PodList{} },
  6. PredicateFunc: registrypod.MatchPod,
  7. DefaultQualifiedResource: api.Resource("pods"),
  8. CreateStrategy: registrypod.Strategy,
  9. UpdateStrategy: registrypod.Strategy,
  10. DeleteStrategy: registrypod.Strategy,
  11. ResetFieldsStrategy: registrypod.Strategy,
  12. ReturnDeletedObject: true,
  13. TableConvertor: printerstorage.TableConvertor{TableGenerator: printers.NewTableGenerator().With(printersinternal.AddHandlers)},
  14. }
  15. options := &generic.StoreOptions{
  16. RESTOptions: optsGetter,
  17. AttrFunc: registrypod.GetAttrs,
  18. TriggerFunc: map[string]storage.IndexerFunc{"spec.nodeName": registrypod.NodeNameTriggerFunc},
  19. Indexers: registrypod.Indexers(),
  20. }
  21. if err := store.CompleteWithOptions(options); err != nil {
  22. return PodStorage{}, err
  23. }
  24. statusStore := *store
  25. statusStore.UpdateStrategy = registrypod.StatusStrategy
  26. statusStore.ResetFieldsStrategy = registrypod.StatusStrategy
  27. ephemeralContainersStore := *store
  28. ephemeralContainersStore.UpdateStrategy = registrypod.EphemeralContainersStrategy
  29. bindingREST := &BindingREST{store: store}
  30. return PodStorage{
  31. Pod: &REST{store, proxyTransport},
  32. Binding: &BindingREST{store: store},
  33. LegacyBinding: &LegacyBindingREST{bindingREST},
  34. Eviction: newEvictionStorage(store, podDisruptionBudgetClient),
  35. Status: &StatusREST{store: &statusStore},
  36. EphemeralContainers: &EphemeralContainersREST{store: &ephemeralContainersStore},
  37. Log: &podrest.LogREST{Store: store, KubeletConn: k},
  38. Proxy: &podrest.ProxyREST{Store: store, ProxyTransport: proxyTransport},
  39. Exec: &podrest.ExecREST{Store: store, KubeletConn: k},
  40. Attach: &podrest.AttachREST{Store: store, KubeletConn: k},
  41. PortForward: &podrest.PortForwardREST{Store: store, KubeletConn: k},
  42. }, nil
  43. }

Pod's NewStorage returns a PodStorage; unlike other resources it also bundles restStores for many subresources

  1. // PodStorage includes storage for pods and all sub resources
  2. type PodStorage struct {
  3. Pod *REST
  4. Binding *BindingREST
  5. LegacyBinding *LegacyBindingREST
  6. Eviction *EvictionREST
  7. Status *StatusREST
  8. EphemeralContainers *EphemeralContainersREST
  9. Log *podrest.LogREST
  10. Proxy *podrest.ProxyREST
  11. Exec *podrest.ExecREST
  12. Attach *podrest.AttachREST
  13. PortForward *podrest.PortForwardREST
  14. }

再回到NewLegacyRESTStorage中

。 The restStores created above for each resource are put into a storage map

。 The map key is the resource/subresource name and the value is the corresponding Storage

  1. storage := map[string]rest.Storage{}
  2. if resource := "pods"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  3. storage[resource] = podStorage.Pod
  4. storage[resource+"/attach"] = podStorage.Attach
  5. storage[resource+"/status"] = podStorage.Status
  6. storage[resource+"/log"] = podStorage.Log
  7. storage[resource+"/exec"] = podStorage.Exec
  8. storage[resource+"/portforward"] = podStorage.PortForward
  9. storage[resource+"/proxy"] = podStorage.Proxy
  10. storage[resource+"/binding"] = podStorage.Binding
  11. if podStorage.Eviction != nil {
  12. storage[resource+"/eviction"] = podStorage.Eviction
  13. }
  14. if utilfeature.DefaultFeatureGate.Enabled(features.EphemeralContainers) {
  15. storage[resource+"/ephemeralcontainers"] = podStorage.EphemeralContainers
  16. }
  17. }
  18. if resource := "bindings"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  19. storage[resource] = podStorage.LegacyBinding
  20. }
  21. if resource := "podtemplates"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  22. storage[resource] = podTemplateStorage
  23. }
  24. if resource := "replicationcontrollers"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  25. storage[resource] = controllerStorage.Controller
  26. storage[resource+"/status"] = controllerStorage.Status
  27. if legacyscheme.Scheme.IsVersionRegistered(schema.GroupVersion{Group: "autoscaling", Version: "v1"}) {
  28. storage[resource+"/scale"] = controllerStorage.Scale
  29. }
  30. }
  31. if resource := "services"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  32. storage[resource] = serviceRESTStorage
  33. storage[resource+"/proxy"] = serviceRESTProxy
  34. storage[resource+"/status"] = serviceStatusStorage
  35. }
  36. if resource := "endpoints"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  37. storage[resource] = endpointsStorage
  38. }
  39. if resource := "nodes"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  40. storage[resource] = nodeStorage.Node
  41. storage[resource+"/proxy"] = nodeStorage.Proxy
  42. storage[resource+"/status"] = nodeStorage.Status
  43. }
  44. if resource := "events"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  45. storage[resource] = eventStorage
  46. }
  47. if resource := "limitranges"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  48. storage[resource] = limitRangeStorage
  49. }
  50. if resource := "resourcequotas"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  51. storage[resource] = resourceQuotaStorage
  52. storage[resource+"/status"] = resourceQuotaStatusStorage
  53. }
  54. if resource := "namespaces"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  55. storage[resource] = namespaceStorage
  56. storage[resource+"/status"] = namespaceStatusStorage
  57. storage[resource+"/finalize"] = namespaceFinalizeStorage
  58. }
  59. if resource := "secrets"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  60. storage[resource] = secretStorage
  61. }
  62. if resource := "serviceaccounts"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  63. storage[resource] = serviceAccountStorage
  64. if serviceAccountStorage.Token != nil {
  65. storage[resource+"/token"] = serviceAccountStorage.Token
  66. }
  67. }
  68. if resource := "persistentvolumes"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  69. storage[resource] = persistentVolumeStorage
  70. storage[resource+"/status"] = persistentVolumeStatusStorage
  71. }
  72. if resource := "persistentvolumeclaims"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  73. storage[resource] = persistentVolumeClaimStorage
  74. storage[resource+"/status"] = persistentVolumeClaimStatusStorage
  75. }
  76. if resource := "configmaps"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  77. storage[resource] = configMapStorage
  78. }
  79. if resource := "componentstatuses"; apiResourceConfigSource.ResourceEnabled(corev1.SchemeGroupVersion.WithResource(resource)) {
  80. storage[resource] = componentstatus.NewStorage(componentStatusStorage{c.StorageFactory}.serversToValidate)
  81. }

Finally, the storage map above is put into apiGroupInfo.VersionedResourcesStorageMap. This is a two-level map: the first-level key is the API version, the second-level key is the resource name, and the value is the corresponding resource storage.

  1.     if len(storage) > 0 {
  2.         apiGroupInfo.VersionedResourcesStorageMap["v1"] = storage
  3.     }
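As a reading aid, here is a minimal sketch of how this two-level map is addressed (the variable names are illustrative, not the exact apiserver code):

    // VersionedResourcesStorageMap: map[apiVersion]map[resourceName]rest.Storage
    v1Storage := apiGroupInfo.VersionedResourcesStorageMap["v1"]
    podStore := v1Storage["pods"]        // the PodStorage.Pod REST store registered above
    podLogStore := v1Storage["pods/log"] // the subresource store for pod logs
    _, _ = podStore, podLogStore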

5.3 How the apiserver persists Pod data

How kube-apiserver saves the data when a Pod is created

The REST storage for pods

In the previous section we saw that every resource has a corresponding Storage object that defines how it talks to the backing store. For pods it looks like this:

  1. store := &genericregistry.Store{
  2. NewFunc: func() runtime.Object { return &api.Pod{} },
  3. NewListFunc: func() runtime.Object { return &api.PodList{} },
  4. PredicateFunc: registrypod.MatchPod,
  5. DefaultQualifiedResource: api.Resource("pods"),
  6. CreateStrategy: registrypod.Strategy,
  7. UpdateStrategy: registrypod.Strategy,
  8. DeleteStrategy: registrypod.Strategy,
  9. ResetFieldsStrategy: registrypod.Strategy,
  10. ReturnDeletedObject: true,
  11. TableConvertor: printerstorage.TableConvertor{TableGenerator: printers.NewTableGenerator().With(printersinternal.AddHandlers)},
  12. }

How the apiserver saves data when creating a pod

The pod resource maps to the raw store shown above. The REST store is backed by genericregistry.Store, so let's walk through its Create method.

  1. // REST implements a RESTStorage for pods
  2. type REST struct {
  3. *genericregistry.Store
  4. proxyTransport http.RoundTripper
  5. }

Location: D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\registry\generic\registry\store.go

Walking through the Create method

BeginCreate is called first, if it is set:

  1. if e.BeginCreate != nil {
  2. fn, err := e.BeginCreate(ctx, obj, options)
  3. if err != nil {
  4. return nil, err
  5. }
  6. finishCreate = fn
  7. defer func() {
  8. finishCreate(ctx, false)
  9. }()
  10. }

Then rest.BeforeCreate is invoked:

if err := rest.BeforeCreate(e.CreateStrategy, ctx, obj); err != nil {return nil, err}

Inside BeforeCreate, the strategy's PrepareForCreate is called:

strategy.PrepareForCreate(ctx, obj)

For pods this is implemented in D:\Workspace\Go\src\k8s.io\kubernetes\pkg\registry\core\pod\strategy.go

  1. // PrepareForCreate clears fields that are not allowed to be set by end users on creation.
  2. func (podStrategy) PrepareForCreate(ctx context.Context, obj runtime.Object) {
  3. pod := obj.(*api.Pod)
  4. pod.Status = api.PodStatus{
  5. Phase: api.PodPending,
  6. QOSClass: qos.GetPodQOS(pod),
  7. }
  8. podutil.DropDisabledPodFields(pod, nil)
  9. applySeccompVersionSkew(pod)
  10. }

Reading pod's PrepareForCreate

First, the pod status is set to Pending:

  1. pod.Status = api.PodStatus{
  2. Phase: api.PodPending,

Then fields disabled by feature gates are dropped and the seccomp version skew is applied:

  1. podutil.DropDisabledPodFields(pod, nil)
  2. applySeccompVersionSkew(pod)

The pod's QoS class is computed via GetPodQOS.

Reading GetPodQOS

QoS in Kubernetes: distributing a node's limited resources sensibly

Overview
QoS (Quality of Service) is a control mechanism: it assigns different priorities to different users or data flows, or guarantees a certain level of performance based on an application's requirements. Kubernetes defines three QoS classes:
- Guaranteed: the pod's requests equal its limits;
- Burstable: requests are set, smaller than limits, and not zero;
- BestEffort: neither requests nor limits are set.

Their priority increases in this order: BestEffort -> Burstable -> Guaranteed.

What the QoS classes actually change:
- At scheduling time the scheduler only looks at the request values;
- When the node runs out of memory, processes with different OOM scores are treated differently: BestEffort pods are killed first, then Burstable pods if the node is still under memory pressure, and Guaranteed pods last.
Resource requests and limits
To limit container resources in k8s, a pod's YAML can set CPU and memory requests and limits. Roughly speaking, scheduling is based on requests, while limits cap what the container may use at runtime.

The example below requests 100m CPU with a 1000m limit, and 100Mi memory with a 250Mi limit:

  1. resources:
  2. requests:
  3. cpu: 100m
  4. memory: 100Mi
  5. limits:
  6. cpu: 1000m
  7. memory: 250Mi

See also API Priority and Fairness: https://kubernetes.io/zh/docs/concepts/cluster-administration/flow-control/

Reading the code

Location: D:\Workspace\Go\src\k8s.io\kubernetes\pkg\apis\core\helper\qos\qos.go

First, iterate over all containers in the pod and process resources.requests:

  1. for _, container := range allContainers {
  2. // process requests
  3. for name, quantity := range container.Resources.Requests {
  4. if !isSupportedQoSComputeResource(name) {
  5. continue
  6. }
  7. if quantity.Cmp(zeroQuantity) == 1 {
  8. delta := quantity.DeepCopy()
  9. if _, exists := requests[name]; !exists {
  10. requests[name] = delta
  11. } else {
  12. delta.Add(requests[name])
  13. requests[name] = delta
  14. }
  15. }
  16. }
  17. // process limits
  18. qosLimitsFound := sets.NewString()
  19. for name, quantity := range container.Resources.Limits {
  20. if !isSupportedQoSComputeResource(name) {
  21. continue
  22. }
  23. if quantity.Cmp(zeroQuantity) == 1 {
  24. qosLimitsFound.Insert(string(name))
  25. delta := quantity.DeepCopy()
  26. if _, exists := limits[name]; !exists {
  27. limits[name] = delta
  28. } else {
  29. delta.Add(limits[name])
  30. limits[name] = delta
  31. }
  32. }
  33. }
  34. if !qosLimitsFound.HasAll(string(core.ResourceMemory), string(core.ResourceCPU)) {
  35. isGuaranteed = false
  36. }
  37. }

Then process resources.limits in the same way:

  1. // process limits
  2. qosLimitsFound := sets.NewString()
  3. for name, quantity := range container.Resources.Limits {
  4. if !isSupportedQoSComputeResource(name) {
  5. continue
  6. }
  7. if quantity.Cmp(zeroQuantity) == 1 {
  8. qosLimitsFound.Insert(string(name))
  9. delta := quantity.DeepCopy()
  10. if _, exists := limits[name]; !exists {
  11. limits[name] = delta
  12. } else {
  13. delta.Add(limits[name])
  14. limits[name] = delta
  15. }
  16. }
  17. }

Classification rules
If neither requests nor limits are set, the pod is BestEffort:

  1. if len(requests) == 0 && len(limits) == 0 {
  2. return core.PodQOSBestEffort
  3. }

If limits and requests are both set and equal, the pod is Guaranteed:

  1. if isGuaranteed &&
  2. len(requests) == len(limits) {
  3. return core.PodQOSGuaranteed
  4. }

Otherwise the pod is Burstable.
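To check which class was assigned, you can read status.qosClass on the created pod (nginx-pod below is a placeholder name):

    kubectl get pod nginx-pod -o jsonpath='{.status.qosClass}'
    # prints BestEffort, Burstable or Guaranteed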

Back in the Create method, the call that actually writes to storage follows:

	if err := e.Storage.Create(ctx, key, obj, out, ttl, dryrun.IsDryRun(options.DryRun)); err != nil {

e.Storage here is a DryRunnableStorage, so its Create is what gets called.

Location: D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\registry\generic\registry\dryrun.go

  1. func (s *DryRunnableStorage) Create(ctx context.Context, key string, obj, out runtime.Object, ttl uint64, dryRun bool) error {
  2. if dryRun {
  3. if err := s.Storage.Get(ctx, key, storage.GetOptions{}, out); err == nil {
  4. return storage.NewKeyExistsError(key, 0)
  5. }
  6. return s.copyInto(obj, out)
  7. }
  8. return s.Storage.Create(ctx, key, obj, out, ttl)
  9. }

If dryRun is set, this is a dry run: nothing is written to etcd, and the resulting object is only returned to the caller.
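The same path can be exercised from kubectl with server-side dry run: the apiserver runs authentication, admission and validation, but skips the etcd write (nginx-pod.yaml is a placeholder file name):

    kubectl apply -f nginx-pod.yaml --dry-run=server -o yaml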

Create in the etcd3 store

Location: D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\storage\etcd3\store.go

  1. // Create implements storage.Interface.Create.
  2. func (s *store) Create(ctx context.Context, key string, obj, out runtime.Object, ttl uint64) error {
  3. if version, err := s.versioner.ObjectResourceVersion(obj); err == nil && version != 0 {
  4. return errors.New("resourceVersion should not be set on objects to be created")
  5. }
  6. if err := s.versioner.PrepareObjectForStorage(obj); err != nil {
  7. return fmt.Errorf("PrepareObjectForStorage failed: %v", err)
  8. }
  9. data, err := runtime.Encode(s.codec, obj)
  10. if err != nil {
  11. return err
  12. }
  13. key = path.Join(s.pathPrefix, key)
  14. opts, err := s.ttlOpts(ctx, int64(ttl))
  15. if err != nil {
  16. return err
  17. }
  18. newData, err := s.transformer.TransformToStorage(ctx, data, authenticatedDataString(key))
  19. if err != nil {
  20. return storage.NewInternalError(err.Error())
  21. }
  22. startTime := time.Now()
  23. txnResp, err := s.client.KV.Txn(ctx).If(
  24. notFound(key),
  25. ).Then(
  26. clientv3.OpPut(key, string(newData), opts...),
  27. ).Commit()
  28. metrics.RecordEtcdRequestLatency("create", getTypeName(obj), startTime)
  29. if err != nil {
  30. return err
  31. }
  32. if !txnResp.Succeeded {
  33. return storage.NewKeyExistsError(key, 0)
  34. }
  35. if out != nil {
  36. putResp := txnResp.Responses[0].GetResponsePut()
  37. return decode(s.codec, s.versioner, data, out, putResp.Header.Revision)
  38. }
  39. return nil
  40. }

Wrap-up

If AfterCreate or Decorator hooks are set, they are invoked at the end:

  1. if e.AfterCreate != nil {
  2. e.AfterCreate(out, options)
  3. }
  4. if e.Decorator != nil {
  5. e.Decorator(out)
  6. }

Key takeaways of this section:

- How kube-apiserver persists the data when a Pod is created
- The overall architecture diagram

5.4 Reading the apiserver's rate-limiting strategies

k8s supports several kinds of rate limiting

To keep bursts of traffic from hurting apiserver availability, Kubernetes supports several rate-limiting mechanisms:
- MaxInFlightLimit: server-wide concurrency limiting
- client-side limiting
- EventRateLimit: limits Event requests only
- APF: finer-grained limiting configuration

MaxInFlightLimit

The apiserver can cap its overall concurrency (cluster-wide, with read-only and mutating requests counted separately):
- --max-requests-inflight limits read-only requests
- --max-mutating-requests-inflight limits mutating requests

This is a simple way to implement rate limiting.
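As a rough illustration, these are kube-apiserver command-line flags (the defaults are 400 and 200); in a kubeadm cluster they would go into the kube-apiserver static pod manifest:

    kube-apiserver \
      --max-requests-inflight=800 \
      --max-mutating-requests-inflight=400 \
      ...   # other flags unchanged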

Source walkthrough

The entry point is the hook added in GenericAPIServer.New.
Location: D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\config.go

  1. // Add PostStartHooks for maintaining the watermarks for the Priority-and-Fairness and the Max-in-Flight filters.
  2. if c.FlowControl != nil {
  3. const priorityAndFairnessFilterHookName = "priority-and-fairness-filter"
  4. if !s.isPostStartHookRegistered(priorityAndFairnessFilterHookName) {
  5. err := s.AddPostStartHook(priorityAndFairnessFilterHookName, func(context PostStartHookContext) error {
  6. genericfilters.StartPriorityAndFairnessWatermarkMaintenance(context.StopCh)
  7. return nil
  8. })
  9. if err != nil {
  10. return nil, err
  11. }
  12. }
  13. } else {
  14. const maxInFlightFilterHookName = "max-in-flight-filter"
  15. if !s.isPostStartHookRegistered(maxInFlightFilterHookName) {
  16. err := s.AddPostStartHook(maxInFlightFilterHookName, func(context PostStartHookContext) error {
  17. genericfilters.StartMaxInFlightWatermarkMaintenance(context.StopCh)
  18. return nil
  19. })
  20. if err != nil {
  21. return nil, err
  22. }
  23. }
  24. }

In other words, when FlowControl is nil APF is not enabled, and the apiserver's overall concurrency is bounded by the kube-apiserver flags --max-requests-inflight and --max-mutating-requests-inflight.

The function below starts the goroutines that observe and maintain the watermark metrics:

  1. // startWatermarkMaintenance starts the goroutines to observe and maintain the specified watermark.
  2. func startWatermarkMaintenance(watermark *requestWatermark, stopCh <-chan struct{}) {
  3. // Periodically update the inflight usage metric.
  4. go wait.Until(func() {
  5. watermark.lock.Lock()
  6. readOnlyWatermark := watermark.readOnlyWatermark
  7. mutatingWatermark := watermark.mutatingWatermark
  8. watermark.readOnlyWatermark = 0
  9. watermark.mutatingWatermark = 0
  10. watermark.lock.Unlock()
  11. metrics.UpdateInflightRequestMetrics(watermark.phase, readOnlyWatermark, mutatingWatermark)
  12. }, inflightUsageMetricUpdatePeriod, stopCh)
  13. // Periodically observe the watermarks. This is done to ensure that they do not fall too far behind. When they do
  14. // fall too far behind, then there is a long delay in responding to the next request received while the observer
  15. // catches back up.
  16. go wait.Until(func() {
  17. watermark.readOnlyObserver.Add(0)
  18. watermark.mutatingObserver.Add(0)
  19. }, observationMaintenancePeriod, stopCh)
  20. }

WithMaxInFlightLimit is the rate-limiting handler.

It is wired in at D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apiserver\pkg\server\config.go

  1. if c.FlowControl != nil {
  2. requestWorkEstimator := flowcontrolrequest.NewWorkEstimator(c.StorageObjectCountTracker.Get, c.FlowControl.GetInterestedWatchCount)
  3. handler = filterlatency.TrackCompleted(handler)
  4. handler = genericfilters.WithPriorityAndFairness(handler, c.LongRunningFunc, c.FlowControl, requestWorkEstimator)
  5. handler = filterlatency.TrackStarted(handler, "priorityandfairness")
  6. } else {
  7. handler = genericfilters.WithMaxInFlightLimit(handler, c.MaxRequestsInFlight, c.MaxMutatingRequestsInFlight, c.LongRunningFunc)
  8. }

Reading WithMaxInFlightLimit

If both limits are 0, rate limiting is disabled entirely:

  1. if nonMutatingLimit == 0 && mutatingLimit == 0 {
  2. return handler
  3. }

Then the limiting channels are built: buffered bool channels whose capacity equals the configured limit:

  1. var nonMutatingChan chan bool
  2. var mutatingChan chan bool
  3. if nonMutatingLimit != 0 {
  4. nonMutatingChan = make(chan bool, nonMutatingLimit)
  5. watermark.readOnlyObserver.SetDenominator(float64(nonMutatingLimit))
  6. }
  7. if mutatingLimit != 0 {
  8. mutatingChan = make(chan bool, mutatingLimit)
  9. watermark.mutatingObserver.SetDenominator(float64(mutatingLimit))
  10. }

Check whether this is a long-running request:

  1. // Skip tracking long running events.
  2. if longRunningRequestCheck != nil && longRunningRequestCheck(r, requestInfo) {
  3. handler.ServeHTTP(w, r)
  4. return
  5. }

BasicLongRunningRequestCheck decides whether the request is long-running (watch, pprof debug, etc.); such requests are exempt from the limit. Location: D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\apiserver\pkg\server\filters\longrunning.go

  1. // BasicLongRunningRequestCheck returns true if the given request has one of the specified verbs or one of the specified subresources, or is a profiler request.
  2. func BasicLongRunningRequestCheck(longRunningVerbs, longRunningSubresources sets.String) apirequest.LongRunningRequestCheck {
  3. return func(r *http.Request, requestInfo *apirequest.RequestInfo) bool {
  4. if longRunningVerbs.Has(requestInfo.Verb) {
  5. return true
  6. }
  7. if requestInfo.IsResourceRequest && longRunningSubresources.Has(requestInfo.Subresource) {
  8. return true
  9. }
  10. if !requestInfo.IsResourceRequest && strings.HasPrefix(requestInfo.Path, "/debug/pprof/") {
  11. return true
  12. }
  13. return false
  14. }
  15. }

Check whether the request is read-only or mutating to decide which channel to use:

  1. var c chan bool
  2. isMutatingRequest := !nonMutatingRequestVerbs.Has(requestInfo.Verb)
  3. if isMutatingRequest {
  4. c = mutatingChan
  5. } else {
  6. c = nonMutatingChan
  7. }

If the queue is not full there is a free slot: a select tries to write true into c, and success means the queue had room. The corresponding watermark metrics are recorded:

  1. select {
  2. case c <- true:
  3. // We note the concurrency level both while the
  4. // request is being served and after it is done being
  5. // served, because both states contribute to the
  6. // sampled stats on concurrency.
  7. if isMutatingRequest {
  8. watermark.recordMutating(len(c))
  9. } else {
  10. watermark.recordReadOnly(len(c))
  11. }
  12. defer func() {
  13. <-c
  14. if isMutatingRequest {
  15. watermark.recordMutating(len(c))
  16. } else {
  17. watermark.recordReadOnly(len(c))
  18. }
  19. }()
  20. handler.ServeHTTP(w, r)
  21. default:
  22. // at this point we're about to return a 429, BUT not all actors should be rate limited. A system:master is so powerful
  23. // that they should always get an answer. It's a super-admin or a loopback connection.
  24. if currUser, ok := apirequest.UserFrom(ctx); ok {
  25. for _, group := range currUser.GetGroups() {
  26. if group == user.SystemPrivilegedGroup {
  27. handler.ServeHTTP(w, r)
  28. return
  29. }
  30. }
  31. }
  32. // We need to split this data between buckets used for throttling.
  33. metrics.RecordDroppedRequest(r, requestInfo, metrics.APIServerComponent, isMutatingRequest)
  34. metrics.RecordRequestTermination(r, requestInfo, metrics.APIServerComponent, http.StatusTooManyRequests)
  35. tooManyRequests(r, w)
  36. }

The default branch means the queue is full. However, if the requesting user belongs to the system:masters group, the request is still allowed through, because the apiserver treats this group as too important to be rate limited:

  1. if currUser, ok := apirequest.UserFrom(ctx); ok {
  2. for _, group := range currUser.GetGroups() {
  3. if group == user.SystemPrivilegedGroup {
  4. handler.ServeHTTP(w, r)
  5. return
  6. }
  7. }
  8. }

The group system:masters is bound to the cluster-admin ClusterRole.

If the queue is full and the request's groups do not include system:masters, HTTP 429 is returned. 429 means "too many requests, please retry later", and the Retry-After response header is set to 1:

  1. // We need to split this data between buckets used for throttling.
  2. metrics.RecordDroppedRequest(r, requestInfo, metrics.APIServerComponent, isMutatingRequest)
  3. metrics.RecordRequestTermination(r, requestInfo, metrics.APIServerComponent, http.StatusTooManyRequests)
  4. tooManyRequests(r, w)

Client-side rate limiting

For example, client-go defaults to a QPS of 5. This is purely client-side limiting enforced by each caller, so a cluster administrator cannot control user behavior with it.
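For reference, here is a minimal sketch of raising those client-side limits before building a clientset; QPS and Burst are real fields on rest.Config (defaults 5 and 10), and the numbers below are arbitrary:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFasterClient bumps the client-side rate limits before creating the clientset.
    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5
        cfg.Burst = 100 // default is 10
        return kubernetes.NewForConfig(cfg)
    }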

EventRateLimit

EventRateLimit has been available since 1.13 and limits only Event requests. It is implemented as an admission webhook built into the apiserver and can restrict Event operations per user, per namespace, per server, and so on. Read it together with the docs:

https://kubernetes.io/zh/docs/reference/access-authn-authz/admission-controllers/#eventratelimit

How it works

See the design proposal for details: each eventratelimit configuration gets its own token-bucket rate limiter. On every Event operation, each matching limiter is checked for a token; if one is available the request is allowed, otherwise 429 is returned. A sketch of the admission configuration follows below.
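Here is a sketch of the configuration, following the format documented for the EventRateLimit admission plugin (the plugin is enabled via --enable-admission-plugins=...,EventRateLimit plus --admission-control-config-file; the numbers are placeholders):

    # admission-control-config.yaml
    apiVersion: apiserver.config.k8s.io/v1
    kind: AdmissionConfiguration
    plugins:
      - name: EventRateLimit
        path: eventconfig.yaml
    ---
    # eventconfig.yaml
    apiVersion: eventratelimit.admission.k8s.io/v1alpha1
    kind: Configuration
    limits:
      - type: Namespace
        qps: 50
        burst: 100
        cacheSize: 2000
      - type: User
        qps: 10
        burst: 50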
Pros
- Simple to implement and allows a certain amount of concurrency
- Supports server/namespace/user level limits

Cons
- Only covers Events, and as a webhook it can only intercept mutating requests
- Every namespace gets the same limit; there is no notion of priority

APF: finer-grained limiting

API Priority and Fairness (APF) is an alternative to MaxInFlightLimit; see the design proposal. It classifies and isolates requests at a finer granularity (by user, by namespace), tolerates bursts, and uses fair queuing to dispatch requests from queues so that no flow is starved.

APF is configured through two resources: PriorityLevelConfigurations define the isolation classes and the concurrency budget they may consume, and also tune queuing behavior; FlowSchemas classify each incoming request and match it to a PriorityLevelConfiguration.

This allows limiting specific requests on specific resources for a user, a group, or globally, e.g. limiting PUT/PATCH requests to services in the default namespace. A sketch of such a pair of objects follows below.
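A sketch of that example; the API group is flowcontrol.apiserver.k8s.io (v1beta2 in 1.24), and all names and numbers below are illustrative:

    apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
    kind: PriorityLevelConfiguration
    metadata:
      name: limited-service-writes
    spec:
      type: Limited
      limited:
        assuredConcurrencyShares: 5
        limitResponse:
          type: Queue
          queuing:
            queues: 8
            queueLengthLimit: 50
            handSize: 4
    ---
    apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
    kind: FlowSchema
    metadata:
      name: default-service-writes
    spec:
      priorityLevelConfiguration:
        name: limited-service-writes
      matchingPrecedence: 1000
      distinguisherMethod:
        type: ByUser
      rules:
        - subjects:
            - kind: Group
              group:
                name: system:authenticated
          resourceRules:
            - verbs: ["create", "update", "patch"]
              apiGroups: [""]
              resources: ["services"]
              namespaces: ["default"]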
Pros
- Fairly comprehensive: supports priorities, exemptions, and so on
- Supports server/namespace/user/resource level limits

Cons
- Complex, non-obvious configuration that requires a good understanding of how APF works
- Relatively new, with little production validation

Docs: https://kubernetes.io/zh/docs/concepts/cluster-administration/flow-control/

5.5 Summary of the apiserver's key objects and features

apiserver recap

- The apiserver starts three servers.
- All three are built on the common GenericAPIServer and use go-restful to expose RESTful APIs.
- kube-apiserver runs every request through three layers of checks: Authentication, Authorization, and Admission.
- After the checks pass, the request is routed to the handler of the corresponding resource, which mainly preprocesses and persists the data.
- kube-apiserver's backing store is etcd v3, abstracted as a RESTStorage so that requests map one-to-one onto storage operations.

The three servers inside the apiserver

- apiExtensionsServer: the API extension server, mainly for CRDs
- kubeAPIServer: the core API server, covering the familiar Pod/Deployment/Service resources
- aggregatorServer: the API aggregation server, mainly for metrics

- What Authentication is for
- Authorization: deciding who may do what
- Admission: admission control
- A custom admission controller can inject an nginx sidecar container
- How kube-apiserver saves the data when a Pod is created

What is a Scheme?

The k8s system has a large number of resources, and each kind of resource is a resource type. These types need a unified mechanism for registration, storage, lookup, and management. All resource types in k8s are registered into the Scheme registry, an in-memory resource registry with the following properties:
- It can register many resource types, both internal and external versions.
- It supports conversion between versions.
- It supports serialization/deserialization of the different resources.
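Here is a minimal sketch of using a Scheme via the published apimachinery/client-go libraries; it only illustrates the registry idea and is not the apiserver's own registration code:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/runtime"
    )

    func main() {
        scheme := runtime.NewScheme()
        // register the external core/v1 types (Pod, Service, ...) into the scheme
        if err := corev1.AddToScheme(scheme); err != nil {
            panic(err)
        }
        // the scheme can now answer which GroupVersionKind an object belongs to
        gvks, _, err := scheme.ObjectKinds(&corev1.Pod{})
        fmt.Println(gvks, err) // e.g. [/v1, Kind=Pod] <nil>
    }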

Chapter 6: How kube-scheduler schedules pods

6.1 The kube-scheduler startup flow

Key points of this section:

- Understand how kube-scheduler starts up
- Understand how to use a clientset

kube-scheduler entry point

D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\scheduler.go

  1. command := app.NewSchedulerCommand()
  2. code := cli.Run(command)
  3. os.Exit(code)

The runCommand entry point

D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\server.go

  1. // runCommand runs the scheduler.
  2. func runCommand(cmd *cobra.Command, opts *options.Options, registryOptions ...Option) error {
  3. verflag.PrintAndExitIfRequested()
  4. // Activate logging as soon as possible, after that
  5. // show flags with the final logging configuration.
  6. if err := opts.Logs.ValidateAndApply(utilfeature.DefaultFeatureGate); err != nil {
  7. fmt.Fprintf(os.Stderr, "%v\n", err)
  8. os.Exit(1)
  9. }
  10. cliflag.PrintFlags(cmd.Flags())
  11. ctx, cancel := context.WithCancel(context.Background())
  12. defer cancel()
  13. go func() {
  14. stopCh := server.SetupSignalHandler()
  15. <-stopCh
  16. cancel()
  17. }()
  18. cc, sched, err := Setup(ctx, opts, registryOptions...)
  19. if err != nil {
  20. return err
  21. }
  22. return Run(ctx, cc, sched)
  23. }

Setup returns a completed configuration and the scheduler object

Reading opts.Config, which initializes the configuration

First the options are applied onto the config:

  1. c := &schedulerappconfig.Config{}
  2. if err := o.ApplyTo(c); err != nil {
  3. return nil, err
  4. }

Build the kube config from the file passed via --kubeconfig:

  1. // Prepare kube config.
  2. kubeConfig, err := createKubeConfig(c.ComponentConfig.ClientConnection, o.Master)
  3. if err != nil {
  4. return nil, err
  5. }

Create the kube clients from the kube config; this returns Clientset objects:

  1. // Prepare kube clients.
  2. client, eventClient, err := createClients(kubeConfig)
  3. if err != nil {
  4. return nil, err
  5. }

Reading createClients

A Clientset holds a collection of clients built on rest.Interface, for example:

  1. cs.admissionregistrationV1, err = admissionregistrationv1.NewForConfigAndClient(&configShallowCopy, httpClient)
  2. if err != nil {
  3. return nil, err
  4. }
  5. cs.admissionregistrationV1beta1, err = admissionregistrationv1beta1.NewForConfigAndClient(&configShallowCopy, httpClient)
  6. if err != nil {
  7. return nil, err
  8. }
  9. cs.internalV1alpha1, err = internalv1alpha1.NewForConfigAndClient(&configShallowCopy, httpClient)
  10. if err != nil {
  11. return nil, err
  12. }
  13. cs.appsV1, err = appsv1.NewForConfigAndClient(&configShallowCopy, httpClient)
  14. if err != nil {
  15. return nil, err
  16. }

The Clientset that is finally returned:

  1. // Clientset contains the clients for groups. Each group has exactly one
  2. // version included in a Clientset.
  3. type Clientset struct {
  4. *discovery.DiscoveryClient
  5. admissionregistrationV1 *admissionregistrationv1.AdmissionregistrationV1Client
  6. admissionregistrationV1beta1 *admissionregistrationv1beta1.AdmissionregistrationV1beta1Client
  7. internalV1alpha1 *internalv1alpha1.InternalV1alpha1Client
  8. appsV1 *appsv1.AppsV1Client
  9. appsV1beta1 *appsv1beta1.AppsV1beta1Client
  10. appsV1beta2 *appsv1beta2.AppsV1beta2Client
  11. authenticationV1 *authenticationv1.AuthenticationV1Client
  12. authenticationV1beta1 *authenticationv1beta1.AuthenticationV1beta1Client
  13. authorizationV1 *authorizationv1.AuthorizationV1Client
  14. authorizationV1beta1 *authorizationv1beta1.AuthorizationV1beta1Client
  15. autoscalingV1 *autoscalingv1.AutoscalingV1Client
  16. autoscalingV2 *autoscalingv2.AutoscalingV2Client
  17. autoscalingV2beta1 *autoscalingv2beta1.AutoscalingV2beta1Client
  18. autoscalingV2beta2 *autoscalingv2beta2.AutoscalingV2beta2Client
  19. batchV1 *batchv1.BatchV1Client
  20. batchV1beta1 *batchv1beta1.BatchV1beta1Client
  21. certificatesV1 *certificatesv1.CertificatesV1Client
  22. certificatesV1beta1 *certificatesv1beta1.CertificatesV1beta1Client
  23. coordinationV1beta1 *coordinationv1beta1.CoordinationV1beta1Client
  24. coordinationV1 *coordinationv1.CoordinationV1Client
  25. coreV1 *corev1.CoreV1Client
  26. discoveryV1 *discoveryv1.DiscoveryV1Client
  27. discoveryV1beta1 *discoveryv1beta1.DiscoveryV1beta1Client
  28. eventsV1 *eventsv1.EventsV1Client
  29. eventsV1beta1 *eventsv1beta1.EventsV1beta1Client
  30. extensionsV1beta1 *extensionsv1beta1.ExtensionsV1beta1Client
  31. flowcontrolV1alpha1 *flowcontrolv1alpha1.FlowcontrolV1alpha1Client
  32. flowcontrolV1beta1 *flowcontrolv1beta1.FlowcontrolV1beta1Client
  33. flowcontrolV1beta2 *flowcontrolv1beta2.FlowcontrolV1beta2Client
  34. networkingV1 *networkingv1.NetworkingV1Client
  35. networkingV1beta1 *networkingv1beta1.NetworkingV1beta1Client
  36. nodeV1 *nodev1.NodeV1Client
  37. nodeV1alpha1 *nodev1alpha1.NodeV1alpha1Client
  38. nodeV1beta1 *nodev1beta1.NodeV1beta1Client
  39. policyV1 *policyv1.PolicyV1Client
  40. policyV1beta1 *policyv1beta1.PolicyV1beta1Client
  41. rbacV1 *rbacv1.RbacV1Client
  42. rbacV1beta1 *rbacv1beta1.RbacV1beta1Client
  43. rbacV1alpha1 *rbacv1alpha1.RbacV1alpha1Client
  44. schedulingV1alpha1 *schedulingv1alpha1.SchedulingV1alpha1Client
  45. schedulingV1beta1 *schedulingv1beta1.SchedulingV1beta1Client
  46. schedulingV1 *schedulingv1.SchedulingV1Client
  47. storageV1beta1 *storagev1beta1.StorageV1beta1Client
  48. storageV1 *storagev1.StorageV1Client
  49. storageV1alpha1 *storagev1alpha1.StorageV1alpha1Client
  50. }

Using the clientset

Later code can use it to list objects. For example, the earlier ink8s-pod-metrics example lists nodes:

nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})

and lists pods:

pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})

Note that both calls go through CoreV1() on the clientset.
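For completeness, here is a small self-contained sketch of that pattern (the kubeconfig path and namespace are placeholders):

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func listNodesAndPods(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("nodes=%d kube-system pods=%d\n", len(nodes.Items), len(pods.Items))
        return nil
    }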

Setting up the leader-election lock

By default the scheduler starts with --leader-elect=true, meaning it must win the leader election before running the main loop; this is what makes highly available deployments possible.

  1. // Set up leader election if enabled.
  2. var leaderElectionConfig *leaderelection.LeaderElectionConfig
  3. if c.ComponentConfig.LeaderElection.LeaderElect {
  4. // Use the scheduler name in the first profile to record leader election.
  5. schedulerName := corev1.DefaultSchedulerName
  6. if len(c.ComponentConfig.Profiles) != 0 {
  7. schedulerName = c.ComponentConfig.Profiles[0].SchedulerName
  8. }
  9. coreRecorder := c.EventBroadcaster.DeprecatedNewLegacyRecorder(schedulerName)
  10. leaderElectionConfig, err = makeLeaderElectionConfig(c.ComponentConfig.LeaderElection, kubeConfig, coreRecorder)
  11. if err != nil {
  12. return nil, err
  13. }
  14. }

Create the informer factory:

	c.InformerFactory = scheduler.NewInformerFactory(client, 0)

Then Setup calls scheduler.New to create the scheduler object:

  1. // Create the scheduler.
  2. sched, err := scheduler.New(cc.Client,
  3. cc.InformerFactory,
  4. cc.DynInformerFactory,
  5. recorderFactory,
  6. ctx.Done(),
  7. scheduler.WithComponentConfigVersion(cc.ComponentConfig.TypeMeta.APIVersion),
  8. scheduler.WithKubeConfig(cc.KubeConfig),
  9. scheduler.WithProfiles(cc.ComponentConfig.Profiles...),
  10. scheduler.WithPercentageOfNodesToScore(cc.ComponentConfig.PercentageOfNodesToScore),
  11. scheduler.WithFrameworkOutOfTreeRegistry(outOfTreeRegistry),
  12. scheduler.WithPodMaxBackoffSeconds(cc.ComponentConfig.PodMaxBackoffSeconds),
  13. scheduler.WithPodInitialBackoffSeconds(cc.ComponentConfig.PodInitialBackoffSeconds),
  14. scheduler.WithPodMaxInUnschedulablePodsDuration(cc.PodMaxInUnschedulablePodsDuration),
  15. scheduler.WithExtenders(cc.ComponentConfig.Extenders...),
  16. scheduler.WithParallelism(cc.ComponentConfig.Parallelism),
  17. scheduler.WithBuildFrameworkCapturer(func(profile kubeschedulerconfig.KubeSchedulerProfile) {
  18. // Profiles are processed during Framework instantiation to set default plugins and configurations. Capturing them for logging
  19. completedProfiles = append(completedProfiles, profile)
  20. }),
  21. )

Run: running the scheduler

Register the configuration with configz

It is saved in a global map and can be fetched over HTTPS at the /configz path:

  1. // Configz registration.
  2. if cz, err := configz.New("componentconfig"); err == nil {
  3. cz.Set(cc.ComponentConfig)
  4. } else {
  5. return fmt.Errorf("unable to register configz: %s", err)
  6. }

Fetching /configz with curl

Edit the ClusterRole used by the prometheus ServiceAccount from earlier and add "/configz" to its nonResourceURLs:

vim rbac.yaml
kubectl apply -f rbac.yaml
kubectl get sa prometheus -n kube-system

After applying it, fetch a token for the ServiceAccount and query the endpoint with curl, as sketched below.
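A sketch of that flow; the ServiceAccount name and the kube-scheduler secure port 10259 are assumptions based on a default setup:

    # 1.24+: mint a short-lived token for the ServiceAccount
    TOKEN=$(kubectl create token prometheus -n kube-system)
    # query kube-scheduler's /configz on its secure port
    curl -sk -H "Authorization: Bearer $TOKEN" https://127.0.0.1:10259/configz | head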

Start the event broadcaster

Event is a core k8s resource; a later section covers it in detail.

// Prepare the event broadcaster.
cc.EventBroadcaster.StartRecordingToSink(ctx.Done())

Initialize the healthz checks

// Setup healthz checks.
var checks []healthz.HealthChecker
if cc.ComponentConfig.LeaderElection.LeaderElect {
    checks = append(checks, cc.LeaderElection.WatchDog)
}

The channel that waits for the election result

waitingForLeader is the channel that signals the election outcome. It is closed in two places:
- below, when this instance wins the election;
- when leader election is not enabled.

isLeader is a func that reports whether the current node is the leader: if waitingForLeader has been closed, this node becomes the leader.

  1. waitingForLeader := make(chan struct{})
  2. isLeader := func() bool {
  3. select {
  4. case _, ok := <-waitingForLeader:
  5. // if channel is closed, we are leading
  6. return !ok
  7. default:
  8. // channel is open, we are waiting for a leader
  9. return false
  10. }
  11. }

Where isLeader is used

If this instance is not the leader, the /metrics/resources handler returns nothing, so non-leader nodes do not export those metrics:

  1. func installMetricHandler(pathRecorderMux *mux.PathRecorderMux, informers informers.SharedInformerFactory, isLeader func() bool) {
  2. configz.InstallHandler(pathRecorderMux)
  3. pathRecorderMux.Handle("/metrics", legacyregistry.HandlerWithReset())
  4. resourceMetricsHandler := resources.Handler(informers.Core().V1().Pods().Lister())
  5. pathRecorderMux.HandleFunc("/metrics/resources", func(w http.ResponseWriter, req *http.Request) {
  6. if !isLeader() {
  7. return
  8. }
  9. resourceMetricsHandler.ServeHTTP(w, req)
  10. })
  11. }

buildHandlerChain builds the HTTP handler chain

It wraps the handler with authorization, authentication, request-info, cache-control, logging, and panic-recovery filters in turn:

  1. // buildHandlerChain wraps the given handler with the standard filters.
  2. func buildHandlerChain(handler http.Handler, authn authenticator.Request, authz authorizer.Authorizer) http.Handler {
  3. requestInfoResolver := &apirequest.RequestInfoFactory{}
  4. failedHandler := genericapifilters.Unauthorized(scheme.Codecs)
  5. handler = genericapifilters.WithAuthorization(handler, authz, scheme.Codecs)
  6. handler = genericapifilters.WithAuthentication(handler, authn, failedHandler, nil)
  7. handler = genericapifilters.WithRequestInfo(handler, requestInfoResolver)
  8. handler = genericapifilters.WithCacheControl(handler)
  9. handler = genericfilters.WithHTTPLogging(handler)
  10. handler = genericfilters.WithPanicRecovery(handler, requestInfoResolver)
  11. return handler
  12. }

cc.InformerFactory.Start starts all informers

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\informers\factory.go

  1. // Start initializes all requested informers.
  2. func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {
  3. f.lock.Lock()
  4. defer f.lock.Unlock()
  5. for informerType, informer := range f.informers {
  6. if !f.startedInformers[informerType] {
  7. go informer.Run(stopCh)
  8. f.startedInformers[informerType] = true
  9. }
  10. }
  11. }

WaitForCacheSync makes sure the informers have cached the resources locally before scheduling starts:

    // Wait for all caches to sync before scheduling.

    cc.InformerFactory.WaitForCacheSync(ctx.Done())

Location: D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\informers\factory.go

  1. // WaitForCacheSync waits for all started informers' cache were synced.
  2. func (f *sharedInformerFactory) WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool {
  3. informers := func() map[reflect.Type]cache.SharedIndexInformer {
  4. f.lock.Lock()
  5. defer f.lock.Unlock()
  6. informers := map[reflect.Type]cache.SharedIndexInformer{}
  7. for informerType, informer := range f.informers {
  8. if f.startedInformers[informerType] {
  9. informers[informerType] = informer
  10. }
  11. }
  12. return informers
  13. }()
  14. res := map[reflect.Type]bool{}
  15. for informType, informer := range informers {
  16. res[informType] = cache.WaitForCacheSync(stopCh, informer.HasSynced)
  17. }
  18. return res
  19. }

Start the leader-election flow

If this instance is elected leader, sched.Run is executed:

  1. // If leader election is enabled, runCommand via LeaderElector until done and exit.
  2. if cc.LeaderElection != nil {
  3. cc.LeaderElection.Callbacks = leaderelection.LeaderCallbacks{
  4. OnStartedLeading: func(ctx context.Context) {
  5. close(waitingForLeader)
  6. sched.Run(ctx)
  7. },
  8. OnStoppedLeading: func() {
  9. select {
  10. case <-ctx.Done():
  11. // We were asked to terminate. Exit 0.
  12. klog.InfoS("Requested to terminate, exiting")
  13. os.Exit(0)
  14. default:
  15. // We lost the lock.
  16. klog.ErrorS(nil, "Leaderelection lost")
  17. klog.FlushAndExit(klog.ExitFlushTimeout, 1)
  18. }
  19. },
  20. }
  21. leaderElector, err := leaderelection.NewLeaderElector(*cc.LeaderElection)
  22. if err != nil {
  23. return fmt.Errorf("couldn't create leader elector: %v", err)
  24. }
  25. leaderElector.Run(ctx)
  26. return fmt.Errorf("lost lease")
  27. }

6.2 Reading kube-scheduler's leaderelection mechanism

The k8s leader-election locking mechanism
- leaderelection implements a distributed lock on top of the atomicity of k8s API operations, and the leader is chosen through continuous competition for that lock.
- Only the process elected leader runs the actual business logic; this pattern is very common in k8s.

Why elect a leader?
- In Kubernetes, kube-scheduler and kube-controller-manager are usually deployed with multiple replicas for high availability.
- But only one instance actually does the work at any time.
- Leader election guarantees that the leader is the instance doing the work.
- When the leader dies, a new leader is elected from the other nodes so the component keeps working.

Source walkthrough

Leader election is enabled according to --leader-elect=true.

Location: D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\options\options.go

  1. if c.ComponentConfig.LeaderElection.LeaderElect {
  2. // Use the scheduler name in the first profile to record leader election.
  3. schedulerName := corev1.DefaultSchedulerName
  4. if len(c.ComponentConfig.Profiles) != 0 {
  5. schedulerName = c.ComponentConfig.Profiles[0].SchedulerName
  6. }
  7. coreRecorder := c.EventBroadcaster.DeprecatedNewLegacyRecorder(schedulerName)
  8. leaderElectionConfig, err = makeLeaderElectionConfig(c.ComponentConfig.LeaderElection, kubeConfig, coreRecorder)
  9. if err != nil {
  10. return nil, err
  11. }
  12. }

Initializing the lock configuration

makeLeaderElectionConfig builds the leader-election configuration.

Location: D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\options\options.go

- The identity is hostname + uuid.
- The lock is a resourcelock.
- The default resourcelock settings can be read via /configz.

The code:

  1. // makeLeaderElectionConfig builds a leader election configuration. It will
  2. // create a new resource lock associated with the configuration.
  3. func makeLeaderElectionConfig(config componentbaseconfig.LeaderElectionConfiguration, kubeConfig *restclient.Config, recorder record.EventRecorder) (*leaderelection.LeaderElectionConfig, error) {
  4. hostname, err := os.Hostname()
  5. if err != nil {
  6. return nil, fmt.Errorf("unable to get hostname: %v", err)
  7. }
  8. // add a uniquifier so that two processes on the same host don't accidentally both become active
  9. id := hostname + "_" + string(uuid.NewUUID())
  10. rl, err := resourcelock.NewFromKubeconfig(config.ResourceLock,
  11. config.ResourceNamespace,
  12. config.ResourceName,
  13. resourcelock.ResourceLockConfig{
  14. Identity: id,
  15. EventRecorder: recorder,
  16. },
  17. kubeConfig,
  18. config.RenewDeadline.Duration)
  19. if err != nil {
  20. return nil, fmt.Errorf("couldn't create resource lock: %v", err)
  21. }
  22. return &leaderelection.LeaderElectionConfig{
  23. Lock: rl,
  24. LeaseDuration: config.LeaseDuration.Duration,
  25. RenewDeadline: config.RenewDeadline.Duration,
  26. RetryPeriod: config.RetryPeriod.Duration,
  27. WatchDog: leaderelection.NewLeaderHealthzAdaptor(time.Second * 20),
  28. Name: "kube-scheduler",
  29. ReleaseOnCancel: true,
  30. }, nil
  31. }

Initializing the resourcelock

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\leaderelection\resourcelock\interface.go

  1. // Manufacture will create a lock of a given type according to the input parameters
  2. func New(lockType string, ns string, name string, coreClient corev1.CoreV1Interface, coordinationClient coordinationv1.CoordinationV1Interface, rlc ResourceLockConfig) (Interface, error) {
  3. endpointsLock := &endpointsLock{
  4. EndpointsMeta: metav1.ObjectMeta{
  5. Namespace: ns,
  6. Name: name,
  7. },
  8. Client: coreClient,
  9. LockConfig: rlc,
  10. }
  11. configmapLock := &configMapLock{
  12. ConfigMapMeta: metav1.ObjectMeta{
  13. Namespace: ns,
  14. Name: name,
  15. },
  16. Client: coreClient,
  17. LockConfig: rlc,
  18. }
  19. leaseLock := &LeaseLock{
  20. LeaseMeta: metav1.ObjectMeta{
  21. Namespace: ns,
  22. Name: name,
  23. },
  24. Client: coordinationClient,
  25. LockConfig: rlc,
  26. }
  27. switch lockType {
  28. case endpointsResourceLock:
  29. return nil, fmt.Errorf("endpoints lock is removed, migrate to %s", EndpointsLeasesResourceLock)
  30. case configMapsResourceLock:
  31. return nil, fmt.Errorf("configmaps lock is removed, migrate to %s", ConfigMapsLeasesResourceLock)
  32. case LeasesResourceLock:
  33. return leaseLock, nil
  34. case EndpointsLeasesResourceLock:
  35. return &MultiLock{
  36. Primary: endpointsLock,
  37. Secondary: leaseLock,
  38. }, nil
  39. case ConfigMapsLeasesResourceLock:
  40. return &MultiLock{
  41. Primary: configmapLock,
  42. Secondary: leaseLock,
  43. }, nil
  44. default:
  45. return nil, fmt.Errorf("Invalid lock-type %s", lockType)
  46. }
  47. }

Running the lock acquisition in the scheduler

In the scheduler's Run function, located at D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\server.go

  1. // If leader election is enabled, runCommand via LeaderElector until done and exit.
  2. if cc.LeaderElection != nil {
  3. cc.LeaderElection.Callbacks = leaderelection.LeaderCallbacks{
  4. OnStartedLeading: func(ctx context.Context) {
  5. close(waitingForLeader)
  6. sched.Run(ctx)
  7. },
  8. OnStoppedLeading: func() {
  9. select {
  10. case <-ctx.Done():
  11. // We were asked to terminate. Exit 0.
  12. klog.InfoS("Requested to terminate, exiting")
  13. os.Exit(0)
  14. default:
  15. // We lost the lock.
  16. klog.ErrorS(nil, "Leaderelection lost")
  17. klog.FlushAndExit(klog.ExitFlushTimeout, 1)
  18. }
  19. },
  20. }
  21. leaderElector, err := leaderelection.NewLeaderElector(*cc.LeaderElection)
  22. if err != nil {
  23. return fmt.Errorf("couldn't create leader elector: %v", err)
  24. }
  25. leaderElector.Run(ctx)
  26. return fmt.Errorf("lost lease")
  27. }

Under the hood, leaderElector.Run starts the acquire-and-renew loop:

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\leaderelection\leaderelection.go

  1. // Run starts the leader election loop. Run will not return
  2. // before leader election loop is stopped by ctx or it has
  3. // stopped holding the leader lease
  4. func (le *LeaderElector) Run(ctx context.Context) {
  5. defer runtime.HandleCrash()
  6. defer func() {
  7. le.config.Callbacks.OnStoppedLeading()
  8. }()
  9. if !le.acquire(ctx) {
  10. return // ctx signalled done
  11. }
  12. ctx, cancel := context.WithCancel(ctx)
  13. defer cancel()
  14. go le.config.Callbacks.OnStartedLeading(ctx)
  15. le.renew(ctx)
  16. }

acquire is what tries to take the lock

acquire polls tryAcquireOrRenew and returns true as soon as the lock is acquired; it returns false if ctx signals done:

  1. // acquire loops calling tryAcquireOrRenew and returns true immediately when tryAcquireOrRenew succeeds.
  2. // Returns false if ctx signals done.
  3. func (le *LeaderElector) acquire(ctx context.Context) bool {
  4. ctx, cancel := context.WithCancel(ctx)
  5. defer cancel()
  6. succeeded := false
  7. desc := le.config.Lock.Describe()
  8. klog.Infof("attempting to acquire leader lease %v...", desc)
  9. wait.JitterUntil(func() {
  10. succeeded = le.tryAcquireOrRenew(ctx)
  11. le.maybeReportTransition()
  12. if !succeeded {
  13. klog.V(4).Infof("failed to acquire lease %v", desc)
  14. return
  15. }
  16. le.config.Lock.RecordEvent("became leader")
  17. le.metrics.leaderOn(le.config.Name)
  18. klog.Infof("successfully acquired lease %v", desc)
  19. cancel()
  20. }, le.config.RetryPeriod, JitterFactor, true, ctx.Done())
  21. return succeeded
  22. }

Reading tryAcquireOrRenew

First, fetch the existing lock record (through the apiserver, backed by etcd). If the error is IsNotFound, create the resource and take the lock:

  1. // 1. obtain or create the ElectionRecord
  2. oldLeaderElectionRecord, oldLeaderElectionRawRecord, err := le.config.Lock.Get(ctx)
  3. if err != nil {
  4. if !errors.IsNotFound(err) {
  5. klog.Errorf("error retrieving resource lock %v: %v", le.config.Lock.Describe(), err)
  6. return false
  7. }
  8. if err = le.config.Lock.Create(ctx, leaderElectionRecord); err != nil {
  9. klog.Errorf("error initially creating leader election record: %v", err)
  10. return false
  11. }
  12. le.setObservedRecord(&leaderElectionRecord)
  13. return true
  14. }

Compare the locally cached record with the remote one; update the cache if they differ:

  1. // 2. Record obtained, check the Identity & Time
  2. if !bytes.Equal(le.observedRawRecord, oldLeaderElectionRawRecord) {
  3. le.setObservedRecord(oldLeaderElectionRecord)
  4. le.observedRawRecord = oldLeaderElectionRawRecord
  5. }

Check whether the held lock has expired and whether it is held by us:

  1. if len(oldLeaderElectionRecord.HolderIdentity) > 0 &&
  2. le.observedTime.Add(le.config.LeaseDuration).After(now.Time) &&
  3. !le.IsLeader() {
  4. klog.V(4).Infof("lock is held by %v and has not yet expired", oldLeaderElectionRecord.HolderIdentity)
  5. return false
  6. }

At this point we are about to be the leader, but there are two cases:
- le.IsLeader() means we were already the leader, so the record does not need changing;
- otherwise we have just become the leader, so LeaderTransitions is incremented by 1.

  1. // 3. We're going to try to update. The leaderElectionRecord is set to it's default
  2. // here. Let's correct it before updating.
  3. if le.IsLeader() {
  4. leaderElectionRecord.AcquireTime = oldLeaderElectionRecord.AcquireTime
  5. leaderElectionRecord.LeaderTransitions = oldLeaderElectionRecord.LeaderTransitions
  6. } else {
  7. leaderElectionRecord.LeaderTransitions = oldLeaderElectionRecord.LeaderTransitions + 1
  8. }

Update the lock resource; if it changed between the Get and this Update, the update fails:

  1. // update the lock itself
  2. if err = le.config.Lock.Update(ctx, leaderElectionRecord); err != nil {
  3. klog.Errorf("Failed to update lock: %v", err)
  4. return false
  5. }
  6. le.setObservedRecord(&leaderElectionRecord)
  7. return true

What if the Update above races with another writer?
le.config.Lock.Get() returns the lock object, which carries a resourceVersion field identifying the internal version of the resource; every update bumps it. If an update carries a resourceVersion, the apiserver only accepts it when that value still matches the current one, guaranteeing that no other update happened in between. This is what makes the update atomic (optimistic concurrency).
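Outside of leader election, client-go ships a helper for exactly this optimistic-concurrency pattern; a minimal sketch (the label update is a placeholder for any mutation):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func addLabel(client kubernetes.Interface, ns, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // re-Get on every attempt so we carry the latest resourceVersion
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if pod.Labels == nil {
                pod.Labels = map[string]string{}
            }
            pod.Labels["touched"] = "true"
            _, err = client.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
            return err // a 409 Conflict here triggers another retry
        })
    }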

ResourceVersion lives in ObjectMeta; see D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\apimachinery\pkg\apis\meta\v1\types.go

  1. // An opaque value that represents the internal version of this object that can
  2. // be used by clients to determine when objects have changed. May be used for optimistic
  3. // concurrency, change detection, and the watch operation on a resource or set of resources.
  4. // Clients must treat these values as opaque and passed unmodified back to the server.
  5. // They may only be valid for a particular resource or set of resources.
  6. //
  7. // Populated by the system.
  8. // Read-only.
  9. // Value must be treated as opaque by clients and .
  10. // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
  11. // +optional
  12. ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,6,opt,name=resourceVersion"`

Looking at the kube-scheduler lease object

kubectl get lease -n kube-system
[root@k8s-master01 k8s-leaderelection]# kubectl get lease -n kube-system
NAME                      HOLDER                                              AGE
kube-controller-manager   k8s-master01_def02578-36f9-4a43-b700-66dd407ff612   161d
kube-scheduler            k8s-master01_29da2906-54c1-4db1-9146-4bf8919b4cda   161d

This is a single-master environment. Both kube-scheduler and kube-controller-manager use a lease in the kube-system namespace as the election lock, and the holder shows which instance currently owns it (hostname + uuid).

Trying out leaderelection in code

Create a new project, k8s-leaderelection:

  1. PS D:\Workspace\Go\src\k8s-leaderelection> go mod init k8s-leaderelection
  2. go: creating new go.mod: module k8s-leaderelection

Create leaderelection.go:

  1. package main
  2. import (
  3. "context"
  4. "flag"
  5. "os"
  6. "os/signal"
  7. "syscall"
  8. "time"
  9. "github.com/google/uuid"
  10. metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  11. clientset "k8s.io/client-go/kubernetes"
  12. "k8s.io/client-go/rest"
  13. "k8s.io/client-go/tools/clientcmd"
  14. "k8s.io/client-go/tools/leaderelection"
  15. "k8s.io/client-go/tools/leaderelection/resourcelock"
  16. "k8s.io/klog"
  17. )
  18. // build the rest.Config: use the kubeconfig file if provided, otherwise fall back to InClusterConfig (in-cluster service account)
  19. func buildConfig(kubeconfig string) (*rest.Config, error) {
  20. if kubeconfig != "" {
  21. cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
  22. if err != nil {
  23. return nil, err
  24. }
  25. return cfg, nil
  26. }
  27. cfg, err := rest.InClusterConfig()
  28. if err != nil {
  29. return nil, err
  30. }
  31. return cfg, nil
  32. }
  33. func main() {
  34. klog.InitFlags(nil)
  35. var kubeconfig string
  36. var leaseLockName string
  37. var leaseLockNamespace string
  38. var id string
  39. flag.StringVar(&kubeconfig, "kubeconfig", "", "absolute path to the kubeconfig file")
  40. // the unique holder id
  41. flag.StringVar(&id, "id", uuid.New().String(), "the holder identity name")
  42. // name of the lease-lock resource
  43. flag.StringVar(&leaseLockName, "lease-lock-name", "", "the lease lock resource name")
  44. // namespace of the lease-lock resource
  45. flag.StringVar(&leaseLockNamespace, "lease-lock-namespace", "", "the lease lock resource namespace")
  46. flag.Parse()
  47. if leaseLockName == "" {
  48. klog.Fatal("unable to get lease lock resource name (missing lease-lock-name flag).")
  49. }
  50. if leaseLockNamespace == "" {
  51. klog.Fatal("unable to get lease lock resource namespace (missing lease-lock-namespace flag).")
  52. }
  53. // leader election uses the Kubernetes API by writing to a lock object, which can be a LeaseLock object (preferred),
  54. // a ConfigMap, or an Endpoints (deprecated) object.
  55. // Conflicting writes are detected and each client handles those actions
  56. // independently.
  57. config, err := buildConfig(kubeconfig)
  58. if err != nil {
  59. klog.Fatal(err)
  60. }
  61. // create the clientset
  62. client := clientset.NewForConfigOrDie(config)
  63. run := func(ctx context.Context) {
  64. // complete your controller loop here
  65. klog.Info("Controller loop...")
  66. select {}
  67. }
  68. // use a Go context so we can tell the leaderelection code when we
  69. // want to step down
  70. // context used to stop the election loop
  71. ctx, cancel := context.WithCancel(context.Background())
  72. defer cancel()
  73. // listen for termination signals
  74. ch := make(chan os.Signal, 1)
  75. signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
  76. go func() {
  77. <-ch
  78. klog.Info("Received termination, signaling shutdown")
  79. cancel()
  80. }()
  81. // we use the Lease lock type since edits to Leases are less common
  82. // and fewer objects in the cluster watch "all Leases"
  83. // the lock resource object; Lease is used here, but configmap, endpoint, or a multilock (a combination of several) are also supported
  84. lock := &resourcelock.LeaseLock{
  85. LeaseMeta: metav1.ObjectMeta{
  86. Name: leaseLockName,
  87. Namespace: leaseLockNamespace,
  88. },
  89. Client: client.CoordinationV1(),
  90. LockConfig: resourcelock.ResourceLockConfig{
  91. Identity: id,
  92. },
  93. }
  94. // start the leader election code loop
  95. leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
  96. Lock: lock,
  97. // IMPORTANT: you MUST ensure that any code you have that
  98. // is protected by the lease must terminate **before**
  99. // you call cancel. Otherwise, you could have a background
  100. // loop still running and another process could
  101. // get elected before your background loop finished, violating
  102. // the stated goal of the lease.
  103. ReleaseOnCancel: true,
  104. LeaseDuration: 60 * time.Second, // lease duration
  105. RenewDeadline: 15 * time.Second, // deadline for the current leader to renew the lease
  106. RetryPeriod: 5 * time.Second, // retry period for non-leader candidates
  107. Callbacks: leaderelection.LeaderCallbacks{
  108. OnStartedLeading: func(ctx context.Context) {
  109. // the business logic to run once we become the leader
  110. // we're notified when we start - this is where you would
  111. // usually put your code
  112. run(ctx)
  113. },
  114. OnStoppedLeading: func() {
  115. // the process exits here
  116. // we can do cleanup here
  117. klog.Infof("leader lost: %s", id)
  118. os.Exit(0)
  119. },
  120. OnNewLeader: func(identity string) {
  121. // called when a new leader is elected
  122. // we're notified when a new leader is elected
  123. if identity == id {
  124. // I just got the lock
  125. return
  126. }
  127. klog.Infof("new leader elected: %s", identity)
  128. },
  129. },
  130. })
  131. }

Walking through it

- The command line passes in the kubeconfig, the lease name, the holder id, and so on.
- buildConfig builds the rest.Config.
- clientset.NewForConfigOrDie creates the clientset.
- A resourcelock.LeaseLock is instantiated, using the Lease resource type.
- leaderelection.RunOrDie starts the election loop.
- Timing parameters:
  - LeaseDuration: 60s, the lease duration
  - RenewDeadline: 15s, the deadline for the leader to renew the lease
  - RetryPeriod: 5s, the retry period for non-leader candidates

Callbacks:
- OnStartedLeading runs when we become the leader; normally this is the business logic, here it just runs the empty run loop.
- OnStoppedLeading exits the process.
- OnNewLeader runs whenever a new leader is elected.

Build and run

go build

Start member id=1 first; you can see it acquire the lock and become the leader (get the lease to confirm).

Then start member id=2; member 1 remains the leader.

Now stop member 1 and you can see member 2 take over the lock.

Recap of the k8s leaderelection lock

- leaderelection implements a distributed lock on top of the atomicity of k8s API operations, and the leader is chosen through continuous competition for that lock.
- Only the process elected leader runs the actual business logic; this pattern is very common in k8s.
- kube-scheduler uses a lease-type resource lock and only starts scheduling after being elected.
- The point of all this is to allow multiple replicas for high availability.

6.3 k8s Events and the event broadcaster in kube-scheduler

What are k8s events?

k8s Events are objects that show you what is happening inside the cluster, for example which decisions the scheduler made, or why a pod was evicted from a node.

Which components produce events?

All core components and extensions (operators) can create events through the apiserver; many k8s components emit them.

How to get event data

- kubectl get events fetches them directly.
- kubectl describe on a resource shows its events.

For example, create a pod with a deliberately wrong image repository name; after creating it, describe the pod and you will see the image-pull-failure events, as sketched below.
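A sketch of those commands (bad-image-pod is a placeholder for whatever pod name you used):

    kubectl describe pod bad-image-pod | tail -n 20
    # or pull just the events for that object:
    kubectl get events --field-selector involvedObject.name=bad-image-pod \
      --sort-by=.metadata.creationTimestamp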

The event machinery has three parts

- EventRecorder: the event producer; k8s components call its methods to generate events.
- EventBroadcaster: the event broadcaster; it consumes the events produced by the EventRecorder and distributes them to the broadcasterWatchers.
- broadcasterWatcher: defines how events are processed, e.g. reporting them to the apiserver.

How events are stored

The volume of Events is huge, so they are only kept for a short time. Events are staged through the apiserver into etcd (ideally a dedicated etcd cluster for events, separate from the one holding cluster data). To avoid filling the disk, a retention policy is enforced: events older than one hour since their last occurrence are deleted.
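The one-hour window corresponds to kube-apiserver's --event-ttl flag (default 1h0m0s); a sketch of shortening it in the apiserver manifest:

    kube-apiserver \
      --event-ttl=30m0s \
      ...   # other flags unchanged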

What events are useful for

As the picture below (taken from the web) shows, events can feed monitoring of a k8s cluster.

Events in kube-scheduler

Initializing the EventBroadcaster

This happens while building the Config, at D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\options\options.go

	c.EventBroadcaster = events.NewEventBroadcasterAdapter(eventClient)

NewEventBroadcasterAdapter:

  1. // NewEventBroadcasterAdapter creates a wrapper around new and legacy broadcasters to simplify
  2. // migration of individual components to the new Event API.
  3. func NewEventBroadcasterAdapter(client clientset.Interface) EventBroadcasterAdapter {
  4. eventClient := &eventBroadcasterAdapterImpl{}
  5. if _, err := client.Discovery().ServerResourcesForGroupVersion(eventsv1.SchemeGroupVersion.String()); err == nil {
  6. eventClient.eventsv1Client = client.EventsV1()
  7. eventClient.eventsv1Broadcaster = NewBroadcaster(&EventSinkImpl{Interface: eventClient.eventsv1Client})
  8. }
  9. // Even though there can soon exist cases when coreBroadcaster won't really be needed,
  10. // we create it unconditionally because its overhead is minor and will simplify using usage
  11. // patterns of this library in all components.
  12. eventClient.coreClient = client.CoreV1()
  13. eventClient.coreBroadcaster = record.NewBroadcaster()
  14. return eventClient
  15. }

Under the hood it uses client-go's eventBroadcasterImpl, located at D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\record\event.go

  1. type eventBroadcasterImpl struct {
  2. *watch.Broadcaster
  3. sleepDuration time.Duration
  4. options CorrelatorOptions
  5. }

Initializing the eventRecorder

This happens in the scheduler's Setup function, at D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\server.go

    recorderFactory := getRecorderFactory(&cc)

recorderFactory is the factory that produces eventRecorders. The concrete type is client-go's recorderImpl, located at D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\events\event_recorder.go

  1. type recorderImpl struct {
  2. scheme *runtime.Scheme
  3. reportingController string
  4. reportingInstance string
  5. *watch.Broadcaster
  6. clock clock.Clock
  7. }

Starting the event broadcaster

In the scheduler's Run function, at D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\server.go

  1. // Prepare the event broadcaster.
  2. cc.EventBroadcaster.StartRecordingToSink(ctx.Done())

Reading StartRecordingToSink in client-go

  1. // StartRecordingToSink starts sending events received from the specified eventBroadcaster to the given sink.
  2. func (e *eventBroadcasterAdapterImpl) StartRecordingToSink(stopCh <-chan struct{}) {
  3. if e.eventsv1Broadcaster != nil && e.eventsv1Client != nil {
  4. e.eventsv1Broadcaster.StartRecordingToSink(stopCh)
  5. }
  6. if e.coreBroadcaster != nil && e.coreClient != nil {
  7. e.coreBroadcaster.StartRecordingToSink(&typedv1core.EventSinkImpl{Interface: e.coreClient.Events("")})
  8. }
  9. }

Reading startRecordingEvents

D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\tools\events\event_broadcaster.go

  1. func (e *eventBroadcasterImpl) startRecordingEvents(stopCh <-chan struct{}) {
  2. eventHandler := func(obj runtime.Object) {
  3. event, ok := obj.(*eventsv1.Event)
  4. if !ok {
  5. klog.Errorf("unexpected type, expected eventsv1.Event")
  6. return
  7. }
  8. e.recordToSink(event, clock.RealClock{})
  9. }
  10. stopWatcher := e.StartEventWatcher(eventHandler)
  11. go func() {
  12. <-stopCh
  13. stopWatcher()
  14. }()
  15. }

It registers an eventHandler that calls recordToSink to write to the backend store. StartEventWatcher consumes events from the ResultChan and hands them to that eventHandler:

  1. // StartEventWatcher starts sending events received from this EventBroadcaster to the given event handler function.
  2. // The return value is used to stop recording
  3. func (e *eventBroadcasterImpl) StartEventWatcher(eventHandler func(event runtime.Object)) func() {
  4. watcher := e.Watch()
  5. go func() {
  6. defer utilruntime.HandleCrash()
  7. for {
  8. watchEvent, ok := <-watcher.ResultChan()
  9. if !ok {
  10. return
  11. }
  12. eventHandler(watchEvent.Object)
  13. }
  14. }()
  15. return watcher.Stop
  16. }

The recordToSink send logic

getKey builds the event's key, which identifies it in the cache:

  1. func getKey(event *eventsv1.Event) eventKey {
  2. key := eventKey{
  3. action: event.Action,
  4. reason: event.Reason,
  5. reportingController: event.ReportingController,
  6. regarding: event.Regarding,
  7. }
  8. if event.Related != nil {
  9. key.related = *event.Related
  10. }
  11. return key
  12. }

Event.Series records how many times this event occurred and when it was last observed. The key above is looked up in eventCache; if it is found and the series exists, the count and the last-observed time are simply updated:

  1. isomorphicEvent, isIsomorphic := e.eventCache[eventKey]
  2. if isIsomorphic {
  3. if isomorphicEvent.Series != nil {
  4. isomorphicEvent.Series.Count++
  5. isomorphicEvent.Series.LastObservedTime = metav1.MicroTime{Time: clock.Now()}
  6. return nil
  7. }

Otherwise a new series is created and the event is returned:

  1. isomorphicEvent.Series = &eventsv1.EventSeries{
  2. Count: 1,
  3. LastObservedTime: metav1.MicroTime{Time: clock.Now()},
  4. }
  5. return isomorphicEvent

Then the resulting evToRecord is sent and the cache is updated:

  1. if evToRecord != nil {
  2. recordedEvent := e.attemptRecording(evToRecord)
  3. if recordedEvent != nil {
  4. recordedEventKey := getKey(recordedEvent)
  5. e.mu.Lock()
  6. defer e.mu.Unlock()
  7. e.eventCache[recordedEventKey] = recordedEvent
  8. }
  9. }

attemptRecording sends with retries; underneath it simply calls the sink's methods. Location: D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\tools\events\interfaces.go

  1. // EventSink knows how to store events (client-go implements it.)
  2. // EventSink must respect the namespace that will be embedded in 'event'.
  3. // It is assumed that EventSink will return the same sorts of errors as
  4. // client-go's REST client.
  5. type EventSink interface {
  6. Create(event *eventsv1.Event) (*eventsv1.Event, error)
  7. Update(event *eventsv1.Event) (*eventsv1.Event, error)
  8. Patch(oldEvent *eventsv1.Event, data []byte) (*eventsv1.Event, error)
  9. }

What Event.Series is for

Just like a recurring "cannot connect to MySQL" error keeps producing the same log line, the same Kubernetes event can be emitted many times, so deduplication and noise reduction are necessary. A unique key is built from event.Action, event.Reason, event.ReportingController (the source), and related information; if the cache already holds a record for that key, it is enough to update that event's occurrence count and last-observed time.

Events produced through the eventRecorder in kube-scheduler

The eventRecorder object

It lives in the scheduler's frameworkImpl, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\framework\runtime\framework.go

  1. // EventRecorder returns an event recorder.
  2. func (f *frameworkImpl) EventRecorder() events.EventRecorder {
  3. return f.eventRecorder
  4. }

Where the scheduler handles a pod scheduling failure

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\scheduler.go

  1. // handleSchedulingFailure records an event for the pod that indicates the
  2. // pod has failed to schedule. Also, update the pod condition and nominated node name if set.
  3. func (sched *Scheduler) handleSchedulingFailure(fwk framework.Framework, podInfo *framework.QueuedPodInfo, err error, reason string, nominatingInfo *framework.NominatingInfo) {
  4. sched.Error(podInfo, err)
  5. // Update the scheduling queue with the nominated pod information. Without
  6. // this, there would be a race condition between the next scheduling cycle
  7. // and the time the scheduler receives a Pod Update for the nominated pod.
  8. // Here we check for nil only for tests.
  9. if sched.SchedulingQueue != nil {
  10. sched.SchedulingQueue.AddNominatedPod(podInfo.PodInfo, nominatingInfo)
  11. }
  12. pod := podInfo.Pod
  13. msg := truncateMessage(err.Error())
  14. fwk.EventRecorder().Eventf(pod, nil, v1.EventTypeWarning, "FailedScheduling", "Scheduling", msg)
  15. if err := updatePod(sched.client, pod, &v1.PodCondition{
  16. Type: v1.PodScheduled,
  17. Status: v1.ConditionFalse,
  18. Reason: reason,
  19. Message: err.Error(),
  20. }, nominatingInfo); err != nil {
  21. klog.ErrorS(err, "Error updating pod", "pod", klog.KObj(pod))
  22. }
  23. }

Write a pod that the scheduler will deliberately fail to schedule, then look at the resulting events.

Here the pod asks to be placed on a node labeled disktype=ssd (typically via spec.nodeSelector), while no node carries that label yet.

After creating it, list the events and you can see the FailedScheduling event:

kubectl get event

Now let's analyze the code kube-scheduler uses to record this event.

Eventf parameters and fields:

- regarding: which resource the event is about; here the pod is passed in.
- related: other resources the event is associated with; nil here.
- eventtype: whether the event is a Warning or Normal one; here v1.EventTypeWarning.
- reason: the reason; here FailedScheduling.
- action: which action was being performed; here Scheduling.
- note: the detailed message; here the error message msg.
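To make the parameter order concrete, here is a hedged sketch (not scheduler code) that builds an events/v1 broadcaster and recorder with client-go and emits an event with the same argument layout. The kubeconfig handling, the controller name "demo-controller", and the pod object are assumptions for illustration only:

    package main

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/events"
    )

    func main() {
        // Build a clientset from the local kubeconfig (error handling kept minimal).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Broadcaster + recorder over the events.k8s.io/v1 API, the same family the scheduler uses.
        stop := make(chan struct{})
        defer close(stop)
        broadcaster := events.NewBroadcaster(&events.EventSinkImpl{Interface: clientset.EventsV1()})
        broadcaster.StartRecordingToSink(stop)
        recorder := broadcaster.NewRecorder(scheme.Scheme, "demo-controller")

        // The object the event is about; in a real controller this would come from the informer cache.
        pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "nginx-pod"}}

        // Same argument order as the scheduler's call: regarding, related, eventtype, reason, action, note.
        recorder.Eventf(pod, nil, v1.EventTypeWarning, "FailedScheduling", "Scheduling", "demo note: %s", "no node matched nodeSelector")

        // Give the background watcher a moment to deliver the event to the apiserver.
        time.Sleep(2 * time.Second)
    }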

Finally, label a node with disktype=ssd so the pod can be scheduled normally.

You can see that a successfully scheduled pod also produces an event.

Tracing the Eventf call for the successful case, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\schedule_one.go

  1. func (sched *Scheduler) finishBinding(fwk framework.Framework, assumed *v1.Pod, targetNode string, err error) {
  2. if finErr := sched.Cache.FinishBinding(assumed); finErr != nil {
  3. klog.ErrorS(finErr, "Scheduler cache FinishBinding failed")
  4. }
  5. if err != nil {
  6. klog.V(1).InfoS("Failed to bind pod", "pod", klog.KObj(assumed))
  7. return
  8. }
  9. fwk.EventRecorder().Eventf(assumed, nil, v1.EventTypeNormal, "Scheduled", "Binding", "Successfully assigned %v/%v to %v", assumed.Namespace, assumed.Name, targetNode)
  10. }

Key points of this section

- Kubernetes Events are objects that expose what is happening inside the cluster.
- Many components can produce events; the data is sent to the apiserver and kept temporarily in etcd. Because the volume is large, there is a cleanup policy.

The event management machinery consists of three parts:

- EventRecorder: the event producer; k8s components call its methods to generate events.
- EventBroadcaster: the event broadcaster consumes the events produced by the EventRecorder and distributes them to broadcasterWatchers.
- broadcasterWatcher: the per-consumer watcher that decides what to do with an event, for example writing it to the apiserver through an EventSink.

6.4 The k8s informer mechanism

What the informer mechanism is for

- The informer mechanism keeps messages timely, reliable, and ordered without relying on any middleware.
- It reduces the communication pressure the various k8s components put on etcd and the k8s APIServer.
- The informer framework makes it easy for every sub-module and extension program to obtain resource information from k8s.

The main objects of the informer mechanism

- Reflector: talks directly to the k8s API server; it implements the list-watch mechanism internally.
- DeltaFIFO: the update (delta) queue.
- Informer: a code abstraction of the resource we want to watch.
- Indexer: client-go's local store for resource objects, with built-in indexing.

The informer framework

The architecture diagram has two parts: the yellow icons are the pieces a developer needs to implement, while everything else is already provided by client-go and can be used as-is.


The main objects in more detail

- Reflector: talks directly to the k8s API server and implements list-watch internally.
  - list-watch is what watches for resource changes.
  - One list-watch corresponds to exactly one resource type.
  - The resource can be a built-in k8s resource or a custom resource.
  - When a change is received (create, delete, update), the object is put into the DeltaFIFO queue.
  - The Reflector keeps a long-lived connection to the apiserver.

- DeltaFIFO: the update queue.
  - FIFO is a queue with the usual queue operations (Add, Update, Delete, List, Pop, Close, and so on).
  - Delta is a store of resource objects that records the change type of each object, such as Added, Updated, Deleted, Sync.

- Informer: a code abstraction of the resource we want to watch.
  - It pops data from the DeltaFIFO queue,
  - saves it into the local cache, the Indexer (step 5 in the diagram),
  - and distributes the data to the custom controller for event handling (step 6 in the diagram).

- Indexer: client-go's local store for resource objects, with built-in indexing.
  - Objects popped from the DeltaFIFO are stored into the Indexer.
  - The Indexer stays consistent with the data in the etcd cluster,
  - so client-go can read locally and reduce the load on the Kubernetes API server and etcd; a Lister read is sketched below.
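Because the Indexer is a full local cache, consumers normally read through a Lister instead of calling the apiserver. A minimal hedged sketch (the program layout is mine, not from the original walkthrough; it assumes a reachable kubeconfig):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/labels"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        factory := informers.NewSharedInformerFactory(clientset, 0)
        // Calling Lister() registers the pod informer with the factory before Start.
        podLister := factory.Core().V1().Pods().Lister()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)            // runs Reflector -> DeltaFIFO -> Indexer for the pod informer
        factory.WaitForCacheSync(stop) // wait for the initial List to land in the Indexer

        // This List is served entirely from the local Indexer, not from the apiserver.
        pods, err := podLister.Pods("default").List(labels.Everything())
        if err != nil {
            panic(err)
        }
        for _, p := range pods {
            fmt.Println(p.Name)
        }
    }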

Using informers in code

Create a new project, k8s-informer:

go mod init k8s-informer

informer.go:

    package main

    import (
        "context"
        "flag"
        "log"
        "os"
        "os/signal"
        "path/filepath"
        "syscall"
        "time"

        v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
        "k8s.io/klog"
    )

    func main() {
        var kubeconfig *string
        // On Windows this reads C:\Users\xxx\.kube\config,
        // on Linux it reads ~/.kube/config.
        if home := homedir.HomeDir(); home != "" {
            kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
        } else {
            kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
        }
        flag.Parse()

        config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        // Listen for termination signals.
        ch := make(chan os.Signal, 1)
        signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
        go func() {
            <-ch
            klog.Info("Received termination, signaling shutdown")
            cancel()
        }()

        // Resync every minute: the informer periodically re-delivers the objects
        // in its local store as Update events (it does not re-List from the apiserver).
        sharedInformers := informers.NewSharedInformerFactory(clientset, time.Minute)
        informer := sharedInformers.Core().V1().Pods().Informer()
        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                mObj := obj.(v1.Object)
                log.Printf("New Pod Added to store: %s", mObj.GetName())
            },
            UpdateFunc: func(oldObj, newObj interface{}) {
                oObj := oldObj.(v1.Object)
                nObj := newObj.(v1.Object)
                log.Printf("%s Pod Updated to %s", oObj.GetName(), nObj.GetName())
            },
            DeleteFunc: func(obj interface{}) {
                mObj := obj.(v1.Object)
                log.Printf("Pod Deleted from store: %s", mObj.GetName())
            },
        })
        informer.Run(ctx.Done())
    }

Walking through it:

- First build a restclient.Config from the kubeconfig:
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
- Then create the clientset that talks to the apiserver:
    clientset, err := kubernetes.NewForConfig(config)
- Listen for termination signals and create a cancellable context for shutdown.
- Use a SharedInformerFactory to create the shared informer; the resync period passed in is one minute, meaning the informer re-delivers the contents of its local store once a minute.
- Then create the informer for the Pod resource.
- Add the EventHandler and run the informer:
  - AddFunc is the callback for newly created resources,
  - UpdateFunc is the callback for resource updates,
  - DeleteFunc is the callback for resource deletions.

Build and run:

go build
./informer

Observing the effect:

- Right after starting, it pulls the full list of existing pods.
- Create a new pod and the informer logs an Add.
- Modify the pod just created (for example add a label to its yaml) and the informer logs an Update.
- Every resync also triggers the Update callback.

6.5 Reading the informer source code in kube-scheduler

Source walkthrough inside kube-scheduler

Initializing the SharedInformerFactory

- The entry point is in kube-scheduler's config, at D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\options\options.go
- resync=0 is passed here, meaning no periodic resync: one initial full List plus incremental Watch updates.

    c.InformerFactory = scheduler.NewInformerFactory(client, 0)

    informerFactory := informers.NewSharedInformerFactory(cs, resyncPeriod)

- NewSharedInformerFactory eventually calls NewSharedInformerFactoryWithOptions to initialize a sharedInformerFactory, at D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\informers\factory.go

  1. // NewSharedInformerFactoryWithOptions constructs a new instance of a SharedInformerFactory with additional options.
  2. func NewSharedInformerFactoryWithOptions(client kubernetes.Interface, defaultResync time.Duration, options ...SharedInformerOption) SharedInformerFactory {
  3. factory := &sharedInformerFactory{
  4. client: client,
  5. namespace: v1.NamespaceAll,
  6. defaultResync: defaultResync,
  7. informers: make(map[reflect.Type]cache.SharedIndexInformer),
  8. startedInformers: make(map[reflect.Type]bool),
  9. customResync: make(map[reflect.Type]time.Duration),
  10. }
  11. // Apply all options
  12. for _, opt := range options {
  13. factory = opt(factory)
  14. }
  15. return factory
  16. }

Why is it called a shared informer?

The sharedInformerFactory has the following fields:

  1. type sharedInformerFactory struct {
  2. client kubernetes.Interface
  3. namespace string
  4. tweakListOptions internalinterfaces.TweakListOptionsFunc
  5. lock sync.Mutex
  6. defaultResync time.Duration
  7. customResync map[reflect.Type]time.Duration
  8. informers map[reflect.Type]cache.SharedIndexInformer
  9. // startedInformers is used for tracking which informers have been started.
  10. // This allows Start() to be called multiple times safely.
  11. startedInformers map[reflect.Type]bool
  12. }

The most important field is the informers map, which keeps the informer for each resource type.

Without sharing, one resource type could end up with many informers, which is inefficient; so each resource type maps to exactly one shared informer, and the factory maintains one informer per type and hands the same instance to every caller (see the sketch below).
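A quick hedged way to see the sharing (the kubeconfig handling is simplified; the comparison is only for illustration): asking the factory for the pod informer twice returns the same instance, so all consumers share one Reflector, one DeltaFIFO, and one Indexer per resource type.

    package main

    import (
        "fmt"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        factory := informers.NewSharedInformerFactory(clientset, 0)

        // Both calls go through sharedInformerFactory.InformerFor and hit the informers map.
        i1 := factory.Core().V1().Pods().Informer()
        i2 := factory.Core().V1().Pods().Informer()

        fmt.Println(i1 == i2) // true: the pod informer is created once and shared
    }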

Initializing the pod informer

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\scheduler.go

  1. // NewInformerFactory creates a SharedInformerFactory and initializes a scheduler specific
  2. // in-place podInformer.
  3. func NewInformerFactory(cs clientset.Interface, resyncPeriod time.Duration) informers.SharedInformerFactory {
  4. informerFactory := informers.NewSharedInformerFactory(cs, resyncPeriod)
  5. informerFactory.InformerFor(&v1.Pod{}, newPodInformer)
  6. return informerFactory
  7. }

We can see that informerFactory.InformerFor is used to create the pod informer object.

The concrete InformerFor is sharedInformerFactory's InformerFor, at

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\informers\factory.go

  1. // InternalInformerFor returns the SharedIndexInformer for obj using an internal
  2. // client.
  3. func (f *sharedInformerFactory) InformerFor(obj runtime.Object, newFunc internalinterfaces.NewInformerFunc) cache.SharedIndexInformer {
  4. f.lock.Lock()
  5. defer f.lock.Unlock()
  6. informerType := reflect.TypeOf(obj)
  7. informer, exists := f.informers[informerType]
  8. if exists {
  9. return informer
  10. }
  11. resyncPeriod, exists := f.customResync[informerType]
  12. if !exists {
  13. resyncPeriod = f.defaultResync
  14. }
  15. informer = newFunc(f.client, resyncPeriod)
  16. f.informers[informerType] = informer
  17. return informer
  18. }

Reading InformerFor:

- Look the informer up in the informers map by the reflect type of obj.
- If it exists, return it; otherwise create one with the newFunc that was passed in and update the map.
- The newFunc here is newPodInformer, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\scheduler.go

  1. // newPodInformer creates a shared index informer that returns only non-terminal pods.
  2. func newPodInformer(cs clientset.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
  3. selector := fmt.Sprintf("status.phase!=%v,status.phase!=%v", v1.PodSucceeded, v1.PodFailed)
  4. tweakListOptions := func(options *metav1.ListOptions) {
  5. options.FieldSelector = selector
  6. }
  7. return coreinformers.NewFilteredPodInformer(cs, metav1.NamespaceAll, resyncPeriod, nil, tweakListOptions)
  8. }

The underlying pod informer constructor lives in client-go, at D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\informers\core\v1\pod.go

In it you can see the corresponding ListFunc and WatchFunc:

  1. // NewFilteredPodInformer constructs a new informer for Pod type.
  2. // Always prefer using an informer factory to get a shared informer instead of getting an independent
  3. // one. This reduces memory footprint and number of connections to the server.
  4. func NewFilteredPodInformer(client kubernetes.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
  5. return cache.NewSharedIndexInformer(
  6. &cache.ListWatch{
  7. ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
  8. if tweakListOptions != nil {
  9. tweakListOptions(&options)
  10. }
  11. return client.CoreV1().Pods(namespace).List(context.TODO(), options)
  12. },
  13. WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
  14. if tweakListOptions != nil {
  15. tweakListOptions(&options)
  16. }
  17. return client.CoreV1().Pods(namespace).Watch(context.TODO(), options)
  18. },
  19. },
  20. &corev1.Pod{},
  21. resyncPeriod,
  22. indexers,
  23. )
  24. }

Creating the Indexer

As mentioned above, NewSharedIndexInformer creates a new Indexer as its store, at D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\cache\store.go

  1. // NewIndexer returns an Indexer implemented simply with a map and a lock.
  2. func NewIndexer(keyFunc KeyFunc, indexers Indexers) Indexer {
  3. return &cache{
  4. cacheStorage: NewThreadSafeStore(indexers, Indices{}),
  5. keyFunc: keyFunc,
  6. }
  7. }

The underlying data structure is threadSafeMap, at D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\cache\thread_safe_store.go

  1. // threadSafeMap implements ThreadSafeStore
  2. type threadSafeMap struct {
  3. lock sync.RWMutex
  4. items map[string]interface{}
  5. // indexers maps a name to an IndexFunc
  6. indexers Indexers
  7. // indices maps a name to an Index
  8. indices Indices
  9. }

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\cache\index.go

  1. // Index maps the indexed value to a set of keys in the store that match on that value
  2. type Index map[string]sets.String
  3. // Indexers maps a name to an IndexFunc
  4. type Indexers map[string]IndexFunc
  5. // Indices maps a name to an Index
  6. type Indices map[string]Index

- The indices structure is effectively a three-level map whose keys are all strings.
- threadSafeMap.items stores the actual resource objects, while indices is the index used to speed up lookups; a small usage sketch follows.
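As a hedged usage sketch of Indexers/Indices (the index registration uses client-go's built-in namespace index; the pods are made up for illustration):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        // keyFunc builds "<namespace>/<name>" keys; one extra index groups objects by namespace.
        indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
            cache.NamespaceIndex: cache.MetaNamespaceIndexFunc,
        })

        // items: key -> object; indices: indexName -> indexedValue -> set of keys.
        _ = indexer.Add(&corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "nginx-1"}})
        _ = indexer.Add(&corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "nginx-2"}})
        _ = indexer.Add(&corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "coredns"}})

        // Index lookup: every object whose namespace index value is "default".
        objs, err := indexer.ByIndex(cache.NamespaceIndex, "default")
        if err != nil {
            panic(err)
        }
        for _, o := range objs {
            fmt.Println(o.(*corev1.Pod).Name) // nginx-1, nginx-2
        }
    }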

The keyFunc passed to NewIndexer

- It is MetaNamespaceKeyFunc, i.e. the object's namespace/name.

D:\Workspace\Go\src\k8s.io\kubernetes\staging\src\k8s.io\client-go\tools\cache\store.go

  1. // MetaNamespaceKeyFunc is a convenient default KeyFunc which knows how to make
  2. // keys for API objects which implement meta.Interface.
  3. // The key uses the format <namespace>/<name> unless <namespace> is empty, then
  4. // it's just <name>.
  5. //
  6. // TODO: replace key-as-string with a key-as-struct so that this
  7. // packing/unpacking won't be necessary.
  8. func MetaNamespaceKeyFunc(obj interface{}) (string, error) {
  9. if key, ok := obj.(ExplicitKey); ok {
  10. return string(key), nil
  11. }
  12. meta, err := meta.Accessor(obj)
  13. if err != nil {
  14. return "", fmt.Errorf("object has no meta: %v", err)
  15. }
  16. if len(meta.GetNamespace()) > 0 {
  17. return meta.GetNamespace() + "/" + meta.GetName(), nil
  18. }
  19. return meta.GetName(), nil
  20. }
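A tiny hedged usage sketch of that key format (the pod object is made up for illustration):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "nginx-pod"}}
        key, err := cache.MetaNamespaceKeyFunc(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(key) // default/nginx-pod
    }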

Adding eventHandlers

- In the scheduler's New, addAllEventHandlers is called.
- These handlers are what the consumer does with the data: the informer keeps its store up to date, while a consumer such as the scheduler uses the objects to schedule pods.

Location: D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\scheduler.go

    addAllEventHandlers(sched, informerFactory, dynInformerFactory, unionedGVKs(clusterEventMap))

Callbacks for pods are added, with the corresponding AddFunc, UpdateFunc, and DeleteFunc:

  1. // scheduled pod cache
  2. informerFactory.Core().V1().Pods().Informer().AddEventHandler(
  3. cache.FilteringResourceEventHandler{
  4. FilterFunc: func(obj interface{}) bool {
  5. switch t := obj.(type) {
  6. case *v1.Pod:
  7. return assignedPod(t)
  8. case cache.DeletedFinalStateUnknown:
  9. if _, ok := t.Obj.(*v1.Pod); ok {
  10. // The carried object may be stale, so we don't use it to check if
  11. // it's assigned or not. Attempting to cleanup anyways.
  12. return true
  13. }
  14. utilruntime.HandleError(fmt.Errorf("unable to convert object %T to *v1.Pod in %T", obj, sched))
  15. return false
  16. default:
  17. utilruntime.HandleError(fmt.Errorf("unable to handle object in %T: %T", sched, obj))
  18. return false
  19. }
  20. },
  21. Handler: cache.ResourceEventHandlerFuncs{
  22. AddFunc: sched.addPodToCache,
  23. UpdateFunc: sched.updatePodInCache,
  24. DeleteFunc: sched.deletePodFromCache,
  25. },
  26. },
  27. )

Starting the informers

The entry point is in the scheduler's Run, at D:\Workspace\Go\src\k8s.io\kubernetes\cmd\kube-scheduler\app\server.go

  1. // Start all informers.
  2. cc.InformerFactory.Start(ctx.Done())
  3. // DynInformerFactory can be nil in tests.
  4. if cc.DynInformerFactory != nil {
  5. cc.DynInformerFactory.Start(ctx.Done())
  6. }
  7. // Wait for all caches to sync before scheduling.
  8. cc.InformerFactory.WaitForCacheSync(ctx.Done())
  9. // DynInformerFactory can be nil in tests.
  10. if cc.DynInformerFactory != nil {
  11. cc.DynInformerFactory.WaitForCacheSync(ctx.Done())
  12. }

The corresponding Start lives on sharedInformerFactory: it iterates over the informers map and starts each informer, checking the startedInformers map first so nothing is started twice.

  1. // Start initializes all requested informers.
  2. func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {
  3. f.lock.Lock()
  4. defer f.lock.Unlock()
  5. for informerType, informer := range f.informers {
  6. if !f.startedInformers[informerType] {
  7. go informer.Run(stopCh)
  8. f.startedInformers[informerType] = true
  9. }
  10. }
  11. }

Reading sharedIndexInformer.Run

It creates the DeltaFIFO queue:

  1. fifo := NewDeltaFIFOWithOptions(DeltaFIFOOptions{
  2. KnownObjects: s.indexer,
  3. EmitDeltaTypeReplaced: true,
  4. })

It creates the controller:

  1. func() {
  2. s.startedLock.Lock()
  3. defer s.startedLock.Unlock()
  4. s.controller = New(cfg)
  5. s.controller.(*controller).clock = s.clock
  6. s.started = true
  7. }()

The processor starts its listeners:

    wg.StartWithChannel(processorStopCh, s.processor.run)

which ends up calling processorListener's run, at D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\tools\cache\shared_informer.go

In processorListener.run the callbacks registered through the eventHandler are invoked:

  1. func (p *processorListener) run() {
  2. // this call blocks until the channel is closed. When a panic happens during the notification
  3. // we will catch it, **the offending item will be skipped!**, and after a short delay (one second)
  4. // the next notification will be attempted. This is usually better than the alternative of never
  5. // delivering again.
  6. stopCh := make(chan struct{})
  7. wait.Until(func() {
  8. for next := range p.nextCh {
  9. switch notification := next.(type) {
  10. case updateNotification:
  11. p.handler.OnUpdate(notification.oldObj, notification.newObj)
  12. case addNotification:
  13. p.handler.OnAdd(notification.newObj)
  14. case deleteNotification:
  15. p.handler.OnDelete(notification.oldObj)
  16. default:
  17. utilruntime.HandleError(fmt.Errorf("unrecognized notification: %T", next))
  18. }
  19. }
  20. // the only way to get here is if the p.nextCh is empty and closed
  21. close(stopCh)
  22. }, 1*time.Second, stopCh)
  23. }

Running the controller

D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\tools\cache\controller.go

  1. // Run begins processing items, and will continue until a value is sent down stopCh or it is closed.
  2. // It's an error to call Run more than once.
  3. // Run blocks; call via go.
  4. func (c *controller) Run(stopCh <-chan struct{}) {
  5. defer utilruntime.HandleCrash()
  6. go func() {
  7. <-stopCh
  8. c.config.Queue.Close()
  9. }()
  10. r := NewReflector(
  11. c.config.ListerWatcher,
  12. c.config.ObjectType,
  13. c.config.Queue,
  14. c.config.FullResyncPeriod,
  15. )
  16. r.ShouldResync = c.config.ShouldResync
  17. r.WatchListPageSize = c.config.WatchListPageSize
  18. r.clock = c.clock
  19. if c.config.WatchErrorHandler != nil {
  20. r.watchErrorHandler = c.config.WatchErrorHandler
  21. }
  22. c.reflectorMutex.Lock()
  23. c.reflector = r
  24. c.reflectorMutex.Unlock()
  25. var wg wait.Group
  26. wg.StartWithChannel(stopCh, r.Run)
  27. wait.Until(c.processLoop, time.Second, stopCh)
  28. wg.Wait()
  29. }

- In it a new Reflector is created; r.Run is the producer that puts data into the Queue.

Reading the producer, reflector.Run

- ListAndWatch calls watchHandler.
- watchHandler, as the name suggests, handles the events delivered by Watch; in it you can see how add, update, and delete are processed, at

D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\tools\cache\reflector.go

  1. switch event.Type {
  2. case watch.Added:
  3. err := r.store.Add(event.Object)
  4. if err != nil {
  5. utilruntime.HandleError(fmt.Errorf("%s: unable to add watch event object (%#v) to store: %v", r.name, event.Object, err))
  6. }
  7. case watch.Modified:
  8. err := r.store.Update(event.Object)
  9. if err != nil {
  10. utilruntime.HandleError(fmt.Errorf("%s: unable to update watch event object (%#v) to store: %v", r.name, event.Object, err))
  11. }
  12. case watch.Deleted:
  13. // TODO: Will any consumers need access to the "last known
  14. // state", which is passed in event.Object? If so, may need
  15. // to change this.
  16. err := r.store.Delete(event.Object)
  17. if err != nil {
  18. utilruntime.HandleError(fmt.Errorf("%s: unable to delete watch event object (%#v) from store: %v", r.name, event.Object, err))
  19. }

Analyzing the consumer

 D:\Workspace\Go\src\k8s.io\kubernetes\vendor\k8s.io\client-go\tools\cache\shared_informer.go

  1. func (s *sharedIndexInformer) HandleDeltas(obj interface{}) error {
  2. s.blockDeltas.Lock()
  3. defer s.blockDeltas.Unlock()
  4. if deltas, ok := obj.(Deltas); ok {
  5. return processDeltas(s, s.indexer, s.transform, deltas)
  6. }
  7. return errors.New("object given as Process argument is not Deltas")
  8. }

The process is:

- Check whether the object already exists in the indexer; update it if it does, add it if it does not.
- At the same time call the distribute function to fan the notification out to the listeners; a simplified sketch follows.
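Here is a hedged, simplified sketch of that consumption step; it is my paraphrase of the shape of processDeltas, not the verbatim k8s source, and the handler argument stands in for the distribute step:

    package sketch

    import (
        "k8s.io/client-go/tools/cache"
    )

    // handleDeltas keeps the Indexer up to date and then notifies the listeners.
    func handleDeltas(indexer cache.Indexer, handler cache.ResourceEventHandler, deltas cache.Deltas) error {
        for _, d := range deltas {
            obj := d.Object
            switch d.Type {
            case cache.Sync, cache.Replaced, cache.Added, cache.Updated:
                if old, exists, err := indexer.Get(obj); err == nil && exists {
                    if err := indexer.Update(obj); err != nil {
                        return err
                    }
                    handler.OnUpdate(old, obj) // distributed as an update notification
                } else {
                    if err := indexer.Add(obj); err != nil {
                        return err
                    }
                    handler.OnAdd(obj) // distributed as an add notification
                }
            case cache.Deleted:
                if err := indexer.Delete(obj); err != nil {
                    return err
                }
                handler.OnDelete(obj) // distributed as a delete notification
            }
        }
        return nil
    }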

Key points of this section:

- The framework of the informer mechanism: Reflector → DeltaFIFO → Informer/Indexer, as walked through above.

6.6 How kube-scheduler uses the informer mechanism to schedule pods

- Pod scheduling works asynchronously through a queue, the SchedulingQueue.
- When a relevant pod event is observed, the pod is put into the queue.
- A consumer takes pods off the queue and schedules them.

Scheduling a single pod has three main steps:

- In the Predict and Priority phases, run the corresponding algorithm plugins to pick the best node.
- Assume the pod is scheduled onto that node and save this into the cache.
- Validate with extenders and plugins; if that passes, bind.

Revisiting the Scheduler struct

Location: D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\scheduler.go

You can see the two fields directly related to pod scheduling:

    // Scheduler watches for new unscheduled pods. It attempts to find
    // nodes that they fit on and writes bindings back to the api server.
    type Scheduler struct {
        // NextPod returns the next pod to schedule.
        NextPod func() *framework.QueuedPodInfo

        // SchedulingQueue holds pods to be scheduled; this is the queue we will look at.
        SchedulingQueue internalqueue.SchedulingQueue
    }

Initializing the SchedulingQueue

The podQueue is created in the create function, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\scheduler.go

  1. podQueue := internalqueue.NewSchedulingQueue(
  2. profiles[options.profiles[0].SchedulerName].QueueSortFunc(),
  3. informerFactory,
  4. internalqueue.WithPodInitialBackoffDuration(time.Duration(options.podInitialBackoffSeconds)*time.Second),
  5. internalqueue.WithPodMaxBackoffDuration(time.Duration(options.podMaxBackoffSeconds)*time.Second),
  6. internalqueue.WithPodNominator(nominator),
  7. internalqueue.WithClusterEventMap(clusterEventMap),
  8. internalqueue.WithPodMaxInUnschedulablePodsDuration(options.podMaxInUnschedulablePodsDuration),
  9. )

As you can see, this is a queue with priorities:

  1. // NewSchedulingQueue initializes a priority queue as a new scheduling queue.
  2. func NewSchedulingQueue(
  3. lessFn framework.LessFunc,
  4. informerFactory informers.SharedInformerFactory,
  5. opts ...Option) SchedulingQueue {
  6. return NewPriorityQueue(lessFn, informerFactory, opts...)
  7. }

Why priorities?

Because some pods are more important and need to be scheduled first.

- See the scheduling priority documentation (pod priority and preemption).
- You can look up the cluster's default priority classes.
- As an example of pod scheduling priority, the Prometheus statefulset discussed earlier configures one.

Initializing NextPod

- You can see it simply pops one item off the podQueue:

  1. // MakeNextPodFunc returns a function to retrieve the next pod from a given
  2. // scheduling queue
  3. func MakeNextPodFunc(queue SchedulingQueue) func() *framework.QueuedPodInfo {
  4. return func() *framework.QueuedPodInfo {
  5. podInfo, err := queue.Pop()
  6. if err == nil {
  7. klog.V(4).InfoS("About to try and schedule pod", "pod", klog.KObj(podInfo.Pod))
  8. for plugin := range podInfo.UnschedulablePlugins {
  9. metrics.UnschedulableReason(plugin, podInfo.Pod.Spec.SchedulerName).Dec()
  10. }
  11. return podInfo
  12. }
  13. klog.ErrorS(err, "Error while retrieving next pod from scheduling queue")
  14. return nil
  15. }
  16. }

When is pod information pushed into the SchedulingQueue?

- Recall that the scheduler's New registers callbacks:

    addAllEventHandlers(sched, informerFactory, dynInformerFactory, unionedGVKs(clusterEventMap))

The pod event handlers and their filter, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\eventhandlers.go

  1. // scheduled pod cache
  2. informerFactory.Core().V1().Pods().Informer().AddEventHandler(
  3. cache.FilteringResourceEventHandler{
  4. FilterFunc: func(obj interface{}) bool {
  5. switch t := obj.(type) {
  6. case *v1.Pod:
  7. return assignedPod(t)
  8. case cache.DeletedFinalStateUnknown:
  9. if _, ok := t.Obj.(*v1.Pod); ok {
  10. // The carried object may be stale, so we don't use it to check if
  11. // it's assigned or not. Attempting to cleanup anyways.
  12. return true
  13. }
  14. utilruntime.HandleError(fmt.Errorf("unable to convert object %T to *v1.Pod in %T", obj, sched))
  15. return false
  16. default:
  17. utilruntime.HandleError(fmt.Errorf("unable to handle object in %T: %T", sched, obj))
  18. return false
  19. }
  20. },
  21. Handler: cache.ResourceEventHandlerFuncs{
  22. AddFunc: sched.addPodToCache,
  23. UpdateFunc: sched.updatePodInCache,
  24. DeleteFunc: sched.deletePodFromCache,
  25. },
  26. },
  27. )

FilterFunc is a filtering function; assignedPod means the pod already carries node information, i.e. it has been scheduled onto a node:

  1. // assignedPod selects pods that are assigned (scheduled and running).
  2. func assignedPod(pod *v1.Pod) bool {
  3. return len(pod.Spec.NodeName) != 0
  4. }

The add action triggers sched.addPodToCache; for example, the nginx-pod we created earlier goes through here.

In addPodToCache below you can see that SchedulingQueue.AssignedPodAdded is called to update the queue with this pod:

  1. func (sched *Scheduler) addPodToCache(obj interface{}) {
  2. pod, ok := obj.(*v1.Pod)
  3. if !ok {
  4. klog.ErrorS(nil, "Cannot convert to *v1.Pod", "obj", obj)
  5. return
  6. }
  7. klog.V(3).InfoS("Add event for scheduled pod", "pod", klog.KObj(pod))
  8. if err := sched.Cache.AddPod(pod); err != nil {
  9. klog.ErrorS(err, "Scheduler cache AddPod failed", "pod", klog.KObj(pod))
  10. }
  11. sched.SchedulingQueue.AssignedPodAdded(pod)
  12. }

At this point we have seen how a created pod enters and leaves the queue.

Performing the scheduling

- By tracing where NextPod is called, we end up in scheduleOne:

  1. // scheduleOne does the entire scheduling workflow for a single pod. It is serialized on the scheduling algorithm's host fitting.
  2. func (sched *Scheduler) scheduleOne(ctx context.Context) {
  3. podInfo := sched.NextPod()
  4. }

Tracing further up, we can see that when the scheduler starts and wins leader election, the OnStartedLeading callback runs sched.Run, which drives the scheduling loop:

  1. // If leader election is enabled, runCommand via LeaderElector until done and exit.
  2. if cc.LeaderElection != nil {
  3. cc.LeaderElection.Callbacks = leaderelection.LeaderCallbacks{
  4. OnStartedLeading: func(ctx context.Context) {
  5. close(waitingForLeader)
  6. sched.Run(ctx)
  7. },
  8. OnStoppedLeading: func() {
  9. select {
  10. case <-ctx.Done():
  11. // We were asked to terminate. Exit 0.
  12. klog.InfoS("Requested to terminate, exiting")
  13. os.Exit(0)
  14. default:
  15. // We lost the lock.
  16. klog.ErrorS(nil, "Leaderelection lost")
  17. klog.FlushAndExit(klog.ExitFlushTimeout, 1)
  18. }
  19. },
  20. }
  21. leaderElector, err := leaderelection.NewLeaderElector(*cc.LeaderElection)
  22. if err != nil {
  23. return fmt.Errorf("couldn't create leader elector: %v", err)
  24. }
  25. leaderElector.Run(ctx)
  26. return fmt.Errorf("lost lease")
  27. }

Analyzing scheduleOne

podInfo is the pod object popped from the queue; first its validity is checked:

    podInfo := sched.NextPod()
    // pod could be nil when schedulerQueue is closed
    if podInfo == nil || podInfo.Pod == nil {
        return
    }
    pod := podInfo.Pod

Then the profile is looked up from the pod's pod.Spec.SchedulerName:

    fwk, err := sched.frameworkForPod(pod)
    if err != nil {
        // This shouldn't happen, because we only accept for scheduling the pods
        // which specify a scheduler name that matches one of the profiles.
        klog.ErrorS(err, "Error occurred")
        return
    }

The scheduling algorithm is run to get a result:

    scheduleResult, err := sched.SchedulePod(schedulingCycleCtx, fwk, state, pod)

Then assume is called to validate the algorithm's result:

  1. // Tell the cache to assume that a pod now is running on a given node, even though it hasn't been bound yet.
  2. // This allows us to keep scheduling without waiting on binding to occur.
  3. assumedPodInfo := podInfo.DeepCopy()
  4. assumedPod := assumedPodInfo.Pod
  5. // assume modifies `assumedPod` by setting NodeName=scheduleResult.SuggestedHost
  6. err = sched.assume(assumedPod, scheduleResult.SuggestedHost)
  7. if err != nil {
  8. metrics.PodScheduleError(fwk.ProfileName(), metrics.SinceInSeconds(start))
  9. // This is most probably result of a BUG in retrying logic.
  10. // We report an error here so that pod scheduling can be retried.
  11. // This relies on the fact that Error will check if the pod has been bound
  12. // to a node and if so will not add it back to the unscheduled pods queue
  13. // (otherwise this would cause an infinite loop).
  14. sched.handleSchedulingFailure(fwk, assumedPodInfo, err, SchedulerError, clearNominatedNode)
  15. return
  16. }

The go func that follows performs the binding asynchronously:

    // bind the pod to its host asynchronously (we can do this b/c of the assumption step above).
    err := sched.bind(bindingCycleCtx, fwk, assumedPod, scheduleResult.SuggestedHost, state)

After a successful bind, a few metrics are recorded:

    metrics.PodScheduled(fwk.ProfileName(), metrics.SinceInSeconds(start))
    metrics.PodSchedulingAttempts.Observe(float64(podInfo.Attempts))
    metrics.PodSchedulingDuration.WithLabelValues(getAttemptsLabel(podInfo)).Observe(metrics.SinceInSeconds(podInfo.InitialAttemptTimestamp))

For example, the average scheduling duration is:

    scheduler_pod_scheduling_duration_seconds_sum / scheduler_pod_scheduling_duration_seconds_count

Analyzing schedulePod

D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\schedule_one.go

It first snapshots the current cluster information; if the snapshot contains zero nodes, it returns that no nodes are available:

  1. // schedulePod tries to schedule the given pod to one of the nodes in the node list.
  2. // If it succeeds, it will return the name of the node.
  3. // If it fails, it will return a FitError with reasons.
  4. func (sched *Scheduler) schedulePod(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) (result ScheduleResult, err error) {
  5. trace := utiltrace.New("Scheduling", utiltrace.Field{Key: "namespace", Value: pod.Namespace}, utiltrace.Field{Key: "name", Value: pod.Name})
  6. defer trace.LogIfLong(100 * time.Millisecond)
  7. if err := sched.Cache.UpdateSnapshot(sched.nodeInfoSnapshot); err != nil {
  8. return result, err
  9. }
  10. trace.Step("Snapshotting scheduler cache and node infos done")
  11. if sched.nodeInfoSnapshot.NumNodes() == 0 {
  12. return result, ErrNoNodesAvailable
  13. }

Predict phase: find all nodes that satisfy the scheduling conditions (feasibleNodes); nodes that do not are filtered out:

  1. feasibleNodes, diagnosis, err := sched.findNodesThatFitPod(ctx, fwk, state, pod)
  2. if err != nil {
  3. return result, err
  4. }
  5. trace.Step("Computing predicates done")
  6. if len(feasibleNodes) == 0 {
  7. return result, &framework.FitError{
  8. Pod: pod,
  9. NumAllNodes: sched.nodeInfoSnapshot.NumNodes(),
  10. Diagnosis: diagnosis,
  11. }
  12. }

If the Predict phase finds only one node, just use it:

  1. // When only one node after predicate, just use it.
  2. if len(feasibleNodes) == 1 {
  3. return ScheduleResult{
  4. SuggestedHost: feasibleNodes[0].Name,
  5. EvaluatedNodes: 1 + len(diagnosis.NodeToStatusMap),
  6. FeasibleNodes: 1,
  7. }, nil
  8. }

Priority phase: score the feasible nodes and select the highest-scoring, i.e. best, node:

  1. priorityList, err := prioritizeNodes(ctx, sched.Extenders, fwk, state, pod, feasibleNodes)
  2. if err != nil {
  3. return result, err
  4. }
  5. host, err := selectHost(priorityList)
  6. trace.Step("Prioritizing done")
  7. return ScheduleResult{
  8. SuggestedHost: host,
  9. EvaluatedNodes: len(feasibleNodes) + len(diagnosis.NodeToStatusMap),
  10. FeasibleNodes: len(feasibleNodes),
  11. }, err

Predict and Priority

- Predict and Priority are the two key steps for choosing the node to schedule onto; under the hood they call the various algorithm plugins.
- The NodeName matching we mentioned earlier belongs to the Predict phase.

Reading the assume validation

- The chosen host is written into the pod spec's nodeName, assuming the pod is assigned to that node.
- Then AssumePod on the scheduler cache is called as a check; if it returns an error, validation fails.

  1. // assume signals to the cache that a pod is already in the cache, so that binding can be asynchronous.
  2. // assume modifies `assumed`.
  3. func (sched *Scheduler) assume(assumed *v1.Pod, host string) error {
  4. // Optimistically assume that the binding will succeed and send it to apiserver
  5. // in the background.
  6. // If the binding fails, scheduler will release resources allocated to assumed pod
  7. // immediately.
  8. assumed.Spec.NodeName = host
  9. if err := sched.Cache.AssumePod(assumed); err != nil {
  10. klog.ErrorS(err, "Scheduler cache AssumePod failed")
  11. return err
  12. }
  13. // if "assumed" is a nominated pod, we should remove it from internal cache
  14. if sched.SchedulingQueue != nil {
  15. sched.SchedulingQueue.DeleteNominatedPodIfExists(assumed)
  16. }
  17. return nil
  18. }

Reading AssumePod

- It looks the pod up in the cache by the pod's key (UID); normally it should not be found yet:

  1. func (cache *cacheImpl) AssumePod(pod *v1.Pod) error {
  2. key, err := framework.GetPodKey(pod)
  3. if err != nil {
  4. return err
  5. }
  6. cache.mu.Lock()
  7. defer cache.mu.Unlock()
  8. if _, ok := cache.podStates[key]; ok {
  9. return fmt.Errorf("pod %v is in the cache, so can't be assumed", key)
  10. }
  11. return cache.addPod(pod, true)
  12. }

cache.addPod(pod) then fills the pod's information into the node:

  1. // Assumes that lock is already acquired.
  2. func (cache *cacheImpl) addPod(pod *v1.Pod, assumePod bool) error {
  3. key, err := framework.GetPodKey(pod)
  4. if err != nil {
  5. return err
  6. }
  7. n, ok := cache.nodes[pod.Spec.NodeName]
  8. if !ok {
  9. n = newNodeInfoListItem(framework.NewNodeInfo())
  10. cache.nodes[pod.Spec.NodeName] = n
  11. }
  12. n.info.AddPod(pod)
  13. cache.moveNodeInfoToHead(pod.Spec.NodeName)
  14. ps := &podState{
  15. pod: pod,
  16. }
  17. cache.podStates[key] = ps
  18. if assumePod {
  19. cache.assumedPods.Insert(key)
  20. }
  21. return nil
  22. }

AddPodInfo updates the node's information, accounting for the newly added pod:

  1. // AddPodInfo adds pod information to this NodeInfo.
  2. // Consider using this instead of AddPod if a PodInfo is already computed.
  3. func (n *NodeInfo) AddPodInfo(podInfo *PodInfo) {
  4. res, non0CPU, non0Mem := calculateResource(podInfo.Pod)
  5. n.Requested.MilliCPU += res.MilliCPU
  6. n.Requested.Memory += res.Memory
  7. n.Requested.EphemeralStorage += res.EphemeralStorage
  8. if n.Requested.ScalarResources == nil && len(res.ScalarResources) > 0 {
  9. n.Requested.ScalarResources = map[v1.ResourceName]int64{}
  10. }
  11. for rName, rQuant := range res.ScalarResources {
  12. n.Requested.ScalarResources[rName] += rQuant
  13. }
  14. n.NonZeroRequested.MilliCPU += non0CPU
  15. n.NonZeroRequested.Memory += non0Mem
  16. n.Pods = append(n.Pods, podInfo)
  17. if podWithAffinity(podInfo.Pod) {
  18. n.PodsWithAffinity = append(n.PodsWithAffinity, podInfo)
  19. }
  20. if podWithRequiredAntiAffinity(podInfo.Pod) {
  21. n.PodsWithRequiredAntiAffinity = append(n.PodsWithRequiredAntiAffinity, podInfo)
  22. }
  23. // Consume ports when pods added.
  24. n.updateUsedPorts(podInfo.Pod, true)
  25. n.updatePVCRefCounts(podInfo.Pod, true)
  26. n.Generation = nextGeneration()
  27. }

Reading the bind operation

The pod validated by assume is bound to the node:

  1. // bind binds a pod to a given node defined in a binding object.
  2. // The precedence for binding is: (1) extenders and (2) framework plugins.
  3. // We expect this to run asynchronously, so we handle binding metrics internally.
  4. func (sched *Scheduler) bind(ctx context.Context, fwk framework.Framework, assumed *v1.Pod, targetNode string, state *framework.CycleState) (err error) {
  5. defer func() {
  6. sched.finishBinding(fwk, assumed, targetNode, err)
  7. }()
  8. bound, err := sched.extendersBinding(assumed, targetNode)
  9. if bound {
  10. return err
  11. }
  12. bindStatus := fwk.RunBindPlugins(ctx, state, assumed, targetNode)
  13. if bindStatus.IsSuccess() {
  14. return nil
  15. }
  16. if bindStatus.Code() == framework.Error {
  17. return bindStatus.AsError()
  18. }
  19. return fmt.Errorf("bind status: %s, %v", bindStatus.Code().String(), bindStatus.Message())
  20. }

The underlying extender request:

  1. // Bind delegates the action of binding a pod to a node to the extender.
  2. func (h *HTTPExtender) Bind(binding *v1.Binding) error {
  3. var result extenderv1.ExtenderBindingResult
  4. if !h.IsBinder() {
  5. // This shouldn't happen as this extender wouldn't have become a Binder.
  6. return fmt.Errorf("unexpected empty bindVerb in extender")
  7. }
  8. req := &extenderv1.ExtenderBindingArgs{
  9. PodName: binding.Name,
  10. PodNamespace: binding.Namespace,
  11. PodUID: binding.UID,
  12. Node: binding.Target.Name,
  13. }
  14. if err := h.send(h.bindVerb, req, &result); err != nil {
  15. return err
  16. }
  17. if result.Error != "" {
  18. return fmt.Errorf(result.Error)
  19. }
  20. return nil
  21. }

Reading the simplest filter: node name

First open the scheduler's plugin directory, D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\framework\plugins

You can see a set of filter-like directories; among them find nodename, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\framework\plugins\nodename\node_name.go

  1. // Filter invoked at the filter extension point.
  2. func (pl *NodeName) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
  3. if nodeInfo.Node() == nil {
  4. return framework.NewStatus(framework.Error, "node not found")
  5. }
  6. if !Fits(pod, nodeInfo) {
  7. return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReason)
  8. }
  9. return nil
  10. }
  11. // Fits actually checks if the pod fits the node.
  12. func Fits(pod *v1.Pod, nodeInfo *framework.NodeInfo) bool {
  13. return len(pod.Spec.NodeName) == 0 || pod.Spec.NodeName == nodeInfo.Node().Name
  14. }
  15. // New initializes a new plugin and returns it.
  16. func New(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
  17. return &NodeName{}, nil
  18. }

Here the New function looks like the registration hook.

Tracing upward, the registration happens in the registry, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\framework\plugins\registry.go

  1. // NewInTreeRegistry builds the registry with all the in-tree plugins.
  2. // A scheduler that runs out of tree plugins can register additional plugins
  3. // through the WithFrameworkOutOfTreeRegistry option.
  4. func NewInTreeRegistry() runtime.Registry {
  5. fts := plfeature.Features{
  6. EnablePodDisruptionBudget: feature.DefaultFeatureGate.Enabled(features.PodDisruptionBudget),
  7. EnableReadWriteOncePod: feature.DefaultFeatureGate.Enabled(features.ReadWriteOncePod),
  8. EnableVolumeCapacityPriority: feature.DefaultFeatureGate.Enabled(features.VolumeCapacityPriority),
  9. EnableMinDomainsInPodTopologySpread: feature.DefaultFeatureGate.Enabled(features.MinDomainsInPodTopologySpread),
  10. }
  11. return runtime.Registry{
  12. selectorspread.Name: selectorspread.New,
  13. imagelocality.Name: imagelocality.New,
  14. tainttoleration.Name: tainttoleration.New,
  15. nodename.Name: nodename.New,

Tracing further up, NewInTreeRegistry is called from Scheduler's New:

    registry := frameworkplugins.NewInTreeRegistry()

Back to NodeName's Filter function

- Filter calls Fits to check whether the pod's spec.nodeName matches the target node.

Tracing how Filter is invoked, we find RunFilterPlugins iterating over the filter plugins, at D:\Workspace\Go\src\k8s.io\kubernetes\pkg\scheduler\framework\runtime\framework.go

  1. // RunFilterPlugins runs the set of configured Filter plugins for pod on
  2. // the given node. If any of these plugins doesn't return "Success", the
  3. // given node is not suitable for running pod.
  4. // Meanwhile, the failure message and status are set for the given node.
  5. func (f *frameworkImpl) RunFilterPlugins(
  6. ctx context.Context,
  7. state *framework.CycleState,
  8. pod *v1.Pod,
  9. nodeInfo *framework.NodeInfo,
  10. ) framework.PluginToStatus {
  11. statuses := make(framework.PluginToStatus)
  12. for _, pl := range f.filterPlugins {
  13. pluginStatus := f.runFilterPlugin(ctx, pl, state, pod, nodeInfo)
  14. if !pluginStatus.IsSuccess() {
  15. if !pluginStatus.IsUnschedulable() {
  16. // Filter plugins are not supposed to return any status other than
  17. // Success or Unschedulable.
  18. errStatus := framework.AsStatus(fmt.Errorf("running %q filter plugin: %w", pl.Name(), pluginStatus.AsError())).WithFailedPlugin(pl.Name())
  19. return map[string]*framework.Status{pl.Name(): errStatus}
  20. }
  21. pluginStatus.SetFailedPlugin(pl.Name())
  22. statuses[pl.Name()] = pluginStatus
  23. }
  24. }
  25. return statuses
  26. }
  27. func (f *frameworkImpl) runFilterPlugin(ctx context.Context, pl framework.FilterPlugin, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
  28. if !state.ShouldRecordPluginMetrics() {
  29. return pl.Filter(ctx, state, pod, nodeInfo)
  30. }
  31. startTime := time.Now()
  32. status := pl.Filter(ctx, state, pod, nodeInfo)
  33. f.metricsRecorder.observePluginDurationAsync(Filter, pl.Name(), status, metrics.SinceInSeconds(startTime))
  34. return status
  35. }

Finally we trace up to findNodesThatPassFilters, which calls RunFilterPluginsWithNominatedPods:

    feasibleNodes, err := sched.findNodesThatPassFilters(ctx, fwk, state, pod, diagnosis, nodes)

    // findNodesThatPassFilters finds the nodes that fit the filter plugins.
    func (sched *Scheduler) findNodesThatPassFilters(
        ctx context.Context,
        fwk framework.Framework,
        state *framework.CycleState,
        pod *v1.Pod,
        diagnosis framework.Diagnosis,
        nodes []*framework.NodeInfo) ([]*v1.Node, error) {
        // ...
        status := fwk.RunFilterPluginsWithNominatedPods(ctx, state, pod, nodeInfo)
        // ...
    }
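To tie the plugin registry back to the Filter interface we just walked through, here is a hedged sketch of registering an out-of-tree Filter plugin through app.WithPlugin. The plugin name DiskTypeFilter and its disktype=ssd rule are my own illustration mirroring the shape of the nodename plugin; this is not part of the k8s source, it has to be built against the kubernetes repo (and its staging modules), and the plugin still needs to be enabled in a KubeSchedulerConfiguration profile:

    package main

    import (
        "context"
        "os"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/component-base/cli"
        "k8s.io/kubernetes/cmd/kube-scheduler/app"
        "k8s.io/kubernetes/pkg/scheduler/framework"
    )

    // DiskTypeFilter is a toy Filter plugin that only admits nodes labeled disktype=ssd.
    type DiskTypeFilter struct{}

    var _ framework.FilterPlugin = &DiskTypeFilter{}

    func (pl *DiskTypeFilter) Name() string { return "DiskTypeFilter" }

    func (pl *DiskTypeFilter) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
        if nodeInfo.Node() == nil {
            return framework.NewStatus(framework.Error, "node not found")
        }
        if nodeInfo.Node().Labels["disktype"] != "ssd" {
            return framework.NewStatus(framework.Unschedulable, "node is not labeled disktype=ssd")
        }
        return nil
    }

    // New has the same shape as nodename.New, so the framework can construct the plugin.
    func New(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
        return &DiskTypeFilter{}, nil
    }

    func main() {
        // WithPlugin adds the plugin to the out-of-tree registry that is merged with NewInTreeRegistry.
        command := app.NewSchedulerCommand(app.WithPlugin("DiskTypeFilter", New))
        code := cli.Run(command)
        os.Exit(code)
    }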

Key points of this section:

- The scheduling flow driven by informer callbacks: pods enter the SchedulingQueue, scheduleOne pops them, Predict/Priority pick a node, assume records the choice in the cache, and bind writes the binding back to the apiserver.
