
Kubernetes Jobs - A Guide to Running Processing Tasks


A Job creates one or more Pods and ensures that a specified number of them run to completion. As Pods complete successfully, the Job tracks the successful completions. When the specified number is reached, the Job itself is complete. Deleting a Job will clean up the Pods it created.

A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted.

A Job can also be used to run multiple Pods in parallel.

Running an example Job

Here is an example Job config. It computes π to 2000 places and prints it out. It takes around 10 seconds to complete.

controllers/job.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never
      backoffLimit: 4

Run this file with the following command:

    $ kubectl create -f https://k8s.io/examples/controllers/job.yaml
    job "pi" created

Check on the status of the Job and the output of the command:

    $ kubectl describe jobs/pi
    Name:           pi
    Namespace:      default
    Selector:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
    Labels:         controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
                    job-name=pi
    Annotations:    <none>
    Parallelism:    1
    Completions:    1
    Start Time:     Tue, 07 Jun 2016 10:56:16 +0200
    Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
    Pod Template:
      Labels:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
                    job-name=pi
      Containers:
       pi:
        Image:      perl
        Port:
        Command:
          perl
          -Mbignum=bpi
          -wle
          print bpi(2000)
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Events:
      FirstSeen  LastSeen  Count  From              SubobjectPath  Type    Reason            Message
      ---------  --------  -----  ----              -------------  ------  ------            -------
      1m         1m        1      {job-controller }                Normal  SuccessfulCreate  Created pod: pi-dtn4q

To view the completed Pods of a Job, use kubectl get pods.

To list all the Pods that belong to the Job, use a command like this:

    $ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})
    $ echo $pods
    pi-aiw0a

Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression that extracts just the name of each Pod.

View the standard output of one of the Pods:

    $ kubectl logs $pods
    3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901

Writing a Job Spec

As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.

A Job also needs a .spec section.

Pod Template

The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly the same schema as a pod, except that it is nested and does not have an apiVersion or kind.

In addition to the required fields of a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy.

Only a RestartPolicy equal to Never or OnFailure is allowed.

Pod Selector

The .spec.selector field is optional. In almost all cases you should not specify it. See the section on specifying your own pod selector.

Parallel Jobs

There are three main types of jobs:

  1. Non-parallel Jobs
    • Normally, only one Pod is started, unless that Pod fails.
    • The Job is complete as soon as its Pod terminates successfully.
  2. Parallel Jobs with a fixed completion count:
    • Specify a non-zero positive value for .spec.completions.
    • The Job is complete when there is one successful Pod for each value in the range 1 to .spec.completions.
    • Not implemented yet: each Pod would be passed a different index in the range 1 to .spec.completions.
  3. Parallel Jobs with a work queue:
    • Do not specify .spec.completions; only .spec.parallelism applies.
    • The Pods must coordinate among themselves or with an external service to determine what each should work on.
    • Each Pod is independently capable of determining whether or not all its peers are done, and thus whether the entire Job is done.
    • When any Pod terminates with success, no new Pods are created.
    • Once at least one Pod has terminated with success and all Pods are terminated, the Job completes successfully.
    • Once any Pod has exited with success, no other Pod should still be doing any work or writing any output; they should all be in the process of exiting.

Notes on setting these parameters:

  • For a Non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both default to 1.
  • For a Fixed Completion Count Job, set .spec.completions to the number of completions needed.
    • You can set .spec.parallelism, or leave it unset; it will default to 1.
  • For a Work Queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer.
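As a sketch of the fixed-completion-count case, the following manifest asks for five successful completions with at most two Pods running at once. The name process-items and the container command are illustrative assumptions, not from the text above:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-items        # illustrative name
spec:
  completions: 5             # the Job succeeds after 5 Pods complete successfully
  parallelism: 2             # at most 2 Pods run at the same time
  template:
    spec:
      containers:
      - name: worker
        image: perl          # reusing the image from the pi example above
        command: ["perl", "-e", "print 'done'"]
      restartPolicy: Never
```

The controller keeps at most two Pods running, replacing each successful Pod with a new one until five have succeeded.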

For more information about how to make use of the different types of job, see the job patterns section.

Controlling Parallelism

The requested parallelism (.spec.parallelism) can be set to any non-negative integer. If it is unspecified, it defaults to 1. If it is set to 0, the Job is effectively paused until it is increased.

Actual parallelism (the number of Pods running at any instant) may be more or less than the requested parallelism, for a variety of reasons:

  • For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of remaining completions. Higher values of .spec.parallelism are effectively ignored.
  • For work queue jobs, no new pods are started after any pod has succeeded – remaining pods are allowed to complete, however.
  • If the controller has not had time to react.
  • If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.), then there may be fewer pods than requested.
  • The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
  • When a Pod is gracefully shut down, it takes time to stop.

Handling Pod and Container Failures

A Container in a Pod may fail for a number of reasons, such as because the process in it exited with a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this happens, and the .spec.template.spec.restartPolicy = "OnFailure", then the Pod stays on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify .spec.template.spec.restartPolicy = "Never". See pods-states for more information on restartPolicy.

An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = "Never". When a Pod fails, then the Job controller starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous runs.

Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice.

If you do specify .spec.parallelism and .spec.completions both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
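One common way to tolerate both restarts and concurrent duplicate Pods is to make each work item idempotent: skip items that already have output, and write output atomically so a killed worker never leaves a partial file. A minimal shell sketch (the results/ directory and item names are illustrative, not from the text):

```shell
# Sketch: idempotent, restart-tolerant work items.
mkdir -p results

process_item() {
  item=$1
  out="results/$item.result"
  if [ -e "$out" ]; then
    return 0                      # already done by a previous run: skip
  fi
  tmp=$(mktemp "results/.tmp.XXXXXX")
  echo "processed $item" > "$tmp"
  mv "$tmp" "$out"                # atomic rename: readers never see partial output
}

process_item frame-001
process_item frame-001            # a duplicate or restarted run is a harmless no-op
```

The atomic-rename step matters because a Pod can be killed mid-write; the next run then sees no output file for the item and simply redoes it.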

Pod Backoff Failure Policy

There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s …) capped at six minutes. The back-off count is reset if no new failed Pods appear before the Job’s next status check.

Note: Due to a known issue #54870, when the .spec.template.spec.restartPolicy field is set to “OnFailure”, the back-off limit may be ineffective. As a short-term workaround, set the restart policy for the embedded template to “Never”.
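The back-off schedule described above can be sketched in a few lines of shell; the seven-retry count is just for illustration:

```shell
# Sketch of the exponential back-off delay between Pod recreations:
# it starts at 10s, doubles after each failure, and is capped at 360s.
delay=10
for attempt in 1 2 3 4 5 6 7; do
  echo "retry $attempt after ${delay}s"
  delay=$((delay * 2))
  if [ "$delay" -gt 360 ]; then delay=360; fi
done
# the sixth and seventh retries already hit the six-minute cap
```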

Job Termination and Cleanup

When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml). When you delete the job using kubectl, all the pods it created are deleted too.

By default, a Job will run uninterrupted unless a Pod fails, at which point the Job defers to the .spec.backoffLimit described above. Another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to a number of seconds.

The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, the Job and all of its Pods are terminated. The result is that the job has a status with reason: DeadlineExceeded.

Note that a Job’s .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached.

Example:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi-with-timeout
    spec:
      backoffLimit: 5
      activeDeadlineSeconds: 100
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never

Note that both the Job Spec and the Pod Template Spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.
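As a sketch of that distinction (the name and values are illustrative assumptions): the Job-level field bounds the lifetime of the whole Job, while the Pod-level field, which is part of the ordinary PodSpec, bounds each individual Pod:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: deadline-levels           # illustrative name
spec:
  activeDeadlineSeconds: 100      # Job level: terminates the Job and all of its Pods
  template:
    spec:
      activeDeadlineSeconds: 30   # Pod level: bounds each individual Pod
      containers:
      - name: worker
        image: perl
        command: ["perl", "-e", "sleep 10"]
      restartPolicy: Never
```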

Job Patterns

The Job object can be used to support reliable parallel execution of Pods. The Job object is not designed to support closely-communicating parallel processes, as commonly found in scientific computing. It does support parallel processing of a set of independent but related work items. These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a NoSQL database to scan, and so on.

In a complex system, there may be multiple different sets of work items. Here we are just considering one set of work items that the user wants to manage together — a batch job.

There are several different patterns for parallel computation, each with strengths and weaknesses. The tradeoffs are:

  • One Job object for each work item, vs. a single Job object for all work items. The latter is better for large numbers of work items. The former creates some overhead for the user and for the system to manage large numbers of Job objects.
  • Number of pods created equals number of work items, vs. each pod can process multiple work items. The former typically requires less modification to existing code and containers. The latter is better for large numbers of work items, for similar reasons to the previous bullet.
  • Several approaches use a work queue. This requires running a queue service, and modifications to the existing program or container to make it use the work queue. Other approaches are easier to adapt to an existing containerised application.

The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. The pattern names are also links to examples and more detailed description.

    Pattern                                 | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1?
    Job Template Expansion                  |                   |                             | ✓                   | ✓
    Queue with Pod Per Work Item            | ✓                 |                             | sometimes           | ✓
    Queue with Variable Pod Count           | ✓                 | ✓                           |                     | ✓
    Single Job with Static Work Assignment  | ✓                 |                             | ✓                   |

When you specify completions with .spec.completions, each Pod created by the Job controller has an identical spec. This means that all pods will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things.

This table shows the required settings for .spec.parallelism and .spec.completions for each of the patterns. Here, W is the number of work items.

    Pattern                                 | .spec.completions | .spec.parallelism
    Job Template Expansion                  | 1                 | should be 1
    Queue with Pod Per Work Item            | W                 | any
    Queue with Variable Pod Count           | 1                 | any
    Single Job with Static Work Assignment  | W                 | any
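The Job Template Expansion pattern (one Job per work item, app used unmodified) can be sketched as a manifest with a placeholder, expanded once per item. The file names, $ITEM placeholder, and the item list below are illustrative assumptions in the spirit of the pattern, not the exact upstream example:

```shell
# Sketch of the Job Template Expansion pattern: a Job manifest with a
# $ITEM placeholder, expanded into one manifest per work item.
cat > job-tmpl.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing $ITEM"]
      restartPolicy: Never
EOF

mkdir -p jobs
for item in apple banana cherry; do
  sed "s/\$ITEM/$item/g" job-tmpl.yaml > "jobs/job-$item.yaml"
done
# kubectl create -f ./jobs    # would then create one Job per work item
```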

Advanced Usage

Specifying your own pod selector

Normally, when you create a job object, you do not specify .spec.selector. The system defaulting logic adds this field when the job is created. It picks a selector value that will not overlap with any other jobs.

However, in some cases, you might need to override this automatically set selector. To do this, you can specify the .spec.selector of the job.

Be very careful when doing this. If you specify a label selector which is not unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated job may be deleted, or this job may count other pods as completing it, or one or both of the jobs may refuse to create pods or run to completion. If a non-unique selector is chosen, then other controllers (e.g. ReplicationController) and their pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying .spec.selector.

Here is an example of a case when you might want to use this feature.

Say job old is already running. You want existing pods to keep running, but you want the rest of the pods it creates to use a different pod template and for the job to have a new name. You cannot update the job because these fields are not updatable. Therefore, you delete job old but leave its pods running, using kubectl delete jobs/old --cascade=false. Before deleting it, you make a note of what selector it uses:

    kind: Job
    metadata:
      name: old
      ...
    spec:
      selector:
        matchLabels:
          job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
      ...

Then you create a new job with name new and you explicitly specify the same selector. Since the existing pods have label job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they are controlled by job new as well.

You need to specify manualSelector: true in the new job since you are not using the selector that the system normally generates for you automatically.

    kind: Job
    metadata:
      name: new
      ...
    spec:
      manualSelector: true
      selector:
        matchLabels:
          job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
      ...

The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.

Alternatives

Bare Pods

When the node that a pod is running on reboots or fails, the pod is terminated and will not be restarted. However, a Job will create new pods to replace terminated ones. For this reason, we recommend that you use a job rather than a bare pod, even if your application requires only a single pod.

Replication Controller

Jobs are complementary to Replication Controllers. A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job manages pods that are expected to terminate (e.g. batch jobs).

As discussed in Pod Lifecycle, Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never. (Note: If RestartPolicy is not set, the default value is Always.)

Single Job starts Controller Pod

Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort of custom controller for those pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes.

One example of this pattern would be a Job which starts a Pod which runs a script that in turn starts a Spark master controller (see spark example), runs a spark driver, and then cleans up.

An advantage of this approach is that the overall process gets the completion guarantee of a Job object, but complete control over what pods are created and how work is assigned to them.

Cron Jobs

Support for creating Jobs at specified times/dates (i.e. cron) is available in Kubernetes 1.4. More information is available in the cron job documents.
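As a sketch, a cron job that runs the pi example nightly might look like the following. The batch/v1beta1 API version roughly matches this era of Kubernetes (current releases use batch/v1), and the name and schedule are illustrative assumptions:

```yaml
apiVersion: batch/v1beta1        # batch/v1 in current Kubernetes releases
kind: CronJob
metadata:
  name: pi-nightly               # illustrative name
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never
```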

Reposted from: https://my.oschina.net/u/2306127/blog/1985416
