Core components
A Pod is the smallest resource unit in Kubernetes.
Any k8s resource can be defined by a YAML manifest file.
Main sections of a k8s YAML manifest:
apiVersion: v1   # API version
kind: Pod        # resource type
metadata:        # metadata / attributes
spec:            # detailed specification
On the master node:
1. Create a directory for pod manifests
mkdir -p k8s_yaml/pod && cd k8s_yaml/
2. Write the yaml
[root@k8s-master k8s_yaml]# cat k8s_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
Note:
vim /etc/kubernetes/apiserver
Remove ServiceAccount from the KUBE_ADMISSION_CONTROL list, then restart the apiserver:
systemctl restart kube-apiserver.service
3. Create the resource
[root@k8s-master pod]# kubectl create -f k8s_pod.yml
4. View the resource
[root@k8s-master pod]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 52s
Check which node the pod was scheduled to:
[root@k8s-master pod]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 0/1 ContainerCreating 0 1m <none> 10.0.0.13
5. Describe the resource
[root@k8s-master pod]# kubectl describe pod nginx
Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
See the fix below:
6. On the node:
Upload pod-infrastructure-latest.tar.gz and the nginx image to k8s-node2
[root@k8s-node-2 ~]# docker load -i pod-infrastructure-latest.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/pod-infrastructure:latest
[root@k8s-node-2 ~]# docker load -i docker_nginx1.13.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/nginx:1.13
=====================================================================
On the master node:
kubectl describe pod nginx   # check the described status again
kubectl get pod
======================================================================
7. Run on both node1 and node2:
7.1 Change the pod infrastructure image address
vim /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
7.2 Restart the service
systemctl restart kubelet.service
8. Verify on the master:
[root@k8s-master k8s_yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 3m 172.18.92.2 10.0.0.13
Why does creating one pod resource make k8s start two containers?
Business container: nginx
Infrastructure container: the "pod" (pause) container, k8s's smallest resource unit; it holds the pod's shared namespaces (you can see it with docker ps on the node)
and is what makes k8s's built-in higher-level features possible
===============================================================================================
Pod resource: at least two containers — the pod infrastructure container plus one or more business containers (this course runs at most 1 infra + 4 business)
Pod config file 2 (multi-container pod):
[root@k8s-master pod]# vim k8s_pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
  - name: busybox
    image: 10.0.0.11:5000/busybox:latest
    command: ["sleep","1000"]
[root@k8s-master pod]# kubectl create -f k8s_pod3.yml
pod "test" created
[root@k8s-master pod]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 54m
test 2/2 Running 0 30s
RC (ReplicationController): keeps the specified number of pods alive at all times; the RC is associated with its pods through a label selector.
Common commands for k8s resources:
kubectl create -f xxx.yaml                  create a resource
kubectl get pod|rc|node                     list resources
kubectl describe pod nginx                  show a resource's details (great for troubleshooting)
kubectl delete pod nginx  or  kubectl delete -f xxx.yaml   delete a resource by type + name (with -f, both are read from the yaml)
kubectl edit pod nginx                      edit a resource's live configuration
kubectl get pod -o wide --show-labels       show label info
kubectl get rc -o wide                      see which label selector the rc uses
[root@k8s-master k8s_yaml]# mkdir rc
[root@k8s-master k8s_yaml]# cd rc/
[root@k8s-master rc]# vim k8s_rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5            # 5 pods
  selector:
    app: myweb
  template:              # pod template
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
[root@k8s-master rc]# kubectl create -f k8s_rc.yml
replicationcontroller "nginx" created
[root@k8s-master rc]# kubectl get rc
NAME DESIRED CURRENT READY AGE
nginx 5 5 5 4s
[root@k8s-master rc]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 31m 172.18.7.2 10.0.0.13
nginx-9b36r 1/1 Running 0 59s 172.18.7.4 10.0.0.13
nginx-jt31n 1/1 Running 0 59s 172.18.81.3 10.0.0.12
nginx-lhzgt 1/1 Running 0 59s 172.18.7.5 10.0.0.13
nginx-v8mzm 1/1 Running 0 59s 172.18.81.4 10.0.0.12
nginx-vcn83 1/1 Running 0 59s 172.18.81.5 10.0.0.12
nginx2 1/1 Running 0 11m 172.18.81.2 10.0.0.12
test 2/2 Running 0 8m 172.18.7.3 10.0.0.13
Simulate a failure of node2:
[root@k8s-node-2 ~]# systemctl stop kubelet.service
[root@k8s-master rc]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 9h
10.0.0.13 NotReady 9h
At this point the master detects that node2 has failed. k8s keeps retrying node2 while evicting its failed pods to another node;
strictly speaking this is eviction, but the replacements are brand-new pods — keeping the replica count up is the RC's whole job.
[root@k8s-master rc]# kubectl delete node 10.0.0.13
node "10.0.0.13" deleted
After the master deletes node2, the pods quickly come back up on the other node:
[root@k8s-master rc]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-jt31n 1/1 Running 0 5m 172.18.81.3 10.0.0.12
nginx-ml7j3 1/1 Running 0 14s 172.18.81.7 10.0.0.12 # new pod started
nginx-v8mzm 1/1 Running 0 5m 172.18.81.4 10.0.0.12
nginx-vcn83 1/1 Running 0 5m 172.18.81.5 10.0.0.12
nginx-vkgmv 1/1 Running 0 14s 172.18.81.6 10.0.0.12 # new pod started
nginx2 1/1 Running 0 16m 172.18.81.2 10.0.0.12
Bring node2 back up
[root@k8s-node-2 ~]# systemctl start kubelet.service
[root@k8s-master rc]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 9h
10.0.0.13 Ready 49s
Newly created pods will now prefer the recovered node until the two nodes are balanced; if the nodes' specs differ, the scheduler prefers the better-provisioned one. If you delete a container directly on a node, kubelet restarts it automatically — kubelet's built-in self-healing is one of k8s's strengths.
[root@k8s-master rc]# kubectl get pod -o wide --show-labels
[root@k8s-master rc]# kubectl get rc -o wide
Summary:
kubelet only watches the docker containers on its own host; if a local pod's container is deleted, kubelet starts a new container
Cluster-wide, when the pod count drops (e.g. a kubelet goes down), controller-manager starts new pods (via the api-server, with the scheduler placing them)
RC and pods are associated through the label selector
0. Prerequisite: upload the nginx 1.15 image on a node ---> docker load -i ---> docker tag ---> docker push
[root@k8s-node-2 ~]# docker load -i docker_nginx1.15.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/nginx:latest 10.0.0.11:5000/nginx:1.15
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/nginx:1.15
=================================================================
1. Write the yml file for the upgraded version
[root@k8s-master rc]# cat k8s_rc2.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5            # 5 replicas
  selector:
    app: myweb2          # label selector
  template:
    metadata:
      labels:
        app: myweb2      # label
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.15   # new version
        ports:
        - containerPort: 80
2. Rolling upgrade and verification:
[root@k8s-master rc]# kubectl rolling-update nginx -f k8s_rc2.yml --update-period=5s
Created nginx2
Scaling up nginx2 from 0 to 5, scaling down nginx from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx2 up to 1
Scaling nginx down to 4
Scaling nginx2 up to 2
Scaling nginx down to 3
Scaling nginx2 up to 3
Scaling nginx down to 2
Scaling nginx2 up to 4
Scaling nginx down to 1
Scaling nginx2 up to 5
Scaling nginx down to 0
Update succeeded. Deleting nginx
replicationcontroller "nginx" rolling updated to "nginx2"
[root@k8s-master rc]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 19m 172.18.7.2 10.0.0.13
nginx2 1/1 Running 0 38m 172.18.81.2 10.0.0.12
nginx2-0xhz7 1/1 Running 0 1m 172.18.81.7 10.0.0.12
nginx2-8psw5 1/1 Running 0 1m 172.18.7.5 10.0.0.13
nginx2-lqw6t 1/1 Running 0 56s 172.18.81.3 10.0.0.12
nginx2-w7jpn 1/1 Running 0 1m 172.18.7.3 10.0.0.13
nginx2-xntt8 1/1 Running 0 1m 172.18.7.4 10.0.0.13
[root@k8s-master rc]# curl -I 172.18.7.3
HTTP/1.1 200 OK
Server: nginx/1.15.5 # upgraded to 1.15
Date: Mon, 27 Jan 2020 10:50:00 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes
3. Roll back and verify (rollback relies on the old yaml file):
[root@k8s-master rc]# kubectl rolling-update nginx2 -f k8s_rc.yml --update-period=2s
Created nginx
Scaling up nginx from 0 to 5, scaling down nginx2 from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx up to 1
Scaling nginx2 down to 4
Scaling nginx up to 2
Scaling nginx2 down to 3
Scaling nginx up to 3
Scaling nginx2 down to 2
Scaling nginx up to 4
Scaling nginx2 down to 1
Scaling nginx up to 5
Scaling nginx2 down to 0
Update succeeded. Deleting nginx2
replicationcontroller "nginx2" rolling updated to "nginx"
[root@k8s-master rc]# kubectl get pod -o wide
[root@k8s-master rc]# curl -I 172.18.81.5
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 27 Jan 2020 10:52:05 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
Summary:
--update-period=2s
If you forget to specify the update period, the default is 1 minute
Rollback relies on the original yaml file
Service (svc for short): exposes a port for pods
1. Create a service
[root@k8s-master k8s_yaml]# mkdir svc
[root@k8s-master k8s_yaml]# cd svc/
[root@k8s-master svc]# vim k8s_svc.yml
apiVersion: v1
kind: Service
metadata:
  name: myweb            # resource name
spec:
  type: NodePort         # or ClusterIP
  ports:
  - port: 80             # ClusterIP (VIP) port
    nodePort: 30000      # port on the node (host)
    targetPort: 80       # container (pod) port
  selector:
    app: myweb           # label to associate with
2. Create the svc
[root@k8s-master svc]# kubectl create -f k8s_svc.yml
service "myweb" created
3. Inspect
[root@k8s-master svc]# kubectl get service -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes 10.254.0.1 <none> 443/TCP 3h <none>
myweb 10.254.153.203 <nodes> 80:30000/TCP 50s app=myweb
[root@k8s-master svc]# kubectl describe svc myweb
Name: myweb
Namespace: default
Labels: <none>
Selector: app=myweb
Type: NodePort
IP: 10.254.153.203
Port: <unset> 80/TCP
NodePort: <unset> 30000/TCP
Endpoints: 172.18.7.5:80,172.18.7.6:80,172.18.81.3:80 + 2 more...
Session Affinity: None
No events.
kubectl scale rc nginx --replicas=2   # adjust the replica count on the fly
4. Exec into the pods and modify nginx's index pages, to demonstrate load balancing and automatic endpoint discovery
[root@k8s-master svc]# kubectl exec -it nginx-0r0s9 /bin/bash
root@nginx-0r0s9:/# cd /usr/share/nginx/html/
root@nginx-0r0s9:/usr/share/nginx/html# ls
50x.html index.html
root@nginx-0r0s9:/usr/share/nginx/html# echo 'web01...' >index.html
root@nginx-0r0s9:/usr/share/nginx/html# exit
exit
[root@k8s-master svc]# kubectl exec -it nginx-11zqw /bin/bash
root@nginx-11zqw:/# echo 'web02' > /usr/share/nginx/html/index.html
root@nginx-11zqw:/# exit
exit
[root@k8s-master svc]# kubectl exec -it nginx-4f8wz /bin/bash
root@nginx-4f8wz:/# echo 'web03' > /usr/share/nginx/html/index.html
root@nginx-4f8wz:/# exit
exit
5. Test from node1 and node2
[root@k8s-node-2 ~]# curl http://10.0.0.12:30000
web03
[root@k8s-node-2 ~]# curl http://10.0.0.12:30000
web01...
[root@k8s-node-2 ~]# curl http://10.0.0.12:30000
web02
Load balancing is implemented by kube-proxy
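The alternating web01/web02/web03 responses above come from kube-proxy spreading requests over the service's endpoints. As a toy model only: the old userspace proxy mode round-robins (the iptables mode actually picks a backend at random), and the endpoint IPs here are assumed from the `kubectl describe svc myweb` output above.

```shell
# Toy round-robin endpoint selection, modeled in plain shell.
endpoints="172.18.7.5 172.18.7.6 172.18.81.3"
count=3
req=0
while [ "$req" -lt 4 ]; do
  idx=$((req % count + 1))                          # 1-based field index
  picked=$(echo "$endpoints" | cut -d' ' -f"$idx")  # pick the next endpoint
  echo "request $req -> $picked"
  req=$((req + 1))
done
```

Request 3 wraps back to the first endpoint, which is exactly the cycling behaviour seen in the curl loop above.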
6. Modify the nodePort range (by default the apiserver only allows nodePorts in 30000-32767; here it is widened to 3000-50000)
[root@k8s-master svc]# kubectl create -f k8s_svc2.yml
The Service "myweb2" is invalid: spec.ports[0].nodePort: Invalid value: 30000: provided port is already allocated
Fix:
vim /etc/kubernetes/apiserver   # append as the last line:
KUBE_API_ARGS="--service-node-port-range=3000-50000"
[root@k8s-master svc]# systemctl restart kube-apiserver.service
[root@k8s-master svc]# kubectl create -f k8s_svc2.yml
service "myweb2" created
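The range check the apiserver performs on a requested nodePort boils down to simple bounds arithmetic; a sketch using the values from this lab (port 30008 from the tomcat svc below, bounds from the flag just set):

```shell
# Check whether a requested nodePort falls inside the configured
# --service-node-port-range (3000-50000 after the change above).
port=30008; lo=3000; hi=50000
if [ "$port" -ge "$lo" ] && [ "$port" -le "$hi" ]; then
  result="allowed"
else
  result="rejected"
fi
echo "nodePort $port: $result"
```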
7. Create a service resource from the command line; a random nodePort is mapped to the specified container port
[root@k8s-master svc]# kubectl expose rc nginx --port=80 --target-port=80 --type=NodePort
service "nginx" exposed
[root@k8s-master svc]# kubectl get svc -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes 10.254.0.1 <none> 443/TCP 4h <none>
myweb 10.254.153.203 <nodes> 80:30000/TCP 25m app=myweb
myweb2 10.254.210.174 <nodes> 80:3000/TCP 2m app=myweb
nginx 10.254.52.144 <nodes> 80:3949/TCP 1m app=myweb
[root@k8s-master svc]# kubectl delete svc nginx2
8. In older versions a service uses iptables for load balancing by default; from k8s 1.8 on, LVS/IPVS (layer-4 load balancing, TCP/UDP) is recommended
8.1 Service load balancing:
- iptables by default — poor performance
- LVS/IPVS recommended from 1.8 on — good performance
8.2 The three kinds of IPs in kubernetes:
- node IP           # config file: /etc/kubernetes/apiserver
- VIP (ClusterIP)   # config file: /etc/kubernetes/apiserver
- pod IP            # the flannel network range defined in etcd
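For reference, a sketch of where each range lives in this style of setup — the exact flannel etcd key and CIDRs below are assumptions, so check your own /etc/kubernetes/apiserver and etcd:

```
# /etc/kubernetes/apiserver — the VIP (ClusterIP) range served by the apiserver:
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Pod IP range: the flannel network definition stored in etcd, e.g.:
# etcdctl set /atomic.io/network/config '{"Network":"172.18.0.0/16"}'

# Node IPs are simply the hosts' own addresses (10.0.0.12, 10.0.0.13 here).
```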
==========================================================================================
View everything:
kubectl get all -o wide
If the network breaks, restart the stack:
systemctl restart flanneld docker kubelet kube-proxy
A rolling upgrade of an RC interrupts service access, which is why k8s introduced the Deployment resource.
Demonstrating the interruption:
1. Before the upgrade, curl works
[root@k8s-master svc]# curl -I 10.0.0.12:30000
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 27 Jan 2020 12:57:13 GMT
Content-Type: text/html
Content-Length: 6
Last-Modified: Mon, 27 Jan 2020 12:19:45 GMT
Connection: keep-alive
ETag: "5e2ed561-6"
Accept-Ranges: bytes
2. Perform the upgrade
[root@k8s-master rc]# kubectl rolling-update nginx -f k8s_rc2.yml --update-period=3s
Created nginx2
Scaling up nginx2 from 0 to 5, scaling down nginx from 3 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx2 up to 3
Scaling nginx down to 2
Scaling nginx2 up to 4
Scaling nginx down to 1
Scaling nginx2 up to 5
Scaling nginx down to 0
Update succeeded. Deleting nginx
replicationcontroller "nginx" rolling updated to "nginx2"
3. Service access is now interrupted
[root@k8s-master rc]# curl -I 10.0.0.12:30000
^C
4. Diagnose
[root@k8s-master rc]# kubectl get all -o wide
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
rc/nginx2 5 5 5 2m myweb 10.0.0.11:5000/nginx:1.15 app=myweb2
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc/kubernetes 10.254.0.1 <none> 443/TCP 4h <none>
svc/myweb 10.254.153.203 <nodes> 80:30000/TCP 51m app=myweb
Pain point:
The labels no longer match — the rolling upgrade changed the pods' label (app=myweb -> app=myweb2), but the service's selector did not change, so traffic stops
Solution:
1. Write a deployment
[root@k8s-master deployment]# cat k8s_deploy.yml
apiVersion: extensions/v1beta1       # extensions API group
kind: Deployment                     # resource type
metadata:                            # resource attributes
  name: nginx-deployment             # resource name
spec:
  replicas: 3                        # replica count
  minReadySeconds: 60                # minimum time a new pod must be ready; paces the rolling upgrade
  template:
    metadata:                        # pod template
      labels:
        app: nginx                   # pod label
    spec:
      containers:
      - name: nginx                  # container name
        image: 10.0.0.11:5000/nginx:1.13   # container image
        ports:
        - containerPort: 80          # port the container exposes
        resources:                   # resource constraints
          limits:                    # maximum
            cpu: 100m                # CPU share (100m = 0.1 core)
          requests:                  # minimum
            cpu: 100m
2. Create the deployment
[root@k8s-master deploy]# kubectl create -f k8s_deploy.yml
3. View all resources
[root@k8s-master deploy]# kubectl get all -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/nginx-deployment 3 3 3 0 5s
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
rc/nginx2 5 5 5 11m myweb 10.0.0.11:5000/nginx:1.15 app=myweb2
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc/kubernetes 10.254.0.1 <none> 443/TCP 4h <none>
svc/myweb 10.254.153.203 <nodes> 80:30000/TCP 59m app=myweb
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
rs/nginx-deployment-2807576163 3 3 3 5s nginx 10.0.0.11:5000/nginx:1.13 app=nginx,pod-template-hash=2807576163
NAME READY STATUS RESTARTS AGE IP NODE
po/nginx 1/1 Running 0 2h 172.18.7.2 10.0.0.13
po/nginx-deployment-2807576163-4vccn 1/1 Running 0 5s 172.18.7.6 10.0.0.13
po/nginx-deployment-2807576163-5qwxf 1/1 Running 0 5s 172.18.81.4 10.0.0.12
po/nginx-deployment-2807576163-kmw9m 1/1 Running 0 5s 172.18.7.5 10.0.0.13
po/nginx2-1cx68 1/1 Running 0 11m 172.18.81.5 10.0.0.12
po/nginx2-41ppf 1/1 Running 0 11m 172.18.7.4 10.0.0.13
po/nginx2-9j6g4 1/1 Running 0 10m 172.18.81.3 10.0.0.12
po/nginx2-ftqt0 1/1 Running 0 11m 172.18.81.2 10.0.0.12
po/nginx2-w9twq 1/1 Running 0 11m 172.18.7.3 10.0.0.13
4. Expose an access port
[root@k8s-master deploy]# kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
5. Check the port
[root@k8s-master deployment]# kubectl get svc -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes 10.254.0.1 <none> 443/TCP 4h <none>
myweb 10.254.153.203 <nodes> 80:30000/TCP 1h app=myweb
nginx-deployment 10.254.230.84 <nodes> 80:39576/TCP 1m app=nginx
6. Test
[root@k8s-master deployment]# curl -I 10.0.0.12:39576
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 27 Jan 2020 13:12:53 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
7. Edit the deployment resource to upgrade
[root@k8s-master deploy]# kubectl edit deployment nginx-deployment   # change the image version in the editor, then verify
curl the address:port again to confirm the version was upgraded
[root@k8s-master deployment]# curl -I 10.0.0.12:39576
HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Mon, 27 Jan 2020 13:16:40 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes
8. Two parameters worth tuning
[root@k8s-master deploy]# kubectl edit deployment nginx-deployment
.........
spec:
  minReadySeconds: 60    # paces the rolling upgrade (60s here); tune this
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:       # with replicas=3, maxSurge=2, maxUnavailable=1, the pod
                         # count stays between 2 and 5 during an upgrade
      maxSurge: 2        # at most this many pods above the desired count; tune this
      maxUnavailable: 1  # at most this many pods below the desired count
    type: RollingUpdate
.........
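The window mentioned in the comments above is just arithmetic: during a rolling upgrade the pod count is bounded below by replicas minus maxUnavailable and above by replicas plus maxSurge. A quick check with the values edited above:

```shell
# Rolling-update pod-count window for replicas=3, maxSurge=2, maxUnavailable=1.
replicas=3
max_surge=2
max_unavailable=1
floor=$((replicas - max_unavailable))   # fewest pods during the rollout
ceiling=$((replicas + max_surge))       # most pods during the rollout
echo "pod count stays between $floor and $ceiling"
```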
9. Scale the replica count
[root@k8s-master deploy]# kubectl scale deployment nginx-deployment --replicas=6
10. Change the image version to nginx:1.17
[root@k8s-master deploy]# kubectl edit deployment nginx-deployment   # first upload nginx 1.17 on node2: docker load -i ---> docker tag ---> docker push
The rollout advances roughly every 60 seconds (minReadySeconds)
11. Check
[root@k8s-master deployment]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 3h 172.18.7.2 10.0.0.13
nginx-deployment-3221239399-7j2cq 1/1 Running 0 5m 172.18.7.8 10.0.0.13
nginx-deployment-3221239399-fgm9m 1/1 Running 0 1m 172.18.7.6 10.0.0.13
nginx-deployment-3221239399-fzlrr 1/1 Running 0 2m 172.18.81.7 10.0.0.12
nginx-deployment-3221239399-m7vpj 1/1 Running 0 5m 172.18.7.9 10.0.0.13
nginx-deployment-3221239399-pt8r3 1/1 Running 0 2m 172.18.81.4 10.0.0.12
nginx-deployment-3221239399-xxpqr 1/1 Running 0 5m 172.18.81.8 10.0.0.12
nginx2-1cx68 1/1 Running 0 34m 172.18.81.5 10.0.0.12
nginx2-41ppf 1/1 Running 0 34m 172.18.7.4 10.0.0.13
nginx2-9j6g4 1/1 Running 0 34m 172.18.81.3 10.0.0.12
nginx2-ftqt0 1/1 Running 0 34m 172.18.81.2 10.0.0.12
nginx2-w9twq 1/1 Running 0 34m 172.18.7.3 10.0.0.13
12. View the ReplicaSets
[root@k8s-master deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-2807576163 0 0 0 27m
nginx-deployment-3014407781 0 0 0 19m
nginx-deployment-3221239399 6 6 6 8m
[root@k8s-master deploy]# kubectl delete rs nginx-deployment-3428071017
replicaset "nginx-deployment-3428071017" deleted
============================================================================
13. Deployment upgrade and rollback
1. Create a deployment from the command line; --record keeps version history for easy rollback
kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
2. Upgrade the version from the command line
kubectl set image deploy nginx nginx=10.0.0.11:5000/nginx:1.15
3. List all of the deployment's revisions
kubectl rollout history deployment nginx
4. Roll the deployment back to the previous revision
kubectl rollout undo deployment nginx
5. Roll the deployment back to a specific revision
kubectl rollout undo deployment nginx --to-revision=2
=======================================================================================
6. View the deployment's revisions
[root@k8s-master deploy]# kubectl rollout history deployment nginx-deployment
7. Test revision recording
[root@k8s-master deploy]# kubectl delete -f k8s_deploy.yml
deployment "nginx-deployment" deleted
[root@k8s-master deploy]# kubectl create -f k8s_deploy.yml --record
deployment "nginx-deployment" created
[root@k8s-master deploy]# kubectl rollout history deployment nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f k8s_deploy.yml --record
The executed command is recorded as the change cause, though this is not how it's typically done day to day
==========================================================================================
14. Another approach: kubectl run also records revision info, but it is not the recommended way to manage upgrades
[root@k8s-master deploy]# kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
deployment "nginx" created
14.1 View the revisions
[root@k8s-master deploy]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION CHANGE-CAUSE
1 kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
14.2 Expose a port
[root@k8s-master deploy]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service "nginx" exposed
[root@k8s-master deploy]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
myweb 10.254.215.251 <nodes> 80:30000/TCP 2h
nginx 10.254.45.223 <nodes> 80:11111/TCP 17s
nginx-deployment 10.254.9.183 <nodes> 80:8416/TCP 32m
14.3 Access test
[root@k8s-master deploy]# curl -I 10.0.0.12:11111
HTTP/1.1 200 OK
Server: nginx/1.13.12
14.4 Upgrade the version
[root@k8s-master deploy]# kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
deployment "nginx" image updated
14.5 Check that the upgrade was recorded
[root@k8s-master deploy]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION CHANGE-CAUSE
1 kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
2 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
14.6 Upgrade to 1.17
[root@k8s-master deploy]# kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.17
deployment "nginx" image updated
[root@k8s-master deploy]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION CHANGE-CAUSE
1 kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
2 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
3 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.17
14.7 Rollback: by default, rollout undo only goes back to the previous revision
[root@k8s-master deploy]# kubectl rollout undo deployment nginx
deployment "nginx" rolled back
[root@k8s-master deploy]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION CHANGE-CAUSE
1 kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
3 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.17
4 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
============================================================================================
[root@k8s-master deploy]# kubectl rollout undo deployment nginx
deployment "nginx" rolled back
[root@k8s-master deploy]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION CHANGE-CAUSE
1 kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
4 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
5 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.17
=============================================================================================
14.8 Roll back to a specific revision (e.g. 1.13): just append --to-revision=<revision number>
[root@k8s-master deploy]# kubectl rollout undo deployment nginx --to-revision=1
deployment "nginx" rolled back
[root@k8s-master deploy]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION CHANGE-CAUSE
4 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15
5 kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.17
6 kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
=============================================================================================
15. Deployment vs RC
A deployment upgrade does not interrupt service access
A deployment upgrade does not depend on a config file
A deployment upgrade can specify the rollout pacing
A Deployment creates an RS (ReplicaSet), the successor to the ReplicationController
16. What is the relationship between rs and rc?
rs: the new-generation replica controller
rc: the original replica controller
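One visible difference is the selector: an RS supports set-based selectors (matchLabels / matchExpressions), while an RC only takes plain key-value pairs. A sketch of a standalone RS for this cluster generation — the extensions/v1beta1 API group is an assumption based on the Deployment manifest above, and normally a Deployment manages the RS for you:

```yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
    matchExpressions:            # set-based matching, which an RC cannot do
    - {key: tier, operator: In, values: [frontend, web]}
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
```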
In k8s, containers reach each other through the service VIP!
0. Upload the mysql and tomcat images in advance
[root@k8s-node-2 ~]# docker load -i docker-mysql-5.7.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/mysql:5.7 10.0.0.11:5000/mysql:5.7
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/mysql:5.7
[root@k8s-node-2 ~]# docker load -i tomcat-app-v2.tar.gz
[root@k8s-node-2 ~]# docker tag docker.io/kubeguide/tomcat-app:v2 10.0.0.11:5000/tomcat-app:v2
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/tomcat-app:v2
=================================================================
1. Upload the tomcat yml files and delete the ones that reference PVs
[root@k8s-master tomcat_demo]# rm -rf *pv*
2. The mysql-rc config file:
[root@k8s-master tomcat_demo]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 10.0.0.11:5000/mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: '123456'
3. The mysql-svc config file
[root@k8s-master tomcat_demo]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
4. Create mysql-rc and mysql-svc
[root@k8s-master tomcat_demo]# kubectl create -f mysql-svc.yml
[root@k8s-master tomcat_demo]# kubectl create -f mysql-rc.yml
[root@k8s-master tomcat_demo]# kubectl get rc
NAME DESIRED CURRENT READY AGE
mysql 1 1 1 4s
nginx2 5 5 5 19h
5. Record mysql's ClusterIP so tomcat can connect to it
[root@k8s-master tomcat_demo]# kubectl get svc
mysql 10.254.221.29 <none> 3306/TCP 57s
6. Configure tomcat-rc
[root@k8s-master tomcat_demo]# cat tomcat-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/tomcat-app:v2
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: '10.254.221.29'   # the mysql service's ClusterIP
        - name: MYSQL_SERVICE_PORT
          value: '3306'
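Hard-coding the mysql ClusterIP works, but the value changes if the service is ever recreated. If the cluster runs a DNS add-on (kube-dns/skydns), the service name can be used instead — a sketch of the alternative env block (not what this lab used; it assumes DNS is installed):

```yaml
        env:
        - name: MYSQL_SERVICE_HOST
          value: 'mysql'        # resolve the service by name instead of 10.254.221.29
        - name: MYSQL_SERVICE_PORT
          value: '3306'
```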
7. Configure tomcat-svc
[root@k8s-master tomcat_demo]# cat tomcat-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30008
  selector:
    app: myweb
[root@k8s-master tomcat_demo]# kubectl create -f tomcat-rc.yml
[root@k8s-master tomcat_demo]# kubectl create -f tomcat-svc.yml
Error from server (AlreadyExists): error when creating "tomcat-svc.yml": services "myweb" already exists
This error means a service with the same name already exists; delete it first, then recreate it
[root@k8s-master tomcat_demo]# kubectl delete -f tomcat-svc.yml
[root@k8s-master tomcat_demo]# kubectl create -f tomcat-svc.yml
8. View all resources
[root@k8s-master tomcat_demo]# kubectl get all
NAME DESIRED CURRENT READY AGE
rc/mysql 1 1 1 9m
rc/myweb 2 2 2 24s
rc/nginx2 5 5 5 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes 10.254.0.1 <none> 443/TCP 6h
svc/mysql 10.254.56.61 <none> 3306/TCP 9m
svc/myweb 10.254.111.247 <nodes> 8080:30008/TCP 2m
svc/nginx-deployment 10.254.230.84 <nodes> 80:39576/TCP 1h
NAME READY STATUS RESTARTS AGE
po/mysql-r5hmn 1/1 Running 0 9m
po/myweb-15pwk 1/1 Running 0 24s
po/myweb-gb9qt 1/1 Running 0 24s
po/nginx 1/1 Running 0 4h
po/nginx2-1cx68 1/1 Running 0 1h
po/nginx2-41ppf 1/1 Running 0 1h
po/nginx2-9j6g4 1/1 Running 0 1h
po/nginx2-ftqt0 1/1 Running 0 1h
po/nginx2-w9twq 1/1 Running 0 1h
9. Finally, access the web UI (any node IP on port 30008) to test
[root@k8s-master k8s_yaml]# cp -a tomcat_demo wordpress
[root@k8s-master k8s_yaml]# cd wordpress/
1. Configure the database
[root@k8s-master wordpress]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-wp
spec:
  replicas: 1
  selector:
    app: mysql-wp
  template:
    metadata:
      labels:
        app: mysql-wp
    spec:
      containers:
      - name: mysql
        image: 10.0.0.11:5000/mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 'somewordpress'
        - name: MYSQL_DATABASE
          value: 'wordpress'
        - name: MYSQL_USER
          value: 'wordpress'
        - name: MYSQL_PASSWORD
          value: 'wordpress'
2. Write mysql-svc
[root@k8s-master wordpress]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-wp
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-wp
3. Write wordpress-rc
[root@k8s-master wordpress]# cat wordpress-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: 10.0.0.11:5000/wordpress:latest
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: 'mysql-wp'
        - name: WORDPRESS_DB_USER
          value: 'wordpress'
        - name: WORDPRESS_DB_PASSWORD
          value: 'wordpress'
4. Write wordpress-svc
[root@k8s-master wordpress]# cat wordpress-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30009
  selector:
    app: wordpress
5. Create the wordpress resources in order
[root@k8s-master wordpress]# kubectl create -f mysql-rc.yml
[root@k8s-master wordpress]# kubectl create -f mysql-svc.yml
[root@k8s-master wordpress]# kubectl create -f wordpress-svc.yml
[root@k8s-master wordpress]# kubectl create -f wordpress-rc.yml
6. View all resources
[root@k8s-master wordpress]# kubectl get all
NAME DESIRED CURRENT READY AGE
rc/mysql-wp 1 1 1 9m
rc/nginx2 5 5 5 2h
rc/wordpress 2 2 2 1m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes 10.254.0.1 <none> 443/TCP 6h
svc/mysql-wp 10.254.84.20 <none> 3306/TCP 9m
svc/nginx-deployment 10.254.230.84 <nodes> 80:39576/TCP 2h
svc/wordpress 10.254.182.104 <nodes> 80:30009/TCP 37s
NAME READY STATUS RESTARTS AGE
po/mysql-wp-7mdrm 1/1 Running 0 9m
po/nginx 1/1 Running 0 4h
po/nginx2-1cx68 1/1 Running 0 2h
po/nginx2-41ppf 1/1 Running 0 2h
po/nginx2-9j6g4 1/1 Running 0 2h
po/nginx2-ftqt0 1/1 Running 0 2h
po/nginx2-w9twq 1/1 Running 0 2h
po/wordpress-6ztxf 1/1 Running 2 1m
po/wordpress-m73sv 1/1 Running 1 1m
7. Access the web UI to test
Zabbix deployment — upload the required images in advance
1. Write mysql-rc
[root@k8s-master zabbix]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-server
spec:
  replicas: 1
  selector:
    app: mysql-server
  template:
    metadata:
      labels:
        app: mysql-server
    spec:
      containers:
      - name: mysql
        image: 10.0.0.11:5000/mysql:5.7
        args:
        - --character-set-server=utf8
        - --collation-server=utf8_bin
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 'root_pwd'
        - name: MYSQL_DATABASE
          value: 'zabbix'
        - name: MYSQL_USER
          value: 'zabbix'
        - name: MYSQL_PASSWORD
          value: 'zabbix_pwd'
2. Write mysql-svc
[root@k8s-master zabbix]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-server
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-server
3. Write zabbix_java-rc
[root@k8s-master zabbix]# cat zabbix_java-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: zabbix-java
spec:
  replicas: 1
  selector:
    app: zabbix-java
  template:
    metadata:
      labels:
        app: zabbix-java
    spec:
      containers:
      - name: zabbix-java-gateway
        image: 10.0.0.11:5000/zabbix-java-gateway:latest
        ports:
        - containerPort: 10052
4. Write zabbix_java-svc
[root@k8s-master zabbix]# cat zabbix_java-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: zabbix-java
spec:
  ports:
  - port: 10052
    targetPort: 10052
  selector:
    app: zabbix-java
5. Write zabbix_server-rc
[root@k8s-master zabbix]# cat zabbix_server-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: zabbix-server
spec:
  replicas: 1
  selector:
    app: zabbix-server
  template:
    metadata:
      labels:
        app: zabbix-server
    spec:
      containers:
      - name: zabbix-server
        image: 10.0.0.11:5000/zabbix-server-mysql:latest
        ports:
        - containerPort: 10051
        env:
        - name: DB_SERVER_HOST
          value: '10.254.10.53'
        - name: MYSQL_DATABASE
          value: 'zabbix'
        - name: MYSQL_USER
          value: 'zabbix'
        - name: MYSQL_PASSWORD
          value: 'zabbix_pwd'
        - name: MYSQL_ROOT_PASSWORD
          value: 'root_pwd'
        - name: ZBX_JAVAGATEWAY
          value: 'zabbix-java-gateway'
6. Write zabbix_server-svc
[root@k8s-master zabbix]# cat zabbix_server-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: zabbix-server
spec:
  type: NodePort
  ports:
  - port: 10051
    nodePort: 30010
  selector:
    app: zabbix-server
7. Write zabbix_web-rc
[root@k8s-master zabbix]# cat zabbix_web-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: zabbix-web
spec:
  replicas: 1
  selector:
    app: zabbix-web
  template:
    metadata:
      labels:
        app: zabbix-web
    spec:
      containers:
      - name: zabbix-web
        image: 10.0.0.11:5000/zabbix-web-nginx-mysql:latest
        ports:
        - containerPort: 80
        env:
        - name: DB_SERVER_HOST
          value: '10.254.10.53'
        - name: MYSQL_DATABASE
          value: 'zabbix'
        - name: MYSQL_USER
          value: 'zabbix'
        - name: MYSQL_PASSWORD
          value: 'zabbix_pwd'
        - name: MYSQL_ROOT_PASSWORD
          value: 'root_pwd'
8. Write zabbix_web-svc
[root@k8s-master zabbix]# cat zabbix_web-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: zabbix-web
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30011
  selector:
    app: zabbix-web
Now refactor the zabbix deployment into mysql + one pod carrying (zabbix-java-gateway && zabbix-server && zabbix-web), under a dedicated namespace (create it first: kubectl create namespace zabbix)
1. Write the mysql-rc config file
[root@k8s-master zabbix2]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: zabbix
  name: mysql-server
spec:
  replicas: 1
  selector:
    app: mysql-server
  template:
    metadata:
      labels:
        app: mysql-server
    spec:
      containers:
      - name: mysql
        image: 10.0.0.11:5000/mysql:5.7
        args:
        - --character-set-server=utf8
        - --collation-server=utf8_bin
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 'root_pwd'
        - name: MYSQL_DATABASE
          value: 'zabbix'
        - name: MYSQL_USER
          value: 'zabbix'
        - name: MYSQL_PASSWORD
          value: 'zabbix_pwd'
2. Write the mysql-svc config file
[root@k8s-master zabbix2]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  namespace: zabbix
  name: mysql-server
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-server
3. The combined server pod (very important): all three containers share one pod, hence one network namespace, so zabbix-server reaches the java gateway at 127.0.0.1
[root@k8s-master zabbix2]# cat zabbix_server-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: zabbix
  name: zabbix-server
spec:
  replicas: 1
  selector:
    app: zabbix-server
  template:
    metadata:
      labels:
        app: zabbix-server
    spec:
      nodeName: 10.0.0.13
      containers:
      ### java-gateway
      - name: zabbix-java-gateway
        image: 10.0.0.11:5000/zabbix-java-gateway:latest
        # imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 10052
      ### zabbix-server
      - name: zabbix-server
        image: 10.0.0.11:5000/zabbix-server-mysql:latest
        # imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 10051
        env:
        - name: DB_SERVER_HOST
          value: 'mysql-server'
        - name: MYSQL_DATABASE
          value: 'zabbix'
        - name: MYSQL_USER
          value: 'zabbix'
        - name: MYSQL_PASSWORD
          value: 'zabbix_pwd'
        - name: MYSQL_ROOT_PASSWORD
          value: 'root_pwd'
        - name: ZBX_JAVAGATEWAY
          value: '127.0.0.1'
      ### zabbix-web
      - name: zabbix-web
        image: 10.0.0.11:5000/zabbix-web-nginx-mysql:latest
        # imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: DB_SERVER_HOST
          value: 'mysql-server'
        - name: MYSQL_DATABASE
          value: 'zabbix'
        - name: MYSQL_USER
          value: 'zabbix'
        - name: MYSQL_PASSWORD
          value: 'zabbix_pwd'
        - name: MYSQL_ROOT_PASSWORD
          value: 'root_pwd'
4. Write the zabbix_server-svc config file
[root@k8s-master zabbix2]# cat zabbix_server-svc.yml
apiVersion: v1
kind: Service
metadata:
  namespace: zabbix
  name: zabbix-server
spec:
  type: NodePort
  ports:
  - port: 10051
    nodePort: 30010
  selector:
    app: zabbix-server
5. Write the zabbix_web-svc config file
[root@k8s-master zabbix2]# cat zabbix_web-svc.yml
apiVersion: v1
kind: Service
metadata:
  namespace: zabbix
  name: zabbix-web
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: zabbix-server
6. Because this setup relies on the cluster DNS service, and the architecture was consolidated into a one-pod-carries-three-containers pattern, everything can be created in one pass; then log in to the web UI and test