In the previous article, K8s Docker Practice I, we completed a minimal deployment on K8s. Below we dig deeper into K8s.
A cluster is divided into Master and Node roles: the Master schedules and assigns work, while Nodes actually carry out the work the Master schedules. The Master runs the apiserver, which receives users' management commands; all components communicate through the API Server. Core components:
Node type | Component | Function |
---|---|---|
Master | API Server | Key service process exposing the HTTP REST interface; the entry point for create, delete, update, and query operations on all Kubernetes resources |
Master | scheduler | Responsible for resource scheduling; places Pods on appropriate machines according to the configured scheduling policy |
Master | controller-manager | Responsible for cluster state: fault detection, auto scaling, rolling updates, and so on |
Master | etcd | Stores configuration information: the data for all resource objects in Kubernetes is kept in etcd |
Node | kubelet | Accepts scheduling and manages Pods: creates, starts, and stops the containers belonging to Pods, and cooperates closely with the Master to implement basic cluster management |
Node | kube-proxy | Key component implementing communication and load balancing for Kubernetes Services |
Node | pod | The smallest deployable unit in k8s; it wraps containers, and closely related containers are usually placed in the same pod |
Deployment
Kubernetes rarely controls Pods directly; it usually works through Pod controllers. A Pod controller manages Pods, making sure Pod resources match the desired state; when a Pod fails, the controller tries to restart or recreate it.
Service
A Service can be seen as the external access interface for a group of Pods of the same kind. With a Service, applications get service discovery and load balancing.
Label
Labels are an important concept in Kubernetes. They attach identifying marks to resources so resources can be distinguished and selected.
Pod
A Pod can be thought of as a wrapper around containers; one Pod can hold one or more containers. The Pod is the smallest deployable unit in k8s.
namespace
Namespace is a very important resource in Kubernetes. Its main purpose is to isolate resources between multiple environments or multiple tenants.
By default, all Pods in a Kubernetes cluster can reach each other. In practice you may not want two Pods to access each other; in that case you can place the two Pods in different namespaces.
By assigning in-cluster resources to different namespaces, Kubernetes forms logical "groups", which makes it convenient to isolate and manage each group's resources.
Through the Kubernetes authorization mechanism, different namespaces can be handed to different tenants to manage, which provides multi-tenant resource isolation. Combined with the Kubernetes resource quota mechanism, you can also cap the resources each tenant may consume, such as CPU and memory usage, and thereby manage the resources available to a tenant.
After the cluster starts, Kubernetes creates 4 namespaces by default:
# kubectl get namespace
NAME STATUS AGE
default Active 45h # objects created without a namespace are placed in the default namespace
kube-node-lease Active 45h # heartbeat leases between cluster nodes, introduced in v1.13
kube-public Active 45h # resources in this namespace can be accessed by everyone (including unauthenticated users)
kube-system Active 45h # resources created by the Kubernetes system itself live in this namespace
Now let's walk through the operations on the namespace resource (abbreviated ns):
1. View
# 1 List all namespaces
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 45h
kube-node-lease Active 45h
kube-public Active 45h
kube-system Active 45h
# 2 View a specific namespace
[root@master ~]# kubectl get ns default
NAME STATUS AGE
default Active 45h
# 3 Specify the output format. Command: kubectl get ns <name> -o <format>
# kubernetes supports many formats; the common ones are wide, json, and yaml (wide adds extra columns; json and yaml print the full object in that format)
[root@master ~]# kubectl get ns default -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-05-08T04:44:16Z"
  name: default
  resourceVersion: "151"
  selfLink: /api/v1/namespaces/default
  uid: 7405f73a-e486-43d4-9db6-145f1409f090
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
# 4 View namespace details. Command: kubectl describe ns <name>
[root@master ~]# kubectl describe ns default
Name: default
Labels: <none>
Annotations: <none>
Status: Active # Active: the namespace is in use; Terminating: the namespace is being deleted
# ResourceQuota limits the aggregate resources of the namespace
# LimitRange limits the resources of each individual object in the namespace
No resource quota.
No LimitRange resource.
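The describe output above shows neither object is set on default. As a sketch of what the two look like (the names and numbers below are illustrative, not from this article), a ResourceQuota caps the namespace as a whole, while a LimitRange constrains each individual container:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo          # illustrative name
  namespace: dev
spec:
  hard:
    requests.cpu: "2"       # total CPU requests allowed in the namespace
    requests.memory: 4Gi    # total memory requests allowed
    pods: "10"              # at most 10 Pods in the namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limits-demo         # illustrative name
  namespace: dev
spec:
  limits:
  - type: Container
    default:                # limit applied to containers that set none
      cpu: 500m
      memory: 512Mi
```

After applying these with kubectl apply -f, kubectl describe ns dev lists both objects instead of "No resource quota.".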
2. Create and delete
# Create a namespace
[root@master ~]# kubectl create ns dev
namespace/dev created
# Delete a namespace
[root@master ~]# kubectl delete ns dev
namespace "dev" deleted
3. Manage with a config file (.yaml)
apiVersion: v1
kind: Namespace
metadata:
  name: dev
Create: kubectl create -f ns-dev.yaml
Delete: kubectl delete -f ns-dev.yaml
A Pod can be thought of as a wrapper around containers; one Pod can hold one or more containers.
After the cluster starts, the Kubernetes components themselves also run as Pods. You can check with the following command:
[root@master ~]# kubectl get pod -n kube-system
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-68g6v 1/1 Running 0 2d1h
kube-system coredns-6955765f44-cs5r8 1/1 Running 0 2d1h
kube-system etcd-master 1/1 Running 0 2d1h
kube-system kube-apiserver-master 1/1 Running 0 2d1h
kube-system kube-controller-manager-master 1/1 Running 0 2d1h
kube-system kube-flannel-ds-amd64-47r25 1/1 Running 0 2d1h
kube-system kube-flannel-ds-amd64-ls5lh 1/1 Running 0 2d1h
kube-system kube-proxy-685tk 1/1 Running 0 2d1h
kube-system kube-proxy-87spt 1/1 Running 0 2d1h
kube-system kube-scheduler-master 1/1 Running 0 2d1h
1. Create and run
Kubernetes has no command that runs a bare Pod on its own; Pods are created through a Pod controller.
# Command format: kubectl run <controller name> [options]
# --image specifies the Pod's image
# --port specifies the port
# --namespace specifies the namespace
[root@master ~]# kubectl run nginx --image=nginx:latest --port=80 --namespace dev
deployment.apps/nginx created
(This creates a Pod controller named nginx; the generated Pod is named nginx-****-****)
2. View Pod information
# View basic Pod information
[root@master ~]# kubectl get pods -n dev # "pod" works with or without the trailing s
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 43s
# View detailed Pod information
[root@master ~]# kubectl describe pod nginx -n dev
Name: nginx
Namespace: dev
Priority: 0
Node: node1/192.168.5.4
Start Time: Wed, 08 May 2021 09:29:24 +0800
Labels: pod-template-hash=5ff7956ff6
run=nginx
Annotations: <none>
Status: Running
IP: 10.244.1.23
IPs:
IP: 10.244.1.23
Controlled By: ReplicaSet/nginx
Containers:
nginx:
Container ID: docker://4c62b8c0648d2512380f4ffa5da2c99d16e05634979973449c98e9b829f6253c
Image: nginx:latest
Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 08 May 2021 09:30:01 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hwvvw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hwvvw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hwvvw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned dev/nginx-5ff7956ff6-fg2db to node1
Normal Pulling 4m11s kubelet, node1 Pulling image "nginx:latest"
Normal Pulled 3m36s kubelet, node1 Successfully pulled image "nginx:latest"
Normal Created 3m36s kubelet, node1 Created container nginx
Normal Started 3m36s kubelet, node1 Started container nginx
3. Access the Pod
# Get the Pod IP
[root@master ~]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE ...
nginx 1/1 Running 0 190s 10.244.1.23 node1 ...
# Access the service deployed in the Pod
[root@master ~]# curl 10.244.1.23:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
4. Delete a Pod
# Delete the specified Pod
[root@master ~]# kubectl delete pod nginx -n dev
pod "nginx" deleted
# The delete is reported as successful, but querying again shows a new Pod has appeared
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 21s
# That is because this Pod was created by a Pod controller: the controller monitors its Pods and immediately recreates any Pod that dies
# To actually delete the Pod, the Pod controller must be deleted
# First list the Pod controllers in the current namespace
[root@master ~]# kubectl get deploy -n dev # "deploy" works with or without the "ment" suffix
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 9m7s
# Now delete this Pod controller
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted
# A moment later, query the Pods again: the Pod is gone
[root@master ~]# kubectl get pods -n dev
No resources found in dev namespace.
5. Config-file operations
Create a pod-nginx.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  containers:
  - image: nginx:latest
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
Create: kubectl create -f pod-nginx.yaml
Delete: kubectl delete -f pod-nginx.yaml
Labels are an important concept in Kubernetes. They attach identifying marks to resources so resources can be distinguished and selected.
Characteristics of Labels: a Label is a key=value pair attached to an object such as a Node, Pod, or Service; one object can carry any number of Labels, and the same Label can be attached to any number of objects.
Labels enable multi-dimensional grouping of resources, which makes resource allocation, scheduling, configuration, and deployment flexible and convenient. Typical Labels look like version: "release", env: "dev", or tier: "backend".
A Label Selector is used to query and filter resource objects that carry certain labels.
There are currently two kinds of Label Selector: equality-based (=, !=) and set-based (in, not in, exists).
Multiple selection conditions can be used at once by combining the Label Selectors, separated with commas. For example:
name=slave,env!=production
name not in (frontend),env!=production
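In yaml manifests the same two selector styles appear as matchLabels (equality-based) and matchExpressions (set-based); the fragment below is illustrative, not taken from this article:

```yaml
selector:
  matchLabels:              # equality-based: every pair must match
    app: nginx
  matchExpressions:         # set-based conditions
  - key: env
    operator: NotIn
    values: ["production"]
```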
1. Command-line approach
# Add a label to a pod resource
[root@master ~]# kubectl label pod nginx-pod version=1.0 -n dev
pod/nginx-pod labeled
# Update a pod's label (--overwrite)
[root@master ~]# kubectl label pod nginx-pod version=2.0 -n dev --overwrite
pod/nginx-pod labeled
# Show labels (--show-labels)
[root@master ~]# kubectl get pod nginx-pod -n dev --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-pod 1/1 Running 0 10m version=2.0
# Filter by label (-l key=value)
[root@master ~]# kubectl get pod -n dev -l version=2.0 --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-pod 1/1 Running 0 17m version=2.0
[root@master ~]# kubectl get pod -n dev -l version!=2.0 --show-labels
No resources found in dev namespace.
# Delete a label (append a minus sign to the label key)
[root@master ~]# kubectl label pod nginx-pod version- -n dev
pod/nginx-pod labeled
2. Config-file approach
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
  labels:
    version: "3.0"  # first label
    env: "test"     # second label
spec:
  containers:
  - image: nginx:latest
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
kubectl apply -f pod-nginx.yaml
In Kubernetes, the Pod is the smallest unit of control, but Kubernetes rarely controls Pods directly; this is usually done through Pod controllers. A Pod controller manages Pods, keeping Pod resources in the desired state and restarting or recreating Pods that fail.
1. Command-line operations
# Command format: kubectl create deployment <name> [options]
# --image specifies the pod image
# --port specifies the port
# --replicas specifies the number of pods to create
# --namespace specifies the namespace
[root@master ~]# kubectl run nginx --image=nginx:latest --port=80 --replicas=3 -n dev
deployment.apps/nginx created
# View the created Pods
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
nginx-5ff7956ff6-6k8cb 1/1 Running 0 19s
nginx-5ff7956ff6-jxfjt 1/1 Running 0 19s
nginx-5ff7956ff6-v6jqw 1/1 Running 0 19s
# View the deployment information (abbreviated deploy)
[root@master ~]# kubectl get deploy -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 2m42s
# UP-TO-DATE: number of replicas successfully updated; AVAILABLE: number of available replicas
[root@master ~]# kubectl get deploy -n dev -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 3/3 3 3 2m51s nginx nginx:latest run=nginx
# View the deployment details
[root@master ~]# kubectl describe deploy nginx -n dev
Name: nginx
Namespace: dev
CreationTimestamp: Wed, 08 May 2021 11:14:14 +0800
Labels: run=nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx:latest
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-5ff7956ff6 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m43s deployment-controller Scaled up replicaset nginx-5ff7956ff6 to 3
# Delete
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted
2. Config-file operations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
template: the Pod template (a bit like a Java constructor: the deployment generates Pods from the template)
replicas: 3 — the number of replicas
Then run the corresponding create and delete commands:
Create: kubectl create -f deploy-nginx.yaml
Delete: kubectl delete -f deploy-nginx.yaml
We can now use a Deployment to create a group of Pods that provide a highly available service. Each Pod is assigned its own Pod IP, but this leaves two problems: a Pod IP changes whenever the controller rebuilds the Pod, and a Pod IP is a cluster-internal virtual address that cannot be reached from outside the cluster.
This makes accessing the service difficult, so Kubernetes designed the Service to solve it.
A Service can be seen as the external access interface for a group of Pods of the same kind. With a Service, applications get service discovery and load balancing.
1. Create a Service reachable inside the cluster (type=ClusterIP)
# Expose a Service (the Pods are exposed via their manager, the deployment)
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx1 --type=ClusterIP --port=80 --target-port=80 -n dev
service/svc-nginx1 exposed
# --type=ClusterIP sets the service type to a cluster-internal IP
# View the service (abbreviated svc)
[root@master ~]# kubectl get svc svc-nginx1 -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc-nginx1 ClusterIP 10.109.179.231 <none> 80/TCP 3m51s run=nginx
# A CLUSTER-IP is assigned: this is the service's IP, and it does not change for the Service's whole lifetime
# The Pods behind this service can be accessed through this IP (only from nodes inside the current cluster)
[root@master ~]# curl 10.109.179.231:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<h1>Welcome to nginx!</h1>
.......
</body>
</html>
2. Create a Service also reachable from outside the cluster (type=NodePort)
# The Service created above has type ClusterIP, so its IP is reachable only inside the cluster
# For a Service that is also reachable from outside, change the type to NodePort
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx2 --type=NodePort --port=80 --target-port=80 -n dev
service/svc-nginx2 exposed
# The listing now shows a NodePort-type Service with a port pair (80:31928/TCP)
[root@master ~]# kubectl get svc svc-nginx2 -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc-nginx2 NodePort 10.100.94.0 <none> 80:31928/TCP 9s run=nginx
# The service can now be accessed from hosts outside the cluster at <node IP>:31928
# For example, open the following address in a browser on your own machine
http://192.168.90.100:31928/
3. Config-file approach
Create a svc-nginx.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  namespace: dev
spec:
  clusterIP: 10.109.179.231  # pin the svc's cluster-internal IP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: ClusterIP
Create: kubectl create -f svc-nginx.yaml
Delete: kubectl delete -f svc-nginx.yaml
1. First check the k8s cluster's system Pods to make sure they are all running normally:
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5d78c9869d-fqtjh 1/1 Running 0 2d19h
coredns-5d78c9869d-hrt4g 1/1 Running 0 2d19h
etcd-docker-desktop 1/1 Running 0 2d19h
kube-apiserver-docker-desktop 1/1 Running 0 2d19h
kube-controller-manager-docker-desktop 1/1 Running 0 2d19h
kube-proxy-rlflb 1/1 Running 0 2d19h
kube-scheduler-docker-desktop 1/1 Running 0 2d19h
storage-provisioner 1/1 Running 0 2d19h
vpnkit-controller 1/1 Running 0 2d19h
2. Apply the mysql-deployment.yaml file to quickly create and deploy a database service:
The file contains two parts, kind: Deployment and kind: Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mysql
  labels:
    app: mysql
spec:
  replicas: 3  # number of replicas
  selector:
    matchLabels:
      app: mysql
  template:  # the deployment generates pods from this template
    metadata:
      labels:
        app: mysql
    spec:
      containers:  # container definition
      - image: mysql:latest
        name: mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "1234"
---
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  ports:
  - port: 3306
    nodePort: 30011
  selector:
    app: mysql
  type: NodePort  # reachable from outside the cluster, unlike ClusterIP
Run:
# kubectl apply -f mysql-deployment.yaml
deployment.apps/my-mysql created
service/my-db created
View:
# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/my-mysql-bd9978b94-gjjjd 1/1 Running 0 2m30s 10.1.0.26 docker-desktop <none> <none>
pod/my-mysql-bd9978b94-q5z5h 1/1 Running 0 2m30s 10.1.0.25 docker-desktop <none> <none>
pod/my-mysql-bd9978b94-rc9v6 1/1 Running 0 2m30s 10.1.0.24 docker-desktop <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d19h <none>
service/my-db NodePort 10.102.141.35 <none> 3306:30011/TCP 2m30s app=mysql
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/my-mysql 3/3 3 3 2m30s mysql mysql:latest app=mysql
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/my-mysql-bd9978b94 3 3 3 2m30s mysql mysql:latest app=mysql,pod-template-hash=bd9978b94
If a Pod never reaches the Running state, use kubectl describe pods to inspect the details.
3. A remote client outside the cluster connects to mysql successfully:
# mysql -h 192.168.1.6 -uroot -p1234 -P30011
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 8.2.0 MySQL Community Server - GPL
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> quit
Bye
1. The yaml files from this exercise can serve as base templates for further study.
2. MySQL storage can be made persistent, using a storageclass to configure persistent storage for the data;
3. For security, a secret can be configured to protect the database credentials;
4. An init container can also be configured to check that the pod's runtime environment is healthy;
5. The yaml files here can keep being extended with more features and advanced capabilities.
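As a sketch of point 4 (the names here are illustrative, not from this article), an init container can hold Pod startup until the database Service accepts connections:

```yaml
spec:
  initContainers:
  - name: wait-for-mysql          # illustrative name
    image: busybox:1.36
    # block the main container until the my-db Service accepts TCP connections
    command: ["sh", "-c", "until nc -z my-db 3306; do sleep 2; done"]
```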
A ConfigMap is an API object for storing non-confidential data as key-value pairs. Pods can consume a ConfigMap as environment variables, command-line arguments, or configuration files in a volume. A ConfigMap decouples environment-specific configuration from container images, making application config easy to change. Note, however, that a ConfigMap provides no secrecy or encryption; if the data you want to store is confidential, use a Secret, or a third-party tool that keeps your data private, rather than a ConfigMap.
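For example (an illustrative fragment, not from this article), a ConfigMap consumed as environment variables looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # illustrative name
data:
  LOG_LEVEL: "info"
---
# in the container spec of a Pod or Deployment:
#   envFrom:
#   - configMapRef:
#       name: app-config        # every key becomes an environment variable
```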
Both Secret and ConfigMap store configuration information; the difference is that a ConfigMap stores it in plain text, while a Secret holds sensitive information such as passwords, OAuth tokens, and ssh keys. Three Secret types are commonly used: Opaque (arbitrary user data), kubernetes.io/dockerconfigjson (docker-registry credentials), and kubernetes.io/tls (TLS certificates and keys).
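Equivalent to the kubectl create secret generic command used below, an Opaque Secret can also be declared in yaml; the stringData field accepts plain text, which the API server stores base64-encoded under data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-auth
type: Opaque
stringData:          # plain-text input, stored base64-encoded under data
  username: root
  password: "1234"
```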
Besides that, the data inside the container needs to be kept, or it is gone whenever the container restarts, so a data directory is needed to store the MySQL data: create a MySQL data folder on the local host and mount it into the MySQL container, making the MySQL data persistent.
Below we create a Secret object to pass the MySQL username and password into the image.
1. Create the secret object:
# kubectl create secret generic mysql-auth --from-literal=username=root --from-literal=password=1234
secret/mysql-auth created
# kubectl get secret mysql-auth
NAME TYPE DATA AGE
mysql-auth Opaque 2 14s
# kubectl get secret mysql-auth -o yaml
apiVersion: v1
data:
  password: MTIzNA==
  username: cm9vdA==
kind: Secret
metadata:
  creationTimestamp: "2023-11-18T07:48:50Z"
  name: mysql-auth
  namespace: default
  resourceVersion: "82836"
  uid: 7e767963-840f-4bb6-8e9c-f452478cc8d9
type: Opaque
# echo MTIzNA== | base64 -d
1234%
2. Deploy with the yaml file:
Change the deployment to reference the password value from mysql-auth, and add a volumes data volume for persistence:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mysql
  labels:
    app: mysql
spec:
  replicas: 1  # number of replicas
  selector:
    matchLabels:
      app: mysql
  template:  # the deployment generates pods from this template
    metadata:
      labels:
        app: mysql
    spec:
      containers:  # container definition
      - image: mysql:latest
        name: mysql
        ports:
        - containerPort: 3306
        volumeMounts:  # mount the data volume
        - name: mysql-data
          mountPath: "/mysql"  # mount point inside the container
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:  # read the password
            secretKeyRef:
              name: mysql-auth
              key: password  # reference the password value in mysql-auth
      volumes:  # data volumes
      - name: mysql-data
        hostPath:  # host path
          path: /Users/Shared/Data  # directory on the host
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  ports:
  - port: 3306
    nodePort: 30011
  selector:
    app: mysql
  type: NodePort  # reachable from outside the cluster, unlike ClusterIP
Run:
kubectl apply -f mysql-deployment.yaml
3. Verify:
# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-mysql-5cdb7f574f-8nwsb 1/1 Running 0 11s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d1h
service/my-db NodePort 10.102.104.177 <none> 3306:30011/TCP 11s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-mysql 1/1 1 1 11s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-mysql-5cdb7f574f 1 1 1 11s
# Enter the container and create a test file
# kubectl exec -it my-mysql-5cdb7f574f-8nwsb /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.4# touch /mysql/test.log
sh-4.4# exit
exit
command terminated with exit code 127
# The file is now visible on the corresponding node
# ls /Users/Shared/Data
test.log
Generally speaking, Redis can be deployed in three modes: standalone, sentinel, and cluster.
Overall, cluster mode is clearly superior to sentinel mode.
This exercise uses a redis image from a local registry:
1. Set up a local registry
(1) In a terminal or command prompt, create a local registry container with the following command:
docker run -d -p5000:5000 --restart=always --name registry -v /path/to/registry:/tmp/registry registry:2
Replace /path/to/registry with the path of the directory you chose for storing images.
(2) After running the command above, Docker automatically downloads the registry from Docker Hub and starts a local registry container; use the docker ps command to verify the container is running.
(3) Local images can now be pushed to the local registry. First, tag the image you want to push with the docker tag command:
# docker tag redis:latest localhost:5000/redis:latest
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost:5000/redis latest 720b987633ae 3 weeks ago 158MB
redis latest 720b987633ae 3 weeks ago 158MB
(4) Push the image to the local registry with the docker push command:
docker push localhost:5000/redis:latest
(5) After the push completes, the image can be pulled from the local registry with the docker pull command:
docker pull localhost:5000/redis:latest
2. Configure the configmap
In k8s the configuration content goes into a configmap, and a volume maps the configmap into the relevant path of the redis instance's Pod; this is how the redis runtime parameters are configured.
First, look at the configmap yaml file for the redis configuration, redis-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
Create the configmap:
# kubectl apply -f redis-config.yaml
configmap/redis-config created
# Confirm the configmap contents
# kubectl describe configmap redis-config
Name: redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru
BinaryData
====
Events: <none>
3. Start a container from the local image with a kubectl yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-redis
  labels:
    app: redis
spec:
  replicas: 1  # number of replicas
  selector:
    matchLabels:
      app: redis
  template:  # the deployment generates pods from this template
    metadata:
      labels:
        app: redis
    spec:
      containers:  # container definition
      - image: localhost:5000/redis:latest
        name: redis
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: redis-config
          items:
          - key: redis-config
            path: redis.conf
---
apiVersion: v1
kind: Service
metadata:
  name: my-redis
spec:
  ports:
  - port: 6379
    nodePort: 30012
  selector:
    app: redis
  type: NodePort  # reachable from outside the cluster, unlike ClusterIP
Run: kubectl apply -f redis-deployment.yaml
A redis cluster ultimately needs its own files per node: redis.conf, the data directory, and so on.
redis.conf can be one shared file, but the other two are certainly different per node, so deploying the Pods of one node with a Volume that mounts a single shared folder cannot satisfy this. That is what brings in StorageClass (SC) and PersistentVolume (PV).
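For orientation (an illustrative fragment; the StatefulSet below requests its storage through volumeClaimTemplates instead), a standalone PersistentVolumeClaim asks a StorageClass for storage and is bound to a matching PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-claim      # illustrative name
spec:
  storageClassName: redis-sc  # must match the PV's storageClassName
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 200M           # bound to an Available PV of at least this size
```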
1. Create the SC
# cat redis-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-sc
provisioner: nfs-storage
# kubectl apply -f redis-sc.yaml
# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
hostpath (default) docker.io/hostpath Delete Immediate false 3d3h
redis-sc nfs-storage Delete Immediate false 25m
2. Create PVs associated with the SC
# cat redis-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  storageClassName: redis-sc
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.6
    path: "/User/Shard/Data/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  storageClassName: redis-sc
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.6
    path: "/User/Shard/Data/pv2"
The PV is named nfs-pv1, its storageClassName is redis-sc, its capacity is 200M, and its accessModes allow it to be read and written by multiple nodes.
It points at the nfs server 192.168.1.6 with the exported folder given in path (matching the installed nfs server). We create 2 PVs here.
# kubectl apply -f redis-pv.yaml
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv1 200M RWX Retain Available redis-sc 6s
nfs-pv2 200M RWX Retain Available redis-sc 6s
3. Create the Redis cluster nodes with a StatefulSet.
RC, Deployment, and DaemonSet are all aimed at stateless services; the IPs, names, and start/stop order of the Pods they manage are random. So what is a StatefulSet? As the name suggests, it is a collection with state, managing all stateful services, such as MySQL or MongoDB clusters.
A StatefulSet is essentially a variant of Deployment that became GA in v1.9. It exists to solve the stateful-service problem: the Pods it manages have fixed Pod names and a fixed start/stop order. In a StatefulSet the Pod name serves as the network identity (hostname), and shared storage must be used.
A Deployment is paired with a service; a StatefulSet is paired with a headless service. A headless service differs from a normal service in that it has no Cluster IP: resolving its name returns the Endpoint list of all Pods behind that Headless Service.
On top of the Headless Service, the StatefulSet also creates a DNS domain name for every Pod replica it controls, in the format:
$(pod.name).$(headless server.name).${namespace}.svc.cluster.local
In other words, for stateful services it is best to mark nodes with fixed network identities (such as domain names); this also needs support from the application (Zookeeper, for example, supports writing host domain names into its config file).
Based on the Headless Service (a Service with no Cluster IP), a StatefulSet gives each Pod a stable network identity (the Pod's hostname and DNS Records) that stays the same after the Pod is rescheduled. Combined with PV/PVC, a StatefulSet also provides stable persistent storage: even after a Pod is rescheduled, it can still reach its original persisted data.
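A headless service for such a StatefulSet could be declared as below (an illustrative fragment; the article's own Service further down uses NodePort to stay reachable from outside the cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-redis-headless     # illustrative name
spec:
  clusterIP: None             # "headless": DNS resolves to the Pod endpoints directly
  selector:
    app: redis
  ports:
  - port: 6379
# a Pod is then reachable at e.g. my-redis-0.my-redis-headless.default.svc.cluster.local
```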
Below is the architecture for deploying Redis with a StatefulSet: both Master and Slave run as replicas of the StatefulSet, data is persisted through PVs, and the whole thing is exposed as one Service that accepts client requests.
# cat redis.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-redis
  labels:
    app: redis
spec:
  serviceName: my-redis  # required for a StatefulSet: its governing service
  replicas: 1  # number of replicas
  selector:
    matchLabels:
      app: redis
  template:  # the StatefulSet generates pods from this template
    metadata:
      labels:
        app: redis
    spec:
      containers:  # container definition
      - image: redis:latest
        name: redis
        ports:
        - containerPort: 6379
        volumeMounts:  # mount the volumes
        - name: redis-conf
          mountPath: "/etc/redis/redis.conf"
        - name: redis-data
          mountPath: "/data"
      volumes:  # volumes
      - name: redis-conf
        hostPath:
          path: "/Users/Shared/Data/redis.conf"
          type: FileOrCreate
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 200M
      storageClassName: redis-sc
---
apiVersion: v1
kind: Service
metadata:
  name: my-redis
spec:
  ports:
  - port: 6379
    nodePort: 30012
  selector:
    app: redis
  type: NodePort  # reachable from outside the cluster, unlike ClusterIP
1. Start the mysql and redis dependencies:
# kubectl apply -f mysql-deployment.yaml
# kubectl apply -f redis-deployment.yaml
# kubectl get all
2. Build the springboot application image from the dockerfile:
dockerfile:
FROM openjdk:8-jdk-alpine
COPY target/demospringboot-0.0.1-SNAPSHOT.jar F:\docker
WORKDIR F:\docker
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "demospringboot-0.0.1-SNAPSHOT.jar"]
Run: docker build -f my-app-dockfile -t my-springboot-img:v1 .
3. Launch from a yaml file:
Because applications.properties points mysql and redis at localhost, the app would actually hit the container's own localhost and fail:
spring.datasource.url=jdbc:mysql://localhost:3306/mydatabase?createDatabaseIfNotExist=true
spring.redis.host=localhost
The fix is to override SpringBoot's localhost defaults when creating the Deployment in yaml, using environment variables that point at the Service names to reach; this also avoids problems when other containers restart and their IPs change:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-springboot
  labels:
    app: my-springboot
spec:
  replicas: 1  # number of replicas
  selector:
    matchLabels:
      app: my-springboot
  template:  # the deployment generates pods from this template
    metadata:
      labels:
        app: my-springboot
    spec:
      containers:  # container definition
      - image: my-springboot-img:v1
        name: my-springboot
        env:
        # database connection address
        - name: spring.datasource.url
          value: jdbc:mysql://my-db:3306/mydatabase?createDatabaseIfNotExist=true
        # redis connection address
        - name: spring.redis.host
          value: my-redis
        # log file path
        - name: logging.file.path
          value: /var/logs
        volumeMounts:
        - mountPath: /var/logs
          name: log-volume
      volumes:
      - name: log-volume
        hostPath:
          path: /Users/Shared/Data  # directory on the host
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: my-springboot
spec:
  ports:
  - port: 8080
    nodePort: 30013
  selector:
    app: my-springboot
  type: NodePort  # reachable from outside the cluster, unlike ClusterIP
Run: kubectl apply -f myspringboot-deployment.yaml
View:
# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/my-mysql-5cdb7f574f-gwcjb 1/1 Running 0 38m 10.1.0.95 docker-desktop <none> <none>
pod/my-redis-77956b54c6-vsqht 1/1 Running 0 2m56s 10.1.0.99 docker-desktop <none> <none>
pod/my-springboot-9ddcb9ff5-wzlr2 1/1 Running 0 8m27s 10.1.0.98 docker-desktop <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d <none>
service/my-db NodePort 10.103.19.0 <none> 3306:30011/TCP 38m app=mysql
service/my-redis NodePort 10.109.146.158 <none> 6379:30012/TCP 2m56s app=redis
service/my-springboot NodePort 10.103.209.218 <none> 8080:30013/TCP 8m27s app=my-springboot
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/my-mysql 1/1 1 1 38m mysql mysql:latest app=mysql
deployment.apps/my-redis 1/1 1 1 2m56s redis localhost:5000/redis:latest app=redis
deployment.apps/my-springboot 1/1 1 1 8m27s my-springboot my-springboot-img:v1 app=my-springboot
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/my-mysql-5cdb7f574f 1 1 1 38m mysql mysql:latest app=mysql,pod-template-hash=5cdb7f574f
replicaset.apps/my-redis-77956b54c6 1 1 1 2m56s redis localhost:5000/redis:latest app=redis,pod-template-hash=77956b54c6
replicaset.apps/my-springboot-9ddcb9ff5 1 1 1 8m27s my-springboot my-springboot-img:v1 app=my-springboot,pod-template-hash=9ddcb9ff5
View the service logs:
# kubectl logs service/my-springboot
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.15-SNAPSHOT)
2023-11-25 16:18:07.123 INFO 1 --- [ main] c.e.d.DemospringbootApplication : Starting DemospringbootApplication v0.0.1-SNAPSHOT using Java 1.8.0_212 on my-springboot-6f968db86b-fkdpl with PID 1 (/Users/l00277914/Downloads/demospringboot-0.0.1-SNAPSHOT.jar started by root in /Users/l00277914/Downloads)
2023-11-25 16:18:07.125 INFO 1 --- [ main] c.e.d.DemospringbootApplication : No active profile set, falling back to 1 default profile: "default"
2023-11-25 16:18:07.718 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode
2023-11-25 16:18:07.720 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data Redis repositories in DEFAULT mode.
2023-11-25 16:18:07.731 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 3 ms. Found 0 Redis repository interfaces.
2023-11-25 16:18:08.259 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2023-11-25 16:18:08.267 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2023-11-25 16:18:08.267 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.79]
2023-11-25 16:18:08.343 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2023-11-25 16:18:08.343 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1177 ms
2023-11-25 16:18:08.909 INFO 1 --- [ main] pertySourcedRequestMappingHandlerMapping : Mapped URL path [/v2/api-docs] onto method [springfox.documentation.swagger2.web.Swagger2Controller#getDocumentation(String, HttpServletRequest)]
2023-11-25 16:18:09.457 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2023-11-25 16:18:09.669 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2023-11-25 16:18:09.723 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2023-11-25 16:18:09.723 INFO 1 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2023-11-25 16:18:09.738 INFO 1 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2023-11-25 16:18:09.758 INFO 1 --- [ main] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2023-11-25 16:18:10.012 INFO 1 --- [ main] c.e.d.DemospringbootApplication : Started DemospringbootApplication in 3.116 seconds (JVM running for 3.397)
2023-11-25 16:18:10.013 INFO 1 --- [ main] c.e.d.DemospringbootApplication : ======== 自动初始化数据库开始 ========
2023-11-25 16:18:10.188 INFO 1 --- [ main] c.e.d.DemospringbootApplication : ======== 自动初始化数据库结束 ========
The corresponding log file has been generated on the host:
# ls /Users/Shared/Data/spring.log
/Users/Shared/Data/spring.log
Verified through the mapped port: the login page loads successfully:
http://localhost:30013/login
The full master node information is as follows:
# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
docker-desktop Ready control-plane 10d v1.27.2 192.168.65.3 <none> Docker Desktop 6.4.16-linuxkit docker://24.0.6
# kubectl describe node docker-desktop
Name: docker-desktop
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=docker-desktop
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Nov 2023 16:18:44 +0800
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: docker-desktop
AcquireTime: <unset>
RenewTime: Sun, 26 Nov 2023 10:52:28 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 26 Nov 2023 10:52:01 +0800 Wed, 15 Nov 2023 16:18:43 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 26 Nov 2023 10:52:01 +0800 Wed, 15 Nov 2023 16:18:43 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 26 Nov 2023 10:52:01 +0800 Wed, 15 Nov 2023 16:18:43 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 26 Nov 2023 10:52:01 +0800 Wed, 15 Nov 2023 16:18:44 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.65.3
Hostname: docker-desktop
Capacity:
cpu: 9
ephemeral-storage: 61202244Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8039836Ki
pods: 110
Allocatable:
cpu: 9
ephemeral-storage: 56403987978
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 7937436Ki
pods: 110
System Info:
Machine ID: a096ab83-21d3-4150-a901-5300f59de4bc
System UUID: a096ab83-21d3-4150-a901-5300f59de4bc
Boot ID: 55ef270c-490d-4aad-8c96-a2d1b1b80b5e
Kernel Version: 6.4.16-linuxkit
OS Image: Docker Desktop
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.27.2
Kube-Proxy Version: v1.27.2
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default my-mysql-5cdb7f574f-wkjv6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9h
default my-redis-77956b54c6-422zz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9h
default my-springboot-9ddcb9ff5-plp6q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9h
kube-system coredns-5d78c9869d-fqtjh 100m (1%) 0 (0%) 70Mi (0%) 170Mi (2%) 10d
kube-system coredns-5d78c9869d-hrt4g 100m (1%) 0 (0%) 70Mi (0%) 170Mi (2%) 10d
kube-system etcd-docker-desktop 100m (1%) 0 (0%) 100Mi (1%) 0 (0%) 10d
kube-system kube-apiserver-docker-desktop 250m (2%) 0 (0%) 0 (0%) 0 (0%) 10d
kube-system kube-controller-manager-docker-desktop 200m (2%) 0 (0%) 0 (0%) 0 (0%) 10d
kube-system kube-proxy-rlflb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d
kube-system kube-scheduler-docker-desktop 100m (1%) 0 (0%) 0 (0%) 0 (0%) 10d
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d
kube-system vpnkit-controller 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (9%) 0 (0%)
memory 240Mi (3%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events: <none>
nginx.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: web_server
  replicas: 1
  template:
    metadata:
      labels:
        app: web_server
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  ports:
  - port: 80
    nodePort: 30014
  selector:
    app: web_server
  type: NodePort  # reachable from outside the cluster, unlike ClusterIP