PV stands for Persistent Volume, a persistent storage volume. It describes, or defines, a storage volume and is usually created by an operations engineer.

PVC stands for Persistent Volume Claim, a request for persistent storage. It describes what kind of PV storage is wanted, or what conditions the PV must satisfy.

How a PVC is used: a volume of type PVC is defined inside a Pod, and the desired size is specified directly in that definition. The PVC must be bound to a matching PV; the PVC requests storage from PVs according to its spec, while the PVs themselves are backed by real storage. PV and PVC are storage abstractions provided by Kubernetes.
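As a minimal illustration of that Pod-to-PVC relationship (the names here are placeholders for the sketch; the real example, demo3-pod.yaml, appears later in this post), the volume in a Pod only references the claim by name, and Kubernetes resolves the bound PV behind it:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # placeholder name
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc   # PVC defined separately; the bound PV sits behind this claim
  containers:
  - name: app
    image: soscscs/myapp:v1
    volumeMounts:
    - name: data
      mountPath: /mnt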
With the PV/PVC model described above, the operations team has to create PVs in advance, and developers then define PVCs that bind to them one to one. If there are thousands of PVC requests, thousands of PVs have to be created, which is a heavy maintenance burden for operations. Kubernetes therefore provides a mechanism that creates PVs automatically, called StorageClass; it acts as a template for creating PVs.

Creating a StorageClass means defining the attributes of the PVs it will produce, such as storage type and size, plus the storage plugin needed to create them, such as Ceph or NFS. With these two pieces of information, Kubernetes can match a submitted PVC to the corresponding StorageClass, call the storage plugin that the StorageClass declares, and automatically create and bind the required PV.
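A bare-bones sketch of that relationship (class and provisioner names below are placeholders; the NFS-backed StorageClass actually used in this post appears further down): the StorageClass names a provisioner plugin, and a PVC selects it through storageClassName, after which the PV is created and bound automatically.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc               # placeholder class name
provisioner: example.com/nfs     # placeholder; must match what the provisioner registers
parameters:
  archiveOnDelete: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: example-sc   # ties the claim to the StorageClass above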
A PV is a resource in the cluster; a PVC is a request for, and a claim check against, that resource.

The interaction between a PV and a PVC follows this lifecycle:
Provisioning ---> Binding ---> Using ---> Releasing ---> Recycling
Across these stages, a PV can be in one of four states: Available (not yet bound to any PVC), Bound (bound to a PVC), Released (its PVC has been deleted but the resource has not been reclaimed), and Failed (automatic reclamation failed).

The concrete flow of a PV from creation to destruction is demonstrated step by step below.
Configure NFS on the server 192.168.80.100:

mkdir -p /data/volumes/v{1..5}

vim /etc/exports
/data/volumes/v1 192.168.80.0/24(rw,sync,no_root_squash)
/data/volumes/v2 192.168.80.0/24(rw,sync,no_root_squash)
/data/volumes/v3 192.168.80.0/24(rw,sync,no_root_squash)
/data/volumes/v4 192.168.80.0/24(rw,sync,no_root_squash)
/data/volumes/v5 192.168.80.0/24(rw,sync,no_root_squash)

exportfs -arv
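To confirm the exports are visible from the Kubernetes nodes, a quick check (command assumed, not shown in the original post):

showmount -e 192.168.80.100    # should list /data/volumes/v1 ... v5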
vim demo1-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:            # PV is a cluster-scoped resource (it can be used across namespaces), so no namespace is set in metadata
  name: pv001
spec:
  capacity:          # storage capacity, usually used to set the volume size
    storage: 1Gi     # size
  accessModes:       # access modes
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle    # reclaim policy
  #storageClassName: slow                    # custom storage class name, used to bind PVCs and PVs of the same class
  nfs:               # storage type
    path: /data/volumes/v1    # exported path to mount
    server: 192.168.80.100    # NFS server address
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v2
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v3
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v4
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v5
    server: 192.168.80.100

kubectl apply -f demo1-pv.yaml
vim demo2-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc001
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  #storageClassName: slow

kubectl apply -f demo2-pvc.yaml
kubectl apply -f demo2-pvc.yaml    # create the PVC again from the same file
kubectl get pv,pvc
# Note that even when the request matches, a new PVC will not bind to a PV that is still in the Released state.
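The reason is that a Released PV still carries the claimRef of the claim that previously owned it. If the PV is to be reused, that reference has to be cleared manually so the PV returns to Available. One way to do that (a sketch; pv003 is just an example name, use whichever PV shows Released in kubectl get pv):

kubectl patch pv pv003 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
kubectl get pv    # the PV should now report Available and can be bound again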
vim demo1-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:            # PV is a cluster-scoped resource (it can be used across namespaces), so no namespace is set in metadata
  name: pv001
spec:
  capacity:          # storage capacity, usually used to set the volume size
    storage: 1Gi     # size
  accessModes:       # access modes
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle    # reclaim policy
  #storageClassName: slow                    # custom storage class name, used to bind PVCs and PVs of the same class
  nfs:               # storage type
    path: /data/volumes/v1    # exported path to mount
    server: 192.168.80.100    # NFS server address
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v2
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle    # reclaim policy now set to Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v3
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v4
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v5
    server: 192.168.80.100

kubectl apply -f demo1-pv.yaml
kubectl apply -f demo2-pvc.yaml
kubectl get pv,pvc
demo1-pv.yaml is revised once more (Recycle is now enabled on pv003 and pv004, and the commented-out class name on pv005 is changed to liu):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v1
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v2
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v3
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  nfs:
    path: /data/volumes/v4
    server: 192.168.80.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Recycle
  #storageClassName: liu
  nfs:
    path: /data/volumes/v5
    server: 192.168.80.100

kubectl apply -f demo1-pv.yaml
vim demo2-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc001
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  #storageClassName: liu

kubectl apply -f demo2-pvc.yaml
vim demo3-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: demo3
  name: demo3-pod
spec:
  volumes:
  - name: tan-vol
    persistentVolumeClaim:
      claimName: mypvc001    # name of the PVC created above
  containers:
  - image: soscscs/myapp:v1
    name: demo
    ports:
    - containerPort: 80
    resources: {}
    volumeMounts:
    - name: tan-vol
      mountPath: /mnt/
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

kubectl apply -f demo3-pod.yaml
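A quick way to confirm the mount works (commands assumed, not in the original post) is to write through the mount point and check which PV the claim landed on:

kubectl exec -it demo3-pod -- sh -c 'echo test > /mnt/test.txt'
kubectl get pvc mypvc001            # shows which PV mypvc001 is bound to
# On the NFS server the file should appear under that PV's exported path,
# e.g. /data/volumes/v3/test.txt if the claim bound to pv003.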
kubectl delete pod demo3-pod
Upload nfs-client-provisioner.tar and the nfs-client.zip archive to the master node.

Upload nfs-client-provisioner.tar to the two worker nodes.
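The post does not show how the uploads are unpacked; presumably something along these lines:

docker load -i nfs-client-provisioner.tar    # on every node that received the image tarball
unzip nfs-client.zip                         # on the master, presumably containing the provisioner manifests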
On the master node:
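The contents of nfs-client-rbac.yaml are not reproduced in the post. A minimal sketch of what such a file typically contains, assuming the ServiceAccount is named nfs-client-provisioner and lives in the default namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-clusterrole
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-client-provisioner-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-clusterrole
  apiGroup: rbac.authorization.k8s.io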
kubectl apply -f nfs-client-rbac.yaml
kubectl get serviceaccounts

vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --feature-gates=RemoveSelfLink=false    # add this line
Because selfLink was disabled in version 1.20, dynamically generating PVs through the nfs provisioner fails on k8s 1.20+, so this feature gate has to be added.
cd /etc/kubernetes/manifests
mv kube-apiserver.yaml /tmp/    # moving the manifest out and back restarts the kube-apiserver static Pod
mv /tmp/kube-apiserver.yaml ./
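After the manifest is moved back, it is worth confirming that the API server has come back up (check assumed, not in the original post):

kubectl get pods -n kube-system | grep kube-apiserver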
On the NFS server:
vim /etc/exports
/opt/nfs 192.168.80.0/24(rw,sync,no_root_squash)

exportfs -avr

Check on the two worker nodes:
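The check itself is not shown; a likely candidate, assuming the NFS server is 192.168.80.100 as above:

showmount -e 192.168.80.100    # /opt/nfs should now be listed from both worker nodes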
# Create the NFS Provisioner

On the master node:
cd /root/day9/pv
vim nfs-client-provisioner.yaml
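The body of nfs-client-provisioner.yaml is not included in the post either. A sketch of a typical Deployment for the external-storage nfs-client provisioner, wired to the values used elsewhere in this post (provisioner name nfs-tan, NFS server 192.168.80.100, export /opt/nfs, ServiceAccount nfs-client-provisioner); treat the image tag and names as assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate                                  # only one copy of the provisioner should run at a time
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # the ServiceAccount from nfs-client-rbac.yaml
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-tan                            # must match the provisioner field of the StorageClass
        - name: NFS_SERVER
          value: 192.168.80.100
        - name: NFS_PATH
          value: /opt/nfs
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.80.100
          path: /opt/nfs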
kubectl apply -f nfs-client-provisioner.yaml

vim nfs-client-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-tan             # must match the provisioner name set in nfs-client-provisioner.yaml
parameters:
  archiveOnDelete: "true"        # archive (back up) the data instead of deleting it when the PVC is removed

kubectl apply -f nfs-client-storageclass.yaml
vim demo2-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc003
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-client-storageclass
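Applying the claim is not shown explicitly in the post, but presumably follows the same pattern as before:

kubectl apply -f demo2-pvc.yaml    # the claim should be provisioned dynamically through the StorageClass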
vim nfs-client-storageclass.yaml

kubectl delete -f nfs-client-storageclass.yaml && kubectl apply -f nfs-client-storageclass.yaml
kubectl get pv,pvc
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: demo3
  name: dem3-pod
spec:
  volumes:
  - name: scj-vol
    persistentVolumeClaim:
      claimName: mypvc003
  containers:
  - image: soscscs/myapp:v1
    name: myapp
    ports:
    - containerPort: 80
    resources: {}
    volumeMounts:
    - name: scj-vol
      mountPath: /mnt
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

kubectl apply -f pod.yaml
kubectl exec -it dem3-pod -- sh
cd /mnt
echo '123456' > liu.txt
kubectl delete pod dem3-pod
kubectl delete pvc mypvc003    # delete the PVC and check whether the data gets archived
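With archiveOnDelete set to "true", the nfs-client provisioner does not delete the backing directory; it renames it with an archived- prefix. A check on the NFS server (the directory name pattern is indicative; the exact suffix comes from the namespace, PVC, and PV names):

ls /opt/nfs                        # the PVC's directory should now appear as archived-default-mypvc003-pvc-<uid>
cat /opt/nfs/archived-*/liu.txt    # the file written from the pod should still be there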