A PV is a resource in the cluster; a PVC is a request for that resource and also serves as a claim check against it.
The interaction between a PV and a PVC follows this lifecycle:
Provisioning ---> Binding ---> Using ---> Releasing ---> Recycling
Across these stages, a PV can be in one of four states:
Available: the PV is free and has not been bound by any PVC
Bound: the PV has been bound to a PVC
Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: automatic reclamation of the PV failed
There are three reclaim policies: Retain, Delete and Recycle.
Retain keeps everything in place: Kubernetes does nothing and waits for the user to handle the data on the PV manually; once that is done, the PV is deleted by hand.
Delete: Kubernetes automatically deletes the PV together with its data.
Recycle: Kubernetes scrubs the data on the PV and then sets its status back to Available, so it can be bound by a new PVC.
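If a PV already exists, its reclaim policy can also be changed in place with kubectl patch; a minimal sketch, assuming a PV named pv001 (the name used later in this article):

kubectl patch pv pv001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv pv001    # the RECLAIM POLICY column should now show Retain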
View how a PV is defined:
kubectl explain pv
FIELDS:
  apiVersion: v1
  kind: PersistentVolume
  metadata:    # PV is a cluster-level resource (it can be used across namespaces), so no namespace is configured in its metadata
    name:
  spec:
View the spec of a PV:
kubectl explain pv.spec
spec:
  nfs:                 # storage type
    path:              # path of the exported volume to mount
    server:            # NFS server hostname or address
  accessModes:         # access modes; there are three, given as a list, so more than one mode can be declared
  - ReadWriteOnce      # (RWO) read-write, but the volume can only be mounted by a single node
  - ReadOnlyMany       # (ROX) read-only, can be mounted by many Pods
  - ReadWriteMany      # (RWX) read-write, can be shared by many Pods
  # NFS supports all three; iSCSI does not support ReadWriteMany (iSCSI is a network storage technology that runs the SCSI protocol over an IP network); HostPath supports neither ReadOnlyMany nor ReadWriteMany.
  capacity:            # storage capacity, normally used to set the size
    storage: 2Gi       # requested size
  storageClassName:    # custom storage class name, used to bind a PVC and a PV of the same class
  persistentVolumeReclaimPolicy: Retain    # reclaim policy (Retain/Delete/Recycle)
  # Retain: when the bound PVC is deleted, the PV is marked released (PVC and PV are unbound but the reclaim policy has not run yet); the data stays on the PV, but the PV is unusable until the data is handled manually and the PV is deleted.
  # Delete: deletes the backend storage resource behind the PV (only AWS EBS, GCE PD, Azure Disk and Cinder support this)
  # Recycle: scrubs the data, equivalent to running rm -rf /thevolume/* (only NFS and HostPath support this)
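Putting those fields together, a minimal complete PV manifest (all values here are illustrative, not the ones used in the demo below) could look like:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                 # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow           # illustrative class name; a PVC must use the same name to bind to this PV
  nfs:
    path: /data/vol1
    server: 192.168.135.189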

View how a PVC is defined:
kubectl explain pvc
KIND:     PersistentVolumeClaim
VERSION:  v1
FIELDS:
  apiVersion   <string>
  kind         <string>
  metadata     <Object>
  spec         <Object>
The key spec fields of a PV and a PVC have to match, such as the storage size (storage), access modes (accessModes) and storage class name (storageClassName).
kubectl explain pvc.spec
spec:
  accessModes:        # access modes; must be a subset of the PV's access modes
  resources:
    requests:
      storage:        # size of the requested storage
  storageClassName:   # storage class name, used to bind a PVC and a PV of the same class
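A PVC that would bind to the illustrative PV sketched above (again, the names are illustrative) is simply the mirror image of those fields:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example                # illustrative name
spec:
  accessModes: ["ReadWriteMany"]   # must be a subset of the PV's access modes
  storageClassName: slow           # must match the PV's storageClassName when one is set
  resources:
    requests:
      storage: 2Gi                 # the PV's capacity must be at least this large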
There are two ways to provision a PV: static or dynamic.
Static: the storage space is created in advance and fixed.
Dynamic: the storage space is created on demand through a StorageClass.
Environment used in this example:
master | 192.168.135.90
node1  | 192.168.135.196
node2  | 192.168.135.200
nfs    | 192.168.135.189
On the NFS server (192.168.135.189), install the NFS packages, then create and export the shared directories:
yum install -y nfs-utils rpcbind
mkdir -p /data/{vol1,vol2,vol3,vol4,vol5}
[root@nfs189 data]#chmod 777 vol1/
[root@nfs189 data]#chmod 777 vol2
[root@nfs189 data]#chmod 777 vol3
[root@nfs189 data]#chmod 777 vol4
[root@nfs189 data]#chmod 777 vol5
vim /etc/exports
/data/vol1 192.168.135.0/24(rw,no_root_squash,sync)
/data/vol2 192.168.135.0/24(rw,no_root_squash,sync)
/data/vol3 192.168.135.0/24(rw,no_root_squash,sync)
/data/vol4 192.168.135.0/24(rw,no_root_squash,sync)
/data/vol5 192.168.135.0/24(rw,no_root_squash,sync)

exportfs -rv
# When starting the NFS share service manually, start rpcbind first, then nfs
systemctl start rpcbind && systemctl enable rpcbind
systemctl start nfs && systemctl enable nfs

# Check whether the rpcbind port is open; rpcbind listens on TCP port 111 by default
netstat -anpt | grep rpcbind
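From the master or any node, you can confirm that the exports are visible before creating PVs (assuming nfs-utils is installed there as well):

showmount -e 192.168.135.189
# the export list should show /data/vol1 through /data/vol5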
Five PVs are defined below, each with its mount path, access modes and size.
[root@master demo]#cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/vol1
    server: 192.168.135.189
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/vol2
    server: 192.168.135.189
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/vol3
    server: 192.168.135.189
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/vol4
    server: 192.168.135.189
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/vol5
    server: 192.168.135.189
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi

Create the PVs and check them:
kubectl apply -f pv-demo.yaml
kubectl get pv

[root@master demo]#kubectl apply -f pv-demo.yaml
persistentvolume/pv001 unchanged
persistentvolume/pv002 unchanged
persistentvolume/pv003 unchanged
persistentvolume/pv004 unchanged
persistentvolume/pv005 configured
[root@master demo]#kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   3m3s
pv002   2Gi        RWO            Retain           Available                                   3m3s
pv003   2Gi        RWO,RWX        Retain           Available                                   3m3s
pv004   4Gi        RWO,RWX        Retain           Available                                   3m3s
pv005   5Gi        RWO,RWX        Retain           Available                                   32s

kubectl describe pv pv001
kubectl explain pv
Here the PVC's access mode is defined as ReadWriteMany, which must be among the access modes of the PVs defined above. The PVC requests 2Gi, so it automatically matches a PV that offers ReadWriteMany and 2Gi of capacity; once the match succeeds, the PVC's status becomes Bound.
vim pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pv-pvc
spec:
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc

Apply the manifest and check:
kubectl apply -f pvc-demo.yaml
kubectl get pv

[root@master demo]#kubectl apply -f pvc-demo.yaml
persistentvolumeclaim/mypvc created
pod/pv-pvc created
[root@master demo]#kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                           28m
pv002   2Gi        RWO            Retain           Available                                           28m
pv003   2Gi        RWO,RWX        Retain           Bound       default/mypvc                           28m
pv004   4Gi        RWO,RWX        Retain           Available                                           28m
pv005   5Gi        RWO,RWX        Retain           Available
kubectl get pv
kubectl describe pv pv003
Find the mount point on the NFS server and write a test page into it:

[root@nfs189 data]#cd vol3/
[root@nfs189 vol3]#ls
[root@nfs189 vol3]#echo "this is is a vol3" >> index.htm
Test access:
[root@master demo]#kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
nginx-7f6dd89d46-fhv4z   1/1     Running   4          2d4h   10.244.2.49   node2   <none>           <none>
pod-example              1/1     Running   2          23h    10.244.2.48   node2   <none>           <none>
pv-pvc                   1/1     Running   0          13m    10.244.1.31   node1   <none>           <none>
[root@master demo]#curl 10.244.1.31
this is is a vol3
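Tying back to the Retain reclaim policy described earlier: if this claim is removed later, the PV is not freed automatically. A sketch of the cleanup path:

kubectl delete pod pv-pvc
kubectl delete pvc mypvc
kubectl get pv pv003        # STATUS shows Released; the data is still on /data/vol3
# with the Retain policy, the data on the NFS export must be handled manually,
# and only then is the PV deleted by hand so the space can be reused
kubectl delete pv pv003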
Next, set up dynamic PV provisioning with a StorageClass and NFS. First create the ServiceAccount used by the NFS provisioner and the RBAC rules it needs (saved here as nfs-client-rbac.yaml; the file name is an assumption):

vim nfs-client-rbac.yaml
# Create the ServiceAccount that the NFS provisioner runs as
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
# Create the cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-clusterrole
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
# Bind the cluster role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-client-provisioner-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-clusterrole
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f nfs-client-rbac.yaml    # apply the ServiceAccount and RBAC manifest above
# Kubernetes 1.20+ disables selfLink by default, so dynamic PV creation through this nfs provisioner fails on those versions; the workaround is to re-enable it in the kube-apiserver manifest:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false    # add this line
    - --advertise-address=192.168.80.20
    ......
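kube-apiserver runs as a static Pod, so kubelet restarts it on its own once the manifest is saved; wait a moment and confirm it is back:

kubectl get pods -n kube-system | grep apiserver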
# Create the NFS Provisioner
vim nfs-client-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner    # use the ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage        # provisioner name; must match the provisioner field of the StorageClass
            - name: NFS_SERVER
              value: stor01             # hostname (or IP) of the NFS server to bind; adjust to your environment
            - name: NFS_PATH
              value: /opt/k8s           # exported directory on the NFS server
      volumes:                          # declare the NFS volume
        - name: nfs-client-root
          nfs:
            server: stor01
            path: /opt/k8s
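Apply the Deployment and make sure the provisioner Pod comes up:

kubectl apply -f nfs-client-provisioner.yaml
kubectl get pods | grep nfs-client-provisioner    # expect STATUS Running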

vim nfs-client-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-storage        # must match the PROVISIONER_NAME environment variable in the provisioner Deployment
parameters:
  archiveOnDelete: "false"      # "false" means the data is not archived when the PVC is deleted, i.e. the data is removed
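Apply the StorageClass and confirm it exists:

kubectl apply -f nfs-client-storageclass.yaml
kubectl get storageclass

The test manifest applied next requests storage through this StorageClass; a minimal sketch consistent with the names and output shown below (the busybox image and the sleep command are assumptions) would be:

vim test-pvc-pod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-client-storageclass    # request space from the StorageClass created above
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-storageclass-pod
spec:
  containers:
  - name: busybox
    image: busybox:latest           # assumed image; any image with a shell will do
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt               # mount point used in the test below
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-nfs-pvc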
kubectl apply -f test-pvc-pod.yaml
# The PVC automatically gets space through the StorageClass
kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
test-nfs-pvc   Bound    pvc-11670f39-782d-41b8-a842-eabe1859a456   1Gi        RWX            nfs-client-storageclass   2s

# Check whether the corresponding directory was created on the NFS server; automatically provisioned PVs are stored on the NFS server in directories named ${namespace}-${pvcName}-${pvName}
ls /opt/k8s/
default-test-nfs-pvc-pvc-11670f39-782d-41b8-a842-eabe1859a456

# Enter the Pod, write a file under the mounted directory /mnt, then check whether the file shows up on the NFS server
kubectl exec -it test-storageclass-pod sh
/ # cd /mnt/
/mnt # echo 'this is test file' > test.txt

# The file exists on the NFS server, inside the directory provisioned for this PVC, so the verification succeeded
cat /opt/k8s/default-test-nfs-pvc-pvc-11670f39-782d-41b8-a842-eabe1859a456/test.txt