Note: before doing this lab, install and configure the NFS service; see the previous post (k8s part 9) for details.

1. Clean up the environment first

```shell
kubectl get pod
kubectl delete -f nfs.yaml
kubectl get pod
kubectl get pv
kubectl get pvc
```

2. Write the PV manifest

```shell
vim pv.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfsdata
    server: 192.168.20.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfsdata
    server: 192.168.20.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfsdata
    server: 192.168.20.10
```

3. Apply the manifest

```shell
kubectl apply -f pv.yaml
```

4. Check the PVs

```shell
kubectl get pv
```

5. Create a PVC

```shell
vim pvc.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim   # PVC object
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce           # single-node read-write
  resources:
    requests:
      storage: 5Gi
```

6. Apply the manifest

```shell
kubectl apply -f pvc.yaml
```

7. Check the PVC and the PVs

```shell
kubectl get pvc
kubectl get pv
```

8. Create a Pod

```shell
vim pod1.yaml
```

```yaml
apiVersion: v1        # Pod that mounts the claim
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: myapp:v1
      name: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: pv1
  volumes:
    - name: pv1
      persistentVolumeClaim:
        claimName: pvc1
```

9. Apply the manifest

```shell
kubectl apply -f pod1.yaml
```

10. Check the Pod

```shell
kubectl get pod
kubectl get pod -o wide
```

11. Test it

```shell
curl 10.244.1.84
```

12. Modify the page inside the container

```shell
kubectl exec -it test-pd -- sh
cd /usr/share/nginx/html
echo www.linux.org > index.html
```

13. Check /nfsdata: the change has been synchronized to /nfsdata.

14. Delete the Pod and recreate it

```shell
kubectl delete pod test-pd
kubectl apply -f pod1.yaml
kubectl get pod
kubectl get pod -o wide
```

15. Test again

```shell
curl 10.244.1.85
```

After deleting and recreating the Pod, the original content is still served, which demonstrates that the data has been persisted.
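Binding works by matching the claim's `storageClassName`, access mode, and requested capacity against the available PVs, which is why pvc1 lands on pv1. As a sketch of the matching rules (the name `pvc2` and the 8Gi request are hypothetical, not part of the lab above), a claim like the following should bind to pv2 rather than pv1 or pv3:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2                 # hypothetical claim for illustration only
spec:
  storageClassName: nfs      # must match the PVs' storageClassName
  accessModes:
    - ReadWriteMany          # of the three PVs, only pv2 offers this mode
  resources:
    requests:
      storage: 8Gi           # pv1 (5Gi) is too small; pv2 (10Gi) satisfies it
```

If applied, `kubectl get pvc` should show this claim Bound to pv2, since pv3 only allows ReadOnlyMany.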
pvc1 and pv1 have been successfully bound.
The homepage was already added in the previous blog post, so it is reused directly here.
The access succeeds.
1. Create a working directory

```shell
mkdir nfs
cd nfs
```

2. Create a namespace

```shell
kubectl create namespace nfs-client-provisioner
kubectl get namespaces
```

3. Write the manifest

```shell
vim nfs-client-provisioner.yaml
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.20.10
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.20.10
            path: /nfsdata
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
```

4. Pull the image and push it to the registry (if you do not have Harbor, just pull it locally; this time I pulled it locally)

```shell
docker pull heegor/nfs-subdir-external-provisioner:v4.0.0
docker tag heegor/nfs-subdir-external-provisioner:v4.0.0 www.lyueyue.org/library/nfs-subdir-external-provisioner:v4.0.0
docker push www.lyueyue.org/library/nfs-subdir-external-provisioner:v4.0.0
```

5. Apply the manifest

```shell
kubectl apply -f nfs-client-provisioner.yaml
```

6. Check the Pod and the StorageClass

```shell
kubectl get pod -n nfs-client-provisioner
kubectl get sc
```

7. Write pvc.yaml

```shell
vim pvc.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

8. Apply pvc.yaml

```shell
kubectl apply -f pvc.yaml
```

9. Check the PV and the PVC

```shell
kubectl get pv
kubectl get pvc
```

10. Delete the PVC

```shell
kubectl delete -f pvc.yaml
```

11. Test: check /nfsdata. Because the StorageClass sets `archiveOnDelete: "true"`, the provisioned directory is archived rather than deleted when the claim is removed.
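With the dynamic provisioner in place, a Pod no longer needs a hand-written PV; mounting a claim whose `storageClassName` names the StorageClass is enough, and the provisioner creates the backing PV and a directory under /nfsdata automatically. A minimal sketch of consuming `test-claim` from step 7 (the Pod name `test-pd-2` is hypothetical, and the image is assumed to be the `myapp:v1` used in the first half of this post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-2               # hypothetical name for illustration
spec:
  containers:
    - name: nginx
      image: myapp:v1           # image reused from the static-PV example above
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim   # the dynamically provisioned claim from step 7
```

If applied while test-claim exists, `kubectl get pv` should show an automatically created volume bound to the claim, with its data living in a per-claim subdirectory of /nfsdata.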
Note: remember that the image must also be pulled on server2 and server3.