
K8S+DevOps架构师实战课 | Kubernetes Integration with Distributed Storage


Video source: the Bilibili course "Docker&k8s教程天花板,绝对是B站讲的最好的,这一套学会k8s搞定Docker 全部核心知识都在这里".

These notes organize the instructor's course content and my own lab notes as I study, shared here for everyone; they will be removed immediately upon any infringement claim. Thanks for your support!

Index post for the series: K8S+DevOps架构师实战课 | 汇总 (CSDN blog of 热爱编程的通信人)


A Quick Introduction to PV and PVC

The purpose of storage in k8s is to ensure that data is not lost when a Pod is rebuilt. The simplest ways to persist data are:

  • emptyDir

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - image: k8s.gcr.io/test-webserver
        name: webserver
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      - image: k8s.gcr.io/test-redis
        name: redis
        volumeMounts:
        - mountPath: /data
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}

    - The containers in the Pod share the volume's data.
    - The data only lives for the Pod's lifetime: when the Pod is destroyed, the data is lost.
    - When a container inside the Pod is automatically restarted, the data is not lost.
  • hostPath

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - image: k8s.gcr.io/test-webserver
        name: test-container
        volumeMounts:
        - mountPath: /test-pod
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /data
          # this field is optional
          type: Directory

hostPath is usually combined with a nodeSelector so that the Pod always lands on the node that holds the data (see the sketch after this list).

  • NFS storage

    ...
    volumes:
    - name: redisdata            # volume name
      nfs:                       # use an NFS network storage volume
        server: 192.168.31.241   # NFS server address
        path: /data/redis        # directory exported by the NFS server
        readOnly: false          # whether the volume is read-only
    ...
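As a concrete example of the hostPath + nodeSelector pairing mentioned above, here is a minimal sketch; the disktype=ssd label is hypothetical and would first be applied with kubectl label node <node-name> disktype=ssd:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeSelector:
    disktype: ssd          # hypothetical label: only nodes carrying it are eligible
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pod
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data          # host directory that must exist on the selected node
      type: Directory

With the selector in place, the Pod is always rescheduled onto the same node, so the hostPath data it sees stays consistent.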

Kubernetes supports a large number of volume types (see https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes ), each backed by a different storage implementation. To hide the details of the backend and make storage consumption by Pods simpler and more uniform, k8s introduces two new resource types: PV and PVC.

A PersistentVolume (PV) is an abstraction over the underlying storage. It is tied to the concrete shared-storage technology being used, such as Ceph, GlusterFS, or NFS, each of which is integrated through a plugin mechanism. For example, a PV backed by NFS:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s
    server: 172.21.51.55

  • capacity: the storage capacity. Currently only the amount of storage can be set (storage: 1Gi here); metrics such as IOPS and throughput may be added in the future.
  • accessModes: the access mode, describing how a consuming application may access the storage resource. The options are:
    - ReadWriteOnce (RWO): read-write, but mountable by a single node only
    - ReadOnlyMany (ROX): read-only, mountable by multiple nodes
    - ReadWriteMany (RWX): read-write, mountable by multiple nodes

  • persistentVolumeReclaimPolicy: the PV's reclaim policy. Currently only the NFS and HostPath types support reclaim policies:
    - Retain: keep the data; an administrator must clean it up manually
    - Recycle: scrub the data in the PV, equivalent to running rm -rf /thevolume/*
    - Delete: the backing storage deletes the volume together with the PV; this is typical of cloud providers' storage services, such as AWS EBS

Because a PV connects directly to the underlying storage, it provides storage resources to Pods in the same way that a cluster Node provides compute resources (CPU and memory). A PV is therefore not a namespaced resource; it is available at the cluster level. For a Pod to use a PV, a PVC must be created and mounted into the Pod.
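A quick way to see this scope difference (a sketch; kubectl api-resources output trimmed to the relevant columns, where NAMESPACED is false for persistentvolumes and true for persistentvolumeclaims):

$ kubectl api-resources | grep persistentvolume
persistentvolumeclaims   pvc   v1   true    PersistentVolumeClaim
persistentvolumes        pv    v1   false   PersistentVolume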

PVC is short for PersistentVolumeClaim. A PVC is a user's claim for storage; once created, it is bound one-to-one to a PV. A user who actually consumes storage does not need to care about the underlying implementation details and simply uses the PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

The Pod then uses it as follows:

...
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: web
    volumeMounts:            # mount a directory in the container onto the PVC-backed NFS directory
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:   # reference the PVC
      claimName: pvc-nfs
...

Hands-on: Managing an NFS Volume with PV and PVC

Environment Preparation

NFS server: 172.21.51.55

$ yum -y install nfs-utils rpcbind
# create the shared directory
$ mkdir -p /data/k8s && chmod 755 /data/k8s
$ echo '/data/k8s *(insecure,rw,sync,no_root_squash)'>>/etc/exports
$ systemctl enable rpcbind && systemctl start rpcbind
$ systemctl enable nfs && systemctl start nfs

Clients: the k8s cluster slave nodes

$ yum -y install nfs-utils rpcbind
$ mkdir /nfsdata
$ mount -t nfs 172.21.51.55:/data/k8s /nfsdata
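Optionally, verify the export and the mount before handing the path to Kubernetes; a quick sketch (the test file name is arbitrary):

# on a slave node
$ showmount -e 172.21.51.55
Export list for 172.21.51.55:
/data/k8s *
$ touch /nfsdata/test-from-client
# on the NFS server, the file should show up in the exported directory
$ ls /data/k8s
test-from-client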

PV and PVC Demo

$ cat pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s/nginx
    server: 172.21.51.55
$ kubectl create -f pv-nfs.yaml
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS
nfs-pv   1Gi        RWX            Retain           Available

During its lifecycle, a PV can be in one of four phases:

  • Available: the PV is available and has not yet been bound to any PVC
  • Bound: the PV has been bound to a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
$ kubectl create -f pvc.yaml
$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    nfs-pv   1Gi        RWX                           3s
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
nfs-pv   1Gi        RWX            Retain           Bound    default/pvc-nfs
# For binding to happen, the access modes must match, the PVC's requested storage must be no
# larger than the PV's capacity, and the storageClassName fields of the PV and PVC must be identical.
# The PersistentVolumeController keeps looping over every PVC to check whether it is already Bound.
# If it is not, the controller walks through all available PVs and tries to bind one of them to the
# unbound PVC. This way Kubernetes guarantees that every PVC a user submits is bound as soon as a
# suitable PV appears. "Binding" a PV to a PVC simply means writing the PV object's name into the
# PVC object's spec.volumeName field.
# check the NFS data directory
$ ls /nfsdata
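To see the binding from both sides, inspect the claimRef recorded on the PV and the volumeName recorded on the PVC (a sketch using the names above):

$ kubectl get pv nfs-pv -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'
default/pvc-nfs
$ kubectl get pvc pvc-nfs -o jsonpath='{.spec.volumeName}{"\n"}'
nfs-pv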

Create a Pod that mounts the PVC

$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pvc
spec:
  replicas: 1
  selector:              # Pod selector
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:    # mount a directory in the container onto the PVC-backed NFS directory
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:   # reference the PVC
          claimName: pvc-nfs
$ kubectl create -f deployment.yaml
# check the /usr/share/nginx/html directory inside the container
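To verify the mount end to end, a possible sketch (it assumes the PV path /data/k8s/nginx already exists on the NFS server; exec-ing via deploy/<name> needs a reasonably recent kubectl, otherwise exec into the Pod by name):

# on the NFS server
$ mkdir -p /data/k8s/nginx
$ echo 'hello from nfs' > /data/k8s/nginx/index.html
# from a machine with kubectl access
$ kubectl exec deploy/nfs-pvc -- cat /usr/share/nginx/html/index.html
hello from nfs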

Dynamic Provisioning with StorageClass

Creating PVs and PVCs by hand, with a strict one-to-one mapping between them, is tedious. The storageClass + provisioner approach lets a PVC automatically create and bind a matching PV.

Deployment: https://github.com/kubernetes-retired/external-storage

provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: luffy.com/nfs
        - name: NFS_SERVER
          value: 172.21.51.55
        - name: NFS_PATH
          value: /data/k8s
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.21.51.55
          path: /data/k8s

rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark this as the default StorageClass
  name: nfs
provisioner: luffy.com/nfs
parameters:
  archiveOnDelete: "true"
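Once applied, the default marker should show up next to the class name (a sketch; kubectl get storageclass columns trimmed):

$ kubectl apply -f storage-class.yaml
$ kubectl get storageclass
NAME            PROVISIONER     AGE
nfs (default)   luffy.com/nfs   5s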

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs
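Creating this claim should be all that is needed: the provisioner creates a PV, binds it, and makes a backing directory on the NFS export. A sketch of what to expect (the generated PV name and directory name will differ):

$ kubectl create -f pvc.yaml
$ kubectl get pvc test-claim
NAME         STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS
test-claim   Bound    pvc-<generated-uid>   1Mi        RWX            nfs
# on the NFS server: nfs-client-provisioner names the directory <namespace>-<pvcName>-<pvName>
$ ls /data/k8s
default-test-claim-pvc-<generated-uid>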

Hands-on: Integrating with Ceph Storage

For Ceph installation and usage, see http://docs.ceph.org.cn/start/intro/

Single-node quick install: https://blog.csdn.net/h106140873/article/details/90201379

# CephFS needs two pools, one for data and one for metadata
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_meta 128
ceph osd lspools
# create a CephFS
ceph fs new cephfs cephfs_meta cephfs_data
# check it
ceph fs ls
# ceph auth get-key client.admin
client.admin
        key: AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
# mount it
$ mount -t ceph 172.21.51.55:6789:/ /mnt/cephfs -o name=admin,secret=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
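A quick sanity check of the mount (a sketch; if the mount point does not exist yet, create it first with mkdir -p /mnt/cephfs before mounting):

$ df -h /mnt/cephfs
$ echo test > /mnt/cephfs/hello.txt && cat /mnt/cephfs/hello.txt
test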

Dynamic Provisioning with StorageClass

As with NFS above, manually creating PVs and PVCs one-to-one is tedious, so we again use storageClass + provisioner to let a PVC automatically create and bind a matching PV.

For cephfs, for example, a StorageClass of the following kind can be created:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.21.51.55:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: "kube-system"
  claimRoot: /volumes/kubernetes

NFS, ceph-rbd, and cephfs each have a corresponding provisioner.

Deploy cephfs-provisioner

$ cat external-storage-cephfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        imagePullPolicy: IfNotPresent
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccount: cephfs-provisioner
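After applying the manifest, it is worth confirming the provisioner Pod is running before moving on (a sketch; the Pod name suffix will differ):

$ kubectl apply -f external-storage-cephfs-provisioner.yaml
$ kubectl -n kube-system get pods -l app=cephfs-provisioner
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-xxxxxxxxx-xxxxx    1/1     Running   0          30s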

On a ceph monitor node, look up the admin account's key:

$ ceph auth list
$ ceph auth get-key client.admin
AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==

Create the secret

$ echo -n AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==|base64
QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
$ cat ceph-admin-secret.yaml
apiVersion: v1
data:
  key: QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: Opaque
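Equivalently, instead of applying the YAML above, the same Secret could be created imperatively and kubectl will do the base64 encoding itself (a sketch):

$ kubectl -n kube-system create secret generic ceph-admin-secret \
    --from-literal=key=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==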

Create the StorageClass

$ cat cephfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.21.51.55:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: "kube-system"
  claimRoot: /volumes/kubernetes

Dynamic PVC Verification and How It Works

Usage: create a PVC that specifies a storageClass and a storage size, and the storage is provisioned dynamically.

Create a PVC to test that a PV is generated automatically

$ cat cephfs-pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
$ kubectl create -f cephfs-pvc-test.yaml
$ kubectl get pv
pvc-2abe427e-7568-442d-939f-2c273695c3db   2Gi   RWO   Delete   Bound   default/cephfs-claim   dynamic-cephfs   1s

Create a Pod that mounts the CephFS data volume through the PVC

$ cat test-pvc-cephfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim
$ kubectl create -f test-pvc-cephfs.yaml
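To confirm the data really lands on CephFS, write a file from inside the Pod and look for it under the claimRoot on a host that has CephFS mounted (a sketch; the dynamically generated directory name will differ):

$ kubectl exec nginx-pod -- sh -c 'echo cephfs-test > /usr/share/nginx/html/index.html'
# on the host with CephFS mounted at /mnt/cephfs
$ find /mnt/cephfs/volumes/kubernetes -name index.html
# the file should appear inside a kubernetes-dynamic-pvc-... subdirectory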

What we call container persistence should really be understood as persistence of a volume on the host: since Pods can be destroyed and rebuilt at any time, data can only be persisted through a host-side volume that is then mounted into the Pod.

To let the data move with the Pod, the host-side volume usually lives in distributed storage: the host mounts remote storage (NFS, Ceph, OSS) locally, so the data is unaffected even if the Pod is rescheduled to another node.

The mount path of a Pod's volume on the host usually has the form:

/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~<volume-type>/<volume-name>
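The Pod ID here is the Pod's UID; a sketch of how to look it up and list the volume directories on the node running the Pod:

$ kubectl get pod nginx-pod -o jsonpath='{.metadata.uid}{"\n"}'
61ba43c5-d2e9-4274-ac8c-008854e4fa8e
# on the node where nginx-pod is scheduled
$ ls /var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/
kubernetes.io~cephfs   # plus a service-account token volume, depending on the cluster version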

Check the volume mounted for nginx-pod:

$ df -TH
/var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/
$ findmnt /var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/
172.21.51.55:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-ffe3d84d-c433-11ea-b347-6acc3cf3c15f
