
Linux RAID and LVM Configuration

Let's start with RAID.

RAID stands for Redundant Array of Independent Disks.

Simply put, RAID combines multiple independent physical disks into a single disk group, also called a logical disk, to improve storage performance and protect data. RAID has two main strengths: speed and safety.

RAID is organized into levels; there are seven basic ones.

 

RAID 0, called striping, offers very high storage performance. It needs at least two disks, and data is split evenly across them: given 64 KB of data, each of two disks stores 32 KB. The drawback is that if any disk fails, the data cannot be recovered.

 

RAID 1, called mirroring, maximizes availability and recoverability. With 64 KB of data on a two-disk mirror, each disk holds an identical 64 KB copy; at least two disks are required. The drawback is low disk utilization; the advantages are high data safety and easy recovery.

 

There are also the combined levels RAID 0+1 and RAID 1+0, which we won't cover here.

 

RAID 3 splits data into pieces and, using a parity algorithm, stores them on N+1 disks: the usable capacity equals the sum of N disks, while the (N+1)th disk holds the parity information. If any one of the N+1 disks fails, the original data can be rebuilt from the remaining N, so the array can keep working in degraded mode (for example, capturing and playing back media); once a replacement disk is installed, the parity information is rebuilt. Since the odds of more than one disk in an array failing at the same time are small, safety is reasonably assured. Read/write speed is comparatively slow, so RAID 3 suits applications with large files and high safety requirements, such as video editing, broadcast playout servers, and large databases.

RAID 5 is a scheme that balances storage performance, data safety, and storage cost. It requires at least three disks. Data blocks and parity blocks are striped across all of the disks, for example:

        Disk A     Disk B     Disk C
          1          2        P(1,2)
          3        P(3,4)       4
        P(5,6)       5          6

(The P entries are parity blocks.)

As the layout shows, every disk stores data, and every disk also holds some of the parity, distributed across the stripes. If one disk fails, its contents can be rebuilt from the others; if two disks fail at once, the data cannot be recovered.
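The recovery trick behind RAID 5 is plain XOR: each parity block is the XOR of the data blocks in its stripe, so XOR-ing the surviving blocks reproduces the missing one. A tiny arithmetic sketch, with block contents shrunk to single integers for illustration:

```shell
# Two data "blocks", shrunk to integers for the demo:
a=$((0xA5))
b=$((0x3C))

# The parity block is their XOR:
p=$(( a ^ b ))

# If the disk holding "a" dies, XOR the survivors to rebuild it:
rebuilt=$(( p ^ b ))
echo "$rebuilt"    # 165, i.e. 0xA5 -- the lost block is back
```

This is also why losing two members of a RAID 5 array is fatal: with two unknowns, one XOR equation per stripe is not enough to solve for either.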

RAID 6 requires at least four disks. It stripes data like RAID 5 but keeps two independent parity blocks per stripe, distributed across the disks, so the array survives the failure of any two disks.

 

 

Now let's set this up and try it out, simulating RAID 0, RAID 1, and RAID 5 in a virtual machine.

First add a disk and take a look at it. (My VM already has one disk, sda.) The partition type we will use is fd (Linux raid autodetect).

Part 1: Partitioning

[root@server28 ~]# fdisk -l /dev/sdb     (list the disk's partitions)

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes     (the newly added disk; output for sda trimmed)

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

 

[root@server28 ~]# fdisk /dev/sdb     (partition the disk and set the partition types)

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-2610, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +1G

 

Command (m for help): p

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1         123      987966   83  Linux

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 2

First cylinder (124-2610, default 124):

Using default value 124

Last cylinder or +size or +sizeM or +sizeK (124-2610, default 2610): +1G

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 3

First cylinder (247-2610, default 247):

Using default value 247

Last cylinder or +size or +sizeM or +sizeK (247-2610, default 2610): +1G

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

e

Selected partition 4     (this creates the extended partition)

First cylinder (370-2610, default 370):

Using default value 370

Last cylinder or +size or +sizeM or +sizeK (370-2610, default 2610):

Using default value 2610

 

Command (m for help): n

First cylinder (370-2610, default 370):

Using default value 370

Last cylinder or +size or +sizeM or +sizeK (370-2610, default 2610): +1G

 

Command (m for help): n

First cylinder (493-2610, default 493):

Using default value 493

Last cylinder or +size or +sizeM or +sizeK (493-2610, default 2610): +1G

 

Command (m for help): n

First cylinder (616-2610, default 616):

Using default value 616

Last cylinder or +size or +sizeM or +sizeK (616-2610, default 2610): +1G

 

Command (m for help): t

Partition number (1-7): 7

Hex code (type L to list codes): fd

Changed system type of partition 7 to fd (Linux raid autodetect)

 

Command (m for help): t

Partition number (1-7): 6

Hex code (type L to list codes): fd

Changed system type of partition 6 to fd (Linux raid autodetect)

 

Command (m for help): t

Partition number (1-7): 5

Hex code (type L to list codes): fd

Changed system type of partition 5 to fd (Linux raid autodetect)

 

Command (m for help): t

Partition number (1-7): 3

Hex code (type L to list codes): fd

Changed system type of partition 3 to fd (Linux raid autodetect)

 

Command (m for help): t


Partition number (1-7): 2

Hex code (type L to list codes): fd

Changed system type of partition 2 to fd (Linux raid autodetect)

 

Command (m for help): t

Partition number (1-7): 1

Hex code (type L to list codes): fd

Changed system type of partition 1 to fd (Linux raid autodetect)

 

Command (m for help): n

First cylinder (739-2610, default 739):

Using default value 739

Last cylinder or +size or +sizeM or +sizeK (739-2610, default 2610): +1G

 

Command (m for help): t

Partition number (1-8): 8

Hex code (type L to list codes): fd

Changed system type of partition 8 to fd (Linux raid autodetect)

 

Command (m for help): p

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1         123      987966   fd  Linux raid autodetect

/dev/sdb2             124         246      987997+  fd  Linux raid autodetect

/dev/sdb3             247         369      987997+  fd  Linux raid autodetect

/dev/sdb4             370        2610    18000832+   5  Extended

/dev/sdb5             370         492      987966   fd  Linux raid autodetect

/dev/sdb6             493         615      987966   fd  Linux raid autodetect

/dev/sdb7             616         738      987966   fd  Linux raid autodetect

/dev/sdb8             739         861      987966   fd  Linux raid autodetect

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

[root@server28 ~]# fdisk -l /dev/sdb

 

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1         123      987966   fd  Linux raid autodetect

/dev/sdb2             124         246      987997+  fd  Linux raid autodetect

/dev/sdb3             247         369      987997+  fd  Linux raid autodetect

/dev/sdb4             370        2610    18000832+   5  Extended

/dev/sdb5             370         492      987966   fd  Linux raid autodetect

/dev/sdb6             493         615      987966   fd  Linux raid autodetect

/dev/sdb7             616         738      987966   fd  Linux raid autodetect

/dev/sdb8             739         861      987966   fd  Linux raid autodetect

 

 

Part 2: Creating RAID 0

 

[root@server28 ~]# mdadm -v -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{1,2}

mdadm: chunk size defaults to 64K

mdadm: array /dev/md0 started.

[root@server28 ~]# cat /proc/mdstat

Personalities : [raid0]

md0 : active raid0 sdb2[1] sdb1[0]

      1975744 blocks 64k chunks

     

unused devices: <none>    

[root@server28 ~]# partprobe

Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

[root@server28 ~]# mkfs.ext3 /dev/md0

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

247296 inodes, 493936 blocks

24696 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=507510784

16 block groups

32768 blocks per group, 32768 fragments per group

15456 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912

 

Writing inode tables: done                           

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 33 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@server28 ~]# mkdir /mnt/md0

[root@server28 ~]# mount /dev/md0  /mnt/md0

[root@server28 ~]# df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vol0-root

                       18G  5.7G   11G  35% /

/dev/mapper/vol0-home

                      465M   11M  431M   3% /home

/dev/sda1              99M   21M   74M  22% /boot

tmpfs                 197M     0  197M   0% /dev/shm

/dev/hdc              3.8G  3.8G     0 100% /media/CentOS_5.4_Final

/dev/md0              1.9G   35M  1.8G   2% /mnt/md0

[root@server28 ~]# mdadm --detail /dev/md0

/dev/md0:

        Version : 0.90

  Creation Time : Sun Feb  7 21:27:23 2010

     Raid Level : raid0

     Array Size : 1975744 (1929.76 MiB 2023.16 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 0

    Persistence : Superblock is persistent

 

    Update Time : Sun Feb  7 21:27:23 2010

          State : clean

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

 

     Chunk Size : 64K

 

           UUID : 97411d81:e1c1f23f:2b77aa26:7ee9a3cc

         Events : 0.1

 

    Number   Major   Minor   RaidDevice State

       0       8       17        0      active sync   /dev/sdb1

       1       8       18        1      active sync   /dev/sdb2

   

 

Part 3: Creating RAID 1

 

 

[root@server28 ~]# mdadm -v -C /dev/md1 -a yes -l 1 -n 2  /dev/sdb{3,5}

mdadm: size set to 987840K

mdadm: array /dev/md1 started.

[root@server28 ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md1 : active raid1 sdb5[1] sdb3[0]

      987840 blocks [2/2] [UU]

     

md0 : active raid0 sdb2[1] sdb1[0]

      1975744 blocks 64k chunks

     

unused devices: <none>

[root@server28 ~]# partpobe

bash: partpobe: command not found

[root@server28 ~]# partprobe

Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

[root@server28 ~]# mkfs.ext3  /dev/md1

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

123648 inodes, 246960 blocks

12348 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=255852544

8 block groups

32768 blocks per group, 32768 fragments per group

15456 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376

 

Writing inode tables: done                           

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 26 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@server28 ~]# mkdir /mnt/md1

[root@server28 ~]# mount /dev/md1  /mnt/md1

[root@server28 ~]# cd mnt/md1

bash: cd: mnt/md1: No such file or directory

[root@server28 ~]# cd /mnt/md1

[root@server28 md1]# ll

total 16

drwx------ 2 root root 16384 Feb  7 21:40 lost+found

[root@server28 md1]# mdadm --detail /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Sun Feb  7 21:37:54 2010

     Raid Level : raid1

     Array Size : 987840 (964.85 MiB 1011.55 MB)

  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Sun Feb  7 21:41:53 2010

          State : clean

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

 

           UUID : a84e5afd:e5147d94:bd05736f:cb8766fd

         Events : 0.2

 

    Number   Major   Minor   RaidDevice State

       0       8       19        0      active sync   /dev/sdb3

       1       8       21        1      active sync   /dev/sdb5

 

Part 4: Creating RAID 5

 

     mdadm -v -C /dev/md5 -a yes -l 5 -n  3  /dev/sdb{6,7,8}

     cat /proc/mdstat

     partprobe

     mkfs.ext3  /dev/md5

     mkdir /mnt/md5

     mount /dev/md5  /mnt/md5

     mdadm --detail /dev/md5

     mdadm --detail --scan  >  /etc/mdadm.conf     (save the array configuration)

That's it: the RAID arrays are built and ready to use.
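With the arrays running, it's worth rehearsing a disk failure before one happens for real. The following sketch marks a member of the RAID 1 mirror as faulty, removes it, and adds it back; the device names assume the md1 array built above, and on real hardware the re-added partition would be a fresh replacement disk.

```shell
# Simulate a member failure in the RAID 1 array (/dev/md1):
mdadm /dev/md1 --fail /dev/sdb3      # mark sdb3 as faulty
mdadm /dev/md1 --remove /dev/sdb3    # pull it out of the array
cat /proc/mdstat                     # md1 is now degraded: [2/1] [_U]

# Add a replacement; the kernel resynchronizes the mirror in the background:
mdadm /dev/md1 --add /dev/sdb3
mdadm --detail /dev/md1              # State returns to "clean" after resync
```

Because RAID 1 keeps a full copy on the surviving disk, /mnt/md1 stays mounted and usable throughout the whole exercise.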

 

 

 

 

 

Next, let's look at LVM.

Linux provides the Logical Volume Manager (LVM).

LVM is a mechanism for managing disk partitions under Linux. It is a logical layer built on top of disks and partitions that makes partition management far more flexible. With LVM, an administrator can combine several partitions into a volume group that acts as a storage pool, create logical volumes inside it at will, and build file systems on those logical volumes. Volume group sizes are easy to adjust, and storage can be named, managed, and allocated by purpose, e.g. "development" and "sales", instead of by physical device names like "sda" and "sdb". When a new disk is added, there is no need to move files onto it to make use of the new space; the file system can simply be extended across the new disk.

 

LVM的术语

·         物理存储介质(The physical media
这里指系统的存储设备:硬盘,如:/dev/hda1/dev/sda等等,是存储系统最低层的存储单元。

·         物理卷(physical volume
物理卷就是指硬盘分区或从逻辑上与磁盘分区具有同样功能的设备(RAID),是LVM的基本存储逻辑块,但和基本的物理存储介质(如分区、磁盘等)比较,却包含有与LVM相关的管理参数。

·         卷组(Volume Group
LVM
卷组类似于非LVM系统中的物理硬盘,其由物理卷组成。可以在卷组上创建一个或多个“LVM分区(逻辑卷),LVM卷组由一个或多个物理卷组成。

·         逻辑卷(logical volume
LVM
的逻辑卷类似于非LVM系统中的硬盘分区,在逻辑卷之上可以建立文件系统(比如/home或者/usr)

·         PEphysical extent
每一个物理卷被划分为称为PE(Physical Extents)的基本单元,具有唯一编号的PE是可以被LVM寻址的最小单元。PE的大小是可配置的,默认为4MB

·         LElogical extent
逻辑卷也被划分为被称为LE(Logical Extents) 的可被寻址的基本单位。在同一个卷组中,LE的大小和PE是相同的,并且一一对应。
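The extent bookkeeping is simple integer arithmetic; with the default 4 MB PE size, for instance, the 300 MB logical volume created later in this walkthrough occupies 75 extents:

```shell
pe_size_mb=4                              # default physical extent size
lv_size_mb=300                            # requested logical volume size
le_count=$(( lv_size_mb / pe_size_mb ))   # LEs map 1:1 onto PEs
echo "$le_count"    # 75 -- matches "Current LE  75" in the lvdisplay output
```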

 

With the concepts clear, let's get to work.

Continue partitioning on the same virtual machine:

# fdisk /dev/sdb

This time the partition type is 8e, the type reserved for Linux LVM.

# fdisk -l /dev/sdb

/dev/sdb9             862         984      987966   8e  Linux LVM

/dev/sdb10            985        1107      987966   8e  Linux LVM

/dev/sdb11           1108        1230      987966   8e  Linux LVM

 

Now let's create the LVM stack.

Step 1: create a physical volume.

[root@server28 ~]# pvcreate -v /dev/sdb9

    Set up physical volume for "/dev/sdb9" with 1975932 available sectors

    Zeroing start of device /dev/sdb9

  Physical volume "/dev/sdb9" successfully created

[root@server28 ~]# pvdisplay /dev/sdb9

  "/dev/sdb9" is a new physical volume of "964.81 MB"

  --- NEW Physical volume ---

  PV Name               /dev/sdb9

  VG Name              

  PV Size               964.81 MB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               Kpj8mS-jPWJ-20QR-06ue-NAXj-ME9m-IgOIs7

 

 

Step 2: create a volume group.

 

 

[root@server28 ~]# vgcreate -v vol2  /dev/sdb9

  /dev/cdrom: open failed: Read-only file system

    Wiping cache of LVM-capable devices

  /dev/cdrom: open failed: Read-only file system

  Attempt to close device '/dev/cdrom' which is not open.

    Adding physical volume '/dev/sdb9' to volume group 'vol2'

    Archiving volume group "vol2" metadata (seqno 0).

    Creating volume group backup "/etc/lvm/backup/vol2" (seqno 1).

  Volume group "vol2" successfully created

[root@server28 ~]# vgdisplay vol2

  --- Volume group ---

  VG Name               vol2

  System ID            

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               964.00 MB

  PE Size               4.00 MB

  Total PE              241

  Alloc PE / Size       0 / 0  

  Free  PE / Size       241 / 964.00 MB

  VG UUID               G3cviB-vMXH-DqzV-COU7-r63P-xReg-6mEO0g

 

 

 

Step 3: create a logical volume, make a file system, and mount it.

 

 

 

[root@server28 ~]# lvcreate -v -L 300M -n lv2  vol2

    Setting logging type to disk

    Finding volume group "vol2"

    Archiving volume group "vol2" metadata (seqno 1).

    Creating logical volume lv2

    Creating volume group backup "/etc/lvm/backup/vol2" (seqno 2).

    Found volume group "vol2"

    visited lv2

    visited lv2

    Creating vol2-lv2

    Loading vol2-lv2 table

    Resuming vol2-lv2 (253:6)

    Clearing start of logical volume "lv2"

    Creating volume group backup "/etc/lvm/backup/vol2" (seqno 2).

  Logical volume "lv2" created

[root@server28 ~]# lvdisplay  /dev/vol2/lv2

  --- Logical volume ---

  LV Name                /dev/vol2/lv2

  VG Name                vol2

  LV UUID                ZW1oe1-Foh6-0X92-YC5W-bR1L-Pq6H-o6Lh2m

  LV Write Access        read/write

  LV Status              available

  # open                 0

  LV Size                300.00 MB

  Current LE             75

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           253:6

[root@server28 ~]# partprobe

Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

[root@server28 ~]# mkfs -t ext3  /dev/vol2/lv2

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=1024 (log=0)

Fragment size=1024 (log=0)

76912 inodes, 307200 blocks

15360 blocks (5.00%) reserved for the super user

First data block=1

Maximum filesystem blocks=67633152

38 block groups

8192 blocks per group, 8192 fragments per group

2024 inodes per group

Superblock backups stored on blocks:

        8193, 24577, 40961, 57345, 73729, 204801, 221185

 

Writing inode tables: done                           

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@server28 ~]# mkdir /mnt/lv2

[root@server28 ~]# mount /dev/vol2/lv2  /mnt/lv2

[root@server28 ~]# cd /mnt/lv2

[root@server28 lv2]# ll

total 12

drwx------ 2 root root 12288 Feb  7 22:45 lost+found
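One of LVM's main payoffs is resizing after the fact. A sketch of growing lv2 while it stays mounted, drawing on the free extents left in vol2 (the commands are the standard LVM2 and e2fsprogs tools):

```shell
# Grow the logical volume by 100 MB out of vol2's free extents:
lvextend -L +100M /dev/vol2/lv2

# Grow the ext3 file system to fill the enlarged volume:
resize2fs /dev/vol2/lv2

df -h /mnt/lv2     # capacity is now about 400 MB
```

Shrinking is the dangerous direction: the file system must be reduced first (unmounted, with resize2fs) and only then the volume with lvreduce, or data past the new end is destroyed.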
