New Hard Drive Space… Almost

Well, today I finally took the plunge and started setting up the new hard drives I bought at Christmas. I played around a lot with EVMS and LVM under VMware as a way to maintain the disks and add or remove space (and devices) as needed. Basically the situation I have is this.

I have 3 x 250G in a RAID5, which is half a terabyte of space (I get shivers saying that). I also have 2 x 120G in a RAID1 with data on it. What I want to do is move the data from the RAID1 onto the RAID5, then absorb the RAID1 disks into a single volume of data. Now technically I'm going to be changing the 2 x 120G RAID1 into a 3 x 120G RAID5, but that's a minor detail. I need(ed) a technology that would let me grow and shrink volumes with data on them without risking data loss, and would keep things going if/when I add more disks.

I’m hoping the end result will look something like this:

Current setup (almost: the first RAID5 in the diagram is technically still a RAID1…), with the existing data in /mnt/share and the new 500G array under LVM management, carved into the "storage" logical volume.


Physical:        [sda] [sdb] [hdb]    [hde] [hdg] [hdh]
Region:          [  R A I D 5    ]    [  R A I D 5    ]
Volume Group:                         [   /dev/vg     ]
Logical Volume:                       [/dev/vg/storage]
FileSystem:      [  /mnt/share/  ]    [ /mnt/storage  ]
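The LVM half of that picture can be sketched with the standard tools. Treat the device name /dev/md1 and the ext3 filesystem as illustrative assumptions, not confirmed details of the actual box:

```shell
# Illustrative sketch: put the new RAID5 array under LVM.
# /dev/md1 and ext3 are assumptions, not details from the real setup.
pvcreate /dev/md1                   # mark the array as an LVM physical volume
vgcreate vg /dev/md1                # create the volume group /dev/vg on it
lvcreate -l 100%FREE -n storage vg  # one logical volume filling the group
mkfs.ext3 /dev/vg/storage           # filesystem choice is a guess
mount /dev/vg/storage /mnt/storage
```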

After copying the data, expand the volume group and the logical volume inside it:


Physical:        [sda] [sdb] [hdb]    [hde] [hdg] [hdh]
Region:          [  R A I D 5    ]    [  R A I D 5    ]
Volume Group:    [              /dev/vg               ]
Logical Volume:  [          /dev/vg/storage           ]
FileSystem:      [           /mnt/storage             ]

In theory, at this point, if I get new drives I can add them into the system, or if I need to split /mnt/storage into /mnt/music and /mnt/videos, I can simply resize the logical volumes.
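The grow step above comes down to a handful of commands, assuming the rebuilt 3 x 120G array shows up as /dev/md0 (a placeholder name for illustration):

```shell
# Fold the second array into the existing volume group and grow everything.
# /dev/md0 is hypothetical; resize2fs assumes an ext2/ext3 filesystem.
pvcreate /dev/md0
vgextend vg /dev/md0                    # volume group now spans both arrays
lvextend -l +100%FREE /dev/vg/storage   # grow the LV into the new space
resize2fs /dev/vg/storage               # grow the filesystem to match
```

Carving out a separate /mnt/music later would be an lvcreate (and possibly an lvreduce of "storage") away.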

So far it's mostly gone well; I'm using a hybrid of LVM and EVMS to do the work. EVMS sometimes won't let me do things that I want to do, recognizes problems that the rest of the system doesn't, and makes me fix them by deleting data (this is why I do testing in VMware). That said, EVMS is much nicer than LVM for management, I have to admit. It automatically tells me how much I can expand something, instead of making me guess, do math, or put random numbers on the command line. LVM, on the other hand, is much easier as far as creating volumes and groups goes, and I can understand its architecture much better: there's a physical volume (the RAID array itself), the volume group that is the container of data, and inside that are "partitions" called logical volumes.
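Those three layers map directly onto three reporting commands, which is part of why the model is easy to hold in your head:

```shell
pvs   # physical volumes: the RAID arrays themselves
vgs   # volume groups: the containers built on top of them
lvs   # logical volumes: the "partitions" carved out of each group
```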

EVMS, on the other hand, has volumes, containers, regions and segments, some of which can go inside the others and some of which can't; some can be resized sometimes but not others, and sometimes shrinking/expanding just isn't allowed. Very scary if you're doing this for the long run.

Right now I’m waiting for the 3x250G RAID5 array to finish syncing. It looks like it’s going to be a loooong wait.


md1 : active raid5 dm-4[2] dm-3[1] dm-2[0]
488391808 blocks level 5, 32k chunk, algorithm 0 [3/3] [UUU]
[=>...................] resync = 8.7% (21275520/244195904) finish=3458.7min speed=1073K/sec

And yes, that's 57 hours to do the resync 🙁 I've increased /proc/sys/dev/raid/speed_limit_max as well, and made sure all the hdparm tricks were enabled. Good to know this is giving me a trial run for the new UFies.org box though. And yes, 740G is more hard drive space than I can possibly fill up….. this week 🙂
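The kernel's estimate checks out if you redo the arithmetic from the mdstat line yourself (all numbers taken straight from the output above):

```shell
# (blocks total - blocks done) / speed = time remaining
remaining_kb=$(( 244195904 - 21275520 ))
echo "$(( remaining_kb / 1073 / 60 )) min"   # prints "3462 min": close to
                                             # mdstat's 3458.7, about 57.7 hours
```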

Update – Duh, setting speed_limit_min is what I needed to do; I guess it keeps the resync speed down to avoid overloading the system. Setting it to 7000 from 1000 drops my time down to 4 hours 🙂
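For reference, the tweak is just a write to the proc file mentioned above (as root); 1000 KB/s is the usual default minimum:

```shell
# md throttles resync down toward speed_limit_min whenever it sees other
# activity, so raising the *minimum* is what actually speeds things up.
cat /proc/sys/dev/raid/speed_limit_min      # default: 1000 (KB/s)
echo 7000 > /proc/sys/dev/raid/speed_limit_min
```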