More Storage Space… Hello SATA

Finally got around to ordering, receiving, and, as of today, installing some more storage space: two 120G SATA drives and a Promise SATA 150 TX2plus controller card. Mucho thanks to Cat5 for picking them up from the shop for me. Sadly they “only” had 120s, not the 160G I was looking for, and 200G drives are still above my price point. Oh well. Now that I have the drives in, I’m not sure what to do with them.


To RAID0 or to RAID1, that is the question:

Whether ’tis nobler in the mind to suffer

The slings and arrows of horrendous data loss,

Or to take arms against a sea of troubles,

And mirror the data? To lose the space: to halve the drivespace;


I just don’t know. Right now I have 226 gigabytes of free space with them set up for striping as RAID0, and I’m trying to decide what to move over into it. I keep thinking, “well, I can move the squid cache over, it doesn’t matter if I lose that… I can move the gallery albums over, because I have the originals…” I think I have to go with RAID1 (mirroring) and effectively “lose” 120G of storage space, but not have to worry about losing data if something does go wrong.
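
Either way the setup is only a couple of mdadm commands; a rough sketch, assuming the two drives show up as /dev/sda and /dev/sdb with a single partition apiece (sda1 and sdb1 below are just placeholders for however they end up partitioned):

# striping: one big ~226G device, no redundancy at all
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# mirroring: half the space, but either drive can die without taking the data with it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

After that it’s just the usual mkfs and mount on /dev/md0.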


Soon I’ll get myself a third 120G (next year sometime, or after a few paychecks have accumulated) and set the three of them up as RAID5. With RAID1 at least the move is easy enough (mark one drive faulty, run the mirror in degraded mode, set up a RAID5 array with the freed drive plus the new one and run it in degraded mode, move the data over, then nuke the RAID1 array and move its last drive into the RAID5 array). Luckily I have enough space around that I can always shuffle things if I change my mind. I normally wouldn’t anticipate any problems from brand new drives, but I’ve had enough issues lately that I’d rather not risk it.
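
In mdadm terms that migration should go something like the sketch below; the device names are guesses (the current mirror on sda1 and sdb1, the future third drive as sdc1), so treat them as placeholders:

# pull one half out of the mirror; the RAID1 keeps running degraded
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm --zero-superblock /dev/sdb1

# build a degraded 3-disk RAID5 from the freed drive plus the new one
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing

# (mkfs /dev/md1, mount it, and copy everything across from /dev/md0)

# retire the old mirror and hand its last drive to the RAID5
mdadm --stop /dev/md0
mdadm /dev/md1 --add /dev/sda1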


Some timings and benchmarks below…


Setup

Two Maxtor 120G drives, 8MB cache, SATA interface.

Promise TX2Plus SATA150 controller.

Linux kernel 2.6.0 with software RAID enabled.

PII Celeron 533, 256MB RAM.


RAID0 (striping)

Note that the numbers here might be a bit lower than they should be, since the RAID array was in the middle of resyncing itself when I ran this.

naked / # hdparm -tT /dev/md0
/dev/md0:
Timing buffer-cache reads: 376 MB in 2.00 seconds = 187.65 MB/sec
Timing buffered disk reads: 100 MB in 3.10 seconds = 32.21 MB/sec
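
The resync progress shows up in /proc/mdstat, by the way, which is the easiest way to tell when it’s finished and the numbers are worth re-running (it prints a rebuild percentage and an ETA for each md device while a resync is going):

naked / # cat /proc/mdstat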


RAID1 (mirroring)

naked / # hdparm -tT /dev/md0
/dev/md0:
Timing buffer-cache reads: 372 MB in 2.01 seconds = 184.74 MB/sec
Timing buffered disk reads: 142 MB in 3.04 seconds = 46.76 MB/sec


Individual Drive

naked / # hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 268 MB in 2.01 seconds = 133.09 MB/sec
Timing buffered disk reads: 160 MB in 3.03 seconds = 52.88 MB/sec


Other Drives

For comparison, here are the same benchmarks off an 80G Western Digital drive (8MB cache) and the RAID5 array it’s part of.

IDE Drive

naked / # hdparm -tT /dev/hdg
/dev/hdg:
Timing buffer-cache reads: 252 MB in 2.03 seconds = 124.03 MB/sec
Timing buffered disk reads: 80 MB in 3.06 seconds = 26.14 MB/sec


RAID5 (stripe + parity)

naked / # hdparm -tT /dev/md1
/dev/md1:
Timing buffer-cache reads: 256 MB in 2.02 seconds = 126.56 MB/sec
Timing buffered disk reads: 64 MB in 3.00 seconds = 21.32 MB/sec


Other System

The low scores are presumably down to the system the drives are attached to. Looking at a single drive on my other system, an XP2500 with 1GB of RAM, I get this:

phoenix alan # hdparm -tT /dev/hdb
/dev/hdb:
Timing buffer-cache reads: 1640 MB in 2.00 seconds = 819.59 MB/sec
Timing buffered disk reads: 96 MB in 3.02 seconds = 31.76 MB/sec