UFies Updates RSN

I haven’t bothered to put anything out there yet but I hope to do the grand re-install of OS and hardware checking/fixing on UFies Real Soon Now™. It’s hard for me to escape on weekdays, hard for Fred (who has some of the hardware and who is good for sanity checking me 🙂) to escape on weekends, and I haven’t heard back from the colo guys about how they feel about someone getting one of them to come in on a weekend to sit around while I re-install. I’d rather get it done sooner than later of course, as I have a fear of rebooting the box that I probably shouldn’t have (last time it took a hard boot for it to see the SCSI properly, and the time before that it lost an IDE drive during a rebuild, no idea why), but on the other hand there is nothing that needs to be done on it. Well, there is, but nothing that’s preventing things from running along smoothly. I do still have questions on what is best for some of the setup, which I won’t bore you with here, so read on if you’re a techy and want to answer the question of “how would you set up a server?”

I’ve been talking to people lately on mailing lists and email and researching the best way to set up the new box. It’s been suggested that the 4 IDE drives be moved from a RAID5 3-disk + 1-spare array to RAID 1+0 (or is it 0+1? whichever is the better one) to increase performance and redundancy. Sounds like a decent idea, but again, it still has to be set up. I’d rather not have RAID on the root just because it makes me a bit nervous for some reason, but on the other hand my home box runs and boots just dandy with a RAID0 root. Thing is, if I’m re-installing with Gentoo, their partition scheme is basically a small /boot partition and the rest is /, whereas the Debian way was a small / and then mounting /usr and /var separately. I realize that I can set it up however I want of course, but I don’t know which is “best”. Of course I can’t define in my head (or from my fingertips, where this wonderful late night shouldn’t-you-be-in-bed rambling is coming from) what defines “best”. I’ll probably write something up so that my 4 readers can respond (assuming they haven’t already responded to my numerous other posts 🙂)
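
For the curious, here’s a rough sketch of what that 1+0 layout might look like with mdadm (device names and partition numbers are just placeholders, not what’s actually on the box): build two mirrored pairs, then stripe across them.

    # RAID 1+0 as nested md devices: two RAID1 mirrors, striped together with RAID0.
    # hdX3 here stands in for whatever the big raid partition on each drive ends up being.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb3 /dev/hdd3
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

Losing any single drive then only degrades one of the mirrors, and the stripe across the two mirrors is where the performance win comes from.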


However, just ’cause….


I have a big-ass server with 4x40G IDE drives and 3x18G SCSI drives (oh, and 2G of RAM). Right now the SCSIs are for /home (RAID5 intended) and the IDEs for the server filesystem, currently split up as:


  • hdX1 500M /
  • hdX2 500M swap
  • hdX3 20G raid autodetect
  • hdX4 19G raid autodetect

All four drives are partitioned that way. The / is rsynced every night so that if one drive is lost another can be put in its place easily. The two raid partitions are used in /dev/md0 and /dev/md1, which are RAID5 (three disks, one spare) and mounted as /var and /usr. I set it up this way long ago when I first discovered raid and I don’t think it completely makes sense… i.e., if one disk goes I lose a piece of both raid arrays. However, I don’t have the hardware to give each raid array its own set of disks.
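
Just to make the above concrete, the moving parts look roughly like this (from memory, so the paths and device names are illustrative rather than what’s actually in the crontab):

    # Nightly root mirror -- copy the live / onto the spare root partition
    # of another drive so it can be swapped in if the first one dies.
    rsync -a --delete --exclude=/proc --exclude=/mnt / /mnt/spare-root/

    # And roughly how md0/md1 were built: RAID5 across the big partitions
    # on three drives, with the fourth drive's partition as a hot spare.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
        /dev/hda3 /dev/hdb3 /dev/hdc3 /dev/hdd3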


So my quandary is how to best:


  • Set up partitioning?
  • Set up raid?
  • Attain high reliability and redundancy?

Is the answer to leave it as it is? Partition almost the same as above but have the first partition be a small /boot, the second swap, and the third a big raid partition that is part of a striped set of mirrored arrays (1+0, I think?)? One big RAID5 array? One disk for / and swap and three for /usr and /var in a RAID5 array? ARGH!
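
If I went the Gentoo-ish /boot route with 1+0, each drive would end up looking something like this (sizes are guesses, and it assumes the big partitions pair up into mirrors that then get striped together):

  • hdX1 100M /boot (kept in sync between drives, or a small RAID1)
  • hdX2 500M swap
  • hdX3 ~39G raid autodetect (member of the striped set of mirrors, mounted as /)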