Networking: A Beginner's Guide
Which level of RAID should you use on your network server? Most network
administrators favor RAID 5 because it requires only 20 to 25 percent of the total
disk capacity for the redundancy function (the equivalent of one disk's worth of
parity in a four- or five-disk array), yet it performs well and offers a measure
of safety. However, RAID 3 and RAID 5 arrays do occasionally fail to recover data
properly (although they very rarely lose data). For this reason, you should usually opt
for either RAID 1 or a RAID 10 array for network servers that store vital data.
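To see where the 20 to 25 percent figure comes from: RAID 5 dedicates the equivalent of one disk's capacity to redundancy, spread across all the disks, so the overhead is 1/N for an N-disk array. A minimal sketch (the helper name is ours, not from this chapter):

```python
def raid5_overhead(disk_count: int) -> float:
    """Fraction of total raw capacity consumed by RAID 5 redundancy.

    RAID 5 stores the equivalent of one disk's worth of parity,
    distributed across all disks, so the overhead is 1 / disk_count.
    """
    if disk_count < 3:
        raise ValueError("RAID 5 requires at least 3 disks")
    return 1 / disk_count

# A five-disk array gives the 20 percent figure cited above;
# a four-disk array gives 25 percent.
print(f"{raid5_overhead(5):.0%}")  # 20%
print(f"{raid5_overhead(4):.0%}")  # 25%
```

Larger arrays have proportionally less overhead (a ten-disk array loses only 10 percent to parity), which is one reason RAID 5 is attractive on a budget.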
In general, the different RAID configurations offer different levels of reliability. Ranked
from best to worst purely in terms of the likelihood of losing data, the levels are
RAID 1, RAID 10, RAID 6, RAID 5, and RAID 3. There are always trade-offs, though.
A system with 20 disks using just RAID 1 would be unwieldy, because you would have
10 separate logical drives to manage and use efficiently. However, if you configured
those same 20 disks as two RAID 5 arrays, the two resulting logical disks would be
far easier to manage.
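The trade-off in the 20-disk example can be worked out with a short sketch. The function names and the 1,000 GB disk size are illustrative assumptions, not from the chapter:

```python
def raid1_layout(disks: int, disk_gb: int) -> tuple[int, int]:
    """RAID 1 mirrors pairs of disks: half the disks become separate
    logical drives, and half the raw capacity is lost to mirroring."""
    logical_drives = disks // 2
    usable_gb = logical_drives * disk_gb
    return logical_drives, usable_gb

def raid5_layout(disks: int, disk_gb: int, arrays: int) -> tuple[int, int]:
    """Split the disks into equal RAID 5 arrays; each array loses
    the equivalent of one disk's capacity to parity."""
    per_array = disks // arrays
    usable_gb = arrays * (per_array - 1) * disk_gb
    return arrays, usable_gb

# Twenty disks of 1,000 GB each (an assumed size), as in the example above:
print(raid1_layout(20, 1000))     # (10, 10000) -> ten logical drives to juggle
print(raid5_layout(20, 1000, 2))  # (2, 18000)  -> two logical drives, more usable space
```

The RAID 5 arrangement yields both fewer logical drives to administer and considerably more usable capacity, at the cost of the reliability ranking described above.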
You must make your own decision based on the importance of the data, the
required levels of performance, the capabilities of the server, and the budget available
to you. One thing you should never do, though, is trust that any RAID level replaces
regular, tested, reliable tape backups of network data!
Server State Monitoring
An important feature of most servers is the capability to monitor their own internal
components and to notify you if any problems develop or appear to be developing.
Higher-end servers can typically monitor the following:
Proper fan operation
Memory errors, even if corrected by ECC memory
Disk errors, even if corrected automatically
[Figure: A RAID 5 array stripes data across multiple disks, and alternately uses all
disks for ECC data. Data and ECC are distributed to all drives.]