Networking: A Beginner's Guide
Also called SCSI-3, this specification raises the SCSI bus speed to 20 MHz. Using a narrow, 8-bit bus, Ultra SCSI can handle
20 MBps. It can also run with a 16-bit bus, doubling the throughput to 40 MBps.
Yet another enhancement of the SCSI standard, Ultra2 SCSI
doubles (yet again) the performance of Ultra SCSI. Ultra2 SCSI subsystems
can scale up to 80 MBps using a 16-bit bus.
By now you should know the story: Ultra160 SCSI again
doubles the performance available from Ultra2 SCSI. Ultra160 SCSI (previously
called Ultra3 SCSI) is named for its throughput of 160 MBps.
Continuing the pattern, Ultra320 SCSI can move data at a rate of 320 MBps.
Another doubling of the SCSI interface speed, Ultra640 SCSI
was promulgated as a new standard in early 2003.
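The throughput figures in the preceding list all follow from the same simple arithmetic: bus clock, times bus width in bytes, times the number of transfers per clock cycle. The following is a minimal sketch of that calculation; the function name is illustrative, and the clock figures for Ultra2 (40 MHz) and Ultra160 (40 MHz with double-transition clocking) are common published values rather than numbers stated above.

```python
def scsi_throughput_mbps(clock_mhz, bus_bits, transfers_per_cycle=1):
    """Throughput in MBps = clock (MHz) x bus width (bytes) x transfers per cycle."""
    return clock_mhz * (bus_bits // 8) * transfers_per_cycle

# Ultra SCSI runs the bus at 20 MHz
print(scsi_throughput_mbps(20, 8))      # narrow (8-bit) bus -> 20 MBps
print(scsi_throughput_mbps(20, 16))     # wide (16-bit) bus  -> 40 MBps
# Ultra2 SCSI doubles the clock to 40 MHz
print(scsi_throughput_mbps(40, 16))     # -> 80 MBps
# Ultra160 SCSI keeps a 40 MHz clock but transfers on both clock edges
print(scsi_throughput_mbps(40, 16, 2))  # -> 160 MBps
```

Doubling the clock again in the same way yields the Ultra320 and Ultra640 figures.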
Fibre Channel, a storage connection technology, can use either fiber-optic or copper
cable. It is a much more flexible connection scheme than SCSI and promises throughput many times
faster than even that of Ultra640 SCSI. Based loosely on a network paradigm, Fibre Channel is initially
expensive to implement, but large data centers will benefit greatly from its advances over SCSI.
As you can see from the preceding list, a dizzying array of SCSI choices is available
on the market today. Because of all the different standards, it's a good idea to make
sure you purchase matched components when building a SCSI disk subsystem or
when purchasing one as part of a server. Make sure the controller card you plan to use
is compatible with the drives you will use, that the card uses the appropriate cables,
and that it is compatible with both the server computer and the NOS you will use.
The good news is that once you get a SCSI disk subsystem up and running, it will run
reliably and with excellent performance.
Disk Topologies: It's a RAID!
The acronym RAID stands for redundant array of independent disks. RAID is a technique
of using many disks to do the work of one disk, and it offers many advantages compared
to using fewer, larger disks.
The basic idea behind RAID is to spread a server's data across many disks,
seamlessly. For example, a single file might have portions of itself spread across four
or five disks. The RAID system manages all those parts so you never know they're
actually spread across all the disks. When you open the file, the RAID system accesses all the
appropriate disks, "reassembles" the file, and provides the entire file to you.
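This splitting and reassembly can be sketched in a few lines of code. The sketch below assumes simple striping with no redundancy (what RAID level 0 does); the stripe size and function names are illustrative, and real controllers use much larger stripes, typically tens of kilobytes.

```python
STRIPE_SIZE = 4  # bytes, kept tiny for illustration

def stripe(data, num_disks):
    """Distribute data across num_disks simulated disks, round-robin."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), STRIPE_SIZE):
        disks[(i // STRIPE_SIZE) % num_disks] += data[i:i + STRIPE_SIZE]
    return disks

def reassemble(disks, total_len):
    """Read the stripes back in round-robin order to rebuild the data."""
    out = bytearray()
    offsets = [0] * len(disks)
    d = 0
    while len(out) < total_len:
        out += disks[d][offsets[d]:offsets[d] + STRIPE_SIZE]
        offsets[d] += STRIPE_SIZE
        d = (d + 1) % len(disks)
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog"
disks = stripe(data, 4)                      # pieces land on four "disks"
assert reassemble(disks, len(data)) == data  # the file comes back intact
```

From the user's point of view, only the last two lines matter: the data goes in whole and comes back whole, and the distribution across disks is invisible.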
The immediate benefit you get is that the multiple disks perform much more
quickly than a single disk. This is because all the disks can independently work on
finding their own data and sending it to the controller to be assembled. A single disk
drive would be limited by a single disk head and would take much longer to gather
the same amount of data. Amazingly, the performance of a RAID system increases as
you add more disks, because of the benefit of having all those disk heads independently
working toward retrieving the needed data.
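A back-of-envelope model shows why adding disks helps. Assuming every disk sustains the same transfer rate and the stripes are read in parallel (the numbers below are illustrative, not from the text), total read time shrinks roughly in proportion to the disk count:

```python
def read_time_seconds(file_mb, num_disks, mb_per_sec_per_disk=40):
    """Idealized parallel read: all disks stream their stripes at once."""
    return file_mb / (num_disks * mb_per_sec_per_disk)

# Reading a 400 MB file with disks that each sustain 40 MBps:
for n in (1, 2, 4, 8):
    print(n, "disk(s):", read_time_seconds(400, n), "seconds")
# 1 disk takes 10.0 seconds; 8 disks take 1.25 seconds
```

Real-world gains are smaller than this ideal because the controller and bus add overhead, but the trend the text describes holds: more independent disk heads mean more data retrieved per unit of time.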