Raid?

Thread starter: NL5
I am considering doing a RAID 1 setup (mirroring). Is this going to slow down my throughput? I REALLY want my data safe. It will also get backed up.
 
Mirroring will not slow anything down unless the controller asks a lot of the CPU.
 
Repeat after me: RAID is not a backup.

Your RAID controller goes wonky, the data can just as easily be fried on both drives. Your application corrupts a project file, the file is corrupted on both drives. You screw up and delete the wrong thing, it is gone on both drives.

RAID has two purposes: improving performance (under some workloads with some RAID types) and providing robustness against mechanical failure when a drive is under heavy use. It is not, however, a replacement for an actual backup.

I would not bother with RAID 1. If you're going to use RAID, do RAID 0 (striping), not RAID 1 (mirroring). With RAID 0, you get increased capacity and better performance. The extra redundancy of a mirrored set doesn't really buy you much if you back up regularly (which you need to do anyway), and the performance boost of RAID 1 can also be achieved with a good RAID 0 implementation without losing the extra storage.
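To make the striping vs. mirroring trade-off concrete, here is a toy Python sketch -- purely illustrative, since real controllers place raw disk blocks in firmware, not bytes in lists:

```python
# Toy model of how RAID 0 and RAID 1 place data. Illustrative only;
# real controllers work on raw blocks in hardware/firmware.

CHUNK = 4  # stripe chunk size in bytes (real arrays use 16KB-128KB chunks)

def raid0_write(data: bytes, disks: list[bytearray]) -> None:
    """RAID 0: split data into chunks and spread them round-robin across disks."""
    for i, start in enumerate(range(0, len(data), CHUNK)):
        disks[i % len(disks)].extend(data[start:start + CHUNK])

def raid1_write(data: bytes, disks: list[bytearray]) -> None:
    """RAID 1: every disk gets a full copy of the data."""
    for disk in disks:
        disk.extend(data)

payload = b"ABCDEFGHIJKLMNOP"
striped = [bytearray(), bytearray()]
mirrored = [bytearray(), bytearray()]
raid0_write(payload, striped)
raid1_write(payload, mirrored)

# Striping: each disk holds only half the data -- capacity and speed win,
# but losing either disk loses everything.
assert bytes(striped[0]) == b"ABCDIJKL" and bytes(striped[1]) == b"EFGHMNOP"
# Mirroring: either disk alone can reconstruct the full payload.
assert bytes(mirrored[0]) == payload == bytes(mirrored[1])
```

The asserts at the bottom are the whole argument in miniature: a RAID 0 disk in isolation holds interleaved fragments, which is why a dead stripe member takes the whole array with it.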
 
At the time, it was cheaper for me to get two 200GB drives than it was to get a single 400GB drive. So I thought, hey cool, I'll just do that fancy RAID 0 thing and I'll have a big drive to record audio on, plus I'll never have to worry about bogging down a single hard drive while playing back a bunch of tracks. Well, yeah, that is all fine and good; but now I am worrying about what happens if a system drive fails and I have these two loose drives with data striped across them. I can't just plug them in and have a system re-recognize them, can I? I think they might have to be reformatted to be reinstated into their RAID configuration. Of course it hasn't happened yet, so I don't know for sure. I guess I could just splurge $140 and get an external backup drive...
 
and providing robustness against mechanical failure when a drive is under heavy use. It is not, however, a replacement for an actual backup.

So, it is a backup then, right? If you read my first post, you will see that I will also be backing up projects nightly. However, I had a drive fail one day after working for 12 hours nonstop - I didn't lose the entire project, but we lost some great stuff and wasted a LONG day. That is what I was hoping the RAID array would prevent. Anyway, I have 52 ins and 52 outs worth of I/O, plus 4 UAD cards running on my PC. I was just wondering how much more strain would be placed on the busses with data being written twice, or does the RAID controller handle the extra write I/O? Does that make sense?
 
At the time, it was cheaper for me to get two 200GB drives than it was to get a single 400GB drive. So I thought, hey cool, I'll just do that fancy RAID 0 thing and I'll have a big drive to record audio on, plus I'll never have to worry about bogging down a single hard drive while playing back a bunch of tracks. Well, yeah, that is all fine and good; but now I am worrying about what happens if a system drive fails and I have these two loose drives with data striped across them. I can't just plug them in and have a system re-recognize them, can I? I think they might have to be reformatted to be reinstated into their RAID configuration. Of course it hasn't happened yet, so I don't know for sure. I guess I could just splurge $140 and get an external backup drive...

Yes, it would pay to copy all that data to another drive. If one of the striped drives fails, you lose the lot.
 
I was just wondering how much more strain would be placed on the busses with data being written twice, or does the RAID controller handle the extra write I/O? Does that make sense?
Depends on the RAID implementation. If you use hardware RAID, which is what I recommend, there shouldn't be any additional strain on the processor or the PCI bus, as the RAID controller handles the duplicate writes. With a hardware RAID controller, from the system's point of view the controller itself is the hard drive. At least that's how it is in servers with SCSI or SAS RAID controllers. On workstations with SATA RAID it's a bit different, as I believe the RAID functions are handled by the chipset.
 
I would not stripe my drives in a RAID 0 configuration unless you are prepared to lose the data from BOTH drives at any moment. If one of a series of striped drives dies, it often leaves the data on all the other drives in the striped array virtually useless, at least without expensive and/or extraordinarily time-consuming repairs. Mirroring is a great way to protect against drive failures and is an excellent first-step backup plan. It definitely would protect you against data loss from a drive crash since your last backup, and in today's systems it should not reduce your throughput by anything even close to noticeable.

Above it was mentioned that ...
"Your RAID controller goes wonky, the data can just as easily be fried on both drives. Your application corrupts a project file, the file is corrupted on both drives. You screw up and delete the wrong thing, it is gone on both drives."

This is true, but other things need to be thought of as well. If your application corrupts a project file, you often do not know it until the next time you open it. If in the meantime you have already done your external backup, then you are still at a loss. Secondly, most applications write a lot of hidden project files which you can restore from in the event that a project file gets lost, so you don't lose nearly as much information.

A raid controller going "wonky" typically would not hurt a mirrored array. There are protection measures inline in all good RAID setups that warn you in the event that there is some sort of error. In any event, this is a "disaster scenario" that you just can't protect against like so many other disaster scenarios out there.

If you screw up and delete the wrong file, sure, it is gone from both drives. However, this would be true of the backup you are just about to make as well. None of this protects against user error. In most cases, if you realize you made a screw-up right away, you can still get it back. The problem is that usually, by the time we users figure out we screwed up, it is too late.

These are valid points to consider, but in the end I do not see them as a compelling reason not to mirror. There are, however, excellent reasons not to stripe in the audio environment unless you truly need that extra bandwidth and have a very large system ready to handle the striped backup mirror. I have vowed never again to stripe drives after having one drive in the array go down and losing all data from BOTH drives.

The bottom line is that no system is absolutely secure unless you take extreme measures like triple backups to offsite locations, or daily or hourly backups that are not rewrites over old backups. Those are certainly more secure methods, but they require a huge amount of resources and money. Starting with a mirrored array for your "current" or "in use" backup is an excellent start, and then finishing each set amount of time (i.e. each day, each week, etc.) with an external backup or two or three is a great way to add stability and longevity to your data. :)
 
I really would not put my faith in consumer-level RAID solutions. If you really want safety and instant recovery, you will need to look at a real RAID 5 array, which has distributed parity. RAID 5 uses at least 3 disks; data is striped across all of them, with parity blocks distributed over the disks, so a single disk can fail without any loss of data or major interruption. But these industrial-strength RAID systems will cost way more than your computer and are probably not really well suited for a studio environment. I would really just make sure you use reliable hard drives; SCSI or SAS drives are usually well suited for media work and are designed to take a beating continuously.

I have had a RAID array (0+1) go down, and despite it being striped AND mirrored, it would not recover. Needless to say, the data was hosed and my interest in cheapo RAID was over.
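The "distributed parity" idea is easy to see in miniature: XOR the data blocks of a stripe to get a parity block, and any single missing block can be rebuilt by XORing the survivors. A hedged Python sketch (real RAID 5 rotates the parity block across disks and operates on raw sectors, not byte strings):

```python
# Miniature of RAID 5's parity trick: parity is the XOR of the stripe's
# data blocks, so any one lost block equals the XOR of everything left.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]  # one stripe across three data disks
parity = xor_blocks(stripe)           # stored on a fourth disk

# Disk 2 dies; rebuild its block from the surviving disks plus parity:
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == b"BBBB"
```

This is also why RAID 5 survives exactly one failure: with two blocks missing, the XOR equation has two unknowns and nothing can be recovered.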
 
If you're on a Mac, to alleviate the backup problem just get an external hard drive and Leopard and turn on Time Machine... It's loverly.
 
This is true, but other things need to be thought of as well.... If your application corrupts a project file, we often do not know it until the next time you open it. If in the meantime you have already done your external backup, then you are still at a loss.

Rule #1 of backups: never keep only one backup. The minimum is two, and only if you check them periodically.
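The "check them periodically" part can be partly automated. A minimal sketch, assuming a working folder and a backup folder (both paths hypothetical), that flags files missing from the backup or whose contents differ:

```python
# Sketch of a backup check: hash every file in the source tree and
# compare it against the same path in the backup tree.
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large audio files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list[str]:
    """Return relative paths that are missing from or differ in the backup."""
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = backup / rel
        if not dst.is_file() or file_hash(src) != file_hash(dst):
            problems.append(str(rel))
    return problems
```

Run something like this nightly (the folder names are made up) and silent corruption shows up long before you actually need the backup:

```python
bad = verify_backup(Path("D:/projects"), Path("E:/backup/projects"))
```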


A raid controller going "wonky" typically would not hurt a mirrored array. There are protection measures inline in all good RAID setups that warn you in the event that there is some sort of error. In any event, this is a "disaster scenario" that you just can't protect against like so many other disaster scenarios out there.

You'd think that. Sadly, it happens far more often than you think. There are a fair number of RAID controllers that can massively corrupt data in a RAID-1 configuration if one drive goes south....

The worst possible situation is when one drive glitches (e.g. failing a S.M.A.R.T. test) and the RAID controller takes it offline but you don't notice some small blinking red light. Then, you shut down, bring the RAID back up, and suddenly you are reading some of the valid data off of one drive and some stale data off the other one. You are now officially f*cked.

And people ask me why I don't trust consumer RAID hardware....
 
If you're on a Mac, to alleviate the backup problem just get an external hard drive and Leopard and turn on Time Machine... It's loverly.

It would be an interesting experiment. I've never tried TM on an audio volume. Let us know how that works out for you.

It should be about the same as running a daily backup. It is, after all, a daily backup. :)
 
but now I am worrying about what happens if a system drive fails and I have these two loose drives with data striped across them. I can't just plug them in and have a system re-recognize them, can I?

Depends. There was a test some time ago where they did exactly that: made an array, then tried the disks on a different controller to see if the data could still be read. This was with chipset controllers.

But as a rule, don't count on being able to replace your controller, even a "serious" one, and get the array back. There is no guarantee it will work; it may come down to board revision, BIOS version, and driver version. This is the main reason why RAID is not a backup: there is still a single point of failure, the controller.

Another point to consider is that the disks in a RAID array are typically the same manufacturer, same model, and same batch, because they were bought from the same supplier in a single purchase. Those disks will always be exposed to the same conditions (temperature, voltage), see the same usage pattern, run exactly as many hours, etc. So when the first fails, the others may follow fast.

So even though I run a 4-disk RAID 5 for all my data, I have a backup, and a second one at my mother's in case the house burns down. The only thing the RAID helps with is that when a disk fails, I may have the time to save whatever is going on and take a last backup. Then toss out all the disks and start again.

Digital data doesn't exist until it is at least in 2 different places.

I would really just make sure you use reliable hard drives; SCSI or SAS drives are usually well suited for media work and are designed to take a beating continuously.

Well, I don't know if that really helps. You get a better/longer warranty from the manufacturer, but apart from some really old IDE disks, the only ones that have failed on me were 10k SCSI types.
 
Of the drives I've had die:

Quantum: the vast majority died within 3 years. This line of drives became Maxtor's Fireball line and continued to give Maxtor a bad name long after Quantum ceased to exist, IMHO. :D

Seagate: apart from one DOA drive that I bought used on eBay, 100% still work. That said, there is one 9GB SCSI 5.25" full height monster HD (from circa 1993) that has trouble spinning up sometimes... oh, and another one that drew so much power that it smoked the case's power supply. That case now has a full PC power supply inside it. :D But they all work last I checked. One of them (a much more traditional 9GB 3.5" SCSI HD) has been in continuous operation for approx. 11 years without even a glitch.

Western Digital: One died after about 3 years of idling, one is noisy as heck in my TiVo. Another got so noisy after 3 years that I just yanked it even though it is working. These do not work well in continuous operation or anything close to it. If you are using it in an external case for occasional backup use, they're a great value, but I wouldn't count on a drive whose bearings sound worse than my Quantum head crash victim. :D

IBM/Toshiba: very mixed bag. I have one 5400 RPM that after sitting idle for a couple of years had a hard time starting up, but it appears to be okay otherwise. I cloned the data to a new drive anyway, though, and took the drive out of service. I had one 80 GB drive (IBM Deskstar during the DeathStar years) that started showing lots of bad blocks after sitting there mostly idle for a couple of years. I had one IBM laptop drive whose bearings exhibited acoustic failure (loud as heck) but otherwise worked just fine. I also have several that are many years old and work just fine. My gut says that their newer fluid-bearing drives are much, much better than the old ones in terms of noise, spin-up problems, etc. I think I would mostly trust these drives now, but not completely.

Fujitsu: no problems with the few I've used. Not enough samples to have a solid opinion, though. They don't build any ATA desktop drives, so I pretty much only experience them in laptops.

Maxtor: 50% failure rate after 5 years, but this is with a sample size of only 2, so again, not enough samples to have a solid opinion. The drive that failed did so gracefully after 5 years in a highly overheated state. I cut a hole in the case and added a fan after that. :D

My biggest piece of advice is not to abuse the drives. Any drive that doesn't have adequate ventilation is much more likely to experience premature failure. Use cases with fans in another room if you have to, but keep the drive mechanisms cool. A good drive should last a decade or more if you treat it right, but even a relatively good drive will keel over if you don't keep air blowing across it....
 