Posts: 196
Joined: Dec 2009
Reputation:
3
RAID isn't an alternative to backups; if there is truly priceless data on the array, it is worth setting up proper backups. Just plug in one of those external drives and set up rsync for your important directories.
Posts: 180
Joined: Jan 2010
Reputation:
2
alex84
Senior Member
2012-03-16, 22:35
(This post was last modified: 2012-03-16, 22:37 by alex84.)
I would say choose your RAID level depending on how many failed disks you want to tolerate. Usually RAID 5, which survives one failed disk, is enough for most people. However, if one drive fails and you replace it with a new drive, a rebuild starts; if another disk in the array is bad or about to fail, it can die during that rebuild.
It's unlikely, but it happens. I use RAID 6 for critical data. For regular home usage I could live with the fact that if a second disk dies during the rebuild, I only lose music and videos that I can replace.
For personal photos and other important data I would do a backup to an external drive or a second unit.
---------------------------------------------------
Intel NUC Haswell D34010WYK | ATV2 | Logitech Harmony One | Onkyo TX-NR808 Receiver | QNAP 809 | APC Back-UPS RS 550
Posts: 3,544
Joined: Mar 2010
Reputation:
119
I've used RAID 5 for a few years now and never had a drive fail yet (knock on wood...), but like others have said, I back up all my important stuff (photos, music, documents) to my desktop/HTPC/laptop using rsync, so I have multiple backups. I have also set mdadm to email me when there is a HDD failure, so I know about it as soon as it occurs.
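The mdadm email alert mentioned above is typically wired up like this (a sketch; the email address and config path are placeholders, and distributions differ on where mdadm.conf lives and how the monitor daemon is started):

```text
# /etc/mdadm/mdadm.conf (path varies by distribution)
MAILADDR you@example.com

# Make sure the monitor daemon is running, e.g.:
#   mdadm --monitor --scan --daemonise
# Send a one-off test mail to verify delivery:
#   mdadm --monitor --scan --test --oneshot
```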
Posts: 13
Joined: Jan 2009
Reputation:
0
I have a Synology 1010+; here is what I have found:
I'm running 5 x 2TB WD Green drives, and I NOW run RAID 6. I have had two RAID 5 crashes; by crash I mean one drive failed and the RAID volume was unable to rebuild because there was an error in the parity data.
Uncommon? Well, I did a fair bit of research, and for a volume of that size, apparently not: Green drives have an unrecoverable read error rate of 1 in 10^14 bits, which means that on an 8TB RAID 5 volume there is a good chance one bit will be unreadable and the rebuild will FAIL. This is the manufacturer's own figure, BTW.
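The back-of-envelope arithmetic behind that claim looks like this (a sketch; it assumes the quoted 1-in-10^14 rate applies per bit read and that errors are independent):

```shell
# P(at least one unrecoverable read error) when reading N bits at a
# per-bit error rate of 1 in 10^14:
#   P = 1 - (1 - 10^-14)^N  ~=  1 - exp(-N * 10^-14)
# Rebuilding an 8 TB RAID 5 volume means reading ~8e12 bytes = 6.4e13 bits.
awk 'BEGIN { n = 8e12 * 8; p = 1 - exp(-n * 1e-14); printf "%.0f%%\n", p * 100 }'
# prints: 47%
```

So roughly a coin flip per rebuild at that volume size, which matches the poster's experience of failed rebuilds.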
There are four ways to address this problem:
1. Regularly check (scrub) the parity data by starting a volume check; on the Synology this is a command-line thing, so I run it once a month from the crontab.
2. Use Black drives on large volumes (8TB+) to reduce the chance of a bad bit causing a failed rebuild.
3. Use RAID 6 instead of RAID 5; this gives you two redundant drives.
4. If you have the space, back up or rsync your data to another volume, even if there is no RAID on that volume.
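For point 1, on a plain Linux mdadm box the same monthly check can be scheduled from cron (a sketch; `md0` and the schedule are placeholders, and Synology's own command-line syntax differs):

```text
# /etc/crontab - scrub /dev/md0 at 03:00 on the 1st of each month
0 3 1 * * root echo check > /sys/block/md0/md/sync_action
```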
If you want the most bulletproof setup, I'd recommend all four points above. My setup now backs up the 1010+ to a 211j via the built-in network backup tool (rsync); the 211j has a 2 x 3TB striped set.
So I'm covering points 1, 3 and 4 in my setup. I can't justify the price of Black drives on top of RAID 6.
Also consider: for the same amount of usable space, Green drives in RAID 6 are cheaper than Black drives in RAID 5 (provided you have the drive bays).
I CANNOT make this any clearer: running RAID 5 is not 100% bulletproof. Not to mention fire, theft, data corruption, deletion, etc. You need your data in two places (physical places, i.e. not in two devices stacked on top of one another), and the RAID setup you use needs to primarily take into account the hardware (drives) you are using.
Hope that all helps.
Posts: 180
Joined: Jan 2010
Reputation:
2
alex84
Senior Member
My advice.
Ditch the Green drives; they are not recommended for RAID usage by WD, and not by Synology either. Power-saving drives in RAID arrays are asking for trouble or a failed/degraded RAID. I believe the WD Blacks are also a poor fit and not recommended, at least by WD. Always check your NAS vendor's HDD compatibility list.
Either use cheap consumer drives that do not have any "green" power-saving features, or, if you have the extra cash and want more reliable disks, go for enterprise drives like the WD RE4.
I have seen too many users complaining of failed RAID arrays full of cheap WD Green drives. The Green drives are built for power saving: when a disk doesn't wake up in a reasonable amount of time, the RAID controller/software marks it as bad and degrades the array. All the drive did was what it was built for, that is, saving power; all the RAID controller/software did was degrade the array because no disk responded. That, in short, is why this kind of thing happens: both components work as intended, just not well together.
I mostly use consumer Hitachi drives and have never had a problem. I also use enterprise WD RE4 and Hitachi drives for more critical stuff.
That's my two cents on this.
Posts: 1,741
Joined: Jul 2006
Reputation:
4
Why use true RAID at all? I've been running unRAID for years and am pretty happy. It's perfect for storing videos and whatnot, but not something I'd use for, say, a high-volume SQL database. Parity is stored on a single drive, and each data drive is standard ReiserFS, so if a drive fails, standard tools can be used to recover it. If a single drive fails you swap in a new one and rebuild, and you still have access to the data while this occurs. If, God forbid, two drives fail, then you lose the data on those two drives and nothing else, and you can try to recover them with ReiserFS tools. Since parity isn't striped, only the disks actually being accessed are spun up. The OS boots from a thumb drive, so no array space is consumed by the OS. You can also mix drives of varying brand/size/speed with no issue; the only rule is that the parity drive be as big as or bigger than the other drives. 3 x 2TB drives would give 4TB of storage; 10 x 2TB drives would give 18TB. Make sense?
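The capacity rule above (the largest drive holds parity, every other drive is pure data) works out like this, as a quick sketch:

```shell
# Usable unRAID-style capacity with N equal drives of size S (TB):
# one drive holds parity, so usable = (N - 1) * S
awk 'BEGIN { printf "3 x 2TB  -> %d TB\n", (3 - 1) * 2
             printf "10 x 2TB -> %d TB\n", (10 - 1) * 2 }'
# prints: 3 x 2TB  -> 4 TB
#         10 x 2TB -> 18 TB
```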
I run two of these servers currently. One of them has had as many as 15 drives in it in the past. Currently I've consolidated it down with 1.5TB drives and moved it from IDE to SATA (yeah, I've been doing this a while!). This server has 12 drives in it at the moment, though two were empty last I looked. My other machine has 10 drives and is a catch-all for drives: I used to have everything from 750GB all the way up to 2TB drives in it, but I've been slowly upgrading them all to 2TB and moving the old ones to the other server. I've yet to suffer a dual drive failure, but I've had single drive failures about four times; none of them were traumatic and no data was lost. 3TB drives are now supported in the beta software, but when I tried upgrading to it I ran into issues; starting from scratch with the new version would likely be just fine.
Anyway, just a thought. FlexRAID is another choice, but I don't know as much about it. For home video serving, full-on RAID just doesn't make sense to me from an energy/heat/noise and cost perspective. <shrug>
P.S. green drives, slow drives, fast drives - all living together in my systems. I buy what's on sale when I need them!
Posts: 40
Joined: Nov 2009
Reputation:
3
I used RAID 5 for a few years to keep things cheap. The only nightmare was rebuilding; it can take days. It did recover successfully... I just caved, bought another drive, and use RAID 10 now; it was hard not to when the prices were so cheap last year.
Posts: 825
Joined: Oct 2009
Reputation:
7
RAID 10? That's striped mirrors: you get half the purchased capacity, i.e. 6 x 2TB drives = 6TB of usable capacity. That seems a waste of HDDs when something like unRAID/FlexRAID/etc. gives you everything you need for media storage without sacrificing so many drives.
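For comparison, here is the usable capacity of six 2TB drives under the layouts discussed in this thread (a sketch; it assumes equal-size drives):

```shell
# n drives of size s (TB): RAID 5 keeps n-1 drives' worth of data,
# RAID 6 keeps n-2, RAID 10 keeps n/2, unRAID single parity keeps n-1.
awk 'BEGIN { n = 6; s = 2
  printf "RAID 5:  %d TB\n", (n - 1) * s
  printf "RAID 6:  %d TB\n", (n - 2) * s
  printf "RAID 10: %d TB\n", (n / 2) * s
  printf "unRAID:  %d TB\n", (n - 1) * s }'
# prints: RAID 5:  10 TB
#         RAID 6:  8 TB
#         RAID 10: 6 TB
#         unRAID:  10 TB
```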
Posts: 327
Joined: Sep 2008
Reputation:
0
What about the Seagate Barracuda 3TB (ST3000DM001) for my NAS? I have a Synology 1512+ and it is on their list of supported drives. Would you guys recommend it for a RAID setup?
Posts: 3,212
Joined: Apr 2010
Reputation:
62
The first time I lost my array I ditched RAID 5 forever. It is fine for business use, but for a media server it adds risk (in the form of striping) to get speed you will never need.