FreeNAS versus unRAID as the operating-system for a DIY NAS? - Printable Version
+- XBMC Community Forum (http://forum.xbmc.org)
+-- Forum: Help and Support (/forumdisplay.php?fid=33)
+--- Forum: Hardware for XBMC (/forumdisplay.php?fid=112)
+--- Thread: FreeNAS versus unRAID as the operating-system for a DIY NAS? (/showthread.php?tid=82811)
- froggit - 2010-10-15 23:44
teaguecl Wrote:Now that it seems the flames have subsided, how about if we consolidate the wisdom of this thread into a pros/cons list for these two solutions? Maybe put a new entry on the wiki for it?
Good idea. I'll contribute towards the ZFS pros/cons.
- gadgetman - 2010-10-15 23:45
@froggit: not to belittle any potential problem, but correct me if I'm wrong here: the possibility of bitrot happening is far lower than that of a drive failure within the same amount of time.
Not to mention, bitrot should not happen on read-only access to drives, which covers something like 90% of a media content server's use.
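For what it's worth, ZFS can check for bitrot on demand with a scrub; a minimal sketch, assuming a pool named tank (the pool name is a placeholder, not from this thread):

```shell
# Walk every block in the pool and verify its checksum; redundant
# copies (mirror/parity) are used to repair anything that fails.
zpool scrub tank

# Report scrub progress and any checksum errors found.
zpool status tank
```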
- harryzimm - 2010-10-15 23:49
Nice comparisons guys. It's good to know that some people can see the positives in other than what they have chosen for their own setup. Stop taking this so personally (froggit). I think the amount of research you have put in has clouded your judgment. Take a step back once in a while. You don't need to prove your setup works for you, we believe you.
- darkscout - 2010-10-15 23:49
Quote:unRAID is also cheaper. The $60 license cost is easily recovered by its flexibility to use different types and sizes of hard drives.
Quote:-Because RAID/ZFS requires all the drives to be the same make and size for optimal results, you often buy the drives out of a single batch that could have issues
Why does everyone keep saying this? It's not true. True, with ANY RAID (even unRAID) you're going to get better performance with matched drives. If you have mismatched drives, you either give up security or space. I have mismatched drives right now for my Xen virtual disks.
Quote:One of the things that concerns me most about the ZFS solution is the part of Simon's blog described as RAIDZ expansion. If I have 14TB of data, I have to have at least 28TB of storage to increase the size of the storage pool. Actually, it's 28TB, plus whatever you want to grow the pool by. I'd have a hard time justifying buying, building and managing a spare box just to keep a spare, empty 14TB of space around.
You seem to be confused. You can't expand a vdev, but you can certainly expand a pool. You should read up a bit more on how they relate. If you have 14TB of data and want to expand the system, all you need to do is group the new drives into a vdev and then add that vdev to the pool.
And giggity. I just re-read the Wiki. ZFS is in GNU/kFreeBSD. So that's yet another option.
- poofyhairguy - 2010-10-15 23:58
Wow! These pros and cons lists are great. By combining them together we will maximize the knowledge about these two options on the wiki.
In fact, this is going so well that we should think about other options beyond Unraid/ZFS and post pros and cons lists on them too. Drobos, software RAID5/6, hardware RAID 5/6, WHS, Other NASes, Flexraid, QNAP, and whatever else people store media on.
I can help with the RAID 5/6 one, as I have messed with Linux software RAID a lot. Also I can help with WHS as I have helped set up two of those systems!
This is great guys, I think this will really help some people in the future...
- darkscout - 2010-10-16 00:02
RAID-Z3 is also available, allowing up to 3 disk failures per vdev.
All drives do NOT have to be online to access the data.
+ Also has Nexenta, NexentaStor, GNU/kFreeBSD, etc
You do not need a ton of spare drives: two at minimum (for a mirror vdev).
zpool add tank mirror disk1 disk2
zpool add tank raidz2 disk1 disk2 disk3 disk4 disk5
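To put those commands in context, here is a sketch of the full expansion path (the pool name tank and the disk names are placeholders, not from this thread):

```shell
# Create a pool with a single 4-disk raidz2 vdev (survives 2 failures).
zpool create tank raidz2 disk1 disk2 disk3 disk4

# Grow the pool later by adding a whole new vdev; an existing
# vdev itself cannot be widened with extra disks.
zpool add tank raidz2 disk5 disk6 disk7 disk8

# Verify the added capacity.
zpool list tank
```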
- poofyhairguy - 2010-10-16 00:04
+Available on Pre-Built Systems
+Very easy to use
+Provides automatic backup of Windows systems on the network to the WHS box
+Has a large plugin community and easy plugin development
+You can add any size or type of drive you want and it will use the maximum space on those drives
+Easy remote access
-Uses duplication for protection, not parity (almost a double minus)
-Costs money and is closed software
-Prebuilt systems usually have a low (4 or less) drive count
-Moderate performance when compared to a striped RAID solution
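To make the duplication-versus-parity cost concrete, a back-of-envelope sketch (assuming four 2TB drives; the numbers are illustrative only):

```shell
# WHS-style 1:1 duplication: every protected file is stored twice,
# so usable space is half the raw total.
echo "$(( 4 * 2 / 2 ))TB usable with duplication"

# Single-parity striping (RAID5-style): one drive's worth
# of capacity goes to parity.
echo "$(( (4 - 1) * 2 ))TB usable with parity"
```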
- froggit - 2010-10-16 00:06
Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?:
-Can have up to three parity drives per vdev, and n-drive mirrors
-Yet can be used by freely available OSes like FreeBSD, OpenIndiana/Illumos, Linux* (* http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux)
* with RAID-Z1, RAID-Z2 and RAID-Z3 (but not mirrors) data is striped across the drives, so a single drive can't be read on other computers that don't have ZFS read capability
* you must use the same size drives within a vdev, or any capacity beyond the smallest drive in the vdev is wasted
* and when data is accessed from a vdev all the drives in that vdev spin up because of the striped data or mirror (*1)
-Best Uses: Servers with some important non-media data. Media servers benefit from ZFS too, with faster access for multiple HTPCs, anything commercial, places where you would usually use RAID 5/6 (basically ZFS makes those RAID levels obsolete).
*1: but the drives are already spinning unless power management is used to spin-down the drives. Spinning drives is also a pro as it gives faster access because you don't need to wait for drive(s) to spin up.
-Allows drives to spin down if they are not being accessed, saving power and possibly prolonging their life (but what about spin-up/down wear and tear on drive components?)
Downsides include: when a non-parity drive dies you will probably lose all data on that drive, unless you are able to recover it using tools like (name them here or point to a recovery URL perhaps?), or you have backups. In case of data loss you will need to restore from backups if you have them, or re-rip your media from the original discs.
- froggit - 2010-10-16 00:12
gadgetman Wrote:@froggit: not to belittle any potential problem, but correct me if I'm wrong here: the possibility of bitrot happening is far less than a drive failure, within the same amount of time.
I don't remember the probability of bit rot occurring within a certain time-frame, but I seem to remember that it's not that rare; in fact, it's reasonably common.
If I dig out something informative, I'll post it here. In the meantime, this might be interesting:
- froggit - 2010-10-16 00:18
harryzimm Wrote:Nice comparisons guys. It's good to know that some people can see the positives in other than what they have chosen for their own setup. Stop taking this so personally (froggit). I think the amount of research you have put in has clouded your judgment. Take a step back once in a while. You don't need to prove your setup works for you, we believe you.
It was the research I did that led me to considering usage of ZFS: it was right for me because I did *almost* lose a lot of irreplaceable data, and I said 'never again'.
But I accept that other people will use whatever they want, and of course that is fine. I just wanted to throw ZFS into the discussion to show that it is another strong contender, and let others decide for themselves, armed with some facts about ZFS etc. Try not to get so uptight if someone else shows another solution backed up by some data and facts. No flames please. As you said, let's move on... it seems we are now creating a useful wiki of pros and cons of the various solutions suggested in this thread, and I think that is a very useful and practical outcome.