Kodi Community Forum
FreeNAS versus unRAID as the operating-system for a DIY NAS? - Printable Version

+- Kodi Community Forum (https://forum.kodi.tv)
+-- Forum: Discussions (https://forum.kodi.tv/forumdisplay.php?fid=222)
+--- Forum: Hardware (https://forum.kodi.tv/forumdisplay.php?fid=112)
+--- Thread: FreeNAS versus unRAID as the operating-system for a DIY NAS? (/showthread.php?tid=82811)



- gadgetman - 2010-10-15

markguy Wrote:I think it would be awesome if this thread could stay above a schoolground level of maturity. I don't see anyone flocking to a particular solution based on what's being displayed here, frankly.

One of the things that concerns me most about the ZFS solution is the part of Simon's blog described as RAIDZ expansion. If I have 14TB of data, I have to have at least 28TB of storage to increase the size of the storage pool. Actually, it's 28TB, plus whatever you want to grow the pool by. I'd have a hard time justifying buying, building and managing a spare box just to keep a spare, empty 14TB of space around.

To someone like me, who uses unRAID, but is interested in ZFS's robustness, it seems a better idea might be to have two boxes, an unRAID box for replaceable media rips that grows pretty effortlessly as needed and another using ZFS for storing irreplaceable documents, photos, to-do lists (which my wife claims are critically important).

Can't go wrong with that, really. (unraid for media, zfs (on whatever) for 'your preciousssssss')

I'm scaling down my OpenSolaris-based ZFS box and testing out ZFS on FreeNAS (get the modified version by a cool Japanese dude; it supports ZFS v13, with better performance and reliability than the ZFS included in the latest official FreeNAS). So far so good... it's not as fast as the OpenSolaris screamer (200+ MB/sec with 5 x 500GB drives back when I still had it running), but the simplified setup, maintenance and operation is worthwhile to me.

I'm getting older and lazier. Who cares if I lose 20% performance when I can spend less than half the time setting it up and maintaining it?


- markguy - 2010-10-15

froggit Wrote:If you want to simply expand a ZFS storage pool, you either:
1. add a new vdev (a bunch of drives, as many as you like >1), OR
2. replace drives in an existing vdev with larger ones

If I have 2TB drives, it's topped, for all intents and purposes. Next year, if we're all very lucky, motherboards will learn to love 4k sectors, but not so much right now. Plus, what do I do with all those perfectly good drives I yank?

Each vdev requires at least three drives for RAIDZ, I thought? And for that, you get one drive of storage capacity. You could add more drives at once to mitigate that, of course. I'm not saying those are impossible to work around, but I also wouldn't describe that as simple.


- markguy - 2010-10-15

gadgetman Wrote:I'm getting older and lazier. Who cares if I lose 20% performance when I can spend less than half the time setting it up and maintaining it?

+1

The thought of trying to maintain two different systems has been the real show-stopper for me trying a ZFS solution up until this point, but the pitchfork-wielding mob would show up if the family photos went away.


- teaguecl - 2010-10-15

Now that it seems the flames have subsided, how about if we consolidate the wisdom of this thread into a pros/cons list for these two solutions? Maybe put a new entry on the wiki for it?


- froggit - 2010-10-15

It's a pity that some unRAID users seem to get upset when one points out its weaknesses in the area of data loss.

I agree that for many people, probably its best feature is the ability to use all your old drives as required. It certainly is cost-effective, and I can fully understand why people really like it.

However, for anyone who values their data and doesn't want to re-rip their DVDs after data loss, I personally think ZFS is the preferable solution for storing media, due to its built-in (1) 256-bit block checksums, which allow bit rot to be detected, (2) 'scrub' command, which will find and correct any bit rot, and (3) ability to specify varying degrees of redundancy, according to one's wallet and paranoia level.
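A toy sketch of the checksum idea, if it helps anyone picture it (this is not ZFS's actual on-disk mechanism — ZFS uses Fletcher or SHA-256 checksums stored in block pointers — and the function names here are my own):

```python
import hashlib

def checksum(block: bytes) -> str:
    # ZFS keeps a 256-bit checksum per block; sha256 stands in for it here
    return hashlib.sha256(block).hexdigest()

# Write path: store the data alongside its checksum
block = b"some media data"
stored = (block, checksum(block))

# Read path / scrub: recompute and compare to detect silent corruption
data, expected = stored
corrupted = bytearray(data)
corrupted[0] ^= 0x01  # simulate a single flipped bit on disk

print(checksum(data) == expected)              # True: block is intact
print(checksum(bytes(corrupted)) == expected)  # False: bit rot detected
```

With redundancy available (mirror or RAIDZ), the real scrub doesn't just detect the bad block — it rewrites it from a good copy.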

That is all. Let's let other people choose what they want, armed with some facts about the different pros and cons.

One point worth mentioning is that many people, as well as using a NAS to store their media, also want to store their irreplaceable files, and in these situations running one NAS must surely be preferable to running two.

Lastly, to the person who said that if 3 drives die out of 16: s/he seems to be assuming that the creator of this 15-drive ZFS array would use one vdev with double parity. This would never be recommended (see the ZFS Best Practices Guide). More likely, the user would create smaller vdevs to mitigate risk. And really, one should have backups, so even if n drives die, you can restore in the case that you can't rebuild the dead drive(s) for any reason.


- harryzimm - 2010-10-15

Quote:That is all. Let's let other people choose what they want, armed with some facts about the different pros and cons.

Congratulations. We can all move on now Big Grin

cheers


- poofyhairguy - 2010-10-15

teaguecl Wrote:Now that it seems the flames have subsided, how about if we consolidate the wisdom of this thread into a pros/cons list for these two solutions? Maybe put a new entry on the wiki for it?

ZFS
-Far more robust than hardware RAID solutions while delivering equivalent performance
-Includes real time protection against "bit rot."
-Allows you to put together many arrays (called vdevs) into a single storage pool. This allows you to customize how much redundancy you have and optimize a solution for your situation
-Can have up to three parity drives
-Faster than Unraid on writes and reads, sometimes by a large amount
-Array can be moved to any system that supports ZFS, and is therefore not OS-dependent
-Designed for corporate use, so has some real money behind it
-Yet can be used by freely available OSes like FreeBSD
-Downsides include: data is striped, so the data on a single drive can't be read on other computers; you should use same-size drives or you waste any space beyond the smallest drive in the vdev; and when data is accessed from a vdev, all the drives in that vdev spin up because of the striping
-Best Uses: Servers with some important non-media data. Media servers with many (4+) clients, anything commercial, places where you would usually use RAID 5/6 (basically ZFS makes those RAID levels obsolete).
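To make the mismatched-size downside concrete, here is a back-of-the-envelope capacity sketch (my own toy function, not anything from ZFS itself):

```python
def raidz_usable_tb(drive_sizes_tb, parity=1):
    """Usable capacity of one RAIDZ vdev: every member contributes only
    as much as the smallest drive, and parity drives are subtracted."""
    n = len(drive_sizes_tb)
    return min(drive_sizes_tb) * (n - parity)

# Four matched 2TB drives in RAIDZ1: 6TB usable
print(raidz_usable_tb([2, 2, 2, 2]))  # 6

# Swap one member for a 1TB drive and every member is treated as 1TB
print(raidz_usable_tb([2, 2, 2, 1]))  # 3
```

So mixing a 1TB drive into a 2TB vdev doesn't just lose 1TB — it halves what every other member contributes.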

Unraid
-Allows you to mix and match drives of different sizes and makes into a single array of pooled storage
-Allows you to pull the drives of the array out and read the data on them on another computer
-Allows drives to spin down if they are not being accessed, saving power and possibly prolonging their life
-Allows the array to grow by replacing one drive at a time, with full use of the larger drive after the rebuild and no data loss
-Downsides include: Unraid costs money for real versions; its write speeds are pretty low without a cache drive; its read speeds are slightly lower than the drives by themselves; it has no protection against "bit rot"; it relies on an OS based on the primitive Slackware Linux; and it currently only allows for one parity drive
-Best Uses: A media server that is grown periodically as storage is needed, one cheap disk at a time.
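And for comparison, the equivalent back-of-the-envelope sketch for Unraid (again my own toy function; it assumes the usual rule that the parity drive must be at least as large as any data drive, so the biggest drive plays that role):

```python
def unraid_usable_tb(drive_sizes_tb):
    """Usable capacity of an unRAID array: the largest drive serves as
    the single parity drive, every other drive keeps its full size."""
    sizes = sorted(drive_sizes_tb)
    sizes.pop()  # largest drive is given over to parity
    return sum(sizes)

# Mixed drives: the 2TB becomes parity, leaving 1 + 1.5 = 2.5TB usable
print(unraid_usable_tb([1, 1.5, 2]))  # 2.5
```

Unlike the RAIDZ case, mismatched sizes waste nothing here — which is exactly the "grow with whatever disk is cheapest" appeal.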



How about that?


- froggit - 2010-10-15

maxinc Wrote:...but for insanely paranoid people like me, unRAID is still better for data protection since it minimises the loss in case of a disaster.

I have backups (different geographic locations), so in a disaster I would simply restore from the backup NAS in the rare event that rebuilding dead drive(s) failed.


- gadgetman - 2010-10-15

I'll pitch in.. (pros and cons are relative to each other; comments made in the context of media storage box)

ZFS:
+ Opensource.
+ Many 'free' implementations available: opensolaris (and derivatives such as nexenta), freebsd (and derivatives such as freenas), linux (on fuse, not recommended yet due to performance and stability issues).
+ Very high performance especially on its native distribution, opensolaris (and it scales well with more drives).
+ (Continuous) protection against bitrot (will also self heal).
+ Double and triple parity variant available (Raid-z2 & raid-z3).
- Generally more complex to set up and maintain (except the one on FreeNAS); but it's more flexible and can be tailored to more complex access-control requirements.
- All drives have to be powered on (spinning) to access the data.
- Data is striped across all drives evenly. You cannot access the data on a per-disk basis (for maintenance, recovery, etc).
- All drives have to be the same size (per volume) for raidz.
- Volume expansion is tedious. You can string multiple ZFS virtual devices (vdevs) together into the same pool (similar to spanning a volume over multiple RAID sets).

unRAID:
+ Available in prebuilt boxes.
+ Simple install (embedded distribution on a USB stick).
+ Simple to maintain (most everything is done through the webgui)
+ Robust volume construction: mix and match any drive size and they will all get (raid) parity protection.
+ Robust volume expansion: You can replace any drive with a bigger one. You can add one drive at a time.
+ Robust power management: Files exist per-drive (no striping), so only the drives being accessed need to be active. All others can remain idle.
+ The individual data drives are formatted as normal ReiserFS drives and can be accessed directly (drivers for Windows, OSX and *nix are available) for maintenance & recovery purposes.
- Performance is similar to a single-drive NAS. It doesn't scale at all.
- Commercial & proprietary codebase. Free version available for up to 3 drives.
- Limited access control management.


- froggit - 2010-10-15

markguy Wrote:If I have 2TB drives, it's topped, for all intents and purposes. Next year, if we're all very lucky, motherboards will learn to love 4k sectors, but not so much right now.

ZFS can already handle 4k sector drives, but you need to align them IIRC.

markguy Wrote:Plus, what do I do with all those perfectly good drives I yank?

A later project for me one day is to build a large ZFS-based NORCO system and there I will throw all my old drives. I have a number of 750GB drives and a number of 2TB drives, so when these are no longer needed I will simply create a couple of vdevs, one vdev for each drive size, and then create a massive pool with whatever parity level I decide on, and this will make a great, expandable backup box with redundancy built in from the start. So it is possible to make use of old drives in this way.

markguy Wrote:Each vdev requires at least three drives for RAIDZ, I thought? And for that, you get one drive of storage capacity. You could add more drives at once to mitigate that, of course. I'm not saying those are impossible to work around, but I also wouldn't describe that as simple.

I might be wrong, but I think it's possible to make a 2-drive RAID-Z1 vdev, but if using 2 drives only for a vdev it would make more sense to create a 2-drive mirror vdev.

Generally, people planning ZFS-based systems do some upfront planning to decide how to structure data and parity to suit their wallet/paranoia level or corporate guidelines if it's a company.


- froggit - 2010-10-15

teaguecl Wrote:Now that it seems the flames have subsided, how about if we consolidate the wisdom of this thread into a pros/cons list for these two solutions? Maybe put a new entry on the wiki for it?

Good idea. I'll contribute towards the ZFS pros/cons.


- gadgetman - 2010-10-15

@froggit: not to belittle any potential problem, but correct me if I'm wrong here: the probability of bit rot occurring is far lower than that of a drive failure over the same amount of time.

Not to mention, bit rot should not happen with read-only access to drives, which is like 90% of the use of a media content server.


- harryzimm - 2010-10-15

Nice comparisons guys. It's good to know that some people can see the positives in other than what they have chosen for their own setup. Stop taking this so personally (froggit). I think the amount of research you have put in has clouded your judgment. Take a step back once in a while. You don't need to prove your setup works for you, we believe you.

cheers


- darkscout - 2010-10-15

Quote:unRAID is also cheaper. The $60 for the the license cost is easily recovered by its flexibility to use different types and sizes of hard drives.
Quote:-Because RAID/ZFS requires all the drives to be the same make and size for optimal results, you often buy the drives out of a single batch that could have issues

Why does everyone keep saying this? It's not true. True, with ANY RAID (even unRAID) you're going to get better performance with matched drives. If you have mismatched drives, you either give up security or space. I have mismatched drives right now for my Xen virtual disks.


Quote:One of the things that concerns me most about the ZFS solution is the part of Simon's blog described as RAIDZ expansion. If I have 14TB of data, I have to have at least 28TB of storage to increase the size of the storage pool. Actually, it's 28TB, plus whatever you want to grow the pool by. I'd have a hard time justifying buying, building and managing a spare box just to keep a spare, empty 14TB of space around.

You seem to be confused. You can't expand a vdev. You can certainly expand pools. You should read up a bit more on how they all relate. If you have 14TB of data and want to expand the system, all you need to do is slap more drives into a new vdev and then add that vdev to the pool.
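A toy model of that pool/vdev relationship, in case it helps (function names and the TB units are mine): a pool's usable space is just the sum over its vdevs, and you grow the pool by adding whole vdevs, never by growing one in place.

```python
def vdev_usable_tb(sizes, parity=1):
    # One RAIDZ vdev: smallest member times the non-parity drive count
    return min(sizes) * (len(sizes) - parity)

def pool_usable_tb(vdevs):
    # The pool simply concatenates its vdevs' usable capacities
    return sum(vdev_usable_tb(sizes, parity) for sizes, parity in vdevs)

pool = [([2, 2, 2, 2], 1)]      # one RAIDZ1 vdev of four 2TB drives
print(pool_usable_tb(pool))     # 6

pool.append(([2, 2, 2, 2], 1))  # expand: add a second whole vdev
print(pool_usable_tb(pool))     # 12
```

No drive in the first vdev was touched during the expansion — which is the whole point of the vdev/pool split.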

And giggity. I just re-read the Wiki. ZFS is in GNU/kFreeBSD. So that's yet another option.


- poofyhairguy - 2010-10-15

Wow! These pros and cons lists are great. By combining them together we will maximize the knowledge about these two options on the wiki.

In fact, this is going so well that we should think about other options beyond Unraid/ZFS and post pros and cons lists on them too. Drobos, software RAID5/6, hardware RAID 5/6, WHS, Other NASes, Flexraid, QNAP, and whatever else people store media on.

I can help with the RAID 5/6 one, as I have messed with Linux software RAID a lot. Also I can help with WHS as I have helped setup two of those systems!

This is great guys, I think this will really help some people in the future...