FreeNAS versus unRAID as the operating-system for a DIY NAS? - Printable Version
+- XBMC Community Forum (http://forum.xbmc.org)
+-- Forum: Off-Topic (/forumdisplay.php?fid=34)
+--- Forum: Hardware for XBMC (/forumdisplay.php?fid=112)
+--- Thread: FreeNAS versus unRAID as the operating-system for a DIY NAS? (/showthread.php?tid=82811)
- alex84 - 2010-10-13 08:37
Damn good description there darkscout.
What OS do you recommend for a storage server that can run ZFS nicely?
- adelias - 2010-10-13 09:17
darkscout Wrote:What happens when you get bit-rot? What happens when your drive thinks that it is good but you're just replicating and taking parity of bad data? What happens when you lose 2 drives?
When you lose 2 drives, yes, you will lose the data on those 2 drives (not the entire array).
- jvdb - 2010-10-13 09:20
poofyhairguy Wrote:I personally am not scared of having more than one HD fail at a time because I always watch the SMART data my Unraid box emails to me and because I mix and match drives to avoid bad batches.
I'm not saying it's a bad idea to watch SMART, but I wouldn't trust it to reliably indicate a failing drive:
"we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. This result suggests that SMART models are more useful in predicting trends for large aggregate populations than for individual components." (from Google's "Failure Trends in a Large Disk Drive Population" study)
Quote:In fact in my newest Unraid box I am trying to collect every 1.5tb and 2tb drive on the market into a single array (and I am a WD Black 2TB away from succeeding) to avoid any one model or batch being a dud. You can't do that with ZFS.
You're wrong. I helped a friend, who is not concerned about losing the data on his media server, build a FreeNAS box. He only wants maximum storage space (nothing wasted on parity) and the ability to add drives to a single volume. He started out with a single 1.5TB Samsung (a single-drive stripe added to the pool). Just last month we added a 2TB WD (again, a single-drive stripe added to the pool). I don't consider this wise (I've seen a lot of drives fail), but it is super easy to do.
- darkscout - 2010-10-13 09:20
NexentaStor for a strict server OS.
Nexenta for a general-purpose OS with apt package management.
OpenIndiana if you want to keep things as close to OpenSolaris as possible.
- poofyhairguy - 2010-10-13 10:21
jvdb Wrote:You're wrong. I helped a friend build a freenas box who is not concerned about losing the data on his media server. He only wants maximum storage space (nothing wasted in parity), and the ability to add drives to a single volume. He started out with a single Samsung 1.5TB (single drive stripe added to pool). Just last month we added a 2TB WD (again single drive stripe added to pool). I don't consider this wise (I've seen a lot of drives fail), but it is super easy to do.
That is not quite the same thing. That is more like some sort of JBOD mode.
What I was referring to was the ability to have different sized drives AND have parity protection AND use all the space on the drives. In my Unraid box one of my 2TB drives acts as a parity drive for my many 1.5TB and 2TB drives. Can ZFS do that?
- froggit - 2010-10-13 12:51
darkscout Wrote:If any byte of a copy is duplicated, ZFS will make sure that you don't needlessly duplicate files. Say I copy 1MB tank/Pictures/1.jpg to tank/Pictures/2.jpg. On normal RAIDs (and unRAID included), you will use 2MB of data. With ZFS you still only use 1MB.
This will only happen if the ZFS filesystem the file is in has deduplication turned on.
darkscout Wrote:** Glad to see the ZFS advocates finally come out. I felt like I was a loner for a while
Yep: It seems the ZFS advocates here are you, me and panicnz
- froggit - 2010-10-13 13:21
poofyhairguy Wrote:But is there ANY WAY to configure ZFS to NOT stripe the data in the array so individual drives can be pulled out and read?
Yep, you create a mirror vdev. You can add as many mirror vdevs as you like to your zfs storage pool. Any disk in a mirror can be removed.
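As a rough sketch of those commands (pool and device names here are placeholders, not real devices; on OpenSolaris real device names look more like c0t0d0):

```shell
# A sketch of the mirror-vdev layout described above.
# 'tank' and disk1..disk4 are placeholder names, not real devices.

zpool create tank mirror disk1 disk2   # pool made of one two-way mirror
zpool add tank mirror disk3 disk4      # grow the pool with a second mirror vdev
zpool detach tank disk2                # drop one side of a mirror; the pool stays online
```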
Quote:Of different sizes? I thought they all had to be the same size?
You can use different-sized drives, but a mirror's capacity is limited by its smallest drive. E.g. mirroring a 500GB and a 1TB drive gives only 500GB of usable capacity.
So yes you can do this, but in reality you plan ahead and chuck a bunch of same-sized drives into a new vdev when expanding the pool, or building a new system.
Quote:Basically what I am saying is that Unraid lets you have an array with different sized drives (and use all the space on those different sized drives). If ZFS CAN use different sized drives and use all the space on those drives please correct me. My understanding is that a ZFS array cannot grow like Unraid or RAID 5 can.
They are different, see what I wrote above.
You can expand an existing vdev one drive at a time, replacing with bigger drives.
Or you can just add a new vdev to expand the pool. It's your choice.
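The second route can be sketched like this (pool and device names are placeholders):

```shell
# Expand the pool by adding a brand-new vdev of same-sized disks.
# 'tank' and disk5..disk7 are placeholder names, not real devices.
zpool add tank raidz disk5 disk6 disk7   # a new raidz vdev joins the pool
zpool list tank                          # the extra capacity appears immediately
```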
Quote:So basically you are saying "Sure you can replace smaller drives with larger ones, but you won't be able to use that extra space until ALL the small drives are replaced." Is that what you are saying?
Essentially, yes: within a single vdev, the extra space only becomes available once every drive in that vdev has been replaced with a larger one.
ZFS was designed to allow (1) virtually limitless amounts of storage capacity, (2) cheap disk hardware to be used, (3) RAID in software, avoiding the dependency on and cost of HBA RAID cards, and (4) protection of data from loss, whether drive failure or bit rot.
Quote:What makes me scratch my head is the fact that so many people push an enterprise system for home use.
ZFS was designed for business users, for whom the prospect of adding 4 or 6 drives at a time does not make them cry about the cost. But home users can still benefit from ZFS's superior data protection if they are willing to learn a little about how to set it up, and to plan drive purchases accordingly.
And you can still use a mixture of different sized drives with ZFS. This is what I did once:
1. I had a load of various sized old drives lying around
2. I hooked them all up into an OpenSolaris box
3. Created a non-redundant pool (no parity):
# zpool create tank drive1 drive2 drive3 drive4 drive5
This gave me the capacity of all the drives added together, so no capacity was lost. But crucially, this configuration had no redundancy, as the vdev specified no parity, so if a drive died I would lose everything because the data is striped across the drives.
This was an experimental machine I used at one time for giving a large backup space. Later when I had more money I bought new drives for this backup machine so that I could survive drive losses.
Quote:For media use what matters most:
To me, my data is still data regardless of what type of data it is, and I am not willing to lose any of it, or go through re-ripping exercises, identifying movie folder names accurately again prior to XBMC library scraping. This is time I simply do not have, or at least am not willing to do again. And for those reasons, I am not willing to lose any data.
But each person is different, and so each person needs to decide what's important to them and choose a solution that fits their needs.
However, I do find, like someone already mentioned, that people tend to choose a solution that they first discover, whether it's unRAID, FreeNAS or whatever. The first solution found is rarely the best.
- jeroen94704 - 2010-10-13 13:24
PatrickVogeli Wrote:I already had asked these questions, but I haven't got answers: how does freenas behave against a multiple drive failure? Does it allow you to add drives as you need more space? Does it need raid enabled in bios or a hardware raid controller? Can you mix different type of drives (sata, ide, different sizes)?
I used to have a FreeNAS based system, but switched to Ubuntu Server instead. FreeNAS works well if your NAS is really just a NAS, and nothing more.
The questions you have depend more on the RAID setup you choose, and not on what FreeNAS supports. My knowledge of RAID is basic at best, so correct me if I'm wrong, but:
- RAID 5 tolerates a single drive failure, while RAID 6 tolerates two drive failures
- RAID does indeed allow you to add more drives as needed. This requires rebuilding the array, but it can be done without interrupting operations
- FreeNAS supports both hardware RAID and software RAID.
- I suspect it is possible to mix SATA/IDE drives, but mixing drive sizes leads to waste: for a RAID 5 array, the total capacity is (n-1) times the size of the smallest drive in the array (where n is the number of drives).
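That capacity rule can be checked with a quick calculation (the figures are made up for illustration):

```shell
# RAID 5 usable capacity = (n - 1) * size of the smallest drive.
# Example: four drives where the smallest is 500 GB (illustrative numbers).
n=4
smallest_gb=500
usable_gb=$(( (n - 1) * smallest_gb ))
echo "${usable_gb} GB usable"   # 1500 GB usable
```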
Hope this helps,
- froggit - 2010-10-13 13:40
TugboatBill Wrote:So if my XBMC client want a movie from a ZFS system does it spin up all the drives or just the one the movie is on?
All drives spin when read from, in general, unless the movie is in cache.
If you worry about electricity prices, just set the power-management settings in the OS to spin down the drives after 5 minutes or whatever. The 'green' drives usually auto-park their heads, which reduces power usage even while the drive is spinning.
People have become very brainwashed these days about electricity usage and its cost. A 50W server takes 20 hours to consume 1 kWh, and the price of 1 kWh is surprisingly low, even in these days of high utility charges. Do the math.
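Doing that math explicitly (the 50W figure is the one from above; plug in your own price per kWh):

```shell
# How long a 50 W server takes to use 1 kWh, and its monthly usage if left on 24/7.
watts=50
hours_per_kwh=$(( 1000 / watts ))             # 1000 Wh / 50 W = 20 hours
kwh_per_month=$(( watts * 24 * 30 / 1000 ))   # 36 kWh per 30-day month
echo "${hours_per_kwh} hours per kWh, ${kwh_per_month} kWh/month"
```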
The best way to save money is to turn stuff off when not in use. So it's funny when people worry about a disk rotating, yet many of these same people will leave the NAS on 24/7. Go figure.
TugboatBill Wrote:I have a share "Movies" on a ZFS system. I rip to it and eventually I fill it up. Can I just add another drive to it (of a larger capacity)? Is it just a simple prep the drive (format) and tell ZFS that it's available to the Movies share?
When you want to expand a ZFS storage pool, you have 2 options:
1. Upgrade an existing vdev by replacing each drive within it with a larger one, or
2. Add an extra vdev using same-sized drives.
The extra capacity available from 1. or 2. above will appear within the storage pool.
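Option 1 looks roughly like this (pool and device names are placeholders; the autoexpand property may behave differently on older ZFS versions):

```shell
# Grow an existing vdev by swapping each disk for a larger one.
# 'tank', 'old_disk' and 'new_disk' are placeholder names.
zpool set autoexpand=on tank            # let capacity grow once all disks are replaced
zpool replace tank old_disk new_disk    # swap in a bigger disk; a resilver starts
zpool status tank                       # wait for the resilver to finish,
                                        # then repeat for each remaining disk
```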
The ZFS designers wanted to make storage as simple as memory. If you want more memory in your system, you add a new RAM stick; you don't have to do anything else, and the new memory magically appears in the memory pool when you reboot. That's how they wanted storage to work: as easy as adding new drives, with the extra capacity appearing in the storage pool.
And finally, you don't have to format drives when they are added to a ZFS storage pool. No formatting for hours and hours and hours...
ZFS discussion - froggit - 2010-10-13 14:53
For anyone interested in preserving their data over long periods of time, you might find this discussion with the ZFS designers very useful:
A Conversation with Jeff Bonwick and Bill Moore (ACM Queue):