FreeNAS versus unRAID as the operating-system for a DIY NAS?
#61
Damn good description there darkscout.

What OS do you recommend for a storage server that can run ZFS nicely?
---------------------------------------------------
Intel NUC Haswell D34010WYK | ATV2 | Logitech Harmony One | Onkyo TX-NR808 Receiver | QNAP 809 | APC Back-UPS RS 550
Reply
#62
darkscout Wrote:What happens when you get bit-rot? What happens when your drive thinks that it is good but you're just replicating and taking parity of bad data? What happens when you lose 2 drives?

When you lose 2 drives, yes, you will lose the data on those 2 drives (not the entire array).
Reply
#63
poofyhairguy Wrote:I personally am not scared of having more than one HD fail at a time because I always watch the SMART data my Unraid box emails to me and because I mix and match drives to avoid bad batches.

Watching SMART isn't a bad idea, but I wouldn't trust it to reliably indicate a failing drive:

http://static.googleusercontent.com/exte...ilures.pdf

"we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. This result suggests that SMART models are more useful in predicting trends for large aggregate populations than for individual components."

Quote:In fact in my newest Unraid box I am trying to collect every 1.5tb and 2tb drive on the market into a single array (and I am a WD Black 2TB away from succeeding) to avoid any one model or batch being a dud. You can't do that with ZFS.

You're wrong. I helped a friend build a freenas box who is not concerned about losing the data on his media server. He only wants maximum storage space (nothing wasted in parity), and the ability to add drives to a single volume. He started out with a single Samsung 1.5TB (single drive stripe added to pool). Just last month we added a 2TB WD (again single drive stripe added to pool). I don't consider this wise (I've seen a lot of drives fail), but it is super easy to do.
Reply
#64
NexentaStor for a strict server OS.
Nexenta for a general-purpose apt-based OS.
OpenIndiana if you want to keep things as close to OpenSolaris as possible.
Reply
#65
jvdb Wrote:You're wrong. I helped a friend build a freenas box who is not concerned about losing the data on his media server. He only wants maximum storage space (nothing wasted in parity), and the ability to add drives to a single volume. He started out with a single Samsung 1.5TB (single drive stripe added to pool). Just last month we added a 2TB WD (again single drive stripe added to pool). I don't consider this wise (I've seen a lot of drives fail), but it is super easy to do.

That is not quite the same thing; that is more like a JBOD setup.

What I was referring to was the ability to have different sized drives AND have parity protection AND use all the space on the drives. In my Unraid box one of my 2TB drives acts as a parity drive for my many 1.5TB and 2TB drives. Can ZFS do that?

Reply
#66
darkscout Wrote:If any byte of a copy is duplicated, ZFS will make sure that you don't needlessly duplicate files. Say I copy 1MB tank/Pictures/1.jpg to tank/Pictures/2.jpg. On normal RAIDs (and unRAID included), you will use 2MB of data. With ZFS you still only use 1MB.

This will only happen if the ZFS filesystem that the file is in has deduplication turned on.

darkscout Wrote:** Glad to see the ZFS advocates finally come out. I felt like I was a loner for a while Smile

Yep: It seems the ZFS advocates here are you, me and panicnz Laugh
Reply
#67
poofyhairguy Wrote:But is there ANY WAY to configure ZFS to NOT stripe the data in the array so individual drives can be pulled out and read?

Yep, you create a mirror vdev. You can add as many mirror vdevs as you like to your zfs storage pool. Any disk in a mirror can be removed.
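For example, roughly like this (the pool name "tank" and the ada* device names are just placeholders):

```shell
# Create a pool from a two-disk mirror (device names are examples)
zpool create tank mirror ada1 ada2

# Grow the pool later by adding another mirror vdev
zpool add tank mirror ada3 ada4

# Remove one disk from a mirror; the pool stays online on the other disk
zpool detach tank ada2
```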

Quote:Of different sizes? I thought they all had to be the same size?

You can use different sized drives, but only the smallest capacity is used: e.g. a mirror of a 500GB and a 1TB drive gives only 500GB of usable capacity.

So yes you can do this, but in reality you plan ahead and chuck a bunch of same-sized drives into a new vdev when expanding the pool, or building a new system.


Quote:Basically what I am saying is that Unraid lets you have an array with different sized drives (and use all the space on those different sized drives). If ZFS CAN use different sized drives and use all the space on those drives please correct me. My understanding is that a ZFS array cannot grow like Unraid or RAID 5 can.

Quote:Not really a problem with a media server. A HUGE problem with a server that is acting as a business database. That is why I would never put Unraid in my office.

They are different, see what I wrote above.

You can expand an existing vdev one drive at a time, replacing with bigger drives.

Or you can just add a new vdev to expand the pool. It's your choice.
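Roughly, the two options look like this (pool and device names are placeholders):

```shell
# Option 1: replace each drive in an existing vdev with a bigger one,
# one at a time, waiting for the resilver to finish between swaps
zpool replace tank ada1 ada5

# Option 2: add a whole new vdev (here a 3-disk raidz) to grow the pool
zpool add tank raidz ada6 ada7 ada8
```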

Quote:So basically you are saying "Sure you can replace smaller drives with larger ones, but you won't be able to use that extra space until ALL the small drives are replaced." Is that what you are saying?

Quote:Because quite frankly for home use that sucks, and is reason alone to pay for Unraid or WHS for many people.

ZFS was designed to allow (1) virtually limitless amounts of storage capacity, (2) cheap disk hardware to be used, (3) RAID in software to avoid dependency and cost of using HBA RAID cards, (4) to protect data from loss, either drive loss or bit rot.

Quote:What makes me scratch my head is the fact that so many people push an enterprise system for home use.

ZFS was designed for business users, where the prospect of adding 4 or 6 drives etc does not make the person cry about the cost of the drives. But this still allows home users to benefit from ZFS' superior data protection techniques if you are willing to learn a little about how to set it up, and plan drive purchases accordingly.

And you can still use a mixture of different sized drives with ZFS. This is what I did once:
1. I had a load of various sized old drives lying around
2. I hooked them all up into an OpenSolaris box
3. Created a non-redundant pool (no parity):
# zpool create tank drive1 drive2 drive3 drive4 drive5

This gave me the capacity of all the drives added together, so no capacity was lost. But crucially, this configuration had no redundancy, as the vdev specified no parity, so if a drive died I would lose everything because the data is striped across the drives.
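If you try this yourself, two commands show what you ended up with (the pool name is an example):

```shell
zpool list tank    # the capacity column is roughly the sum of all the drives
zpool status tank  # each drive appears as its own top-level (striped) vdev
```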

This was an experimental machine I used at one time for giving a large backup space. Later when I had more money I bought new drives for this backup machine so that I could survive drive losses.

Quote:For media use what matters most:

-Data is reasonably secure so you don't have to rerip all your DVDs

-To be able to add whatever drive is cheapest on the market at the time you need more space to the array and use all the space on the drive no matter what you originally had in the array

-To be able to saturate a cheap gigabit network with large and constant reads from the server

-To be able to have single folders that span multiple drives

Unraid gives all that, and pretty much nothing more. ZFS can't do all that (unless you CAN use different sized drives in a single ZFS array and use all the space), but adds in a bunch of things that media users don't need.

To me, my data is still data regardless of what type it is, and I am not willing to lose any of it, or go through re-ripping exercises and accurately naming movie folders all over again before XBMC scrapes the library. That is time I simply do not have, or at least am not willing to spend twice.

But each person is different, and so each person needs to decide what's important to them and choose a solution that fits their needs.

However, I do find, like someone already mentioned, that people tend to choose a solution that they first discover, whether it's unRAID, FreeNAS or whatever. The first solution found is rarely the best.
Reply
#68
PatrickVogeli Wrote:I had already asked these questions, but I haven't got answers: how does FreeNAS behave against a multiple drive failure? Does it allow you to add drives as you need more space? Does it need RAID enabled in the BIOS or a hardware RAID controller? Can you mix different types of drives (SATA, IDE, different sizes)?

I used to have a FreeNAS based system, but switched to Ubuntu Server instead. FreeNAS works well if your NAS is really just a NAS, and nothing more.

The questions you have depend more on the RAID setup you choose, and not on what FreeNAS supports. My knowledge of RAID is basic at best, so correct me if I'm wrong, but:

- RAID 5 supports single drive failure, while RAID 6 supports 2-drive failures
- RAID does indeed allow you to add more drives as needed. This does require rebuilding the array, but this can be done without interrupting operations
- FreeNAS supports hardware RAID and software RAID.
- I suspect it is possible to mix SATA/IDE drives, but mixing drive sizes will lead to waste: for a RAID 5 array, the total capacity is (n-1) times the size of the smallest drive in the array (where n is the number of drives).
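To make that formula concrete, here is the arithmetic as a quick awk one-liner (the drive sizes are made up):

```shell
# RAID 5 usable capacity: (n-1) x size of the smallest drive
awk 'BEGIN {
  split("500 1000 1500 2000", d, " ")   # four drives, sizes in GB
  n = 4; min = d[1]
  for (i = 2; i <= n; i++) if (d[i] < min) min = d[i]
  printf "%d GB usable\n", (n - 1) * min
}'
```

So in that array the 2TB drive contributes only 500GB; the rest is wasted.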

Hope this helps,

Jeroen
Reply
#69
TugboatBill Wrote:So if my XBMC client want a movie from a ZFS system does it spin up all the drives or just the one the movie is on?

In general, all the drives in the pool spin up when you read from it, because the data is striped across the vdevs, unless the movie is in cache.

If you worry about electricity prices, you just set the power management settings within the OS to spin-down the drives after 5 minutes or whatever. The 'green' drives usually auto-park the heads which reduces power usage too, even if the drive is spinning.
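For what it's worth, the spin-down timeout is a one-liner on most systems (device names are examples, and the exact flags depend on your OS and drives):

```shell
# Linux: -S takes units of 5 seconds, so 120 = spin down after 10 minutes
hdparm -S 120 /dev/sdb

# FreeBSD: put an ATA disk into standby after 600 seconds idle
camcontrol standby ada1 -t 600
```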

People have got very brainwashed these days about the cost of electricity. If you have a 50W server on, it takes 20 hours to consume 1 kWh, and the price of 1 kWh is surprisingly low, even in these days of high utility charges. Do the math.
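The math really is that simple. A quick sketch (50W server, assuming $0.15/kWh; both numbers are made up for illustration):

```shell
# Annual running cost of an always-on 50W box at $0.15 per kWh
awk -v w=50 -v p=0.15 'BEGIN {
  kwh_day = w * 24 / 1000            # 50W for 24 hours = 1.2 kWh
  printf "%.1f kWh/day, $%.2f/year\n", kwh_day, kwh_day * p * 365
}'
```

That works out to roughly $66 a year, before you even spin anything down.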

The best way to save money is to turn stuff off when not in use. So it's funny when people worry about a disk rotating, yet many of these same people will leave the NAS on 24/7. Go figure. Huh

TugboatBill Wrote:I have a share "Movies" on a ZFS system. I rip to it and eventually I fill it up. Can I just add another drive to it (of a larger capacity)? Is it just a simple prep the drive (format) and tell ZFS that it's available to the Movies share?

When you want to expand a ZFS storage pool, you have 2 options:
1. Upgrade an existing vdev by replacing each drive within it with a larger one, or
2. Add an extra vdev using same-sized drives.

The extra capacity available from 1. or 2. above will appear within the storage pool.
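For option 1, note that the pool only grows once the last drive in the vdev has been swapped, and on newer ZFS versions you may also need to allow the expansion explicitly (pool and device names are placeholders):

```shell
# Swap in a larger drive and wait for the resilver to complete
zpool replace tank ada1 ada9

# Let the pool use the new space once every drive in the vdev is replaced
zpool set autoexpand=on tank
zpool online -e tank ada9
```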

The ZFS designers wanted to make storage as simple as memory. That is, if you want to add more memory to your system, you add a new RAM stick. You don't have to do anything else to make the new memory appear. There is a memory pool and your new RAM will magically appear in the new memory pool when you reboot. And that's how they wanted to make storage: easy as adding new drives, and the extra capacity appears within the storage pool.

And finally, you don't have to format drives when they are added to a ZFS storage pool. No formatting for hours and hours and hours... Big Grin
Reply
#70
For anyone interested in preserving their data over long periods of time, you might find this discussion with the ZFS designers very useful:

A Conversation with Jeff Bonwick and Bill Moore:
http://queue.acm.org/detail.cfm?id=1317400
Reply
#71
poofyhairguy Wrote:I will say that on Unraid's forum there are many users that have used the system for over 4 years (just upgrading the arrays one hard drive and mobo at a time) and not once have I read about any "bit rot." Just lots of happy customers with no data loss.

Could you tell me which tools unRAID has to detect bit rot?
I suspect you can't detect bit rot with unRAID.
The reason you can detect bit rot with ZFS is that ZFS stores a 256-bit checksum for every block of every file. If a checksum mismatch is detected when a block is read back (during a normal file read or a pool scrub), the corrupted block(s) are automatically rebuilt from redundancy.
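You can see this in action with two commands (the pool name is an example):

```shell
# Read every block in the pool and verify it against its checksum
zpool scrub tank

# The CKSUM column counts checksum errors; repaired data is reported too
zpool status -v tank
```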

poofyhairguy Wrote:What I think you are referring to (in other comparisons I have read about ZFS vs traditional RAID) is the infamous RAID 5/6 "write hole." This problem does not affect Unraid, because Unraid is based on RAID 4. In fact, it's safer than RAID 4 because it's not striped, and most data corruption in normal RAID comes from the striping of the data.

The RAID 5 write hole is something different from bit rot. It was caused by, for example, a power loss leading to an incomplete stripe write + parity write, leaving the array in an inconsistent state. NVRAM onboard the RAID HBA was the fix, but those cards were expensive. Anyway, hardware RAID is no longer necessary thanks to abundant free CPU cycles, and software RAID gives the huge benefit of not being tied to a particular vendor's RAID card.

And with ZFS you can take out all your drives from a machine running one OS, e.g. Solaris/OpenSolaris, and put them in a machine running a different OS, e.g. FreeBSD, and it will work. In other words ZFS is endian-neutral. That's an impressive feature, as use of ZFS does not tie you to a specific OS.
Reply
#72
froggit Wrote:... as use of ZFS does not tie you to a specific OS.
Except that it only runs properly on OpenSolaris - which is dead. I'm on your side that FreeNAS+ZFS is a fantastic solution, but let's not get crazy with the comparison. unRAID works very well for multimedia storage, and is very simple to use - that makes it extremely relevant to many even if it's lacking some enterprise-level features.
Reply
#73
teaguecl Wrote:Except that it only runs properly on OpenSolaris - which is dead. I'm on your side that FreeNAS+ZFS is a fantastic solution, but let's not get crazy with the comparison. unRAID works very well for multimedia storage, and is very simple to use - that makes it extremely relevant to many even if it's lacking some enterprise-level features.

Or Solaris*. Or Nexenta/NexentaStor**. Or Illumos. Or OpenIndiana. Or Linux through Fuse.
* You can still download it for free.
** Very FreeNAS like
Reply
#74
Everything I'm finding on "bit rot" for hard drives boils down to:

1. The hard drive has ECC error checking built in, which corrects virtually all "bit rot".

2. ZFS can offer an additional layer of protection against bit rot.

3. "Bit rot" appears to be more of a ZFS "sales" feature than anything else.

I've worked in IT for quite a while, and ZFS is the only storage system I've found that promotes additional protection against "bit rot". Most enterprise organizations don't use it, yet seem to get along fine without any "bit rot" problems. Yes, it is an additional layer of protection, but a mirrored RAID 6 array gives great protection too - it just isn't practical in a home media storage system.
Reply
#75
Quote:
"I have a share "Movies" on a ZFS system. I rip to it and eventually I fill it up. Can I just add another drive to it (of a larger capacity)? Is it just a simple prep the drive (format) and tell ZFS that it's available to the Movies share?"

froggit Wrote:All drives spin when read from, in general, unless the movie is in cache.

When you want to expand a ZFS storage pool, you have 2 options:
1. Upgrade an existing vdev by replacing each drive within it with a larger one, or
2. Add an extra vdev using same-sized drives.

The extra capacity available from 1. or 2. above will appear within the storage pool.

The ZFS designers wanted to make storage as simple as memory. That is, if you want to add more memory to your system, you add a new RAM stick. You don't have to do anything else to make the new memory appear. There is a memory pool and your new RAM will magically appear in the new memory pool when you reboot. And that's how they wanted to make storage: easy as adding new drives, and the extra capacity appears within the storage pool.

And finally, you don't have to format drives when they are added to a ZFS storage pool. No formatting for hours and hours and hours... Big Grin


So I have to add a mirror (2 drives) or replace every drive in the array with a larger one?
Reply