Server OS for NAS?
#16
(2014-08-08, 00:40)MrCrispy Wrote: IMO safety and availability of data is the single most important criterion. The dangers of a RAID reconstruction are too high, and the costs of re-ripping all your media are also high. Plus, you don't lose anything with a parity-based solution that keeps data in its native format. In fact, I'd like to hear of a single advantage of using RAID. I can't think of one.
In light of your comments I started rethinking my RAID setup (mdadm) and I've decided to move away from true RAID and start using SnapRAID, with mhddfs for pooling. Aside from gaining partial recovery (which is often a pain, or outright impossible, with true RAID), I no longer have to worry about the size of the drives I add (with a true RAID setup you want the disks to be the same size, so upgrading usually means paying quite a lot of money and trying to sell your old drives). The only disadvantage is that SnapRAID isn't realtime, so I might lose some data added between the last sync and a drive failure. Luckily, that's usually data that's easily replaced.
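For anyone curious, a rough sketch of what I'm planning (all paths and names below are examples of mine, not anything the OP needs):

Code:
# /etc/snapraid.conf -- parity on its own disk, content lists in a couple of places
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

# /etc/fstab entry pooling the data disks into one mount via mhddfs:
# mhddfs#/mnt/disk1,/mnt/disk2 /mnt/pool fuse defaults,allow_other 0 0

# run from cron (e.g. nightly) to bring parity up to date:
snapraid sync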

This doesn't really help the OP, though ;).

@OP: since you want something easy and don't really want anything special, you can just go with a Windows setup (if you're comfortable with that OS). Personally I'd go with something like Ubuntu with Amahi as a frontend (if you really want a frontend), but that does mean a slight learning curve.
#17
(2014-08-08, 02:15)two515ty Wrote: What are the dangers of a RAID reconstruction? I'm curious myself because I'm planning on migrating to a RAID-Z2 (RAID6) system in the future, but I've never really heard about RAID reconstruction being dangerous.
The theory is that you're putting too much stress on the drives while rebuilding. Personally, I think it's a bunch of bunk in a home-use scenario. The drives aren't forced to do a pile of seeks thrashing away while rebuilding: it's a continuous sequential read from all the drives except the one being rebuilt, which gets a continuous sequential write. Now, if you were trying to do a bunch of reading and writing to the array while rebuilding it, you could put additional stress on the drives, but that's probably unlikely for the typical home user.
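To put a number on "continuous sequential", here's a quick back-of-envelope; the capacity and throughput are assumed figures, not measurements:

Code:
# Rough rebuild time for a 4 TB drive sustaining ~130 MB/s sequentially
awk 'BEGIN {
    bytes = 4e12      # assumed drive capacity in bytes
    rate  = 130e6     # assumed sustained transfer rate in bytes/s
    printf "~%.1f hours of sequential I/O\n", bytes / rate / 3600
}'
# prints ~8.5 hours -- long, but hardly a torture test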
#18
(2014-08-08, 02:15)two515ty Wrote: What are the dangers of a RAID reconstruction? I'm curious myself because I'm planning on migrating to a RAID-Z2 (RAID6) system in the future, but I've never really heard about RAID reconstruction being dangerous.

Well, the problem is something going wrong during the reconstruction, the two main cases being a power failure and an unrecoverable read error (URE). Reconstruction takes a LONG time; if you lose power during it, or your machine opts out for whatever reason, you're a goner. UREs are something there are a few articles on, and they're a compelling reason to go for RAID 6 over RAID 5 in larger storage arrays: if you're running RAID 5, a disk fails, and you then hit a URE partway through the reconstruction, the reconstruction fails. An estimate of the odds of this happening can be calculated from the manufacturer's drive specs; somewhere past 12 TB or so it starts looking grim for RAID 5, while RAID 6 is significantly more resilient. A good UPS is also key, especially during reconstruction.
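To make that estimation concrete, here's the usual back-of-envelope, assuming the common consumer-drive spec of one URE per 10^14 bits read and a rebuild that has to read 12 TB (swap in your own drives' numbers):

Code:
awk 'BEGIN {
    bits = 12e12 * 8                  # 12 TB of reads, in bits
    ure  = 1e-14                      # assumed URE rate per bit read
    p    = exp(bits * log(1 - ure))   # P(the whole rebuild reads cleanly)
    printf "P(no URE during rebuild) = %.0f%%\n", p * 100
}'
# prints ~38% -- i.e. worse-than-even odds that a RAID 5 rebuild survives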
#19
(2014-08-08, 02:35)nevillebartos Wrote: Well, the problem is something going wrong during the reconstruction, the two main cases being a power failure and an unrecoverable read error (URE). Reconstruction takes a LONG time; if you lose power during it, or your machine opts out for whatever reason, you're a goner. UREs are something there are a few articles on, and they're a compelling reason to go for RAID 6 over RAID 5 in larger storage arrays: if you're running RAID 5, a disk fails, and you then hit a URE partway through the reconstruction, the reconstruction fails. An estimate of the odds of this happening can be calculated from the manufacturer's drive specs; somewhere past 12 TB or so it starts looking grim for RAID 5, while RAID 6 is significantly more resilient. A good UPS is also key, especially during reconstruction.
Any decent controller shouldn't have a problem with a power failure during a rebuild, and typically a system like that will be on a UPS anyway. Either way, IMHO RAID-6 is the way to go.
#20
(2014-08-08, 02:33)Stereodude Wrote: The theory is that you're putting too much stress on the drives while rebuilding. Personally, I think it's a bunch of bunk in a home-use scenario. The drives aren't forced to do a pile of seeks thrashing away while rebuilding: it's a continuous sequential read from all the drives except the one being rebuilt, which gets a continuous sequential write. Now, if you were trying to do a bunch of reading and writing to the array while rebuilding it, you could put additional stress on the drives, but that's probably unlikely for the typical home user.

An interesting point, especially when we consider that when we put these things together it's generally with a bunch of drives bought at once, so chances are they're the same manufacturer, the same age, maybe even from the same batch... If one goes, odds are another is close to death... Stressing all the drives during the rebuild may push another one over the edge.
#21
I would recommend a proper home server build including ECC RAM and a ZFS-compatible OS. Whether you use RAID-Z(1-3) or not, you can ensure that your data is indeed safe on the server. Please just stick with software RAID if you build an array; hardware RAID can end in tears if you're not suitably prepared. There is also Greyhole, but I haven't read up on it for some time.
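For reference, creating a double-parity pool on a ZFS-capable OS is a one-liner; a minimal sketch, assuming FreeBSD-style device names and a pool called "tank" (substitute your own):

Code:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool status tank    # verify the layout and health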
#22
(2014-08-08, 02:30)XBL. Wrote: @OP: since you want something easy and don't really want anything special, you can just go with a Windows setup (if you're comfortable with that OS). Personally I'd go with something like Ubuntu with Amahi as a frontend (if you really want a frontend), but that does mean a slight learning curve.

Windows wouldn't work for the OP's needs, since he says he wants it to run from a USB drive. It looks like Amahi isn't free, in which case I'd personally pass on it.

(2014-08-08, 02:33)Stereodude Wrote:
(2014-08-08, 02:15)two515ty Wrote: What are the dangers of a RAID reconstruction? I'm curious myself because I'm planning on migrating to a RAID-Z2 (RAID6) system in the future, but I've never really heard about RAID reconstruction being dangerous.
The theory is that you're putting too much stress on the drives while rebuilding. Personally, I think it's a bunch of bunk in a home-use scenario. The drives aren't forced to do a pile of seeks thrashing away while rebuilding: it's a continuous sequential read from all the drives except the one being rebuilt, which gets a continuous sequential write. Now, if you were trying to do a bunch of reading and writing to the array while rebuilding it, you could put additional stress on the drives, but that's probably unlikely for the typical home user.

That's what I thought. I figure that for most home users, a rebuild with just a few drives isn't going to kill your hardware, but I'm just speculating. I haven't had any hardware failures on my NAS (tackles tree), but I don't imagine so many people would use RAID if its recovery mechanism were so "dangerous."

(2014-08-08, 02:35)nevillebartos Wrote:
(2014-08-08, 02:15)two515ty Wrote: What are the dangers of a RAID reconstruction? I'm curious myself because I'm planning on migrating to a RAID-Z2 (RAID6) system in the future, but I've never really heard about RAID reconstruction being dangerous.

Well, the problem is something going wrong during the reconstruction, the two main cases being a power failure and an unrecoverable read error (URE). Reconstruction takes a LONG time; if you lose power during it, or your machine opts out for whatever reason, you're a goner. UREs are something there are a few articles on, and they're a compelling reason to go for RAID 6 over RAID 5 in larger storage arrays: if you're running RAID 5, a disk fails, and you then hit a URE partway through the reconstruction, the reconstruction fails. An estimate of the odds of this happening can be calculated from the manufacturer's drive specs; somewhere past 12 TB or so it starts looking grim for RAID 5, while RAID 6 is significantly more resilient. A good UPS is also key, especially during reconstruction.

Power failure shouldn't be a problem IMO, as anyone concerned about the safety of their data should have their server running on a UPS.
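And if you're running apcupsd, it takes one command to confirm the server is actually talking to the UPS (assuming an APC unit; NUT has equivalents):

Code:
apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'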

As for UREs and the 12 TB figure, I never really heard of such issues when I was reading about RAID-Z and FreeNAS. In any case, for the OP, who has 8 separate disks, nothing less than RAID-Z2/RAID6 should be used IMO. It seems he is set on running each disk as an independent volume, though, so any discussion of RAID isn't really applicable either way.
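Still, for anyone who does go the Linux software-RAID route, the mdadm side of an 8-disk RAID6 is simple enough; a sketch with example device names:

Code:
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
cat /proc/mdstat    # watch the initial sync / any rebuild progress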

(2014-08-08, 02:38)nevillebartos Wrote:
(2014-08-08, 02:33)Stereodude Wrote: The theory is that you're putting too much stress on the drives while rebuilding. Personally, I think it's a bunch of bunk in a home-use scenario. The drives aren't forced to do a pile of seeks thrashing away while rebuilding: it's a continuous sequential read from all the drives except the one being rebuilt, which gets a continuous sequential write. Now, if you were trying to do a bunch of reading and writing to the array while rebuilding it, you could put additional stress on the drives, but that's probably unlikely for the typical home user.

An interesting point, especially when we consider that when we put these things together it's generally with a bunch of drives bought at once, so chances are they're the same manufacturer, the same age, maybe even from the same batch... If one goes, odds are another is close to death... Stressing all the drives during the rebuild may push another one over the edge.

In the research that I did about buying drives for a NAS, many people actually recommended mixing and matching drives from different manufacturers, merchants, and purchase dates to help reduce the risk of buying a bad batch of drives all at once.

(2014-08-08, 03:14)Soul_Est Wrote: I would recommend a proper home server build including ECC RAM and a ZFS-compatible OS. Whether you use RAID-Z(1-3) or not, you can ensure that your data is indeed safe on the server. Please just stick with software RAID if you build an array; hardware RAID can end in tears if you're not suitably prepared. There is also Greyhole, but I haven't read up on it for some time.

+1. Software RAID is the way to go nowadays; hardware RAID is just not ideal, given its low benefit-to-risk ratio. Personally, I use FreeNAS without ECC RAM, but I don't have any extremely important data on my server either. I don't know that I'd really recommend the extra cost of ECC for the OP, since he seems comfortable re-ripping media as needed.
#23
Amahi Home Server http://www.amahi.org is a free, easy, Linux-based (Fedora 19) home server.

Your 8 drives can be redundantly pooled (not striped) with a feature called Greyhole http://www.greyhole.net/, which is integrated into Amahi.

It offers easy setup and administration.
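From what I remember of the Greyhole docs (double-check the exact syntax at greyhole.net before relying on this), per-share redundancy is a few lines in /etc/greyhole.conf, roughly:

Code:
# example pool members; min_free reserves headroom on each drive
storage_pool_drive = /mnt/hdd1/gh, min_free: 10gb
storage_pool_drive = /mnt/hdd2/gh, min_free: 10gb

# keep two copies of everything in the "Movies" share, on different drives
num_copies[Movies] = 2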
#24
It just so happens that I was researching ZFS and ECC RAM earlier today, two515ty. This thread: http://forums.freenas.org/index.php?thre...zfs.15449/ explained everything perfectly. May not be for the OP, but many (including myself) will definitely use it.
#25
(2014-08-08, 03:35)two515ty Wrote: It looks like Amahi isn't free, in which case I'd personally pass on it.

Amahi is free. There are some paid "one click install" apps like Sickbeard and CouchPotato, but they make no bones about it: the "paid" part is just the one-click install, for users who don't want to bother learning how to install those apps themselves. You can do anything and nothing is "locked down". They have some paid services, but nothing that is even close to required.

Their current release is built on Fedora 19; the previous release was built on Ubuntu 12.04.

It's a good way to get your feet wet with a free Linux server OS that is easy to set up and use and doesn't put you in a box.

Heck, you can install it on top of Fedora Linux with ZFS (without Greyhole of course :lol:)... but if you can set up ZFS, you are not a Linux noob.
#26
I'm with you on the free part of Amahi, smitopher. I still feel as though the resource usage of Fedora + Amahi is too high. FreeNAS, NAS4Free, or even a roll-your-own (RYO) solution would be better suited here, to save money on the computing components and put it towards storage and a server-class motherboard.
#27
I'm running Amahi/Fedora headless on an Intel Atom with 4 GB of RAM and I have no resource issues. Without a GUI, the footprint of Fedora is no bigger than any dedicated NAS OS. I mean... they're all just Linux.
#28
FreeNAS and NAS4Free aren't (they're FreeBSD-based), but I understand what you meant.
#29
(2014-08-08, 00:40)MrCrispy Wrote: IMO safety and availability of data is the single most important criterion. The dangers of a RAID reconstruction are too high, and the costs of re-ripping all your media are also high. Plus, you don't lose anything with a parity-based solution that keeps data in its native format. In fact, I'd like to hear of a single advantage of using RAID. I can't think of one.

Why do you think they use RAID in datacenters, then?

This is why we use a "redundant array of inexpensive disks": so we can have some data redundancy in case of HDD failure.

Without RAID, when you have a HDD failure you lose everything on that disk. With a RAID6/RAID-Z2 setup you can lose 2 disks before you lose any data. In this day and age, if you have more than 6 disks and aren't using at least RAID5, then you are crazy :P. For example, eight 3 TB disks in RAID6 still leave 18 TB of usable space, with 6 TB going to parity. HDDs are relatively cheap (especially compared to my time), so someone saying that they cannot spare the disk space is not a good enough excuse.

Uptime and speed are the main advantages of RAID.

(2014-08-08, 03:35)two515ty Wrote: +1. Software RAID is the way to go nowadays; hardware RAID is just not ideal, given its low benefit-to-risk ratio. Personally, I use FreeNAS without ECC RAM, but I don't have any extremely important data on my server either. I don't know that I'd really recommend the extra cost of ECC for the OP, since he seems comfortable re-ripping media as needed.

I have 16 GB of non-ECC RAM and haven't had any issues: not a single error on any scrub yet. Would I like ECC RAM? Yes, but I didn't have any at the time I built my server :)
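For what it's worth, checking is cheap on a ZFS box; two commands, assuming a pool named "tank":

Code:
zpool scrub tank       # read and verify every block against its checksum
zpool status -v tank   # the CKSUM column shows any errors the scrub found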
"PPC is too slow, your CPU has no balls to handle HD content." ~ Davilla
"Maybe it's a toaster. Who knows, but it has nothing to do with us." ~ Ned Scott
#30
(2014-08-08, 04:13)Soul_Est Wrote: It just so happens that I was researching ZFS and ECC RAM earlier today, two515ty. This thread: http://forums.freenas.org/index.php?thre...zfs.15449/ explained everything perfectly. May not be for the OP, but many (including myself) will definitely use it.
I think I might have read that thread long ago, but I might be confusing it with one amongst many saying the same thing.

(2014-08-08, 04:24)smitopher Wrote:
(2014-08-08, 03:35)two515ty Wrote: It looks like Amahi isn't free, in which case I'd personally pass on it.

Amahi is free.

Indeed, I was mistaken. I didn't read the website carefully enough. I saw there was a store section and figured it was to buy licenses lol. Whoops!

(2014-08-08, 08:02)lrusak Wrote:
(2014-08-08, 03:35)two515ty Wrote: +1. Software RAID is the way to go nowadays; hardware RAID is just not ideal, given its low benefit-to-risk ratio. Personally, I use FreeNAS without ECC RAM, but I don't have any extremely important data on my server either. I don't know that I'd really recommend the extra cost of ECC for the OP, since he seems comfortable re-ripping media as needed.

I have 16 GB of non-ECC RAM and haven't had any issues: not a single error on any scrub yet. Would I like ECC RAM? Yes, but I didn't have any at the time I built my server :)

Aye, same here. I'm riding dirty and don't care 8). It's just that ECC RAM would cost too much and wouldn't offer enough benefit to warrant it (meaning I won't cry if my data gets corrupted). I wanted a mini-ITX build, and there just aren't many affordable mini-ITX boards that support ECC. I have an ECS NM70-i2 with 16 GB of Kingston RAM running FreeNAS and it's been running quite well for me.
