Raid
#16
jhsrennie Wrote:All this is getting a bit frustrating. Just for the record, I look after several hundred Dell servers with Perc RAID controllers and four- to eight-disk (usually Samsung disks) RAID5 or RAID6 arrays. In fact, as I type this, my own server is contentedly writing to a four-disk RAID5 array on a Perc controller.

Anyhow, the performance of even the relatively puny four disk RAID5 on my server is substantially faster than the Gigabit network it's connected to. I get steady 100MB/s speeds copying to and from the server across the network. The big arrays are ridiculously fast. We use them on Hyper-V servers running half a dozen virtual machines and disk speed is rarely an issue.

The Perc is a relatively high end controller and would cost you around £250 new, though they're available for around £100 on ebay. My point is that it isn't that hard to get stellar performance from RAID.

JR

Frustrating is an understatement, my friend, lol. It's taking 8 hours to copy 1TB of data from a USB3 drive, and that doesn't seem right. Even copying a single rip over the network to the HTPC takes 30 minutes. I just can't understand how anyone can keep their media remotely and stream it to the HTPC over the network, unless I set it up wrong. I was getting 20-minute buffering pauses and 3-second playback loops yesterday.
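For a sense of scale, the numbers in the post above can be sanity-checked with a little arithmetic. This is a sketch only; the 1TB and 8-hour figures come from the post, everything else (decimal units, the helper name) is illustrative:

```python
# Back-of-envelope check of the copy speed described above.
# Assumes 1 TB = 10**12 bytes and 1 MB = 10**6 bytes (decimal units).

def effective_mb_per_s(total_bytes: float, hours: float) -> float:
    """Average transfer rate in MB/s over the whole copy."""
    return total_bytes / (hours * 3600) / 1e6

rate = effective_mb_per_s(1e12, 8)
print(round(rate, 1))  # ~34.7 MB/s
```

That works out to roughly 35MB/s, far below what either a USB3 drive or a healthy gigabit link should sustain, which supports the suspicion that something in the setup is wrong.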
Server: Synology Diskstation 1511+ with 8x WD Red NAS 3TB drives, DSM 5.2
Main HTPC: Home Built i3, 8GB RAM, Corsair 128GB SSD, nVidia 630GTX, Harmony Home Control, Pioneer VSX-53, Panasonic VT30 65" 3D TV, Windows 10, Isengard
Bedroom HTPC: Zotac-ID 41 8GB RAM, 128GB SSD, Rii micro keyboard remote, Samsung HW-E550, Sony 32" Google TV, OpenElec 6.0 beta 4
Reply
#17
Firstly, I know very little about this subject, but when I recently updated my DNS-323 with Alt-F firmware (so I could use the USB port for an external HDD and use RAID5), it took over 20 hours to prepare the drives initially.

So my question is: did you wait for the array to finish setting up before transferring files?
Reply
#18
Maybe I'm being overly simplistic, but what do we know about your network? If it is taking that long to move files over the network, maybe the problem isn't completely isolated to the hard drives and selected RAID solution.

For the record, I have gigabit Ethernet running over Cat5e cables, and I am able to drive four simultaneous streams from my Drobo (connected via USB 2.0 to my "main" PC running Windows 7), with at least two of those streams being HD/1080p. All of the files are MKV rips and they play back with very little trouble.

As an aside, this type of topic is why I chose Drobo for my NAS solution. I definitely have enough surplus computer equipment lying around where I could easily scrap together a NAS running unRAID or FreeNAS, but at the end of the day, I chose the Drobo because all I had to do was literally pop in the drives, install the Drobo software, set up a share and GO! As part of the "purchase justification" to the wife, I calculated my personal salary rate times the amount of time necessary to get a DIY solution up and running, then I added to that the current US mileage reimbursement rate times the distance to the local computer store, with that value being multiplied by the number of trips to the local computer store. There was an environmental fee applied to all of the "old" parts that were replaced by "new" parts on the second trip to the computer store which likely still did not solve the problem fully. Finally, there is the consulting fee I have to now pay my IT friend who drinks really expensive vodka to finally come over and get all of this working for me. So, in the end, it was much cheaper for me to buy the Drobo Rofl
Reply
#19
I don't see why you would have any issues running RAID5 with the configuration you are describing, unless you are using low-end drives such as the Western Digital Green drives. If you have a decent true hardware RAID controller (not the fake RAID controllers like the low-end Promise products), you should have very respectable performance.

I am using the Adaptec 52445 controller with 8 drives in a raid 5 array and I have 3 XBMC systems using this as the media storage through SMB. My system does not skip a beat, and I am maximizing my storage capacity (Eight Seagate 3TB drives) while still being fully redundant. The 52445 controller is a bit pricey, however, there are other controllers in its class that cost less (my controller can handle up to 26 direct connected drives).

If you are experiencing performance issues with hardware RAID, you can try adjusting the stripe size, which can dramatically affect performance, as well as the block size.
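To illustrate what stripe size means in practice, here is a small sketch of the relationship between the per-drive chunk size and a full-stripe write on a parity array. The 64 KiB chunk and 8-drive example are illustrative values, not recommendations from the post:

```python
# Illustrative only: how the per-drive chunk (stripe unit) size relates
# to a full-stripe write on a parity RAID array.

def full_stripe_bytes(chunk_kib: int, n_drives: int, n_parity: int) -> int:
    """Bytes of data written in one full stripe (parity drives excluded)."""
    return chunk_kib * 1024 * (n_drives - n_parity)

# An 8-drive RAID5 (1 parity drive) with a 64 KiB chunk:
print(full_stripe_bytes(64, 8, 1))  # 458752 bytes = 448 KiB per full stripe
```

Writes smaller than a full stripe force the controller into read-modify-write cycles to update parity, which is one reason tuning the stripe size to the workload can matter.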

Also, what slot do you have the controller plugged into on your storage system?

Also, as posted above, your network is highly likely to be a bottleneck. Ideally, you want to be running across a gigabit switched network. Also, you want to keep the connectivity between devices as simple as possible. The storage and the XBMC devices should be connected to the same network switch.
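The gigabit ceiling mentioned throughout this thread is easy to derive. A sketch, where the ~94% factor is a rough allowance for Ethernet/IP/TCP framing overhead (an assumption, not a measured figure):

```python
# Why ~115-118 MB/s is the realistic ceiling for file copies over
# gigabit Ethernet.

GIGABIT_BITS_PER_S = 1_000_000_000

raw_mb_s = GIGABIT_BITS_PER_S / 8 / 1e6   # 125.0 MB/s raw line rate
practical_mb_s = raw_mb_s * 0.94          # ~117.5 MB/s after protocol overhead
print(raw_mb_s, round(practical_mb_s, 1))
```

So sustained copies in the 110-120MB/s range mean the network, not the array, is the limiting factor; speeds well below that point at a problem elsewhere.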
Reply
#20
Gr8rtek Wrote:As an aside, this type of topic is why I chose Drobo for my NAS solution. I definitely have enough surplus computer equipment lying around where I could easily scrap together a NAS running unRAID or FreeNAS, but at the end of the day, I chose the Drobo because all I had to do was literally pop in the drives, install the Drobo software, set up a share and GO! As part of the "purchase justification" to the wife, I calculated my personal salary rate times the amount of time necessary to get a DIY solution up and running, then I added to that the current US mileage reimbursement rate times the distance to the local computer store, with that value being multiplied by the number of trips to the local computer store. There was an environmental fee applied to all of the "old" parts that were replaced by "new" parts on the second trip to the computer store which likely still did not solve the problem fully. Finally, there is the consulting fee I have to now pay my IT friend who drinks really expensive vodka to finally come over and get all of this working for me. So, in the end, it was much cheaper for me to buy the Drobo Rofl

CHICKEN!!! Laugh
HTPC: LibreELEC 7 on Shuttle XS35GTv2 & Raspberry Pi 3
NAS: NAS4Free 2x 3TB Raid1
Reply
#21
I have experience with RAID 5/6 in my setup and haven't had much trouble. I've had the following two systems as servers at one time or another:
Intel MB with 6 SATA ports
Pentium dual-core CPU
4GB DDR3
5x500GB consumer drives, later replaced with 5x2TB consumer drives
Fedora Core
mdraid (linux software raid)

Gigabyte 790fx MB with 8 SATA ports
AMD Phenom II X4 810
8GB
Fedora Core and Ubuntu 11.10
5x2TB drives

I started with the first system, swapped the drives at one point, later on the MB, then upgraded the OS. Once the RAID was moved to the 2TB drives it hasn't been redone: it moved from one MB to the other and from Fedora to Ubuntu without trouble. That's one of the benefits of software RAID with Linux; I can plug those drives into any system with mdraid installed and they will be recognized as a RAID array and available on boot. The 2TB array uses XFS; I think I used ext4 on the older array, but I can't recall for certain.
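For reference, the usable capacity of the two arrays described above follows from the basic RAID5 rule that one drive's worth of space goes to parity. A quick sketch (the helper name is mine, the drive counts are from the post):

```python
# RAID5 usable capacity: (n - 1) drives' worth of data, 1 drive of parity.

def raid5_usable_tb(n_drives: int, drive_tb: float) -> float:
    return (n_drives - 1) * drive_tb

print(raid5_usable_tb(5, 0.5))  # 2.0 TB for the 5x500GB array
print(raid5_usable_tb(5, 2.0))  # 8.0 TB for the 5x2TB array
```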

With the first system, I found access from my Windows PC limited to 70MB/s or thereabouts in sustained copies of large files over the network. I never could get full gigabit speeds out of it. As soon as I swapped MBs, which included the RAM upgrade, speeds exceeded 100MB/s. I see peak speeds of 130MB/s, but typically average around 115MB/s during sustained copies of large files (based on the Windows copy dialog and my own rough estimates). Copying a large number of smaller files is much slower. I've found that I can even get these speeds over the old Cat 5 wiring my builder installed.

So software RAID with off-the-shelf consumer products works in my experience. I've done no special tuning of the hard drives or network cards. My server is probably a bit overkill for such a device, but I did add an Nvidia card to it, so it also serves as my primary XBMC system, hosts the database for XBMC, and runs the Squeezebox server software. Because of this multi-purpose role I haven't tried anything like unRAID or FreeNAS.

Alas there are a lot of components in a RAID setup that can cause problems. One thing you can do is try copying to a single drive on the server, both over the network and over USB, and see how your system handles that.
Reply
#22
Orclas Wrote:CHICKEN!!! Laugh

LOL, probably so! But I figured I would lose my a$$ on just the trips to the local Fry's store, so it was well worth $300 for the base Drobo. And naturally, just after I purchased mine, Costco offered up the Drobo FS with 6TB of storage for $800. Oh, wellHuh
Reply
#23
Hardware RAID changed recently due to the lack of TLER support, as I stated in the 2nd post. Not many people know this, and they try tons of options to find out why their array is so slow all of a sudden....
If you run an array with enterprise-level drives (Seagate Constellation, WD RE), then you shouldn't have a problem running hardware RAID.

PS: Hardware RAID means a dedicated controller, like the LSI you are describing... your onboard RAID is fake RAID... it's the slowest option possible, and if your mb dies, you are probably scr*wed....

At the moment I am running 2x10 2TB drives in 2 raidz2 arrays. I have write speeds of around 130MB/s over dual-port gigabit LAN.
The good thing about raidz2 is that you don't need an expensive RAID controller... it's softraid. Lots of people despise softraid because it eats up your resources... This was true in the Pentium 3 era, but nowadays CPUs are so fast that you hardly notice the load.

Raidz2 is the softraid counterpart of RAID6... raidz is comparable to RAID5... There is also the option of raidz3, which has 3 parity drives...
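The capacity trade-off between the three raidz levels mentioned above can be sketched in a few lines. The 10x2TB vdev size matches the setup described earlier in this post; the helper name is mine:

```python
# raidz / raidz2 / raidz3 give up 1, 2 and 3 drives' worth of space
# to parity, respectively.

def raidz_usable_tb(n_drives: int, drive_tb: float, parity: int) -> float:
    return (n_drives - parity) * drive_tb

for level, p in (("raidz", 1), ("raidz2", 2), ("raidz3", 3)):
    print(level, raidz_usable_tb(10, 2.0, p))
# raidz 18.0, raidz2 16.0, raidz3 14.0
```

So each 10x2TB raidz2 vdev in the setup above yields about 16TB of usable space while surviving any two drive failures.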

Please google ZFS, or try installing FreeNAS, it's free and setup in 10 minutes...
If you don't want a dedicated machine but still want to use ZFS you can do so in the XBMCBuntu release, just install ZFS-Fuse... that way you have all advantages of ZFS in your existing HTPC setup.

If you decide to google, also search for the term TLER, and you'll understand why hardware RAID with cheap disks is no longer a good choice... At the first read error the disk gets kicked out of the array, gets detected again, and you are rebuilding the array for 72 hours, again and again... This will probably not happen within the first few days, but very likely within a week or two....
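To put the rebuild pain in perspective, here is a rough lower bound: a rebuild has to rewrite the entire replacement drive, so even in the ideal case it takes hours. The 100MB/s figure is an assumed sequential rebuild rate, and real rebuilds under load run far slower, which is how you end up at the 72-hour figure mentioned above:

```python
# Rough best-case rebuild time: the whole replacement drive must be
# rewritten. Assumes an uninterrupted sequential rebuild rate.

def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    return drive_tb * 1e6 / mb_per_s / 3600

print(round(rebuild_hours(3.0, 100), 1))  # ~8.3 h best case for a 3TB drive
```

Every spurious drive drop-out caused by a missing-TLER timeout triggers another one of these full rebuilds, during which the array is degraded and vulnerable.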

Note: The early "Green" models from Western Digital (EADS, produced before November 2010) still have the TLER function.
The WD tool to enable/disable TLER does not work on the newer models.

Samsung/Seagate also have TLER but call it something different.... I think Hitachi still supports it, but I cannot 100% confirm this...
Reply
#24
Thanks Gollum. I recently bought a Zotac to connect to my DVD and the machine I had before with XBMC on it will now stream my media. The 4x 2TB drives I have are http://www.newegg.com/Product/Product.as...6822148681

Probably a poor choice, eh?

As for the controller, the LSI only does SATA II, so I figured I'd do better to go onto the motherboard and get SATA III, and I am running RAID 10 right now. Do you recommend at the very least that I get better drives or a better controller? Everything seems to work fine now, except for a minor hiccup or two when I first start streaming a show or movie.

What about Windows 7 software RAID? That way if the mobo went bad, I wouldn't lose my data since as soon as I reinstalled Windows the array would be there.
Reply
#25
The other major benefit of ZFS is that you can move the pool (RAID array) to any other computer with ZFS installed really easily - plus the bit-rot detection, dedupe and compression (for all your pics, mp3s and normal files) that us ZFS converts go on about.
Reply
#26
You mean physically move the drives? What if I want to install ZFS AND install enterprise-level HDDs for my server? Is there any way to mirror the data, or would I have to just back up across the network to my main PC and then restore to the new ZFS RAID?
Reply
#27
Yes - I mean you can simply unplug the drives and insert them into another computer (you should really run an 'export'/'import' command first, but if your hardware has failed you can get away without doing that). Plus ZFS supports distributed replication/snapshots too.

Although I'd question the need for the enterprise-level HDDs unless you're looking at MTBFs etc.

edit: Snapshots and clones of snapshots are very cool - I used them to share our music library. I snapshotted all our music, made a clone that I shared to my wife's Mac/iTunes, then she deleted all the crap she doesn't listen to and added her own stuff. That shared the 50GB of music as two separate shares using only the original 50GB of storage. Which I guess is another way of publishing a movie library for your kids with all the adult stuff removed.

Basically ZFS gives you the same features as an enterprise NetApp (I know, as I look after 7 NetApp filers with over 200TB of storage).
Reply
#28
What is MTBF?

So, you're saying keep my exact setup but by using ZFS I would be in much better shape to do what I'm doing?

What's the best way to archive all my data before switching filesystems?
Reply
#29
Mean time between failures. As others have posted, I have no idea what your setup is; ZFS as I've described it is really best suited to a dedicated server.
Reply
#30
You should head over here: http://forum.xbmc.org/forumdisplay.php?fid=112
Reply