Western Digital Green Vs Red for Server
#31
As always, Assassin is hitting the nail on the head again.

Powering down drives is the best way to save electricity. My server is on 24/7 for accessibility, but 4 of the 5 drives are probably powered down 90% of that time. Even if you're watching movies 24/7, only one drive is truly serving media at any one time, which allows the other 4 drives of a 5-drive array to be powered down.

Newegg is charging crazy prices for the Red drives. A 3TB Red is over $100 more expensive than a 3TB Seagate. I would rather get 3x3TB Seagate than 2x3TB Red and use the third drive as backup in a RAID5 setup.

The longer warranty is nice, but for a home media server that is adequately cooled and powers down drives as necessary, one doesn't really need the uptime and MTBF ratings of the "better" drive.

The only time I would go with a Red is if I'm running a webserver that keeps the drive powered-up 24/7.
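For anyone who wants the spin-down behaviour described above on a Linux box, here is a rough hdparm sketch. The device names /dev/sdb through /dev/sde are only placeholders for the data drives, so adjust them to your own setup:

Code:
# Ask each data drive (run as root) to spin down after roughly 1 hour idle
# (-S values 241-251 map to (n-240) x 30 minutes, so 242 = 60 minutes)
for d in /dev/sd{b,c,d,e}; do
    hdparm -S 242 "$d"
done

# Check whether a drive is currently spun down ("active/idle" vs "standby")
hdparm -C /dev/sdb

# Force an immediate spin-down for testing
hdparm -y /dev/sdb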
Reply
#32
(2012-08-29, 00:02)cowfodder Wrote: People seem to forget what RAID actually stands for. Redundant array of INEXPENSIVE drives. The whole point is to be able to use cheaper drives. I've been using WD green drives for years with no issue, with them on 24/7 serving media to my system. The newer green drives will also spin up to 7200 rpm under heavy load. I say go with the green, unless the red happens to be cheaper.

The current definition of RAID is redundant array of independent disks. Here is a simple scenario where the Green drives will cause your RAID to fail and the Red drives would allow it to survive.

-Array starts off operating as normal, but drive 3 has a bad sector that cropped up a few months back. This has gone unnoticed because the bad sector was part of a rarely accessed file.
-During operation, drive 1 encounters a new bad sector.
-Since drive 1 is a consumer (Green) drive, it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
-The RAID controller exceeds its timeout threshold waiting on drive 1 and marks it offline.
-Array is now in degraded status with drive 1 marked as failed.
-User replaces drive 1. RAID controller initiates rebuild using parity data from the other drives.
-During rebuild, RAID controller encounters the bad sector on drive 3.
-Since drive 3 is a consumer drive, it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
-The RAID controller exceeds its timeout threshold waiting on drive 3 and marks it offline.
-Rebuild fails.

At the end of the day most people here are talking about simple media servers where all the data can be recovered one way or the other, but that doesn't mean you should pretend there are no real world advantages to the Red drives.
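If you want to see where your own drives stand on this under Linux software RAID (md), smartmontools can query the drive's error-recovery timeout (SCT ERC, i.e. TLER) and set it on drives that allow it, with the kernel command timeout as the fallback knob. This is only a rough sketch, with /dev/sda standing in for one array member:

Code:
# Query the SCT Error Recovery Control (TLER) setting, if the drive exposes it
smartctl -l scterc /dev/sda

# On drives that allow it (Reds ship with this enabled; most Greens refuse the
# command), cap read/write error recovery at 7 seconds (values are in 0.1 s)
smartctl -l scterc,70,70 /dev/sda

# Fallback for drives stuck with long internal retries: raise the kernel's
# per-command timeout (default 30 s, as root) so md waits instead of dropping
# the drive
echo 180 > /sys/block/sda/device/timeout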
HTPC: i5 3570K || Noctua NH-L12 || ASRock Z77E-ITX || 8GB Samsung Eco || Intel 330 120GB || Lian Li PC-Q09 ||
Main Desktop: i7 2600k @ 4.8Ghz || Epic 180 on Epic T1000 TIM || Asus Z68 Deluxe || 16GB Ripjaws @ 2133 ||
|| Maingear Shift || EVGA 8800GTS 640MB || OWC Mercury Electra 3G || 320GB, 2x500GB, 1.5TB, 2x2TB ||
Reply
#33
(2012-08-29, 14:24)bznotins Wrote: As always, Assassin is hitting the nail on the head again.

Powering down drives is the best way to save electricity. My server is on 24/7 for accessibility, but 4 of the 5 drives are probably powered down 90% of that time. Even if you're watching movies 24/7, only one drive is truly serving media at any one time, which allows the other 4 drives of a 5-drive array to be powered down.

Newegg is charging crazy prices for the Red drives. A 3TB Red is over $100 more expensive than a 3TB Seagate. I would rather get 3x3TB Seagate than 2x3TB Red and use the third drive as backup in a RAID5 setup.

The longer warranty is nice, but for a home media server that is adequately cooled and powers down drives as necessary, one doesn't really need the uptime and MTBF ratings of the "better" drive.

The only time I would go with a Red is if I'm running a webserver that keeps the drive powered-up 24/7.

The price difference is due to Seagate having a bad reputation in recent years for releasing faulty firmware that caused all kinds of issues, and to the WD Reds being in high demand. Again, if you would read the reviews, the original price of the 3TB Reds was $179, or about $20 more than the Greens. I am glad the Green drives are working out for you, but a simple Google search will show you that the Green drives are not bulletproof or some kind of miracle product.
HTPC: i5 3570K || Noctua NH-L12 || ASRock Z77E-ITX || 8GB Samsung Eco || Intel 330 120GB || Lian Li PC-Q09 ||
Main Desktop: i7 2600k @ 4.8Ghz || Epic 180 on Epic T1000 TIM || Asus Z68 Deluxe || 16GB Ripjaws @ 2133 ||
|| Maingear Shift || EVGA 8800GTS 640MB || OWC Mercury Electra 3G || 320GB, 2x500GB, 1.5TB, 2x2TB ||
Reply
#34
(2012-08-29, 14:47)SSDD Wrote: The price difference is due to Seagate having a bad reputation in recent years for releasing faulty firmware that caused all kinds of issues, and to the WD Reds being in high demand. Again, if you would read the reviews, the original price of the 3TB Reds was $179, or about $20 more than the Greens. I am glad the Green drives are working out for you, but a simple Google search will show you that the Green drives are not bulletproof or some kind of miracle product.

The $150 3TB Seagates to which I refer aren't green drives.

At $20 more expensive, the Reds might make sense. But at a $100+ premium (current street pricing at NewEgg and Amazon), they don't.

I didn't say the [cheaper] drives were a miracle product or bulletproof. That's why I run a fully redundant backup of mine. I just said the Reds aren't worth the $100+ premium. /shrug

Reply
#35
(2012-08-29, 10:20)CpTHOOK Wrote:
(2012-08-29, 09:37)Beer40oz Wrote: Wondering if the WDIdle3 fix to stop the head parking is required for them or not?
Thanx...Wink
Yes...!! I just bought two of them this past week to add to my new unRAID build, and WDIdle3 will work to disable the drive-park timer! NewEgg had them for $94 apiece.

Sounds like a plan... I will try it out. Just like my other WD drives, the EARS and EADS... damn heads...
So any special steps to disable it on the EARX? I've been reading different things: some say to do it, some say not to do it... some say it cannot be disabled... Wink

Any clicking sounds after you did it? hehe
Reply
#36
Hmm, just saw that my WD Green 2TB shipped today; I hope it won't cause too many problems. Then again, a lot of people have problems with the Vertex 2 SSD series, which has been running for over a year and a half now in my desktop without any issues. So you never know what happens.
Reply
#37
Beer....

Post the links to the articles you read. Why are they saying not to disable the timers? Lmao... no clicking so far, bro!

What do you think about these Red drives? Based on the advantages listed in this thread so far, do you think they are worth the investment? I have room in my server for 3 more drives, all of which will be 3TB.
Reply
#38
Hook,

Now I cannot find that post... just reviews and such from Newegg... something about clicking... hehe.
I will be buying some EARX drives and stopping the head parking...

The Red drives sound really awesome, but they're very expensive at the moment. I will stick with the Caviar Greens... down the road I may buy some Reds once they have been on the market a little longer and have come down in price.

Tons of people have run regular HDDs for years without any trouble...
Reply
#39
You can disable head parking with the '/D' option. I believe you may need to monitor it and run the tool more than once if the change doesn't take.

Also, until the Reds come down in price, I can't see them being worth the ~$110 premium.
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
Reply
#40
Hi there!

I'm resurrecting this thread to ask if somebody has some more info for the Green vs Red debate. In the next month or two I will be buying two 3TB drives, and I'm set on the WD Green or Red, but I don't know what to do; the price difference is about 20€, I think.

This will go into a Linux server (FlexRAID or SnapRAID, running on top of Linux Mint). I've seen that the Green drives do have problems with the load cycle count (LCC) getting very high, which is solved by using WD's own wdidle3 tool to raise the park timer from 8 seconds to 5 minutes. I understand what this does is completely stop the drive (at the mechanical level) after 5 minutes of no use (8 seconds if you don't change it), but does this feature work independently of the OS you are using? Can the OS bypass that feature?

What about the Red? I have read they don't have this IntelliPark stuff, they are rated for 24/7 operation, and they have an improved design to minimize vibration and so on. But what happens when the drive is not being used? I've read you have to set the OS to park the heads when idle, true? Is that done via hdparm?

I'm interested in power consumption: which one would be lower? Which one would you get and why, considering the roughly 20€ premium for the Red? The 20€ is fine, but I want to make sure the Red is the better-suited drive for my needs and that it won't cost me a fortune in electricity.

Thanks!
Reply
#41
(2013-03-02, 09:58)PatrickVogeli Wrote: Hi there!

I'm resurrecting this thread to ask if somebody has some more info for the Green vs Red debate. In the next month or two I will be buying two 3TB drives, and I'm set on the WD Green or Red, but I don't know what to do; the price difference is about 20€, I think.

This will go into a Linux server (FlexRAID or SnapRAID, running on top of Linux Mint). I've seen that the Green drives do have problems with the load cycle count (LCC) getting very high, which is solved by using WD's own wdidle3 tool to raise the park timer from 8 seconds to 5 minutes. I understand what this does is completely stop the drive (at the mechanical level) after 5 minutes of no use (8 seconds if you don't change it), but does this feature work independently of the OS you are using? Can the OS bypass that feature?

What about the Red? I have read they don't have this IntelliPark stuff, they are rated for 24/7 operation, and they have an improved design to minimize vibration and so on. But what happens when the drive is not being used? I've read you have to set the OS to park the heads when idle, true? Is that done via hdparm?

I'm interested in power consumption: which one would be lower? Which one would you get and why, considering the roughly 20€ premium for the Red? The 20€ is fine, but I want to make sure the Red is the better-suited drive for my needs and that it won't cost me a fortune in electricity.

Thanks!


Hey Patrick,

I have a WD Green drive in my NAS that has been running pretty much 24/7 for the past 3 years, and it is only just now showing signs of failure (loud seeking, SATA resets in the syslog, etc.).

I have just dropped some cash on a pair of 2TB WD Reds to swap out the old drives in my NAS.

https://www.youtube.com/watch?feature=fv...dyy6o&NR=1


Code:
Feb 25 18:56:32 NAS4220B user.err kernel: [189878.848450] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Feb 25 18:56:32 NAS4220B user.err kernel: [189878.887895] ata1.00: BMDMA stat 0x5
Feb 25 18:56:32 NAS4220B user.err kernel: [189878.913310] ata1.00: cmd 35/00:00:32:f9:b2/00:04:57:00:00/e0 tag 0 dma 524288 out
Feb 25 18:56:32 NAS4220B user.err kernel: [189878.913310]          res 51/84:00:32:f9:b2/84:04:57:00:00/e0 Emask 0x10 (ATA bus error)
Feb 25 18:56:32 NAS4220B user.info kernel: [189879.007287] ata1: soft resetting link
Feb 25 18:56:32 NAS4220B user.info kernel: [189879.348825] ata1.00: configured for UDMA/133
Feb 25 18:56:32 NAS4220B user.info kernel: [189879.375201] ata1: EH complete
Feb 25 19:05:40 NAS4220B user.err kernel: [190427.271659] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Feb 25 19:05:40 NAS4220B user.err kernel: [190427.310900] ata1.00: BMDMA stat 0x5
Feb 25 19:05:40 NAS4220B user.err kernel: [190427.332445] ata1.00: cmd 35/00:00:12:c8:b8/00:04:60:00:00/e0 tag 0 dma 524288 out
Feb 25 19:05:40 NAS4220B user.err kernel: [190427.332445]          res 51/84:00:12:c8:b8/84:04:60:00:00/e0 Emask 0x10 (ATA bus error)
Feb 25 19:05:40 NAS4220B user.info kernel: [190427.426462] ata1: soft resetting link
Feb 25 19:05:40 NAS4220B user.info kernel: [190427.640325] ata1.00: configured for UDMA/133
Feb 25 19:05:40 NAS4220B user.info kernel: [190427.666676] ata1: EH complete
Feb 25 20:13:18 NAS4220B daemon.info smartd[2268]: Device: /dev/sdb [SAT], is back in ACTIVE or IDLE mode, resuming checks (5 checks skipped)
Feb 25 20:13:18 NAS4220B daemon.info smartd[2268]: Device: /dev/sdb [SAT], Temperature 42 Celsius reached limit of 40 Celsius (Min/Max 41/42)
Feb 25 22:09:37 NAS4220B user.err kernel: [201463.793245] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Feb 25 22:09:37 NAS4220B user.err kernel: [201463.834274] ata1.00: BMDMA stat 0x5
Feb 25 22:09:37 NAS4220B user.err kernel: [201463.856172] ata1.00: cmd 35/00:00:da:d8:1b/00:04:00:00:00/e0 tag 0 dma 524288 out
Feb 25 22:09:37 NAS4220B user.err kernel: [201463.856172]          res 51/84:00:da:d8:1b/84:04:00:00:00/e0 Emask 0x10 (ATA bus error)
Feb 25 22:09:37 NAS4220B user.info kernel: [201463.950307] ata1: soft resetting link
Feb 25 22:09:37 NAS4220B user.info kernel: [201464.165488] ata1.00: configured for UDMA/133
Feb 25 22:09:37 NAS4220B user.info kernel: [201464.191835] ata1: EH complete
Feb 25 22:46:26 NAS4220B user.err kernel: [203673.197404] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
Feb 25 22:46:26 NAS4220B user.err kernel: [203673.236825] ata1.00: BMDMA stat 0x5
Feb 25 22:46:26 NAS4220B user.err kernel: [203673.258383] ata1.00: cmd 35/00:00:8a:50:52/00:04:1d:00:00/e0 tag 0 dma 524288 out
Feb 25 22:46:26 NAS4220B user.err kernel: [203673.258383]          res 51/84:00:8a:50:52/84:04:1d:00:00/e0 Emask 0x10 (ATA bus error)
Feb 25 22:46:26 NAS4220B user.info kernel: [203673.352338] ata1: soft resetting link
Feb 25 22:46:26 NAS4220B user.info kernel: [203673.575351] ata1.00: configured for UDMA/133
Feb 25 22:46:26 NAS4220B user.info kernel: [203673.601717] ata1: EH complete
HTPC
XBMC v12 RC3 with myth TV backend || AMD Sempron 145 || 1gb DDR3 || ECS MCP61M-M3 Version 3 ( no core unlocker) || TBS6981 DVB-S2 tuner || ATI Radeon HD 5450 || 32gb Crucial SSD

NAS
IB-NAS4220-B HW revision 1.2 || 2x 1TB HDD RAID 1 || Open WRT firmware.
Reply
#42
If the price differential is not too great, go for the Red.

Also, wdidle can be used to disable IntelliPark with no ill effects - I've done this on all my WD Green drives: 'WDIDLE3.exe /D'

This is independent of the OS. However, before and after you do this, monitor the load_cycle_count; I've had a couple of drives where I had to run wdidle more than once for it to stick.
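On Linux, the attribute to watch is Load_Cycle_Count (SMART ID 193). A quick sketch with smartmontools, where /dev/sdb is just a placeholder for the Green drive in question:

Code:
# Note the current head-park count before running wdidle3
smartctl -A /dev/sdb | grep -i load_cycle_count

# ...run WDIDLE3 /D (or set a longer timer), power-cycle the drive, then check
# again after a day of normal use. If the counter is still climbing by hundreds
# per day, the change didn't stick - run wdidle3 again.
smartctl -A /dev/sdb | grep -i load_cycle_count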
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
Reply
#43
Hey guys... I realize I'm bumping a pretty old thread, BUT I wanted to mention that Newegg is really starting to drop the prices on these Red drives. I caught a 24h promo code and got two 3TB drives for around $125 apiece. That's even cheaper than the Green ones. Just a heads up to keep an eye out if you were thinking of adding one or two to your server array.
Reply
#44
Reds on SATA3 are a file transfer dream! 135-140MB/s transfers, thank you please!
Modded MK1 NUC - CLICK ----- NUC Wiki - CLICK

Bay Trail NUC FTW!

I've donated, have you?

Reply
#45
I have never had any problems with Greens after using wdidle... I suspect that the Reds are in fact slowed-down Blacks rated at 5400-5900 rpm.
Reply
