
Green vs Red drives for 24/7 operation?


ps2cho

Member
Joined
Oct 13, 2004
On the face of it, the obvious answer is RED.
But WHY?? Is the internal hardware physically different, or is it simply optimized firmware?
Other than the listed marketing points, WHY is the Red a better option for 24/7 run time? What is the reason that Red is better?

I am reading that on the green drives you can change the head parking time, so that's eliminated (link)
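(For reference: on Linux, the idle3-tools package exposes that timer. A sketch only; /dev/sdX is a placeholder for the actual Green drive, and the drive needs a power cycle before a new value takes effect.)

```shell
# Read the current idle3 (head-parking) timer on a WD Green.
# idle3ctl ships with the idle3-tools package; /dev/sdX is a placeholder.
idle3ctl -g /dev/sdX

# Raise the timer to ~300 seconds (raw value 138: values above 128 count
# in 30-second steps), or disable head parking entirely with -d.
idle3ctl -s 138 /dev/sdX
# idle3ctl -d /dev/sdX
```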

For my server:
24/7
NON-RAID NAS
Extremely light load
Cool environment
Do not care about speed
Do not care about sound

I'm not looking for WD's marketing efforts repeated. What is the REAL scoop on why I should spend a quarter more of my cash on it? Or for my goals, are Greens fine?
 
Greens should do fine.

That said, Reds were developed for NAS operation where Greens were developed to be quiet and low power for a desktop.

Also, the Reds do have a longer warranty.
 
Right, but what does "developed for NAS operation" actually mean?
I can call my dog "developed for saving lives" by simply teaching him to sit at crosswalks. Doesn't exactly mean he's saving lives as it suggests.

Aside from the warranty: I've searched all over Google and cannot find any credible data sheets suggesting that failure rates are any lower on consumer, "NAS", or enterprise drives, anywhere...
Are we really just paying for the factory changing the firmware for us, and throwing an extra year of warranty and that's it?
 
When considering which to get, it comes down to:

1) Head parking (can be changed)
2) Speed (may not matter)
3) TLER (link)

TLER is the big one.
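(To make the TLER point concrete: TLER is exposed through SMART as SCT Error Recovery Control, which smartctl can read and, on drives that support it, set. A sketch only; /dev/sdX is a placeholder, and desktop drives may reject the set command or lose the value on a power cycle.)

```shell
# Read the current SCT Error Recovery Control (TLER) timeouts.
smartctl -l scterc /dev/sdX

# Cap read/write error recovery at 7.0 seconds (values are in tenths
# of a second) so a RAID controller doesn't drop the drive while it
# retries a bad sector. Reds ship with this enabled; Greens may refuse it.
smartctl -l scterc,70,70 /dev/sdX
```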
 
Most of my Green drives died, some early and some eventually, in a NAS, I believe mainly because of the constant parking of the heads. The Reds so far have been great, with no failures as of yet.
 
Generally, drives developed as NAS and enterprise drives have a higher MTBF (mean time between failures) than desktop drives, whether it's a better-designed actuator for the read/write heads, a better motor for the platters, or firmware better designed to handle longer run times. You can usually search for white papers on the various pieces of hardware a company releases.
 
Do you know of one that shows a correlation between lifespan and MTBFs? Because I cannot find one. An independent report, obviously, not a company release.

I have an old Seagate 7200.11 that has 35k hours on it, and it's still working. That works out to almost exactly four years of continuous run time. Not a special Red drive... just a normal drive.

Here's something I found:
Enterprise vs. Consumer Drives
At first glance, it seems the enterprise drives don’t have that many failures. While true, the failure rate of enterprise drives is actually higher than that of the consumer drives!

                         Enterprise Drives   Consumer Drives
Drive-Years of Service   368                 14,719
Number of Failures       17                  613
Annual Failure Rate      4.6%                4.2%
It turns out that the consumer drive failure rate does go up after three years, but all three of the first three years are pretty good. We have no data on enterprise drives older than two years, so we don’t know if they will also have an increase in failure rate. It could be that the vaunted reliability of enterprise drives kicks in after two years, but because we haven’t seen any of that reliability in the first two years, I’m skeptical.

You might object to these numbers because the usage of the drives is different. The enterprise drives are used heavily. The consumer drives are in continual use storing users’ updated files and they are up and running all the time, but the usage is lighter. On the other hand, the enterprise drives we have are coddled in well-ventilated low-vibration enclosures, while the consumer drives are in Backblaze Storage Pods, which do have a fair amount of vibration. In fact, the most recent design change to the pod was to reduce vibration.

Overall, I argue that the enterprise drives we have are treated as well as the consumer drives. And the enterprise drives are failing more.
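(Those annual failure rates are just failures divided by drive-years; a quick recomputation of the quoted numbers, plus the 35k-hour figure from earlier in the thread, assuming 24/7 operation:)

```shell
# Recompute the Backblaze annual failure rates (failures / drive-years)
# and convert 35,000 power-on hours into years of continuous operation.
awk 'BEGIN {
    printf "Enterprise AFR: %.1f%%\n", 17 / 368 * 100
    printf "Consumer AFR:   %.1f%%\n", 613 / 14719 * 100
    printf "35000 hours   = %.2f years\n", 35000 / (24 * 365)
}'
```

Which matches the 4.6% / 4.2% in the table above and the "almost dead on 4 years" estimate earlier in the thread.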
 
I remember hearing that the main difference between Reds and Greens was that Green drives have a longer check-in time on system startup, which on occasion can cause them to drop from RAID. WD shortened the time on the Reds, so they worked better in RAID and with a wider range of controllers. I don't know if that's changed lately or if it was even that accurate to begin with :shrug:

But if there is some truth to it and you're not going to RAID them, might as well save the money and get the Greens, unless you want the longer warranty.
 
You are thinking of TLER, as linked above. It isn't a problem at startup.
 
In my Synology I had 3x 1TB WD Greens in RAID 5 for almost 2 years before I upgraded them, in 24/7 constant operation. I remember seeing some of my 1TB drives with over 10,000 start/stop operations on them, though, and they're still running today, just in a much less demanding environment.

Originally I thought the Red had major issues with failure rates. If that's been solved, great! Though I never had an issue with my Green drives in NAS/RAID conditions.
 
The Reds have been getting poor reviews compared to the rest of WD's lineup. Even the Greens are getting better reviews. If running in RAID, I would only trust the WD Enterprise drives. For 24/7 non-RAID operation, I would use (and do use) only Blacks. I wouldn't trust Greens for 24/7 use, but I do use them for backup drives since they don't run continuously. Each runs an average of less than 15 minutes a day during the months it's in use (I have some drives I keep locally and others in a safe deposit box, and I swap them no less than once a month).
 
Where are reds getting bad reviews? From the forums I read where knowledgeable people use the drives, I've heard nothing but great things.
 
Where are reds getting bad reviews? From the forums I read where knowledgeable people use the drives, I've heard nothing but great things.

When they were first released there was a lot of bad press about them. Drives failing and clicking noises were the main complaints I saw.

It's been a while since the release and I hope things have changed; I would expect so, as it's a good profit margin for them and they are selling them with their NAS boxes now. WD does make dependable drives; I know that from all the Greens and Blacks I've had, used, and am still using to this date. Actually, I think I might have had my first WD fail on me... a 640GB drive (1 of 3 I bought at the same time).

Not trying to say they suck, just that I remember hearing bad things about them.
 
I've had a 2TB Green drive (along with a couple Seagates and a Samsung) in my NAS boxes. One is local and on 24x7, and the other is remote and is on a couple hours/day when receiving backup updates. The Green (in the remote) has 10,705 hours (1.22 years) of power-on time, 798 power cycles, and 706,334 load cycles. (Anecdotal case of one.) Its sister drive was a Seagate with an ever-increasing remapped sector count, which I replaced with a shiny new 3TB Red. :D I paid the extra for the Red mostly for the extra warranty and potential benefits in a RAID. None of the other drives (these and 5 or 6 200GB Barracudas) have ever given me any difficulty in a RAID, but I'm sure if I had enough, I would see drives dropping. Then again, maybe Linux RAID is tweaked to work well with normal (non-TLER) drive behavior.
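(For anyone who wants to pull the same numbers off their own drives: the power-on hours, power cycles, and load cycles above come straight out of the SMART attribute table. A sketch only; /dev/sdX is a placeholder, and attribute names can vary slightly by vendor.)

```shell
# Dump the SMART attributes and pick out the counters relevant to the
# head-parking discussion: power-on hours, power cycles, load cycles.
smartctl -A /dev/sdX | grep -E 'Power_On_Hours|Power_Cycle_Count|Load_Cycle_Count'
```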
 
When they were first released there was a lot of bad press about them. Drives failing and clicking noises were the main complaints I saw.

It's been a while since the release and I hope things have changed; I would expect so, as it's a good profit margin for them and they are selling them with their NAS boxes now. WD does make dependable drives; I know that from all the Greens and Blacks I've had, used, and am still using to this date. Actually, I think I might have had my first WD fail on me... a 640GB drive (1 of 3 I bought at the same time).

Not trying to say they suck, just that I remember hearing bad things about them.
If they were bought from Newegg, that is probably the culprit. They were shipping the drives with extremely poor padding and WD picked up on this very quickly.
 
I too remember hearing bad things, but only about how the Reds were nothing more than Greens without the head parking and platter spin-down (plus TLER enabled). I saw plenty of posts alluding to the Reds being as bad as Greens in a 24/7 NAS setup because of their Green heritage, but the favorable postings and reviews far outweighed what others claimed the Reds would be.

Again, from my own experience, I would never use a Green in a NAS ever again. I had Greens spitting up bad clusters left and right within months of putting them in, various click-of-death symptoms, and one whose surface test map clearly showed the pattern of a head strike. Nearly all eventually had to be replaced under warranty.
 
If they were bought from Newegg, that is probably the culprit. They were shipping the drives with extremely poor padding and WD picked up on this very quickly.

I could see this being a problem with the Greens I purchased when I first built my NAS, as all of them were from Newegg. Most of the replacements from WD are still in use, though not in a 24/7 NAS setup; mainly just for backups.

Whether it was the Newegg shipping problem, I can't say for sure. The drives arrived in what I considered to be adequate protection. I also purchased most of the Reds from Newegg and have not had any problems with them.
 
If they were bought from Newegg, that is probably the culprit. They were shipping the drives with extremely poor padding and WD picked up on this very quickly.

Newegg has had shipping issues, but the Greens, Blues, and Blacks still get better reviews than the Reds.
 