
[dead deal] WD Green 2TB $100 @ mWave

Know the difference between an EARS and an EADS; some people who didn't know did not do too well with them. That is sometimes why they sell so cheap even though they are good drives that use the "new" 4096-byte sector stuff.
There is a jumper you can use, or an alignment tool. Used improperly, or changed midstream, either one leaves your data sort of lost and the drive slow. I don't totally understand all of it, but people getting them SHOULD understand it, at least before they actually install and partition. I use them, they work great, and I use the jumper myself.
They now have a big tag on the top that you can't miss, which explains some of this.
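Roughly how the alignment math works, as I understand it (my own sketch, not WD's tool): the EARS drives have 4096-byte physical sectors behind 512-byte logical ones, so a partition is only happy if its starting sector number divides evenly by 8. The old XP-style start at sector 63 doesn't, which is what the jumper (or the align tool) fixes.

    # My own sketch (not WD's tool): is a partition start aligned to the EARS
    # drive's 4 KiB physical sectors? 512-byte logical sectors, 4096-byte
    # physical sectors -> 8 logical sectors per physical one.
    LOGICAL = 512
    PHYSICAL = 4096
    PER_PHYSICAL = PHYSICAL // LOGICAL   # 8

    def aligned(start_lba, jumper_shift=0):
        # As I understand it, the pins 7-8 jumper shifts addressing by one
        # logical sector, which is why it only rescues the old start at LBA 63.
        return (start_lba + jumper_shift) % PER_PHYSICAL == 0

    print(aligned(63))       # False: XP-style partition start, misaligned and slow
    print(aligned(63, 1))    # True:  same partition with the jumper fitted
    print(aligned(2048))     # True:  Vista/Win7-style 1 MiB alignment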
.
 
Last edited:
I am using mine in RAID 0, but it's a certain type of RAID (Intel Matrix with the Intel controller and BIOS RAID), and I had to use some major tricks to pull it off. Many tricks, things they said were impossible, like 4 TB of RAID 0 on a 32-bit system, I managed to accomplish by splitting it into 2 arrays.
It has been set up now for months, all the drives are about 80% full, and I defrag/reorder and do all sorts of data shifting and filling with no problems.

They were parking like crazy as singles. I tried some utilities to reduce that; I don't know if they did anything. I did not flash them with anything beyond attempts at adjusting the APM and head-parking excesses. I don't know whether the Intel RAID driver (or the newer one) tries to ensure that parking doesn't occur, but they don't seem to park excessively anymore.
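If you want to actually watch the parking, the count shows up in SMART attribute 193 (Load_Cycle_Count). A rough sketch of pulling it out with smartctl from smartmontools (the device path is a placeholder, and the last-column parse is a simple guess at the -A table layout):

    # Rough sketch: pull SMART attribute 193 (Load_Cycle_Count) via smartctl
    # from smartmontools. /dev/sda is a placeholder; needs root.
    import subprocess

    def smart_raw_value(device, attr_id):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == str(attr_id):
                return int(fields[-1])   # RAW_VALUE is the last column
        return None

    print("Load_Cycle_Count:", smart_raw_value("/dev/sda", 193))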

But (and it's a big but), in my system I use it partitioned for the system drive AND the other stuff, and the other Green RAID 0 is the paging disk, so the system is always harassing it with little stuff like logging and paging and everything else that goes on, and the power properties are set for no spindown.
The drives I use in RAID 0 are always the exact same model, same batch, same style. They have always worked like a team of horses, stopping and going as a team. I am still scared to spin down (OS power management) the 4 TB array; someday I will get that bold.

Working them outside of the SYSTEM, minus Intel's driver control (still having the RAID BIOS), has also not been an issue. But outside of the system, for cloning and backup, I don't sit there and wait for anything (to park), so I don't see any problems with that yet either. I also don't dual-boot this system or attempt to run other operating systems except for boot disks.

I took a really big GUESS. Having previously used 2 x 1 TB EACS Greens in RAID and never having any problems with the Intel RAID, even with spindowns, this was just three more tricks to get it to work. And then there was the problem of having somewhere to store 4 TB, like how was I going to do that, and store it long enough to finish full testing.

Even for a "green" drive, the TIME they have set for parking is ridiculous. That doesn't save power or the parts; it just re-parks and re-parks. They need to extend the timer to something sensible.
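Back-of-envelope on why the timer is ridiculous (using the 8-second factory idle timer and the 300,000 load/unload cycle rating that get quoted everywhere; treat both as assumptions, not something I pulled off a datasheet):

    # Back-of-envelope: an aggressive park timer vs. the rated load/unload cycles.
    # Both numbers are assumptions (8 s timer, 300,000 cycle rating), and one
    # park per minute is just a stand-in for light background disk activity.
    rated_cycles = 300_000
    parks_per_hour = 60
    cycles_per_day = parks_per_hour * 24      # 1,440
    print(rated_cycles / cycles_per_day, "days to hit the rating")   # ~208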

The biggest problems I knew would exist with RAID 0 would be:

Is one drive going to park, or park for too long, so they don't wake up simultaneously while the driver sits there brainless? I had been parking and spinning down teams for years without a problem on 2 different quality RAID systems. Would I risk it with these park-a-thon drives?

Are ANY of the drives going to get bad sectors and need a recovery? Often bad sectors are mapped out because of a system freak-up, not because the drive is stuffed. I had tested the 2 x 1 TB Greens and for the whole year there had not been any "new bad sectors". If it has to fix one, then I am screwed; the assumption is that the one drive won't respond until the bad sector has been re-written elsewhere. Isn't that where a cache comes in useful?

Overclocking wildly can cause a system freak-up; screwed-up memory, overheating, bad power, all of those things could also get a drive/system to THINK there was a bad sector. Sure, it's all handled on the drive, but it seems to me that most bad sectors occur on bad systems, from impacts, or on bad drives, not out of the blue.

My drives are at the front with cool air intake, the temperatures in my system are thermally stabilized, and all the fans and everything try to keep the thermal conditions similar (the assumption being fewer thermal recalibrations of the drives and less expansion/contraction of anything). The computer sits where its drives won't get hit while spinning, causing head crashes and all.
The only times I have experienced bad sectors being remapped in any quantity, in many years of running RAID 0 in my systems, were caused by the human, by the software, OS, or drivers screwing up, or else the drive itself was finished/failing anyway. Would I risk it with these drives that insist on finishing a bad-sector recovery before continuing?
(I just don't get it.)

The 32-bit system will not access a full 4 TB array. With Intel Matrix (in the OS) I was magically able to create 2 separate arrays without any speed issues, and the Intel BIOS now sees both arrays as assembled via the metadata written on the drives.
Would I risk buying drives that were too BIG for my 32-bit system?
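For anyone wondering where that wall comes from, my understanding is it's plain 32-bit sector addressing with 512-byte sectors:

    # Where the ~2 TB wall comes from: 32-bit sector numbers and 512-byte sectors.
    sector_size = 512
    max_sectors = 2 ** 32
    print(max_sectors * sector_size / 1024 ** 4, "TiB addressable")   # 2.0
    # Two 2 TB Greens striped is ~4 TB raw, which is why it had to be split
    # into two separate Matrix arrays, each kept under that limit.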

I am editing video with them besides, 2 x 1 TB Greens and 2 x 2 TB EARS (from & to). It's "impossible"! But I was too CHEAP to get the right stuff. If I had the extra money to spend, it would have been a whole lot easier to get something like "enterprise" drives and try to keep them cool.
.
 
Last edited:
I am using mine in RAID 0, but it's a certain type of RAID (Intel Matrix with the Intel controller), and I had to use some major tricks to pull it off.
Any type of RAID can cause it to fail. The drives are allowed to "not respond" for up to 90 seconds. I suggest you read that entire thread I linked in my last post. It is very bad having these in RAID, especially RAID 0. It does not matter what controller you have. You may not have issues for months, but it is still a threat to the stability of your array.
 
Any type of RAID can cause it to fail. The drives are allowed to "not respond" for up to 90 seconds. I suggest you read that entire thread I linked in my last post. It is very bad having these in RAID, especially RAID 0. It does not matter what controller you have. You may not have issues for months, but it is still a threat to the stability of your array.


I had already read about the issue before I took the risk.
I had already been running Greens in RAID for as long as they have been out.
Because of that info, I had to go through and separate the freaking 1 TB drives from RAID 0, get them back to singles, and test for bad sector accumulation; there wasn't any.

Why would a few bad-sector rewrites take 90 seconds? I need to know.

After decommissioning drives that I had used for years, before selling them to others, I ran those tests that show the bad-sector accumulation on a drive. Out of all the drives tested after being pulled, only 1 listed a single bad sector accumulated during my use; as we know, sectors are already mapped out by the manufacturer before we get them.
How will I get bad sectors on my drives, when MY use hasn't shown them to occur?

Some bad sectors logged out by drives/systems during my prior use I recommissioned as NOT being bad sectors. They occurred at stupid times (e.g. failed overclocking) and were logged out because of ME and the stupid things I was doing; they were not actual drive problems but me. I told the utilities to test them again, and I quit doing the stupid things that get them logged out.
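The "test" part is nothing fancy: a SMART report plus a full read pass over the drive. Something like this bare-bones read-only scan (my own sketch, Linux-style device path assumed; a proper tool does this better and faster):

    # Bare-bones read-only surface scan: read the whole block device in 1 MiB
    # chunks and count the ones that error out. /dev/sdb is a placeholder;
    # run it read-only and as root, and expect hours on a 2 TB drive.
    def read_scan(device, chunk=1024 * 1024):
        errors = 0
        offset = 0
        with open(device, "rb", buffering=0) as disk:
            while True:
                try:
                    data = disk.read(chunk)
                except OSError:
                    errors += 1
                    offset += chunk
                    disk.seek(offset)   # skip past the unreadable chunk and carry on
                    continue
                if not data:
                    break
                offset += len(data)
        return errors

    print("unreadable chunks:", read_scan("/dev/sdb"))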

Then, the last thing I need to know:
If ONE drive out of the team STOPS working for, say, a week, what happens? The only thing I have seen happen when disconnecting one on purpose is that half of the data is not there for THAT particular write, i.e. corruption of that data.
Reconnect the drive and reboot, then toss out the data that was written while the drive was missing.
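Here's roughly why only "half" of any given file goes missing when one member drops, in a simplified two-drive picture (the chunk size is an arbitrary example, not the Intel Matrix default):

    # Simplified two-drive RAID 0 picture: consecutive chunks alternate between
    # the members, so a missing member takes every other chunk of a file with it.
    CHUNK = 64 * 1024   # 64 KiB, example value only
    DRIVES = 2

    def member_for(offset):
        return (offset // CHUNK) % DRIVES   # which member holds this chunk

    for off in range(0, 512 * 1024, CHUNK):
        print(f"offset {off // 1024:4d} KiB -> drive {member_for(off)}")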

Truly, what am I missing? I go through and painstakingly check this stuff because of the info provided.
.
 
Last edited:
I had already read about the issue before I took the risk.
I had already been running Greens in RAID for as long as they have been out.

Why would a few bad-sector rewrites take 90 seconds? I need to know.

After decommissioning drives that I had used for years, before selling them to others, I ran those tests that show the bad-sector accumulation on a drive. Out of all the drives tested after being pulled, only 1 listed a single bad sector accumulated by ME; as we know, sectors are already mapped out by the manufacturer before we get them.
How will I get bad sectors on my drives, when MY use hasn't shown them to occur?

Then, the last thing I need to know:
If ONE drive out of the team STOPS working for, say, a week, what happens? The only thing I have seen happen when disconnecting one on purpose is that half of the data is not there for THAT particular write.
Reconnect the drive and reboot, then toss out the data that was written while the drive was missing.
I will start this off with, "just because you didn't have problems, doesn't mean it isn't an actual problem". This is something that has been proven.

If the drive detects an error, it can go offline for up to 90 seconds. That is just the high-side limit in case it takes too long or it can't do it quick enough. It can go offline for read or write errors. It doesn't mean a sector was bad but it is included in the reasons it can go offline.
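To put the mismatch in plain numbers (these are commonly quoted ballpark figures, not values I can point to in a datasheet): a desktop drive without capped error recovery can spend the better part of a minute or more retrying, while a typical RAID controller gives up on a silent member after a handful of seconds.

    # The timeout mismatch in plain numbers. All three figures are assumed
    # ballparks, not datasheet values.
    drive_recovery_limit_s = 90   # desktop drive deep error recovery ceiling
    capped_recovery_s = 7         # enterprise-style capped error recovery
    controller_timeout_s = 8      # typical controller patience before dropping a member

    for name, t in [("desktop drive", drive_recovery_limit_s),
                    ("capped-recovery drive", capped_recovery_s)]:
        verdict = "stays in the array" if t <= controller_timeout_s else "gets kicked out"
        print(f"{name}: up to {t}s recovering -> {verdict}")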

If one drive stops working, it depends on what type of RAID array you are running. If you are running RAID 0, any drive failure means complete data loss. If you are running RAID 1, all the drives can fail except one. If you are running RAID 5, one drive can fail and the array is still fine; any more than that, complete data loss. Etc.
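Summed up as a rule of thumb (standard definitions, ignoring hot spares and nested levels):

    # Rule-of-thumb fault tolerance per level (n = member drives).
    # Standard definitions only; ignores hot spares and nested levels like 10/50.
    def failures_tolerated(level, n):
        return {"RAID 0": 0, "RAID 1": n - 1, "RAID 5": 1, "RAID 6": 2}[level]

    for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6"):
        print(level, "with 4 members survives", failures_tolerated(level, 4), "failure(s)")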

That being said, we really shouldn't be discussing this here. If you want to continue this discussion, please follow the link in my last post.
 
That being said, we really shouldn't be discussing this here. If you want to continue this discussion, please follow the link in my last post.

I don't want to read more pages of that; I have already read about 120 pages across 6 forums about the problems. The discussion is endless, and it is a FACT, like you're saying. People should avoid these things for RAID indeed.
I would have to read another 3 pages and 10 links to comment there. I don't want to do that, and I don't want to argue about facts.

I have defied every fact to date, and over-tested them, ONLY for ME.

"If you are running RAID 0, any drive failure means complete data loss" . . . of the data that was in transit or cached at the time; the rest is fine, unless the drive itself has failed.

.
 
Last edited:
It depends on what type of RAID array you are running.

Totally, and on the RAID controller, and the system, and the drivers, and how the person uses/treats it, and the power. And could we assume that expansion and contraction and thermal recalibration still exist? If a thermal recalibration had to occur, how long would it take to jump to a few places on the drive, do a location check of the data, and then return to the regularly scheduled program?

The redundant RAID systems, they got screwed. One tiny observation that a drive slipped for a second, and the controller tells it to recheck EVERY piece of data on the entire drive. :eek: Bet that is fun, having 2 terabytes compared across your buses while you wait for the system to become responsive again.
Them server guys get all the fun :burn:
The problem is a lot more visible with the redundant systems and the higher-end RAID controller cards that know everything the drive is doing all the time. Unlike the pseudo "hardware" RAID that is almost software RAID, with some hardware for negotiations: it doesn't disconnect and bail immediately, it gives the drive some time first and THEN it bails.
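To put a number on how long that full recheck runs (rough arithmetic, assuming the whole 2 TB surface is read at a sustained 100 MB/s, which is optimistic on a busy array):

    # Rough verify/rebuild duration for a full 2 TB re-read.
    # 100 MB/s sustained is an assumed average; a loaded array will be slower.
    capacity = 2 * 10 ** 12        # bytes
    throughput = 100 * 10 ** 6     # bytes per second
    print(f"~{capacity / throughput / 3600:.1f} hours")   # about 5.6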

Then tack on this "sector shift" for the "new improved" sector sizes and the older operating systems that don't understand it. One user error, and the drive and system will be trying to figure out where the data is for a DAY and still not find it. Missed it by > < much.

Say they use the sector re-aligning, then hop over to another system (64-bit, 32-bit, Linux), or a boot disk, or re-partition, alter the MBR, or move a simple jumper, and it can act like the drive is 100% dead, if not 50% slower, with it only being another "user error" (which I am just as good at creating as anyone else).

And some of the info I read before I even got the drives, about how to "align", was IMO sure to get the whole system hosed. It just didn't make sense, but it was supposedly the authority on the subject.

.
 
Last edited:
Edit your post and choose "Go Advanced" and you can edit the thread title to put dead before it if you want :D

That only changes the bold line at the top of the first post. It doesn't change the title displayed on the forum listing nor the browser title. There's a [DEAD] tag that can be attached to the title, but it seems to be moderator-only.
 
I bought 24 drives in bulk that were advertised as EADS but I received EARS. Be warned, in my reading during my struggles with them, I have learned that pretty much all WD 2TB drives on sale are EARS. Even if they say EADS, they are lying and they are allowed to do this as long as they do not directly guarantee EADS (the site's fine print will have a phrase somewhere that they are allowed to make tiny changes like this to accommodate stock).

Anyway, read through my recent posts in the storage forums for my ordeal with them. My advice is to stay away from them, but if you must, make sure you are not using RAID. The Advanced Format is tolerable, but you should know that because WD made this move early, their implementation is crap. 4K sectors will become standard without an emulation layer eventually. You would be much better off to wait until that happens.
 