
Question about NAS disk drives


trents

Senior Member
Joined
Dec 27, 2008
How necessary is it to use NAS-certified drives in a NAS enclosure? This would be a two-drive enclosure running in RAID 1.
 
Yeah, I saw that one already. Just wondering what folks have actually experienced in real-life use.
 
I run a stack of Constellation ES 2TB drives (10 in RAID 50) and haven't had any trouble out of them. I honestly don't know how their design compares to a desktop or NAS-style drive.
 
Hi there.

You can use standard drives in a NAS if you want, but I would personally prefer drives specifically designed for that role, because they have the necessary firmware for such usage.

Basically, NAS drives are optimized to use less power, cause less vibration, and include a specific built-in feature called TLER, which stops the hard drive from entering a deep recovery cycle. A desktop drive will try, try, and try again to get your data back if a sector isn't reading properly, and those long retries cause timeouts that can get it dropped from a RAID array. A NAS drive won't be dropped, because TLER makes it report the failure quickly; the RAID controller can then recover the data from the redundant copy while the drive reallocates a spare sector to replace the problematic area.
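If you're on Linux and curious whether a particular drive actually supports this, the TLER-style timeout is exposed as SCT Error Recovery Control and you can query it with smartmontools. A minimal Python sketch of the idea (it assumes smartctl is installed and run as root; /dev/sda is just a placeholder for your drive):

Code:
import subprocess

DEVICE = "/dev/sda"  # placeholder device node; point this at your drive

def read_erc(device: str) -> str:
    """Query SCT Error Recovery Control, the interface behind TLER.
    NAS/enterprise drives typically report a timeout such as 7.0 s;
    desktop drives often report the feature unsupported or disabled."""
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

def set_erc(device: str, deciseconds: int = 70) -> None:
    """Tell the drive to give up on a bad sector after the given time
    (units of 100 ms, so 70 = 7 seconds) and hand the error back to
    the RAID layer. On many drives this setting does not persist
    across power cycles."""
    subprocess.run(
        ["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
        check=False,
    )

if __name__ == "__main__":
    print(read_erc(DEVICE))

A desktop drive will often answer that the SCT Error Recovery Control command is not supported, which is exactly the long-retry behavior described above.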

Hope this helps and feel free to ask any questions you may have. :)
 
If you live next to a Fry's, you can get 4TB CoolSpin drives (MegaScale enterprise drives now) for 90 bucks with their coupon every once in a while.
 

Thank you, that TLER explanation was informative.

- - - Updated - - -


Don't live close to a Fry's.
 
At work I'm using drives designed for 24/7 operation and I've never had issues with them (many clients with 2-4 bay NAS units set up as daily backup for servers). Since Seagate isn't reliable enough and Hitachi isn't easily available through my local distribution, I stick to WD RE, Red, or Purple. I think I've had one RMA in the past ~4 years, so I really can't complain.
 
My main backup server used to run WD Reds in a 3 data + 1 parity arrangement. It is not currently a 24/7 server, although it could be; since it is only used for backups, I only turn it on when needed. All was fine until one of the drives dropped out of the array. SMART reported seek errors, but when I ran WD's tool on it, it reported the drive was fine. By that point the seek errors had disappeared from SMART, so now I have a disk that appears fine and that I can't trust. A WD rep said that if it passes the extended test, it is good to go. I've relegated it to a game install drive in another PC and it hasn't thrown a wobbly since.

Still, in order to rebuild my original array quickly and restore redundancy, I re-used a desktop-grade Toshiba from a few years ago. It worked just fine. I don't know about its 24/7 running ability in the longer term, but for now it is fine. From this I decided that single redundancy isn't comfortable enough, so I've gone to a 4 data + 2 parity arrangement, and I'm debating adding a hot spare if I start running 24/7.
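For anyone weighing the same trade-off, the capacity math is simple. A quick Python illustration (equal-size drives assumed; the 4TB per-drive figure is purely illustrative):

Code:
# Compare the two layouts mentioned above, assuming equal-size drives.
DRIVE_TB = 4  # hypothetical per-drive capacity in TB

for data, parity in [(3, 1), (4, 2)]:
    usable_tb = data * DRIVE_TB  # parity drives hold no net data
    print(f"{data} data + {parity} parity: {data + parity} drives, "
          f"{usable_tb} TB usable, survives {parity} drive failure(s)")

That works out to 12 TB usable surviving one failure in the old layout versus 16 TB usable surviving any two failures in the new one.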

Right now, I'm not sure the price premium for NAS drives is worth it. I'm going full binary here: a drive is either 100% good or it isn't, 99% isn't acceptable, and I'll replace it. So TLER or no TLER doesn't matter to me; the fact that the drive ever showed an error state is enough for me to remove it from this job.
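For what it's worth, that binary policy is easy to automate. A minimal sketch (Linux with smartmontools assumed; the device node and the attribute list are illustrative, not exhaustive): it flags a drive the moment any of the classic remap/pending counters goes nonzero, and deliberately ignores noisy attributes like Seek_Error_Rate.

Code:
import subprocess

# SMART attributes whose raw value should stay at zero on a healthy drive.
BAD_ATTRS = {
    "Reallocated_Sector_Ct",   # ID 5
    "Reported_Uncorrect",      # ID 187
    "Current_Pending_Sector",  # ID 197
    "Offline_Uncorrectable",   # ID 198
}

def drive_is_clean(device: str = "/dev/sda") -> bool:
    """Return False if any watched attribute has a nonzero raw value,
    i.e. the drive has ever had to remap or give up on a sector."""
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows: ID# NAME FLAG VALUE WORST THRESH ... RAW_VALUE
        if len(fields) >= 10 and fields[1] in BAD_ATTRS:
            raw = fields[9]
            if raw.isdigit() and int(raw) > 0:
                return False
    return True

if __name__ == "__main__":
    print("keep" if drive_is_clean() else "replace")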
 

Out of curiosity, what isn't reliable enough about the better Seagate drives? They actually carry about the same RMA rate as WD...
 

I'd like to know that too. Nothing I can find statistically suggests there is much difference. About 8 or 9 years ago Seagate had a terrible firmware problem that caused high premature failure rates, but I think we are past that now.
 
1. Are there differences in bearing size or quality, other than NAS/enterprise drives having an extra one secured to the drive cover?

2. Are the chips that drive the motor and move the heads rated to withstand more power?

3. Are the motors different?

I have some HGST Ultrastar 7K6000 drives because Fry's offered them for the same price as WD Blacks.
 