
Merits and demerits of chkdsk


hafa (Member; joined Apr 19, 2003; location: a tiny dot in the middle of the Pacific)
This is a continuation of the discussion started in this thread; moved here to keep that thread on topic:

Watch out if you use Chkdsk. It may falsely report invalid files and delete them. I lost photos that way.

chkdsk will move repaired files to a hidden directory, found.xxx. Just because a file is there and apparently correct does not mean its integrity is intact.
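For anyone hunting for those recovered fragments: chkdsk creates folders named FOUND.000, FOUND.001, and so on at the volume root, with the fragments saved as FILEnnnn.CHK files. A minimal sketch of listing them (a hypothetical helper, Python used purely for illustration):

```python
from pathlib import Path

def list_recovered_fragments(volume_root):
    """Return paths of *.CHK fragments inside FOUND.* directories.

    chkdsk stores recovered orphaned clusters as FILEnnnn.CHK files in
    hidden FOUND.000, FOUND.001, ... folders at the volume root.
    """
    root = Path(volume_root)
    fragments = []
    for found_dir in sorted(root.glob("FOUND.*")):
        if found_dir.is_dir():
            fragments.extend(sorted(found_dir.glob("*.CHK")))
    return fragments
```

The .CHK files are raw cluster chains; whether a photo or document survives intact inside one is exactly the open question raised above.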

And NTFS doesn't get corrupted just from a power loss, because it has a journal.

If the disk is writing during the outage and/or write caching is enabled, file system integrity, including journal entries, may be compromised.
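The idea behind the journal can be sketched with a toy write-ahead log (illustrative only; this is nothing like NTFS's real $LogFile format):

```python
class ToyJournal:
    """Toy write-ahead journal (not NTFS's real $LogFile format).

    Metadata updates are recorded in the journal before being applied.
    After a crash, journaled-but-unapplied entries are replayed, so the
    file system stays consistent -- provided the journal record itself
    actually reached the platter before the power went out.
    """
    def __init__(self):
        self.log = []       # journal records that made it to "disk"
        self.metadata = {}  # the in-place metadata structures

    def write(self, key, value):
        self.log.append((key, value))  # 1. journal the intent first
        self.metadata[key] = value     # 2. then apply it in place

    def crash_before_apply(self, key, value):
        # power fails after the journal record is durable but before
        # the in-place update: only the log sees this write
        self.log.append((key, value))

    def replay(self):
        # recovery: re-apply every journaled record
        for key, value in self.log:
            self.metadata[key] = value

j = ToyJournal()
j.write("fileA.size", 1024)
j.crash_before_apply("fileB.size", 2048)
j.replay()  # after replay, both updates are present
```

The caveat in the post above maps directly onto this model: if write caching holds the journal record itself in RAM when power fails, there is nothing durable to replay.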

FAT32 is like Ext2 in Linux.

Since FAT32 is very rarely used on hard drives, this is irrelevant.

Chkdsk's roots come from the time when HDDs didn't even support SMART and didn't support remapping.

SMART is simply a drive-health diagnostic (per-attribute counters plus an overall pass/fail prediction); it is not a utility to correct file system errors. Remapping is irrelevant to file system integrity and is thus irrelevant in any discussion of the merits and demerits of chkdsk.
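In practice SMART exposes per-attribute counters (readable with tools such as `smartctl -A` from smartmontools), not just a single bit. A minimal sketch of pulling the reallocation-related counters out of such a table (the sample text below is illustrative, not captured from a real drive):

```python
# Illustrative only: real output comes from `smartctl -A /dev/sdX`
# (smartmontools); the column layout below mirrors its attribute table.
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""

def parse_attributes(text):
    """Map attribute name -> raw value from a smartctl-style table."""
    attrs = {}
    for line in text.splitlines():
        fields = line.split()
        # attribute rows start with a numeric ID and have 10 columns;
        # the raw value is the last field
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = int(fields[9])
    return attrs
```

A rising Reallocated_Sector_Ct or Current_Pending_Sector is the usual early-warning sign people watch, which is a different job from what chkdsk does.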
 

I was under the impression that today's HDs have spare sectors used to map out bad blocks and thus make the drive appear contiguous when in fact it may not be. Perhaps only SSDs do that, but I thought HDs did as well.
 

For hard drives:
I am under the impression that sectors believed to be bad are closed off and marked as bad, with the list stored somewhere on the hard disk itself.
Then, during deep full scans, these claimed bad sectors can be re-checked to determine whether they are actually bad, or just suffered a momentary failure that doesn't really mean the sector itself is unusable.
Even before you get a drive, there can be sectors, or even runs of several sectors (one before, one after, and the bad one), that are marked off as bad and will always stay marked out as bad.

I call the NEW bad sectors "accrued new bad sectors." Many times there was nothing actually wrong with the drive at that spot, just the user :) and a deep scan of the whole thing does some sort of testing to determine whether it actually was bad.

Some users would never allow these "new bad sectors" to be re-used after testing, and would avoid ever re-using them, because you just never know whether they were a bit flaky, or whether it was just a minor power glitch or minor error, not a surface error.

Logging out a few sectors as bad, and not using them, just makes the disk's available space smaller (on hard drives). Sectors WERE so small at 512 bytes that, say, 18 logged out would be about 9 KB; with 4 KB sectors, the same 18 would be 72 KB. Not really noticeable unless a person is looking hard, or things are going very badly.
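The arithmetic above as a quick sketch, using the post's own 18-sector example:

```python
def space_lost(bad_sectors, sector_bytes):
    """Bytes of capacity lost when bad sectors are mapped out
    without being backed by spares."""
    return bad_sectors * sector_bytes

# the figures from the post above:
print(space_lost(18, 512))   # 9216 bytes  (~9 KB)
print(space_lost(18, 4096))  # 73728 bytes (~72 KB)
```

Either way it's a rounding error next to the capacity of any modern drive, which is the point being made.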

But things change so much, and it is so confusing what really happens, that I am only guessing that is how it works.
Long ago I saw, with some utility, a list of the manufacturer's marked-out sectors alongside the newly accrued errors.
It was rare for there to ever be actual new surface damage when the drive was treated like gold on a silk pillow.
The drive head never actually touches the surface... until I give it a good whack :)
 
I was under the impression that todays HDs have spare sectors used to map out bad blocks and thus make the drive appear to be contiguous when in fact they may not be. Perhaps only SSDs do that, but I thought HDs did as well.

Only SSDs have that area, and it's called the Over-Provisioning (OP) area. Not all manufacturers will tell us how big that area is, but those who gave out information say it's around 5-8% of the drive's total capacity.

The OP area is used for garbage collection.
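A sketch of how an OP percentage in that range can fall out of the numbers. The figures here are hypothetical; one common arrangement is a drive built from 256 GiB of raw flash exposing 256 * 10^9 bytes of user space, and the binary-vs-decimal gap alone lands in the quoted 5-8% band:

```python
def op_percent(raw_bytes, user_bytes):
    """Over-provisioned share of the raw flash, as a percentage.
    Vendors quote OP in slightly different ways; this sketch uses
    (raw - user) / raw."""
    return 100.0 * (raw_bytes - user_bytes) / raw_bytes

# hypothetical drive: 256 GiB of raw NAND sold as a "256 GB" drive
raw = 256 * 2**30    # raw flash in binary gigabytes
user = 256 * 10**9   # advertised capacity in decimal gigabytes
print(round(op_percent(raw, user), 1))  # ~6.9
```

Drives aimed at heavy write workloads typically reserve considerably more than this baseline.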
 
I was under the impression that todays HDs have spare sectors used to map out bad blocks and thus make the drive appear to be contiguous when in fact they may not be. Perhaps only SSDs do that, but I thought HDs did as well.

Drive mapping, as the term is currently and commonly used, is simply assigning a letter to a physical drive or to a shared folder on a network; it does not refer to any qualities, inherent or otherwise, in regards to file system integrity.
 

There are several references like this one from Wiki:

"A modern hard drive comes with many spare sectors. When a sector is found to be bad by the firmware of a disk controller, the disk controller remaps the logical sector to a different physical sector. In the normal operation of a hard drive, the detection and remapping of bad sectors should take place in a manner transparent to the rest of the system."

http://en.wikipedia.org/wiki/Bad_sector

I don't know if this is just confusion between SSDs and HDs or if modern HDs actually do map out bad blocks automatically without you ever seeing them. I have noticed that not one of my more modern drives ever gets a bad sector - which is odd.
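The transparent remapping the Wikipedia excerpt describes can be sketched as a toy model (illustrative only; real firmware keeps factory and grown defect lists in reserved areas of the media, not a Python dict):

```python
class ToyDisk:
    """Toy model of firmware-transparent bad-sector remapping."""

    def __init__(self, sectors, spare_count):
        self.data = {}    # physical sector -> contents
        self.remap = {}   # logical sector -> spare physical sector
        # spare sectors live past the advertised logical range
        self.spares = list(range(sectors, sectors + spare_count))

    def mark_bad(self, logical):
        # firmware detects a failing sector and redirects its logical
        # address to the next spare; the host never sees a gap
        self.remap[logical] = self.spares.pop(0)

    def write(self, logical, contents):
        self.data[self.remap.get(logical, logical)] = contents

    def read(self, logical):
        return self.data.get(self.remap.get(logical, logical))
```

From the OS's point of view the logical address space stays contiguous, which is exactly why a remapped drive "never gets a bad sector" as far as the file system can tell.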
 

That was always my understanding, that the bad blocks are automatically remapped.
 
That's what it says here:

HARDWARE

"Sectors requiring extended retries to recover are rewritten and read back to ensure the storage integrity of the sector. If read back performed is still less than optimal, the sector will be relocated to a new good sector."
http://www.wdc.com/wdproducts/library/other/2579-850105.pdf

So my data is old, and things have changed. I won't tell you about the times we had to manually type in the manufacturer's bad blocks after a low-level format :)

http://seagate.custkb.com/seagate/c...d=196351&NewLang=en&Hilite=sector+realocation
Bad sectors can often be corrected by using a spare sector built into the drive. However, any information written to a bad sector is usually lost.
There are several methods for finding and correcting bad sectors.


Which doesn't explain how they can be un-marked as bad if they have been relocated by the drive, transparently to the OS's file system. Or can they?

OS FILE SYSTEM

http://technet.microsoft.com/en-us/library/bb457122.aspx
NTFS identifies and remaps bad sectors during the course of normal operations

So there is probably still a difference between the sectors the drive tested and set out in hardware, and the ones the file system sees and avoids, which can be reset?

http://www.stevestechresource.com/str/threads/2011-Apr-11/2450/2453.html
"When CHKDSK finds an unreadable sector, NTFS adds the cluster that contains that sector to its list of bad clusters. If the bad cluster is in use, CHKDSK allocates a new cluster to do the job of the bad cluster. If you are using a fault-tolerant disk, NTFS recovers the bad cluster's data and writes the data to the newly allocated cluster. Otherwise, the new cluster is filled with a pattern of 0xFF bytes.

If NTFS encounters unreadable sectors during the course of normal operation, NTFS remaps the sectors in the same way that it does when CHKDSK runs. Therefore, using the /R switch is usually not essential. However, using the /R switch is a convenient way to scan the entire volume if you suspect that a disk might have bad sectors."
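The cluster-replacement behavior the quote describes can be sketched roughly (a toy model, not NTFS code; the 4 KB cluster size is just the common NTFS default):

```python
CLUSTER_SIZE = 4096  # bytes; a common NTFS default cluster size

def replacement_cluster(recovered_data):
    """Sketch of the behavior quoted above: on a fault-tolerant
    volume the newly allocated cluster gets the recovered data;
    otherwise it is filled with a 0xFF pattern and the original
    contents of the bad cluster are simply gone."""
    if recovered_data is not None:
        return recovered_data
    return bytes([0xFF]) * CLUSTER_SIZE
```

That 0xFF fill is also why files "repaired" by chkdsk on a plain volume can look intact in the directory listing yet contain garbage, as noted at the start of this thread.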

So there is no reset in the OS's normal operation, or in chkdsk either; just a remap for survival.
If it is possible for such events to be falsely triggered, and I think it is, the drive could get pretty messy, or slow even? Many people report a drive going to hell in a handbasket that still tests out as "usable" with the manufacturer's tests. This makes them very angry that the disk cannot be warrantied.

So, other than hard relocations, can a manufacturer's utility still clear out, test, and reset anything the OS file system set?


Unbelievable how little of this is explained on the manufacturers' sites.
 
That was always my understanding, that the bad blocks are automatically remapped.

That was my impression too. If it can't be remapped, it's just labeled as a bad sector and blocked out, and you'll lose that little bit of drive space.

Then again, if something like that happens on a mechanical drive, typically from what I've seen the drive is starting to die, and it will keep chewing up more sectors until it hits a total failure point.
 
Only SSD discs have that area and it's called Over Provisioning (OP) area - not all manufacturers will tell us how big that area is, but those who gave out information say it's around 5-8% of the discs total area.
It sounds like HDDs have a bit of what you might call "over-provisioning" as well, but you're right that it's not really used the same way. SSDs have large OP areas that they use to balance write wear across the other sectors as well as to replace failed sectors, while HDDs have a bunch of spare sectors for the much less likely event that a sector fails. HDD sectors aren't supposed to wear out over time like an SSD's - there isn't a limited number of write cycles - so my guess is that the number of spare sectors is considerably smaller.

A sector is usually 512 bytes (drives are starting to transition to 4 KB sectors now), so even with 4 KB sectors, having 1000 extra sectors amounts to about 4 MB of extra space. Nothing like the extra few GB on an SSD.
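Putting the two reserve areas side by side drives the point home. The 1000-sector figure is from the post above; the ~7% OP on a 256 GB SSD is a hypothetical value inside the earlier 5-8% range:

```python
# Comparing the two reserve areas discussed above (figures illustrative):
hdd_spares = 1000 * 4096            # 1000 spare 4 KiB sectors on an HDD
ssd_op = int(0.07 * 256 * 10**9)    # ~7% OP on a 256 GB SSD

print(round(hdd_spares / 2**20, 1))  # ~3.9 MiB of HDD spare sectors
print(round(ssd_op / 2**30, 1))      # ~16.7 GiB of SSD OP area
```

Roughly three orders of magnitude apart, which fits the different jobs: a rare-failure safety net versus a working area for wear leveling and garbage collection.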
 
There are some cases where you can have a lot of extra space on a drive, usually because of marketing. For example, a drive with two 300 GB platters is sold as a 500 GB drive because they've determined that that is the size that sells best.
 
Great info, orion456, Psycogeec. Thanks for the links.

Semantics: a distinction needs to be made between "drive mapping" and "sector remapping" (or better yet, "cluster remapping"). Two very different things.
 