There is a lot written on the Net about this, as usual. However, I think most of it is outdated or even pointing in the wrong direction. It starts with FAT32, which is basically a legacy format replaced by exFAT, and ends with recommendations about "full HDD formatting".
Now, as far as I know, Windows 10 will automatically check a drive for errors and keep track of them, without the need for an external tool (or even a built-in one run by hand).
And newer drives keep their own error record, basically "on the fly", as data is written or read (that is what SMART does). Modern drives are complex, with a lot of internal technology that cannot be compared to the hardware of many years ago. They need less manual handling of these matters than ever before. To some extent, running extra checking tools can even become overkill, because it just causes unnecessary interference with a process the drive or the OS is already doing itself.
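For anyone who does want to look at that on-drive record directly, smartmontools exposes it. This is just a minimal sketch, assuming smartctl is installed and on the PATH; the device name is only an example and will differ per system:

```python
import subprocess

# Minimal sketch: query a drive's SMART health via smartmontools.
# Assumes smartctl is installed and on the PATH; "/dev/sda" is an
# example device name and will differ per system.
result = subprocess.run(
    ["smartctl", "-H", "/dev/sda"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# smartctl's exit code is a bitmask; bit 3 (value 8) is set when
# the drive itself reports a failing health status.
if result.returncode & 8:
    print("SMART reports the drive as FAILING")
```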
I think it's more a matter of belief than of any real evidence. Many software companies want us to believe "you really need this tool, or else you're doomed", but in most cases that is just marketing, not much more. Most likely, the individual luck of any given drive plays a bigger role than any of the checks a user could run. They say "use this and that", but ultimately it just doesn't matter much. With bad luck, any of the drives I use could die instantly without any tool ever detecting it in advance. On the other hand, a drive may last forever even though some tool flags it as suspicious, and it keeps working fine.
What I usually do is just check the integrity of the files themselves. For example, my anime movies come with their own hash values, which can be recomputed and tell me pretty accurately whether any bits were lost. As for my games, I mostly use GOG games, and they do much the same thing automatically with a redundancy check; Steam can do the same through its "verify integrity of game files" feature. Ultimately, as long as the data is not corrupted, it's safe to say the drive is working properly. Most other evidence is not something I put much trust in. Over time there can be some bad sectors simply due to age; the approach stays the same: I check the hashes, and if a file on one drive is bad, I still have the same file backed up on another drive to restore from.
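Doing the same for arbitrary files only takes a few lines. A minimal sketch, assuming the reference hashes sit in a sha256sum-style manifest ("hash  filename", one per line); the manifest path is just a made-up example:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hash of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical manifest, mirroring sha256sum output: "<hex-hash>  <filename>"
manifest = Path("D:/Anime/checksums.sha256")
base = manifest.parent

for line in manifest.read_text().splitlines():
    if not line.strip():
        continue
    expected, name = line.split(maxsplit=1)
    status = "OK" if sha256_of(base / name) == expected else "CORRUPT"
    print(f"{status}  {name}")
```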
I'm not even sure whether a "full format" matters for a new drive that is already preformatted by the manufacturer; it should have been checked for errors at the factory. Maybe not even a fresh (unformatted) drive needs a full format.
Still, just to feel safer, I am now reformatting one of my huge 10 TB drives to NTFS and even using a full format, simply because "maybe it helps"... however, I am absolutely not sure it does, and it takes an enormous amount of time on a drive that size, since it has to touch every single sector: at a typical sequential speed of around 200 MB/s, writing all 10 TB works out to roughly 14 hours in a row.
Some sources tell me NTFS is far superior to exFAT because it has journaling, which helps the file system recover its consistency after a power loss or other write failure (NTFS journals metadata, not file contents). On the other hand, there are usually many files on a drive, and in a failure it is usually only the file currently being written that gets corrupted, not the rest. In rare cases a failure can destroy the file system itself, and then the data could be doomed because the drive can no longer be accessed at all. I don't know whether NTFS really handles such a "file system loss" that much better... ultimately, it should simply never happen. I guess there are other protection mechanisms against failures too; for example, some HDDs have internal non-volatile NAND that can preserve cached data even through a power failure. I'm just not sure, because the technology from many manufacturers has become incredibly complex.
As for me, I'm pretty unsure... exFAT has the advantage of being readable by just about any device out there, while NTFS may be more robust. However, if the drive truly fails, no file system will save the data. The best deal is to have another full backup for such a case, instead of putting too much trust in "superior file system solutions".
So, since I can't really decide and have a full backup on two separate drives, I simply formatted one drive with exFAT and the other with NTFS. This way there is hardly a device that can't read the data, and in the event of a drive failure it doesn't matter which drive failed. I'm truly not sure this is "the holy grail", because to me either the drive runs properly or, if not, it needs to be replaced; software can't really help. Small failures should be detected automatically, as I said, and I will always notice a heavy failure... that is not something that needs to be checked for.
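The same hashing idea from above can also cross-check the two copies directly, so a bad file on one drive is caught before the other copy is needed. Again just a sketch; the drive letters are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(1 << 20):
            h.update(chunk)
    return h.hexdigest()

# Example roots: the exFAT drive and the NTFS drive holding the same backup.
# "E:/Backup" and "F:/Backup" are made-up paths; adjust to the real mounts.
primary, mirror = Path("E:/Backup"), Path("F:/Backup")

for file in primary.rglob("*"):
    if not file.is_file():
        continue
    twin = mirror / file.relative_to(primary)
    if not twin.exists():
        print(f"MISSING on mirror: {file.relative_to(primary)}")
    elif sha256_of(file) != sha256_of(twin):
        print(f"MISMATCH: {file.relative_to(primary)}")
```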
Recently I was able to retrieve over 3 TB of old data from an almost 10-year-old QNAP NAS. It took a lot of swearing and some difficult figuring out to get the NAS running again and to gain access to the files, but ultimately I succeeded, and the NAS, including its almost 10-year-old HDDs, was working properly... so all the files were restored. So when people say "an HDD lives about 5 years", I think that if a drive is not abused or misused, even twice that should still work fine. Whether a drive fails is very individual; there is no exact number that can set a rule.
Finally, I just wonder how other people handle this (file system, way of formatting, how many years of disk life to expect, what kind of check is actually useful nowadays), or maybe someone has some good ideas or hints on these matters. If so... any improvement I can make might be useful. Usually I prefer to be minimalistic, not doing anything unnecessary, and to rely on manual management.