The OS manufacturers are at fault for continuing to refer to the available space incorrectly, not for adopting a different system. Linux is one of the few (the only?) that has implemented the IEC 60027-2 standard and shows the suffixes correctly, and again, that standard has been around since late 2000.
You're also forgetting that when computers were first introduced, there were no standards. Standards arose when a common piece of hardware was used frequently by many in the scientific community and was constantly being referenced (don't forget that [rich] consumers didn't get their hands on a personal computer until the '70s, yet computers had been around since the '50s, plenty of time to come up with the original standards).
Take 'byte' for instance. How many bits are in a single byte? If you're thinking about the modern-day PCs we're using right now, then yes, 8 bits would equal 1 byte. So would 16 and 32, and now 64, yet we still refer to 8 bits as 1 byte. The reason that still holds true today is the dominant microprocessor that worked 8 bits at a time way back then, before consumer PCs, before modern-day PCs, before the PC that is sitting in your home right now. But there were systems back then that processed 10 bits at a time (as well as other numbers of bits); such a 10-bit computer was referred to as a decimal computer even though it still processed binary bits, and those 10 bits were still referred to as 1 byte. Now what if today we were still working with a 10-bit (decimal) based computer, yet the OS still referred to large byte sizes by binary methods? Who would be "wrong" then?
Yes, computers work with 1s and 0s, I'm not arguing that, and that is what a bit (binary digit) is. A byte, however, is neither binary nor decimal; it's a measurement of bits, and it can have any number of bits within it. Again, on modern systems that would be 8 bits. On other systems, who knows; it depends on the system itself. Either way, a byte is a byte. It's one group of bits. It's not a 1. It's not a 0. It's not a multiple of 10. It's not a multiple of 2. It's not a multiple of 100, 1000, 1024, or anything else. It's a byte. Nothing more.
It's how the manufacturers and programmers are referring to a large sum of bytes with a suffix that is the problem. They're using the wrong suffixes for the amounts they are representing.
1,000,000 bytes = 1 MB
1,000,000 bytes ≈ 0.95 MiB
1,000,000 bytes ≠ 0.95 MB
1,000,000 bytes ≠ 1 MiB
The above four lines are true and correct under the current standards. MB refers to a measurement in powers of 1000 (1 MB = 1000² bytes). MiB refers to a measurement in powers of 1024 (1 MiB = 1024² bytes).
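If you want to check that math yourself, here's a quick Python sketch. It's nothing official, just the plain arithmetic spelled out: MB divides by a power of 1000, MiB divides by a power of 1024.

    # Sanity check of the four lines above.
    # MB uses powers of 1000 (SI), MiB uses powers of 1024 (IEC 60027-2).
    BYTES = 1_000_000

    mb  = BYTES / 1000**2   # decimal megabytes
    mib = BYTES / 1024**2   # binary mebibytes

    print(f"{BYTES:,} bytes = {mb:.2f} MB")    # 1,000,000 bytes = 1.00 MB
    print(f"{BYTES:,} bytes = {mib:.2f} MiB")  # 1,000,000 bytes = 0.95 MiB

Run it and you get exactly the values above: 1.00 MB and roughly 0.95 MiB for the same million bytes.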
It's just a measurement. A short way of representing a large value.
Forget that computers process binary bits. Forget that RAM doesn't come in exact multiples of 1000. None of that has anything to do with how we refer to large volumes of bytes in a condensed and simple manner, or with how the values are represented.
Here's a question for all of you: How many quarts are in 10 gallons?
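(And to tie it back to the byte example: the answer is just a fixed, agreed-on multiplier, the same way a prefix is. A rough sketch in the same Python style as above:

    # A gallon is defined as exactly 4 quarts, the same way a MiB is
    # defined as exactly 1024**2 bytes. Both are just agreed-on multipliers.
    QUARTS_PER_GALLON = 4
    gallons = 10
    print(f"{gallons} gallons = {gallons * QUARTS_PER_GALLON} quarts")  # 40 quarts

Nobody argues about that one, because everyone uses the unit the way it's defined.)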