
Seagate Hard Drive Settlement Now Online - Damn!!

MarkS: I love the fact that you haven't bothered to read through the thread. Good job!

What I find odd from what I've read is the insistence from some members that OS manufacturers are somehow at fault here for "adopting" a binary system rather than a decimal system. It would seem that many of the members of this computer forum either do not know or have forgotten that operating systems predate hard drives and the decimal system (as it relates to this discussion) that hard drive manufacturers have adopted. Computers are binary systems, and all applications and files that are stored on the drives are binary (not because of the OS, but because of the processor). Drive manufacturers are saving a bundle by not having enough sectors on the platters to hold a true MB or GB of data. Bundle that with the fact that the average computer user (and more advanced users too, as it would seem...) does not understand how a computer operates at the most basic level, or stores and processes data, and they can get away with nothing more than a little disclaimer.
 
What I find odd from what I've read is the insistence from some members that OS manufacturers are somehow at fault here for "adopting" a binary system rather than a decimal system.

The OS manufacturers are at fault for continuing to refer to the available space incorrectly, not for adopting a different system. Linux is one of the few (only?) which has implemented the IEC 60027-2 standard and shows the suffixes correctly, which again, has been around since late 2000.

It would seem that many of the members of this computer forum either do not know or have forgotten that operating systems predate hard drives and the decimal system (as it relates to this discussion) that hard drive manufacturers have adopted.

You're also forgetting that when computers were first introduced, there were no standards. Standards arose when a common piece of hardware was used frequently by many in the scientific community and was constantly being referenced (don't forget that [rich] consumers didn't get their hands on a personal computer until the '70s, yet computers had been around since the '50s: plenty of time to come up with the original standards).

Take 'byte' for instance. How many bits are in a single byte? If you're thinking about the modern-day PCs we're using right now, yes, 8 bits = 1 byte. So would 16, 32, and now 64. But even with that, we still refer to 8 bits as 1 byte. And the reason that still holds true today is the dominant microprocessor that worked 8 bits at a time way back then, before consumer PCs, before modern-day PCs, before the PC that is sitting in your home right now. But there were systems back then that processed 10 bits at a time (as well as other numbers of bits), and such a 10-bit computer was referred to as a decimal computer even though it still processed binary bits, and those 10 bits were still referred to as 1 byte. Now what if today we were still working with a 10-bit (decimal) based computer, yet the OS still referred to large byte sizes by binary methods? Who would be "wrong" then?


Computers are binary systems and all applications and files that are stored on the drives are binary (not because of the OS, but because of the processor).

Yes, computers work with 1's and 0's, I'm not arguing that, and that is what a bit (binary digit) is. A byte, however, is neither binary nor decimal; it's a grouping of bits, which can contain any number of bits. Again, on modern systems that would be 8 bits. On other systems, who knows; it depends on the system itself. Either way, a byte is a byte. It's one group of bits. It's not a 1. It's not a 0. It's not a multiple of 10. It's not a multiple of 2. It's not a multiple of 100, 1000, 1024, or anything else. It's a byte. Nothing more.

It's how the manufacturers and programmers are referring to a large sum of bytes with a suffix that is the problem. They're using the wrong suffixes for the amounts they are representing.

1,000,000 bytes = 1 MB
1,000,000 bytes = 0.95 MiB
1,000,000 bytes ≠ 0.95 MB
1,000,000 bytes ≠ 1 MiB

The above four lines are true and correct under the current standards. MB refers to a measurement in powers of 1000. MiB refers to a measurement in powers of 1024.

It's just a measurement. A short way of representing a large value.

Forget that computers process binary bits. Forget that RAM doesn't come in exact multiples of 1000. None of that has anything to do with how we refer to large volumes of bytes in a condensed and simple manner.
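The MB/MiB distinction above is easy to sketch in code. A quick Python illustration (my own, not from the post; the helper names `to_mb`/`to_mib` are made up):

```python
def to_mb(n_bytes):
    # Decimal (SI) megabytes: 1 MB = 1000^2 = 1,000,000 bytes
    return n_bytes / 1000**2

def to_mib(n_bytes):
    # Binary (IEC) mebibytes: 1 MiB = 1024^2 = 1,048,576 bytes
    return n_bytes / 1024**2

print(f"{to_mb(1_000_000):.2f} MB")    # prints 1.00 MB
print(f"{to_mib(1_000_000):.2f} MiB")  # prints 0.95 MiB
```

Same byte count, two different suffixes; neither number is "more binary" than the other, they're just different units.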

Drive manufacturers are saving a bundle by not having enough sectors on the platters to hold a true MB or GB of data. Bundle that with the fact that the average computer user (and more advanced users too, as it would seem...) does not understand how a computer operates at the most basic level, or stores and processes data, and they can get away with nothing more than a little disclaimer.

And all that really has nothing to do with how the values are represented. :bang head

Here's a question for all of you: how many quarts are in 10 gallons?
 


Ok mpegger...since you liked to quote wiki a few posts ago...

"The word "byte" has numerous closely related meanings:

A contiguous sequence of a fixed number of bits (binary digits). The use of a byte to mean 8 bits has become nearly ubiquitous.
A contiguous sequence of bits within a binary computer that comprises the smallest addressable sub-field of the computer's natural word-size."

See binary? I do. See binary system? So if the computer is binary, then all measurements are in binary. It's like saying 10 gallons is 40 quarts, but the new standard is the liter, and there are now magically 40 liters in 10 gallons. Get it?

more from your friend wiki
"The term "byte" comes from "bite," as in the smallest amount of data a computer could "bite" at once. The spelling change not only reduced the chance of a "bite" being mistaken for a "bit," but also was consistent with the penchant of early computer scientists to make up words and change spellings. However, back in the 1960s, the luminaries at IBM Education Department in the UK were teaching that a bit was a Binary digIT and a byte was a BinarY TuplE (from n-tuple, i.e. [quin]tuple, [sex]tuple, [sep]tuple, [oc]tuple ...), turning "byte" into a backronym"

Wow.... Binary digit.... Byte... Binary.....

"A byte was also often referred to as "an 8-bit byte", reinforcing the notion that it was a tuple of n bits, and that other sizes were possible."

"A contiguous sequence of binary bits in a serial data stream, such as in modem or satellite communications, or from a disk-drive head, which is the smallest meaningful unit of data. "

See binary data stream??? See DISK DRIVE HEAD in the same sentence??? It is BINARY Data that they are re-measuring...

Just for some fun:
"Following "bit," "byte," and "nybble," there have been some analogical attempts to construct unambiguous terms for bit blocks of other sizes.[4] All of these are strictly jargon, and not very common.

2 bits: crumb, quad, quarter, tayste, tydbit
4 bits: nibble, nybble
5 bits: nickle
10 bits: deckle
16 bits: playte, chawmp (on a 32-bit machine)
18 bits: chawmp (on a 36-bit machine)
32 bits: dynner, gawble (on a 32-bit machine)
48 bits: gawble (under circumstances that remain obscure) "

The computer is a binary system. Its OS runs using binary. Hard drives USED to be measured in binary, before the manufacturers switched to SI. Mega meant 2^20, or 1024^2, but when they switched (key word there, switched; remember, they used to be binary...) to SI they went with 10^6, or 1000^2.

It doesn't matter that the IEC or whoever "made the new standard"; it was different before that. It makes the hard drives' space look inflated. The hard drive manufacturers are wrong. They lost their case. <-- i.e., they pleaded their case to a judge who listened to both highly educated sides and said, yes, they are wrong. See my gallon/liter/quart comment above. Stop spouting that in 2000 the IEC did this; that doesn't matter. All that means is someone with enough pull made the change. Hell, if we get enough dairy processors together, I bet we could change the gallon measurement from 4 quarts per gallon to 4 liters per gallon; all it takes is money... I bet you would like 4 liters to a gallon, since it is decimal! :bang head
 
I took the below quotes from (http://en.wikipedia.org/wiki/Si_system) under the "Units" section.

Wikipedia said:
The international system of units consists of a set of units together with a set of prefixes.

and this one...

wikipedia said:
A prefix may be added to units to produce a multiple of the original unit. All multiples are integer powers of ten.

So the above quote (from wiki) states that the prefix is base 10, not base 2. Taking that into account, I reach this conclusion:

1 GB = 1,000,000,000 bytes
1 GiB = 1,073,741,824 bytes

Also, if you look at Table 2 of the same referenced webpage, you will see that all the prefixes are shown as 10^x.

So Giga (G) = 10^9.
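Those two numbers are exactly where the roughly 7% gap at the giga scale comes from. A quick check in Python (the variable names are mine):

```python
GB  = 10**9   # SI giga: 1,000,000,000
GiB = 2**30   # IEC gibi: 1,073,741,824

# One decimal gigabyte, expressed in binary gibibytes:
print(GiB)                 # prints 1073741824
print(f"{GB / GiB:.3f}")   # prints 0.931
```

So 1 GB is only about 0.931 GiB, and the gap widens at each larger prefix.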
 
Wow, just wow.

In any case, since you in all your wisdom know about MS showing the capacity in binary, surely Seagate knows this too and can make the necessary change to their hardware, so that when we plug it in it shows the capacity we paid for, without us having to do an algebraic equation to confirm it. Very simple solution indeed.

In my opinion, relatively very few people are as knowledgeable about these matters as you guys are. Your regular family man Joe needs to buy a 120GB hard drive, he goes and buys one that's marked to have 120GB. He plugs it in and finds out that he's missing close to 10GB of space. In my opinion, the hard drive companies should advertise how much space you actually get to use.
So Seagate should change the way that hard drives have been measured since 1956 because of Microsoft? Screw that, I use Linux mostly and they managed to do it right. Don't dumb things down or make them flat out wrong just because "family man Joe" is a tard that can't read on the box of the hard drive that they're measuring it in decimal.

The hard drive makers knew when they made the drive that it would take 1024 bytes to make a kilobyte, etc., and they chose to ignore that and instead purposely mislead people by using another system.

I'd like to see the result of this case make all hard drive manufacturers convert to the number system used by the machines the product runs on, i.e., 1024 bytes to a kilobyte and so on, nothing more.
What product is that? Once again, why should they change the way it's always been? They're reporting it correctly, my 500GB hard drive is exactly what it's supposed to be, 500,000,000,000 bytes.
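For concreteness, here's the arithmetic behind that 500GB example, assuming the OS divides by 2^30 the way Windows of that era did (sketch, variable names are mine):

```python
label_bytes = 500_000_000_000        # "500 GB" on the box, counted in decimal
os_report = label_bytes / 2**30      # a binary-reporting OS divides by 2^30

print(f"{os_report:.2f} GiB")        # prints 465.66 GiB
```

Every one of those 500,000,000,000 bytes is present; the OS is just labeling the 465.66 GiB figure "GB".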

You just helped me prove my point, thanks

The average user has no clue the difference between GiB and GB so let's just keep it simple and instead of throwing more suffixes at us, just show us the two numbers we pay attention to, the capacity of the drive we just bought and the capacity MS is going to tell us we have.
Showing the capacity in GB/GiB wouldn't be a bad thing, but if family-man Joe is too stupid to read the package in the first place to see that they're using decimal, I doubt he's smart enough to know the difference. Or to be bothered with a quick Google search and educating himself.

I agree with the lawsuit. The hard drive manufacturers have twisted the definition of bytes. It has always been binary (look at your RAM: 512 = 512MB, not 500MB; 1024MB = 1GB). See the analogy? That is how it is to be measured: BINARY. The hard drive manufacturers, in their race to produce the largest drive, changed what they call bytes. Imagine going to the grocery store to buy a gallon of milk and getting only 3 quarts because the dairy industry decided to change what they call a gallon. Bytes in binary IS the industry standard.
Face it. There was none of this bi- prefix nonsense until HDD manufacturers came along. The original usage was and always has been to use powers of two.
Wrong wrong wrong. All wrong. Here's the first hard drive, made in 1956. Notice that it was 5MB, which consisted of 5,000,000 bytes.

http://www.techeblog.com/index.php/tech-gadget/worlds-first-hard-drive-1956

Is this lawsuit stupid? No. Should they be required to pay damages? No. Should they and all others be required to change their advertisements? Yes. Most of us may know what the disclaimer means, but how many of your parents do, or your neighbors? We may be arguing about the difference between bits and bytes, but 99% of the population couldn't care less. When they buy a 100GB drive they want to be able to put 100GB of stuff on it; that's not exactly too much to ask. What they and everyone else have been doing is deceitful, and just because everyone does it does not make it alright. Still, they should not have to pay any form of damages, just change the packaging.
They ARE getting 100GB! MS and other OS manufacturers are reporting things incorrectly, don't keep things WRONG because people are dumbasses.

My 160GB drive has 160,031,014,912 bytes. I got 31 million free bonus bytes.
LOL. Bonus Bytes made me laugh...
 
maxfly: Done, but IMNSHO the original was more accurate.

maelstromracing: Please read through this before making an "ignorant" and "uninformed" response like my last post was. :bang head

A byte is 8 bits. No base necessary. The number 8 can be expressed in decimal as 8, hex as 8, binary as 1000, octal as 10, etc. The bytes and bits in RAM and HDs (whether counted in binary or decimal) are the same. The SI prefix is what determines the multiple, whether a positive power (10 is deca, 100 is hecto, 1000 is kilo) or fractional (.1 is deci, .01 is centi, .001 is milli). Elementary math classes generally teach this. There was originally no SI prefix to denote the binary multiples commonly used for RAM modules (1024, 1024^2 [1048576], 1024^3 [1073741824]). Therefore, the standard decimal SI prefixes k (note that in SI, an uppercase K means kelvin, which I have previously used incorrectly), M, and G were used interchangeably for both decimal and binary multiples. On early computers, the only users were quite knowledgeable and knew the difference, so they weren't confused. Now, with computers available to so many, those of lesser knowledge are easily confused by this. Hence the introduction of kibi, mebi, gibi, etc. in 1999. Said prefixes are the binary ones.
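The decimal/binary prefix pairs described above can be laid out side by side; a small sketch (the `pairs` table is mine, values are exact, symbols per IEC 60027-2):

```python
# (SI symbol, decimal value, IEC symbol, binary value)
pairs = [
    ("k", 10**3, "Ki", 2**10),
    ("M", 10**6, "Mi", 2**20),
    ("G", 10**9, "Gi", 2**30),
]
for dec_sym, dec, bin_sym, bin_ in pairs:
    print(f"{dec_sym}: {dec:>13,}   {bin_sym}: {bin_:>13,}   ratio {bin_ / dec:.4f}")
```

Note how the ratio grows from 1.024 at kilo to about 1.0486 at mega and about 1.0737 at giga, which is exactly why the discrepancy gets more noticeable as drives get bigger.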

Would you please understand that these prefixes are not exclusive to SI? As bytes are not SI units, neither are the prefixes used for them. Why should we arbitrarily force bytes into an unrelated system of measurement?

SI did not just pull the prefixes out of the air and trademark them. They were commonly used prefixes. Furthermore, these prefixes were used in computing before SI was standardized.

Microsoft, Linux, Apple, and everyone else are reporting sizes correctly, and in the way they've always been calculated until relatively recently. It's people like media manufacturers and you who've arbitrarily decided that SI has a monopoly on Greek prefixes and that we have to force bytes into SI. Why should everyone change over because some companies realized it'd be cheaper to market their products as being bigger than they were, and some purist SI fanatics were enraged that somebody else was making use of a couple of prefixes?

Invent a computer that runs on base 10, and then you'll have a point.
 

Did you fail to notice that the first hard drive from 1956 was calculated on the decimal system, or that Linux/UNIX has generally always calculated on the decimal system as well?
 
I've got a 200 gigger from Seagate from about 4 years ago. I wouldn't even dream of trying to get money back for it. Almost everyone who knows anything about technology knows you aren't going to get that exact amount of usable storage space when the label states decimal gigabytes instead of the actual binary number of bytes in a GB.


200,000,000,000 / 1024 / 1024 / 1024 = 186.264514923095703125


I knew I'd be getting 186.2GB durr???
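The division above checks out. A one-liner to confirm it, plus the apparent "missing" space (my sketch; the OS is assumed to divide by 2^30):

```python
advertised = 200_000_000_000                  # "200 GB" on the label
reported = advertised / 1024 / 1024 / 1024    # same as advertised / 2**30

print(f"{reported:.1f} GB reported")          # prints 186.3 GB reported
print(f"{200 - reported:.1f} GB 'missing'")   # prints 13.7 GB 'missing'
```

No bytes are actually missing; the two figures are just the same count in different units.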
 
Should RAM also note that it will not have the advertised bandwidth unless you clock it right? 'Cause I'm sure that is something far more obscure for the common man. Crucial, here I come; time to get my DDR3 chips for free!
 