Talk:Binary prefix/Archive 6

Consumer confusion section
Let's talk about the consumer confusion section. I like the recent edit by Tom94022, but it still illustrates a recurring problem, though not a serious one, which is that editors are inserting their personal assumptions and speculations about why the binary prefixes were used or why they are still used.

I've been involved in the computer industry only since 1991, so I don't know what motives, if any, hard drive companies, software companies, or memory companies had for their use of the SI prefixes. From my involvement in the industry in a variety of roles, including programming and quality assurance, I can say with a little authority that memory and flash drive companies have no motivation for classifying their devices with the SI prefix whether the actual capacity is 2^30 or 10^9 bytes. It's that way because "it's always been done that way". If we produce a program that reports the size of a file, the remaining capacity of a drive, or the amount of installed physical system memory, it always uses the SI prefix with the binary calculation. Over the years, I've attended numerous seminars, trade shows, and industry executive meetings, and from what I've gathered, the philosophy is pretty common. No one in the various computer product industries is working seriously to implement the kibi, mebi, and gibi prefixes because doing so would probably add to the consumer confusion and otherwise produce no significant benefit.

That's all anecdotal and speculative and I make no offer of proof. I fear the same is true for the following statements: a) "the discrepancy could easily pass for manufacturing overhead", b) "manufacturers rapidly developed the capability to make much larger memory, and the discrepancy became increasingly difficult to ignore", c) "The confusion appears to relate to the advent of graphical user interfaces where there was not sufficient space to provide sufficient digits fully state capacity.", d) "some computer programmers were unaware that disk drive manufacturers used the SI notation when specifying and/or advertising capacity"


 * a. seems to imply a motivation. I've never heard of any evidence to support a motivation.
 * b. implies that manufacturers created the new prefixes to address a problem that was not theirs, and that they still have never used.
 * c. "appears" implies original research, but I don't think it was screen real estate that drove the use of prefixes. Think of the retail packaging for the 3.5-inch floppy my next-to-last PC used: "3.5-inch 1,474,560 byte floppy disk". The last hard drive I bought would have needed a larger box just to fit the drive size in bytes. It's also just human nature to use a shortening calculation and prefix to express really large numbers, so I think it's inaccurate and unfair to blame the confusion on the space limitations that may exist in some GUIs.
 * d. based on my own experience, this doesn't matter and needs some evidence, since the programmers might just as likely have felt that a megabyte should be 1024^2 and not 1000^2, regardless of what is being measured.

There's a lot of speculation in this section (but exclusively there) that could easily be misused. Instead of editing and waiting for the revert or new speculation, I think we should discuss it here first. --JJLatWiki 22:47, 29 January 2007 (UTC)
 * While it is true that since 1991 "it's always been done that way," it wasn't done that way in the 1960s or probably even the 1970s, so there was a transition in the 80s, and to the extent we can identify it we should. With regard to your comments on the sentences:
 * a, & b. - we agree, that led me to try to better edit this section.
 * c. We are talking about consumer confusion in this section. It is impossible to prove a negative, but can anyone cite any significant such confusion with DOS, IBM MVS, DEC TOPS, UNIX or any other command line (pre-GUI) OS?  Isn't it the GUI-reported value differing from the manufacturer's advertised value that leads to the consumer confusion?  Perhaps we can wordsmith this sentence better, but the confusion does come out of this difference. BTW, I first became aware of the confusion in the late 80's at SyQuest when we started getting consumer complaints from Macintosh users that the 88 MB SyQuest cartridge was only 85 or so MB (as reported by Apple).  The problem then appeared in the Windows world, as I recall with Windows 3.11 (it may have always been in Windows but until 3.11 the market was small).
 * d. This sentence was inherited from the prior edit with minor wordsmithing and can be dropped. Tom94022 02:26, 30 January 2007 (UTC)


 * "to the extent we can identify it we should", therein lies the rub. You've certainly demonstrated a superior knowledge of the history, but without a reliable source to cite, it violates the original research guideline.  The original sources of the confusion need proper citation.  Even if you are one of the programmers of Windows 1.0 and you were involved in the meetings where the decision was made to call a megabyte 1024^2, it must be verifiable.  Because, to take a counter position on phrase "c", one could speculate that the increase in consumer complaints about the discrepancy between rated MB capacity and GUI-reported MB capacity is coincidental.  GUIs are largely responsible for the increase in sales of computers, and around the same time, drive sizes were reaching capacities that made the discrepancy more obvious.  If PCs hadn't become so popular or drive sizes had remained relatively small, the amount of confusion would be less newsworthy.  And let's not ignore the increase in frivolous litigation that drew even more attention to the discrepancy.


 * I think it's very likely that prior to the proliferation of GUIs, there was some program, a utility probably, that ran on DOS or Unix or some other non-GUI OS that presented the used space or free space of a disk or tape using an SI prefix and a binary calculation. Those programmers became the foundation for development at other companies that created early multitasking UIs, first text-based, then eventually graphical.  If you know different, WP considers it original research without verifiable reliable sources, and without those sources we have to be careful about what we say.  --JJLatWiki 17:31, 30 January 2007 (UTC)


 * The various operating systems speak for themselves, and in that sense what I am stating is a compilation, which is not a violation of the original research guideline.  I can speak from direct experience with IBM MVS and IBM DOS, MS-DOS, PC DOS, all flavors of Microsoft Windows, some UNIX and some Macintosh.  I have less experience with CP/M and several Digital OS's.  While your speculative "program, a utility probably, that ran on DOS or Unix or some other non-GUI'd OS" may exist, it was likely obscure enough not to cause any consumer confusion.  In fact, since we are talking about "consumer confusion", aren't we really just talking about the transition from MS/PC DOS to Windows (and perhaps Apple DOS to Macintosh OS), since most if not all of the other OS's are arguably not consumer products?  I agree with you that why Microsoft and Apple chose M = 2^20 for storage is not relevant, but the fact that Microsoft did so in changing from command line to GUI, and thereby displayed a number different from that provided by the storage manufacturer, is the source of the confusion for 95% of the consumers. Tom94022 20:15, 30 January 2007 (UTC)


 * What do you mean by "The various operating systems speak for themselves"? That I should know where to find the evidence to support phrase c?  Because I don't.  I think phrase c, and what you said above ("since we are talking about "consumer confusion" aren't we really just talking about the transition from MS/PC DOS to Windows"), implies a causal connection that calls for a citation.  I think if Windows or any other GUI had never gained traction, drive sizes would probably have driven a change to abbreviated byte counts.  It's still primarily Microsoft and Apple's fault for the extent of the confusion, but I think it's not the transition to GUIs that is the cause.  --JJLatWiki 03:36, 31 January 2007 (UTC)


 * Sorry if my language caused confusion; what I meant is that the interface presented to the user by each operating system is a fact that speaks for itself, and therefore it follows that where you find evidence to support c. is by inspecting the various outputs of the various OS's. Personally, to refresh my recollection, I spent a few hours several months ago actually booting up various MS/PC DOS's from 2.11 through 5, Win 3.11 and Win 95.  Can we agree that the source of confusion is the OS's presentation to consumers of storage capacity with prefixes that, absent explanation, appear to be SI prefixes?   Then doesn't the earliest of such presentations mark the beginning of confusion? The various MS/PC DOS utilities relating to storage, DIR, SCANDISK, CHKDSK, FDISK, basically did not use prefixes.  Apple DOS did not handle HDDs.  Macintosh used K from the beginning and Windows used K and M at least as early as Windows 3.11.   So can you support it, if I change the sentence to read something like:


 * Consumer confusion appears to begin subsequent to the advent of the Microsoft and Apple graphical user interfaces, where rather than state capacity with sufficient digits (as was typical of prior command line interfaces) the programmers chose a deviant usage, without prominent explanation, of SI system prefixes.
 * I personally haven't completely verified the Macintosh statement, so that is why I put a citation required there, but see the screenshots at:

 * http://applemuseum.bott.org/sections/images/screenshots/system1/desktop.gif (Finder 1)
 * http://applemuseum.bott.org/sections/os.html


 * http://www.sciencequest.org/support/computers/mac/topics/drive_setup.html (Drive Setup v1.7.3 circa 1999)


 * The original Mac drive setup utility was "HD SC Setup", so what I really would like to see is screenshots from early Mac "HD SC Setup" presentations. Maybe a Mac historian can help us :-)


 * Tom94022 18:21, 31 January 2007 (UTC)


 * "Can we agree that the source of confusion is the OS's presentation to consumers of storage capacity with prefixes that absent explanation appear to be SI prefixes?" For the most part.  How about a little rephrase to something like, "The biggest source of confusion is the presentation of storage capacity and file size by OS's using SI prefixes but binary calculations, without any explanation."


 * "Then doesn't the earliest of such presentations mark the beginning of confusion?" Except that the earliest such presentation is probably not an OS.  Even though OS's now are the source of the extent of the consumer confusion and the only reason the confusion is notable, I think the earliest such presentation is probably unprovable.  So we are left to speculate about which company or programmer or manager decided to call a megabyte of storage 1024^2 bytes and not 1000^2 bytes.  I think this confusion relates more to the binary prefix generally and not specifically to the megabyte.  Someone, somewhere, was probably confused about why their 360KB 5.25" floppy could not store a 355KB file, or something similar.  But very few people ever cared until people noticed their 120MB hard drive seemed to be missing 6MB.  And people really started talking when their 40GB seemed to be missing 3000MB.  Unless someone can show that an OS was the first source of confusion, the only facts available relate to the present time.  I'll try to fire up my old Apple IIe with ProDOS to see what it shows.  Did ProDOS support hard drives?  I know such drives were available for my IIe.  Damn!  Was a hard drive available in my beloved TI 99/4a?  --JJLatWiki 18:21, 1 February 2007 (UTC)
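The "missing" megabytes and gigabytes mentioned in this thread are easy to reproduce. This is just an illustrative calculation using the drive sizes quoted above, not data from any actual drive:

```python
# A drive advertised in decimal units, re-expressed with the binary
# calculation an OS typically uses -- the gap grows with drive size.
def as_binary_mb(decimal_bytes):
    """Bytes -> 'MB' using the binary calculation (2**20)."""
    return decimal_bytes / 2**20

def as_binary_gb(decimal_bytes):
    """Bytes -> 'GB' using the binary calculation (2**30)."""
    return decimal_bytes / 2**30

print(as_binary_mb(120 * 10**6))  # a "120 MB" drive reports ~114.4 "MB"
print(as_binary_gb(40 * 10**9))   # a "40 GB" drive reports ~37.25 "GB"
```

The roughly 6 MB and 3000 MB shortfalls complained about above fall straight out of the two prefix conventions; no capacity is actually missing.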

As far as I can remember, K always meant 1024 in the home computer world, e.g. "Personal Computer World" from 1978. Predates Apple and Microsoft? Here's a Sinclair ZX80 ad for example: http://www.oldcomputers.net/pics/ZX80-ad.jpg Quirkie 00:20, 30 April 2007 (UTC)
 * The Sinclair citation says nothing about the usage of K with regard to secondary storage - it would be interesting to see how Sinclair described its cassette storage device and how the OS reported it, but the citation gives no such information. There is little dispute that K commonly meant 1024 with regard to the main memory of home and personal computers, probably from the very beginning.  The question we are trying to resolve is how such usage leaked into the reporting of secondary storage device capacity.  Macintosh is so far the earliest such substantial usage. Tom94022 03:49, 30 April 2007 (UTC)


 * As a reference: the DOS Manual from Apple, (c) 1980, states in Appendix C (I'm translating from the French edition that I own):
 * "In a DISK II system, the information stored on a floppy disk is recorded on 35 concentric zones called 'tracks'. The tracks are numbered from track $00 on the outside to track $22 on the inside. ... Each track is divided into 16 segments, called sectors. The sectors are numbered from $0 to $F. Each sector can hold up to 256 bytes, or $100 bytes, of information."
 * That is 143360 bytes. Browsing vintage computer magazines, I have an April 1983 edition of 'Micro-Systeme'. On page 152, an ad for an Apple II clearly states that it has a 'Drive 140K', just to confirm something I've known for 25 years: an Apple II floppy disk was 140K, not 143 'marketing' K.
 * The Glossary of the Apple II Reference Manual, (c) 1979 Apple, states: "Kilobyte: 1024 bytes" and "K: Initial of 'Kilo', signifying in general 1000, but in computer science ('informatique' in the original) K signifies 1024 = 2^10 ($400 in hexadecimal)".
 * The Glossary of the Apple IIgs, (c) 1986 Apple, states: "K or Kilobyte: Unit of storage capacity of a computer and of its memory. One K is equivalent to 1024 bytes, or 1 kilobyte."
 * Note that seven years later the manual doesn't bother with the decimal meaning outside the field.
 * The Apple 3.5 Drive Owner's Guide, (c) 1986 Apple, in Appendix B: Apple 3.5 Drive Specifications, makes reference to a 'formatted data capacity' of 819.2 kilobytes per drive and 409.6 kilobytes per surface, yet on pages 7, 14, 18, 20, 21, 23 and 25 it refers to 800K or 400K disks.
 * Regarding storage on tape, no one at the time made any clear claim about how much one could store on a tape. The main reason was that it was 'audio' tape, whose length was approximate anyway. The Apple II Reference Manual, p. 46, claims "it takes about 35 seconds (plus 10 seconds for the initial marker) to save 4096 bytes". It doesn't make use of KB, but as expected uses a power of 2 as a representative example.
 * Another data point is the fact that on IBM mainframe systems, the size of an object on storage was expressed in TRK (tracks) or CYL (cylinders); only recently (relative to the history of JCL) has JCL been extended to allow the specification of K and M in the SPACE specification of a DD card, for example. And here again K and M are used in the conventional sense.
 * In conclusion: the usage of decimal notation for K, M and G is limited to marketing material for hard drive storage. When these storage devices are in use and connected to a system, their size, when reported in KB or MB, is always reported with the standard meaning in the field, namely 2^10 and 2^20 bytes respectively. The usage of KB and MB didn't suddenly "leak into reporting of secondary storage device capacity"; it has consistently and exclusively been the case in software, from mainframe to micro, that K, KB, M, MB, etc. had the conventional meaning in the field. Shmget 02:28, 21 May 2007 (UTC)
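As a quick check of the DISK II figures quoted above (illustrative only, using the geometry from the manual):

```python
# Apple DISK II geometry as quoted from the 1980 DOS Manual:
tracks = 35             # tracks $00..$22
sectors_per_track = 16  # sectors $0..$F
bytes_per_sector = 256  # $100 bytes per sector

total_bytes = tracks * sectors_per_track * bytes_per_sector
print(total_bytes)          # 143360 bytes
print(total_bytes / 1024)   # 140.0  -> the advertised '140K' (binary K)
print(total_bytes / 1000)   # 143.36 -> the '143 marketing K' it was not
```

The advertised 140K figure is exact only with K = 1024, which supports the point that the binary sense was the working convention here.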


 * Yes, floppies have often been measured using binary prefixes, which is where the Microsoft disk drive size convention originates.


 * The earliest floppies were actually measured with decimal units, though, like the "1.5 megabit" Memorex 650 from 1972, for example. (See Memorex 650 Flexible Disc File - OEM Manual)


 * Decimal prefixes weren't adopted for marketing purposes; they were in use by these engineers before their devices were even on the drawing board.


 * Binary units were later adopted because it meshed well with the addressing scheme used to access blocks of memory. Many of the earliest (1960s) instances of abbreviations like "40K" were decimal.  "Standard meaning in the field" never really existed.


 * The meanings of these abbreviations have changed with time and context, and have been ambiguous for most of computing history. It's not a conspiracy.  :-)  — Omegatron 13:52, 1 June 2007 (UTC)


 * The earliest floppies were actually measured with decimal units, though, like the "1.5 megabit" Memorex 650 from 1972, for example. (See Memorex 650 Flexible Disc File - OEM Manual)
 * And please note that 1/ it is bits, not bytes, and 2/ there is no use of KB or MB in this document. The question is not about whether someone used 'megabits', but when did they use MB as a unit in marketing material without explanation (i.e. implying that the meaning is well known and understood by the targeted audience - the title of this section is the so-called 'consumer confusion', remember?) -- Shmget 17:36, 1 June 2007 (UTC)
 * Decimal prefixes weren't adopted for marketing purposes; they were in use by these engineers before their devices were even on the drawing board.
 * One doesn't preclude the other. There are plenty of notations, acronyms, etc. that we use during development that do not make it into the user manual, and even less into marketing material, if for no other reason than that marketing material is not written by engineers. -- Shmget
 * Binary units were later adopted because it meshed well with the addressing scheme used to access blocks of memory. Many of the earliest (1960s) instances of abbreviations like "40K" were decimal.  "Standard meaning in the field" never really existed.
 * Except that to this day, hard drive manufacturers use MB without any footnote or explanation to designate their memory cache, which is an admission that the meaning is clear, i.e. has 'a standard meaning in the field'. They also use MB/GB with a footnote when they mean it as 'decimal', which illustrates that this particular meaning is NOT what a non-expert reader would expect. -- Shmget 17:36, 1 June 2007 (UTC)
 * The meanings of these abbreviations have changed with time and context, and have been ambiguous for most of computing history. It's not a conspiracy.  :-)  — Omegatron 13:52, 1 June 2007 (UTC)
 * The systematic use of the 'MB' symbol in hard-drive marketing material came in the 80s. One reason it wasn't used before was because these symbols would not have been recognized by the public (just like processor frequencies were advertised as 1200 MHz (for example) for a while, until GHz became common enough that it could be used in marketing material). Marketing people don't do stuff out of tradition or purism; they make their choice of words based on marketing (i.e. how much can you lie to make yourself look better without being caught). If one K had been 978 bytes instead of 1024, I bet you that today marketing material for hard drives would use that meaning, no matter what the in-house engineers think or use.
 * To use a familiar notion from about the same period: the question is not who was first, Betamax or VHS, nor which one engineers at Sony used or liked better; the question is, when you ordered a TV program on 'video tape' without other specification, could it reasonably be expected to be a Betamax? -- Shmget 17:36, 1 June 2007 (UTC)

The question is not about if someone use 'megabits', but when did they use MB as a unit in marketing material without explanation
 * No. It's about when manufacturers first started using decimal and binary units.  You've claimed that the binary sense of units was ubiquitous at some point in time and then the hard drive manufacturers, by themselves, transitioned to using decimal units in a deliberate attempt to mislead their customers.  I've shown that decimal units came first, and you've provided absolutely no evidence of intentionally misleading figures.

if for no other reason that marketing material is not written by engineers
 * There are both technical and marketing documents on this page. None support your theory.

The systematic use of the 'MB' symbol in hard-drive marketing material came in the 80s.
 * Hard drives have always been measured in decimal units. Do you have evidence to refute this?
 * Are you asking me to prove a negative existence? But really, you are harping on the distinction between one decimal notation and another. While it is true that back in the 60s IBM was advertising the size of its storage devices in 'millions of bytes/words...', it is not true that they 'always have used' KB/MB/GB. For one thing, they never used KB in the so-called 'decimal' sense (so-called because K is not an SI prefix), and the use of MB followed the widespread use of K and KB, which meant and still mean 1024 bytes.... - Shmget 05:38, 2 June 2007 (UTC)
 * I'd be delighted to see such evidence. It doesn't matter if they abbreviated with the word "millions", "M", "Mb", or "GB".
 * Of course it matters; that is the ONLY thing that matters here: the significance of 'MB', not Mb, M, million, MByte or M-By or whatever other notation one can think of. The whole point is what MB means. -- Shmget 05:38, 2 June 2007 (UTC)
 * The meaning has been the same in every document I've ever seen. If you have a document that describes hard drive capacity in a binary sense, please present it.
 * That is a false dichotomy. The argument is not that hard-drive manufacturers 'switched' from binary units to decimal units, but that they surfed the wave of popularity of these binary units and started to use the KB/MB/GB symbols, when before they were spelling it out or using other abbreviations. Diverting the discussion to mentions of 'a million bytes' in the so-called 'decimal' meaning (what else could that be?) is a red herring. Of course 'a million words' is 10^6, but that has nothing to do with the argument.
 * It is completely irrational to claim that hard drive manufacturers used decimal abbreviations for decades and then switched to a different type of decimal abbreviations in a deliberate attempt to mislead customers.
 * So why did they do so? They certainly did not use these particular units KB, MB, GB for 'decades', yet in the early 80s they started to advertise their hardware as such..., AFTER the meaning of K=1024 had been well established and accepted - to the point where it was not necessary to explain it anymore. I'm sure you still have some vintage magazines from the 80-83 time period, like I do, and you can read the ads in them for hard drives.  -- Shmget 05:38, 2 June 2007 (UTC)
 * They also use MB/GB with a foot note when they means it as 'decimal', which illustrate that this particular meaning is NOT what a non-expert reader would expect.
 * They didn't until they were sued because of the discrepancy with operating system measurements.
 * Nope: "Western Digital specifically states on its website (and has done so since long before this lawsuit was filed) that the definition of a gigabyte is one billion bytes." (emphasis is mine). Besides, the 'operating systems' happen to report things in the same units that floppy disks were specified in and that CD-ROMs are specified in... So much for the claim of a 'standard' within the storage industry. -- Shmget 05:38, 2 June 2007 (UTC)
 * It wasn't important to be accurate until then; the vast majority of people just don't give a damn. — Omegatron 01:26, 2 June 2007 (UTC)
 * Why didn't they add a footnote to explain that the MB of the cache is a 'binary' prefix, rather than a footnote to explain the decimal meaning? Why do hard drive manufacturers, 9 years after the introduction of the IEC notation, still not use it... could it be that they have no interest in lifting the cloud of confusion they have created in a segment of the consumer population? And again I ask: if K were equal to 978 instead of 1024, what kind of unit would hard drives be marketed with? What is 'completely irrational' is to pretend that marketing people care about the purity of the SI. Marketing guy: 'Let's see, will we advertise this drive as 100MB or 95.3MB... hmm, I think we'll go with 100MB, especially since surely the guy across the street will do just that, and we don't want to look bad, do we...'. This is not a 'conspiracy'; it's just plain common sense, applied to basic psychology. All the rest is post facto rationalization. At first it didn't matter because 1/ the delta was relatively small, and 2/ the kind of people who were spending that kind of money for a hard drive were hobbyists or professionals who knew the trick. The thing became a 'problem' with higher units making the delta more and more significant, the popularization of computers, and the generalization of hard-drive use, leading to a bunch of uninformed people buying hard drives. I wouldn't have cared that much (and I haven't really, until I met people like Sarenne) that hard drive manufacturers use the units the way they do, but I do get irritated when they claim the SI high ground to rationalize their marketing decision, and insist that the rest of the world is 'wrong'/'incorrect' for not embracing their bastardized SI units (a mixed breed of SI prefixes and a conflicting non-SI redefinition of the B unit). -- Shmget 05:38, 2 June 2007 (UTC)

Word "abused" seems to be POV
"However, with amounts of computer memory these prefixes were often abused to denote the nearby powers of two..."

The word "abused" in that statement appears biased. It would seem that saying "However, with amounts of computer memory, an alternate usage of these prefixes arose denoting the nearby powers of two..." (or something along those lines) would be a lot more neutral.
 * Please sign your comment. Also, shouldn't this be under Talk 40 above?  IMHO, POV is perfectly acceptable on a Discussion page.  Where else can we work this out? Tom94022 04:33, 27 February 2007 (UTC)


 * I tried to rewrite it. — Omegatron 00:36, 18 March 2007 (UTC)

Floppy Section
"This is probably because some marketing person decided that this was best advertised as a double capacity version of the prior generation 720 KB product (of course, it was 720 KiB)." Doesn't seem very informative, more like speculation. --TJ09 00:49, 7 March 2007 (UTC)

In these paragraphs "KB" is often used; it should be replaced with KiB or kB (with a lower case 'k'). For example, in "The formatted capacity was 737,280 bytes or 720 KB." it should be "KiB": 737,280 bytes are equal to 737.28 kB or 720 KiB.

"The 1200 KB 5¼ inch diskette was marketed as 1.2 MB (1.171875 MiB) without any controversy." Here that KB could be either kB or KiB. If 1.2 MB is right, it should be 1200 kB; instead, if 1.171875 MiB is right, it should be 1200 KiB; 1.2 MB is equal to 1.144 MiB.

"The drive was marketed as 1.44 MB when a more accurate value would have been 1.4 MB (1.40625 MB)." This is also wrong: 1,474,560 B = 1.475 MB = 1.406 MiB, so the latter two should both be MiB (the first is neither an MB nor a MiB figure). The Microsoft article (linked in reference n. 49) incorrectly uses KB as 1024 bytes and also says "There are 1024 bytes in a kilobyte, not 1000.". This should be pointed out because it's wrong and doesn't use the right prefixes. 130.232.126.114 (talk) 19:18, 18 February 2008 (UTC)
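The unit conversions argued over in this section are easy to verify; the following sketch uses the byte counts quoted above and nothing else:

```python
# The 3.5-inch HD floppy holds 1,474,560 bytes. The marketing figure
# "1.44 MB" mixes conventions: it is 1440 * 1024 bytes, neither MB nor MiB.
capacity = 1_474_560

print(capacity / 10**6)          # 1.47456 -> decimal MB
print(capacity / 2**20)          # 1.40625 -> MiB
print(capacity / (1000 * 1024))  # 1.44    -> the mixed 'marketing MB'

# Likewise the '720 KB' double-density floppy:
dd = 737_280
print(dd / 1000)  # 737.28 -> decimal kB
print(dd / 1024)  # 720.0  -> KiB
```

Only the mixed 1000 x 1024 divisor reproduces the advertised 1.44 figure, which is the crux of the complaint about the prefixes in this section.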