User:Swtpc6800/Binary

Binary and Decimal Addressing
Early computers used one of two addressing methods to access system memory: binary (base 2) or decimal (base 10). The IBM 701 (1952) used binary addressing and could address 2,048 36-bit words. The IBM 702 (1953) used decimal addressing and could address 10,000 7-bit words. One of the most successful early computers was the IBM 1401, introduced in 1959; by 1961 one out of every four electronic stored-program computers was an IBM 1401. It used decimal addressing and could have 1,400, 2,000, 4,000, 8,000, 12,000 or 16,000 characters of 8-bit core storage. A reference to a "4k IBM 1401" meant 4,000 characters of storage (memory). The use of K in the binary sense, as in a "32K store", can be found as early as 1960.

By the mid-1960s binary addressing was the standard architecture in computer design. The computer system documentation would specify the memory size with an exact number such as 32,768, 65,536 or 131,072 words of storage. Several methods were used to abbreviate these quantities. Gene Amdahl's seminal 1964 article on the IBM System/360 used 1K to mean 1024, and this style was adopted by other computer vendors; the CDC 7600 System Description (1968) made extensive use of K as 1024. Another style was to truncate the last three digits and append K: the exact values 32,768, 65,536 and 131,072 would become 32K, 65K and 131K. (If 32,768 were rounded rather than truncated, it would be 33K.) The truncated style was used from about 1965 to 1975.

The binary (1024) K was more common than the truncated K, but both were used, sometimes by the same company. The HP 21MX real-time computer (1974) denoted 196,608 words as 196K and 1,048,576 as 1M, while the HP 3000 business computer (1973) could have 64K, 96K, or 128K bytes of memory.
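The two abbreviation styles described above can be sketched as simple functions; this is an illustrative reconstruction, not anyone's actual notation rules. Note how the two styles agree on 32,768 (both give 32K) but diverge on 196,608, matching the HP 21MX's "196K" designation.

```python
def binary_k(n):
    """Binary style: K = 1024 (the style of Amdahl's System/360 article)."""
    return f"{n // 1024}K"

def truncated_k(n):
    """Truncated style: drop the last three digits and append K."""
    return f"{n // 1000}K"

# 196,608 words: the binary style gives 192K, but truncation gives
# 196K -- the figure the HP 21MX documentation actually used.
print(binary_k(196608))     # 192K
print(truncated_k(196608))  # 196K

# On 32,768 the two styles coincide, hiding the ambiguity.
print(binary_k(32768))      # 32K
print(truncated_k(32768))   # 32K
```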

The terms Kbit, Kbyte, Mbit and Mbyte came into use as binary units in the early 1970s, though most memory capacities were still expressed in K. The IBM System/370 Model 158 brochure (1972) stated: "Real storage capacity is available in 512K increments ranging from 512K to 2,048K bytes." Megabyte was used to describe the 22-bit addressing of the DEC PDP-11/70 (1975), and gigabyte the 30-bit addressing of the DEC VAX-11/780 (1977).

By the mid-1970s it was common to see K (or Kbyte) meaning 1024, and the occasional M (or Mbyte) meaning 1,048,576, for words or bytes of memory (RAM). K and M were also used with their decimal meanings for disk storage. The dual use of these prefixes as both decimal and binary was defined in early standards and dictionaries. ANSI/IEEE Std 1084-1986, still available for reference, defined both kilo and mega; in its wording, "computer storage" means system memory: "kilo (K). (1) A prefix indicating 1000. (2) In statements involving size of computer storage, a prefix indicating 2^10, or 1024." "mega (M). (1) A prefix indicating one million. (2) In statements involving size of computer storage, a prefix indicating 2^20, or 1,048,576."
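The dual definitions quoted from ANSI/IEEE Std 1084-1986 can be made concrete with a few constants; this is a sketch of the arithmetic, not a representation of the standard itself. The gap between the two senses grows with the prefix: about 2.4% for K and about 4.9% for M.

```python
# Decimal vs. binary senses of the prefixes, per the quoted definitions.
K_DECIMAL, K_BINARY = 1000, 2**10    # 1000 vs 1024
M_DECIMAL, M_BINARY = 10**6, 2**20   # 1,000,000 vs 1,048,576

# A "64K" memory in the binary sense:
print(64 * K_BINARY)                 # 65536

# The two senses of "2M" differ by 97,152 bytes (about 4.9%):
print(2 * M_BINARY - 2 * M_DECIMAL)  # 97152
```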

In the 1980s the terms kilobyte, megabyte, and gigabyte became popular along with the abbreviations KB, MB, and GB. The binary units Kbyte and Mbyte were formally defined in ANSI/IEEE Std 1212-1991. The terms Kbyte, Mbyte, and Gbyte are found in the trade press and in IEEE journals.

Gigabyte was formally defined in IEEE Std 610.10-1994 as either 1,000,000,000 or 2^30 bytes. Kilobyte, Kbyte, and KB are equivalent units, and all are defined in the current standard, IEEE 100-2000.

The industry has coped with the dual definitions because system memory (RAM) typically uses the binary meaning while disk storage uses the decimal meaning. (There are exceptions, especially with disks.) There are no SI units for computer storage capacity, but the decimal meanings of KB, MB, and GB are often referred to as SI prefixes.

Leave comments here
I don't think the evidence yet supports the statement that by the mid-1970s it was "common to see ... M (or MByte) as 1,048,576 for words or bytes of memory (RAM)". Yr 1972 find is a great one, but it is only one point. A counter point is the Nov 1983 Byte magazine which in 720 pages has only one page (631) advertising both a "2MB" LSI11 memory (binary) and a "140MB" HDD (decimal), but 17 other pages with MB only in a decimal sense. If it was uncommon in 1983, how can it be common in the mid-1970s? I'm busy with a C# problem right now so I don't have time to research much, but sometime this week I will look at a late 1970s Datamation and Byte Magazine to see what I find. Other interesting journals might be MiniMicro or some DEC publication, but I don't have access to early editions of them.

What do you think the criteria should be for the use of common?

For example, IMHO, the DiskTrend use in 1977 was necessary for decimal M, given it was the leading and a widely circulated analysis of the HDD industry, but it required other supporting evidence such as listed in the timeline to say M was common in a decimal sense.

My guess is M (binary) did not become common until we saw PC's with significantly more memory than 1 MiB :-) Tom94022 17:26, 18 June 2007 (UTC)

Floppy Disks
Floppy disk drive and media manufacturers use decimal units for unformatted recording capacity while most computer operating systems use binary units to measure the formatted capacity. The original IBM Personal Computer (1981) used a Tandon TM100 5 1/4 inch floppy disk drive. The single sided drive was rated at 250 kilobytes (unformatted) and the double sided version was rated at 500 kilobytes.

A 5 1/4 inch diskette recorded in double-density MFM holds 6,250 bytes per track; with 40 tracks per side, that yields 250,000 bytes per side. To make it practical to record smaller blocks of data, the tracks are formatted into sectors with gaps between them. The gaps allow individual sectors to be rewritten without overwriting adjacent sectors. Each sector also has additional header bytes to identify it.

With IBM PC-DOS 1.0 and 1.1, each track had 8 sectors of 512 bytes, providing 163,840 bytes per side (8 x 512 x 40). The IBM user documentation referred to this as "160KB" for a single-sided diskette and "320KB" for a double-sided diskette. Starting with PC DOS 2.0 (1983), each track had 9 sectors of 512 bytes, increasing the formatted capacity to 184,320 bytes per side or 368,640 bytes per diskette. The IBM documentation referred to these as "180KB" and "360KB" diskettes. The same drives and media can have different capacities depending on the format.
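The formatted-capacity arithmetic above can be checked with a small helper; the function and its defaults are an illustrative reconstruction of the geometry described in the text, not an actual DOS API.

```python
def formatted_capacity(sectors_per_track, bytes_per_sector=512,
                       tracks_per_side=40, sides=1):
    """Formatted bytes for the diskette geometry described above."""
    return sectors_per_track * bytes_per_sector * tracks_per_side * sides

# PC-DOS 1.x: 8 sectors/track, single sided -> 163,840 bytes,
# which IBM's documentation called "160KB" (163,840 / 1024 = 160).
dos1_side = formatted_capacity(8)
print(dos1_side, dos1_side // 1024)   # 163840 160

# PC-DOS 2.0: 9 sectors/track, double sided -> 368,640 bytes ("360KB").
dos2_disk = formatted_capacity(9, sides=2)
print(dos2_disk, dos2_disk // 1024)   # 368640 360
```

Note that the "KB" figures in IBM's documentation are the byte counts divided by 1024, i.e. the binary sense of K.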

On all diskettes the capacity available to the user is smaller than the total number of sectors because some sectors are reserved by the operating system for boot records and directory tables.

The IBM Personal Computer/AT (1984) had a new 5 1/4 inch disk drive with 80 tracks per side that rotated at 360 rpm (versus 300 rpm) and used new diskette media. The formatted capacity was 1,228,800 bytes, or 1200 KB (80 tracks x 15 sectors x 512 bytes x 2 sides).
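The AT high-density figure can be verified directly from the geometry in parentheses:

```python
# 80 tracks x 15 sectors x 512 bytes x 2 sides, as stated in the text.
at_bytes = 80 * 15 * 512 * 2
print(at_bytes)          # 1228800
print(at_bytes // 1024)  # 1200  -- the "1200 KB" figure (binary K)
```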

The IBM PC Convertible (1986) used 3 1/2 inch diskettes. These were similar in recording technology to the original 5 1/4 inch drives except that they had 80 tracks per side. The formatted capacity was 737,280 bytes, or 720 KB. Apple used the same disks with a different recording technology, GCR, that gave a formatted capacity of 819,200 bytes; Apple referred to this as an 800K disk. The last widely adopted diskette was the 3 1/2 inch high density, with twice the capacity of the 720 KB diskette: 1,474,560 bytes, or 1440 KB. The drive was marketed as 1.44 MB when a more accurate value would have been 1.4 MB (1.40625 MB). Some users have noticed the missing 0.04 MB. The 1200 KB 5 1/4 inch diskette was marketed as 1.2 MB (1.171875 MB) without any controversy.
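The "1.44 MB" oddity comes from mixing units: the byte count is divided by 1024 to get KB (binary K), and that KB figure is then divided by 1000 (decimal). A short sketch makes this explicit; the 18-sectors-per-track layout used below is an assumption consistent with "twice the capacity" of the 9-sector 720 KB format.

```python
# 3.5" high density: 80 tracks x 18 sectors x 512 bytes x 2 sides (assumed layout)
hd_bytes = 80 * 18 * 512 * 2
print(hd_bytes)                       # 1474560
print(hd_bytes / 1024)                # 1440.0  KB (binary K)
print(hd_bytes / 1024 / 1000)         # 1.44    -- the marketing "MB" (mixed units)
print(hd_bytes / 2**20)               # 1.40625 -- the consistent binary MB
```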

Flash memory drives are replacing the floppy disk, and many new computers now come without a floppy disk drive.