History of computing hardware (1960s–present)

The history of computing hardware starting in 1960 is marked by the conversion from vacuum tubes to solid-state devices such as transistors and then integrated circuit (IC) chips. From around 1953 to 1959, discrete transistors came to be considered sufficiently reliable and economical that they made further vacuum tube computers uncompetitive. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology subsequently led to the development of semiconductor memory in the mid-to-late 1960s and then the microprocessor in the early 1970s. This led to primary computer memory moving away from magnetic-core memory devices to solid-state static and dynamic semiconductor memory, which greatly reduced the cost, size, and power consumption of computers. These advances led to the miniaturized personal computer (PC) in the 1970s, starting with home computers and desktop computers, followed by laptops and then mobile computers over the next several decades.

Second generation
For the purposes of this article, the term "second generation" refers to computers using discrete transistors, even when the vendors referred to them as "third-generation". By 1960 transistorized computers were replacing vacuum tube computers, offering lower cost, higher speeds, and reduced power consumption. The marketplace was dominated by IBM and the seven dwarfs:


 * IBM
 * The BUNCH:
    * Burroughs
    * UNIVAC
    * NCR
    * Control Data Corporation (CDC)
    * Honeywell
 * General Electric
 * RCA

Some examples of 1960s second generation computers from those vendors are:


 * the IBM 1401, the IBM 7090/7094, and the IBM System/360;
 * the Burroughs 5000 series;
 * the UNIVAC 1107;
 * the NCR 315;
 * the CDC 1604 and the CDC 3000 series;
 * the Honeywell 200, Honeywell 400, and Honeywell 800;
 * the GE-400 series and the GE-600 series;
 * the RCA 301, 3301 and the Spectra 70 series.

However, some smaller companies made significant contributions. Also, towards the end of the second generation Digital Equipment Corporation (DEC) was a serious contender in the small and medium machine marketplace.

Meanwhile, second-generation computers were also being developed in the USSR as, e.g., the Razdan family of general-purpose digital computers created at the Yerevan Computer Research and Development Institute.

The second-generation computer architectures initially varied; they included character-based decimal computers, sign-magnitude decimal computers with a 10-digit word, sign-magnitude binary computers, and ones' complement binary computers. Philco, RCA, and Honeywell, for example, had some computers that were character-based binary computers, and Digital Equipment Corporation (DEC) and Philco, for example, had two's complement computers. With the advent of the IBM System/360, two's complement became the norm for new product lines.
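The three binary representations mentioned can be compared with a short sketch (Python is used purely for illustration; the 8-bit width and the example value -5 are arbitrary choices):

```python
def sign_magnitude(value, bits):
    """Encode value as sign-magnitude: top bit is the sign, the rest the magnitude."""
    if value < 0:
        return (1 << (bits - 1)) | (-value)
    return value

def ones_complement(value, bits):
    """Encode value as ones' complement: negatives invert every bit of the magnitude."""
    if value < 0:
        return (~(-value)) & ((1 << bits) - 1)
    return value

def twos_complement(value, bits):
    """Encode value as two's complement: a negative value becomes 2**bits + value."""
    return value & ((1 << bits) - 1)

# -5 in 8 bits under each convention:
print(f"{sign_magnitude(-5, 8):08b}")   # 10000101
print(f"{ones_complement(-5, 8):08b}")  # 11111010
print(f"{twos_complement(-5, 8):08b}")  # 11111011
```

Two's complement won out partly because addition and subtraction need no special-casing of the sign, and there is only one representation of zero.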

The most common word sizes for binary mainframes were 36 and 48 bits, although entry-level and midrange machines used smaller words, e.g., 12 bits, 18 bits, 24 bits, 30 bits. All but the smallest machines had asynchronous I/O channels and interrupts. Typically binary computers with word size up to 36 bits had one instruction per word, binary computers with 48 bits per word had two instructions per word and the CDC 60-bit machines could have two, three, or four instructions per word, depending on the instruction mix; the Burroughs B5000, B6500/B7500 and B8500 lines are notable exceptions to this.
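The CDC packing arithmetic above can be checked with a short sketch (Python used for illustration), assuming the CDC 6000-series instruction lengths of 15 and 30 bits:

```python
from itertools import product

WORD = 60
SIZES = (15, 30)  # CDC 6000-series instructions were 15 or 30 bits long

def packings(word=WORD):
    """Enumerate every sequence of instruction sizes that exactly fills one word."""
    found = set()
    for n in range(1, word // min(SIZES) + 1):
        for combo in product(SIZES, repeat=n):
            if sum(combo) == word:
                found.add(combo)
    return found

# How many instructions can one 60-bit word hold, depending on the mix?
counts = {len(p) for p in packings()}
print(sorted(counts))  # [2, 3, 4]
```

A word of two 30-bit instructions gives two per word, a 30-bit plus two 15-bit instructions gives three, and four 15-bit instructions gives four, matching the range stated above.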

First-generation computers with data channels (I/O channels) had a basic DMA interface to the channel cable. The second generation saw both simpler designs, e.g., the channels on the CDC 6000 series had no DMA, and more sophisticated ones, e.g., the 7909 on the IBM 7090 had limited computational, conditional-branching, and interrupt capabilities.

By 1960, magnetic core was the dominant memory technology, although there were still some new machines using drums and delay lines during the 1960s.

Magnetic thin film and rod memory were used on some second-generation machines, but advances in core technology meant they remained niche players until semiconductor memory displaced both core and thin film.

In the first generation, word-oriented computers typically had a single accumulator and an extension, referred to as, e.g., Upper and Lower Accumulator, Accumulator and Multiplier-Quotient (MQ) register. In the second generation, it became common for computers to have multiple addressable accumulators. On some computers, e.g., PDP-6, the same registers served as accumulators and index registers, making them an early example of general-purpose registers.

In the second generation there was considerable development of new address modes, including truncated addressing on, e.g., the Philco TRANSAC S-2000 and the UNIVAC III, and automatic index-register incrementing on, e.g., the RCA 601, the UNIVAC 1107, and the GE-600 series. Although index registers were introduced in the first generation under the name B-line, their use became much more common in the second generation. Similarly, indirect addressing became more common in the second generation, either in conjunction with index registers or instead of them. While first-generation computers typically had a small number of index registers or none, several lines of second-generation computers had large numbers of index registers, e.g., Atlas, Bendix G-20, IBM 7070.
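The indexed and indirect addressing modes just described can be illustrated with a minimal operand-fetch sketch (Python used for illustration; the memory layout and register numbers are invented):

```python
def load(memory, address, index=0, indirect=False, index_regs=None):
    """Fetch an operand using optional indexing and indirect addressing.

    The effective address is the instruction's address field plus the
    contents of the named index register; with indirect addressing, the
    word fetched at that address is itself treated as another address.
    """
    index_regs = index_regs or {}
    ea = address + index_regs.get(index, 0)
    if indirect:
        ea = memory[ea]  # the word at ea holds the final operand address
    return memory[ea]

memory = {10: 42, 11: 99, 20: 11}  # invented memory contents
regs = {1: 1}                      # index register 1 holds 1

print(load(memory, 10))                          # direct: 42
print(load(memory, 10, index=1, index_regs=regs))  # indexed, 10+1: 99
print(load(memory, 20, indirect=True))           # indirect via location 20: 99
```

Indexing makes it cheap to step through arrays without modifying instructions in place; indirection lets one instruction reach an address computed elsewhere, which is why the two modes were often used together or interchangeably.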

The first generation had pioneered the use of special facilities for calling subroutines, e.g., TSX on the IBM 709. In the second generation, such facilities were ubiquitous. In the descriptions below, NSI is the next sequential instruction, the return address. Some examples are:
 * Automatically record the NSI in a register for all or most successful branch instructions:
    * the Jump Address (JA) register on the Philco TRANSAC S-2000
    * the Sequence History (SH) and Cosequence History (CSH) registers on the Honeywell 800
    * the B register on an IBM 1401 with the indexing feature
 * Automatically record the NSI at a standard memory location following all or most successful branches:
    * Store P (STP) locations on the RCA 301 and RCA 501
 * Call instructions that save the NSI in the first word of the subroutine:
    * Return Jump (RJ) on the UNIVAC 1107
    * Return Jump (RJ) on the CDC 3600 and CDC 6000 series
 * Call instructions that save the NSI in an implicit or explicit register:
    * Branch and Load Location in Index Word (BLX) on the IBM 7070
    * Transfer and Set Xn (TSXn) on the GE-600 series
    * Branch and Link (BAL) on the IBM System/360
 * Call instructions that use an index register as a stack pointer and push return information onto the stack:
    * Push Jump (PUSHJ) on the DEC PDP-6
 * Implicit call with return information pushed onto the stack:
    * program descriptors on the Burroughs B5000 line
    * program descriptors on the Burroughs B6500 line
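The linkage styles above can be sketched in miniature (Python used for illustration; the addresses and machine state are hypothetical, and each function models only the return-address bookkeeping, not real instruction execution):

```python
store = {}  # hypothetical main memory

def call_first_word(sub_addr, nsi):
    """UNIVAC/CDC-style Return Jump: store the return address (NSI) in
    the subroutine's first word, then enter at the second word."""
    store[sub_addr] = nsi
    return sub_addr + 1  # execution resumes at the word after the saved NSI

def return_first_word(sub_addr):
    """Return by branching indirectly through the subroutine's first word."""
    return store[sub_addr]

def call_register(nsi):
    """System/360-style Branch and Link: the NSI is saved in a register."""
    return {"link": nsi}  # the link register now holds the return address

stack, sp = [0] * 8, 0  # stack addressed through an index register (sp)
def call_push(nsi):
    """PDP-6-style PUSHJ: push the NSI onto the stack and bump the pointer."""
    global sp
    stack[sp] = nsi
    sp += 1

entry = call_first_word(200, 57)   # call subroutine at 200, NSI = 57
print(entry, return_first_word(200))  # 201 57
call_push(104)
print(stack[0], sp)  # 104 1
```

Saving the return address in the subroutine's first word makes recursion and reentrancy difficult, since a nested call overwrites the saved NSI; this is one reason register and stack linkage eventually prevailed.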

The second generation saw the introduction of features intended to support multiprogramming and multiprocessor configurations, including master/slave (supervisor/problem) mode, storage protection keys, limit registers, protection associated with address translation, and atomic instructions.

Third generation
Mass use of computers accelerated with third-generation computers, which started to appear in the commercial market around 1966. These generally relied on early (sub-1,000-transistor) integrated circuit technology. The third generation ends with the microprocessor-based fourth generation.

In 1958, Jack Kilby at Texas Instruments invented the hybrid integrated circuit (hybrid IC), which had external wire connections, making it difficult to mass-produce. In 1959, Robert Noyce at Fairchild Semiconductor invented the monolithic integrated circuit (IC) chip. It was made of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's monolithic IC was Fairchild's planar process, which allowed integrated circuits to be laid out using the same principles as those of printed circuits. The planar process was developed by Noyce's colleague Jean Hoerni in early 1959, based on the silicon surface passivation and thermal oxidation processes developed by Mohamed M. Atalla at Bell Labs in the late 1950s.

Computers using IC chips began to appear in the early 1960s. For example, the 1961 Semiconductor Network Computer (Molecular Electronic Computer, Mol-E-Com), the first monolithic integrated circuit general-purpose computer (built for demonstration purposes and programmed to simulate a desk calculator), was built by Texas Instruments for the US Air Force.

Some of their early uses were in embedded systems, notably by NASA for the Apollo Guidance Computer, by the military in the LGM-30 Minuteman intercontinental ballistic missile and the Honeywell ALERT airborne computer, and in the Central Air Data Computer used for flight control in the US Navy's F-14A Tomcat fighter jet.

An early commercial use was the 1965 SDS 92. IBM first used ICs in computers for the logic of the System/360 Model 85 shipped in 1969 and then made extensive use of ICs in its System/370 which began shipment in 1971.

The integrated circuit enabled the development of much smaller computers. The minicomputer was a significant innovation in the 1960s and 1970s. It brought computing power to more people, not only through more convenient physical size but also through broadening the computer vendor field. Digital Equipment Corporation became the number two computer company behind IBM with their popular PDP and VAX computer systems. Smaller, affordable hardware also brought about the development of important new operating systems such as Unix.

In November 1966, Hewlett-Packard introduced the 2116A minicomputer, one of the first commercial 16-bit computers. It used CTμL (Complementary Transistor MicroLogic) in integrated circuits from Fairchild Semiconductor. Hewlett-Packard followed this with similar 16-bit computers, such as the 2115A in 1967, the 2114A in 1968, and others.

In 1969, Data General introduced the Nova and shipped a total of 50,000 at $8,000 each. The popularity of 16-bit computers, such as the Hewlett-Packard 21xx series and the Data General Nova, led the way toward word lengths that were multiples of the 8-bit byte. The Nova was the first to employ medium-scale integration (MSI) circuits from Fairchild Semiconductor, with subsequent models using large-scale integration (LSI) circuits. Also notable was that the entire central processor was contained on one 15-inch printed circuit board.

Large mainframe computers used ICs to increase storage and processing abilities. The 1965 IBM System/360 mainframe computer family are sometimes called third-generation computers; however, their logic consisted primarily of SLT hybrid circuits, which contained discrete transistors and diodes interconnected on a substrate with printed wires and printed passive components; the S/360 M85 and M91 did use ICs for some of their circuits. IBM's 1971 System/370 used ICs for their logic, and later models used semiconductor memory.

By 1971, the ILLIAC IV supercomputer was the fastest computer in the world, using about a quarter-million small-scale ECL logic gate integrated circuits to make up sixty-four parallel data processors.

Third-generation computers were offered well into the 1990s; for example, the IBM ES9000 9X2, announced in April 1994, used 5,960 ECL chips to make a 10-way processor. Other third-generation computers offered in the 1990s included the DEC VAX 9000 (1989), built from ECL gate arrays and custom chips, and the Cray T90 (1995).

Fourth generation
Third-generation minicomputers were essentially scaled-down versions of mainframe computers, whereas the fourth generation's origins are fundamentally different. The basis of the fourth generation is the microprocessor, a computer processor contained on a single large-scale integration (LSI) MOS integrated circuit chip.

Microprocessor-based computers were originally very limited in their computational ability and speed and were in no way an attempt to downsize the minicomputer. They were addressing an entirely different market.

Processing power and storage capacities have grown beyond all recognition since the 1970s, but the underlying technology has remained basically the same: large-scale integration (LSI) or very-large-scale integration (VLSI) microchips. For this reason it is widely held that most of today's computers still belong to the fourth generation.

Microprocessors
The microprocessor has origins in the MOS integrated circuit (MOS IC) chip. The MOS IC was fabricated by Fred Heiman and Steven Hofstein at RCA in 1962. Due to rapid MOSFET scaling, MOS IC chips rapidly increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.

The earliest multi-chip microprocessors were the Four-Phase Systems AL1 in 1969 and Garrett AiResearch MP944 in 1970, each using several MOS LSI chips. On November 15, 1971, Intel released the world's first single-chip microprocessor, the 4004, on a single MOS LSI chip. Its development was led by Federico Faggin, using silicon-gate MOS technology, along with Ted Hoff, Stanley Mazor and Masatoshi Shima. It was developed for a Japanese calculator company called Busicom as an alternative to hardwired circuitry, but computers were developed around it, with much of their processing abilities provided by one small microprocessor chip. The dynamic RAM (DRAM) chip was based on the MOS DRAM memory cell developed by Robert Dennard of IBM, offering kilobits of memory on one chip. Intel coupled the RAM chip with the microprocessor, allowing fourth generation computers to be smaller and faster than prior computers. The 4004 was only capable of 60,000 instructions per second, but its successors brought ever-growing speed and power to computers, including the Intel 8008, 8080 (used in many computers using the CP/M operating system), and the 8086/8088 family. (The IBM personal computer (PC) and compatibles use processors that are still backward-compatible with the 8086.) Other producers also made microprocessors which were widely used in microcomputers.

The following table shows a timeline of significant microprocessor development.

Supercomputers
The powerful supercomputers of the era were at the other end of the computing spectrum from the microcomputers, and they also used integrated circuit technology. In 1976, the Cray-1 was developed by Seymour Cray, who had left Control Data in 1972 to form his own company. This machine was the first supercomputer to make vector processing practical. It had a characteristic horseshoe shape to speed processing by shortening circuit paths. Vector processing uses one instruction to perform the same operation on many arguments; it has been a fundamental supercomputer processing method ever since. The Cray-1 could calculate 150 million floating-point operations per second (150 megaflops). Eighty-five were shipped at a price of $5 million each. The Cray-1 had a CPU that was mostly constructed of SSI and MSI ECL ICs.
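The idea of vector processing, one instruction applying the same operation to many operands, can be sketched as follows (Python used for illustration; real vector hardware performs the element operations in pipelined functional units rather than a software loop):

```python
def vector_add(a, b):
    """One 'vector add' applies the same operation to every element pair,
    much as a single Cray-1 instruction operated on whole vector registers."""
    return [x + y for x, y in zip(a, b)]

# Scalar style: one add instruction issued and decoded n times.
# Vector style: one instruction decoded once, n element operations performed.
a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
print(vector_add(a, b))  # [11.0, 22.0, 33.0]
```

The win is not just fewer instructions to fetch and decode: because the hardware knows the element operations are independent, it can stream them through a pipeline at one result per clock.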

Mainframes and minicomputers
Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large institutions: corporations, universities, government agencies, and the like. Users were experienced specialists who did not usually interact with the machine itself, but instead prepared tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the jobs had completed, users could collect the output printouts and punched cards. In some organizations, it could take hours or days between submitting a job to the computing center and receiving the output.

A more interactive form of computer use developed commercially by the middle 1960s. In a time-sharing system, multiple teleprinter and display terminals let many people share the use of one mainframe computer processor, with the operating system assigning time slices to each user's jobs. This was common in business applications and in science and engineering.

A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor. Some of the first computers that might be called "personal" were early minicomputers such as the LINC and PDP-8, and later on VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. They originated as peripheral processors for mainframe computers, taking on some routine tasks and freeing the processor for computation.

By today's standards, they were physically large (about the size of a refrigerator) and costly (typically tens of thousands of US dollars), and thus were rarely purchased by individuals. However, they were much smaller, less expensive, and generally simpler to operate than the mainframe computers of the time, and thus affordable by individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center.

In addition, minicomputers were more interactive than mainframes, and soon had their own operating systems. The minicomputer Xerox Alto (1973) was a landmark step in the development of personal computers, because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software.

Microprocessor and cost reduction
In the minicomputer ancestors of the modern personal computer, processing was carried out by circuits with large numbers of components arranged on multiple large printed circuit boards. Minicomputers were consequently physically large and expensive to produce compared with later microprocessor systems. After the "computer-on-a-chip" was commercialized, the cost to produce a computer system dropped dramatically. The arithmetic, logic, and control functions that previously occupied several costly circuit boards were now available in one integrated circuit which was very expensive to design but cheap to produce in large quantities. Concurrently, advances in developing solid state memory eliminated the bulky, costly, and power-hungry magnetic-core memory used in prior generations of computers.

Micral N


In France, the company R2E (Réalisations et Etudes Electroniques), formed by two former engineers of the Intertechnique company, André Truong Trong Thi and François Gernelle, introduced in February 1973 a microcomputer, the Micral N, based on the Intel 8008. The computer had originally been designed by Gernelle, Lacombe, Beckmann and Benchitrite for the Institut National de la Recherche Agronomique to automate hygrometric measurements. The Micral N cost a fifth of the price of a PDP-8, about 8,500 FF ($1,300). The clock of the Intel 8008 was set at 500 kHz, and the memory was 16 kilobytes. A bus, called Pluribus, allowed connection of up to 14 boards. Different boards for digital I/O, analog I/O, memory, and floppy disks were available from R2E.

Altair 8800 and IMSAI 8080
The development of the single-chip microprocessor was an enormous catalyst to the popularization of cheap, easy to use, and truly personal computers. The Altair 8800, introduced in a Popular Electronics magazine article in the January 1975 issue, at the time set a new low price point for a computer, bringing computer ownership to an admittedly select market in the 1970s. This was followed by the IMSAI 8080 computer, with similar abilities and limitations. The Altair and IMSAI were essentially scaled-down minicomputers and were incomplete: connecting a keyboard or teleprinter to them required heavy, expensive "peripherals". These machines both featured a front panel with switches and lights, which communicated with the operator in binary. To program the machine after switching it on, the bootstrap loader program had to be entered, without error, in binary; then a paper tape containing a BASIC interpreter was loaded from a paper-tape reader. Keying the loader required setting a bank of eight switches up or down and pressing the "load" button, once for each byte of the program, which was typically hundreds of bytes long. The computer could run BASIC programs once the interpreter had been loaded.
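The front-panel entry procedure described above can be modeled in a few lines (Python used for illustration; the switch patterns shown are hypothetical examples, not actual Altair loader bytes):

```python
def byte_from_switches(switches):
    """Convert eight front-panel toggle positions (MSB first) into one byte,
    as an operator keyed in one byte of the bootstrap loader."""
    assert len(switches) == 8
    value = 0
    for s in switches:
        value = (value << 1) | (1 if s else 0)
    return value

def deposit(memory, switch_rows, start=0):
    """Model pressing 'deposit' once per byte: each keyed byte is stored
    at the next sequential memory address."""
    for offset, row in enumerate(switch_rows):
        memory[start + offset] = byte_from_switches(row)

mem = {}
deposit(mem, [
    [0, 0, 1, 1, 1, 1, 1, 0],  # hypothetical byte: 0x3E
    [1, 0, 0, 0, 0, 0, 0, 0],  # hypothetical byte: 0x80
])
print(hex(mem[0]), hex(mem[1]))  # 0x3e 0x80
```

One wrong toggle anywhere in a loader of hundreds of bytes meant starting over, which is why turnkey machines with ROM-resident loaders displaced front-panel entry so quickly.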

The MITS Altair, the first commercially successful microprocessor kit, was featured on the cover of Popular Electronics magazine in January 1975. It was the world's first mass-produced personal computer kit, as well as the first computer to use an Intel 8080 processor. It was a commercial success, with 10,000 Altairs shipped. The Altair also inspired the software development efforts of Paul Allen and his high school friend Bill Gates, who developed a BASIC interpreter for the Altair and then formed Microsoft.

The MITS Altair 8800 effectively created a new industry of microcomputers and computer kits, with many others following, such as a wave of small business computers in the late 1970s based on the Intel 8080, Zilog Z80 and Intel 8085 microprocessor chips. Most ran the CP/M-80 operating system developed by Gary Kildall at Digital Research. CP/M-80 was the first popular microcomputer operating system to be used by many different hardware vendors, and many software packages were written for it, such as WordStar and dBase II.

Many hobbyists during the mid-1970s designed their own systems, with various degrees of success, and sometimes banded together to ease the job. Out of these house meetings, the Homebrew Computer Club developed, where hobbyists met to talk about what they had done, exchange schematics and software, and demonstrate their systems. Many people built or assembled their own computers as per published designs. For example, many thousands of people built the Galaksija home computer later in the early 1980s.

The Altair was influential. It came before Apple Computer and before Microsoft, which produced and sold the Altair BASIC programming language interpreter, Microsoft's first product. The second generation of microcomputers, those that appeared in the late 1970s, sparked by the unexpected demand for kit computers at the electronic hobbyist clubs, were usually known as home computers. For business use these systems were less capable and in some ways less versatile than the large business computers of the day. They were designed for fun and educational purposes, not so much for practical use. Although some simple office/productivity applications could be run on them, they were generally used by computer enthusiasts for learning to program and for running computer games, for which the larger computers of the period were less suitable and much too expensive. For the more technical hobbyists, home computers were also used for electronically interfacing to external devices, such as controlling model railroads, and other general hobbyist pursuits.

Microcomputer emerges


The advent of the microprocessor and solid-state memory made home computing affordable. Early hobby microcomputer systems such as the Altair 8800 and the Apple I, introduced around 1975, followed the release of low-cost 8-bit processor chips, which had sufficient computing power to be of interest to hobby and experimental users. By 1977, pre-assembled systems such as the Apple II, Commodore PET, and TRS-80 (later dubbed the "1977 Trinity" by Byte Magazine) began the era of mass-market home computers; much less effort was required to obtain an operating computer, and applications such as games, word processing, and spreadsheets began to proliferate. Distinct from computers used in homes, small business systems were typically based on CP/M, until IBM introduced the IBM PC, which was quickly adopted. The PC was heavily cloned, leading to mass production and consequent cost reduction throughout the 1980s. This expanded the PC's presence in homes, replacing the home computer category during the 1990s and leading to the current monoculture of architecturally identical personal computers.