Vacuum-tube computer

A vacuum-tube computer, now termed a first-generation computer, is a computer that uses vacuum tubes for logic circuitry. While the history of mechanical aids to computation goes back centuries, if not millennia, the history of vacuum-tube computers is confined to the middle of the 20th century. Lee De Forest invented the triode in 1906. The first example of using vacuum tubes for computation, the Atanasoff–Berry computer, was demonstrated in 1939. Vacuum-tube computers were initially one-of-a-kind designs, but commercial models were introduced in the 1950s and sold in volumes ranging from single digits to thousands of units. By the early 1960s vacuum-tube computers were obsolete, superseded by second-generation transistorized computers.

Much of what we now consider part of digital computing evolved during the vacuum tube era. Initially, vacuum tube computers performed the same operations as earlier mechanical computers, only at much higher speeds. Gears and mechanical relays operate in milliseconds, whereas vacuum tubes can switch in microseconds. The first departure from what was possible prior to vacuum tubes was the incorporation of large memories that could store thousands of bits of data and randomly access them at high speeds. That, in turn, allowed the storage of machine instructions in the same memory as data—the stored program concept, a breakthrough which today is a hallmark of digital computers.

Other innovations included the use of magnetic tape to store large volumes of data in compact form (UNIVAC I) and the introduction of random-access secondary storage (IBM RAMAC 305), the direct ancestor of all the hard disk drives we use today. Even computer graphics began during the vacuum tube era, with the IBM 740 CRT Data Recorder and the Whirlwind light pen. Programming languages originated in the vacuum tube era, including some still used today, such as Fortran and Lisp (IBM 704), Algol (Z22) and COBOL. Operating systems, such as GM-NAA I/O, also were born in this era.

Development
The use of cross-coupled vacuum-tube amplifiers to produce a train of pulses was described by Eccles and Jordan in 1918. This circuit became the basis of the flip-flop, a circuit with two states that became the fundamental element of electronic binary digital computers.
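The two stable states of the cross-coupled pair can be sketched in a few lines of Python. This is an illustrative model only: each tube stage is idealized as a NOR gate, whereas the actual Eccles–Jordan circuit was built from triodes, resistors and capacitors.

```python
def nor(a, b):
    """One inverting stage combining two inputs (idealized as a NOR gate)."""
    return int(not (a or b))

def latch(s, r, q=0, q_bar=1):
    """Cross-coupled pair: each stage's output feeds the other's input.
    Iterate until the circuit settles into one of its two stable states."""
    for _ in range(4):  # a few passes suffice for this small circuit
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

print(latch(s=1, r=0))                # set   -> (1, 0)
print(latch(s=0, r=1))                # reset -> (0, 1)
print(latch(s=0, r=0, q=1, q_bar=0))  # hold: the stored bit is retained -> (1, 0)
```

The "hold" case is the point: with both inputs inactive, the feedback loop sustains whichever state was last set, which is exactly what makes the flip-flop a one-bit memory element.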

The Atanasoff–Berry computer, a prototype of which was first demonstrated in 1939, is now credited as the first vacuum-tube computer. However, it was not a general-purpose computer: it could only solve a system of linear equations, and it was also not very reliable.

During World War II, special-purpose vacuum-tube digital computers such as Colossus were used to break German machine (teleprinter) ciphers, known as Fish. The military intelligence gathered by these systems was essential to the Allied war effort. By the end of the war, ten Mark II Colossi were in use at Bletchley Park; they superseded the Heath Robinson. Each Colossus used 1,600 vacuum tubes (Mark I) or 2,400 vacuum tubes (Mark II). The wartime codebreaking at Bletchley Park was kept secret until the 1970s.

Also during the war, electromechanical binary computers were being developed by Konrad Zuse in Germany; the German military establishment did not prioritize computer development. An experimental electronic computer circuit with around 100 tubes was developed in 1942, but was destroyed in an air raid.

In the United States, work started on the ENIAC computer late in the Second World War. The machine was completed in 1945. Although one application which motivated its development was the production of firing tables for artillery, one of the first uses of ENIAC was to carry out calculations related to the development of a hydrogen bomb. ENIAC was initially programmed with plugboards and switches instead of an electronically stored program. A post-war series of lectures disclosing the design of ENIAC, and a report by John von Neumann on a foreseeable successor to ENIAC, First Draft of a Report on the EDVAC, were widely distributed and were influential in the design of post-war vacuum-tube computers.

Early machines used to tabulate punched cards could only add and subtract. In 1931 IBM introduced an electromechanical multiplying punch, the IBM 601. After World War II, IBM made a version, the 603, that used vacuum tubes to perform the calculations. Surprised by market demand, IBM introduced a more compact version, the IBM 604, in 1948; it used 1,250 miniature vacuum tubes in removable plug-in modules. Much faster than the 601, it could divide and perform up to 60 program steps in one card cycle. Some 5,400 units were leased or sold, making it the first successful commercial application of electronic computation.

The Ferranti Mark 1 (1951) is considered the first commercial stored program vacuum tube computer. The first mass-produced computers were the Bull Gamma 3 (1952, 1,200 units) and the IBM 650 (1954, 2,000 units).

Design
Vacuum-tube technology required a great deal of electricity. The ENIAC computer (1946) had over 17,000 tubes and suffered a tube failure (which would take 15 minutes to locate) on average every two days. In operation the ENIAC consumed 150 kilowatts of power, of which 80 kilowatts were used for heating tubes, 45 kilowatts for DC power supplies, 20 kilowatts for ventilation blowers, and 5 kilowatts for punched-card auxiliary equipment.

Because the failure of any one of the thousands of tubes in a computer could result in errors, tube reliability was of high importance. Special quality tubes were built for computer service, with higher standards of materials, inspection and testing than standard receiving tubes.

One effect of digital operation that rarely appeared in analog circuits was cathode poisoning. Vacuum tubes that operated for extended intervals with no plate current would develop a high-resistivity layer on the cathodes, reducing the gain of the tube. Specially selected materials were required for computer tubes to prevent this effect. To avoid mechanical stresses associated with warming the tubes to operating temperature, often the tube heaters had their full operating voltage applied slowly, over a minute or more, to prevent stress-related fractures of the cathode heaters. Heater power could be left on during standby time for the machine, with high-voltage plate supplies switched off. Marginal testing was built into sub-systems of a vacuum-tube computer; by lowering plate or heater voltages and testing for proper operation, components at risk of early failure could be detected. To regulate all the power-supply voltages and prevent surges and dips from the power grid from affecting computer operation, power was derived from a motor-generator set that improved the stability and regulation of power-supply voltages.

Two broad types of logic circuits were used in construction of vacuum-tube computers. The "asynchronous", or direct, DC-coupled type used only resistors to connect between logic gates and within the gates themselves. Logic levels were represented by two widely separated voltages. In the "synchronous", or "dynamic pulse", type of logic, every stage was coupled by pulse networks such as transformers or capacitors. Each logic element had a "clock" pulse applied. Logic states were represented by the presence or absence of pulses during each clock interval. Asynchronous designs potentially could operate faster, but required more circuitry to protect against logic "races", as different logic paths would have different propagation time from input to stable output. Synchronous systems avoided this problem, but needed extra circuitry to distribute a clock signal, which might have several phases for each stage of the machine. Direct-coupled logic stages were somewhat sensitive to drift in component values or small leakage currents, but the binary nature of operation gave circuits considerable margin against malfunction due to drift. An example of a "pulse" (synchronous) computer was the MIT Whirlwind. The IAS computers (ILLIAC and others) used asynchronous, direct-coupled logic stages.

Tube computers primarily used triodes and pentodes as switching and amplifying elements. At least one specially designed gating tube had two control grids with similar characteristics, which allowed it to directly implement a two-input AND gate. Thyratrons were sometimes used, such as for driving I/O devices or to simplify design of latches and holding registers. Often vacuum-tube computers made extensive use of solid-state ("crystal") diodes to perform AND and OR logic functions, and only used vacuum tubes to amplify signals between stages or to construct elements such as flip-flops, counters, and registers. The solid-state diodes reduced the size and power consumption of the overall machine.
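Diode AND/OR logic can be illustrated with a toy voltage model (a sketch for illustration, not any specific machine's circuit): in a diode AND gate the output node is pulled down to the lowest input voltage, in a diode OR gate it is pulled up to the highest, and a tube stage then restores the degraded signal to a full logic level.

```python
LOW, HIGH = 0.0, 10.0  # assumed logic-level voltages, chosen for illustration

def diode_and(*inputs):
    """Diodes conduct toward the lowest input, pulling the output down;
    the output is HIGH only if every input is HIGH."""
    return min(inputs)

def diode_or(*inputs):
    """Diodes conduct from the highest input, pulling the output up;
    the output is HIGH if any input is HIGH."""
    return max(inputs)

def tube_stage(v, threshold=HIGH / 2):
    """Amplifying tube stage: restores a degraded level to clean LOW/HIGH.
    (Real diodes drop some voltage per stage, which is why cascaded diode
    gates needed tube amplification between them.)"""
    return HIGH if v > threshold else LOW

# (HIGH AND HIGH) OR LOW, with a tube stage restoring the level
out = tube_stage(diode_or(diode_and(HIGH, HIGH), LOW))
print(out)  # 10.0
```

The min/max formulation captures why diode networks alone could not be cascaded indefinitely: each passive stage attenuates the signal, and only the (more expensive, power-hungry) tube stages restore it, which is exactly the division of labor the paragraph above describes.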

Memory technology
Early systems used a variety of memory technologies before finally settling on magnetic-core memory. The Atanasoff–Berry computer of 1942 stored numerical values as binary numbers on a revolving mechanical drum, with a special circuit to refresh this "dynamic" memory on every revolution. The wartime ENIAC could store 20 numbers, but the vacuum-tube registers used were too expensive to build to store more than a few numbers. A stored-program computer was out of reach until an economical form of memory could be developed.

In 1944, J. Presper Eckert proposed using mercury delay-line memory in a successor to the ENIAC which would become the EDVAC. Eckert had earlier worked with delay-line memory for radar signal processing. Maurice Wilkes built EDSAC in 1947, which had a mercury delay-line memory that could store 32 words of 17 bits each. Since the delay-line memory was inherently serially organized, the machine logic was bit-serial as well. Eckert and John Mauchly used the technology in the 1951 UNIVAC I and received a patent for delay-line memory in 1953. Bits in a delay line are stored as sound waves in the medium, which travel at a constant rate. The UNIVAC I (1951) used seven memory units, each containing 18 columns of mercury, storing 120 bits each. This provided a memory of 1,000 12-character words with an average access time of 300 microseconds. This memory subsystem formed its own walk-in room.
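The serial nature of delay-line storage can be modeled as a circulating buffer (a toy sketch; the word count here is arbitrary, not UNIVAC's): a word can only be read when it emerges at the output transducer, so access time depends on where the word currently is in its circulation.

```python
class DelayLine:
    """Toy model of one delay-line channel: words circulate at a fixed
    rate and are re-amplified and re-inserted at the input end."""

    def __init__(self, words):
        self.words = list(words)
        self.pos = 0  # index of the word now emerging at the output

    def step(self):
        """Advance the line by one word-time."""
        self.pos = (self.pos + 1) % len(self.words)

    def read(self, index):
        """Serial access: wait until the wanted word circulates to the output."""
        waited = 0
        while self.pos != index:
            self.step()
            waited += 1
        value = self.words[self.pos]
        self.step()  # the word re-enters the line after being read
        return value, waited

line = DelayLine(["A", "B", "C", "D"])
print(line.read(2))  # ('C', 2): two word-times of waiting
print(line.read(2))  # ('C', 3): just missed it, nearly a full circulation
```

The second read illustrates the worst case inherent to this technology: a word that has just passed the transducer cannot be re-read until it makes almost a full trip through the line, which is why average (not fixed) access times are quoted for delay-line machines.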

Williams tubes were the first true random-access memory device. The Williams tube displays a grid of dots on a cathode-ray tube (CRT), creating a small charge of static electricity over each dot. The charge at the location of each of the dots is read by a thin metal sheet just in front of the display. Frederic Calland Williams and Tom Kilburn applied for patents for the Williams tube in 1946. The Williams tube was much faster than the delay line, but suffered from reliability problems. The UNIVAC 1103 used 36 Williams tubes with a capacity of 1,024 bits each, giving a total random access memory of 1,024 words of 36 bits each. The access time for Williams-tube memory on the IBM 701 was 30 microseconds.

Magnetic drum memory was invented in 1932 by Gustav Tauschek in Austria. A drum consisted of a large, rapidly rotating metal cylinder coated with a ferromagnetic recording material. Most drums had one or more rows of fixed read/write heads along the long axis of the drum, one head for each track. The drum controller selected the proper head and waited for the data to appear under it as the drum turned. The IBM 650 had a drum memory of 1,000 to 4,000 10-digit words with an average access time of 2.5 milliseconds.
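The quoted average access time follows directly from rotational latency: with fixed heads, the wanted word is on average half a revolution away. A quick check, assuming the commonly cited IBM 650 drum speed of 12,500 rpm:

```python
def avg_rotational_latency_ms(rpm):
    """Average drum access time: half of one revolution period,
    since the wanted word is on average half a turn from the head."""
    revolution_ms = 60_000 / rpm  # milliseconds per revolution
    return revolution_ms / 2

latency = avg_rotational_latency_ms(12_500)
print(latency)  # 2.4 — close to the 2.5 ms average quoted for the IBM 650
```

The small gap between 2.4 ms and the quoted 2.5 ms would be accounted for by controller overhead (head selection and switching), which this back-of-the-envelope model ignores.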

Magnetic-core memory was patented by An Wang in 1951. Core memory uses tiny magnetic ring cores, through which wires are threaded to write and read information. Each core represents one bit of information. The cores can be magnetized in two different ways (clockwise or counterclockwise), and the bit stored in a core is zero or one depending on that core's magnetization direction. The wires allow an individual core to be set to either a one or a zero and for its magnetization to be changed by sending appropriate electric current pulses through selected wires. Core memory offered random access and greater speed, in addition to much higher reliability. It was quickly put to use in computers such as the MIT Whirlwind, where an initial 1,024 16-bit words of memory were installed, replacing Williams tubes. Likewise, the UNIVAC 1103 was upgraded to the 1103A in 1956, with core memory replacing Williams tubes. The core memory used on the 1103 had an access time of 10 microseconds.
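The read/write mechanics described above, two magnetization states, addressing by current pulses on selected wires, and a destructive read followed by a rewrite, can be sketched as a toy model. The class and method names here are illustrative, not taken from any particular machine:

```python
class CorePlane:
    """Toy model of one core plane: each core stores one bit as a
    magnetization direction; reading is destructive and must be
    followed by a rewrite cycle to restore the value."""

    def __init__(self, rows, cols):
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        # Half-amplitude currents on one X and one Y wire coincide only
        # at the addressed core, which alone sees enough current to switch.
        self.cores[y][x] = bit

    def read(self, x, y):
        bit = self.cores[y][x]  # the sense wire detects a flip (a stored 1)
        self.cores[y][x] = 0    # the read pulse drives the core to 0
        if bit:
            self.write(x, y, 1)  # rewrite cycle restores the destroyed bit
        return bit

plane = CorePlane(4, 4)
plane.write(1, 2, 1)
print(plane.read(1, 2))  # 1 — and the value survives the destructive read
print(plane.read(1, 2))  # 1
```

The mandatory rewrite after every read is why a core memory's full cycle time was longer than its access time: the quoted 10-microsecond figure covers fetching the bit, not the complete read-restore cycle.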

Start of the computer industry
The 1950s saw the evolution of the electronic computer from a research project to a commercial product, with common designs and multiple copies made, thereby starting a major new industry. The early commercial machines used vacuum tubes and a variety of memory technologies, converging on magnetic core by the end of the decade.

Many of the early commercial machines carried on from the one-off machines and were designed for the rapid mathematical calculations needed for scientific, engineering and military purposes. But some were designed for the data-processing workloads generated by the large, existing punched-card ecosystem. IBM in particular divided its computers into scientific and commercial lines, which shared electronic technology and peripherals but had completely incompatible instruction set architectures and software. This practice continued into its second-generation (transistorized) machines, until reunification by the IBM System/360 project (see IBM 700/7000 series).

Below is a list of these first-generation commercial computers.