Counter-machine model

There are many variants of the counter machine, among them those of Hermes, Ershov, Péter, Minsky, Lambek, Shepherdson and Sturgis, and Schönhage. These are explained below.

1954: Hermes' model
Shepherdson and Sturgis (1963) observe that "the proof of this universality [of digital computers to Turing machines] ... seems to have been first written down by Hermes, who showed in [7--their reference number] how an idealized computer could be programmed to duplicate the behavior of any Turing machine", and: "Kaphengst's approach is interesting in that it gives a direct proof of the universality of present-day digital computers, at least when idealized to the extent of admitting an infinity of storage registers each capable of storing arbitrarily long words".

The only two arithmetic instructions are
 * 1) Successor operation
 * 2) Testing two numbers for equality

The rest of the operations are transfers from register-to-accumulator or accumulator-to-register or test-jumps.

Kaphengst's paper is written in German; Shepherdson and Sturgis's translation uses terms such as "mill" and "orders".

The machine contains "a mill" (accumulator). Kaphengst designates his mill/accumulator with the "infinity" symbol, but we will use "A" in the following description. It also contains an "order register" ("order" as in "instruction", not as in "sequence"). (This usage came from the Burks–Goldstine–von Neumann (1946) report's description of "...an Electronic Computing Instrument".) The order/instruction register is register "0". And, although not clear from Shepherdson and Sturgis's exposition, the model contains an "extension register" designated by Kaphengst "infinity-prime"; we will use "E".

The instructions are stored in the registers:
 * "...so the machine, like an actual computer, is capable of doing arithmetic operations on its own program" (p. 244).

Thus this model is actually a random-access machine. In the following, "[ r ]" indicates "contents of" register r, etc.

Shepherdson and Sturgis remove the mill/accumulator A and reduce the Kaphengst instructions to register-to-register "copy", arithmetic operation "increment", and "register-to-register compare". Observe that there is no decrement. This model, almost verbatim, is to be found in Minsky (1967); see more in the section below.
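The reduced instruction set just described – register-to-register copy, increment, and compare-with-jump – can be sketched as a small interpreter. The tuple encoding of instructions and the dict of registers below are illustrative assumptions, not Kaphengst's or Shepherdson–Sturgis's notation:

```python
# A minimal sketch (illustrative encoding) of the copy/increment/compare set.
def run(program, registers):
    """Execute instructions: COPY (register-to-register copy),
    INC (increment), JE (compare two registers, jump on equality)."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "COPY":        # COPY j k : [rj] -> rk
            _, j, k = op
            registers[k] = registers[j]
            pc += 1
        elif op[0] == "INC":       # INC j : [rj] + 1 -> rj
            _, j = op
            registers[j] += 1
            pc += 1
        elif op[0] == "JE":        # JE j k z : IF [rj] = [rk] THEN jump to z
            _, j, k, z = op
            pc = z if registers[j] == registers[k] else pc + 1
    return registers

# Example: add [r0] into r1 by counting a scratch register r2 up to
# [r0] -- with no decrement available, we count up instead of down.
prog = [
    ("JE", 2, 0, 4),   # 0: if [r2] = [r0], halt (jump past the end)
    ("INC", 1),        # 1: [r1] + 1 -> r1
    ("INC", 2),        # 2: [r2] + 1 -> r2
    ("JE", 1, 1, 0),   # 3: unconditional jump (r1 always equals itself)
]
run(prog, {0: 3, 1: 5, 2: 0})   # r1 ends at 5 + 3 = 8
```

Note how the absence of decrement forces the "count up to equality" idiom in the example.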

1958: Ershov's class of operator algorithms
Shepherdson and Sturgis observe that Ershov's model allows for storage of the program in the registers. They assert that Ershov's model is as follows:

1958: Péter's "treatment"
Shepherdson and Sturgis observe that Péter's "treatment" (they are not too specific here) has an equivalence to the instructions shown in the following table. Of these instructions they comment specifically that:
 * "from the point of view of proving as quickly as possible the computability of all partial recursive functions Péter's is perhaps the best; for proving their computability by Turing machines a further analysis of the copying operation is necessary along the lines we have taken above."

1961: Minsky's model of a partial recursive function reduced to a "program" of only two instructions
His inquiry into the problems of Emil Post (the tag system) and Hilbert's 10th problem (Hilbert's problems, Diophantine equations) led Minsky to the following definition of:
 * "an interesting basis for recursive function theory involving programs of only the simplest arithmetic operations" (Minsky (1961) p. 437).

His "Theorem Ia" asserts that any partial recursive function is represented by "a program operating on two integers S1 and S2 using instructions Ij of the forms (cf. Minsky (1961) p. 449):

The first theorem provides the context for a second "Theorem IIa" that
 * "...represents any partial recursive function by a program operating on one integer S [contained in a single register r1] using instructions Ij of the forms":

In this second form the machine uses Gödel numbers to process "the integer S". He asserts that the first machine/model does not need to do this if it has 4 registers available to it.
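The Gödel-numbering idea behind the one-register form can be illustrated with a hedged sketch: two counters a and b are packed into the single integer S = 2^a · 3^b (the particular base primes are an illustrative assumption), so that counter operations become arithmetic on S:

```python
# Hedged sketch of the Goedel-number trick: increment of counter a is
# multiplication of S by 2, decrement is division by 2, and the
# zero-test is a divisibility test.
def encode(a, b):
    return 2**a * 3**b

def inc_a(S):         # a + 1  ->  multiply S by 2
    return S * 2

def dec_a(S):         # a - 1  ->  divide S by 2 (legal only when a > 0)
    assert S % 2 == 0
    return S // 2

def a_is_zero(S):     # a = 0  exactly when S is not divisible by 2
    return S % 2 != 0

S = encode(3, 5)
dec_a(inc_a(S)) == S    # incrementing then decrementing a leaves S unchanged
```

A machine that can multiply and divide by fixed constants and test divisibility can therefore simulate several counters inside the one integer S.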

1961: Melzak model: a single ternary instruction with addition and proper subtraction

 * "It is our object to describe a primitive device, to be called a Q-machine, which arrives at effective computability via arithmetic rather than via logic. Its three operations are keeping tally, comparing non-negative integers, and transferring" (Melzak (1961) p. 281)

In the context of his model, "keeping tally" means "adding by successive increments" (throwing pebbles into a hole) or "subtracting by successive decrements"; transferring means moving (not copying) the contents from hole A to hole B; and comparing numbers is self-evident. This appears to be a blend of the three base models.

Melzak's physical model is holes { X, Y, Z, etc. } in the ground together with an unlimited supply of pebbles in a special hole S (Sink or Supply or both? Melzak doesn't say).


 * "The Q-machine consists of an indefinitely large number of locations: S, A1, A2, ..., an indefinitely large supply of counters distributed among these locations, a program, and an operator whose sole purpose is to carry out the instructions. Initially all but a finite number from among the locations ... are empty and each of the remaining ones contains a finite number of counters" (p. 283, boldface added)

The instruction is a single "ternary operation" he calls "XYZ", which denotes the operation:
 * 1) Count the number of pebbles in hole Y,
 * 2) put them back into Y,
 * 3) attempt to remove this same number from hole X. IF this is not possible because hole X does not contain enough pebbles THEN do nothing and jump to instruction #I; ELSE,
 * 4) remove the Y-quantity from X and transfer it to, i.e. add it to, the quantity in hole Z.

Of all the possible operations, some are not allowed, as shown in the table below:

Some observations about the Melzak model:
 * 1) If all the holes start with 0, then how do we increment? Apparently this is not possible; at least one hole must contain a single pebble.
 * 2) The conditional "jump" occurs on every instance of the XYZ type: if the operation cannot be performed because X does not have enough counters/pebbles, then the jump occurs; otherwise it is performed and the instructions continue to the next in sequence.
 * 3) Neither SXY nor XXY can cause a jump, because both can always be performed.
 * 4) Melzak adds indirection to his model (see random-access machine) and gives two examples of its use. But he does not pursue this further. This is the first verified instance of "indirection" to appear in the literature.
 * 5) Both papers – that of Z. Alexander Melzak (William Lowell Putnam Mathematical Competition winner, 1950), received 15 May 1961, and that of Joachim Lambek, received a month later on 15 June 1961 – are contained in the same volume, one after the other.
 * 6) Is Melzak's assertion true – that this model is "so simple that its working could probably be understood by an average school-child after a few minutes' explanation" (p. 282)? The reader will have to decide.
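Melzak's XYZ operation, as described in the four steps above, might be sketched as follows; the dict of named holes and the Boolean return value (standing in for the conditional jump) are illustrative assumptions:

```python
# Sketch of the single ternary "XYZ" operation on holes of pebbles.
def xyz(holes, x, y, z):
    """Attempt to remove [Y] pebbles from hole x and add them to hole z."""
    n = holes[y]          # steps 1-2: count the pebbles in y, put them back
    if holes[x] < n:      # step 3: not enough pebbles in x -> take the jump
        return False
    holes[x] -= n         # step 4: move the Y-quantity out of x ...
    holes[z] += n         #         ... and add it to z
    return True           # performed: fall through to the next instruction

holes = {"S": 100, "X": 7, "Y": 3, "Z": 0}
xyz(holes, "X", "Y", "Z")    # moves 3 pebbles: X becomes 4, Z becomes 3
```

Observation 3 above is visible here: with x = S (the unlimited supply) or x = y, the `holes[x] < n` test can never succeed, so those forms never jump.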

1961: Lambek "abacus" model: atomizing Melzak's model to X+, X- with test
Original "abacus" model of Lambek (1962):

Lambek references Melzak's paper. He atomizes Melzak's single 3-parameter operation (really 4 if we count the instruction addresses) into a 2-parameter increment "X+" and a 3-parameter decrement "X-". He also provides both an informal and a formal definition of "a program". This form is virtually identical to the Minsky (1961) model, and has been adopted by Boolos, Burgess & Jeffrey.

Abacus model of Boolos, Burgess & Jeffrey:

In the various editions, beginning with 1970, the authors use the Lambek (1961) model of an "infinite abacus". This series of Wikipedia articles uses their symbolism, e.g. "[ r ] + 1 → r": "the contents of the register identified as number 'r', plus 1, replaces the contents of [is put into] register number 'r'".

They use Lambek's name "abacus" but follow Melzak's pebbles-in-holes model, modified by them to a 'stones-in-boxes' model. Like the original abacus model of Lambek, their model retains Minsky's (1961) use of non-sequential instructions: unlike the "conventional" computer-like default of sequential instruction execution, the next instruction Ia is contained within the instruction itself.

Observe, however, that B-B and B-B-J do not use a variable "X" with a specifying parameter in the mnemonics (as in the Lambek version, i.e. "X+" and "X-"); rather, the instruction mnemonics name the registers themselves, e.g. "2+" or "3-":
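The abacus scheme – increment and decrement-with-test, with each instruction carrying its successor address(es) explicitly, so that execution is non-sequential – can be sketched as follows; the tuple encoding of instructions is an illustrative assumption:

```python
# A sketch of the abacus model: "r+" and "r-" name a register directly
# and each instruction names the instruction(s) to execute next.
def abacus(program, regs, start=0, halt=-1):
    pc = start
    while pc != halt:
        instr = program[pc]
        if instr[0] == "+":            # ("+", r, next): [r] + 1 -> r
            _, r, nxt = instr
            regs[r] = regs.get(r, 0) + 1
            pc = nxt
        else:                          # ("-", r, next, on_zero)
            _, r, nxt, on_zero = instr
            if regs.get(r, 0) == 0:    # cannot decrement:
                pc = on_zero           # take the "empty" exit
            else:
                regs[r] -= 1
                pc = nxt
    return regs

# Empty register 2 into register 1, i.e. [1] + [2] -> r1, 0 -> r2:
prog = {0: ("-", 2, 1, -1),   # "2-": decrement r2, go to 1; if empty, halt
        1: ("+", 1, 0)}       # "1+": increment r1, go back to 0
abacus(prog, {1: 3, 2: 2})    # leaves r1 = 5, r2 = 0
```

Because every instruction names its successor, the program is a directed graph rather than a list – exactly the non-sequential flavor noted above.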

1963: Shepherdson and Sturgis's model
Shepherdson and Sturgis reference Minsky (1961) as it appeared for them in the form of an MIT Lincoln Laboratory report: "In Section 10 we show that theorems (including Minsky's results [21, their reference]) on the computation of partial recursive functions by one or two tapes can be obtained rather easily from one of our intermediate forms." Their model is strongly influenced by the model and the spirit of Hao Wang (1957) and his Wang B-machine (also see Post–Turing machine). They "sum up by saying": "...we have tried to carry a step further the 'rapprochement' between the practical and theoretical aspects of computation suggested and started by Wang."

Unlimited Register Machine URM: This, their "most flexible machine... consists of a denumerable sequence of registers numbered 1, 2, 3, ..., each of which can store any natural number...Each particular program, however involves only a finite number of these registers" (p. 219). In other words, the number of registers is potentially infinite, and each register's "size" is infinite.
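The "denumerable sequence of registers", of which any one program touches only finitely many, can be sketched with a sparse map in which absent registers read as 0; the method names below are illustrative, not Shepherdson and Sturgis's own mnemonics:

```python
# A hedged sketch of an unlimited register machine's storage: only the
# finitely many registers a program has touched actually exist.
class URM:
    def __init__(self):
        self.reg = {}                  # sparse: only touched registers stored
    def get(self, n):
        return self.reg.get(n, 0)      # untouched registers hold 0
    def inc(self, n):                  # [rn] + 1 -> rn
        self.reg[n] = self.get(n) + 1
    def dec(self, n):                  # [rn] - 1 -> rn, when [rn] > 0
        if self.get(n) > 0:
            self.reg[n] -= 1
    def clear(self, n):                # 0 -> rn
        self.reg[n] = 0
    def copy(self, m, n):              # [rm] -> rn
        self.reg[n] = self.get(m)

m = URM()
m.inc(5); m.inc(5); m.copy(5, 9)   # registers 5 and 9 now both hold 2
```

Each register's "size" is unbounded too: Python's arbitrary-precision integers stand in for registers that "can store any natural number".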

They offer the following instruction set, and the following "Notes":

Notes.
 * 1) This set of instructions is chosen for ease of programming the computation of partial recursive functions rather than economy; it is shown in Section 4 that this set is equivalent to a smaller set.
 * 2) There are infinitely many instructions in this list since m, n [ contents of rj, etc.] range over all positive integers.
 * 3) In instructions a, b, c, d the contents of all registers except n are supposed to be left unchanged; in instructions e, f, the contents of all registers are unchanged (p. 219).

Indeed, they show how to reduce this set further, to the following (for an infinite number of registers each of infinite size):

Limited Register Machine LRM: Here they restrict the machine to a finite number of registers N, but they also allow more registers to "be brought in" or removed if empty (cf. p. 228). They show that the remove-register instruction need not require an empty register.

Single-Register Machine SRM: Here they are implementing the tag system of Emil Post and thereby allow only writing to the end of the string and erasing from the beginning. This is shown in their Figure 1 as a tape with a read head on the left and a write head on the right, and it can only move the tape right. "A" is their "word" (p. 229):
 * a. P(i)   ;add ai to the end of A
 * b. D      ;delete the first letter of A
 * f'. Ji[E1] ;If A begins with ai jump to exit 1.

They also provide a model as "a stack of cards" with the symbols { 0, 1 } (p. 232 and Appendix C p. 248):
 * 1) add card at top printed 1
 * 2) add card at top printed 0
 * 3) remove bottom card; if printed 1 jump to instruction m, else next instruction.
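This card-stack model might be sketched as follows, using a double-ended queue for the stack (an illustrative choice):

```python
# Sketch of the "stack of cards" model: cards printed 0 or 1 are added
# at the top, and the bottom card is removed and inspected.
from collections import deque

stack = deque()                  # left end = bottom, right end = top

def add_top(bit):                # instructions 1) and 2)
    stack.append(bit)

def remove_bottom():             # instruction 3): the caller jumps to
    return stack.popleft()       # instruction m if the removed card is 1

add_top(1); add_top(0); add_top(1)
remove_bottom()                  # returns 1 -- the first card added
```

Writing at one end and reading at the other is what makes this a queue, matching the tag-system discipline of the Single-Register Machine above: append to the end of the word, delete from its beginning.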

1967: Minsky's "Simple Universal Base for a Program Computer"
Ultimately, in Problem 11.7-1 Minsky observes that many bases of computation can be formed from a tiny collection:
 * "Many other combinations of operation types [ 0 ], [ ' ], [ - ], [ O- ], [ → ] and [ RPT ] form universal bases. Find some of these bases. Which combinations of three operations are not universal bases? Invent some other operations..." (p. 214)

The following are definitions of the various instructions he treats:

Minsky (1967) begins with a model that consists of the three operations plus HALT:
 * { [ 0 ], [ ' ], [ - ], [ H ] }

He observes that we can dispense with [ 0 ] if we allow for a specific register e.g. w already "empty" (Minsky (1967) p. 206). Later (pages 255ff) he compresses the three { [ 0 ], [ ' ], [ - ] }, into two { [ ' ], [ - ] }.

But he admits the model is easier if he adds some [pseudo]-instructions [ O- ] (combined [ 0 ] and [ - ]) and "go(n)". He builds "go(n)" out of the register w pre-set to 0, so that [O-] (w, (n)) is an unconditional jump.
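The construction of "go(n)" from [ O- ] and the permanently empty register w can be sketched as follows (the function signature is an illustrative encoding):

```python
# Sketch of Minsky's trick: with register w held at 0 and never
# incremented, [O-] applied to w always takes its "zero" branch,
# so [O-](w, n) acts as the unconditional jump "go(n)".
def o_minus(regs, r, jump_to, fall_through):
    """[O-]: IF [r] = 0 THEN go to jump_to ELSE [r] - 1 -> r, continue."""
    if regs[r] == 0:
        return jump_to
    regs[r] -= 1
    return fall_through

regs = {"w": 0, "a": 7}
o_minus(regs, "w", 17, 5)   # always returns 17: this is go(17)
o_minus(regs, "a", 17, 5)   # returns 5 and leaves [a] = 6
```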

In his section 11.5 "The equivalence of Program Machines with General-recursive functions" he introduces two new subroutines:
 * f. [ → ]


 * j. [ ≠ ]
 * "Jump unless equal": IF [ rj ] ≠ [ rk ] THEN jump to zth instruction ELSE next instruction

He proceeds to show how to replace the "successor-predecessor" set { [ 0 ], [ ' ], [ - ] } with the "successor-equality" set { [ 0 ], [ ' ], [ ≠ ] }. He then defines his "REPEAT" [ RPT ] and shows that we can define any primitive recursive function by the "successor-repeat" set { [ 0 ], [ ' ], [ RPT ] }, where the range of the [ RPT ] cannot include itself (if it can, we get what is called the mu operator; see also mu recursive functions) (p. 213):


 * "Any general recursive function can be computed by a program computer using only operations [ 0 ], [ ' ], [ RPT ] if we permit a RPT operation to lie in its own range ... [however] in general a RPT operation could not be an instruction in the finite-state part of the machine...[if it were] this might exhaust any particular amount of storage allowed in the finite part of the machine. RPT operations require infinite registers of their own, in general... etc." (p. 214)

1980: Schönhage's 0-parameter model RAM0
Schönhage (1980) developed his computational model in the context of a "new" model he called the Storage Modification Machine (SMM), his variety of pointer machine. His development described a RAM (random-access machine) model with a remarkable instruction set requiring no operands at all, except, perhaps, for the "conditional jump" (and even that could be achieved without an operand):


 * "...the RAM0 version deserves special attention for its extreme simplicity; its instruction set consists of only a few one-letter codes, without any (explicit) addressing" (p. 494)

The way Schönhage did this is of interest. He (i) atomizes the conventional register "address:datum" into its two parts: "address", and "datum", and (ii) generates the "address" in a specific register n to which the finite-state machine instructions (i.e. the "machine code") would have access, and (iii) provides an "accumulator" register z where all arithmetic operations are to occur.

His particular RAM0 model has only two "arithmetic operations": "Z" for "set contents of register z to zero", and "A" for "add one to contents of register z". The only access to address-register n is via a copy-from-z-to-n instruction called "set address n". To store a "datum" held in accumulator z in a given register, the machine uses the contents of n to specify the register's address and register z to supply the datum to be sent to the register.

Peculiarities: A first peculiarity of the Schönhage RAM0 is how it "loads" something into register z: register z first supplies the register-address and then, secondly, receives the datum from that register – a form of indirect "load". The second peculiarity is the specification of the COMPARE operation: this is a "jump if accumulator-register z = zero" (not, for example, "compare the contents of z to the contents of the register pointed to by n"). Apparently if the test fails the machine skips over the next instruction, which always must be of the form "goto λ", where "λ" is the jump-to address. The instruction "compare contents of z to zero" is unlike that of Schönhage's successor RAM1 model (and other known successor models) with the more conventional "compare contents of register z to contents of register a for equality".
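Taken together, the description above suggests an interpreter along the following lines; the one-letter opcodes and their exact semantics here are reconstructions from the prose, not Schönhage's published instruction table:

```python
# A hedged RAM0-style sketch: accumulator z, address register n, and a
# conditional that skips over a following "goto" when z is non-zero.
def ram0(program, mem):
    z = n = 0
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op == "Z":                 # set contents of z to zero
            z = 0
        elif op == "A":               # add one to contents of z
            z += 1
        elif op == "S":               # "set address n": copy z into n
            n = z
        elif op == "N":               # store: register addressed by n gets z
            mem[n] = z
        elif op == "L":               # indirect load: z supplies the address,
            z = mem.get(z, 0)         # then receives that register's datum
        elif op == "J":               # if z = 0, fall into the next "goto";
            if z != 0:                # otherwise skip over it
                pc += 1
        elif op[0] == "G":            # ("G", addr): goto addr
            pc = op[1]
            continue
        pc += 1
    return z, mem

# Store 2 into register 3, then load it back through z:
prog = ["A", "A", "A", "S",      # z = 3, then n = 3
        "Z", "A", "A", "N",      # z = 2, register 3 gets 2
        "Z", "A", "A", "A", "L"] # z = 3, then z = [register 3] = 2
ram0(prog, {})                   # returns (2, {3: 2})
```

Note how the example must rebuild every address in z by successive "A" steps – the cost of the 0-parameter atomization discussed below.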

Primarily for reference – this is a RAM model, not a counter-machine model – the following is the Schönhage RAM0 instruction set:

Again, the above instruction set is for a random-access machine, a RAM – a counter machine with indirect addressing; instruction "N" allows for indirect storage of the accumulator, and instruction "L" allows for indirect load of the accumulator.

While peculiar, Schönhage's model shows how the conventional counter-machine's "register-to-register" or "read-modify-write" instruction set can be atomized to its simplest 0-parameter form. <!-- One possible instruction set (now using more conventional mnemonics) might be the following set 1A through 8. This set is not minimal – we can dispense with at least one instruction. But the arithmetic operations are "symmetric" in the sense that what we do to A (e.g. 1A, 2A, 3A) we can do to N (e.g. 1N, 2N, 3N), with the exception of 8 and 9 which use the contents of N to address the register that provides the datum to A (LDAN) or accepts the datum from A (STAN):

Shepherdson-Sturgis (1963) have shown how similar instruction sets, expanded with "convenience instructions" such as "CLRA" and "CLRN", are possible. The very convenient instruction "LDN k" – LoaD N with "immediate" constant k (actually a register address) obtained from the instruction – is useful when the number of registers is known and bounded to less than some kmax.

As discussed above, the CPYAN instruction is required for indirect addressing.

If the jump-to address were in its own register (or even in N) then the jump-to address/parameter could be dispensed with. But all of this atomization comes at a significant cost – the program must generate a register-address by successive INCN instructions, or a register-address by successive "INC_Jaddress", for example. -->