Talk:Turing machine/Archive 1

Old discussions with no heading
I added the formal definition of the Turing machine, because I felt it was missing. Such a definition exists for the pushdown automaton and I think it is important to have it for the Turing machine as well. However, I am not satisfied with the blank symbol, because the bar should strike through the upper part of the "b", and not sit on top of it. If anyone knows how to display this symbol between "math" markups, please correct it. --Alexandre

I was thinking of adding a link to the general theorem which says that practically no non-trivial question about the behavior of Turing machines can be decided, but I forgot the name of it. Anybody knows? --AxelBoldt

Sure, it's Rice's Theorem. --AV

Two things are missing from the article but I don't have time right now:
 * The functions that Turing machines define are only partial because TM's need not halt.
 * Link to recursive functions, recursive and recursively enumerable languages. --AxelBoldt

Someone got Minsky's 7x4 confused with Rogozhin's 4x6, and so states and symbols were tangled in the text...I've tried to unsnarl it but I have trusted the content of the table, except for swapping the order to be more intuitive. By the way, Rogozhin said Raphael Robinson spotted a defect in Minsky's table, which both Rogozhin and Robinson remedied in different ways, still using only 7x4, so maybe this isn't the best example. Also I thought it worth clarifying the different definitions of UTM, based on Turing-completeness vs. simulating other Turing machines. None of that list of smallest ones directly simulates a Turing machine, though they can simulate a tag-system that simulates a Turing machine, or indeed a tag-system that simulates a Pentium. -Daniel.


 * Personally, I don't think the table of the universal machine is useful at all without knowing precisely what tag-systems are and how they are supposed to be presented as input to the machine. Maybe we should start Universal Turing machine and discuss all these matters there. AxelBoldt 22:17 Nov 16, 2002 (UTC)


 * A good idea. I've deleted the table, and will dig up the right references and make the new page soon, if I remember. -Daniel.


 * Okay. If I haven't done it by now I probably never will. Sorry. -Daniel.

Moved from article, from section "References and external links" (link not working - 404):

(by snoyes 22:16 Mar 30, 2003 (UTC))
 * A Turing machine implemented in the cellular automaton known as Conway's Game of Life: http://www.rendell.co.uk/gol/tm.htm

Something which may or may not deserve mentioning. A Turing machine as described in Turing's original paper actually never halts, so the halting problem for such machines is easily solvable. The problem shown to be unsolvable in that paper is thus not the halting problem but another, closely related problem. This point seems to be totally ignored in most web documentation on the issue, and I don't know whether or how to go about mentioning it here. Thoughts? -Daniel.


 * I don't understand Turing's paper sufficiently to check whether it halts or not... but in my compsci lectures the lecturer described a Turing-like machine with a HALT instruction, and proceeded from there to show X, Y and Z. An obvious equivalent in a machine with no HALT instruction is to set a flag READ-ONLY and modify all rules so that if the READ-ONLY flag is set then no modifications are made. Then you define the halting problem as asking: does the machine ever get the READ-ONLY flag set? Martin

I hope I didn't break this article with my incomprehensible cloud-shovelling strict version of Turing machines. I'm also not completely certain I didn't make some sort of trivial error somewhere. There's a guy at the McGill math department who claims he had an alternate model of computation at the same time (or so) as Turing did, and for that reason they use it a bit there. His claim to fame is that it's less cumbersome and tricky to write down than this Turing machine business, even if the Turing machine approach is closer in spirit to actual hardware. Loisel 08:59 6 Jul 2003 (UTC)


 * Loisel, it is getting a little dense towards the end. We're trying to write for as general an audience as is realistic here.  Do you think you could present it more accessibly?  I'd like to have a dash at it, but it might be a while until I get round to it.  By the way, there is already an article on complexity classes P and NP, so you might consider how much material on that subject should be in this article. --Robert Merkel 05:48, 7 Aug 2003 (UTC)


 * In fact, I have to say that earlier versions of the article are actually, to my way of thinking, more useful. I honestly am tempted to revert it.  --Robert Merkel 05:55, 7 Aug 2003 (UTC)

I have decided to revert to an earlier version of the article, on the basis that this version is actually readable, unlike the present version. Have a look at the two versions and compare. Any useful information from later versions not captured in this version should of course be added back in, but let's try and remember our audience when we write please. --Robert Merkel 09:11, 10 Aug 2003 (UTC)


 * I put back the paragraph about undecidable UTM problems and Rice's theorem. I think the current version (un-reverted) is both clearer and more informative than the reverted version. I hope you agree.  I do agree that the article is better without the giant mathematical gobbledygook section that you removed.  -Dominus 12:52, 10 Aug 2003 (UTC)


 * I completely agree. Good work. --Robert Merkel 00:16, 11 Aug 2003 (UTC)

My question is this. Must the Turing machine move either left or right every time?

Can there be a state which does not move the head?

28th Aug 2003


 * It's easy to prove that for every Turing machine that is allowed to leave its head stationary, there is an equivalent Turing machine, accepting the same language, which moves its head at every step. So it doesn't matter whether you require the head to move or not. -- Dominus 04:25, 28 Aug 2003 (UTC)
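The construction Dominus mentions is mechanical: each stay-put rule is split in two, routing through a fresh intermediate state that moves right and then immediately steps back left. A sketch in Python (the rule format and all names here are illustrative, not from the article):

```python
def eliminate_stay_moves(rules, alphabet):
    """Given rules mapping (state, symbol) -> (write, move, next_state), with
    move in {'L', 'R', 'S'}, return an equivalent table whose head moves at
    every step: each 'S' rule is replaced by a move right into a fresh
    intermediate state, which then steps back left unconditionally."""
    new_rules = {}
    for (state, sym), (write, move, nxt) in rules.items():
        if move != 'S':
            new_rules[(state, sym)] = (write, move, nxt)
        else:
            bounce = ('bounce', state, sym)          # fresh intermediate state
            new_rules[(state, sym)] = (write, 'R', bounce)
            for s in alphabet:                       # step back, changing nothing
                new_rules[(bounce, s)] = (s, 'L', nxt)
    return new_rules
```

The transformed machine accepts the same language; it visits the same cells with the same contents, merely taking one extra step per former stay-put transition.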

Old   Read  Wr.         New
St.   Sym.  Sym.  Mv.   St.
Eh? Old State, ?, ?, Move, New State? -- r3m0t 23:45, 7 Dec 2003 (UTC)


 * I'd presume "Old State, Read Symbol, Write/Written Symbol, Move, New State", but I'm not 100% sure, and have only skim-read the article. I agree that it needs clarifying, though - and perhaps the tables ought to be properly marked up, too, rather than just laying out text ASCII-art style?
 * In fact, if I understand it correctly, it would probably best be labelled "Old State, Symbol Read, Symbol to Write, Move Direction, New State" - at least in some kind of key/legend.
 * - IMSoP 00:08, 9 Dec 2003 (UTC)


 * I came here to make the same comment. This doesn't make any sense if you don't know what the abbreviations mean, and they're not obvious.  Needs rewriting by someone who understands what it means! HappyDog 01:28, 17 Jan 2004 (UTC)

TEXT OF DELETED ARTICLE Turing Machine with faults, failures and recovery (in case anyone wants to merge anything in) Tannin 11:05, 3 Mar 2004 (UTC)

''This page has been listed on Votes for deletion. Please see that page for justifications and discussion.''

Turing Machine with faults, failures and recovery
In contrast to the practical situation, an ordinary Turing Machine never fails. A Turing Machine with faults, failures and recovery is a (weakly) non-deterministic Turing Machine consisting of:
 * five semi-infinite (to the right) tapes:
 * Master Tape,
 * Synchro Tape,
 * Backup Tape,
 * Backup Synchro Tape,
 * User (Tester) Tape;
 * four controlling components (sets of rules):
 * Program,
 * Daemon,
 * Apparatus,
 * User.

Only the daemon has non-deterministic behavior.

Tapes, alphabets
The Master Tape corresponds to the tape of an ordinary deterministic Turing Machine. The head of the Synchro Tape is synchronized with the head of the Master Tape. The Backup Tape is used to back up Master Tape data. The Backup Synchro Tape is used to back up Synchro Tape data. The User Tape is used to perform pure/ideal computation (without faults and failures). The Master Tape, Backup Tape and User Tape use the same (user-defined) alphabet. The Synchro Tape and Backup Synchro Tape use the same special (embedded, user-independent) alphabet.

States
The Program contains:
 * user-defined states (initial, internal and halting states),
 * user-required check-point states (indirectly defined by the user),
 * an embedded (user-independent) shutting-down state.
Note 1. The user may mark some user-defined rules as check-points; check-point states are derived from these rules.
Note 2. The shutting-down state differs from the user-defined halting states.
The Daemon contains three embedded (user-independent) states: passive, active, aggressive. The Apparatus contains two embedded (user-independent) states: normal, emergency. The User contains two embedded (user-independent) states: tracking, stabilizing.

Rules
There are three sets of rules:
a) The daemon's set of non-deterministic rules. This set includes only daemon states.
b) A common set of deterministic rules. This set includes states of all controlling components (program, daemon, apparatus, user). In fact, this common set consists of two subsets:
 * program rules, including user-defined rules and user-required check-point rules;
 * outside rules, including deterministic daemon rules, apparatus rules, and user rules.
c) A daemon-defined set of rules (fault rules).

Transitions
Each transition step consists of two half-steps (tacts):
a) First tact. The daemon performs a transition, according to a non-deterministic rule, from the passive state to one of {passive, active, aggressive}.
b) Second tact. One of three kinds of transitions is performed:
 * a normal transition, if the daemon is in the passive state,
 * a fault transition, if the daemon is in the active state,
 * a failure transition, if the daemon is in the aggressive state.
Note. On the second tact the daemon always goes into the passive state.

Faults and failures
The difference between a fault and a failure is as follows.
 * Fault transition: an illegal (daemon-defined) program rule is applied, but the apparatus stays in the normal state and the program continues computation.
 * Failure transition: the apparatus goes into the emergency state and the program is unable to continue computation.

Diagram
This page really needs a diagram. Even a simple one would do. I'll take care of this in a while if no one else gets to it. Deco 00:58, 2 Oct 2004 (UTC)


 * I have one at, although it's not particularly attractive. If you do want to use it, I hereby release the illustration under the terms of the GFDL. -- Dominus 14:10, 2 Nov 2004 (UTC)


 * I like it, but I'd prefer something that shows that the tape is semi-infinite and evokes the finiteness of the control. I'm thinking of adding in the "toy train" metaphor somewhere, and a diagram to go with that. (The train metaphor is that the read/write head is represented by a toy train on a track that extends forever in one direction, and it decides without any outside control which way to go based on the character under it. Less clear, though, is how a train writes characters on the track.) Deco 23:52, 2 Nov 2004 (UTC)


 * According to the wikipedia article, the tape is infinite in both directions. -- Dominus 14:08, 3 Nov 2004 (UTC)


 * You're right. I usually see it treated with a semi-infinite tape, as in Sipser, but admittedly this complicates the mechanics a bit (you need to say what happens when it moves left from the leftmost cell). On the other hand it helps sometimes to be able to speak of the "beginning" of the tape, but you might as well label them with integers and call the cell labelled zero the "origin" of the tape. I'm unsure which is better for education. Deco 18:17, 3 Nov 2004 (UTC)

Were the actual machines ever widely produced pre-computer era?
I know Turing machines were used a lot in military and industrial sector applications before computers made their way through as the mainly used calculating machines; but were Turing machines ever produced by any company on a wider market for small-business clients?


 * Turing machines are a purely theoretical concept, not real machines. There were different calculating machines before the computer, used by the military as well as by bigger businesses. -- till we ☼ ☽ | Talk 08:55, 29 Nov 2004 (UTC)

The meaning of "state", and computers as DFAs
I'm curious about the use of the term "state" in this document.

It seems to me that two separate concepts of state are referred to in the document without distinguishing them. Consider that in the "informal description" it keeps the concept of state and memory distinct:

" 1. A tape which is divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendible to the left and to the right, i.e., the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written to before are assumed to be filled with the blank symbol. .... 3. A state register that stores the state of the Turing machine. The number of different states is always finite and there is one special start state with which the state register is initialized. "

However later on in the section "Comparison with real machines" a different "state" is discussed (albeit implicitly): "What is missed in this statement is that almost any particular program running on a particular machine and given a finite amount of input is in fact nothing but a deterministic finite automaton, since the machine it runs on can only be in finitely many states."

The problem here is that in the first case "state" is held to be distinct from memory. In the second case memory and state are merged. The truth is that a "real computer" is much closer to a turing machine (with memory distinct from state) than it is to a DFA (with no memory outside of state).
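The distinction is easy to see in a minimal simulator (a sketch with a hypothetical rule format, not anything from the article): the finite control state is a single variable drawn from a small fixed set, while the tape, the analogue of memory, is a separate structure that grows without bound.

```python
def run_tm(rules, tape_input, start, halt_states, blank='_', max_steps=10_000):
    """Simulate a one-tape Turing machine. `rules` maps (state, symbol) ->
    (write, move, next_state), with move in {'L', 'R'}."""
    tape = {i: s for i, s in enumerate(tape_input)}  # "memory": unbounded
    state, head = start, 0                           # finite control + head
    for _ in range(max_steps):
        if state in halt_states:
            return state, tape
        write, move, state = rules[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == 'R' else -1
    raise RuntimeError("step limit reached; the machine may not halt")

# A toy machine that flips every bit and halts at the first blank cell.
flip_rules = {('flip', '0'): ('1', 'R', 'flip'),
              ('flip', '1'): ('0', 'R', 'flip'),
              ('flip', '_'): ('_', 'R', 'done')}
```

Here `state` only ever holds 'flip' or 'done' (the state-register sense of the word), while `tape` can come to hold arbitrarily much data (the memory sense).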


 * I agree with you up to this last sentence. Two different senses of "state" have been used, both of which are common and established in their own areas, and this is potentially confusing. But "state" as the word is used in talking about real computers definitely includes the contents of memory; and "state" as the word is used in talking about Turing machines does not map neatly onto any feature of real computers. So I don't think it would be clear to say "The truth is that a 'real computer' has memory distinct from its state."


 * In a related issue, after mapping DFAs to Turing machines (where the "state" is not the main form of storage), and then mapping Turing machines to real computers, we are then asked to map real computers back onto DFAs. But this time, we're using "state" in the sense of the complete configuration of a computer's memory, registers, hard drives, and whatever...and mapping that back onto the DFA sense of "state". Meaning we end up with a DFA with two-to-the-trillionth-power different states or something, as you've mentioned below, and it seems like a technicality to call that "finite". However, this is indeed what people mean in saying a real computer is only as powerful as a DFA.

It seems to me that this mixed use in the discussion of any form of finite state automata is extremely confusing. Either a Turing machine has infinite states,


 * (in the sense of "state" used in talking about real computers, it does...)

or a "real computer" is a Turing machine with restricted memory. If we accept that a "real computer" is equivalent to a DFA then we shouldn't discuss Turing machines as though they were finite state machines. Also, in today's market of mass storage devices and backup tapes, IMO a modern computer has essentially infinite resources to work with. After all, we could keep adding new hard drives ad nauseam.


 * This is true and I've added it to the article.

I also think that the merging of state and memory is problematic, as it implies that the state of the machine is a holistic one comprising the combined state of every mass storage device and memory address that the computer can access. Since a single byte of RAM implies 256 distinct states, a computer with a few terabytes of mass storage available and a few gigabytes of memory would have more states than there are atoms in the universe. Add a network connection and the number of states that the machine can be in just explodes.
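The arithmetic behind that observation is easy to check: n bytes of storage yield 256**n distinct configurations, and even a few dozen bytes already exceed the commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# States representable by n bytes of storage: 256**n (each byte holds 256 values).
def storage_states(n_bytes):
    return 256 ** n_bytes

# 64 bytes give 256**64 == 2**512 configurations, already far beyond ~10**80.
```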


 * Right. And that is what they mean. It's a huge number but it is still finite and that's their point. I've tried to address the distance between that and any practical limitations, in my addition to the article.

Maybe this is just the ramblings of a confused man, but personally I feel that there is a problem here. One area where it gets confusing is that in other documents we see that one requires a Turing machine to process a type 0 language, and that DFAs and NFAs can only handle a type 3 language. However, if a "real computer" is really a DFA, then how can it process a type 0 language like C, which requires handling recursive constructs (which we know a DFA cannot)?


 * A real computer can't necessarily process a fifty petabyte C program. Not that there is any such thing AFAIK.


 * DanielCristofani 10:04, 16 May 2005 (UTC)

I think that however you slice it there is something misleading in calling a "real computer" a DFA.

(an anonymous contributor)

Deleted links
The two papers on the Turing Test clearly shouldn't be here. Paterson's Worms are more of a judgement call, but the analogy is less close than in the case of Langton's Ant, and I don't know that anyone has suggested they might be Turing-complete.

Deleted picture?
The edit at 05:45, 30 July 2005 by 202.56.193.222 just removed the artist's conception of a turing machine without any reason given. Is there anything wrong with it? Does anyone object if I put the picture back? --Lemontea 13:34, 30 July 2005 (UTC)
 * I posted to the anon user's empty talk page asking why he deleted the image. Let's wait a day and if we don't get a response, reintroduce it. Rmrfstar 14:20, 30 July 2005 (UTC)
 * I have just reintroduced the image into the article because we have heard no justification for its removal and some users have requested its reimplementation. -- Rmrfstar 21:25, 31 July 2005 (UTC)

Real Turing Machines
Hi,

We have a weird reference on the french Wikipedia (fr.wikipedia.org), about Turing Machines made of optical rays, or marbles...

I haven't dared to add it on this article, since I haven't found any other reference to this on the internet. However, if you are interested, or have time to contact the owners of such machines, here is the page :

http://fr.wikipedia.org/wiki/Machine_de_Turing#Une_machine_de_Turing_r.C3.A9elle

King mike

Smallest possible universal Turing machines
Right in the middle of the article it says:

[A] universal Turing machine can be fairly simple, using just a few states and a few symbols. For example, only 2 states are needed, since a 2×18 (meaning 2 states, 18 symbols) universal Turing machine is known. A complete list of the smallest known universal Turing machines is: 2×18, 3×10, 4×6, 5×5, 7×4, 10×3, 22×2.

Why? Why these particular numbers? If we can have 2 states and 18 symbols, why not 18 symbols and 2 states?

Jm546 21:05:05, 2005-08-19 (UTC)


 * Er, "2 states and 18 symbols" is "18 symbols and 2 states". But maybe you meant "18 states and 2 symbols", in which case the answer is that nobody has yet created a Turing machine with those parameters (i.e. a particular definition table with 18 states and 2 symbols) which is itself Turing complete (the criterion defined in that paragraph).


 * There is (or was, before I edited it) something odd about that sentence though, because it claims the list to be "complete" but defines it only as "of the smallest known" - so it's just a subset of an imaginary list of all known UTMs. If it said "a complete list of known [UTMs] where states multiplied by symbols is less than 44", that would make sense, but with no set upper limit, it can't really be "complete". [Maybe I'm being too theoretically-minded and pedantic here, but this is an article about Turing machines, after all! ;)] - IMSoP 21:29, 19 August 2005 (UTC)


 * Er, yes, on the states/symbols business. Jm546 22:18:02, 2005-08-19 (UTC)


 * So, does my explanation (and my slight rewording of the page) answer your original question? - IMSoP 22:36, 19 August 2005 (UTC)


 * Yes. My question arose out of failure to grasp the significance of "known".  Jm546 22:54:43, 2005-08-19 (UTC)


 * BTW. I always read "complete list of the smallest known UTMs" as "complete list of [sizes of] known UTMs such that no known UTM is unequivocally smaller [i.e., such that no known UTM has fewer states without also having more symbols, and vice versa]". DanielCristofani 09:58, 11 November 2005 (UTC)

UTM programs from other TM programs ?
Assuming we have a program for an S×E Turing machine (one that has S symbols and E states), how do we write the program that computes the same function for a UTM (e.g. 2×18)? This is particularly useful in the field of compilation. In my particular case, I want to compile Brainfuck programs for a UTM. The article states repeatedly that 'it can be shown possible', but doesn't show any possible transformation.

--King Mike 18:27, 10 November 2005 (UTC)


 * You mean, given the transition function and initial tape state of Turing machine A, how to construct an initial tape state for universal Turing machine B which will make B simulate A? (The answers will be isomorphic to the answers that A would produce, but not usually identical.) This depends heavily on the details of machine B. If you pick one of the tiny ones like 2x18 to be machine B, then it simulates a tag-system. So you will first have to transform Turing machine A into a tag-system; the construction for this is given in Marvin Minsky's book "Computation: finite and infinite machines". This is out of print but still in libraries. Then you will have to transform that tag-system into an input tape for your chosen machine B, using a construction specified by the designer of Turing machine B, which in many cases is Yurii Rogozhin. You may want to dig up this article:


 * Yu.Rogozhin, Small Universal Turing Machines. Theoretical Computer Science, vol.168-2, 1996, 215-240


 * If, on the other hand, you use a larger UTM that directly simulates another Turing machine, then the tape can be constructed in one step according to the instructions of the person who wrote that UTM. The book by Minsky, mentioned above, has more direct UTMs in it as well, which are more likely to run at reasonable speeds.
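For the first leg of the tag-system route, the tag-system step itself is simple to simulate; the hard parts are the encodings on either side. A minimal m-tag-system interpreter in Python (the production rules used in the test below are toy examples, not Rogozhin's actual encoding):

```python
def run_tag_system(m, productions, word, max_steps=10_000):
    """Simulate an m-tag system: read the first symbol, append its production
    to the end of the word, then delete the first m symbols. Halt when the
    word is shorter than m or the first symbol has no production."""
    word = list(word)
    for _ in range(max_steps):
        if len(word) < m or word[0] not in productions:
            return ''.join(word)
        word.extend(productions[word[0]])  # append production of first symbol
        del word[:m]                       # delete the first m symbols
    raise RuntimeError("step limit reached; the tag system may not halt")
```

For example, the 2-tag system with the single production a -> b takes "aaa" to "ab" to "b", where it halts because "b" has no production.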


 * Incidentally, I don't recall seeing a transformation from brainfuck programs to any kind of Turing machine, though it should not be hard to do... DanielCristofani 10:25, 11 November 2005 (UTC)


 * Generally speaking, the encoding interpreted by most UTMs is extremely low-level and difficult to work with directly. You'd have to first build up a number of primitive operations such as adding/subtracting one, indexing, and so on. It can be done, but most automata books that I've seen demonstrate maybe three small Turing machines before things get so complex that it's no longer worthwhile.
 * Another route you could take is to define a Turing machine which interprets a Brainfuck program. Then, by writing a TM simulator in Brainfuck, you demonstrate that your Turing machine is actually a UTM (since it can simulate any Turing machine by feeding that machine to the TM-simulator Brainfuck program which it's simulating). Deco 19:57, 11 November 2005 (UTC)
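The first half of Deco's second route, a machine that interprets Brainfuck, is easy to prototype in a conventional language before attempting it as a Turing machine table. A compact interpreter sketch in Python (8-bit wrapping cells and an unbounded tape are assumptions; the language leaves these choices open):

```python
def run_brainfuck(code, input_bytes=b''):
    """Interpret a Brainfuck program over an unbounded tape of 8-bit cells."""
    jumps, stack = {}, []
    for i, c in enumerate(code):               # pre-match the brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out, inp = {}, 0, 0, [], list(input_bytes)
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape.get(ptr, 0) + 1) % 256
        elif c == '-': tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif c == '.': out.append(tape.get(ptr, 0))
        elif c == ',': tape[ptr] = inp.pop(0) if inp else 0
        elif c == '[' and tape.get(ptr, 0) == 0: pc = jumps[pc]
        elif c == ']' and tape.get(ptr, 0) != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)
```

Transcribing this loop into a Turing machine transition table is the laborious part; the interpreter itself is only a handful of cases.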


 * Thanks, Daniel, for the help you gave me, now and in the past, in my search for a better comprehension of all this Turing machine thing. I'll look at these references and an article may appear on Wikipedia as soon as I get the point.
 * Deco, that's not what I want to do: I don't want anything to be either efficient or easy, I just want to have fun :-) --King Mike 19:54, 21 November 2005 (UTC)

Notes on changes, December 2005
Turing did not describe a stack of sheets of paper in his article. What he said was:

"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book. In elementary arithmetic the two-dimensional character of the paper is sometimes used. But such a use is always avoidable, and I think that it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, i.e. on a tape divided into squares."

Further, the first paragraph of the Wikipedia article made it sound like "the Turing machine was developed later from Turing's article, which only talked about pen-and-paper computation"; where in fact Turing's article is all about Turing machines and only mentions pen-and-paper computation as an analogy in order to try to convince people that a Turing machine can do anything that can be done with pen and paper.

Rewriting the first paragraph was a tricky balancing act, because of the nature of Turing machines; they have an unusual position, since

1. it was necessary that it be obviously possible to build them, since they are meant as an example of what can in fact be computed mechanically; but

2. it was not important to actually build them, nor did anyone at the time think it would be a good idea to do so.

Physical artifacts which are designed to remain merely possible are comparatively rare; I hope the present wording makes their status clear.

I decided not to put the title of Turing's paper in the first paragraph since it's long and non-crucial.

I made some other tweaks for style and/or accuracy at the same time; discussion is welcome. DanielCristofani 06:01, 17 December 2005 (UTC)

Assessment comment
Substituted at 20:55, 4 May 2016 (UTC)