Talk:Algorithm/Archive 2

Revised history section -- background information
Please observe that "the archives" above contain extensive "history" background information for any writer willing to take on the task of reworking the history. The move to "archive" has rendered this work invisible. If no writers or editors can get to this I will eventually.

Frankly I don't think this "cleanup" is a good idea unless a way exists to move an index of the archive to this discussion page. wvbaileyWvbailey 03:59, 11 July 2006 (UTC)


 * Archiving is standard procedure for a long talk page. You can still get to it with the link.  --Ideogram 04:03, 11 July 2006 (UTC)

"Isomorphic?"
G'day, I've got basic maths and computing skills and I don't know exactly what is meant by "isomorphic" in this context... for lay people a clarification would be great! ben 16:08, 11 July 2006 (UTC)

Additional content
Somebody should add a section about the fact that algorithm is not a well defined term. Everybody knows what it means to say that two computer programs are the same, or are different, as programs. But there is no accepted formal definition of when two computer programs (possibly in different languages) implement the same algorithm. CMummert 20:35, 14 July 2006 (UTC)


 * I agree. And there's more: Some argue that the interpreting mind (or machine) is part of the algorithm. I had a dialog with Yuri Gurevich (he's at Microsoft as a senior fellow) re this issue: the definition of "algorithm". See the archived discussion section for my dialog re recent Uspensky-Gurevich papers. I believe that Gurevich leans toward the view that I lean toward -- that an algorithm must be expressed as a machine in operation (e.g. a Turing machine -- both table and tape -- or a mind), not just a list of instructions (i.e. a list of cookbook instructions, or the "Turing Table"), or even as a state machine (i.e. "Turing Table") in operation. But I'm not sure about exactly what Gurevich is saying; I'm still reading on this. I'd welcome your input if you're so inclined. wvbaileyWvbailey 00:46, 15 July 2006 (UTC)


 * Sorry, wrong references -- the first paper is Andreas Blass and Yuri Gurevich, Algorithms: A Quest for Absolute Definitions, Bulletin of the European Association for Theoretical Computer Science 81, 2003; and Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, vol. 1, no. 1, July 2000, pp. 71-111. Both papers can be found via Google at Gurevich's site at Microsoft, where he can be reached at gurevich@microsoft.com. And here is another paper: Yuri Gurevich, On Kolmogorov Machines and Related Issues, July 2000. The early work on this was by Kolmogorov and Uspensky. Sorry, I'm on vacation and am a bit discombobulated. wvbaileyWvbailey 05:10, 15 July 2006 (UTC)

merger from Church-Turing thesis
Oppose merge. I agree with Cornflake pirate that there is some duplication. I have started a discussion at Turing machine about how to clean up that page. That cleanup will require links to Algorithm and Church-Turing thesis. I don't think it makes sense to merge Algorithm and Church-Turing thesis, because that merges two out of three important ideas, viz:
 * The Church-Turing thesis says that any algorithm can be implemented on a Turing machine.

Each of those terms (algorithm, Turing machine, and the Church-Turing thesis) has enough content to justify an article. In particular, the history of Church's thesis and its various forms is not relevant to the Algorithm article. The first section of the current Church-Turing thesis article could be moved into Algorithm and drastically shortened in place. CMummert 14:12, 16 July 2006 (UTC)
 * Yes that's what I intended, not the entire article. :S --Cornflake pirate 01:09, 17 July 2006 (UTC)


 * Indeed there is some duplication but I don't think there's any potential for an actual merger. The Church-Turing thesis article just needs to be rewritten to take full advantage of the existing articles for algorithm and Turing machine. Pascal.Tesson 14:19, 16 July 2006 (UTC)


 * Oppose Merge. The topic clearly deserves its own article.  No trees are harmed in repeating some content in two different contexts.  Jon Awbrey 15:25, 16 July 2006 (UTC)


 * Strongly oppose merge. Algorithm is very different from Church-Turing Thesis. See the history section in the archived discussion for why this is true, plus my brief discussion of the work of Kolmogorov-Uspensky and Andreas Blass-Yuri Gurevich immediately above. The Church-Turing Thesis is a historical fact, the result of Kleene's formulation of the work of Church-Turing-Emil Post; the definition of algorithm is in flux. wvbaileyWvbailey 16:03, 16 July 2006 (UTC)


 * Oppose merge per other opposing comments. --Allan McInnes (talk) 16:58, 16 July 2006 (UTC)
 * Support merge. I withdraw my opposition now that the merge proposal has been more clearly defined, i.e. that it covers just the Church-Turing thesis section. --Allan McInnes (talk) 01:20, 17 July 2006 (UTC)


 * Oppose merge. This article should not have the detail needed in the other. -R. S. Shaw 21:18, 16 July 2006 (UTC)


 * Oppose merge. I too agree with the above. Cedars 00:17, 17 July 2006 (UTC)


 * I left a comment for the user who suggested the merge. Nobody, not even that user, has spoken in favor of the merge, so for now let's agree it isn't happening soon. CMummert 00:28, 17 July 2006 (UTC)


 * Comment It's not a merge of the entire article, just the "Algorithms" section of the Church-Turing page, which clearly does not belong in that article; however, the information there does not seem to be present in this article. --Cornflake pirate 01:03, 17 July 2006 (UTC)

Proposal to merge just "algorithms" section of Church-Turing page

 * Opposed until certain editors of the algorithm page allow the "algorithms" section of the Church-Turing thesis page to appear here in substantially the same form as it appears on that page. Certain person(s) reverted my work here because it had "too much quotation from an original source" and they feared copyright problems. (They are clearly confused). To avoid a reversion war I moved it there because it is necessary for an understanding of the Church-Turing thesis. After I have time to rework the history section (etc.) on the algorithm page with some of the material, then I agree it can be removed. But not until the algorithm page is reworked. wvbaileyWvbailey 01:30, 17 July 2006 (UTC)
 * Comment I've been bold and done the merge myself. I'll do a bit of work on it and if there's a reversion war then we can deal with that if it happens. :) --Cornflake pirate 01:35, 17 July 2006 (UTC)
 * Actually I did revert it. It feels like you've just copied a section across and made little effort to integrate it with the article. Much of what was copied seemed to be repeated later in the article section "Formalization of algorithms". This was a good article before the merge; though it can be improved, it will take some effort to improve it and add new content. Maybe a subsection "Knuth's definition" in "Formalization of algorithms" would work, or even a sub-article? Cedars 04:50, 17 July 2006 (UTC)


 * The WP:MM page says that just dumping the information, and leaving it for others to clean up, is fine. I don't think that reverting it helped anything. I moved the content to a separate section, which is where it should have been all along. I also edited it down some. The stuff by Kleene (commented out now) belongs back on the Church-Turing thesis page. I am not sure that I favor the large number of footnotes; I think a single reference is enough for a contiguous series of quotes. Another subsection should be added emphasizing that there is not a formal definition of algorithm, and that moreover it is a subjective question as to whether pairs of programs implement the same algorithm. (This replaces my previous comment). CMummert 12:57, 17 July 2006 (UTC)


 * I agree that we could get away with fewer footnotes. I added them in the first place because they reflected the existing in-text references in the merged text, with the intent of trimming them down later if User:Wvbailey was amenable to doing so (he had previously stated that the section should be in "substantially the same form"). --Allan McInnes (talk) 13:59, 17 July 2006 (UTC)


 * The new section re Knuth and edits look good to me. Since the reference to Knuth's work is already on the page only an in-text reference such as (Knuth 1973 p. 1-9) is required; a footnote isn't necessary. Also, the Knuth reference is annotated to precisely direct the reader to those pages. But CMummert's observation re "no formal definition of algorithm" is quite important and needs to be added perhaps just below. (The "history" section -- actually the history of modern attempts to define "algorithm" -- is so large it may have to go on a separate page. Title suggestions, anyone? But Knuth's is the most "famous" and well-known and should stay here.) wvbaileyWvbailey 16:08, 17 July 2006 (UTC)

Result
In this edit, the section Church–Turing thesis disappeared. Anthony Appleyard 09:30, 28 January 2007 (UTC)

Formal characterization of Knuth
I moved this from the main article
 * He goes on to provide a "brief indication by which the concept of algorithm can be firmly grounded in terms of mathematical set theory." (Knuth p. 7). He defines a computational method as a quadruple (Q, I, Ω, f). "The four quantities Q, I, Ω, f are intended to represent respectively the states of the computation, the input, the output, and the computational rule." Q contains subsets I and Ω; f is a function from Q into itself. (He places further restrictions on his variables and functions; in a following paragraph he addresses the issue of effectiveness. For details see Knuth pp. 7-9 and his reference to Markov).
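For reference, the quadruple summarized above can be written out in display form (a paraphrase of Knuth's definition, with the side conditions abbreviated):

```latex
% Knuth's "computational method" (paraphrased): a quadruple in which
% the output states are exactly the fixed points of f.
C = (Q,\; I,\; \Omega,\; f), \qquad
I \subseteq Q, \quad \Omega \subseteq Q, \quad
f : Q \to Q, \quad f(q) = q \ \text{ for all } q \in \Omega .
% Each input x \in I determines a computational sequence
x_0 = x, \qquad x_{k+1} = f(x_k) \quad (k \ge 0),
% which terminates in k steps if k is the smallest index with x_k \in \Omega.
```

An algorithm, in this scheme, is a computational method whose sequence terminates for every x in I; effectiveness is the further restriction Knuth then discusses.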

I don't doubt that Knuth said it, but it doesn't fit with the general consensus that there is no mathematically rigorous definition of algorithm. That is, his proposed mathematical definition is not accepted as a correct formal characterization. It doesn't serve the article to pretend that such a definition has been given. If anything, the brief characterization here makes it look like Knuth has just redefined a Turing machine. CMummert 12:24, 25 July 2006 (UTC)

I wondered when this would disappear from the main article and who would 'disappear' it -- I figured it would be gone by morning. As I wrote it I was thinking that really, the article is not too good. "Algorithm" is "locked up/swirls around" with "lambda-calculus/recursion" and "Turing machines" and "Post calculations" and "Kolmogorov machines" etc. etc. (i.e. "foundations[?]"), with "logicism versus intuitionism versus formalism" i.e. the problem of "constructability/demonstrability" of a number via an "algorithm", and with my question: is "an algorithm" just a set of cookbook-like instructions (cf Knuth's discussion page 4) or is it a Turing-equivalent-machine/man-as-machine-doing-recursion in operation (cf my dialog with gurevich@microsoft.com -- he says there aren't any good texts on the issue of what an algorithm really is -- I'm still pursuing this). All this ambiguity -- and some of the current opinions -- needs to be emphasized in the article (with in-text citations to the authors of the opinions). Am working on this slowly. Advice is welcome.wvbaileyWvbailey 17:26, 25 July 2006 (UTC)


 * I agree there is no consensus on how to formalize the natural language meaning of the word algorithm. It is easy to tell whether two programs are the same - you just compare them. It is not easy to tell whether two programs implement the same algorithm; the equivalence relation of same algorithm is coarser than the relation of same program. But the relation same partial computable function is coarser still, because for example there are many different sorting algorithms. So a formal definition of algorithm cannot identify it with its result (the computable function) or with the specific program that implements it. Knuth's characterization illustrates properties that an algorithm must have without giving a definition of what an algorithm is. CMummert 17:42, 25 July 2006 (UTC)
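The equivalence relations just described (same program, same algorithm, same computable function) can be made concrete with a standard example, sketched here in Python: two textbook sorting routines that compute the identical function yet are plainly different algorithms.

```python
# Two routines that compute the SAME function (a sorted copy of a list)
# via clearly DIFFERENT algorithms -- and, of course, different programs.

def insertion_sort(xs):
    """O(n^2): insert each element into its place in a sorted prefix."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """O(n log n): split in half, recurse, then merge the halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

Any formal definition of algorithm has to sit between the two extremes: finer than extensional equality of functions, coarser than textual equality of programs.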

It's interesting to note that you and I are waltzing back and forth between the "algorithm" and "Post-Turing machine/Turing machine" pages. Not too long ago I was working on the "intuitionism" page. Clearly the first two are tightly woven, and the "intuitionism" page sensitized me to the idea of a 'constructive demonstration', i.e. "an algorithm". wvbailey

I've probably asked you the following before, if so, sorry 'bout that ... Do you have an opinion re whether or not an algorithm is "a machine/man" in operation, or whether it is just a list of instructions? Do you know of any good writing on this? wvbailey

Knuth seems to waffle, as does everyone else I've encountered. He says: "Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem," (p. 4) the algorithm has the five features: finiteness, definiteness, input, output, effectiveness. But he always slips into a use of active verbs, such as [my boldface] "the algorithm ... terminate[s]" (p. 4), "... before the algorithm begins" (p. 5), "the reader is supposed to understand exactly what it means to divide m by n and what the remainder is" (p. 5 -- i.e. if "reader" is to hand-execute Knuth example algorithm, "reader" must be able to read the instructions and "do the math" -- your point some days ago), etc. wvbailey

Thus we are led to believe that "an algorithm" isn't just a list of instructions -- that is merely the finite state table giving the "transformation rules". The algorithm is also "the tape" with an instance of input on it, and blank tape for "the output". And we know that "tape" is required to do e.g. arbitrary multiplication -- a "very finite" machine (Knuth, p. 6), i.e. a finite-state machine, won't do the job. wvbailey

Everyone I've run into (Post, Turing, Kleene, Rosser, Knuth, Kolmogorov, Gurevich, Minsky) suggests that an algorithm implies the existence of an "interpreter" aka "device or man" capable of, and actually in the act of, converting "the list of instructions" into physical movement (and I mean actual real-world physics here -- motions of parts, mechanical or human -- leading to "an output" in material form). Without this "the algorithm" is just paper -- mulch for the garden. The algorithm is my black boxes -- in goes the tape with "the input", out comes the answer. Inside the box is "the algorithm", aka "Turing machine under power" or aka "little man with fast fingers eagerly anticipating the next calculation". There must be some modern thinking re the definition. But Gurevich says not too much. Ideas? wvbaileyWvbailey 19:26, 25 July 2006 (UTC)
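The "table plus tape plus interpreter in operation" picture above can be sketched in a few lines. The machine below is a toy invented for illustration (unary increment); the point is that `table` alone is inert data -- mulch for the garden -- until `run`, the interpreter, animates it against a tape.

```python
# The "Turing table": inert data, a mere list of transformation rules.
table = {
    # (state, scanned symbol) -> (symbol to write, head move, next state)
    ("scan", "1"): ("1", +1, "scan"),   # skip over existing marks
    ("scan", "_"): ("1", +1, "halt"),   # first blank: write a mark, stop
}

def run(table, tape, state="scan", pos=0, fuel=10_000):
    """The 'interpreter': only here does the table become an algorithm
    in operation, converting rules into actual tape motions."""
    tape = dict(enumerate(tape))            # sparse tape; '_' means blank
    for _ in range(fuel):                   # fuel bound: halting not assumed
        if state == "halt":
            break
        write, move, state = table[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")
```

Running `run(table, "111")` appends one mark to the unary numeral, i.e. computes n -> n+1; neither the table nor the tape does so by itself.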


 * My way of thinking is that the word algorithm is an important natural language term that is not particularly amenable to formalization. Nevertheless, there are several people (including Gurevich) who are doing interesting work that makes some progress towards the goal of a formal theory of algorithms. CMummert 02:42, 26 July 2006 (UTC)

Formalisation of algorithms - proposed change
At the moment the "Formalisation of algorithms" section is broken down into:
 * 1) Knuth's characterization
 * 2) Expressing algorithms
 * 3) Implementation

This fails to show the different formalisations of an algorithm and focuses heavily on Knuth's characterisation. I think this section should focus on the different bodies of mathematics that formalise what an algorithm is. Something like:
 * 1) Turing machines (and the Knuth material)
 * 2) Recursive function theory
 * 3) Logic

These are not just types of programming language. Both lambda calculus and formal logic can be used to define what a computable function (and therefore an algorithm) is.

Is there anyone for/against this approach? Pgr94 11:39, 26 July 2006 (UTC)


 * I am against it. An algorithm is not the same thing as a computable function (because there are many different sorting algorithms that all do the same thing) and is not the same as a Turing machine program (because a particular algorithm, such as quicksort, is the same algorithm even when implemented in different programs). There is no formalization of algorithm (but there is a formalization of computable functions).  The point of the Church-Turing thesis is that any algorithm (in the informal sense) can be programmed into a Turing machine (the formal sense).  As a separate point, recursive function theory probably means computability theory which overlaps with Turing machines. CMummert 13:29, 26 July 2006 (UTC)


 * I am against it, at least until we have better references and attribution practices. I believe I have added all but one of the references, and there are more I haven't had time to investigate (Markov, Kolmogorov, any others?). The Knuth entry is NOT ambiguous as to who said it; although I agree that the fact that it is an opinion may not be so clear (CMummert's complaint back a few weeks ago), it's just fine for the time being. When I first encountered this page someone had cribbed the Stanford Encyclopedia without attribution, and then someone else observed this and it was removed (it was Knuth's description in disguise). I do agree that more authors' opinions could be added, and I suggest that more should be added. For example, I could quote Gurevich (and why not? It's in the literature! It may not be the truth, the whole truth, and nothing but the truth (e.g. CMummert seems to disagree with the following), but the quotes are facts -- as in "attributable statements"):
 * "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine" (Gurevich 2000 p.1)
 * "...according to Savage [1987], an algorithm is a computational process defined by a Turing machine."(Gurevich 2000 p.3)
 * "...every sequential algorithm can be step-for-step simulated by an appropriate sequential ASM [Abstract State Machine]"(Gurevich 2000 p. 2)
 * Pretty bold stuff: ANY algorithm! Gödel came to the same conclusion (see my history stuff in the archived discussion). To CMummert's point immediately above, for a discussion of the distinctions between the Church-Turing thesis versus the Church Thesis versus the Turing Thesis -- in the context of "algorithm" -- I recommend Gurevich 2003. wvbaileyWvbailey 17:07, 26 July 2006 (UTC)


 * I have some knowledge of the work of Gurevich and Blass on abstract state machines. This work is interesting because it may lead to a commonly accepted definition of what an algorithm is. But I do not think that it has reached that status yet. Gurevich's claim that any sequential algorithm can be simulated step by step by an ASM follows from his definition of sequential algorithm, which corresponds very closely to the definition of an ASM (so it is not so surprising). I am agnostic about the name for Church's thesis. CMummert 23:15, 26 July 2006 (UTC)


 * I agree with you (no reflection on Gurevich's work in particular, but I don't believe the folks working on this have bitten "the really Big Bullet" -- really Big Bullet TBD). But his "modern" viewpoint might be a nice addition to the article. wvbaileyWvbailey 00:47, 27 July 2006 (UTC)


 * For CMummert (and anyone else so inclined): I'd appreciate your help with the Gurevich part. I am confused as to exactly what is new in the point of view he represents (which I can't follow, except that he's got the advantage of more models (Kolmogorov machines, pointer machines -- B.T.W. I created a new page for that one but it's just a skeleton)). Lemme know. Thanks. wvbaileyWvbailey 20:57, 28 July 2006 (UTC)

complaints about algorithm article
Copied from "to do"
 * move most of the information added in history to a page like "history of computing" and keep absolutely related history to algorithm here.
 * give references to other pages and do not write more than simple paragraph about all those different formalisations here.

This article used to explain nicely what an algorithm is, give its brief history, and list various algorithms and fundamentals. It was a good reference for people looking for information on algorithms, or for that matter for non computer scientists. Honestly, it is boring and prosaic now and scares away casual readers.


 * Problem is:


 * 1) nobody knows, and therefore nobody can agree on, what an algorithm is ... so,


 * 2) any definition is difficult... and must be cited as an opinion... not fact


 * 3) people have their opinions about the "best definition", they have their favorites


 * One person didn't like so much emphasis on Knuth, so we add more authors' opinions...and now the reader complains there's too much definition,


 * 4) Knuth's definition was put here because it is the most common, and it was moved from another page to where it belongs,


 * 5) another wants more history -- and so we add what we believe is germane -- algorithm has to do with symbols and rules and mechanical devices -- and then a reader complains it's too much, and so on and so on...

Eventually what we can do is reduce the page by the use of sub-articles, i.e. "Definitions of algorithm", "History of algorithm", whatever. If there were a common definition for "algorithm", then the article could be simplified. But all we will see will be more complaints. About the only summary text that can be written is the following; the rest is innuendo [aka opinion]:


 * "There is no agreed-to definition of algorithm. Over the last 200 years the definition has become more complicated and detailed as researchers have tried to pin down the term. Indeed there may be more than one type of "algorithm". But most agree that algorithm has something to do with defining processes for the creation of ["output"] integers from other "input" integers -- "input parameters" arbitrary and infinite in extent, or limited in extent but still variable -- by the manipulation of distinguishable symbols [numbers] with finite collections of rules that a man can do with paper and pencil. The most common schemes for manipulation are: (1) the λ-calculus, (2) recursion, or (3) the Turing machine or its Turing equivalents -- i.e. the computer. The hypothesized equivalence of these formulations is called the Church-Turing thesis. See History of algorithm for what led up to, and steps in, the effort to better understand and define the notion of "algorithm", and Definitions of algorithm for various, more-precise formulations [aka opinions]."

wvbaileyWvbailey 14:24, 24 August 2006 (UTC)

Yes, please let us do the subarticles as mentioned by you.

1. IMO, algorithm is more of a computer-science term, and Knuth's definition makes more sense. The other formalisations did not use the term algorithm initially, and tried instead to propose a computing model or mathematical theorem. Now we all know all of these formalisations are the same as, or similar to, the concept of algorithm. But that does not justify their lengthy details and history in this article. This article should not be about researching the right definition of algorithm or proving that all computing models are the same as algorithm, but should be about the concept of algorithm, the properties of algorithms, and the brief background.


 * I agree that the article is too long and too diffuse. I have created a new page Algorithm characterizations. Other suggested names are welcome.


 * Thanks a lot for doing this! It is cleaner. The characterizations article is looking very nice on its own. mlpkr Mlpkr 10:41, 9 September 2006 (UTC)


 * Many researchers -- Markov for instance -- actually titled their work "definition of algorithms" or the like. This is serious stuff, this definition of algorithm. The reader should be shown just how serious this is for mathematical foundations. See the long discussions on the Turing machine page and this page. It also impacts Intuitionism, Finitism, Hilbert's Problems, Constructivism, and problems in Philosophy of Mind. wvabailey Wvbailey 19:10, 8 September 2006 (UTC)

Why do you think there is no agreed definition of algorithm? You may say different people define algorithm with different terminology, but all of them mean a step-by-step mechanical procedure to get outputs from given inputs. The wordings are different, but the meaning is the same.


 * This is your (very) informal definition and I don't disagree. But as you look into this deeper the problem gets difficult. The reader needs to know that this is a difficult topic: that there are serious issues having to do with the notions of "quality -- goodness, best", with the capabilities and knowledge of the machine/person, with whether or not the algorithm should come with its own "embedded" instructions for use. But in what language should the instructions be written? What symbols should be used?


 * Think about passing parameters to a subroutine. In what form will the parameters appear (string, constant, matrix, decimal, binary, unary?)? Where (in what register, memory, whatever) will they be passed? In what form will they be returned? Where will they be returned? This looks easy when you're doing C programming -- C programmers are so spoiled ... they've forgotten that low-level calls may be passing through registers, and both caller and subroutine have to agree on the location and the format of the parameters in both directions, etc. This becomes very, very important when you drop down to the Turing machine or register-machine level (or the assembly-language level). And here is where the definition problems start.
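A toy sketch of the point above, with invented names throughout: at the register-machine level the caller and the subroutine must agree beforehand on both the location (which register) and the format (here, unary strings) of the parameters, in both directions. Nothing enforces the agreement; it is simply part of the (informal) specification.

```python
# A toy "register machine" calling convention (all names invented):
# arguments go in r1 and r2 as unary strings of '1'; the result comes
# back in r0, in the same format. Caller and callee must both honor this.

regs = {"r0": "", "r1": "", "r2": ""}

def encode(n):
    """Agreed parameter format: the unary numeral for n."""
    return "1" * n

def decode(s):
    """Inverse of encode: count the marks."""
    return len(s)

def add_subroutine(regs):
    """Callee: ASSUMES args in r1/r2, leaves the sum in r0.
    In unary, addition is just concatenation of the mark strings."""
    regs["r0"] = regs["r1"] + regs["r2"]

def caller(m, n):
    """Caller: must use the same registers and format, or the
    'answer' in r0 is garbage."""
    regs["r1"], regs["r2"] = encode(m), encode(n)
    add_subroutine(regs)
    return decode(regs["r0"])
```

A callee that instead expected binary strings in r2 and r3 would silently compute nonsense on the same inputs, which is exactly the definitional gap being described.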


 * You may touch on this lightly in one paragraph somewhere: the expression of an algorithm varies with the situation and application. It may be expressed just as a mathematical technique, or coded in pseudocode, expressed in flow charts, running as software, or implemented in a hardware chip. Its inputs and outputs can be integers, strings or any other data types, or even bits or bytes, depending on the context where the algorithm is being applied or used. mlpkr Mlpkr 10:41, 9 September 2006 (UTC)


 * I have given some consideration to giving such an example to make the point, but haven't decided whether or not it's original research. It is an interesting topic. One author talks about three levels of definition: a natural-language, a semi-formal, and a purely formal definition. But in what language? You have to specify the machine and its language/syntax, parameters, etc., down to the tiniest of details.


 * Just the fact that Blass and Gurevich are working on "the definition of algorithm" at Microsoft (I've been in touch with Gurevich) and there are other researchers as well indicates that this is a non-trivial topic. wvbailey Wvbailey 19:10, 8 September 2006 (UTC)


 * This is completely wrong. The poster that you are replying to is entirely correct. In thousands of research papers an algorithm is considered to be such a basic part of the field of computing that it is either mentioned in passing, or defined in a couple of sentences. There is no great controversy about what an algorithm is anywhere but on this wikipedia page, which seems to have confused the notion of a program with that of an algorithm. An algorithm is a mechanical procedure for computing an output from a given input. There are no issues of termination (and that entire section should be deleted) as algorithms are not programs. They are specifications of programs. There are no issues about the language that algorithms are specified in (as pseudocode is normally used); again this is a confusion with programs. The fact that Kleene may have used the term algorithm in the 50s is not a good indication that the programs that he was describing are what we think of as algorithms in the field. I've quickly looked up Blass's research and it is not indicative of the common usage of Algorithm.


 * As someone with a PhD in this field I think the article is a terrible mess. It has tried to be complete and accurate, rather than readable and interesting. In the process it has failed to be any of these adjectives. To restore this article to its former quality it is necessary to stop the endless circling around whether or not we know what an algorithm is. We do. It does not require a citation, because as I've pointed out the definition is unchanged across hundreds of textbooks and thousands of papers. It is a common foundation in the field, and these attempts to find a definite citation to back up a single wording of the definition have destroyed this article. Wikipedia is supposed to be an encyclopedia, not a CS textbook. If something is in common usage then it should be stated as such and defined broadly. Amoss 13:31, 11 July 2007 (UTC)


 * Of course you are entitled to your opinion, but that opinion is not necessarily correct. But please speak only for yourself: not "we do", but rather "I, Amoss, know what an algorithm is". And perhaps you do: if you've published something in this regard please present where it can be found and then someone can look at it; perhaps it should go on the algorithm characterizations page. If I can find it in the library then it is as legit as the next guy's opinion. Please read algorithm characterizations; you will see a series of different views from Kleene forward. What this article is trying to do is survey the field. If you have something positive and constructive to add please do so, but please do so with in-line citations and complete references -- citations are mandatory and without them reversion is certain. In particular, if you can speak to the Gurevich point of view in an expert manner, that point of view needs expert assistance. As do various sections of the article (patents, etc.). The references quoted at algorithm characterizations are quite explicit as to various points of view (and contrary to what you've written above there is a lot of debate). Your point of view has merit, but it is a point of view; it is not fact. wvbaileyWvbailey 03:00, 12 July 2007 (UTC)


 * No, you have not addressed my comments. The concept of an algorithm is a well understood part of Computer Science. In peer-reviewed papers it is not necessary to define what an Algorithm is because there is a commonly understood definition. In trying to find citations for the definition of a basic term you are trying to hold Wikipedia to a higher standard than the original research that it is reporting upon. As this page covers the historical progression of the term, as well as its current meaning, it makes sense to refer to work from the 50s, before complexity theory and other developments, when the term had a more general meaning. It does not make sense to try and generalise the term to include its earliest meanings. (Almost) all of the definitions that you list on the characterizations page have a common core. The final entry about Blass is straying into original research. Their paper is an attempt to redefine the term algorithm, for their own agenda, and has no place in an encyclopedia entry on the subject. The distinction between an algorithmic process and a reactive process has been settled since the mid 80s when Pnueli wrote his classic survey paper on the subject. Having read the characterizations page I am worried that the compiler of the information does not understand the difference between a program and an algorithm. Furthermore, the juxtaposition of quotes confuses the issue because terminology has changed over time. At one time (the 50s) the terms algorithm and program and computation were interchangeable. This is no longer the case, and to treat it as such by taking older work out of context is to overcomplicate the subject.


 * So that we don't spend time running in circles. What kind of citation do you feel is relevant to show that the term Algorithm is a basic term? The previous layout of the page used Knuth as a basis. Given that Knuth wrote that material in the 80s, normally a citation to Knuth is considered enough to show that the term has a common usage. Second question, why is there a terrible section on termination? Apart from the early quotes on the characterizations page (which refer to programs) all of the definitions agree that Algorithms are finite in execution. Amoss 12:03, 12 July 2007 (UTC)


 * Further to my questions above I have some more points for you. While waiting for your reply I realised that the simplest way to show that Algorithm is a basic term would be to dig out introductory undergraduate lectures on the subject. I've compiled a bunch for you to check, all of which are definite that they are using an unambiguous accepted definition of Algorithm: [|One],[|Two], [|ThreeA], [|ThreeB], [|Four] and [|Five]. So to return to your point; it is not simply my opinion that there is an unambiguous commonly accepted definition of Algorithm; as you can see it is a widely taught fact. By trying to pull out papers as citations to make this argument you have missed the point that the term is so commonly used that it is only defined explicitly when it is being used in a nonstandard way.


 * I note that the top of the current page is correct, and uses the common definition of an algorithm. It is only the sections that have been rewritten in the body of the article that make this less clear by claiming there is some debate over what an Algorithm is. As far as I can tell it is only you (correct me if I'm wrong)
 * Wrong. A number of folks have contributed to the debate, and to the definitions. wvbaileyWvbailey 14:21, 12 July 2007 (UTC)


 * that is arguing this point, and others on this discussion page seem to have been argued down when they made the same claims about the term Algorithm being in common usage. Please provide some evidence that there is any wide debate. At this point your citation of Blass and Gurevich is not authoritative as they are explicitly conducting novel research and trying to redefine what an Algorithm is. Your quotes from the 50s are not authoritative as they predate the invention of Complexity Theory and the modern definition of an algorithm. They are, however, interesting material and should be merged in somewhere, just with a proper description of how the term has changed usage, and without the description of a debate over the meaning of the term. Dennett is a philosopher rather than a Computer Scientist and so (while an interesting author) he is not an authoritative source on the definition of Algorithm. Finally, as you have written yourself, Minsky uses the term Algorithm once and then moves to other more general terms, pointing out it is not an accurate description.


 * My concrete suggestions for improving this page are to 1. Return to a commonly understood description of an Algorithm. 2. Remove the claims that it is debatable what it means. 3. Remove the section on termination which is very poor, and completely irrelevant to this page. 4. Explain the relevance of Complexity Theory as the modern understanding of an algorithm, rather than the mention in passing at the end. 5. Pull the more historical information into a coherent history section, and explain how the term's usage has changed over the past 50 years.


 * Do you agree with these proposed changes? If not, can you explain why not, otherwise I think this page can be improved greatly. Amoss 13:32, 12 July 2007 (UTC)


 * I am very far away from home and cannot spend much time on this now. I do not agree with you or your proposal. If anything this article does make it clear that a "program" and an "algorithm" are not the same thing. Your CS definition is naive: "An algorithm is a mechanical procedure for computing an output from a given input." First: define your terms: what is "a mechanical procedure", what does "compute" mean? "Input?" "Output?" Personally I don't give a damn about links. I want to see published articles: the operative phrase is "published in the literature". Secondly, "algorithm" is more than a "specification": all sorts of specifications are not algorithms. Your definition appears to come completely from computer science and has omitted any contributions and significance from mathematics and from philosophy. I stand on what I have contributed, and the other contributors as well (I am not the only one). Actually the reason the "definition" expanded was because of debate, about a year ago, such as this. I started to research the background and ran into a bunch of different points of view. If you read the background of this article, not under this particular heading but going back, you will see that there were questions about "termination", about "history", about just what an algorithm is. The proof of the pudding is the algorithm characterizations -- in the CS sense they all converge to basically your definition, but so what? They give many different points of view from many different fields. Also, read the discussion page on the Church-Turing thesis. I have been doing some research into that as well. Like the rest of us you are free to edit all you want to, and we are free to edit your contributions as well. Please realize that "algorithm" is more than CS. wvbaileyWvbailey 14:21, 12 July 2007 (UTC)


 * I am willing to wait for you to get back home so that you can spend some time on this. I'm also not going to edit the article without some kind of agreement on what should be there. But when you get back please address my points which you have dodged again. 1. I can't see anybody else arguing that the definition of an algorithm is ambiguous apart from you. Please point out any support for your POV. 2. It's nice that you want articles rather than links but I've explained why this is not practical for this topic. The published literature does include the body of work taught to students; none of the links I provided was a random website, they are all top-notch CS departments with algorithms courses.


 * Please answer both of these points as you have dodged them a couple of times. It's nice that there is some philosophical background for algorithms and some things from other fields, but what field is the vast majority of work on algorithms in? CS. Where is the vast majority of the common usage of the term and related concepts? CS. So why should the article not have a CS bias, rounded out with information from other fields? The definition of an algorithm is locked down in CS, so explain why this article should be vague and woolly? If you don't see where the confusion between an algorithm and a program is on this page then explain why the termination section exists at all? FYI the Church-Turing thesis (and variants) is about computations (programs), not algorithms. Amoss 12:40, 16 July 2007 (UTC)


 * Wrong about the CT thesis and termination. Here is Kleene's (1943) definition where he first proposed his Thesis I; observe the words "algorithmic", "effective means", "terminates", "deciding", etc.:
 * "12. Algorithmic theories. As one choice of the objective, we can ask that the theory should give us an effective means for deciding, for any given one of the propositions which are taken as values of the predicate, whether that proposition is true or false. Examples of predicates for which a theoretical conquest of this kind has been obtained are: a is divisible by b . . . ax+by=c is solvable for x and y . . . We shall call this kind of theory for a predicate a complete algorithmic theory for the predicate.


 * "Let us examine the notion of this kind of theory more closely. What we want to do is describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "Yes" or "No", to the question, "Is the predicate value true?" (Kleene 1943 reprinted in The Undecidable).


 * Kleene amplified this discussion in his text (1952):
 * "Example 2. Does the equation ax + by = c, where a, b, c are given integers, have a solution in integers for x and y? there is a well-known method for answering the question, using Euclid's algorithm.


 * "A method of this sort, which suffices to answer, either by "yes" or by "no", any particular instance of a general question, we call a decision procedure or decision method or algorithm for the question" (Kleene 1952: 136).
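Kleene's Example 2 lends itself to a concrete sketch. The following Python fragment (my illustration, not part of the original discussion) is one way to realize the decision procedure he alludes to: ax + by = c has an integer solution exactly when gcd(a, b) divides c, and Euclid's algorithm computes the gcd in finitely many steps, so the procedure necessarily terminates with "Yes" or "No".

```python
def solvable(a, b, c):
    """Decision procedure: does ax + by = c have an integer solution?"""
    def gcd(m, n):
        # Euclid's algorithm: guaranteed to terminate
        while n:
            m, n = n, m % n
        return abs(m)
    g = gcd(a, b)
    if g == 0:
        # a = b = 0: the equation reads 0 = c
        return "Yes" if c == 0 else "No"
    return "Yes" if c % g == 0 else "No"
```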


 * Minsky actually defined algorithm as effective procedure. wvbaileyWvbailey 14:54, 20 July 2007 (UTC)


 * I had plenty of time to think about this discussion. I am very perplexed: you say everyone knows what the definition is, and yet you quote Gurevich as having an alternate definition and probing the concept. And we are having this debate! And see the comment below from Ale2006: "My intent is that common readers can quickly learn why there's no accepted definition (yet) and follow the link iff that's what they're looking for." So at least four people disagree with you (Gurevich & Blass, Ale2006, and me and CMummert; see below). Ergo the definition cannot be settled. The problem is: you are pushing your point of view, which is, as you state quite clearly directly above: "Why should the article not have a CS bias?" I don't have a preconceived notion of what an algorithm is. In fact I am quite confused about it. (Until I embarked on this, I always thought an algorithm was whatever it was that we did when we learned in school how to do e.g. long division). You, apparently, are quite sure you know the definition. Fine, give us a quote from published-in-print literature that says: "The CS definition of algorithm is ' . . .' ".


 * To that point, one thought would be to offer up a number of definitions, or at least one with its own section for computer science. On this page, create a new section at the bottom and write up what you have in mind. What I would expect would be something of this sort: "The formal definition of algorithm in computer science is '. . .' (author_of_my_peer-reviewed_printed_source_here 1999: 56)". Until you came along the "authoritative" definition was probably Knuth (I assume you don't agree with it, see it in the characterizations) -- a year ago this article had cribbed Knuth without attribution; a debate ensued, and I created algorithm characterizations as a consequence. As best as I can determine, the definition of algorithm from mathematics is the notion of "man with paper and pencil" together with Kleene's above and Rosser's notion of computational (calculational) process (an "effective method"): "'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date [here he cites Church 1936, Godel-Herbrand, and Post-Turing]." (Rosser 1939: 225 loc. cit.) And the philosophers are definitely probing the question at a much deeper level of space-time, semantics-vs-syntax, etc. etc., as the Ale2006 entry indicates. wvbaileyWvbailey 14:54, 20 July 2007 (UTC)


 * This is what CMummert wrote; see above: "I don't doubt that Knuth said it, but it doesn't fit with the general consensus that there is no mathematically rigorous definition of algorithm. That is, his proposed mathematical definition is not accepted as a correct formal characterization. It doesn't serve the article to pretend that such a definition has been given. If anything, the brief characterization here makes it look like Knuth has just redefined a Turing machine. CMummert 12:24, 25 July 2006 (UTC)"


 * I don't know too much about complexity theory. But I don't see what that has to do with definition of "algorithm". wvbaileyWvbailey 14:54, 20 July 2007 (UTC)


 * Thank you for the response Wvbailey. I can see exactly where you are arguing from, and although I don't agree with it you are right that this is going to take citations to clear up. I think there are several issues. 1. The "modern" definition of an algorithm comes from complexity theory, which grew out of algorithmic analysis. Looking around on the web there are some sources in the literature about this; I'll try and find a good one when I have some time that explains the history. This would make a good separate section as it is additional to the information already in the page. 2. Prior to the formalisation of the field the term algorithm was used in a more general way. The quotes from Kleene are interesting, and they back up your argument well, but I would still argue that this is a question of terminology: that what Kleene was referring to is what we would now call a program, rather than an algorithm. This is going to be much harder to prove via citation so I'll go away and try and find some primary source about this issue.


 * About the bias on the page, I didn't mean to imply that the page should solely be about the CS viewpoint of an algorithm. As an encyclopedia page it adds a richness to show where the ideas originated from, and how they interact with work in other disciplines. What I would like to clear up is the idea that there is any doubt about what an algorithm is in CS -- even if there is in other fields. Your idea of adding a section to the end with more CS-centric info seems like a good way to go about this. I'll come back and do it when I find some good citations to back things up. Amoss 13:28, 26 July 2007 (UTC)


 * For anyone interested, I ran into three papers where their authors are duking it out re whether or not "algorithm" is a "machine" or some sort of detailed specification that includes (along with a lot of other details) a description of an (effective) computational method/process:


 * Blass and Gurevich 2002 can be found at http://research.microsoft.com/~gurevich/Opera/158.pdf
 * Moschovakis 2001 can be found at http://www.math.ucla.edu/~ynm/papers/eng.psu
 * There's a third paper by Dershowitz and Gurevich 2007 "A Natural Axiomatization of Church's Thesis". I don't have the link for this one; see the Talk:Church-Turing Thesis page where a correspondent left the link. (I'm sure it can be found at http://research.microsoft.com/~gurevich). Here it is: http://research.microsoft.com/~gurevich/Opera/188.pdf


 * I don't understand Gurevich's stance. Moschovakis comes out of his corner flatly stating the issue: Algorithm = machine or specification, or what exactly? (But the paper is marred by typos here and there that render interpretation dicey.) wvbaileyWvbailey 20:26, 29 July 2007 (UTC)

2. It is not bitching. I would suggest reading the article from a common reader's point of view, or asking the opinion of common readers. I am not discouraging your efforts, but requesting you to make this an article instead of a textbook.


 * Your point is good. I've worried about this, and I don't disagree. Wvbailey 19:10, 8 September 2006 (UTC)


 * I came to the "algorithm" page following the logic links. That subject is explained easily enough for common readers to understand. (I was wondering about the relation between causality and time, but I'm a programmer so I know what an algorithm is.)
 * Algorithm characterizations is quite difficult for common readers, thus I perceived a discontinuity. I moved the forward links to the end, after mentioning that the term is also used in logic. My intent is that common readers can quickly learn why there's no accepted definition (yet) and follow the link iff that's what they're looking for. I turned "reasonable" time into a link and mentioned that logic has no time (yet) (I'm unable to concisely and correctly state that temporal logic is not exactly about the fact that our concepts of antecedent, consequent, simultaneity, etcetera, are inextricably rooted in the concept of time.) ale 11:50, 18 July 2007 (UTC)


 * Some discussion of this might be found (vaguely) in Chalmers' Philosophy of Mind compendium of papers (Oxford University Press, 2002, ISBN 0-19-514581-X). Also I found a book on "causality" -- Causation & Explanation (Stathis Psillos, Acumen Press, 2002), and various books on "time": Time and Free Will (Henri Bergson, 1913), The End of Time (Julian Barbour, Oxford U. Press, 2000), and I could swear that I read an article (or a chapter of something) that reports an exhaustive look at all forms of "before" and "after" when applied to two events happening (something on the order of "a before b", "b before a", "a happens then b happens then a goes away then b goes away", etc. etc.). One thought I've had is to actually define time in terms of a state machine (i.e. a logic construction with memory) that can "come to know" that "a" happened before "b" or vice versa: in other words -- time is a "mental construct" that becomes defined only when memory is present in an observing device. But there is a problem with "initialization", and this presupposes a "before". But all of this is "O.R." or at least unpublished so it must remain here. On the problem of simultaneity see Marc Lange The Philosophy of Physics: Locality, Fields, Energy, and Mass (Blackwell Publishers, 2002). What this has to do with "algorithm" is unclear to me. wvbaileyWvbailey 14:54, 20 July 2007 (UTC)


 * That last sentence of mine is not quite right: Simultaneity does have a connection to algorithm: cf Gandy and his "Church's Thesis and Principles for Mechanisms". He delves into the matter with his "Principle IV: The Principle of Local Causality . . . Its justification lies in the finite velocity of propagation of effects and signals: contemporary physics rejects the possibility of instantaneous action at a distance" (Gandy 1980 in Barwise, Keisler and Kunen:141). This is fascinating: that the speed of light represents a constraint on computation and hence on "algorithm", that the notion of "locality" is forced onto the notion of "machine computation". On the other hand, Turing's forcing constraint was the finite memory of his human computer: "For the present I shall only say that the justification [of his definition of computable numbers] lies in the fact that the human memory is necessarily limited." (Turing 1936-7:117 in Undecidable). Later he gives his "computer" the opportunity to write down a note and thereby "It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it. If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued . . . thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. That is, the state of the system may be described by a single expression (sequence of symbols) . . .. This expression may be called the 'state formula'." (p. 140 in The Undecidable). Perhaps the "finite memory" constraint of Turing's man-as-calculator and the "locality" constraint of Gandy's machines are two expressions of a single as-yet-to-be-defined constraint . . .. wvbaileyWvbailey 17:58, 20 July 2007 (UTC)

3. Symbols and mechanical devices, telephony etc. are more related to the broader concept of computing/computers. Their lengthy details here make this article disconnected and do not create a smooth flow of information. Suppose I am searching for algorithm on a search engine and this page comes up (as it is now): I have to read about mechanical devices, various mathematical models and old computers. I realise that this is roundabout and lengthy prose after a while and go to the next search results. This is a subjective opinion anyway, but I feel the case is the same with many people.


 * Another good point. I moved the stuff to the end. Whether or not it should go to its own page... I disagree at this time. See the improvements requested for the page. The point of the history, which might need a summary paragraph, is that the informal definition of "algorithm" at the beginning of the article has evolved over many thousands of years, starting with the idea of discrete, distinguishable, one-to-one mapping of symbols with things like sheep and money; then the Greeks' generalized instructions for geometry constructions and "number constructions"; then the absorption into Western culture of the Arabic numerals and the Persian notion of "algebra" to construct numbers with generalized processes as symbolized by discrete "variables"; through the invention of the clock with its discrete tick-tock leading to "automata" (mechanical music box figurines, for instance); and finally the engineering, philosophic and mathematical advances of the past 200 or so years that led us to "recursion" and "λ-calculus" and "Turing machines" and "Halting problems" and computers and undecidability and intractability. Only a fool would believe that we are at the end of this development. Hence the evolving definition of algorithm. Wvbailey 19:10, 8 September 2006 (UTC)


 * Writing a paragraph of history here and moving it to another page like "algorithm evolution" is okay in my opinion, but I also do not object to your present format of moving the detailed history to the end, as this gives the enthusiastic reader more material. Thanks again for doing that. Mlpkr 10:41, 9 September 2006 (UTC)


 * No problem. Your tactful criticism has improved the article. We're just here to make a really great article. (As more and more folks add more and more stuff, eventually the history may have to go to its own page).


 * I would appreciate your opinion here: I can't tell if this would be "original research" or not. I would like to provide a vivid example of some of the issues (like parameter passing problems). A good example is the "multiply algorithm" (it executes Peano's definition of multiplication with two recursive loops in the simplest form) using a Register machine or Post-Turing machine. I can do either; the register machine is easier to understand. So far, there is plenty of historical precedent behind the two models of machines so there is no "original research" going on if I just stick to a "standard model" (the Shepherdson-Sturgis register-machine model seems the most promising)... [but as I write this, maybe the Melzak-Lambek model would be better (the Melzak model allows full addition, the Lambek model just increments and decrements)] anyway ...


 * This is definitely not original research.


 * But the best example would be a hybrid of both machines (a P-T machine equipped with one register functioning like an accumulator) because the register machine uses "numbers" and the P-T machine uses unary coding -- so we can see vividly that "the user" must be instructed both how and where to put the "input strings" (of ones, i.e. "multiply"(a,b)) and where to read the output (as a decimal in the register, or as a string of ones on the tape...). If we further simplify the multiply by equipping the register with an "adder" (i.e. "contents of memory n" + "contents of A" --> A) and we use it in an algorithm to calculate powers (e.g. a^b), we see a startling result -- that a trivial change in how we design the algorithm makes all the difference between a really poky machine (N^2 time) and a fast machine (linear time). This gets to the "quality/goodness" issues -- the time-to-execute issues.
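The poky-versus-fast contrast can be sketched abstractly. This Python fragment (an illustration under my own assumptions, not the hybrid machine described above) counts primitive steps for two designs of multiplication: one where the machine can only increment (unary-style, in the spirit of the P-T tape), and one equipped with an "adder" that folds a whole addition into a single step.

```python
def multiply_unary(a, b):
    """Repeated increment: returns (product, primitive steps used)."""
    steps, acc = 0, 0
    for _ in range(b):        # Peano-style outer recursion on b
        for _ in range(a):    # inner loop: bump the accumulator a times
            acc += 1
            steps += 1
    return acc, steps

def multiply_with_adder(a, b):
    """Machine with an adder: "contents of memory" + "A" --> A in one step."""
    steps, acc = 0, 0
    for _ in range(b):
        acc += a              # one primitive step on the adder model
        steps += 1
    return acc, steps
```

For a = b = N the first design uses N^2 steps and the second only N, mirroring the startling difference noted above.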


 * In my opinion, this has evolved over years of research, and is not our original research. Many computer theory books like Sipser use such high-level abstraction for solving many problems. Also in computer organisation or computer architecture books, when they talk about designing circuits for multiplication, powers etc. using low-level blocks like adders and registers, they are actually talking about algorithms, in the sense that a hardware circuit is an implementation of an algorithm.


 * This example might go into the characterizations page or even on its own article-page. Your thoughts? Thanks. Wvbailey 20:07, 9 September 2006 (UTC)


 * If you are going to give the example as described here, it may go to the Characterizations page for now. I think putting this in a separate article-page like "Algorithm dependency on machine model" is also okay, but we can do it later. It can be extended to explain more about how an algorithm's goodness depends on its implementation. I am giving some more text for that article in raw form. Please rephrase and digest it yourself before adding. "A solution (the logic for solving a problem), when expressed (in a theoretical model) or implemented (in a practical model), becomes an algorithm. As the means of expression or implementation change, they could become different (possibly more efficient) algorithms, although all of those algorithms for the same problem may have originated or evolved from the same solution. Efficient implementations allow bigger inputs in an old model to be treated as basic inputs. The machine models limit the efficiency of any possible algorithm. The present von Neumann models are slightly superior (more efficient) to the Turing machine in the sense that they treat integers as basic units and allow complex operations on them to be done in unit computing time, rather than bit by bit. Yet they have no more power than Turing machines, as they cannot solve any problem that a Turing machine cannot solve. This is one of the reasons why concepts like the solvability and tractability of problems are still studied with abstract models like Turing machines, while algorithms for solvable problems are discussed as high-level implementations and formalised in high-level languages close to current technology." mlpkr 2:48, 18 December 2006 (UTC)


 * I've kind of lost the thread of this discussion .... Since 9 September I just went ahead and created some examples in a new Algorithm examples article (done in October). I will look at what you wrote above and digest it. I have sprinkled examples in other articles; for example, on the partial function page there is an example of (im-) proper subtraction and how the algorithm can be redesigned to make it "proper".


 * There seems to be confusion with respect to total versus partial functions (algorithms) -- for example in Kleene (1952) where he suggests that any partial function (algorithm) be put into an infinite loop (and therefore non halting) given that we can detect the error (e.g. in an output with an incorrect format). (In the partial function article I raise the horrible possibility that the algorithm does terminate correctly, but with the wrong answer! So we need an independent proof of the algorithm. Minsky (1967) discusses this, sort of.) Davis (1958?) compounds Kleene's confusion with his example of (im-) proper subtraction executed on a Turing machine. (I checked his example -- simulated it -- his example is correct, and it does go into an infinite loop -- but he has to detect the error to get it into the loop). But why would someone do this intentionally? Why not just fix the algorithm so it isn't "improper"? Sort of an evolutionary response -- just make the algorithm better. I don't understand. wvbaileyWvbailey 16:32, 18 December 2006 (UTC)
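For readers following along, the "just fix the algorithm so it isn't improper" option can be shown in a few lines. This sketch (mine, not Davis's or Kleene's construction) makes proper subtraction total by returning 0 instead of detecting an error and diving into an infinite loop when the subtrahend is too big:

```python
def monus(a, b):
    """Proper subtraction over the natural numbers ("monus"):
    total everywhere, rather than partial (undefined when b > a)."""
    return a - b if a >= b else 0
```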

Definition of finite
Is an algorithm a set of instructions with which an instance will terminate over ALL conditions, or that will terminate over ANY conditions?

e.g.


 * 1.) "Go outside"


 * 2.) IF "raining=TRUE" THEN go to 1 (is this correct?) ELSE "if raining = FALSE" THEN


 * 3.) "go inside & take a nap"


 * 4.) STOP

69.28.40.34 20:59, 26 October 2006 (UTC)


 * Since October I did some research, and added examples at partial function. And added some more to the algorithm article. This is not a trivial issue. More often than not an algorithm will have a restricted "domain" of input over which it "functions correctly"; if the input is outside that 'domain' the algorithm should say "woops" and halt with an error message (e.g. "0", reserved). This makes the algorithm a "total function." If there happens to be an unexpected vile input, then the algorithm is likely to never halt. Or produce the wrong answer! In my mind we never want an algorithm to be a partial function -- rather, they should always be "total". Then the algorithm will terminate with an answer, even an error message, that we can trust. wvbaileyWvbailey 17:49, 18 December 2006 (UTC)
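The "total function" design described here can be sketched like this (a hypothetical example of mine, not from the article): check the input domain first and halt with a reserved error answer we can trust, instead of risking non-termination or a wrong answer on vile input.

```python
def checked_isqrt(n):
    """Integer square root, made total: ("ok", r) on valid input,
    ("error", None) when n lies outside the restricted domain."""
    if not isinstance(n, int) or isinstance(n, bool) or n < 0:
        return ("error", None)    # the "woops" branch: halt with an error we trust
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return ("ok", r)
```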

Good question: I took the liberty of rewriting the above to get to its core/gist. Suppose the little man lives on that desert down in Chile? Peru? where it never ever rains (mostly never ever, which I think will be an important point...). If the unconditional goto is back to 1, then we visualize a little man running outside, checking and seeing no rain, going inside, running outside, checking, ad infinitum. Eventually, the little man (before going back out, he can go pee and eat when he needs to, we allow him this) either "produces an object" or not.

We have to agree ahead of time, in the specification of the algorithm, what 'the object' is. Say, the output will be: on a tape, the little man prints a 0 or a 1 and moves the tape right, as follows:


 * 1.) INPUT (outside_is_raining)


 * 2.) IF "outside_is_raining"= FALSE THEN goto 6.) ELSE:


 * 3.) Print 0


 * 4.) Move tape Right


 * 5.) Go to 1.)


 * 6.) Print 1


 * 7.) Move tape Right


 * 8.) "go inside & take a nap"


 * 9.) STOP

So now the algorithm is producing an object (0's and 1's) no matter what the input conditions are. E.g.
 * 000000......000001[stop]
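A quick simulation of the numbered steps (my sketch; the "input" is just a pre-recorded sequence of rain observations) shows the tape being produced:

```python
def rain_watcher(observations):
    """Run steps 1-9 over a sequence of booleans (True = raining).
    Each rainy check prints 0 and moves right; the first dry check
    prints 1, the man naps, and the machine stops."""
    tape = []
    for raining in observations:
        if raining:
            tape.append("0")   # steps 3-5: print 0, move right, retest
        else:
            tape.append("1")   # steps 6-9: print 1, nap, STOP
            break
    return "".join(tape)
```

On the desert where it truly never rains, the observation stream is all True and the loop never reaches the STOP, which is the point of the example.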

This, I would believe, is a valid algorithm. Okay, let's agree that it is a valid algorithm and take away the tape and the printing. We can probably agree that the "printing", in a sense, goes on in the mind of the little man, but if and only if he can remember every time he tests for rain. Suppose he cannot remember more than the last 6 or 7 times; does this mean the algorithm fails?

Oops. We are into philosophy of mind now. I don't know. Maybe other readers will have opinions. This is why the definition of algorithm is non-trivial. wvbailyWvbailey 16:48, 27 October 2006 (UTC)


 * I believe finiteness means "it terminates in finite steps on all inputs". This is one of the halting problem variants. mlpkrmlpkr 3:06, 18 December 2006 (UTC)


 * But I can easily write a viable algorithm that goes on ad infinitum. The Kleene (1952) approach (p. 328) is to have an "undecided" algorithm (a "partial recursive function") produce something while it is undecided. Here's an example with regard to the Collatz conjecture. I designed a simple "Minsky machine" (counter machine) that has a "Collatz" machine inside a "Supervisor" machine; the Supervisor feeds the Collatz machine numbers to test. When the Collatz machine "reduces" the test number to 1, it returns control to the supervisor, and the supervisor feeds it the next number to test. To reassure us, the Supervisor could write an output after every success (e.g. ... (17, 1), (18, 1), (19, 1), (20, 1) ...) and we could watch as this pair does its work. This will go on ad infinitum, or until the Collatz machine doesn't halt because it cannot reduce the number to 1 -- at this point we want the supervisor to write "(0, really_big_number)", and then halt (meaning: "Yeay! We just disproved the Conjecture!"). When the work is hard and taking a long time, to reassure us we want another output, "u" ("undecided"), intermittently. This requires a third machine, an "Interrupter", to periodically interrupt the two machines, then return control after writing "u". So the question becomes: will the composite Interrupter+Supervisor+Collatz machine ever halt? It will produce 1's and numbers and u's, e.g. ... (47568, 1), u, u, u, u, u, u, (475679, 1), u, u, u, ... . So far no one knows whether this composite machine will halt or not. wvbaileyWvbailey 17:49, 18 December 2006 (UTC)
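The composite machine is easy to mock up as a generator. This Python sketch (my own rendering, with an arbitrary interrupt period, not the counter-machine construction itself) emits the stream of success pairs and "u"s; of course it can never emit the disproof pair unless the Conjecture is false:

```python
def collatz_supervisor(limit, interrupt_every=100):
    """Supervisor feeds 1..limit to the Collatz machine; yields (n, 1)
    after each success, and "u" whenever the Interrupter fires."""
    steps = 0
    for n in range(1, limit + 1):
        m = n
        while m != 1:                               # the Collatz machine
            m = m // 2 if m % 2 == 0 else 3 * m + 1
            steps += 1
            if steps % interrupt_every == 0:
                yield "u"                           # the Interrupter's reassurance
        yield (n, 1)                                # the Supervisor's success report
```

Running it over a finite range of test numbers, we can watch the pairs go by just as described above.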


 * I think it is a question of how different authors and texts define the terms. What I was convinced of in my education is: an algorithm terminates on all inputs, but a procedure (or a function) may not terminate. Some procedures are decidable and some procedures are undecidable. Not all decidable procedures are algorithms. E.g. an operating system never terminates, but its actions based on the current input are decidable. These are called computable procedures. Mlpkr 08:01, 23 December 2006 (UTC)

I thought about this more: while the example has a humorous side, there is an interesting epistemological/ontological issue around this: just where/what is "the output", and "for whom" does the output exist?

Suppose "the man" is "a robot" -- when I wrote the following it seemed to turn spooky (less funny) -- and we write a routine for the robot-man; then in fact the algorithm always produces an output, under all conditions. The problem comes in what it means to "display an object" in the sense of "an algorithm" -- for whom does the bell toll? -- does it toll for thee (exterior viewer) or for me (the interior viewer, aka the robot's mind)? Algorithm specification:
 * computational device: human nervous system + mind


 * OUTPUT format, INPUT format:
 * function(arm_L, hand_L, arm_R, hand_R, leg_L, leg_R, balance, eyes)

Available functions:
 * OUTPUT(, , ....., , )
 * INPUT(, , ...., , , )
 * GOAL(name_of_goal)
 * CASE(, , , .... , , , )
 * GOTO(instruction #)
 * HALT(time)

Program:

start:
 * GOAL(is_raining?)
 * CASE( ....., is_raining?, .....):

is_raining?:
 * GOAL(doorway)
 * OUTPUT (, , , , leg_L:walk_forward, leg_R:walk_forward, balance:upright, eyes:forward)
 * INPUT (, ,  ,  ,  ,  , leg_L, leg_R, balance, eyes)
 * CASE ( ...., NOT_is_doorway, is_doorway, .... )

NOT_is_doorway:
 * GOTO is_raining?

is_doorway:
 * GOAL(feel_rain)
 * OUTPUT (, , arm_R:outstretch, hand_R:hand_open, , , balance:upright, eyes:upward)
 * INPUT (, , arm_R, hand_R, leg_L, leg_R, balance, eyes)
 * CASE( ...., right_hand_is_dry, right_hand_is_wet, .....):

right_hand_is_dry:
 * GOAL(pee)
 * OUTPUT (, , , , leg_L:turn 180, leg_R:turn 180, balance:turn 180, eyes:forward)
 * INPUT (, , , , leg_L, leg_R, balance, eyes)
 * etc. etc.
 * GOTO start

right_hand_is_wet:
 * GOAL(bed)
 * INPUT(, , , , leg_L, leg_R, balance, eyes)
 * OUTPUT(, , , , leg_L:walk, leg_R:walk, balance:upright, eyes:forward)
 * CASE( ...., NOT_is_bed, is_bed, ....)

NOT_is_bed:
 * GOTO right_hand_is_wet

is_bed:
 * OUTPUT(, , , , leg_L:collapse, leg_R:collapse, balance:supine, eyes:closed)
 * HALT

For those who want to see more about the problem of "algorithm" from the philosophic point of view see:
 * John Searle 2002, Consciousness and Language, Cambridge University Press, Cambridge, UK.

Searle is of "Chinese Room" fame. He is asking the question: where is the information? Is the robot just a mindless syntactic/rule-following mechanism, and the information is in us the viewers of this little play? Or is there information -- meaning -- to be found in the machine itself? He does ask the question: "With regard to a computation by mechanism, where are 'the numbers' really?" (cf. p. 17). I may add this to algorithm characterizations:
 * "Computation does not name an intrinsic feature of reality but is observer-relative, and this is because computation is defined in terms of symbol manipulation, but the notion of a 'symbol' is not a notion of physics or chemistry. Something is a symbol only if it is used, treated or regarded as a symbol. The Chinese room argument showed that semantics [meaning -- sound and fury signifying something] is not intrinsic to syntax. But what this argument shows is that syntax is not intrinsic to physics. There are no purely physical properties that zeros and ones or symbols in general have that determine that they are symbols. Something is a symbol only relative to some observer, user or agent who assigns a symbolic interpretation to it. ... computation exists only relative to some agent or observer who imposes a computation interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago but I did not." (p. 17)

wvbaileyWvbailey 16:43, 28 October 2006 (UTC)

Googl's edit
Googl changed the beginning text from a "set of instructions" to a "sequence of instructions". A sequence is not necessary, however. A state machine need not have its instructions in a sequence -- for example each instruction for the state table of a Turing machine carries its own pointer(s) to its next instruction(s). Does anyone else object to this change in the wording? "Set" or "collection" is more general and I prefer it. Comments out there? thanks, wvbaileyWvbailey 17:15, 4 February 2007 (UTC)


 * Well, you are right. The instructions needn't be in a sequence.
 * I reverted myself, but I'd prefer that there be a mention of instruction order in an algorithm. Elements of a set are not ordered, and I personally think of an instruction as in "let A be 2+2" (and go to next step by default), not as in "let A be 2+2 and go to step B". I'd change it to something like...
 * "an algorithm is a procedure (a finite set of well-defined instructions with defined flow)"
 * Thanks, googl t 20:22, 4 February 2007 (UTC)

Actually what you wrote is what Turing (1936-7) originally defined in his paper: the case "let A be 2+2 and go to step B" as the "motions" of his machines. Simultaneously, Emil Post atomized the operations of "the computer" (a human in his day) into simpler "motions"; a bit later (though ambiguous at first) Post's instructions were default-sequential, along the lines of the second of your two forms.

It is unclear to me who defined the first 'state machine' and where exactly it was defined (as a formality, in print). Turing is probably the first, although mechanisms for computation had been around for about 200 years.

If you do simulations you'll find that the most primitive machine (minus its "list of instructions", e.g. fewest logic NAND gates necessary to build it) is the Turing version; whereas the "sequential" form implies the "successor function" of the Peano axioms (e.g. "addition"). However, if you count the extra "memory" required when the "next address(es)" is/are in the list of instructions, the "most primitive" is not so obvious.

Perhaps what you want to see is something more along the lines of a formal specification such as this:
 * "A list of 'instructions', each unique and distinguishable by 'the mechanism' (human or machine), and with a location known by "the mechanism" such that "the mechanism" can locate an identified instruction (as specified in/on its 'state register'/'scratch pad') AND "the mechanism" has the capability of turning said instruction into action per a pre-defined specification."

Simpler said: each instruction consists of two parts (A) and (B): (A) a unique "address" (identifier), and (B) a call to action. "The mechanism" has the capability of (i) locating, (ii) distinguishing, (iii) recognizing, (iv) interpreting the instruction into a pre-defined action.
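The two-part instruction scheme above can be sketched as a tiny interpreter (all names here are invented for illustration, not from any standard formalism): each instruction carries (A) a unique "address" and (B) a call to action, and "the mechanism" locates the instruction named in its state register and turns it into action.

```python
def run(program, state, start):
    pc = start                           # the "state register"/"scratch pad"
    while pc is not None:
        action, next_addr = program[pc]  # (A) locate by unique address
        jump = action(state)             # (B) turn instruction into action
        pc = jump if jump is not None else next_addr
    return state

# Example program: count state["n"] down to zero.
program = {
    "test": (lambda s: "halt" if s["n"] == 0 else None, "dec"),
    "dec":  (lambda s: s.__setitem__("n", s["n"] - 1), "test"),
    "halt": (lambda s: None, None),
}
```

Note that the instructions form an unordered collection (a dict), yet the computation is perfectly determinate -- which is the point about "set" versus "sequence".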

This is why a single definition of "algorithm" is still unsettled. Problem is: where in the literature are such discussions/opinions as the above? I dunno. Until we can locate them, we can't put them into Wikipedia. wvbaileyWvbailey 19:02, 5 February 2007 (UTC)

Evolution as an algorithmic process
I believe that topic should not be here, but may go to "evolution" topic. mlpkr 16:27, 7 February 2007 (UTC)


 * Am unclear as to what in the article you are objecting. There are indeed, for use in machine learning, "evolutionary" algorithms that use lots of "parents", randomness and "survival of the fittest" (i.e. the best performing parents at the task are "chosen to breed") as part of the process of the algorithm. "Genetic" algorithms are of this sort (i.e. the "genes" get scrambled by random processes; then new baby machines are built per their now-shuffled genes, and tested against the challenge, etc. etc.). wvbaileyWvbailey 00:14, 8 February 2007 (UTC)


 * I am objecting to the section http://en.wikipedia.org/wiki/Algorithm#Evolution_as_an_algorithmic_process mlpkr 23:15, 10 February 2007 (UTC)


 * The section is actually much more about algorithms than about evolution. The bulk of it gives a definition of an algorithm.  It relates that Dennett says that evolution fits the definition, but it doesn't describe what evolution is like or how it fits.  Perhaps the section should be retitled "Dennett's definition". -R. S. Shaw 03:20, 11 February 2007 (UTC)


 * This new section is misplaced -- placed as it is early in the article it has too much prominence especially given the fact that it's just one philosopher's opinion. It should go either into algorithm characterizations, or (much) deeper into the article; it is just one of many "characterizations", albeit an interesting one. Any other opinions? If not in a few days I'll move it there and retitle it similar to the above suggestion. wvbaileyWvbailey 21:49, 11 February 2007 (UTC)


 * I support your opinion on making it part of algorithm characterizations. mlpkr 17:20, 25 February 2007 (UTC)


 * Done as of 5 March 2007. wvbaileyWvbailey 17:52, 6 March 2007 (UTC)

Dennett's definition of Substrate Neutrality
"an algorithm relies on its logical as opposed to its formal structure. Thus, the particular form in which an algorithm is manifested is not important (Dennett's example is long division: it works equally well on paper, on parchment, on a computer screen, or using neon lights or in skywriting)." Using different media (different materials, surfaces) is possible through "symbolic abstraction". That's one thing -- and it's not a special feature of algorithms or logic. Speaking about "formal structures" is another topic. There are for example different languages available to express an algorithm. Computer scientists will sometimes even consider using a "natural language" (in opposition to the so-called "formal languages"), which has of course its own kind of form, as every linguist can tell you. And at the end of the day, you could even ask an art historian what an "informal structure" could be, and we would possibly end up in the old abstract/figurative dichotomy, which reminds us again how differently the term "abstraction" can be used.

In a nutshell: The terms "formal" and "structure" put in opposition to the term "logical" do more harm than good if you want to understand what "algorithm" means. --Armin B. Wagner 13:04, 19 February 2007 (UTC)


 * Now that you point this out, I agree with you. The medium of a Turing machine -- the tape built of hopscotch squares, of paper ruled off into squares, of magnetized tape, of CMOS shift-registers, of CMOS RAMs plus an up-down counter -- all of these yield a Turing machine tape if properly applied. However, it is quite easy to show an "improper division" algorithm on said Turing machine (however constructed) that performs incorrectly when confronted by division by 0 ... and how we represent "0" to our Turing machine is a whole matter in itself. There are probably as many properly-constructed "division algorithms" as there are people who can dream them up, and more than likely the "data" as presented to one machine+algorithm will not work when presented to another machine+algorithm (hence the "language problem"). Over the years Dennett has demonstrated an inability to comprehend this stuff -- syntax versus semantics -- and Dennett's nemesis John Searle has taken him to task for it a number of times. As an antidote to Dennett, I'd recommend a recent book: John R. Searle, Consciousness and Language, Cambridge U. Press, 2002. See pages 34-35 where he reiterates his point re syntax versus semantics and his coming to awareness that that which the receiver calls "information" is in the "mind" of the receiver, almost verbatim as Searle states it in his book. wvbaileyWvbailey 19:16, 25 February 2007 (UTC)

Complete But Not Clear
I think this article could use some revamping so that its size isn't so intimidating. I think if it were made into a more general treatment of algorithms and some of the more technical sections were split into separate articles it would be easier for a lay-person to understand. A good example of what this article could be is Physics, which uses the main page more as a first step and history than as a complete treatment of the subject. Adam McCormick 00:44, 4 March 2007 (UTC)


 * I hear you. I've already spun off two subsidiary pages -- Algorithm characterizations and Algorithm examples. I hesitate to spin off the history section because many times that is what a newbie reader wants -- some historical perspective. I dunno. Let's see if anyone else has input. wvbaileyWvbailey 21:18, 5 March 2007 (UTC)


 * I don't think the article is too long, but I think it would benefit from reorganization and copyediting. There is not a lot of "structure" writing, which makes the parts of the article seem disconnected. Moving the "example" and "history" sections higher to just after the informal characterization might help. I also feel that the number of inset direct quotes should be reduced. CMummert · talk 22:02, 6 March 2007 (UTC)


 * I agree that it is not too long. Plus it's missing important content re "copyright and patents" -- we would need something (at least some references) re US and EU patent law here; I know something about patenting objects and processes but nothing about patenting algorithms. Originally the "history" was higher up, but when I added to it, it seemed to obscure the article's major content, so I moved it down; I had left the matter open as to whether or not it should be split off eventually. I don't particularly like the article's "example".


 * The problem is: the field is so broad ... while at the college bookstore a few days ago I found two huge computer-science tomes (meant to be used as texts for classes) re algorithms for computation; we only have to think of the 3-volume (soon to be rewritten plus a 4th if he gets his act together) Knuth series. If it were left up to me I'd split off the types of algorithms (searching and sorting and greedy and that sort of specific stuff) with the intent of letting this new sub-article contain pointers to yet more sub-articles with details about specific algorithms. wvbaileyWvbailey 17:33, 8 March 2007 (UTC)

improving intro
Upon reading the intro, the second sentence caught me as way too esoteric and tangential for an intro: "The computational complexity and efficient implementation of the algorithm are important in computing, and this depends on suitable data structures." Moreover, this is pretty much a "topic sentence", but it is actually the last sentence of a paragraph that says nothing about complexity, implementation, or data. I strongly think it should be removed. It's simply a stumbling block for the reader that adds nothing.


 * I can say the sentence is there trying to give important context and concepts to relate algorithms with computing. For the specifics you'll naturally have to follow the links, but I don't understand what's esoteric or tangential here. Not that I'd want to stop you from improving the article, just to give another view. --TuukkaH 12:48, 8 April 2007 (UTC)

Also, the second paragraph, using an analogy is not encyclopedic form. And it is redundant with the first paragraph, which is more precise.

I propose the following change to the intro:


 * In mathematics, computing, linguistics, and related disciplines, an algorithm is a procedure (a finite set of well-defined instructions) for accomplishing some task which, given an initial state, will terminate in a defined end-state. The computational complexity and efficient implementation of the algorithm are important in computing, and this depends on suitable data structures.


 * Informally, the concept of an algorithm is often illustrated by the example of a recipe, although many algorithms are much more complex. Basically, an algorithm is a method, like a recipe, in that by following the steps of an algorithm you are guaranteed to find the solution or the answer, if there is one. Algorithms often have steps that repeat (iterate) or require decisions (such as logic or comparison). Algorithms can be composed to create more complex algorithms.


 * The concept of an algorithm originated as a means of recording procedures for solving mathematical problems such as finding the common divisor of two numbers or multiplying two numbers. The concept was formalized in 1936 through Alan Turing's Turing machines and Alonzo Church's lambda calculus, which in turn formed the foundation of computer science.


 * Most algorithms can be directly implemented by computer programs, and all algorithms can, in theory, be simulated by computer programs. In many programming languages, algorithms are implemented as functions or procedures.

Actually, I think the last paragraph should be excised altogether. The last sentence of the para before it contextualizes algorithms just fine, and leads into the body of the article where detail like "In many programming languages, algorithms are implemented as functions or procedures." is addressed. Kevin Baastalk 20:31, 7 April 2007 (UTC)


 * That seems reasonable to me. Why not go ahead and make the changes? CMummert · talk 12:53, 8 April 2007 (UTC)


 * I suggest that you strike "a procedure" in the first sentence i.e.
 * " ...an algorithm is a procedure ( a finite set of well-defined instructions ) for accomplishing some task ...


 * I also agree that the last paragraph should be struck; I'm not convinced that it is (deep-philosophically) correct: in particular
 * "...and all algorithms can, in theory, be simulated by computer programs..."
 * The above is a hypothesis, a conjecture, not a truth, unless "algorithm" is so defined, and that is still an open topic. wvbaileyWvbailey 16:10, 9 April 2007 (UTC)

Citations and style
Hi. I did notice that this article is indeed well-cited, but it doesn't seem to be in one of the three styles found on Citing sources. I think we should switch the style of the inline citations to Harvard. It would take the minimum of effort to switch them to that style, compared to the others. Meowist 07:28, 27 June 2007 (UTC)


 * I agree. I added the bulk of the citations -- but without any particular style in mind. (I just wanted to get the information down in print. . . Whatever "style" you see is probably 40 years old). As you said all the information's there, it's just a matter of formatting it. If you or someone else doesn't get to it first I'll do it, I just can't say when. Also, as I recall, the references are mixed in style, some done with templates and some without. I find the templates frustrating (i.e. they take more time than they appear to be worth) and not knowing/understanding what utility they have I have been inclined to ignore them. wvbaileyWvbailey 01:48, 28 June 2007 (UTC)

Lack of pictures hinders FA status odds
Does anybody have any ideas for what pictures to add to this article other than flowcharts? Pictures that help the article, while not being of specific algorithms discussed in depth in other pages? I was thinking of adding another example algorithm, this time of a simple non-deterministic algorithm and then adding a graph of steps used versus problem size to illustrate the concepts in asymptotics of algorithms, in this case probably asymptotically optimal. Or perhaps an animated GIF to illustrate the current example algorithm? Or both ideas? What do people think? It's certain, in my mind, that this article needs more pictures to stand a chance at getting FA. Meowist 23:49, 27 June 2007 (UTC)


 * Your example (non-deterministic algorithm) sounds too complicated and too “abstract”. The picture at the top of the article is too simple. (I am no fan of the "pseudo-code" example in the article). And one thing we have to avoid is "original research" . . . And it would be good if the example were instructive of some other interesting notion(s) presented in other wikipedia articles . . . And it would be good to use symbols and terminology derived from some prevalent, accessible text (e.g. Boolos-Burgess-Jeffrey).


 * I see that the problem of choosing an example comes (in part) from subtle constraints implied in the notion of “algorithm”. As the article discusses, an “algorithm” is more than a fool-proof recipe for some process; rather, it also includes/implies (or should include, to be complete) a specification for the machine or person who executes it; in particular that machine/person must have the capabilities of performing the steps, and there are a host of input and output restrictions that have to be met if the “algorithm” is to be a success.


 * That the reader be aware of the “seriousness” of these issues is a key point of this article. But because of our wide assortment of readers, of varying experience levels, these fussy constraints narrow our possible choices of examples. For instance, if the example comes from mathematics, it should use really simple arithmetic -- no more than +, -, x and maybe division—and really simple concepts. If the example involves a machine the recipe should be in some absolutely simple language that the humans understand, and the machine specified.


 * If a person were to “animate” an example, one with a conditional go-to would be far more interesting and educative. The “conditional go-to” also touches on issues in recursion theory and philosophy of mathematics (i.e. see Church-Turing thesis and effective calculability (in particular see the history in Turing machine )). Such things as partial function come to light, as well.


 * Here's one human-algorithm example that is fascinating because it’s so damned easy to describe and yet it's an unsolved problem: Ulam's problem (Collatz Conjecture). You are given a number N (but not zero, otherwise you need to test for zero at the outset). Is it “1”? If so you’re done, otherwise: Is it EVEN or ODD? If it's odd you triple it and add 1 (3*N+1), otherwise you divide it by 2 (N/2). Recycle the new number and repeat until the number is 1 and then HALT. Question of interest: For any starting number N, will the number always converge to 1? Prove it! Now, suppose you change the algorithm to “IF N is ODD THEN 5*N+3 ELSE divide by 2”? Then what happens?
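The recipe above transcribes almost line-for-line into code (a sketch only; nothing guarantees the loop ever exits, which is the whole point of the Conjecture):

```python
def ulam(n):
    """The recipe as stated: if n is odd, triple it and add 1; if even,
    halve it; repeat until the number is 1, then halt. Termination for
    every n > 0 is precisely the unsolved Collatz Conjecture."""
    trace = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        trace.append(n)
    return trace
```

Swapping the odd-case line for `5 * n + 3` gives the variant asked about at the end -- and that variant is not known (or believed) to reach 1 from every start.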


 * We can simplify the algorithm (in a sense) when we realize that (3*N+1) is always EVEN, so (3*N+1) will end up being divided on the next repeat. So we can just skip this predictable step and say: “IF N is ODD then (3*N+1)/2 else N/2”.


 * If we convert this to a machine-algorithm, we can get away with a counter machine (using DECREMENT register, INCREMENT register, and JUMP IF register is ZERO, and maybe ADD and COPY to make things easier). A very efficient Ulam-machine aka “algorithm” is as follows: First test to see if the number N is 1; if so then done. If not, halve N by successive decrements of N until 0 while at the same time incrementing “H” every other decrement (e.g. If N = 11, N/2 = 5 + ODD determination). Keep this half (H = N/2 = 5). If the process of halving resulted in the ODD determination (it did), triple the half H by 3 repeated additions of H to N plus two increments (i.e. N = 3*H+2 = 3*5+2 = 17). Otherwise you just keep the half H and copy it to N. Recycle this new number N (i.e. recycle 17) and Repeat until the number is 1.
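The halving-with-parity procedure can be sketched as follows (the function names are mine; Python loops stand in for the successive decrements of the counter machine):

```python
def ulam_machine_step(n):
    """Halve N by successive decrements, incrementing the half H on
    every other decrement and noting whether N was odd; then either
    keep H or form 3*H + 2, as in the trace (N = 11 gives 17)."""
    h, odd = 0, False
    while n > 0:
        n -= 1
        odd = not odd
        if not odd:          # every other decrement bumps the half
            h += 1
    return 3 * h + 2 if odd else h

def ulam_machine(n):
    """Repeat until N is reduced to 1; return the number of steps."""
    steps = 0
    while n != 1:
        n = ulam_machine_step(n)
        steps += 1
    return steps
```

The identity behind the trick: for odd N, (3*N+1)/2 = 3*(N div 2) + 2, so the machine never needs to form the intermediate even number.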


 * Here’s another machine-based suggestion: we start with a counter machine designed around the Peano axioms (as opposed to the INCREMENT-DECREMENT version). “X” is any “register” in the machine, “xxx” is some instruction number:
 * (1) CLEAR contents of register X and go to next instruction: CLEAR X
 * (2) INCREMENT (add one to) contents of register X and go to next instruction: INCREMENT X
 * (3) IF contents of register X = contents of register Y THEN JUMP TO INSTRUCTION xxx ELSE go to next instruction: IF X = Y THEN xxx
 * (4) HALT


 * Suppose we want to write a routine that DECREMENTS (decreases by 1) the value in register A. How do we do this? It turns out we need two more registers, call them C for “copy” and D for “difference”. The example actually shows how to create both a "DECREMENT" subroutine as well as a "COPY" subroutine:


 * ’’start’’:
 * CLEAR C  ; 0 --> C
 * CLEAR D  ; 0 --> D
 * IF N=C THEN ’’copy’’  ; C=0, so we’re testing for N=0
 * INCREMENT C  ;here is where we get the “extra 1”
 * ’’loop_1’’:
 * IF N=C THEN ’’copy’’ ;is the copy equal to our number N
 * INCREMENT C  ; [C]+1 --> C
 * INCREMENT D  ; [D]+1 --> D
 * IF N=N THEN ’’loop_1’’ ; a trick to create an unconditional jump
 * ’’copy’’:  ;D now has the difference in it, but we have to move it to N
 * CLEAR N   ; 0 --> N
 * ’’loop_2’’:
 * IF N=D THEN ’’done’’
 * INCREMENT N  ; [N]+1 --> N
 * IF N=N THEN ’’loop_2’’
 * ’’done’’:
 * HALT
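For checking, here is a line-for-line transliteration of the routine (Python control flow stands in for the IF X = Y THEN xxx jumps; the registers become lowercase variables):

```python
def peano_decrement(n):
    """DECREMENT via the four Peano-machine instructions: C runs up
    toward N while D lags one behind; then D is copied back into N.
    Decrementing 0 yields 0 (proper subtraction)."""
    c = 0                 # CLEAR C ; 0 --> C
    d = 0                 # CLEAR D ; 0 --> D
    if n != c:            # IF N=C THEN copy (C=0, testing for N=0)
        c += 1            # here is where we get the "extra 1"
        while n != c:     # loop_1: IF N=C THEN copy
            c += 1
            d += 1
    n = 0                 # copy: CLEAR N
    while n != d:         # loop_2: IF N=D THEN done
        n += 1            # INCREMENT N
    return n              # done: HALT
```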


 * If we stick with a counter machine with instructions (INCREMENT, DECREMENT, CLEAR and COPY) the multiply routine and division routine are of comparable difficulty. Here is "Division by repeated decrement": At the start, N = starting number to be divided; at the end, N = 0, Q = quotient, R = remainder (residue), D = divisor. This does not check to see if divisor D is 0.
 * N = D*Q + R


 * start:
 * CLEAR Q
 * outer_loop:
 * CLEAR R
 * COPY D to DVSR ;DVSR = loop counter
 * inner_loop:
 * JUMP IF N IS ZERO TO N_is_empty
 * DECREMENT N
 * INCREMENT R
 * DECREMENT DVSR ;DVSR = loop counter
 * JUMP IF DVSR IS NOT ZERO TO inner_loop
 * INCREMENT Q
 * JUMP TO outer_loop
 * N_is_empty: etc . ..
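A transliteration of the division routine, for checking (like the original, it does not guard against D = 0; the variable names mirror the registers above):

```python
def divide_by_decrement(n, d):
    """Division by repeated decrement: returns (Q, R) with N = D*Q + R."""
    q = 0                        # CLEAR Q
    while True:                  # outer_loop
        r = 0                    # CLEAR R
        dvsr = d                 # COPY D to DVSR (loop counter)
        while True:              # inner_loop
            if n == 0:
                return q, r      # N_is_empty
            n -= 1               # DECREMENT N
            r += 1               # INCREMENT R
            dvsr -= 1            # DECREMENT DVSR
            if dvsr == 0:
                break            # fall through when DVSR reaches zero
        q += 1                   # INCREMENT Q, JUMP TO outer_loop
```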

I dunno, just some thoughts. wvbaileyWvbailey 17:43, 28 June 2007 (UTC)


 * The following are even more primitive. They are shorter but at the expense of having to specify "the next move". The steps are numbered, and we assume that the person/machine can recognize their identifying "numbers" (aka symbols) etc. The first one is the "DECREMENT" version above.


 * ’’start’’:
 * 1 CLEAR C, go to step 2,  ; 0 --> C
 * 2 CLEAR D, go to step 3,  ; 0 --> D
 * 3 IF N=C THEN go to step 8 ’’move’’ ELSE go to step 4  ; C=0, so we’re testing for N=0
 * 4 INCREMENT C, go to step 5  ;here is where we get the “extra 1”
 * ’’loop_1’’:
 * 5 IF N=C THEN go to step 8 ’’move’’ ELSE go to step 6 ;is the copy equal to our number N
 * 6 INCREMENT C, go to step 7  ; [C]+1 --> C
 * 7 INCREMENT D, go to step 5  ; [D]+1 --> D
 * ’’move’’:  ;D now has the difference in it, but we have to move it to N
 * 8 CLEAR N, go to step 9      ; 0 --> N
 * ’’loop_2’’:
 * 9 IF N=D THEN go to step 11 ’’done’’ ELSE go to step 10
 * 10 INCREMENT N, go to step 9  ; [N]+1 --> N
 * ’’done’’:
 * 11 HALT


 * Addition program: A+B --> S. The computational model is the same Peano counter machine (or a person with some paper divided into 4 "locations" (squares) labelled A, B, S, T). Begin with two "numbers" in locations A and B. (Tally marks are much more peculiar and interesting/revealing with respect to the equality test; their use points out that "numbers" (symbols like 1, 2, 3 . . .) have implied meanings for humans, for instance "symbol 4 is the successor of symbol 3", etc.) The "summation" will appear in the square called "S". For a person "CLEAR" will mean "ERASE"; "INCREMENT" will mean "put another tally mark (or "number-symbol") in the specified square, overwriting not allowed".
 * [A]+[B] --> S


 * 1 CLEAR S, go to step 2,
 * 2 CLEAR T, go to step 3,
 * 3 IF A=S THEN go to step 5 ELSE go to step 4,
 * 4 INCREMENT S, go to step 3,
 * 5 CLEAR T, go to step 6,
 * 6 IF T=B THEN go to step 9 else go to step 7,
 * 7 INCREMENT S, go to step 8,
 * 8 INCREMENT T, go to step 6,
 * 9 HALT

In this one I eliminated the "clues" as to what the thing is doing: it is now so primitive it seems (to me anyway) absolutely mind-numbing. Which is the point: To add any two integers written in locations "A" and "B", just pretend to be a robot, follow the rules and the sum will appear at "S" as specified. wvbaileyWvbailey 22:21, 28 June 2007 (UTC)
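The nine-step program transliterates directly (a sketch; Python loops stand in for the numbered go-tos, and the sum appears in s):

```python
def tally_add(a, b):
    """[A]+[B] --> S using only CLEAR, INCREMENT, and equality tests."""
    s = 0                # 1 CLEAR S
    t = 0                # 2 CLEAR T
    while a != s:        # 3 IF A=S THEN go to step 5
        s += 1           # 4 INCREMENT S, go to step 3
    t = 0                # 5 CLEAR T
    while t != b:        # 6 IF T=B THEN go to step 9
        s += 1           # 7 INCREMENT S
        t += 1           # 8 INCREMENT T, go to step 6
    return s             # 9 HALT
```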

Termination
I've had a professor or two question the proposition that an algorithm necessarily must be defined as terminating at/in a given state. Although that's how your typical algorithm works, wouldn't certain algorithms violate this rule, e.g. algorithms for interpreted-language garbage collection or operating system watchdog processes, which have to be told when to quit by another process rather than terminating at/in a definite end-state? Groupthink 21:38, 20 July 2007 (UTC)


 * This is interesting (and I think, philosophically difficult). The idea of algorithmic termination appears as early as Hilbert's Diophantine equation Problem 10 (1900): "[T]o devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers" (quoted from a new Dershowitz-Gurevich paper A Natural Axiomatization of Church's Thesis, 10 July 2007).


 * As you point out (the watchdog example) an algorithm may have no exit point at all. Indeed, as described by Turing in his first proof (1936-7), his "a-machine" had no "halt" instruction (but this notion did appear in Post 1936); his a-machine (programmed to be a universal machine) just went on and on producing numbers to test (testing for what Turing called "circle-free" behavior) and printing 1's and 0's on its tape ad infinitum.


 * Turing’s notion here is of the “output region” of the tape as being “external to” the “algorithm” (i.e. he could have used a two-tape Turing machine that would write-only the 1’s and 0’s to the 2nd tape). I suppose an observer could stop the machine at any time and see what was printed on the output region. But when the machine is testing its own number we as observers suffer a quandary -- will it ever again print a new 1 or a 0? (This is Turing's proof of what has come to be called "the halting problem"; in Turing's proof the machine prints the same pattern of 1's and 0's ad infinitum, i.e. it ends in a circle.)


 * Kleene (1952) answers this with the notion of printing a special symbol “u” so the observer knows that the machine has not come to a decision, i.e. it is “busy”.


 * Must an algorithm exhibit "an object"? The following shows that the answer is no, but there is a philosophic issue beginning to appear. Suppose you consider yourself a machine. Suppose you are dreaming. Your dreaming produces no output, and yet something is going on (you are having an experience). Or, suppose you are thinking. Again, your thinking produces no output, but something is certainly going on. Probably what "the objects" are, in these cases, is the collection of internal states that represents your dream or thoughts. These “states” are Turing’s notion of “complete configuration”, which includes both what is in the “state register” (e.g. program counter) and everything on the tape.


 * I am not pushing Gurevich here (I don’t understand it well enough to push it even if I wanted to), but they seem to be proposing the same thing (they are attempting to axiomatize the notion of algorithm):


 * “In brief, the postulates say the following about algorithms:
 * “I. An algorithm determines a sequence of “computational” states for each valid input
 * “II. The states of a computational sequence can be arbitrary structures
 * “III. The transitions from state to state in computational sequences are governed by some finite description.” (Nachum Dershowitz and Yuri Gurevich, ‘’A Natural Axiomatization of Church’s Thesis’’, 10 July 2007)


 * Observe that they say nothing about output; “output” is (apparently ?) reflected in the state currently occupied. This is, in fact, the way hardware Moore state machines operate. The “input” drives the state transitions. The “output” is not a function of the input directly (as it is in the case of a Mealy machine) but rather, the input “passes through” the state machine and comes out modified as “a state”, and the current state determines any output that will appear (and output is not necessary).
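The Moore/Mealy contrast can be shown with a one-bit toggle machine (an invented illustration, not taken from the Dershowitz-Gurevich paper):

```python
def moore_step(state, inp):
    """Moore machine: the output is read off the current state alone;
    the input only drives the state transition."""
    output = "ON" if state == 1 else "OFF"
    next_state = (state + inp) % 2    # input toggles the one-bit state
    return next_state, output

def mealy_step(state, inp):
    """Mealy machine: the output is a function of the current state
    AND the current input."""
    output = "ON" if (state == 1 and inp == 1) else "OFF"
    next_state = (state + inp) % 2
    return next_state, output
```

In the Moore case the input "passes through" the machine and shows up only as a state, exactly as described above; in the Mealy case the same input affects the output directly.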


 * The problem of "output" arises when the machine is under observation by an external-to-the-machine observer (machine #2, algorithm #2, person, etc). For example, you the dreaming machine produce nothing at all: an observer cannot decide if you are dreaming or dead. Consider a microcontroller hooked up to a battery and maybe a few parts (like a crystal/resonator and a switch or two and a couple light-emitting diodes) but without an output e.g. “light on”. Is it operating or not? Is it having an experience? Is there “something it is like to be this machine?” Is there any “internal behavior” whatever? You close a switch. Nothing happens. You wait. And wait. Will an output ever appear?


 * The notion of “output” from machine #1 seems to involve (require? imply?) the notion of “observer” (aka machine #2, algorithm #2, person, etc). “Output” is either for the benefit of another (machine #2, algorithm #2, person, etc), or because machine #1 wants to make something happen using another (machine #2, algorithm #2, person, etc) as an agent. It would seem that the observer is bringing something to the party -- an interpretation of "the meaning" of "the output".


 * So the above argument does seem to support the notion that an algorithm need not produce an output. But in this no-output case the algorithm is of use only to itself (i.e. the dreamer's dreams are of use only to the dreamer). wvbaileyWvbailey 15:31, 22 July 2007 (UTC)


 * I thought of another example: a hardware divide-by-10 counter (i.e. an integrated circuit that "outputs" 0000, 1000, 0100, . . . 0001, 1001, 0000 -- written in left-to-right binary that "counts to 9" and then "rolls over" its count.) Internal to the device, the 4-bit "output" is fed back, through some logic gates, to its 4 flip-flops and thereby determines the next state (aka the next "number"). This device is executing a distinct algorithm (or is a distinct algorithm, or represents a distinct algorithm, depending on your point of view) but it does not halt until you "pull the plug". In a sense there is no "output" unless an observer is there to observe the status of the "bits" that appear at the device's pins. If the counting is slow enough a human observer can use a voltmeter to probe the pins (e.g. +5 volts is a "1", +0 volts is a "0"), or use LEDs+resistors to watch the pattern of ON-OFFs, or a signal-analyzer etc. etc. Or the output pins could go to another "circuit" and cause something to happen in that circuit. But in all cases there's an observer, and the observer is bringing "meaning" to the behavior of the output pins. (Until the observer enters the picture, does "the algorithm" have an intrinsic meaning? Or is an external observer required to bring their extrinsic "meaning" to the algorithm? I dunno. But the dreaming example seems to indicate that an "algorithm" is capable of, and can possess, an intrinsic meaning. Spooky...)wvbaileyWvbailey 20:58, 22 July 2007 (UTC)
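The divide-by-10 counter is easy to simulate in software (a sketch in Python rather than hardware; the LSB-first bit strings mirror the left-to-right binary notation used in the comment above):

```python
def decade_counter(ticks):
    """Simulate the divide-by-10 counter described above: four flip-flops
    whose feedback logic counts 0..9 and then rolls the count over.
    Yields each state as an LSB-first bit string, matching the
    left-to-right binary notation in the comment."""
    count = 0
    for _ in range(ticks):
        yield format(count, "04b")[::-1]  # e.g. 1 -> "0001" -> "1000"
        count = (count + 1) % 10          # the feedback: roll over after 9

print(list(decade_counter(11)))
# ['0000', '1000', '0100', '1100', '0010', '1010', '0110', '1110',
#  '0001', '1001', '0000']
```

The `ticks` parameter stands in for the observer "pulling the plug": the device itself has no halting state, so the simulation must impose one from outside.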


 * Wow, you've made some great points there. The observer dilemma rather reminds me of the issues raised by Schroedinger's famous feline thought experiment:  quantum mechanics says that a state isn't truly defined until an observer causes the state "waveform" to collapse into a definite state – but until then, the cat in the box is both alive and dead.  As for the Moore machine point (next state equals current state plus input, and the only "indicator" of what the machine's doing is the current state) IIRC a Moore machine doesn't ever really "terminate", it just loops through the same state (or states) forever (I think a Mealy machine works the same way – neither of them have "exit" arrows or arcs, right?)


 * At any rate, aside from the philosophical issue, there's a utility issue too. If one can't define the end-state of an algorithm, can it be said to be "useful"?  For that matter, is "usefulness" intrinsically part and parcel of an algorithm? (Which in turn begs the question, is the ostrich algorithm thus not a true algorithm because it does nothing and is therefore "useless"?) Groupthink 03:21, 23 July 2007 (UTC)


 * The philosopher Searle (see algorithm characterizations) is very bothered by all of this. Other philosophers of mind have been bothered about the dilemma of the "wave collapse" as well. (I am too). Also there's a parallel to the question: "Does the tree falling in the woods make a noise when no one is there to hear it?". I believe the answer to this 2nd one is NO: the tree falling in the woods does indeed cause compression-waves in the surrounding air. But no, it does not "make a sound" until there's a sentient observer (e.g. a crow) with adequate auditory apparatus (ears and a functional hearing apparatus) to receive the compression waves, and a mind suitable to the task of recognizing the auditory event as "loud scary noise" (Yikes! Bad-noise! Fly away!).


 * Either Moore or Mealy machines (either designed in hardware or simulated in software) can have one or more "end" states where the machine ends up truly halted. In years past I designed controls for machinery (plasma arc metal cutters -- a "cutting torch" connected to a "power supply" and "gas supply") with such a state or states. These were "error" states such as "low gas-pressure", "torch cap off", "low voltage" -- these states would disable the machine and render it "safe". However, how the machine escaped from such states depended on the "specifications" of the machine, in particular the safety issues involved. First example: a plasma cutter has a "cap off" state that the state machine would end in whenever the operator removed the front "cap" of the cutting torch to service "the electrode". Unless all power were removed to the torch, this could result in a hazardous "cap-off-but-armed" situation -- if the operator were to accidentally push the torch's "start" push-button he could be electrocuted. Only when the operator put the cap back on AND turned the machine OFF then ON AND removed his finger from the start push-button would the machine "reset" (to a "Power-Up Reset" state with an incoming arrow from nowhere). We lost a lot of sleep over this; machines also used "diversity" and "redundancy" as additional safeguards (e.g. big electrical relays called "contactors" that interrupted the AC power to the power-conversion circuitry). A second example: such machines are equipped with one or more gas pressure switches that, if the pressure(s) dropped too low, caused the machine to go to a cyclic "halt-like" polling state; the state machine would poll the switch(es), and if the pressure(s) were to rise to an adequate level the machine would "re-arm" itself. In this case the state machine would have a "gas-low" state with a conditional-arrow looping back to this "gas-low" state plus an escape-arrow to a "ready-to-fire" state. But again, we had to be careful that the operator had not maintained his finger on the start-switch; the operator had to let go of the start-switch AND the gas pressure(s) had to be okay before the machine would go to the "ready-to-fire" state. To summarize: we had two types of "halt" or end-states, one a true halt, and another a polling state.
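The two kinds of end-state described above (a true halt escapable only by a power-cycle reset, and a polling halt-like state that re-arms itself) can be sketched as a transition function. All state and input names below are illustrative stand-ins, not from any real controller design:

```python
# Hypothetical sketch of the two end-state styles in the plasma-cutter
# story: CAP_OFF is a true halt (only a power-cycle reset escapes it);
# GAS_LOW is a cyclic polling state that re-arms when pressure recovers
# AND the operator has released the start push-button.

def step(state, inputs):
    cap_on, gas_ok, start_pressed = inputs
    if state == "CAP_OFF":
        return "CAP_OFF"           # true halt: no input escapes it
    if not cap_on:
        return "CAP_OFF"           # cap removed -> disable the machine
    if state == "GAS_LOW":
        # polling state: conditional arrow loops back until BOTH
        # conditions hold, then the escape arrow fires
        return "READY" if (gas_ok and not start_pressed) else "GAS_LOW"
    if not gas_ok:
        return "GAS_LOW"
    return "READY"

def power_up_reset(inputs):
    """The 'Power-Up Reset' state with an incoming arrow from nowhere:
    re-enter the machine only where it is safe to do so."""
    cap_on, gas_ok, start_pressed = inputs
    if cap_on and not start_pressed:
        return "READY" if gas_ok else "GAS_LOW"
    return "CAP_OFF"
```

The key design point survives the simplification: `step` never leaves `CAP_OFF` on its own, whereas `GAS_LOW` is escapable, but only when the pressure AND the start-button conditions are both satisfied.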


 * The "utility" issue harks back to Searle's contention that only we as sentient beings (i.e. minds) can bring "meaning" to an otherwise meaningless syntactic event (or process or symbolic-expression) called "an algorithm". But I am bothered by the dreamer -- the dreams are of use to the dreamer alone. Contrary to what Searle has written, the dreamer seems to imply that an algorithm can have meaning to (and derive meaning from) itself and does not necessarily require external "validation" from, or "utility" for, an observer. Thus an algorithm can be its own observer and thereby give meaning to itself! wvbaileyWvbailey 18:10, 23 July 2007 (UTC)

Al Kwarizmi's genetic makeup, religion, political luck-of-the-draw
I am sick of this and will now edit out anything that even closely smacks of a religion/racial point of view. We don't care. Al Kwarizmi was a mathematician and astronomer (apparently, so the historians say). If someone can place, in earth coordinates and altitude, his point of origin and where he lived and worked, fine, add it. Add in what "regime" he flourished, and what "state" this location/place currently is. But I'm sick of this racial crap. We don't write: "David Hilbert, white, Christian mathematician". We don't write: "Albert Einstein, European white Jewish scientist...". Why should we write "Al Kwarizmi, Islamic [Muslim, Moslem, Persian, whatever] mathematician"? The whole thing stinks of somebody trying to push a racial/religion purity point. Enough already. wvbaileyWvbailey 22:17, 28 July 2007 (UTC)

because a computer program is essentially an algorithm ??
This statement in the article is contradictory to the definition given in the intro. Not all computer programs are algorithms. Not all computer programs proceed from a known initial state. Not all computer programs are instructions for accomplishing some task. Not all computer programs proceed through a well-defined series of successive states. And not all computer programs eventually terminate in an end-state. The only essential ingredient of a computer program is an instruction. Metrax 22:39, 21 October 2007 (UTC)
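A minimal illustration of Metrax's point that not all programs terminate or accomplish a task (the step counter below is only there so the loop can be observed without hanging; a bare `while True: pass` is the purest form):

```python
import itertools

def not_an_algorithm(max_steps=None):
    """A perfectly valid computer program with no end-state: it loops
    forever doing nothing. The max_steps parameter is an artifact of
    observation, not part of the program's own behavior."""
    for step in itertools.count():
        if max_steps is not None and step >= max_steps:
            return step  # the observer gave up; the program never would

print(not_an_algorithm(1000))  # 1000 -> still running after 1000 steps
```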

Wrong statement?
I am convinced that the first part of following statement is based on a misunderstanding of what Kleene writes:

"Others, including Kleene, include procedures that could run forever without stopping; such a procedure has been called a "computational method" (Knuth 1997:5) or "calculation procedure or algorithm" (Kleene 1952:137); however, Kleene notes that such a method must eventually exhibit "some object" (Kleene 1952:137)."

In fact, Kleene writes:

"Similarly, we may have a "calculation procedure" or "algorithm" (and hence a "calculation problem") in relation to a general question which requires for an answer, not "yes" or "no", but exhibiting some object."

Therefore, the new aspect of this more general definition is that he now allows for more general output, not merely "yes" and "no". He still requires termination, though, as is obvious by the last part of the sentence. Claus from Leipzig 13:51, 24 October 2007 (UTC)


 * This matter has come up before. It's an interesting matter, and I will posit an "interpretation" (a theory). First, go to Talk:Division by zero and look at the example that I posted there -- it uses a Shepherdson-Sturgis counter machine to calculate Q = INT(N/X). In the first example, when X="0" (0 is defined as the condition of "register is empty"), the algorithm produces, in the place "Q" ("quotient" register), a sequence of objects one after another that we interpret to be (0, 1, 2, 3, ..., n, ..., ad infinitum). But the algorithm never reaches the instruction "halt".


 * In the second example, an "extension" to the first algorithm requires the machine to produce a non-zero object in the place "Y". If input [X]=0 ("contents of X"=0 =def "empty") then the algorithm never gets to these last two steps that copy [Q] into Y.


 * Thus if we were to watch the first algorithm (easy to do on a spreadsheet, or if we build this thing of real stuff (also easy to do)) we see it steadily putting "numbers" (aka objects) into Q. But in the second case we do not see this; we only see -- "eventually" -- a "number" (not equal to "0") appear in Q. Another, 2nd, extension could be used to light a signal-lamp and blow a horn by use of a place called "DONE" to indicate that a "number" has been put into Q; that way we could go get some supper instead of hanging out waiting. Thus we have to conclude that the notion of "producing an object" is relative to our (that is: us the observers') behavior and/or desires, not intrinsic to the machine or to the notion of algorithm. (I.e. the notion of "termination" -- what it really really means -- must be defined in the algorithm's specification). Bill Wvbailey 17:49, 24 October 2007 (UTC)
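The counter-machine division described above can be sketched by repeated subtraction (a Python stand-in for the Shepherdson-Sturgis registers; the `max_steps` cap plays the role of the observer who eventually stops watching):

```python
# Q = INT(N/X) by repeated subtraction, as in the counter-machine example.
# When X == 0 the loop condition never fails: Q climbs 0, 1, 2, ... forever
# and the "halt" instruction is never reached, exactly as described above.

def int_divide(n, x, max_steps=None):
    q, r, steps = 0, n, 0
    while x == 0 or r >= x:    # with x == 0 this test is always true
        r -= x                  # subtract the divisor (a no-op when x == 0)
        q += 1
        steps += 1
        if max_steps is not None and steps >= max_steps:
            return q, False     # observer gave up; "halt" never reached
    return q, True              # reached "halt": q == n // x

print(int_divide(7, 2))        # (3, True)
print(int_divide(7, 0, 100))   # (100, False): Q is still climbing
```

Whether the machine "produced an object" in the x == 0 case depends entirely on the observer's cutoff, which is the point being made.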

On recursion and iteration
Under the heading "Classes --> Classification by implementation" there's a slight inaccuracy: Only tail recursive algorithms can be re-coded iteratively, while nested recursive algorithms cannot. Droste effect 23:48, 4 November 2007 (UTC)


 * I don't think so; it's just that there is a simple way to convert tail recursion to iteration. For instance, Ackermann isn't tail-recursive but has iterative versions.  -R. S. Shaw 02:02, 5 November 2007 (UTC)
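R. S. Shaw's point can be demonstrated directly: the Ackermann function is not tail-recursive (its outer call takes the inner call's result as an argument), yet it is computable iteratively by managing the pending calls on an explicit stack:

```python
def ackermann_recursive(m, n):
    """Standard nested-recursive definition (not tail-recursive)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann_recursive(m - 1, 1)
    return ackermann_recursive(m - 1, ackermann_recursive(m, n - 1))

def ackermann_iterative(m, n):
    """Same function, no recursion: the stack holds the first arguments
    of all calls still waiting for a result."""
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n = n + 1
        elif n == 0:
            stack.append(m - 1)
            n = 1
        else:
            stack.append(m - 1)  # outer call A(m-1, _) waits for the result
            stack.append(m)      # inner call A(m, n-1) is computed first
            n = n - 1
    return n

print(ackermann_iterative(2, 3))  # 9
```

This only trades the language's implicit call stack for an explicit one, but it shows the recursion/iteration classification in the article is about expression, not about which functions are computable.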