Talk:Lambda calculus/Archive 1

Computer science?
The article starts with "In computer science, lambda calculus...". This doesn't make sense to me; lambda calculus was originally part of mathematics. In fact the sentences immediately afterwards make the start sound silly since they cite Church and Kleene from the 1930s, when there were no computers and no computer science. I am changing this to "mathematical logic and computer science". --Fuchsias 22:05, 1 August 2006 (UTC)


 * Computer Science isn't computer programming, it's applied mathematics, and it's been around for much longer than practical electronic computers have. --Jorbettis 05:10, 15 June 2007 (UTC)

Syntax of constant function
Maybe I'm being really dumb but isn't (λ x. x 3) (λ x. x+2) identical to 3?? (λ x. x 3) is the constant function 3, applying it to (λ x. x+2) still gives you 3? Kuratowski's Ghost 15:28, 23 Nov 2004 (UTC)


 * The constant function you mean is (λ x. 3) which is different from (λ x. x 3). See the difference? :-) -- Jan Hidders 15:50, 23 Nov 2004 (UTC)


 * Nope, what I'm saying is that my understanding of (λ x. x 3) is that it is identical to writing (λ x. x) 3, which is 3. The article seems to treat it as meaning λ x.( (x) 3), in other words it's treating function application as having precedence over λ quantification. I've never seen that before and can't find any mention of it; all the texts I have include parentheses to avoid ambiguity. I've never really seen something like (λ x. x 3), which can even be misinterpreted as multiplication of x by 3. :P Kuratowski's Ghost 00:10, 24 Nov 2004 (UTC)


 * The precedence rules are later explained in the section that gives the formal definition, right after the syntax. These are AFAIK the common conventions when using the dot notation. I agree if you say that we shouldn't apply those rules before we have actually introduced them. I leave it to you to add brackets to disambiguate. -- Jan Hidders 12:38, 24 Nov 2004 (UTC)


 * I'm trying to find a good rigorous source on the subject. Some of the stuff I have does naughty things like relying on the fact that people know that f and g are functions and that x and y are numbers so that fx is function application while xy is multiplication :) Kuratowski's Ghost 15:25, 24 Nov 2004 (UTC)


 * I don't think that's a good idea. We should be consistent in this article and if you simulate numbers in pure calculus you cannot tell the difference. The best reference is the work by Barendregt, but that's probably not what you are looking for. For an introduction google for "introduction lambda calculus barendregt barendsen". -- Jan Hidders 23:03, 24 Nov 2004 (UTC)


 * It looks quite good, still not rigorous enough for my liking ;) Kuratowski's Ghost 17:26, 30 Nov 2004 (UTC)

Assuming that 2 + 3 = 5, (λ x. x 3) (λ x. x+2) returns 5. (λ x. x 3) a is the same as a 3, that is, it returns the result of applying its argument to 3. 70.177.165.188 (talk) 02:23, 7 December 2007 (UTC)
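The evaluation sketched above can be mimicked with Python lambdas standing in for λ-terms (an informal illustration; the numeral 3 and the + operator are borrowed from Python and are not primitives of the pure calculus):

```python
# (λ x. x 3): a function that applies its argument x to 3
apply_to_three = lambda x: x(3)

# (λ x. x+2): a function that adds 2 to its argument
add_two = lambda x: x + 2

# (λ x. x 3) (λ x. x+2) reduces to (λ x. x+2) 3, i.e. 3 + 2
result = apply_to_three(add_two)
print(result)  # 5
```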

Kudos
Really excellent work on this. When I saw the initial version, I went ahead and made a stub for Y combinator, admittedly for humor reasons, but perhaps something about some of the combinators would be nice. :) -- EdwardOConnor
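As an informal footnote to the Y combinator mention: in a strict language like Python the plain Y combinator loops forever, but its call-by-value variant (commonly called the Z combinator) can be sketched as follows; the factorial example is purely illustrative:

```python
# Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial without explicit self-reference: Z ties the recursive knot
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```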

Renaming
When this article talks about renaming formal arguments:

(λx. λy. y x) (λx. y)    to     λz. z (λx. y)

It doesn't make sense to me.

- It's an information loss, because we don't know what z is.

- It conflicts with the discussion about normal forms, since then any expression can be transformed to another one just by calling it λ z. z.

But it is a very interesting topic, and I'm certainly learning from this page. -- forgotten gentleman


 * I made an attempt to clarify the point. Hugo Herbelin (talk) 10:19, 25 December 2007 (UTC)

Alpha conversion
Change of bound variables is known as alpha conversion, and has no effect on the meaning of the λ-expression: e.g. (λ x. f x) 3 is just the same as (λ y. f y) 3. Your example is moderately interesting, and also requires the use of beta conversion, or substitution, which goes as follows: (λx. λy. y x) (λx. y) = (λx. λz. z x) (λx. y) (by alpha conversion - do this first!) = (λz. z (λx. y)) (by beta conversion, substituting (λx. y) for x).
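The alpha-conversion point can be checked informally with Python lambdas standing in for λ-terms (f below is an arbitrary function chosen just for the example):

```python
f = lambda n: n * 10  # an arbitrary function for the example

# (λ x. f x) 3 and (λ y. f y) 3: renaming the bound variable
# (alpha conversion) does not change the result
a = (lambda x: f(x))(3)
b = (lambda y: f(y))(3)
assert a == b  # both evaluate to f(3) == 30
```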


 * This should work this time, because there is no clash on y.

Does that make you happier? User talk:David Martland


 * I hope that the reordering of the conversion processes now gives the correct results - thanks for the comment re free occurrences. This example does seem quite instructive - it ought to be possible to get somewhere going the other way surely ...


 * The article is veering towards considerable formalism - this is in many ways a good thing, but may also render it less accessible to people who don't have much of an idea about this stuff, and want to learn. Maybe some more examples would help? Sometimes intuitive examples do help. David Martland 22:22 Dec 7, 2002 (UTC)

Maybe there should be a note that this is the untyped lambda calculus, and that there are also several typed lambda calculi.


 * done.

I removed this:


 * Church invented lambda-calculus in order to set up a foundational project restricting mathematics to quantities with "effective procedures". Unfortunately, the resulting system admits Russell's paradox in a particularly nasty way; Church couldn't see any way to get rid of it, and gave up the project.

This is taken from FOLDOC; I don't think it is correct. The motive ascribed to Church (and Kleene) is wrong: they wanted to formalize the notion of computability. Furthermore, the paragraph presents the project of the lambda calculus as a failure; indeed it was a smashing success: it solved the Entscheidungsproblem and defined computable function for the first time cleanly (as the article already explains). It is not that "Church couldn't see any way to get rid of it", but in fact he proved that "there is no way to get rid of it", thus solving the Entscheidungsproblem in the negative. AxelBoldt 08:51 Sep 3, 2002 (PDT)

The end of the section on alpha conversion suddenly and inexplicably drops in an example in a C-family programming language, which won't make sense to anyone without a programming background. Can it be replaced with something more friendly/appropriate? Kaleja 17:45, 23 May 2007 (UTC)

Barendregt quote
I did a little research and found an article by Henk Barendregt (one of the world's leading experts on lambda calculus) titled The impact of the lambda calculus in logic and computer science that you can find at http://citeseer.nj.nec.com/barendregt97impact.html and there he says:
 * (page 2) As to the representation of proofs, one of Church's original goals had been to construct a formal system for the foundations of mathematics by having a system of functions together with a set of logical notions. When the resulting system turned out to be inconsistent, this program was abandoned by him. Church then separated out the consistent subsystem that is now called the lambda calculus and concentrated on computability. (*1). [...]

At the top of page 5 you can read more about the details. -- Jan Hidders 12:16 Sep 3, 2002 (PDT)


 * Nice reference; we definitely have to weave this in then. AxelBoldt 14:20 Sep 3, 2002 (PDT)

Combinatory logic
I've written the beginning of an article about Combinatory logic, a variation of lambda calculus that has only application, and no abstraction. It seems to me it should be linked from here, but I'm not sure what to say. -- Mark Jason Dominus

De Bruijn indices
To do: De Bruijn discovered a version of the Lambda calculus that avoids the alpha-equivalence difficulties by eliminating all variable names. There should be a paragraph about that. I will write it if someone else doesn't get there first. Dominus 04:18 Nov 30, 2002 (UTC)
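As a rough sketch of the idea (the tuple representation and the 0-based index convention here are choices made purely for illustration, not from any particular source):

```python
def to_debruijn(term, env=()):
    """Replace variable names by De Bruijn indices.
    Terms: ('var', name), ('lam', name, body), ('app', fun, arg).
    An index counts how many binders lie between a variable
    and the lambda that binds it (0 = innermost)."""
    kind = term[0]
    if kind == 'var':
        return ('var', env.index(term[1]))
    if kind == 'lam':
        return ('lam', to_debruijn(term[2], (term[1],) + env))
    return ('app', to_debruijn(term[1], env), to_debruijn(term[2], env))

# λx. λy. y x  becomes  λ λ (0 1): names disappear entirely,
# so alpha-equivalent terms get identical representations
t = ('lam', 'x', ('lam', 'y', ('app', ('var', 'y'), ('var', 'x'))))
print(to_debruijn(t))  # ('lam', ('lam', ('app', ('var', 0), ('var', 1))))
```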

Alpha conversion quibble
The discussion of alpha-conversion is not quite correct. For example, it says


 * The alpha-conversion rule states that
 * λ V. E == λ W. E[V/W]
 * where V and W are variables, E is a lambda expression and E[V/W] means the expression E with every free occurrence of V in E replaced with W.

Here is a counterexample: Take V==x, W==y, and E==(y x). Then the definition above asserts that


 * λ x.(y x) == λ y.(y x)[x/y]
 * == λ y.(y y)

since I have replaced "every free occurrence of x with y", as instructed. But the equivalence is wrong.

I would like to fix this, following the exposition in section 1.1 of Lectures on the Curry-Howard Isomorphism by Sørensen and Urzyczyn. If nobody objects, I will. At the same time I would add a section about the De Bruijn formulation, which avoids the α-conversion issue entirely. Dominus 00:17 Dec 1, 2002 (UTC)


 * FWIW that certainly has my blessing (you just have to add that W does not occur freely in E and all occurrences of W in E[V/W] should be free) and I would even vote for a separate article on the De Bruijn formulation that explains the why and how of it. -- Jan Hidders 14:46 Dec 1, 2002 (UTC)
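The capture in the counterexample can be replayed informally with Python lambdas (y_free below instantiates the free variable y with an arbitrary function, just for illustration):

```python
y_free = lambda n: n + 1  # stands in for the free variable y

# λ x.(y x): x is bound, y is free
original = lambda x: y_free(x)

# A correct alpha conversion picks a fresh name: λ z.(y z)
renamed_ok = lambda z: y_free(z)

# The naive rename λ y.(y y) captures the free y
captured = lambda y: y(y)

assert original(3) == renamed_ok(3) == 4
# captured(3) raises TypeError: the argument 3 gets applied to itself,
# and the free y is no longer referenced at all
```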

Eta conversion
What about eta conversions? I used to know this stuff, but now it's very hazy. I think there are some circumstances where etas are needed - not just alpha and betas. David Martland 22:51 Dec 10, 2002 (UTC)


 * There's a bit on it now, but it needs work.

Are the last two paragraphs intended as a proof of eta-conversion or extensionality? It reads as if this is the intention. If this is the intention, then it is not a proof. One cannot prove two statements by showing that they imply each other. Here is a counterexample:

statement1 := If I let go of my pen it will not fall.

statement2 := Gravity does not exist.

If either statement is assumed to be true then the other logically follows; however they are both clearly false.


 * Extensionality is true by definition. The proof shows that the notion of eta-conversion gives rise to exactly the same equivalence relation as the notion of extensionality. greenrd 19:49, 28 December 2006 (UTC)
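Informally, the extensional reading of eta conversion can be observed with Python lambdas (f is an arbitrary function chosen for the example): the eta-expansion λ x. f x and f agree on every argument.

```python
f = lambda n: n * n  # an arbitrary function

eta_expanded = lambda x: f(x)  # the eta-expansion λ x. f x

# Extensionality: the two functions agree on every argument tested
for v in range(10):
    assert eta_expanded(v) == f(v)
```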

Kudos II
Great article! I'll point my students to it. -- Ellen Spertus

To do
TODO: connection between lambda calculus and natural language semantics Kowey 08:01, 27 Nov 2003 (UTC)

Parallelism in LC
I reverted the changes made by CarlHewitt, the reason being that the lambda calculus is suited neither for parallel nor for concurrent programming. This is motivated by Berry's sequentiality theorem (see, e.g., Barendregt's book, Ch. 14). In essence the theorem states that whatever execution is performed in the lambda calculus will always be sequential and can never be either concurrent or parallel. Note, however, that simulation of both parallel and concurrent programming is possible: the lambda calculus is Turing complete. Koffieyahoo 6 July 2005 11:37 (UTC)


 * The accepted practice on Wikipedia when you disagree with something in the text is as follows: maintain a neutral point of view by laying out the arguments for and against, instead of imposing your own point of view by simply reverting anything that you disagree with.--CarlHewitt 2005 July 6 15:34 (UTC)


 * This is massively hypocritical as you have reverted all alternative views (and even the addition of pointers to online papers) in the Actor model page that you handle as if it were your personal property --Blissex


 * Removing incorrect information is allowed, and your statement regarding parallelism is incorrect. So it seems to me we can agree on a correct statement on this discussion page and leave any incorrect information out of the article for the time being. Koffieyahoo 6 July 2005 17:45 (UTC)


 * Actually the lambda calculus is fairly well suited for parallel computation but not for concurrent computation. On the other hand concurrent computation as in the Actor model and Process calculi can perform the parallelism of the lambda calculus. Theory of the lambda calculus says that lambda calculus computations can always be carried out sequentially, not that they must be carried out sequentially.--CarlHewitt 2005 July 6 15:30 (UTC)


 * No it is not. Try to define, e.g., the parallel-or in the lambda calculus (P(T,x) -> T; P(x, T) -> T; P(F, F) -> F). I think we can agree that this is about the most primitive parallel function that exists. Now, a consequence of Berry's theorem is that this function is *not* definable in the lambda calculus, so I don't see how you can justify that the lambda calculus allows for parallel computation. That the lambda calculus allows for parallel reduction, which is what you mean is something completely different. Koffieyahoo 6 July 2005 17:45 (UTC)


 * It's OK by me if you want to call computation by the name "reduction". You might try working through the parallel-or using the combinators for true and false.--CarlHewitt 2005 July 6 18:14 (UTC)


 * Clearly you only partially understood what I tried to say. The true and false are not the problem. If you want to introduce them as constants it's even fine. This still does not make it possible to write down the parallel-or as a lambda term. Koffieyahoo 6 July 2005 18:19 (UTC)


 * If you look in the literature, you will find true and false themselves defined as lambda expressions. Using them it is possible to study parallel-or using parallel reductions.--CarlHewitt 2005 July 6 18:37 (UTC)
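For reference, the encodings in question can be sketched with Python lambdas (an informal model; note that Python evaluates both arguments eagerly, so nothing here is parallel, consistent with the point under discussion):

```python
# Church booleans: true selects its first argument, false its second
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

# A sequential or: λp. λq. p p q (it inspects p first)
OR = lambda p: lambda q: p(p)(q)

to_bool = lambda c: c(True)(False)  # helper to observe results
print(to_bool(OR(TRUE)(FALSE)))   # True
print(to_bool(OR(FALSE)(FALSE)))  # False
```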


 * Given the existence of parallel rewriting models for the lambda calculus, I can see no problem with what Carl wrote. Koffieyahoo: you might like to think more carefully about what Berry's result tells us about the possibility of using the lambda calculus to express concurrent computation. --- Charles Stewart 6 July 2005 19:37 (UTC)


 * Charles has a good point, but the issue of whether the lambda calculus can implement parallel-or is not settled until the investigation is done. My conjecture would be that there are some limitations on the ability of the lambda calculus to implement parallel-or.  Of course the lambda calculus can clearly implement some kinds of parallelism such as the parallel reduction of the arguments of a lambda expression.--Carl Hewitt 8 July 2005 06:50 (UTC)


 * It's a direct consequence of Berry's theorem (this is immediate by Thm 14.4.12 in Barendregt's book) that the only way to write down the parallel-or in the lambda calculus is to write down some lambda term that alternates between the left and right argument of the parallel-or. Consequently, the implementation is no longer really parallel (even if parallel reduction is used): it can happen, for example, that the right argument of the parallel-or has already been reduced to true, but that before we really obtain this result (as in reduction to the lambda term that represents it), we first have to do some more reductions on the left argument. If the lambda calculus were truly parallel, however, this should not be necessary; the result should be available immediately.


 * That parallel reduction is possible at all is just a consequence of the lambda calculus being side-effect-free. It does not mean the lambda calculus is truly parallel.


 * I think it's best if the statement is changed into something much more concrete, like: multiple beta-redexes can be reduced at the same time, but this can always be emulated by reducing a number of beta-redexes one after another. Referring to sequentiality and parallelism is just too confusing in the light of Berry's theorem. Koffieyahoo 8 July 2005 09:35 (UTC)


 * The article currently states "Theory of the lambda calculus says that lambda calculus computations can always be carried out sequentially not that they must be carried out sequentially. The lambda calculus is suitable for expressing some kinds of parallelism but in general not concurrency. On the other hand concurrent computation as in the Actor model and Process calculi can perform the parallelism of the lambda calculus."


 * I do not see how Berry's theorem contradicts the above statement.--Carl Hewitt 21:22, 11 July 2005 (UTC)


 * It contradicts "The lambda calculus is suitable for expressing some kinds of parallelism", as Berry's theorem implies that eventually you have to do something you wouldn't have had to do if this were a parallel formalism. By the way, I have no objections to the first sentence, although I think it should be made more concrete, as I already suggested above. -- Koffieyahoo 13:50, 13 July 2005 (UTC)


 * The possibility of evaluating arguments to functions at the same time (possible in both CBN and CBV evaluation) is already a useful form of parallelism. And John Lamping's partial sharing graphs, used to realise optimal reductions in the lambda calculus, show that there is more parallelism than that available in the lambda calculus.  Berry's theorem tells you that POR cannot be implemented given a particular encoding of the Booleans.  It does not tell us what can be done if Booleans were encoded at higher types. --- Charles Stewart 20:39, 22 July 2005 (UTC) (PS. Indentation changed in above, to keep different speakers at same depth)


 * Two comments on this:
 * Berry does not specify which particular encoding of the Booleans you should choose, only that one must be assumed beforehand, which is completely fair, as otherwise you would never know what true would be and what false would be. So, higher types (whatever that would be in the untyped lambda calculus) are covered by the theorem.
 * There is a difference between (a) parallel execution, which exists for the lambda calculus in the form of parallel reductions, sharing graphs, etc., and (b) a truly parallel program, of which POR is the prime example: just start evaluating both arguments, and when one eventually reduces to "true" we don't need to evaluate the other argument any further before we move on to the next part of our "program"; Berry's theorem shows that this "we don't need to evaluate the other argument any further" is impossible. Unfortunately, the most common reading of "parallel" is (b) and, to make things worse, a lot of people assume "concurrency" is synonymous with parallelism.


 * Of course parallelism is a special case of concurrency.--Carl Hewitt 11:13, 4 August 2005 (UTC)


 * No it is not: you can easily do concurrency without any form of parallel execution. See for example the (single-core) single processor (hyper-threading off) you probably have on your desk: it does concurrency (in the form of preemptive multitasking), but not parallel execution (there is only one processor). -- Koffieyahoo 12:19, 19 August 2005 (UTC)


 * Hence, my problems with the current text aren't resolved yet. -- Koffieyahoo 10:13, 4 August 2005 (UTC)


 * I clarified the article as follows: "Theory of the lambda calculus says that lambda calculus computations can always be carried out sequentially, not that they must be carried out sequentially. The lambda calculus is suitable for expressing some kinds of parallelism, e.g., the parallel evaluation of the arguments of a procedure. However the lambda calculus does not in general implement concurrency, e.g., a shared financial account that is updated concurrently. On the other hand concurrent computation as in the Actor model and Process calculi can perform the parallelism of the lambda calculus."--Carl Hewitt 11:21, 4 August 2005 (UTC)


 * I would drop everything from e.g. onwards (it is covered by the concurrency link) and move it to a slightly different part of the article, but this sounds more like it. -- Koffieyahoo 12:19, 19 August 2005 (UTC)

The paragraph doesn't explicitly mention – though alludes to – Church-Rosser, and is phrased in a confusing manner. I do notice that I'm about a year behind the discussions, but would nevertheless like to rewrite the paragraph as follows, if no one objects or offers suggestions (tempted to cut the examples off after ‘process calculi’ though...): --Liyang 06:12, 27 July 2006 (UTC)
 * The Church-Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order. (Indeed, it is referentially transparent.) While this means the lambda calculus can model the various nondeterministic evaluation strategies, it does not offer any richer notion of parallelism, nor can it express any concurrency issues. Process calculi such as CSP, the CCS, the π calculus and the ambient calculus have been designed for such purposes.

Looks good. -- Koffieyahoo 07:00, 27 July 2006 (UTC)

Representation of integers, functions, etc.
The section "Arithmetic in lambda calculus" misstates the manner in which integers, functions, etc., are standardly represented in lambda calculus. In the standard rendering, mathematical objects such as booleans, integers, functions, etc., are all represented by lambda terms – they are not themselves lambda terms. Thus, Church numerals standardly represent integers (they are not themselves integers, despite the wording of the article), and certain lambda expressions represent functions on numerals (they are not functions on integers, as the article would have it). Is there a good reason for not rewriting this section accordingly? --r.e.s. (Talk) 07:01:46, 2005-08-05 (UTC)


 * There is no practical difference, and very little philosophical difference, between saying that the lambda terms merely "represent" numbers, and going one more step and identifying the numbers with the lambda terms themselves. In the former case, one needs to prove that the lambda term λnfx.(f (n f x)) transforms terms to terms in a way that is isomorphic to the way the successor function operates on integers; in the latter case, one needs to prove that λnfx.(f (n f x)) has the properties that one usually associates with the successor function. Practically, these come to the same thing.


 * One can make the same distinction in other areas as well. For example, one can quibble over whether a Dedekind cut is a real number, or whether it is merely a representation of a real number. But the practical difference is minimal, and mathematicians do not encounter any problems when they identify real numbers with cuts and say that the cut is the number. Similarly one can quibble about whether the Von Neumann construction {}, {{}}, {{}, {{}}}, ... actually constructs the set of natural numbers, or only some object that is isomorphic to the set of natural numbers. But practically there is no difference, and mathematicians are happy to speak of the number 2 as being identical with the set {{}, {{}}} in the context of such discussions. Ignoring the distinction may be technically incorrect in some hyperstrict sense, but it is only a stylistic decision, and not one that affects the development of the theory.


 * To answer your question, it seems to me that there is an excellent reason for not rewriting the section as you suggest. The rewriting to use the "representation" point of view adds an additional layer of verbiage, abstraction, and technical terminology that only serves to obscure the real point of what is going on. This Wikipedia article is not intended to be a perfectly rigorous exposition of the lambda calculus for specialists, à la Barendregt, but rather is for a general, nonspecialist audience. In my opinion, the rewrite you suggested is unnecessary and pedantic, and would accomplish nothing of value, while at the same time rendering the article more difficult to understand. -- Dominus 08:12, 5 August 2005 (UTC)


 * I disagree with your opinions as follows:
 * (1) First the concept ... A standard presentation in terms of Church numerals would be no harder for the general reader to understand than is the unnecessary innovation of so-called "Church integers" (sic), and in my opinion the distinction is neither "hyperstrict", "merely stylistic", nor pedantic.
 * (2) Secondly the name ... Church numerals are universally known by that name. I suppose it is not Wikipedia policy to replace such a standard name with a newly-invented one -- especially in a separate article on that subject.
 * --r.e.s. (Talk) 20:23:04, 2005-08-05 (UTC)


 * I agree with you that the term "Church integers" is incorrect and should be replaced. Tony Sidaway already replaced one instance of this phrase; I have just replaced the other.  -- Dominus 06:18, 6 August 2005 (UTC)


 * I've changed one more (and I think the last) instance of "Church integer" to "Church numeral". Also, for what it's worth, I've added to the Church numeral article what I think is a more-standard presentation that emphasises the lambda calculus rather than just Haskell and Scheme. --r.e.s. (Talk) 08:01:26, 2005-08-06 (UTC)

As a former mathematician, I've learned a lot from this article. I did add a little bit of motivation on the arithmetic of Church numerals based on iterated composition of functions and the laws of exponents. I'm hoping this makes the (potentially rather obscure) definitions clearer. Thanks to all of you for a very enlightening piece. Natkuhn 03:53, 10 November 2007 (UTC)
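The iterated-composition view of Church-numeral arithmetic can be sketched with Python lambdas (an informal encoding; to_int is a helper introduced here only to observe results):

```python
# Church numeral n applies f to x exactly n times
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition concatenates the iterations; multiplication composes them
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))

to_int = lambda n: n(lambda k: k + 1)(0)  # helper for inspection

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(PLUS(TWO)(THREE)))  # 5
print(to_int(MULT(TWO)(THREE)))  # 6
```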

Maybe add the syntactic convention for curried functions?
I think that λ x. λ y. ... should be changed to λ x y. ..., of course explaining this syntactic convention in the section on curried functions. --64.231.73.48 22:46, 23 October 2005 (UTC)
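The curried reading behind this shorthand can be illustrated with nested Python lambdas (the integer arithmetic is borrowed from Python for illustration):

```python
# λ x y. x + y is shorthand for the curried λ x. λ y. x + y
add = lambda x: lambda y: x + y

add_two = add(2)  # supplying one argument yields λ y. 2 + y
print(add_two(3))  # 5
print(add(2)(3))   # 5
```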

Recent Edits Need Cleanup
Edits added 11-06 need at the very least radical cleanup. I would revert to version from Nov. 2, but not sure if I'm missing the value in the Nov. 6 changes.Jyossarian 18:11, 7 November 2005 (UTC)


 * I think they have negative total value. -- Dominus 21:57, 7 November 2005 (UTC)


 * Deleted nonsense section "critics on Church" 130.94.162.64 07:12, 8 November 2005 (UTC)


 * I reverted to the 11-02 revision. None of the changes improve the clarity of the explanation.  They seem to be the editor's personal philosophical take on the Lambda calculus. - Jyossarian 18:52, 8 November 2005 (UTC)

Not for one Turing machine only, but for all Turing machines?

 * JA: Not that the extra word wasn't helpful in that context, but, in the presence of a universal Turing machine it's really an inessential generalization. Jon Awbrey 12:18, 28 January 2006 (UTC)

Programming languages
The section on programming languages seems to be POV. It refers to Emacs Lisp but not ALGOL 60 as "archaic" (only Emacs Lisp is still in use). It incorrectly lauds Common Lisp as lacking dynamic binding (all Lisps need dynamic binding, which CL calls "special variables"). The claim that binding in Lisps doesn't match the lambda calculus because their authors "misunderstand" the calculus (rather than because the calculus in pure form has poor efficiency) is just a damn lie. jyavner 17:36, 29 January 2006 (UTC)


 * Wow, take it easy man. The use of archaic is only there to contrast it with the other variants of Lisp. Anyway, I changed this to older which is more neutral.
 * As for your claims regarding dynamic binding: To my knowledge the current text of the article is more or less correct, and there actually was a misunderstanding. Should my knowledge be incorrect, please provide references that argue as such. -- Koffieyahoo 08:51, 30 January 2006 (UTC)


 * McCarthy's original Lisp used dynamic scoping, and I find it rather unlikely that McCarthy "misunderstood" the lambda calculus.


 * As for Emacs Lisp: Scheme actually predates Emacs Lisp by several years; it was designed in the late 1970s. In a 1981 paper on Emacs Lisp, Stallman discussed the pros and cons of dynamic vs lexical scoping, and why he decided on dynamic scoping for Emacs Lisp. So the question is, can you provide references to back up your claim that there was a misunderstanding? -- CYD


 * E.g. this tech report, a shorter version of which was published in Lisp and Symbolic Computation (a refereed journal), makes this claim in the first paragraph of its introduction (calling it a bug). So, you might guess that if this claim had been wrong a referee would have caught it. (I'll see if I can find something that makes a more explicit claim. However, this is sort of the folk tale, accepted by a large number of people within the Lisp and lambda calculus world, of how dynamic binding came into being. So something in this direction absolutely needs to stay in.)


 * Oh, and I don't think I made any claims about Emacs Lisp, now did I? -- Koffieyahoo 18:39, 31 January 2006 (UTC)


 * Ahem: "Older dialects of Lisp, such as Emacs Lisp, still use dynamic binding, and so are not based on the lambda calculus. Rather, they are based on the syntax of the lambda calculus, together with a misunderstanding of the notion of binding and substitution in the lambda calculus." -- CYD


 * I suggest that you look at the history of the text. You're mixing two issues here: archaic and misunderstanding. Anyway this is rather irrelevant. The question is: do we agree that we can call Emacs Lisp an older variant of Lisp, or don't we? -- Koffieyahoo 08:40, 1 February 2006 (UTC)


 * It is newer than both Scheme and Common Lisp, so if anything it's a newer Lisp. -- 18.19.6.82


 * It is difficult to find unbiased sources on this subject. One article reports that McCarthy says he wasn't trying to match the binding-methodology of lambda calculus, then disbelieves him because he is "quite near to Scheme!" Like absolutely every article I've been able to find on the subject, this one takes as a given that McCarthy was "really" trying to write Scheme, and so every way that Lisp is different from Scheme is a "mistake". If all the referees believe this meme, they will not think there's anything wrong with the use of the word "bug" in a paper. Another subtle problem is that McCarthy *did* think that funarg represented a bug, but "funarg problem" ≠ "dynamic binding".
 * There are other considerations besides theoretical rigor in language design. Scheme is the best language only if rigor is all you care about. But we are not talking here about programming languages. The topic is lambda calculus. As a side note to the calculus, we can mention that some programming languages match up with it quite well, others less so. But matching perfectly is not a goal for most language designers and this article should not be grading their work on that basis. I was planning to edit the article towards this end, but I see that User:CYD has already beat me to it. jyavner 07:01, 1 February 2006 (UTC)


 * McCarthy himself writes: In modern terminology, lexical scoping was wanted and dynamical scoping was obtained. I must confess that I regarded this issue as just a bug and expressed confidence that Steve Russell would soon fix it (ACM SIGPLAN Notices, Vol 13, No. 8, 1978, p. 220). I can only explain these sentences as McCarthy admitting that he misunderstood what was going on in lambda calculus vs Lisp. The funarg device was the result of this misunderstanding. -- Koffieyahoo 08:36, 1 February 2006 (UTC)

There is no link for Haskell_(programming_language), a functional programming language. I think it should be added.

λ calculus
The phrase lambda calculus should be λ calculus throughout, except the article title. If you disagree, explain why you don't support Pee = Enn Pee and so on. CMummert 22:49, 15 July 2006 (UTC)


 * I don't think so. First, λ calculus should be spelled λ-calculus (with a dash), but that's just a small technical point. Second, in the literature there is no real preference for either lambda calculus or λ-calculus, so changing this potentially might give rise to a revert war. -- Koffieyahoo 05:59, 17 July 2006 (UTC)

I think the introduction should somehow explain that "λ" is the Greek letter lambda. It seems to state the proposition "lambda calculus = λ-calculus" in the introduction without much explanation that these two names are basically the same thing. --Whiteknox 21:32, 2 February 2007 (UTC)
 * People might not necessarily know that λ is "lambda." This is the English Wikipedia.   Asmeurer  ( talk   ♬  contribs ) 23:39, 12 October 2008 (UTC)

Plugging Eiffel
Hello Fuchsias,

I see you've been rather enthusiastic with adding entire paragraphs on Eiffel's agents in articles on the lambda calculus. It makes more sense to put the examples on the Eiffel page itself.

The study of the lambda calculus is a large enough field on its own as to make the link to Eiffel rather tenuous. If you do decide to leave a link, then quick references to other languages in this regard would be welcome too, in order to maintain a neutral PoV.

Cheers, --Liyang 19:01, 31 July 2006 (UTC)

Hi, Liyang, thanks for the comment on my Eiffel edit. The lambda calculus article has extensive sections on other languages on how to emulate lambda expressions, but nothing on Eiffel that has full-fledged support for the mechanism. So I added the information. I have refrained from adding similar sections on other languages since I prefer to make edits only where I know the subject matter very well, but may do so in the future, and of course others can provide info on other languages.

Fuchsias 19:27, 31 July 2006 (UTC)

You said: "The lambda calculus article has extensive sections on other languages on how to emulate lambda expressions"

Please point out one or two such paragraphs. Lambda calculus mentions functional programming languages and cites Haskell, ML, Lisp as being the prominent ones. No examples in any of these languages.

You also said: "but nothing on Eiffel that has full-fledged support for the mechanism."

Eiffel's ‘mechanism’ for supporting closures / anonymous functions (or objects in Eiffel's case) is (in some cases not) as fully-fledged as that of Smalltalk, Ruby, Python, Perl, Scala, JavaScript or a dozen other general purpose not-primarily-functional programming languages; even C++ has support for this ‘mechanism’ via Boost. And let's not forget that Java has anonymous objects which can be used to achieve the same effect. If you mention one, you should at least mention the more widely-used ones from the rest too.

Having said this, the articles are mainly concerned with the lambda calculus formalism, as part of mathematical logic, theories of computation and type theory. Programming examples do not belong there, there are much more appropriate venues. Look: here's a section I found on Eiffel's agents.

Adding extra information to Wikipedia articles is generally appreciated, but please make sure it's being added to the correct page, under the right context.

Thank you, --Liyang 03:29, 1 August 2006 (UTC)

Regarding sections on other languages: they are not as extensive as I remembered them from a few months ago, or maybe I confused with another page I had just read, sorry.

But I still think it is important to mention that lambda-calculus mechanisms are not limited to Lisp and functional languages. There is a rather strong bias in favor of functional programming on many of these pages, and I think it should be corrected. What I propose to do is to add information about other non-functional languages, e.g. C#. That will take a couple of days.

Thanks,

--Fuchsias 03:47, 1 August 2006 (UTC)

You said: "But I still think it is important to mention that lambda-calculus mechanisms are not limited to Lisp and functional languages."

Please read carefully what I wrote above, in particular the penultimate paragraph.

You do not mean ‘lambda-calculus mechanisms’ (there is no such concept). You mean anonymous function definitions, closures (related to continuations) and first-class functions (various hacks include Java's inner classes).

I acknowledge your expertise with Eiffel, but trust me, I am not at all unfamiliar with what I'm talking about either.

Thanks, --Liyang 04:16, 1 August 2006 (UTC)

Hi Liyang. I am a little concerned by your last message. Did I imply that I didn't value your competence? I would be very sorry if I had made any such comment.

Also, I don't understand your observation that

Having said this, the articles are mainly concerned with the lambda calculus formalism, as part of mathematical logic, theories of computation and type theory. Programming examples do not belong there, there are much more appropriate venues.

The article is not just about math since it includes a section "Lambda-calculus and programming languages" -- precisely the one we are discussing. That section should be as good as the math part, in particular it should be well-balanced and not limited to pushing the functional language POV. I think the paragraph on imperative languages as it stands now does a reasonable job of completing the picture.

Fuchsias 17:38, 1 August 2006 (UTC)

You said: "Did I imply that I didn't value your competence?"

No, I was asserting that I'm familiar with the lambda calculus as well as with a range of programming languages: imperative, OO, functional and logical.

You said: "(Lambda calculus and programming languages) should be as good as the math part, in particular it should be well-balanced and not limited to pushing the functional language POV."

It's not pushing a functional PoV. The section was simply to point out that functional languages are, to a large extent, extensions of the lambda calculus.

As an aside, let's have a look at what you've written: "Numerous imperative languages, e.g. Pascal, have long supported passing subprograms as arguments to other subprograms. In C and C++ the equivalent result is obtained by passing pointers to the code of functions (subprograms)."

What you have described is certainly not equivalent. A function pointer/reference does not a closure make. Please fix this after you have read and understood the article on closures.
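To make the distinction concrete, here is a small Python sketch (the names are mine, purely for illustration): a closure captures a binding from its defining environment, which a bare function pointer cannot do.

```python
# A closure captures the free variable 'n' from its defining environment;
# a C-style function pointer has no environment to capture from.
def make_adder(n):
    def add(x):
        return x + n   # 'n' is a captured free variable, not a parameter
    return add

add5 = make_adder(5)
add7 = make_adder(7)
# Each closure carries its own binding for n: add5(10) is 15, add7(10) is 17.
```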

"New mechanisms in imperative object-oriented languages have, however, started to provide the direct equivalent of lambda expressions of arbitrary order; this is the case 'agents' in Eiffel and 'delegates' in C#."

These mechanisms are not new. Unless you regard Smalltalk as new. Nor are such mechanisms directly equivalent. Saying that agent (x: REAL): REAL do Result := x * x end represents (λx. x*x) is firstly factually untrue: you're enforcing a particular evaluation order, namely passing the argument by value. Secondly there is no * operator in the lambda calculus.

What you are adding is – while appreciated – outside the scope of the article.

Wikipedia guidelines recommend that I simply change anything non-factual without discussion. I'm trying to do you a favour so you can fix the errors and place the information somewhere appropriate. I'm tired of this. If you still insist on tenuous additions which have more appropriate homes, please take this discussion to the talk page. –Liyang 19:03, 1 August 2006 (UTC)

You are right about Smalltalk -- I will fix this.

It is really a pity that you should get so fired up because other people are trying to improve the coverage of the page. There is more to life than functional languages. You should not get "tired" so quickly; listen instead to the arguments of other people (as I am doing to yours).

Even the part about functional languages was poorly written, and it still needs improvement.

Copied and pasted from our user talk pages. I'm tired of this. Someone else take over. –Liyang 19:11, 1 August 2006 (UTC)

Thanks for copy-pasting this discussion (although comments that sound fine in personal discussions may appear too strong in a public discussion -- none of mine are intended that way as I respect your expertise). The point is simply to have a balanced view, in the programming language section, between functional and imperative languages. In fact it is striking that Landin's original paper was focused on Algol 60, which is about as imperative as it gets. Fuchsias 19:33, 1 August 2006 (UTC)

A slight confusion
First of all, sorry if I'm being stupid; it's 2 in the morning and I stumbled onto this article and thought I might try to see if I can manage to learn this. In the first definition of PRED, is the (lambda u.u) absolutely necessary (it doesn't seem to do anything, like multiplying by 1)? Also, wouldn't it be better if the definitions were a bit more consistent (all with m, n AND f and x)? Now let's see what happens when I program this in Python... Zyxoas (talk to me - I'll listen) 00:24, 3 December 2006 (UTC)


 * Yes, it is necessary. Consider the difference between P and (P λu.u); one is the function P, and the other is the result of applying P to the identity function, not the same thing at all.


 * As for writing the code in Python, good luck. It can be done, but there are some complications. For example, IFTHENELSE will not work as written, because Python uses applicative order reduction, which is not always guaranteed to find the normal form of an expression. A discussion of these issues, and working demonstration code, can be found at Perl contains the λ-calculus. The translation from Perl to Python should be simple, since the two languages are so similar.


 * I hope this was useful. -- Dominus 01:24, 3 December 2006 (UTC)
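To illustrate the point about applicative order (a Python sketch of my own; the names are not from any article): the naive Church-style IFTHENELSE evaluates both branches before the boolean ever selects one, so a strict language needs thunks.

```python
# Church booleans: TRUE selects its first argument, FALSE its second.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

# Under Python's applicative (eager) order, both branches of a naive
# IFTHENELSE are evaluated before the boolean selects one.
IFTHENELSE = lambda p: lambda a: lambda b: p(a)(b)

# Workaround in a strict language: pass thunks and force only the winner.
def if_lazy(p, then_thunk, else_thunk):
    return p(then_thunk)(else_thunk)()

result = if_lazy(TRUE, lambda: "then", lambda: 1 / 0)  # 1/0 is never evaluated
```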

Yes, that was very useful, thanks. It's just that when I did pred 1 and pred 2 by hand, the (lambda u.u) seemed to always just stay on the left side, and didn't seem to have an effect (the (lambda g h.h (g f)) flips its parameters around), or am I missing something here? Give me Turing machines over lambda calculus any day!! ;) Zyxoas (talk to me - I'll listen) 11:24, 3 December 2006 (UTC)


 * I wonder if you reduced something the wrong way? I think the reduction of (PRED 1) goes like this:


 * (PRED 1)
 * (λnfx.(n λgh.(h(g f)) λu.x λu.u) 1)
 * λfx.(1 λgh.(h(g f)) λu.x λu.u)
 * λfx.(λfx.(f x) λgh.(h(g f)) λu.x λu.u)
 * λfx.(λx.(λgh.(h(g f)) x) λu.x λu.u)
 * λfx.(λx.λh.(h(x f)) λu.x λu.u)
 * λfx.(λh.(h(λu.x f)) λu.u)
 * λfx.(λu.u (λu.x f))
 * λfx.(λu.x f)
 * λfx.x


 * And the result is 0, as it should be. But if you leave out λu.u, it doesn't go correctly. It's pretty much the same up through step 7:


 * (PRED 1)
 * (λnfx.(n λgh.(h(g f)) λu.x) 1)
 * λfx.(1 λgh.(h(g f)) λu.x)
 * λfx.(λfx.(f x) λgh.(h(g f)) λu.x)
 * λfx.(λx.(λgh.(h(g f)) x) λu.x)
 * λfx.(λx.λh.(h(x f)) λu.x)
 * λfx.(λh.(h(λu.x f)))
 * λfx.(λh.(h x))
 * λfxh.(h x)


 * But then the λh.(h(λu.x f)) doesn't lose the h the way it needs to in order to continue. -- Dominus 14:54, 4 December 2006 (UTC)
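For anyone who wants to replay the reduction mechanically, the same computation can be run as Python lambdas (my own encoding, using the article's PRED):

```python
# Church numerals: n f x applies f to x exactly n times.
ZERO = lambda f: lambda x: x
ONE  = lambda f: lambda x: f(x)
TWO  = lambda f: lambda x: f(f(x))

# PRED := λn.λf.λx. n (λg.λh. h (g f)) (λu. x) (λu. u)
PRED = (lambda n: lambda f: lambda x:
        n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u))

def to_int(n):
    """Read a Church numeral back as a Python int."""
    return n(lambda k: k + 1)(0)

# to_int(PRED(ONE)) == 0 and to_int(PRED(TWO)) == 1, matching the
# hand reduction above.
```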

Suggestions to add to article
The more confusing parts of standard lambda calculus:
 * 1) Note from D.Keenan: "Note that we must now be careful to use an explicit multiplication sign where juxtaposition might be wrongly interpreted as function application."
 * 2) Note from R.Rojas: "(...) we not give names to functions."

Syntax ERROR in examples??
Please, in the article or on this Talk page, we need to show HOW TO TEST, with a "testable notation". The simplest and easiest way to test is the UCSD/Barker JavaScript page... please test there; it is easy to convert the notation (check whether my conversions are OK).

HOW TO TEST IT (??):

0 := (lambda f (lambda x x))   λ f x. x :=? (lambda f (lambda x (x)))

1 := (lambda f (lambda x f x))   λ f x. f x :=? (lambda f (lambda x (f x)))

2 := (lambda f (lambda x f (f x)))   λ f x. f (f x) :=? (lambda f (lambda x (f (f x))))

...

SUCC :=? (lambda n (lambda f (lambda x (f (n f x)))))   λ n f x. f (n f x)

Is there an error in the article's SUCC?

SUCC X := ((lambda n (lambda f (lambda x (f (n f x))))) X)

SUCC 0 := ((lambda n (lambda f (lambda x (f (n f x)))))  (lambda f (lambda x x)))

BUT REDUCING == (lambda f1 (lambda x3 (f1 (lambda x4 x4)))) IT IS NOT 1! — unsigned added by 201.83.176.224 at 13:43, 6 January 2007 (UTC).

Yes, UCSD/Barker is a very simple way for readers and collaborators to check examples and test their understanding of the article's syntax explanations... and the article has some problems, with no emphasis on the basic problems:
 * mistakes in interpreting juxtaposition: multiplication? function application? a new variable name?
 * " " is a valid expression, and is not an application!   is not the same as  : juxtaposition with spaces forms sequences!
 * mistakes with parentheses:  but are the juxtapositions   and   the same?
 * how to translate to implementations, to run the examples?

I think more comments, review (homologation tests) and an indication of "how to test examples" are necessary in the article.

The only problem with UCSD/Barker is that it does not do eta-conversion. See other Lambda Calculators (but these are not accessible to "all readers").

About the SUCC problem, test whether: SUCC M := ((λ n (λ f (λ x (f ((n f) x))))) M). You used   and perhaps the correct one is  . The use of parentheses, in the UCSD/Barker interpreter (and many other practical ones), fixes the application order.

Perhaps in the article we need to use
 * SUCC := λ n f x. f ((n f) x) instead of f (n f x)

Homologations with 0 and 1 (Barker-interpreter reductions):

SUCC 0 := ((λ n (λ f (λ x (f ((n f) x))))) (λ s (λ z z))) = (lambda f1 (lambda x3 (f1 x3))) = λ f (λ x (f x)) = λ f (λ x f(x)) = 1

SUCC 1 := ...

See also R.Rojas PDF. -- Krauss 20:15, 6 January 2007
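As a cross-check of the parenthesization question (a Python sketch of my own): since application associates to the left, f (n f x) and f ((n f) x) denote the same term, and this reading does give SUCC 0 = 1.

```python
# SUCC := λn.λf.λx. f ((n f) x); by left associativity this is the
# same term as λn.λf.λx. f (n f x).
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Read a Church numeral back as a Python int."""
    return n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)   # behaves as λf.λx. f x
TWO = SUCC(ONE)    # behaves as λf.λx. f (f x)
```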

Smallest Language?
Smallest language?! I think not. http://ling.ucsd.edu/~barker/Iota/

Named anonymous function?
The Python example says: "This creates a new anonymous function named func..."

How can it be anonymous if it has a name (or vice versa)?


 * This is an implementation detail in Python. The lambda function will return an object that refers to an anonymous function, and then the name func will be bound to that object. You can think of this as an artifact of Python's binding system, not something related to lambda. 71.141.89.149 02:44, 7 September 2007 (UTC)


 * fixed 80.194.146.50 14:25, 25 September 2007 (UTC)

removed comment from 'Substitution' section
The following snippet of text was in the running text of the article, but it's pretty clearly a discussion possibly between two editors. I'm moving it here so it won't get lost, but it would be better if whatever changes or corrections were being discussed can be incorporated into the article, if correct.


 * This should not be the case because binding variables don't get substituted. It should stay the same. [ZC: Actually the case is valid since V is free in (λ V. E′), hence is not mentioned inside E′. Note that substitution is different from β-reduction below, where the λ would first gobble up the V. to free it up for substitution under the rules here.]**

Zero sharp 05:12, 13 October 2007 (UTC)


 * The quotation above refers to the third rule.
 * I do not agree with the "refutation" above: The statement that V is free in (λ V. E′) and hence does not appear in E′ is not true, as one can derive from the definition of free variables in the article:
 * In an expression of the form  λ V. E, the free occurrences are the free occurrences in E except for V. In this case the occurrences of V in E are said to be bound by the <tt>λ</tt> before <tt>V</tt>.
 * This means that if V is not mentioned in E′ it cannot be a free variable in E′ and hence not a free variable in the whole expression. Maybe this is just a problem with the definition of free variables in this article. But even if it is, the fact that V is free in the whole expression is not mentioned in the third rule, and I do not see anything in the text that allows one to derive this, so at this point the definition should be stated more precisely.
 * --Sholsten 12:42, 10 November 2007 (UTC)


 * I've just noticed that the rule has been changed since I last checked. Seems ok to me now.--Sholsten 12:52, 10 November 2007 (UTC)

Concerns about Copyright Infringement
It looks like parts of the text in the introduction have been cut-and-pasted from their sources. Do we have their permission? Does somebody wish to summarize or paraphrase this text to resolve the issue? So far, I've found ...

from http://www.inf.fu-berlin.de/lehre/WS03/alpi/lambda.pdf
 * The λ calculus can be called the smallest universal programming language of the world. The λ calculus consists of a single transformation rule (variable substitution) and a single function definition scheme. It was introduced in the 1930s by Alonzo Church as a way of formalizing the concept of effective computability. The λ calculus is universal in the sense that any computable function can be expressed and evaluated using this formalism. It is thus equivalent to Turing machines. However, the λ calculus emphasizes the use of transformation rules and does not care about the actual machine implementing them. It is an approach more related to software than to hardware.

We had:

 * Lambda calculus can be called the smallest universal programming language. It consists of a single transformation rule (variable substitution) and a single function definition scheme. Lambda calculus is universal in the sense that any computable function can be expressed and evaluated using this formalism. It is thus equivalent to the Turing machine formalism. However, lambda calculus emphasizes the use of transformation rules, and does not care about the actual machine implementing them. It is an approach more related to software than to hardware.

Note I removed the last two sentences about hardware, which apparently were written out of confusion over the fact that a "Turing machine" is not a piece of hardware. (Frankly I thought it was a joke.)

CeilingCrash 06:28, 1 November 2007 (UTC)


 * Is it possible that Raúl Rojas got it from Wikipedia, rather than the other way around? -- Dominus 16:34, 1 November 2007 (UTC)


 * It was introduced by Charles Matthews, in This change, 9 April 2004. Notice the interesting change summary: "Merge in content from 'lambda calculas' page, redirected here".  I will continue to investigate.  -- Dominus 16:44, 1 November 2007 (UTC)


 * The lambda calculas material was put in by user Arvind Singh on 8 April 2004: [http://en.wikipedia.org/w/index.php?title=Lambda_calculas&oldid=3119294 here]. It is almost certainly plagiarized, since the Rojas paper is at least two years older than that and perhaps much more. I have removed the plagiarized paragraph. The change history I linked above makes it appear that nothing else was plagiarized from Rojas's introduction, at least in this particular episode.


 * Thanks for spotting this. -- Dominus 16:49, 1 November 2007 (UTC)
 * Sure thing, I was just trying to get the in-text references set up and turned that up (I think there may be more). I like the few sentences from Rojas; but the last two sentences (where he seems to think a Turing machine is a piece of hardware) are a student's mistake, so the good sentences may not be original to Rojas. (I am not accusing Rojas of plagiarism in his paper; he does not use in-line references in his text, so I cannot tell which statements come from which source.)
 * At least I shall paraphrase the text from Rojas, or else I will resource it.
 * I'll keep working on sourcing the rest of the article. 24.128.99.107 18:07, 1 November 2007 (UTC)

new introduction
I've written a new introduction since we had to cut out the last one, would appreciate a knowledgable person looking it over (I only kind-of sort-of know what i'm doing) CeilingCrash 06:16, 2 November 2007 (UTC)

Some comments from a non-expert who should be expected to be able to understand this page
My background: I have a bachelor's degree in computer science, so I've been exposed to lambda calculus in the past (although only briefly; I think we introduced the notation, proved Turing machine equivalence, and then moved on to other things). My degree is 10 years old, and a refresher was in order. So I came here.

On to the comments:


 * The notation for multiple variables is not defined as part of the formal definition, yet it is used in formal contexts later on in the page. I think a statement that $$\lambda\ x_{1}\ x_{2}\ \dots\ x_{n}\ .\ E$$ is treated as equivalent to $$\lambda\ x_{1}\ .\ \lambda\ x_{2}\ .\ \dots\ .\ \lambda\ x_{n}\ .\ E$$ would help.
 * Also, its definition in the middle of a paragraph makes it not entirely obvious to a reader scanning through the section. Putting it in its own paragraph would make matters clearer.
 * The use of curried function notation in the definitions of Church numbers is initially confusing, because we think in terms of a function with two parameters. Explicitly defining it as a function that returns a function would make the meaning clearer, IMO.
 * Using LISP terminology in the Pairs section is kind-of cute, but it may hinder understanding for anyone who isn't familiar with LISP. This section needs an explanation of why it is useful (i.e., it shows that structured data handling is possible and even reasonably readable).
 * I'm not really understanding how the Y combinator works. This could be me, though.  I've tried looking at its page, and that isn't helping much either.
 * I'm not convinced the long list of examples of programming languages is particularly useful. It would be best to take one language and stick to it.  Also, it doesn't sound particularly convincing to me.  It states that implementation of higher-order functions is possible, and then the list given is all of first-order functions.

Hope this all helps, and somebody who knows more about this than me can fix it. JulesH 18:57, 11 November 2007 (UTC)


 * One more thing. The page doesn't show equivalence with Turing machines. This is an important result that should be included here (at least it should be mentioned, if not actually shown). JulesH 18:59, 11 November 2007 (UTC)

I expanded, unburied and hopefully clarified currying, I think taking care of (1) & (2) above. I gave another definition of PRED (I think the easiest to understand) using pairs, which may help with (4). Not sure that (3) would be a good change. Natkuhn (talk) 13:52, 22 November 2007 (UTC)

Merge from Church encoding
I've suggested this article be merged here because pretty much all of its content is already duplicated here. I do think, however, that some of the explanations in this page are better than the ones here, so it should not simply be redirected. JulesH 19:20, 11 November 2007 (UTC)


 * IMO, the most important thing is to ensure that the present article be made consistent with that one, if there are still any disparities. (That article appears to adopt the approach I recommended some time ago for this one, in Talk:Lambda calculus above). Whether the other article should remain stand-alone might depend on whether it's likely or desirable to be expanded beyond a length appropriate for this one. --r.e.s. 19:47, 11 November 2007 (UTC)


 * Do not merge. This article (lambda calculus) is already long, probably too long. It can be shortened by removing any discussion it may have of Church numerals (if any, I haven't read this article yet). My impression is that Church numerals are not very important for lambda calculus: for example, I am currently reading Barendregt's book, I'm up to page 90, and there's been no discussion of Church numerals at all, other than a passing theorem about $$\lambda +(1=I) \leftrightarrow \lambda\eta$$ or something like that. linas (talk) 14:21, 11 December 2007 (UTC)

I think merging is inherently an information-losing process, and it is justified only if the merged article is clearly a subconcept of the larger concept. But merging can be inappropriate in cases where there are too many cross-cutting concerns and orthogonal concepts in the proposed merge candidates. Such cross-cutting concepts can be grasped and made explicit by typical solutions used in program design: decomposition, separation of concerns, reusability, modularity. For articles, this means that the articles must be left standalone (sometimes even splitting off further related concepts), and appropriate links must be established among them (possibly decorated by appropriate templates like  ,   ,   ).

Although sophisticated use of section links can get around these problems, it is a temporary and very fragile solution: section links do not redirect, so even the smallest rename breaks them.

Now let us see how all this concerns this situation. Are there any orthogonal or cross-cutting things?
 * The main idea behind Church numerals lives not only in the realm of lambda calculus. The same idea may be exploited also in the realm of combinatory logic. I once amused myself by writing a simple combinatory logic interpreter (which was capable also of some computer algebra) and implementing mathematical notions in it (lists, exceptions, natural numbers, booleans, direct product and direct sum, etc.). The natural numbers were implemented by the idea of Church numerals, despite the fact that my interpreter knew nothing about lambda calculus.
 * Natural numbers can be represented in several ways. Thus, it is worth writing an article about these representations (either a standalone one or merged into the Natural numbers article). One of these representations is Church numerals. Thus Church numerals can be linked also to Natural numbers (or to an article discussing its representations).

Some examples of the possible representations of natural numbers:
 * In set theory, several set-theoretic definitions of natural numbers are used, e.g.
 * $$\emptyset$$
 * $$\{\emptyset\}$$
 * $$\{\emptyset,\{\emptyset\}\}$$
 * $$\dots$$

 * In type theory: Nothing, Just Nothing, Just (Just Nothing), Just (Just (Just Nothing)) comes to my mind: the type of natural numbers can be grasped as the least fixed point of the <tt>Maybe</tt> type constructor. (The type expression <tt>Maybe</tt> T can be regarded as T + 1, thus as "augmenting" a type T with an exceptional case, "tagging" the normal cases as separate from the exception.)

And these were only the tally notations of natural numbers; number theory enables several other ones: the most familiar is the use of digits, i.e. numeral systems.

Returning to the original proposal: the "best of both worlds" (merging versus links) can be achieved by the   template. A summary of Church numerals can be included (as a section) in Lambda calculus (and in Combinatory logic), and   templates could point to the "big" Church numerals article. Possibly templates like ,    can be of use as well.

Physis (talk) 16:09, 11 December 2007 (UTC)

Sorry, I have just noticed that the Church encoding article embraces more topics than just Church numerals. It contains also Church booleans etc. I have modified the image accordingly. The arguments above still apply: Church encoding links to both combinatory logic and lambda calculus. To me, it seems to be a collection of implementation techniques for programming concepts (lists, booleans, natural numbers, direct sum and product) in the realm of such "functional programming" fields as lambda calculus and combinatory logic, using insights from category theory. In some cases (direct sum, lists) it seems to make use of continuation-passing style. Anyway, the link system seems to be ramified; I think this discourages merging. Physis (talk) 20:28, 11 December 2007 (UTC)


 * Yeah, lambda calc is a much bigger (and different) topic than church encoding. I'm removing the merge tag.  linas (talk) 00:55, 8 April 2008 (UTC)

Error in Pairs?
I've seen things on this page that I thought were errors and I eventually saw that they were correct, but surely the definition

CONS := λf.λs. λb. f b s

is wrong, and should instead be

CONS := λf.λs. λb. b f s

and wouldn't it be clearer in its intent if it were written in (partially) curried notation, i.e.

CONS := λf s. λb. b f s

Natkuhn (talk) 03:13, 22 November 2007 (UTC)
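For what it's worth, the corrected definition can be checked directly in Python (my own sketch; with the original λf.λs.λb. f b s, CAR would try to apply the first component as a function):

```python
# Church pairs: CONS packages f and s; the selector b picks a component.
CONS = lambda f: lambda s: lambda b: b(f)(s)   # λf.λs.λb. b f s
CAR  = lambda p: p(lambda f: lambda s: f)      # select the first component
CDR  = lambda p: p(lambda f: lambda s: s)      # select the second component

pair = CONS("first")("second")
# CAR(pair) is "first", CDR(pair) is "second".
```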

EXPT
Using Church numerals, exponentiation is very simple: ‹EXPT› ≡ λxy.yx (I prefer ≡ over := because ≡ is the usual sign to indicate lexical equality). Would it be a good idea to add this lambda term? Jos.koot (talk) 18:17, 8 December 2007 (UTC)
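A quick sanity check in Python (my own encoding): applying the Church numeral y to the Church numeral x indeed computes x to the power y.

```python
# Build the Church numeral for a non-negative Python int n.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def to_int(n):
    """Read a Church numeral back as a Python int."""
    return n(lambda k: k + 1)(0)

# EXPT ≡ λxy.yx : the numeral y applied to the numeral x yields x**y.
EXPT = lambda x: lambda y: y(x)

# to_int(EXPT(church(2))(church(3))) == 8, i.e. 2**3.
```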

spaces and capitals
I think this is a nice article! In several lines capital letters are used for variables. I find this confusing. I think variables (and meta-variables, i.e. names that can represent any variable) should always be written as a single lower-case letter (possibly subscripted).

The lambda terms are written with many spaces. It is clear that using names like PLUS and CONS introduces the necessity of spaces. Nevertheless, it should be possible to reduce the number of spaces, particularly if all variables are written as a single lower-case letter (possibly subscripted) and names consisting of more than one letter are enclosed between small angle brackets. Hence something like λf s. λb. b f s can be written as λfs.λb.bfs and PLUS (MULT x y) as ‹PLUS›(‹MULT›xy). Furthermore, (S I I) (S I I) can simply be written as SII(SII). Such a notation would be closer to the generally adopted notational conventions in lambda calculus. Jos.koot (talk) 18:17, 8 December 2007 (UTC)


 * Regarding variables: most of the names used in the article are lowercase. So, by overall consistency of the notations, I would also suggest to change the few lines that use capital letters for variables to the lowercase style.
 * More generally, it may be worth also to think at some time about improving the overall consistency with the notations and typographic conventions used in related articles. Hugo Herbelin (talk) 10:56, 25 December 2007 (UTC)

alpha conversion
The section on alpha conversion is clear, but could be enhanced by adding a remark about the set of lambda terms modulo alpha congruence. Jos.koot (talk) 18:17, 8 December 2007 (UTC)

Y combinator and recursion
This section can be simplified I think. The following I copy/pasted and adapted from one of my own manuscripts:

In the following Y ≡ (λ f ((λ x (f (x x))) (λ x (f (x x))))). We have:

(Y F) = ((λ x (F (x x))) (λ x (F (x x))))

Fixed point theorem: (F (Y F)) = (Y F)

Proof:

(Y F) = ((λ x (F (x x))) (λ x (F (x x)))) =

(F ((λ x (F (x x))) (λ x (F (x x))))) = (F (Y F)).

Corollary:

Let:	F ≡ (Y (λ r E[r])), where E[r] is a term possibly containing free occurrences of variable r.

Now:	F = ((λ r E[r]) (Y (λ r E[r]))) = ((λ r E[r]) F) = E[F], i.e. F = E[F] : recursion!

Example

Let:	make-factorial ≡ (λ fact (λ n (if (zero? n) 1 (* n (fact (sub1 n))))))

Let:	factorial ≡ (Y make-factorial)

Now:	factorial ≡ (Y make-factorial) = (make-factorial (Y make-factorial)) =

(make-factorial factorial) =

((λ fact (λ n (if (zero? n) 1 (* n (fact (sub1 n)))))) factorial) =

(λ n (if (zero? n) 1 (* n (factorial (sub1 n))))) : recursion!

Jos.koot (talk) 15:18, 14 December 2007 (UTC)
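The construction can be run in Python, with one caveat (my own sketch): under applicative order the Y above diverges, so the usual move is its eta-expanded variant, often called the Z combinator.

```python
# Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v)) : the applicative-order
# fixed-point combinator; the inner λv delays the self-application.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Analogue of make-factorial above (with base case 1, since 0! = 1).
make_factorial = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

factorial = Z(make_factorial)
# factorial(5) == 120
```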


 * I agree that some work was needed there and I've added a derivation of g(Y g), but I did it purely from my understanding of the text and am very prone to a formal mistake. Enisbayramoglu (talk) 14:34, 13 November 2008 (UTC)

Alternative presentation of lambda calculus coming from the page Curry-Howard
The following alternative presentation of lambda calculus was given on the page Curry-Howard, a page which is being restructured. This content is more relevant to the lambda calculus page, and as it seems to me a slightly more direct way to define lambda calculus in simple words, I am keeping a copy of it here for comparison with the current page.

Following the lambda calculus, we will use λx.E to denote the function with formal parameter x and body E. When applied to an argument, say a, this function yields E, with every free appearance of x replaced with a. Valid λ-calculus expressions have one of these forms:


 * 1) v (where v is a variable)
 * 2) λv.E (where v is a variable and E is a λ-calculus expression)
 * 3) (E F) (where E and F are λ-calculus expressions)

The second form above is referred to as function abstraction; it builds the function whose argument is v, and whose body (in which the argument v may occur zero or more times) is defined by E.

The third form is called function application; it denotes an attempt to apply the term E (as if it were a function) to the term F (as if it were to be the argument to the function).

It is common to abbreviate ((A B) C) as (A B C) and λa.λb.E as λab.E. An expression is said to be closed if it contains no free variables.
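The three term forms above can be written down as a tiny abstract syntax tree; the sketch below (class and function names are illustrative, not from any standard library) also computes free variables, which makes the notion of a closed expression concrete.

```python
# Minimal AST for the three forms of lambda terms.
from dataclasses import dataclass

@dataclass
class Var:          # form 1: a variable v
    name: str

@dataclass
class Lam:          # form 2: abstraction λv.E
    var: str
    body: object

@dataclass
class App:          # form 3: application (E F)
    fun: object
    arg: object

def free_vars(t):
    """Variables occurring free in t; a term is closed iff this is empty."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.var}   # λ binds its parameter
    return free_vars(t.fun) | free_vars(t.arg)

identity = Lam("x", Var("x"))                # λx.x is closed
print(free_vars(App(identity, Var("y"))))    # {'y'}
```

Here `λx.x` has no free variables, while applying it to the bare variable `y` leaves `y` free, so the application is not closed.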

Hugo Herbelin (talk) 13:30, 25 December 2007 (UTC)

Proposal
Hello, I propose to move the lengthy examples on "arithmetic" through to "recursion" to a separate article, called something like "Encodings in the lambda calculus". My intention would be that someone can come to this article to find out the basics of the lambda calculus, and then, if they wanted the details of encodings, they could go to that one. Any ideas, objections? Sam Staton (talk) 16:11, 8 April 2008 (UTC)
 * That makes sense to me, especially since this section could be expanded and the lambda article, itself, is becoming too long. Note that there is already an article on Church encoding that covers a great deal of what would be covered in your proposed article. A B Carter (talk) 19:26, 31 May 2008 (UTC)

Another (modest?) proposal
Hello, I would like to propose another header on the section 'See Also' on explicit substitutions. I have a feeling this is going to be a controversial one, so I'm only proposing to get a squib going, with a text a bit like:

- Explicit substitution is an umbrella term used to describe several calculi based on the Lambda calculus that pay special attention to the formalization of the process of substitution. Explicit substitutions grew out of an ‘implementational trick’ and became a respectable syntactic theory in lambda-calculus and rewriting theory. The idea of a calculus where substitutions are part of the object language, and not of the informal meta-theory, is credited to Abadi, Cardelli, Curien and Levy\cite{}. Their seminal paper on the lambda-sigma calculus explains that implementations of lambda-calculus need to be very careful when dealing with substitutions; without sophisticated mechanisms for structure-sharing, substitutions can cause a size explosion, and therefore, in practice, substitutions are delayed and explicitly recorded. This makes the correspondence between the theory and the implementation highly non-trivial, and correctness of implementations can be hard to establish. One solution is to make the substitutions part of the calculus, that is, to have a calculus of explicit substitutions.

De Bruijn indices are a syntax for terms in the λ calculus invented by the Dutch mathematician Nicolaas Govert de Bruijn of AUTOMATH fame, which is the immediate precursor of the calculus of explicit substitutions.
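To make the indices mentioned above concrete, here is a minimal sketch (all names hypothetical) converting named terms to de Bruijn notation, using 1-based indices counting enclosing binders, as in de Bruijn's original convention:

```python
# Terms as tuples: ("var", name), ("lam", name, body), ("app", fun, arg).
def to_debruijn(t, env=()):
    """Replace variable names by their distance to the binding lambda."""
    kind = t[0]
    if kind == "var":
        return ("var", env.index(t[1]) + 1)   # nearest binder has index 1
    if kind == "lam":
        # Prepending handles shadowing: the innermost binder wins.
        return ("lam", to_debruijn(t[2], (t[1],) + env))
    return ("app", to_debruijn(t[1], env), to_debruijn(t[2], env))

# λx.λy.x becomes λλ2: x is bound two binders out.
k = ("lam", "x", ("lam", "y", ("var", "x")))
print(to_debruijn(k))  # ('lam', ('lam', ('var', 2)))
```

Because names disappear, alpha-equivalent terms get literally identical representations, which is what makes the notation convenient for implementations and for explicit-substitution calculi.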

But there is a huge number of calculi of explicit substitutions in the literature. Melliès' surprising counterexample\cite{} to strong normalization for the original calculus of explicit substitutions made things worse: a multitude of calculi were described, each trying to offer the best compromise between syntactic properties of explicit substitution calculi. - I searched for "explicit subs" in this talk page (and in the article), to see what kinds of discussions had happened already, but the search didn't get much. I do know about the page http://en.wikipedia.org/wiki/De_Bruijn_index, though.

If there are no objections to starting this discussion I will go ahead.Valeria.depaiva (talk) 06:07, 10 April 2008 (UTC)


 * Hi Valeria, were you planning to use this text to make a new article? I think that would be a nice idea. Explicit substitutions have been around for a while now. Perhaps the title should be plural – not sure. Sam Staton (talk) 20:59, 10 April 2008 (UTC)

Organization of Article
A couple of comments on the organization of the article. There is a section named "Reduction Strategies" that conflates the proper reduction rules of the lambda calculus (the beta and eta rules) with true reduction strategies; the latter should really be moved to the section on computer languages.

Also, the order of lambda representations could be changed so that it goes from the simple to the more complex, along these lines:
 * Boolean values
 * Logical connectives
 * Pairs
 * Numbers
 * Arithmetic operations
 * Recursion
An order like this, or similar, is how I usually see it in other texts that cover this topic. A B Carter (talk) 19:32, 31 May 2008 (UTC)
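The first few items in the proposed order have standard Church encodings; a minimal sketch in Python lambdas (uppercase names are illustrative, chosen to avoid clashing with Python keywords):

```python
# Church booleans: a two-argument selector.
TRUE  = lambda a: lambda b: a   # pick the first argument
FALSE = lambda a: lambda b: b   # pick the second argument
AND   = lambda p: lambda q: p(q)(p)

# Church pairs: a value that hands both components to a selector.
PAIR  = lambda a: lambda b: lambda f: f(a)(b)
FST   = lambda p: p(TRUE)
SND   = lambda p: p(FALSE)

p = PAIR(1)(2)
print(FST(p), SND(p))  # 1 2
```

`AND p q` works by selection: if `p` is TRUE it returns `q`, otherwise it returns `p` (i.e. FALSE), so no separate conditional is needed.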

induction vs. recursion
The formal definition section states that free variables are defined "inductively", whereas lambda expressions are defined "recursively"; that is not consistent with the definitions of "recursion" and "induction" in the corresponding articles. If induction is indeed a proof method, then both definitions in the article are recursive. If "recursive" and "inductive" are interchangeable terms, please ignore my comment. —Preceding unsigned comment added by 85.74.169.125 (talk) 20:09, 23 June 2008 (UTC)
 * Hi, the terminology in the article is pretty common as it was, but I've tried to clarify. Sam (talk) 00:32, 24 June 2008 (UTC)

Semantics
Added a section on semantics, for completeness. Since the material on semantics is in other articles, it's mostly just a reference to those. Radagast3 (talk) 04:06, 16 September 2008 (UTC)

First model of computation?
I believe that the lambda calculus is the first formal model of computation (Turing's 1936 paper cites Church and notes that the lambda calculus provides an equivalent model of computability). Surely this deserves a mention? Radagast3 (talk) 08:18, 17 September 2008 (UTC)
 * You would need a better reference than that one, which doesn't explicitly state that it was the first one. If you can find a good reference, then that would be a great addition.   Asmeurer  ( talk   ♬  contribs ) 20:22, 16 October 2008 (UTC)
 * It's worth saying something in the article about the topic but I don't think it's accurate to say lambda calculus was the first such formal model. Hilbert's tenth problem (posed in 1900) asked for an effective method of determining whether a given diophantine equation was solvable, but nobody knew exactly what an effective method was.  The theory of recursive functions developed in the Hilbert program in the 1920s, Church's lambda calculus, and what we now call Turing machines, were separate proposals that turned out to be equivalent.  The equivalence convinced most people that the notion of effective computation had been captured (this is the Church–Turing thesis).  "From Frege to Gödel: A Source Book in Mathematical Logic" (ed. by Jean van Heijenoort) has most of the original papers and there is plenty of secondary literature, so finding suitable citations shouldn't be difficult.  207.241.239.70 (talk) 01:40, 3 December 2008 (UTC)

second sentence second paragraph
I am not sure about this, but the second sentence of the second paragraph, "It is capable of expressing any algorithm ..." seems to imply that somebody has proved Church's thesis. In any case, if the second sentence is expressing something non-trivial, a citation would be of great interest. I am probably misunderstanding both the article and the subject matter and certainly am not up-to-date with current research. 172.190.96.206 (talk) 10:28, 20 September 2008 (UTC)

Can we make the LaTeX bigger?
The LaTeX formulas are a little too small to read. I think they should be at least as big as the text, if not bigger. I have no idea how to fix this though. Anyone? Asmeurer ( talk   ♬  contribs ) 23:42, 12 October 2008 (UTC)

Assessment comment
Substituted at 15:13, 1 May 2016 (UTC)