Talk:Fixed-point iteration

$$\cos$$ and $$\sin$$
Why does $$\cos$$ satisfy the assumptions of the Banach fixed-point theorem, while $$\sin$$ does not? Neither function has a Lipschitz constant $$< 1$$ on the whole real line, unless one restricts the domain of $$\cos$$; but then the statement "for any starting point $$x_0$$", in combination with satisfying the assumptions of the Banach fixed-point theorem, can no longer be made. However, those assumptions are always satisfied after the first iteration step, because $$\cos$$ has a Lipschitz constant $$< 1$$ on $$[-1,1]$$. This phenomenon should be mentioned explicitly. — Preceding unsigned comment added by 213.22.81.218 (talk) 21:50, 14 September 2022 (UTC)
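To make the point concrete, here is a small Python sketch (my own, under the assumptions above): after one step, $$\cos$$ maps every real start into $$[-1,1]$$, where $$|\cos'(x)| = |\sin x| \le \sin 1 < 1$$, so the contraction argument applies; $$\sin$$ has derivative 1 at its fixed point 0, so its iterates decay only sublinearly.

```python
import math

def iterate(f, x0, n):
    """Return the n-th iterate of f starting from x0."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# cos: any real start lands in [-1, 1] after one step; there
# |cos'(x)| = |sin(x)| <= sin(1) < 1, so Banach applies.
for x0 in (-100.0, 0.0, 3.0, 1e6):
    print(iterate(math.cos, x0, 200))  # all ~0.739085 (the Dottie number)

# sin: sin'(0) = 1, no contraction at the fixed point 0;
# iterates decay only like sqrt(3/n).
print(iterate(math.sin, 1.0, 200))
```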

What examples should be given in this article?
The Numerical analysis article gives one example of fixed point iteration: $$x_{k+1}=(x_k^2-2)^2+x_k$$. This article can accommodate several examples and can illustrate them with graphics (maybe animations too?).

The example in the NA article can be complemented here by comparing it with the superficially similar iteration: $$x_{k+1}=(x_k^2-2)+x_k$$.
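Both iterations share the fixed points $$\pm\sqrt{2}$$ (the roots of $$x^2-2$$), but their stability there differs; a quick derivative check (my own sketch, not from either article) makes the contrast concrete:

```python
import math

r = math.sqrt(2)  # common fixed point of both iterations

# g(x) = (x^2 - 2)^2 + x, the NA article's example:
gprime = 4 * r * (r**2 - 2) + 1   # g'(sqrt(2)) = 1: neutral, so any convergence is slow
# h(x) = (x^2 - 2) + x, the superficially similar iteration:
hprime = 2 * r + 1                # h'(sqrt(2)) ~ 3.83: |h'| > 1, so sqrt(2) repels

print(gprime, hprime)
```

Since $$|h'(\sqrt2)|>1$$, iterates of the second map started near $$\sqrt2$$ move away from it, which is exactly the "superficially similar but different" behaviour worth illustrating.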

The Fixed point (mathematics) article has a nice graphic of an iteration of sin(x) that could be used to show that FPI can be used on many types of functions.

Examples of "what can go wrong" would also be nice.

Of course, examples are only part of the article.

--Jtir 14:13, 8 October 2006 (UTC)


 * I redirected this for now. Feel free to undo that if more examples are added. Oleg Alexandrov (talk) 01:31, 9 October 2006 (UTC)


 * An additional nice example would be the calculation of the fixed points of the function $$g(x) = (x-1)\cdot(x-2)\cdot(x-3) + x$$ and the stability of the iteration $$x_{n+1}=g(x_n)$$. I've written up this example in the Spanish-language article on the fixed-point theorem. I can provide the Python code too if you like. Mlliarm (talk) 12:43, 1 November 2022 (UTC)
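A sketch of that example (my own rendering, not Mlliarm's code): the fixed points of $$g$$ are the roots 1, 2, 3 of $$p(x)=(x-1)(x-2)(x-3)$$, and since $$g'(x) = p'(x) + 1$$, one gets $$g'(1)=g'(3)=3$$ and $$g'(2)=0$$, so only $$x=2$$ attracts, and it does so superattractively:

```python
def g(x):
    return (x - 1) * (x - 2) * (x - 3) + x

# Stability: g'(1) = g'(3) = 3 (repelling), g'(2) = 0 (superattracting),
# so iterates started near 2 converge very fast, while 1 and 3 repel.
x = 2.4
for _ in range(20):
    x = g(x)
print(x)  # ~2.0
```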

Intuitively (and thus likely wrong?): aren't most diffusion-equation solvers, the relaxation method for differential equations, and quantum computing all examples of fixed-point iteration? 104.187.53.82 (talk) 15:36, 29 October 2023 (UTC)

"fixed point iteration" is a technique in theoretical computer science
(copied from User talk:Oleg Alexandrov) --Jtir 06:17, 9 October 2006 (UTC) Actually "fixed point iteration" is a technique in theoretical computer science: definition by recursion is regarded as solution of a fixed point problem g = F(g) and iterates of F converge to the fixed point. This technique has various flavors: an order theoretic one and a metric space one. --CSTAR 06:06, 9 October 2006 (UTC)

How are the iteration and the function related?
The third example has me wondering about the relation between the iteration and the function — why this function and not some other? Put another way, if someone gives you an iteration, can you say what function corresponds(?) to it? This article is the place to explain the relationship between the two.

"... - this function is not continuous at x=0, and in fact has no fixed points." I'm not sure what the dash in the third example means. Can it be replaced with the word "because"? --Jtir 18:05, 10 October 2006 (UTC)


 * The function f and the iteration rule $$x_n \rightarrow x_{n+1}$$ are related by $$x_{n+1} = f(x_n)$$. The point of the third example is to show that the condition that f is continuous is a necessary condition; if f is not continuous then the iterations of f can converge to a value that is not a fixed point of f. Gandalf61 14:39, 11 October 2006 (UTC)

an example comparing FPI and Newton's method?
I tried computing the example $$x_{n+1} = \sin x_n$$ on a handheld calculator. I was down to about 0.2 when I got tired of pressing the sin button. The Further properties section could give an example comparing the number of iterations needed to reach some value using FPI with the number needed using Newton's method. --Jtir 18:25, 10 October 2006 (UTC)
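For a rough comparison (my own sketch; Newton is applied to $$f(x)=\sin x - x$$, whose root is the fixed point 0), one can count how many steps each method needs to get below 0.2 starting from $$x_0=1$$:

```python
import math

def count_fpi(x, tol=0.2):
    """Steps of x -> sin(x) until |x| <= tol."""
    n = 0
    while abs(x) > tol:
        x = math.sin(x)
        n += 1
    return n

def count_newton(x, tol=0.2):
    """Steps of Newton's method on f(x) = sin(x) - x until |x| <= tol."""
    n = 0
    while abs(x) > tol:
        x = x - (math.sin(x) - x) / (math.cos(x) - 1)
        n += 1
    return n

print(count_fpi(1.0), count_newton(1.0))  # plain iteration needs many times more steps
```

Since 0 is a triple root of $$\sin x - x$$, Newton converges only linearly here (rate about 2/3), yet it still vastly outpaces the plain iteration, whose error decays like $$\sqrt{3/n}$$.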

The Newton iteration is a fixed point iteration. Loisel 22:37, 10 October 2006 (UTC)


 * I took that distinction from the article, which reads:
 * "Fixed point iteration is not always the best method of computing fixed points. For example, Newton's method ..."
 * --Jtir 22:47, 10 October 2006 (UTC)

Well obviously the article is missing a point. Loisel 00:24, 11 October 2006 (UTC)


 * You are right. Newton's iteration is a fixed point iteration. I fixed that in the article. Oleg Alexandrov (talk) 03:27, 11 October 2006 (UTC)


 * Sorry, I think I overwrote your text when I got an edit conflict. I had written up a bunch of stuff which seemed to moot your changes. Loisel 04:27, 11 October 2006 (UTC)

example of the Babylonian method
Thanks for adding the Babylonian method to the examples. I noticed that it is the only example that does not give the iteration. --Jtir 17:03, 12 October 2006 (UTC)
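For reference, the missing iteration is $$x_{n+1} = \tfrac12\left(x_n + \tfrac{S}{x_n}\right)$$, converging to $$\sqrt{S}$$ (equivalently, Newton's method applied to $$x^2 - S$$); a minimal sketch:

```python
import math

def babylonian(S, x0=1.0, n=8):
    """Fixed-point iteration x -> (x + S/x)/2, converging to sqrt(S)."""
    x = x0
    for _ in range(n):
        x = (x + S / x) / 2
    return x

print(babylonian(2), math.sqrt(2))  # agree to machine precision
```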

complex fixed points?
Not sure if this is the right place to ask, but here goes anyway: obviously, functions whose graphs don't cross the line $$f(x)=x$$ have no real fixed points, like ln(x). But you can use fixed-point iteration on ln(x) to obtain an attractive complex fixed point. Does anyone have any information on complex fixed points? --uhvpirate 04:55, 12 April 2007 (UTC)
 * @uhvpirate: I think these notes contain a great deal! Mlliarm (talk) 13:21, 1 November 2022 (UTC)
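As a quick numerical check (my own sketch): iterating $$\ln$$ with complex arithmetic from a complex seed does settle on an attracting fixed point near $$0.318 + 1.337i$$; it attracts because $$|\tfrac{d}{dz}\ln z| = 1/|z| < 1$$ there.

```python
import cmath

# Iterate the principal complex logarithm from a complex seed.
z = 1 + 1j
for _ in range(200):
    z = cmath.log(z)

print(z)                      # ~ (0.3181+1.3372j)
print(abs(z - cmath.log(z)))  # residual ~ 0: z really is a fixed point of log
```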

Toilet flushing
I had some diarrhea today, and when I flushed I wondered: can the question of whether the water spirals clockwise or counter-clockwise finally be answered using a fixed-point problem? —Preceding unsigned comment added by 130.85.237.88 (talk) 11:55, 14 November 2007 (UTC)


 * > Can the question of whether the water spirals clockwise or counter-clockwise finally be answered by using a fixed-point problem?
 * This has to do with the Coriolis pseudo-force. Don't know if it's related to fixed points. Mlliarm (talk) 20:50, 31 October 2022 (UTC)

Methodological Error
I have updated the "Fixed Point Iteration" page with a "Methodological Error" section. I added some points that help determine whether the iteration will converge or diverge. Once the exact root of a given function has been calculated, it can be plugged into g'(r); if |g'(r)| < 1 the scheme converges, otherwise it diverges and the iteration should be terminated. Pooja929 21:09, 07 October 2010


 * I have removed the new section because it was mostly cut-and-pasted from the source, it was not consistent with the notation or flow of the rest of the article, and it was unencyclopedic in tone (Wikipedia strongly discourages use of the first person ("we", "us") in articles; see WP:TONE). Also, I think there is a logical error in that section: the method it proposes for determining the convergence of an iterative process depends on knowing a root of the iterated function, which can in turn only be found if the iterative process converges. So the reasoning is circular. Gandalf61 (talk) 08:02, 8 October 2010 (UTC)

Sample Code
Simple Matlab code has been added to the page. PoojaP 10:02 A.M., 07 November 2010

When will the fixed-point iteration converge?
This article gives examples where the fixed-point iteration converges and where it diverges, but it does not state specifically when the iteration will converge (so that it is the method we should use) and when it will diverge (so it will not yield the fixed point). Of course there are many fixed-point theorems we can use to tell whether a fixed point exists, but for the fixed-point iteration itself, since the function has to be continuous (mentioned in example 4), I think we can add a section to this article on how to tell whether the iteration converges. I noticed in the Properties section there is a theorem we can use: If a function $$f$$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $$L<1$$, then this function has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $$x_0$$.

I suggest adding this paragraph:

Not every fixed-point iteration converges. Therefore, when constructing a fixed-point iteration, it is very important to make sure it converges. There are several fixed-point theorems that guarantee the existence of a fixed point, but since the iteration function is continuous, we can usually use the following theorem to test whether an iteration converges: If a function $$f$$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $$L<1$$, then this function has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $$x_0$$. This generalizes to any complete metric space.

Proof of this theorem:

Since $$f$$ is Lipschitz continuous with Lipschitz constant $$L<1$$, then for the sequence $$\{x_n,n=0,1,2...\}$$, we have:

$$|x_2-x_1|=|f(x_1)-f(x_0)|\leq L|x_1-x_0|$$,

$$|x_3-x_2|=|f(x_2)-f(x_1)|\leq L|x_2-x_1|$$,

$$\cdots$$,

and

$$|x_n-x_{n-1}|=|f(x_{n-1})-f(x_{n-2})|\leq L|x_{n-1}-x_{n-2}|$$.

Combining the above inequalities yields:

$$|x_n-x_{n-1}|\leq L^{n-1}|x_1-x_0|$$.

Since $$L<1$$, $$L^{n-1}\rightarrow 0$$ as $$n \rightarrow \infty$$.

Therefore, $$\{x_n\}$$ can be shown to be a Cauchy sequence (the successive differences are dominated by a geometric series), so it converges to some point $$x^*$$. Since $$f$$ is continuous, taking limits in $$x_{n+1}=f(x_n)$$ gives $$x^*=f(x^*)$$, i.e. $$x^*$$ is the fixed point. XueGong (talk) 01:26, 17 September 2012 (UTC)
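The inequality at the heart of the proof can also be checked numerically; here is a sketch using the contraction $$f(x)=\tfrac12\cos x$$ (my own choice of example; it is globally Lipschitz with $$L=\tfrac12$$ since $$|f'(x)| = \tfrac12|\sin x| \le \tfrac12$$):

```python
import math

f = lambda x: 0.5 * math.cos(x)
L = 0.5  # global Lipschitz constant of f

# Build the iteration sequence until successive terms stop moving.
xs = [5.0, f(5.0)]
while abs(xs[-1] - xs[-2]) > 1e-15:
    xs.append(f(xs[-1]))

# Check the proof's bound |x_n - x_{n-1}| <= L^(n-1) |x_1 - x_0| termwise.
for n in range(1, len(xs)):
    assert abs(xs[n] - xs[n-1]) <= L**(n-1) * abs(xs[1] - xs[0]) + 1e-15

xfix = xs[-1]
print(xfix, abs(xfix - f(xfix)))  # limit and its (tiny) fixed-point residual
```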


 * Your suggestions seem good, and it looks like they have been implemented. As someone currently learning this method, I am confused that the fixed point is labeled x_0 while the initial starting point is also labeled x_0, which suggests that the fixed point and the starting point are the same number; this is clearly not true. If the initial guess is labeled x_0, could the fixed point be given a different label? Harlananelson (talk) 01:00, 23 October 2022 (UTC)
 * You are right, initial value and fixpoint are usually distinct, and should have distinct names. I'm going to fix this. Thanks for noticing! - Jochen Burghardt (talk) 09:33, 23 October 2022 (UTC)
 * ✅ I used "$$x_{\rm fix}$$" to denote fixpoints throughout the article (I hope, I got every occurrence - ?). - Jochen Burghardt (talk) 09:55, 23 October 2022 (UTC)

Cambridge University comments
Cambridge University's Computational Projects IB manual (2014/15) says at the bottom of page 31: "There are also [numerical analysis] articles in Wikipedia, but that on fixed-point iteration is in need of some improvement as of July 2013." It does not seem to have changed much since then. --Rumping (talk) 00:03, 13 January 2015 (UTC)

Sternberg's book
Added "Dynamical Systems" in the resources. The first chapter ("Iteration and fixed points") is a really nice intro in fixed point iteration and a great study of the stability of Newton-Raphson algorithm. Mlliarm (talk) 21:05, 31 October 2022 (UTC)

Shashkin's book
Added in the bibliography the excellent introductory book of Yu. A. Shashkin. Mlliarm (talk) 12:43, 1 November 2022 (UTC)

India Education Program course assignment
This article was the subject of an educational assignment supported by Wikipedia Ambassadors through the India Education Program.

The above message was substituted by PrimeBOT (talk) on 20:03, 1 February 2023 (UTC)