Talk:Bisection method

Number of iterations for a certain tolerance
I think a little more should be added to this page about the number of iterations of the bisection method needed to get within a certain tolerance. By modifying Theorem 2.1 from Numerical Analysis, 8th edition, a quick bound on the error can be found. Even though the actual error can be much smaller, it gives a good estimate of how long the bisection method will take to converge. I would like to add info about this. Sc920505 (talk) 13:07, 30 September 2008 (UTC) Hmm, my textbook gives the formula for the absolute error as (b-a)/2^n, whereas the one here is (b-a)/2^(n+1). I don't know which one is correct... can someone clarify for me?


 * It's (b-a)/2^(n+1). Think about it: let n = 0. If it were (b-a)/2^n, the error would be b-a, which would extend beyond the limits of a and b, an interval which we know MUST contain the target. Strange, though, that a textbook had it wrong...
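For what it's worth, the (n+1) bound converts directly into an iteration count: to guarantee a midpoint error below a tolerance tol, one needs n >= log2((b-a)/tol) - 1 halvings. A sketch (assuming the midpoint of the final interval is taken as the root estimate; function name is mine, not from any textbook):

```python
import math

def iterations_needed(a, b, tol):
    # Midpoint error bound after n bisections is (b - a) / 2**(n + 1).
    # Requiring that to be <= tol gives n >= log2((b - a) / tol) - 1.
    return max(0, math.ceil(math.log2((b - a) / tol) - 1))

# Halving [0, 1] down to a guaranteed error of 1e-6 takes 19 steps:
# (1 - 0) / 2**20 is about 9.5e-7 <= 1e-6, while 2**19 gives about 1.9e-6.
```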


 * Also, I would like to tell readers of this article that bisection can break down if f is non-continuous, not monotonic, or not a function. It is too much of an unnecessary complication for the article, but worth mentioning.


 * I don't think that f needs to be monotone (why do you think so?), but it indeed needs to be a continuous function. -- Jitse Niesen (talk) 12:11, 13 November 2005 (UTC)


 * Just put a note that the function must be differentiable along the section being searched for a root. — The preceding unsigned comment was added by Mikefadock (talk • contribs).


 * Please explain why you think it needs to be differentiable. -- Jitse Niesen (talk) 02:20, 3 March 2006 (UTC)


 * The error is |b-a|/2^n. His text had it right. After 1 subdivision (n=1), the error is at most half the original interval, not 1/4. |b-a| doesn't extend beyond the interval we know must contain the target -- it's exactly equal to it. I also added the missing absolute value. 165.189.91.148 20:46, 20 April 2006 (UTC)


 * It depends on what you take as the approximation to the root. If you use the midpoint of the interval, the error is |b-a|/2^(n+1); if you use either endpoint, the error is |b-a|/2^n. Which formula is used is not very important, in my opinion. -- Jitse Niesen (talk) 02:26, 21 April 2006 (UTC)


 * Agreed, and I've edited the section to reflect the "controversy". The problem arises because the method doesn't give a single estimate for the root's location, and the concepts of "absolute error" and "length of the interval" are easily confused.  The actual formula indeed isn't important, save in Calculus 1 classes where it's fairly often an exam question. Majromax (talk) 22:38, 19 November 2007 (UTC)
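 * For reference, the two bounds discussed in this thread can be stated side by side (counting $$n$$ completed halvings of the starting interval $$[a, b]$$ containing root $$p$$): $$ |p_n - p| \le \frac{|b-a|}{2^n} $$ when an endpoint $$p_n$$ of the $$n$$-th interval is used as the estimate, and $$ |c_n - p| \le \frac{|b-a|}{2^{n+1}} $$ when its midpoint $$c_n$$ is used, since the midpoint is at most half the interval length from any point of the interval.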

Pseudo Code completion
The pseudocode presented in this article needs to be made more accurate. As of right now, it does not account for the possibility of encountering 0 exactly at xL or xR. A third condition needs to be added.


 * No. The bisection method will happily converge to a root even if it occurs at an endpoint; it will just converge to the endpoint with the root. The given pseudocode is proper in that it will consider an interval beginning or ending at 0 to have a sign change. Admittedly, this isn't exactly an efficient approach, but it certainly gives clear code without extraneous conditions. Majromax (talk) 22:38, 19 November 2007 (UTC)


 * Incorrect. The code checks whether the value is < 0, and if the left, right, or midpoint is exactly zero, an infinite loop would occur. Added a simple else - no performance hit compared to the normal function. It could still be made more efficient, though. —Preceding unsigned comment added by 207.161.134.144 (talk) 18:25, 4 February 2010 (UTC)
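For concreteness, the exact-zero handling discussed here can be sketched like this (hypothetical names; not the article's actual pseudocode):

```python
def bisect(f, left, right, tol=1e-12, max_iter=100):
    # Assumes f(left) and f(right) have opposite signs (or one is zero).
    fl = f(left)
    if fl == 0:
        return left            # exact zero at the left endpoint
    if f(right) == 0:
        return right           # exact zero at the right endpoint
    for _ in range(max_iter):
        mid = left + (right - left) / 2
        fm = f(mid)
        if fm == 0 or (right - left) / 2 < tol:
            return mid         # exact zero hit, or interval small enough
        if fl * fm < 0:
            right = mid        # sign change lies in [left, mid]
        else:
            left, fl = mid, fm # sign change lies in [mid, right]
    return left + (right - left) / 2
```

The explicit `fm == 0` and endpoint checks are the "third condition" the original poster asked for; without them, a strict `< 0` test can indeed loop forever on an exact zero.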

Visual Basic Code
State which version. Visual Basic .NET is no longer backward compatible with previous versions. —Preceding unsigned comment added by 192.197.54.27 (talk) 14:10, 2 October 2007 (UTC)


 * Although in general the two are not compatible, this code runs in both VB.net and the older Visual Basic. CRGreathouse (t | c) 21:32, 2 April 2008 (UTC)


 * Anyway, Visual Basic isn't free; it would be wiser to write the pseudocode in Python or Java. -- PhDP (talk) 01:30, 26 November 2008 (UTC)
 * As long as the section header states 'pseudo-code', I find Visual Basic, Python, and Java all inappropriate. Either change the header, or change the code into something that is more or less language-independent. Also keep in mind that source code is not particularly encouraged on Wikipedia (WP:NOT). --Berland (talk) 11:35, 26 November 2008 (UTC)

enumeration

 * If f has several roots in the interval [a, b], then the bisection method finds the odd-numbered roots with equal, non-zero probability and the even-numbered roots with zero probability (Corliss 1977).

with respect to which enumeration??? --Philtime (talk) 08:28, 11 May 2008 (UTC) PS: what if there is an infinite and/or uncountable number of roots? --Philtime (talk) 08:28, 11 May 2008 (UTC)


 * I think by even-numbered it means roots of even multiplicity (double roots, etc.). So looking at something like
 * $$f(x)=x^5+2x^4+x^3$$
 * which has a double root at -1 and a triple root at 0, it is saying that 0 would almost always be found rather than -1. --Rumping (talk) 17:44, 30 August 2008 (UTC)


 * I looked through Corliss' article and added some details to the article. The roots are enumerated according to their order. I don't know what happens if there are infinitely many roots; it makes sense to assume they all have probability zero, but perhaps the axioms of probability theory do not work in this case. -- Jitse Niesen (talk) 15:51, 20 September 2008 (UTC)

Corliss' paper is rather poorly worded. He takes a probability space of functions with the property that the distribution of zeros is the same as the distribution of an odd number of independent random variables with uniform distribution. (This implies, for example, that multiple zeros don't happen.) Applying bisection to a random function in that probability space, the probability that the i-th smallest zero is found is 0 if i is even (but this is always true for functions with only simple zeros), and independent of i if i is odd. Corliss' result says nothing at all about the behaviour of bisection on particular functions. Also, I don't know any real-life class of functions which satisfies Corliss' assumptions, so the practical significance is uncertain. I hope my new version of the text is clear enough. McKay (talk) 02:16, 12 June 2009 (UTC)

Convergence
Hi, I have a question about the phrase "converges linearly". Since the absolute error of bisection is |b-a|/2^n, the rate of convergence is O(1/2^n), which means the error decays exponentially. I think it would be more precise to use this information instead of the phrase "converges linearly"! —Preceding unsigned comment added by 132.235.37.166 (talk) 13:24, 3 October 2008 (UTC)
 * This is called linear convergence in numerical analysis because the number of digits in the answer grows at a linear rate. Equivalently, the running time is linear in log(eps). McKay (talk) 06:13, 1 April 2009 (UTC)
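McKay's point can be checked numerically: the guaranteed error bound shrinks by a factor of 2 each step, so the number of correct decimal digits grows by log10(2), roughly 0.301, per iteration, i.e. linearly. A sketch:

```python
import math

def digits(n):
    # Correct decimal digits guaranteed after n bisections of a unit
    # interval, i.e. -log10 of the error bound 1 / 2**n.
    return -math.log10(1 / 2 ** n)

# The gain per iteration is a constant log10(2) ~ 0.301, so the digit
# count grows linearly in n; this is what "linear convergence" means here.
```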

I don't agree with the statement that the bisection method has linear convergence or, in other words, first-order convergence. The classification of a first-order method has to do with the error along the sequence (the distance between successive values and the root), not with the estimate of the maximal absolute error, which is incidentally quite imprecise for the bisection method. In first-order methods, observing the sequence of values converging to the root reveals a steady increase in the number of correct figures. By contrast, with the bisection method it is possible for the distance to the root to increase over two consecutive iterations. It follows that the bisection method is just a method that narrows the interval containing the root, halving it at each iteration. It does not guarantee the same happens with the distance to the root, and therefore it is not a first-order method. User: Ana Maria Faustino 20 March 2013 (FEUP) — Preceding unsigned comment added by 213.22.50.11 (talk) 02:46, 20 March 2013 (UTC)

Computation of the midpoint
In the section 'Practical considerations', point 2, the passage
 * Another method relies on the assumption that (right+left)/2 exactly equals left or right if left and right are adjacent non-equal floating-point values. This is true for most hardware, including IEEE arithmetic in the absence of overflow, but can be violated if intermediate expressions are calculated to greater precision than stored variables (depending on computer optimization).

is totally dubious to me. What is this other method about? Is it what's implemented in the code below, i.e. using left+(right-left)/2 instead of (left+right)/2? And if so, does it handle the case when the assumption is true (it sounds like that) or when it is violated (which makes more sense in my opinion)? Furthermore, when is the assumption violated - any example? Final question: if I know that IEEE arithmetic is used, is there any advantage to preferring left+(right-left)/2 over (left+right)/2? I can't see any, whether left and right are very close to each other or not... Ezander (talk) 08:48, 26 November 2010 (UTC)


 * Ahh, now I see: it was about the stopping criterion, not the computation of the midpoint. Then it makes sense. However, the simpler (hi+lo)/2 has still been changed to lo+(hi-lo)/2. Any reason for this? Ezander (talk) 08:53, 26 November 2010 (UTC)
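As a side note, one commonly cited motivation for the lo + (hi - lo)/2 form (whether or not it was the reason for the change discussed here) is overflow avoidance: hi + lo can overflow even when both endpoints are representable, while their difference cannot when they have the same sign. A small illustration with Python floats:

```python
hi = lo = 1.5e308              # near the largest finite double (~1.8e308)
naive = (hi + lo) / 2          # the sum overflows to infinity before dividing
safe = lo + (hi - lo) / 2      # the difference is small, so this stays finite

# naive is inf, while safe equals 1.5e308
```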

Language of Code examples
I don't think it is very sensible that the two code examples are in different languages: one in VB and the other (the source tag states that it's C, which it clearly isn't) in some hodgepodge of languages. I've seen all that notation in different languages, but never together in one language: assignment (:=) from Pascal-like languages, comments (//) from C++, and loop constructs (endwhile, endif) from PHP(?). Very strange...

I'm not a fan of VB. However, the code looks quite clear, nearly like pseudocode... Maybe the second example should also be changed to VB? Ezander (talk) 09:03, 26 November 2010 (UTC)

History
Does anyone know when the bisection method was first used? The method seems quite obvious but that doesn't necessarily mean it came before other methods, like secant for example. — Preceding unsigned comment added by 2.222.44.210 (talk) 12:36, 2 December 2011 (UTC)

Examples
This page could probably benefit from showing a couple of examples. I will take a shot at creating one. Whlitt (talk) 18:24, 10 September 2012 (UTC) It may be laid out something like this:

(New Header within Wikipedia page titled: Examples)

Finding the root of a polynomial
Find a root of the following polynomial using the bisection method:
 * $$ f(x) = x^3 - x - 2 \,.$$

Find two numbers a and b such that $$f(a)$$ and $$f(b)$$ have opposite signs.
 * Let $$a = 1$$ and $$b = 2$$. Then
 * $$ f(1) = (1)^3 - (1) - 2 = -2 $$ and
 * $$ f(2) = (2)^3 - (2) - 2 = +4 \,.$$

Because the function is continuous and changes sign, there must be a root within the interval [1, 2].

In the first iteration, we have $$ a_1 = 1 $$, $$ b_1 = 2 $$,
 * $$ p_1 = 1 + \frac{2-1}{2} = 1.5 $$, and
 * $$ f(p_1) = (1.5)^3 - (1.5) - 2 = -0.125 .$$

Because $$ f(p_1) $$ is negative, $$ a = 1 $$ is replaced with $$ a = 1.5 $$ for the next iteration to ensure that f(a) and f(b) have opposite signs. As this continues, the interval between a and b will become increasingly smaller, converging on the root of the function. See this happen in the table below.

After 15 iterations, it becomes apparent that the iterates converge to about 1.521, a root of the polynomial. Whlitt (talk) 00:42, 11 September 2012 (UTC)
 * Your flow needed some work; there were formulas not integrated into the sentence structure.
 * Math formatting was inconsistent.
 * Mjmohio (talk) 20:01, 18 September 2012 (UTC)
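For anyone wanting to reproduce the table Whlitt describes, a short sketch (variable names are mine, not the article's):

```python
def f(x):
    return x ** 3 - x - 2

a, b = 1.0, 2.0
for i in range(1, 16):
    p = a + (b - a) / 2        # midpoint of the current interval
    if f(a) * f(p) < 0:
        b = p                  # sign change lies in [a, p]
    else:
        a = p                  # sign change lies in [p, b]
    # printing (i, p, f(p)) here would reproduce the table row by row

# After 15 iterations p is within (2 - 1) / 2**15, about 3e-5, of the
# root, which is approximately 1.5213797.
```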

The disadvantage of bisection method
I think the bisection method mentioned above can only be used to find a single root and is not effective when there are multiple roots. If I assume that the function has multiple roots on [a, b] and that f(a) and f(b) have opposite signs, then by using the method above I can find the midpoint c between a and b. If f(c) and f(b) have opposite signs, then I can use the new interval [c, b] and keep narrowing until I find one root on that interval; but in this way, I neglect the possible roots on [a, c]. Since f(a) and f(c) have the same sign, I cannot apply the bisection method to find roots on [a, c]. It seems difficult to find all the roots of the function on the interval using the bisection method, because it is hard to identify each subinterval that contains only one root of the function.

So my opinion is to find one of the roots on the interval first by the bisection method, and then apply other methods, for example Newton's method, to find the other roots of the function. In fact, we can first determine the monotonicity of the function on the given interval. If the function is monotonic on the interval [a, b] and f(a), f(b) have opposite signs, then we can apply the bisection method to find its only root; otherwise we cannot use the bisection method alone to find all roots of the function, unless we know all the local maxima and minima of the function by solving for the first and second derivatives. After that, I can split the original interval into small intervals each containing one maximum and one minimum, and check whether the function values at the maximum and the minimum have opposite signs. If they do not, skip that interval and continue checking the neighboring interval until the opposite signs appear. Then we can use the bisection method on that interval to find one root, and repeat the steps above until we find all the roots of the given function.

YangOu (talk) 01:50, 12 September 2012 (UTC)
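YangOu's interval-splitting idea can be sketched as a grid scan followed by bisection on each sign-changing subinterval (a sketch only; the grid spacing is arbitrary, and it will still miss roots of even multiplicity or roots closer together than the grid step):

```python
def find_roots(f, a, b, subdivisions=1000, tol=1e-10):
    # Scan a grid for sign changes, then bisect each bracketing subinterval.
    roots = []
    h = (b - a) / subdivisions
    for i in range(subdivisions):
        lo, hi = a + i * h, a + (i + 1) * h
        if f(lo) == 0:
            roots.append(lo)           # exact zero at a grid point
        elif f(lo) * f(hi) < 0:
            while hi - lo > tol:       # plain bisection on [lo, hi]
                mid = lo + (hi - lo) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(lo + (hi - lo) / 2)
    if f(b) == 0:
        roots.append(b)
    return roots
```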

The algorithm
Given a continuous function f(x): Hh687711 (talk) 05:42, 17 September 2012 (UTC)
 * Find points $$a_0$$ and $$b_0$$ such that $$a_0 < b_0$$ and $$f(a_0)$$ and $$f(b_0)$$ have opposite signs, that is, $$f(a_0) \times f(b_0) < 0$$.
 * Find the midpoint $$c_0 = (b_0+a_0)/2$$ and evaluate $$f(c_0)$$.
 * If $$f(c_0) > 0$$, set $$a_1 = c_0$$ and $$b_1 = b_0$$, i.e. $$[c_0, b_0]$$ is the new interval. If $$f(c_0) < 0$$, set $$a_1 = a_0$$ and $$b_1 = c_0$$. (If $$f(c_0) = 0$$, the root is found.)
 * Keep repeating similar computations and you will get an approximate solution.
 * Step 3 is incorrect; the choice depends on the sign of $$f(a_0)$$ too.
 * Step 4 is vague. Could the whole thing be expressed as a loop?
 * Could this be combined with the pseudocode already there?
 * Mjmohio (talk) 19:28, 18 September 2012 (UTC)
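To answer the question about expressing the whole thing as a loop, here is one sketch; it also chooses the subinterval by comparing signs against f(a), which addresses the step-3 objection (this is my illustration, not the article's pseudocode):

```python
def bisection(f, a, b, tol=1e-10):
    # The steps above as a single loop.  The new interval is chosen by
    # comparing the sign of f at the midpoint with the sign of f at a,
    # rather than assuming f(c) > 0 means anything on its own.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = a + (b - a) / 2
        if f(c) == 0:
            return c            # landed exactly on the root
        if (f(c) > 0) == (f(a) > 0):
            a = c               # f(a) and f(c) share a sign: root in [c, b]
        else:
            b = c               # signs differ: root in [a, c]
    return a + (b - a) / 2
```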

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Bisection method. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20130519092250/http://siber.cankaya.edu.tr:80/NumericalComputations/ceng375/node32.html to http://siber.cankaya.edu.tr/NumericalComputations/ceng375/node32.html
 * Added archive https://web.archive.org/web/20060901073129/http://numericalmethods.eng.usf.edu/topics/bisection_method.html to http://numericalmethods.eng.usf.edu/topics/bisection_method.html

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.— InternetArchiveBot  (Report bug) 06:01, 3 November 2016 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 3 external links on Bisection method. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Corrected formatting/usage for http://siber.cankaya.edu.tr/NumericalComputations/ceng375/node32.html
 * Added archive https://web.archive.org/web/20090413123941/http://numericalmethods.eng.usf.edu/topics/textbook_index.html to http://numericalmethods.eng.usf.edu/topics/textbook_index.html
 * Added archive https://web.archive.org/web/20060906070428/http://numericalmethods.eng.usf.edu/ to http://numericalmethods.eng.usf.edu/

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 01:09, 21 July 2017 (UTC)

Mathematical formula
An anonymous editor made this revision to the article. Was this correct? Thanks for your time. Opalzukor (talk) 13:44, 24 November 2020 (UTC)