Talk:Laplace's method

This page should be split in two
I don't think "Method of Steepest Descent" and "Laplace's Method" should be the same page. While you could perhaps call Laplace's Method a special case, that is misleading. The Method of Steepest Descent is a technique to reduce a problem to a form where you can apply Laplace's Method. It would be much more valuable to have separate pages, so that there is some information on steepest descent and saddle-point methods (e.g. why deform a contour onto the steepest descent path, and why this has constant phase). Lavaka 19:41, 6 September 2006 (UTC)

I agree with the above comment. This page is titled "method of steepest descent," yet nearly all of it is about Laplace's method. You can't actually learn the method of steepest descent from reading this article, but you can learn Laplace's method. 72.130.185.73 (talk) 03:58, 8 November 2009 (UTC)

I also agree. This page is mostly on Laplace's Method, and I suggest the title be changed to reflect that. --Bluemaster (talk) 14:30, 23 April 2010 (UTC) I've just moved all the content to the page Laplace's Method, which I believe (based also on the comments above) is more appropriate, leaving a redirect from the Method of steepest descent page to the Laplace's Method page. --Bluemaster (talk) 16:16, 23 April 2010 (UTC)


 * Well, the 2006 suggestion by Lavaka actually advocated a split... Never mind.
 * However, if this is retained as one page, it should not have been moved to a new title by the cut-and-paste method. Bluemaster, you should have moved the page (or asked an administrator to move it, if there was any trouble in doing so); in that way, you would not have created a number of double redirects, and you would have retained a coherent history.
 * Information for any admin who might consider amending this: Before Bluemaster's change, Laplace's method was a redirect to Method of steepest descent; it had no history except its creation as a redirect in 2005. Bluemaster moved all the content of the latter article to the former, and turned the latter (Method of steepest descent) into a redirect to the former (while still retaining the full history). I'm sure this was a bona fide mistake; I understand the motivations for a move. (The original suggestion of a split is another matter; I have no opinion on its feasibility.) However, I see that you are an experienced user, and thus I'm a little surprised by the cut-and-paste. Temporary confusion, perhaps? JoergenB (talk) 17:31, 23 April 2010 (UTC)
 * I've undone my cut-and-paste move to the original status and asked the administrator to make the move. As the (empty) Laplace's method article already exists (just to redirect to here) that was my quick and dirty way to do that without any other intention beyond improving the article. I left clear messages on the discussion pages (on both pages) so there wouldn't be doubts about the move. Sorry for the inconvenience. --Bluemaster (talk) 18:33, 23 April 2010 (UTC)
 * OK, the administrator made the move, and everything is fine now concerning the title. I made some minor changes to make the article more consistent with the title. If someone is interested in improving the Method of steepest descent topic in the future, the redirect can be removed, keeping only links within each article pointing to the other method, in the spirit of the original suggestion to split the articles. --Bluemaster (talk) 02:32, 24 April 2010 (UTC)

fr link points to wrong article
The fr: link for this page points to the French article for 'Stationary Phase' -- not for 'Steepest Descent' -- I'm not sure how to fix this though as I don't know enough French to find the right article in French wikipedia. Help? Zero sharp 23:21, 26 November 2006 (UTC)

A couple of omissions

 * M is a large number

Large compared to what?


 * Let x0 be a global minimum of f(x), which, for simplicity, we will assume to be unique.

Ok, but the article should at least mention in passing how to deal with the case where it is not. --Starwed 09:54, 1 February 2007 (UTC)


 * Well, "large number M" here means that $$M\to\infty$$ (this is made precise later in the article).


 * If the maximum is not unique, things can be complicated. If the maxima are separated from each other (say one here, another farther to the right, and so on), one can apply this theorem to each individual maximum separately by splitting the interval of integration. If the maxima form an interval (as for a constant function), the theorem is not applicable. I'll reword the statement of the theorem a bit. Oleg Alexandrov (talk) 15:52, 1 February 2007 (UTC)
 * This is rather a belated reply, but it really is necessary to specify what large means in this context. For any given function, the method will be a useful approximation for some values of M, but break down as M gets smaller.  --Starwed 03:36, 25 June 2007 (UTC)
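To make "large" concrete, here is a minimal numerical sketch (the function $$f(x) = -\cosh x$$ and the values of M are my own choices, not from the article) comparing the exact integral with the Laplace value $$e^{M f(x_0)}\sqrt{2\pi/(M |f''(x_0)|)}$$:

```python
import math

# Laplace's method: for f with a unique interior maximum at x0,
#   ∫ exp(M f(x)) dx ≈ exp(M f(x0)) * sqrt(2*pi / (M * |f''(x0)|)).
# Illustrative choice (mine, not from the article): f(x) = -cosh(x),
# so x0 = 0, f(x0) = -1, f''(x0) = -1.

def exact_integral(M, a=-10.0, b=10.0, n=50_000):
    """Trapezoidal approximation of the integral of exp(-M*cosh(x)) over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (math.exp(-M * math.cosh(a)) + math.exp(-M * math.cosh(b)))
    for i in range(1, n):
        total += math.exp(-M * math.cosh(a + i * h))
    return total * h

def laplace_approx(M):
    # exp(M * f(0)) * sqrt(2*pi / (M * |f''(0)|)) = exp(-M) * sqrt(2*pi / M)
    return math.exp(-M) * math.sqrt(2 * math.pi / M)

for M in (1, 5, 20, 100):
    ex, ap = exact_integral(M), laplace_approx(M)
    print(f"M={M:4d}  exact={ex:.6e}  laplace={ap:.6e}  rel.err={abs(ex - ap) / ex:.3e}")
```

For this particular f the exact value is $$2K_0(M)$$ (a modified Bessel function), and the printed relative errors decay roughly like 1/(8M); so "large" really means "large enough that a relative error of order 1/M is acceptable".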

Asymptotic series
A question involving a possibly divergent series. Let:

$$ \int_{a}^{b} e^{M f(x)}\, dx = g(M)\left(1 + a_1 M^{-1} + a_2 M^{-2} + \cdots \right) $$

Did Laplace admit that, in general, this series in 1/M is divergent unless M is large, but might still be summable (in the Borel or some other sense)? --85.85.100.144 22:12, 16 February 2007 (UTC)


 * In general the series is an asymptotic series and, as such, is often divergent, but sometimes summable. The theory for dealing correctly with such sums has a number of intricacies, but for the intended use as an approximation they are usually quite safe and accurate, especially when the sum is truncated at the smallest term; that is, once the terms start growing again, stop summing! linas (talk) 03:32, 31 January 2010 (UTC)
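To illustrate the "stop at the smallest term" rule concretely, here is a small sketch (the integral $$\int_0^\infty e^{-Mt}/(1+t)\,dt$$ and the choice M = 10 are mine) whose asymptotic series $$\sum_{n\ge 0} (-1)^n\, n!\, M^{-(n+1)}$$ diverges for every fixed M, yet truncating near the smallest term is excellent:

```python
import math

# I(M) = ∫_0^∞ exp(-M t) / (1 + t) dt has the divergent asymptotic series
#   I(M) ~ sum_{n >= 0} (-1)^n * n! / M^(n+1).
# Terms shrink until n ≈ M and then grow without bound; the best accuracy
# comes from truncating at the smallest term.

def exact(M, upper=40.0, n=200_000):
    """Trapezoidal approximation of I(M); the tail beyond `upper` is negligible."""
    h = upper / n
    total = 0.5 * (1.0 + math.exp(-M * upper) / (1.0 + upper))
    for i in range(1, n):
        t = i * h
        total += math.exp(-M * t) / (1.0 + t)
    return total * h

def partial_sum(M, N):
    """First N terms of the asymptotic series."""
    return sum((-1) ** n * math.factorial(n) / M ** (n + 1) for n in range(N))

M = 10
I = exact(M)
errors = [abs(partial_sum(M, N) - I) for N in range(1, 25)]
best = min(range(len(errors)), key=lambda k: errors[k])
print("optimal truncation:", best + 1, "terms, error", errors[best])
print("24 terms (divergence has set in), error", errors[-1])
```

The error is smallest after roughly M terms (on the order of $$e^{-M}$$) and then deteriorates as the factorial growth takes over, which is the behaviour described above.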

Self-promotional links
It seems to me that example 2 is just a trick to promote the reference to Azevedo-Filho and Shachter. It is not a relevant example for an introductory page on the Laplace method. Both the reference and example 2 should be removed. — Preceding unsigned comment added by Superhiggs (talk • contribs) 15:53, 24 September 2012 (UTC)

Proof
Am I wrong, or should the assumption be that f(x) is a twice-differentiable function whose second derivative is continuous? The proof uses continuity of the second derivative, which does not hold for many functions satisfying the stated assumptions... — Preceding unsigned comment added by 134.76.62.65 (talk) 17:28, 30 January 2013 (UTC)

Error in proof?
I saw this comment about the proof, which I am not knowledgeable enough to evaluate: "The biggest error is their usage of the limit; it's not even clear that this limit exists. One must either argue using limsups and liminfs or first somehow establish that the limit exists before proceeding with showing it's 1." Andrew Keenan Richardson (talk!) 17:09, 31 May 2013 (UTC)

Why is it so hard to understand
Laplace's method basically says that a function which looks like a Gaussian has an integral similar to that of a Gaussian.

I admit it's slightly more complicated than that: increasing the "M" does make it resemble a Gaussian more and more, because a function's maximum usually looks more like the -x^2 function the closer you zoom in on it. And I admit there is a lot of bonus content at the bottom.

But how does this article make it all sound like scholarly science that only high-level mathematicians should understand? I'm betting that some portion of readers give up after not understanding "$$R = O\left((x-x_0)^3\right).$$", even though that has little to do with Laplace's method. "What the heck is O?" is a thought certain to occur. Also, "Therefore, the function ƒ(x) may be approximated to quadratic order" is sure to add a tinge of confusion for some readers, although on its own that's endurable.
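For what it's worth, the expansion being invoked there is just Taylor's theorem at the maximum, and "O" is big-O notation: the remainder is bounded by a constant times $$(x-x_0)^3$$ near $$x_0$$. Since $$f'(x_0) = 0$$ at an interior maximum,

$$ f(x) = f(x_0) + \tfrac{1}{2} f''(x_0)(x - x_0)^2 + R, \qquad R = O\left((x - x_0)^3\right), $$

so, for $$f''(x_0) < 0$$,

$$ e^{M f(x)} \approx e^{M f(x_0)}\, e^{-\frac{M}{2}\, |f''(x_0)|\, (x - x_0)^2}, $$

which is exactly the Gaussian whose integral, $$ e^{M f(x_0)} \sqrt{2\pi/(M\, |f''(x_0)|)} $$, gives Laplace's formula.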

I still don't understand what "Importantly, the accuracy of the approximation can depend on the variable of integration, that is, what goes into $h(x)$ and what stays in $g(x)$." meant.

The article seems written for someone who already knows everything about Laplace's method 135.0.167.2 (talk) 04:53, 27 November 2013 (UTC)

A question about the integration for the d-dimensional case in the section *Other formulations*
I have a question about this article on Laplace's method and the one on the method of steepest descent. Both integrate over the d-dimensional vector "$$ \mathbf{x} $$" with the element "$$ d\mathbf{x} $$". My question is: does "$$ d\mathbf{x} $$" denote "$$ \| d\mathbf{x} \| $$" or "$$ dx_1 \wedge dx_2 \wedge \cdots \wedge dx_d $$"? It looks like you use "$$ dx_1 \wedge dx_2 \wedge \cdots \wedge dx_d $$" as the definition. Ychsue (talk) 09:24, 14 May 2014 (UTC)

Now I understand that $$ d\mathbf{x} $$ means $$ d^d x = dx_1 \wedge dx_2 \wedge \cdots \wedge dx_d $$. Ychsue (talk) 11:50, 19 June 2014 (UTC).

The "general theory" presented in Section 2 is actually not general
I would just like to point out that the assumptions in Section 2 are fairly restrictive. A situation that appears often in practice, arguably as often as the one presented, is that f(x) attains its maximum at one of the endpoints of the interval, with a nonvanishing derivative. Then we get a different asymptotic equivalent, with a 1/M factor instead of 1/M^{1/2}. The most general situation is, as I recall, treated in a good book (possibly in French) by Dieudonné. — Preceding unsigned comment added by 64.18.85.10 (talk) 16:24, 24 July 2018 (UTC)
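That endpoint case is easy to check numerically. A sketch (the function f(x) = -x - x^2 on [0, 1] is my own choice), using the leading endpoint formula $$\int_a^b e^{M f(x)}\,dx \approx e^{M f(a)}/(M |f'(a)|)$$ for a maximum at x = a with f'(a) < 0:

```python
import math

# Endpoint-maximum variant of Laplace's method: if f is maximal at the left
# endpoint a with f'(a) < 0, then
#   ∫_a^b exp(M f(x)) dx ≈ exp(M f(a)) / (M * |f'(a)|),
# a 1/M prefactor instead of the interior-maximum M^(-1/2).
# Example (mine): f(x) = -x - x^2 on [0, 1], so f(0) = 0 and f'(0) = -1.

def exact(M, n=200_000):
    """Trapezoidal approximation of ∫_0^1 exp(-M*(x + x^2)) dx."""
    h = 1.0 / n
    g = lambda x: math.exp(-M * (x + x * x))
    total = 0.5 * (g(0.0) + g(1.0))
    for i in range(1, n):
        total += g(i * h)
    return total * h

def endpoint_approx(M):
    return 1.0 / M  # exp(M*f(0)) / (M * |f'(0)|) with f(0) = 0, f'(0) = -1

for M in (10, 100, 1000):
    e = exact(M)
    print(f"M={M:5d}  exact={e:.6e}  approx={endpoint_approx(M):.6e}  "
          f"rel.err={abs(e - endpoint_approx(M)) / e:.3e}")
```

The relative error here decays like a constant times 1/M, consistent with a genuine 1/M leading order at the endpoint.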

definition
I find the structure of this article very confusing. I'd expect it to define the subject (Laplace's method), or to have a clearly marked section that does so, but it doesn't. To me, the method itself belongs in the lead and in a section whose title tells the reader that it defines Laplace's method. 018 (talk) 02:27, 4 January 2019 (UTC)

Confusing part in "Other formulations"
In the "Other formulations" section the article reads,


 * By the way, although $$\mathbf{x}$$ denotes a $$d$$-dimensional vector, the term $$ d\mathbf{x}$$ denotes an infinitesimal volume here, i.e. $$d\mathbf{x} := dx_1dx_2\cdots dx_d$$.

Why is this important? Also, does "$$d\mathbf{x} := dx_1 dx_2 \cdots dx_d$$" really hold? I know we write it that way, but this seems to imply that the product itself has some meaning. I think it's more like



$$ \int e^{M f(\mathbf{x})}\, d\mathbf{x} = \int dx_1 \int dx_2 \cdots \int dx_d \, e^{M f(x_1, x_2, \dots, x_d)} $$

but I don't think anyone who understood what a Hessian was would have this question. 018 (talk) 02:37, 4 January 2019 (UTC)
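In case it helps later readers: treating $$d\mathbf{x}$$ as the iterated element $$dx_1 dx_2 \cdots dx_d$$ is exactly what makes the d-dimensional formula $$\int e^{M f(\mathbf{x})}\, d\mathbf{x} \approx e^{M f(\mathbf{x}_0)}\, (2\pi/M)^{d/2} / \sqrt{\det(-H(\mathbf{x}_0))}$$ work, where H is the Hessian at the maximum. A numerical sketch (the quartic example is my own choice):

```python
import math

# Two-dimensional Laplace approximation with Hessian H at the maximum x0:
#   ∫∫ exp(M f(x, y)) dx dy ≈ exp(M f(x0)) * (2*pi/M)^(d/2) / sqrt(det(-H)).
# Example (mine): f(x, y) = -(x^2 + x*y + y^2) - x^4, maximized at (0, 0),
# where -H = [[2, 1], [1, 2]] and det(-H) = 3 (the quartic term does not
# affect the Hessian at the origin).

def exact(M, L=6.0, n=400):
    """Midpoint-rule double integral of exp(M f) over [-L, L]^2."""
    h = 2.0 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            total += math.exp(-M * (x * x + x * y + y * y + x ** 4))
    return total * h * h

def laplace_approx(M, d=2, det_neg_hessian=3.0):
    return (2.0 * math.pi / M) ** (d / 2.0) / math.sqrt(det_neg_hessian)

for M in (1, 4, 16):
    e = exact(M)
    a = laplace_approx(M)
    print(f"M={M:3d}  exact={e:.5f}  laplace={a:.5f}  rel.err={abs(e - a) / a:.3f}")
```

Because f here is a quadratic form plus a quartic correction, the approximation is not exact, and the relative error shrinks as M grows, as it should.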

"Steepest descent" should be included in "see also", am I mistaken?
Steepest descent should be included in the "see also" section, since it explicitly uses Laplace's method. I am no expert in complex or real analysis, but this is pretty evident to anyone who learned this from Bender and Orszag.

Judging from the previous discussion, steepest descent and Laplace's method were conflated and used to share the same page. This alone merits the inclusion of steepest descent in the "see also" section.