Talk:Ornstein–Uhlenbeck process

COMMENTS 1
I don't think the AR(1) process is the discrete analog to the Ornstein-Uhlenbeck process, as it has no mean-reverting characteristics. 212.39.192.3 (talk) 10:14, 30 December 2009 (UTC) --
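For what it's worth, exactly discretizing the article's mean-reverting SDE does yield an AR(1) recursion whose coefficient $$e^{-\theta\,\Delta t} < 1$$ pulls the path back toward $$\mu$$. A quick numerical sketch (all parameter values are illustrative):

```python
import numpy as np

# Exactly discretizing dX = -theta*(X - mu) dt + sigma dW on a grid with
# step dt gives the AR(1) recursion
#   X[n+1] = mu + phi*(X[n] - mu) + eps[n],   phi = exp(-theta*dt) < 1,
# with eps[n] ~ N(0, sigma^2*(1 - phi**2)/(2*theta)).
theta, mu, sigma, dt = 1.0, 1.2, 0.3, 0.01
phi = np.exp(-theta * dt)
eps_sd = sigma * np.sqrt((1 - phi**2) / (2 * theta))

rng = np.random.default_rng(0)
n = 100_000
eps = eps_sd * rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = mu + phi * (x[i] - mu) + eps[i]

# The coefficient phi < 1 pulls the sample path back toward mu,
# so the long-run sample mean settles near mu = 1.2.
print(x[-50_000:].mean())
```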


 * Can someone clarify how in the equation r_t goes from being in the integrand of a random variable to having a set value? I found this to be a bit confusing. —Preceding unsigned comment added by 209.6.49.37 (talk) 17:53, 23 November 2009 (UTC)

--
 * The definition of the Ornstein-Uhlenbeck process is not correct. The Ornstein-Uhlenbeck process does not have to be mean-reverting; it can have a general drift, i.e.


 * $$dr_t = \mu\,dt + \sigma\, dW_t,\,$$

However, it is possible to define mean-reverting Ornstein-Uhlenbeck processes like
 * $$dr_t = -\theta (r_t-\mu)\,dt + \sigma\, dW_t,\,$$

Usually an Ornstein-Uhlenbeck process refers to processes where the process itself does not appear in the stochastic part of the SDE which describes its dynamics, i.e. $$\sigma$$ does not depend on $$r$$.


 * I disagree. The simplest definition of an O-U process is given by $$dr_t = -\theta r_t \,dt + \sigma\, dW_t,\,$$ which is mean-reverting to zero (did you have a typo in your definition of the OU process?). The definition in the article is just a more general case of the OU process than we are normally used to seeing. If you want, just set $$\mu = 0$$.


 * See also http://pauillac.inria.fr/algo/csolve/ou.pdf. If you think this is incorrect, could you please provide a better reference?

How about Stochastic Differential Equations by Bernt Oksendal? There the O-U and mean-reverting O-U processes are distinguished. Your point above about setting $$\mu=0$$ is valid, but I think it makes pedagogic sense to introduce the $$\mu=0$$ case first. —Preceding unsigned comment added by 131.111.16.20 (talk) 08:57, 13 April 2009 (UTC) --

The covariance function for this process looks strange and quite different from earlier versions of this article, in particular those from last summer.

Indeed, when $$|s-t| \to \infty$$, $$\operatorname{cov}(r_s, r_t)$$ should tend to 0, as the autocorrelation effect decreases with the time interval.
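A quick numerical check of the quoted formula (with illustrative values for $$\theta$$ and $$\sigma$$) confirms the decay with separation:

```python
import math

# Covariance of an OU process started from a fixed value at time 0
# (formula quoted in the article, valid for s < t); theta and sigma
# defaults are illustrative.
def ou_cov(s, t, theta=1.0, sigma=0.3):
    return sigma**2 / (2 * theta) * (
        math.exp(-theta * (t - s)) - math.exp(-theta * (t + s))
    )

# With s fixed, the covariance does decay toward 0 as t - s grows:
for t in (2.0, 5.0, 10.0):
    print(t, ou_cov(1.0, t))
```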

--

The article fails to mention what $$dW_t$$ is!


 * Indeed. I added that $$W_t$$ refers to the Wiener process. -- Jitse Niesen (talk) 05:49, 26 June 2006 (UTC)

The text below the graph
The text below the graph should read $$\theta = 1, \mu = 1.2, \sigma = 0.3$$ to be consistent with the legend on the actual graph itself. I made this correction on Aug. 30, 2006 (ko4seki)


 * The text below the graph gives values for the parameter $$a$$ but fails to mention what it is. Tim (talk) 07:43, 7 January 2008 (UTC)


 * Does it really make sense to write a=0 (a.s.) for a simulated trajectory? Since it's a simulation you know the exact value of a? —Preceding unsigned comment added by 85.24.189.238 (talk) 16:06, 29 December 2010 (UTC)


 * Also it is nonsense to label a single trajectory as "normally distributed". A single trajectory has a single starting point. An ensemble of trajectories has a distribution. — Preceding unsigned comment added by 199.89.103.11 (talk) 12:06, 17 August 2012 (UTC)

The whole point is that the "disturbance" that drives the trajectory has a distribution. That distribution could be uniform, two-sided exponential, two-sided triangular, Gaussian, or whatever. Please do not give us anything about "an ensemble of trajectories has a distribution." That is beside the point. Since the process is Markov, you can pick out any point along the curve, anywhere, and then you have no way to use the past to predict the future of the curve. That is the whole idea of a Markov process. The only things that you know anything about are 1) the location of the point that you chose, and 2) the distribution of the disturbance that drives the curve along. 98.67.108.12 (talk) 01:47, 26 August 2012 (UTC)

The O-U process is also the same as first-order, low-pass filtered white noise.
Just as the Wiener process is integrated white noise (non-leaky integrator), the Ornstein-Uhlenbeck process is RC low-pass filtered (a.k.a. "leaky integrator") white noise. This is electrical engineering language, but should somehow be included, right? 71.254.8.148 (talk) 04:29, 31 March 2009 (UTC)


 * It probably should be included. Michael Hardy (talk) 17:42, 13 April 2009 (UTC)
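A sketch of the analogy, assuming the simple zero-mean form $$dX = -\theta X\,dt + \sigma\,dW$$ and illustrative parameters: the Euler-Maruyama update is literally a one-pole IIR low-pass filter ("leaky integrator") applied to discrete white noise, and unlike the non-leaky Wiener sum its variance stays bounded:

```python
import numpy as np

# Euler-Maruyama step for dX = -theta*X dt + sigma dW:
#   X[n+1] = (1 - theta*dt)*X[n] + sigma*sqrt(dt)*w[n]
# i.e. a one-pole IIR low-pass filter with pole a = 1 - theta*dt,
# driven by discrete white noise w[n].
theta, sigma, dt, n = 1.0, 0.3, 0.01, 200_000  # illustrative values
a = 1 - theta * dt

rng = np.random.default_rng(1)
w = sigma * np.sqrt(dt) * rng.normal(size=n)

x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = a * x[i] + w[i]

# The leak keeps the variance bounded near sigma^2/(2*theta); the
# non-leaky (Wiener) running sum of the same noise spreads without bound.
wiener = np.cumsum(w)
print(x[-100_000:].var(), wiener.var())
```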

Make article accessible to physicists, engineers, etc.
Dear mathematicians who wrote this article. As a mathematician, I appreciate your work. But I am also a physicist and have to say that this article is hard to understand for specialists from other sciences, such as physics and engineering, who also use OUPs. (We do not have to try to make this article accessible to just everybody, but we really should include the other specialists' viewpoints.)

I have added a section on application of OUP in physics. This should give a start. Improvements are most welcome. If someone could also write a section on OUPs in engineering and signal processing (see above) that would be great.

So that's the new section:

Application in physical sciences: The OUP is a prototype of a noisy relaxation process. Consider for example a Hookean spring with spring constant $$k$$ whose dynamics are highly overdamped with friction coefficient $$\gamma$$. In the presence of thermal fluctuations with temperature $$T$$, the length $$x(t)$$ of the spring will fluctuate stochastically around the spring rest length $$x_0$$; its stochastic dynamics are described by an OUP with $$\theta=k/\gamma$$, $$\mu=x_0$$, $$\sigma=\sqrt{2k_B T/\gamma}$$. In the physical sciences, the stochastic differential equation of an OUP is rewritten as a Langevin equation
 * $$ \gamma\dot{x} = -k( x - x_0 ) + \xi $$

where $$\xi(t)$$ is Gaussian white noise with $$\langle\xi(t_1)\xi(t_2)\rangle = 2 \gamma k_B T \,\delta(t_1-t_2) $$.

Benjamin.friedrich (talk) 20:14, 25 April 2010 (UTC)
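As a numerical sanity check on the mapping above (all parameter values illustrative): the stationary variance $$\sigma^2/(2\theta)$$ of the OUP works out to $$k_B T/k$$, which is the equipartition result for a Hookean spring:

```python
import numpy as np

# Overdamped spring in a thermal bath, using the mapping above:
# theta = k/gamma, mu = x0, sigma = sqrt(2*kB_T/gamma).  The stationary
# variance sigma^2/(2*theta) then equals kB_T/k, i.e. equipartition.
# All parameter values below are illustrative.
k, gamma, kB_T, x0 = 2.0, 1.0, 0.5, 1.0
theta, sigma = k / gamma, np.sqrt(2 * kB_T / gamma)
dt, n = 0.005, 400_000

rng = np.random.default_rng(2)
noise = sigma * np.sqrt(dt) * rng.normal(size=n)
x = np.empty(n)
x[0] = x0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * (x[i] - x0) * dt + noise[i]

# Variance over the second half of the run, after transients have decayed;
# should sit near kB_T / k = 0.25.
print(x[n // 2:].var())
```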

STATIONARY 1
This is supposed to be a stationary process. At the very least, that should mean that the probability distributions of $$x_t$$ and $$x_s$$ should be the same. I had imagined that the covariance between values of the process at different times would depend on those times only through how far apart those times were, i.e. it would depend on s and t only through $$|s - t|$$. But the article says
 * $$ \operatorname{cov}(x_s,x_t) = \frac{\sigma^2}{2\theta}\left( e^{-\theta(t-s)} - e^{-\theta(t+s)} \right) $$

for s < t, and I am suspicious. Should I be?

(OK, next I'll try working through the details myself and maybe figure out what I'm missing.) Michael Hardy (talk) 00:46, 26 April 2010 (UTC)


 * The confusion arises because, in this part of the article, the process is not being treated as stationary. The workings-out in this section assume that the process starts from a value x0 at time zero. It would be good if something were done to be clearer about this. 193.62.153.194 (talk) 11:25, 28 April 2010 (UTC)


 * I see your point. The non-stationary part decays exponentially. Maybe there's something to be said for thinking about that, but the covariance formula for the stationary process has an appealing simplicity that helps one understand and remember what this process is about. Michael Hardy (talk) 01:48, 30 April 2010 (UTC)
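For reference, the formula quoted above splits cleanly into a stationary part and a transient part that dies off for large s and t (process started from a deterministic $$x_0$$ at time 0, with s < t):

```latex
\operatorname{cov}(x_s, x_t)
  = \frac{\sigma^2}{2\theta}\left( e^{-\theta(t-s)} - e^{-\theta(t+s)} \right)
  = \underbrace{\frac{\sigma^2}{2\theta}\, e^{-\theta|t-s|}}_{\text{stationary part}}
  \;-\; \underbrace{\frac{\sigma^2}{2\theta}\, e^{-\theta(t+s)}}_{\text{transient part}\;\to\;0}
```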


 * This issue is very confusing. The second sentence says the process is stationary, but the definition given is of a non-stationary process. I suggest "stationary" -> "asymptotically stationary" to resolve this.  Any thoughts? 128.243.253.117 (talk) 17:35, 26 July 2011 (UTC)


 * I suggest "Not stationary" as a reasonable approximation to the truth. — Preceding unsigned comment added by David in oregon (talk • contribs) 05:28, 25 April 2012 (UTC)


 * I wouldn't be so hasty, it appears from googling that it can be stationary or non-stationary. Probably better to expand on this. IRWolfie- (talk) 08:07, 25 April 2012 (UTC)

STATIONARY 2
The presentation in the Wikipedia article is very strange. A stationary process MUST have a mean value that is a constant, but the one given there is a function of time. There is no allowance for hand-waving that says, "Oh, the time-varying part decays away."

Furthermore, the autocorrelation function MUST depend only on the time difference (it is shift-invariant) and the one here and now is a function of time. THIS IS ABSOLUTELY FORBIDDEN. Electrical engineers call this the "autocorrelation function", so ignore any business about "autocovariance".

For a stochastic process that is Markov, Gaussian, and stationary, the autocorrelation function MUST work out to be $$K e^{-p|t - s|}$$, where K and p are given constants, and t and s are two instants in time. This is the only way that it can work out.

Furthermore, the power spectral density should exist in this case and it MUST not be a function of time. That power spectral density is found via the Wiener–Khinchin theorem by taking the Fourier transform of the autocorrelation function.

Some mathematicians, etc., might not give a hoot about the power spectral density, but many engineers, physicists, and chemists do. Besides electrical engineers, many chemical engineers, mechanical engineers, and nuclear engineers care very much about spectral densities. Note that I didn't write "most", but "many". Many physicists do, too. 98.67.108.12 (talk) 01:32, 26 August 2012 (UTC)
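Indeed, the Fourier transform of $$K e^{-p|\tau|}$$ is the Lorentzian $$2Kp/(p^2+\omega^2)$$. A quick numerical check, with illustrative K and p (for an OU process one would take $$K = \sigma^2/(2\theta)$$ and $$p = \theta$$):

```python
import numpy as np

# Wiener-Khinchin check: the Fourier transform of the autocorrelation
# R(tau) = K*exp(-p*|tau|) should be the Lorentzian 2*K*p/(p**2 + omega**2).
# K and p are illustrative.
K, p = 0.045, 1.0

tau = np.linspace(-40.0, 40.0, 800_001)
dtau = tau[1] - tau[0]
R = K * np.exp(-p * np.abs(tau))

def psd(omega):
    # numerical cosine transform at a single frequency
    # (R is even, so the sine part vanishes)
    return np.sum(R * np.cos(omega * tau)) * dtau

for omega in (0.0, 1.0, 3.0):
    print(omega, psd(omega), 2 * K * p / (p**2 + omega**2))
```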


 * Other strange statements:

"Over time, the process tends to drift towards its long-term mean: such a process is called mean-reverting." Stationary stochastic processes do not do this. They are just as likely to drift away from their mean values as they are to drift toward them. In fact, if a stationary stochastic process ever reaches its mean value, it is just as likely to "keep right on going" and drift away from its mean value in the opposite direction. There are no "magic pieces of string" that tells them which way to go. It is just that in the long run, they tend to spend about as much time above their mean values as below them. On the other hand, nonstationary processes can be different. However, this article says that it is all about stationary processes. 98.67.108.12 (talk) 02:19, 26 August 2012 (UTC)

Alternative representation
I have undone the previous edit to the "Alternative representation" section. I believe the modification was incorrect; at the very least, it was inconsistent with the previous content, which had been up there for some time.

On the other hand I do admit the expression in question could be improved. KeithWM (talk) 12:28, 11 June 2012 (UTC)

Some people have difficulty in subdividing
Some people have difficulty in subdividing: Yes, they cannot divide things up like this: 1. The stationary Ornstein-Uhlenbeck process, and 2. Nonstationary Ornstein-Uhlenbeck processes.

These are what we need in this article. What we do not need is a bunch of statements about stationarity, then a bunch of equations about something that is OBVIOUSLY nonstationary. It is amazing that anyone would even attempt to do this. 98.67.108.12 (talk) 02:02, 26 August 2012 (UTC)


 * The process has stationary independent _increments_, just like Brownian motion and all other diffusions. That is to say, X(t+s) - X(s) has a distribution that is independent of s, and X(t_2) - X(t_1) is statistically independent of X(t_1) - X(t_0). Perhaps that is where the confusion originally arose. 108.28.189.95 (talk) 22:23, 6 February 2024 (UTC)

Alternative definitions
Gardiner seems to use a diffusion constant that differs by a factor of 2 from the text (section on the Fokker–Planck equation). I don't have Risken here. Can someone check that everything is fine? If different definitions really are in use, we should probably mention it. — Preceding unsigned comment added by 128.243.2.30 (talk) 13:10, 13 December 2022 (UTC)

Physical units
I would appreciate it if someone helped specify the physical units of the parameters and variables of the SDE. If I ask ChatGPT or the like, I get a response containing two answers to the question, and they contradict each other. Bodait (talk) 02:25, 21 February 2024 (UTC)


 * There aren't any. The interpretation depends on the setting. Genes in genetics, money in finance, etc. However, t is almost always taken to mean "time". 67.198.37.16 (talk) 22:04, 23 May 2024 (UTC)

The paragraph talking about the properties of the maximum likelihood estimator (MLE) in the "Numerical simulation" section feels out of place and incongruous. I see three problems: (1) properties of the maximum likelihood parameter estimator have nothing to do with numerical simulation; (2) the MLE is not stated, so stating the properties of said MLE feels unmotivated; (3) I read the reference (Aït-Sahalia 2002) and the stated MLE property does not appear to be in there (the closest similarity is equation 5.1 in the paper, which is different). 2601:644:0:1B80:0:0:0:E7F3 (talk) 04:17, 14 June 2024 (UTC)