Talk:Continuous-time Markov chain

Terminology
Undo the name change, see this page: http://www.encyclopediaofmath.org/index.php/Markov_process — Preceding unsigned comment added by 213.103.216.40 (talk) 06:07, 6 April 2013 (UTC)

Why should the state space be discrete?? Sodin 01:50, 4 March 2007 (UTC)


 * It shouldn't. I've changed it.  I think some people may reserve the word "chain" for processes with discrete state spaces and this article may at some point have been moved from "continuous-time Markov chain".  (Check the edit history, maybe?) Michael Hardy 03:25, 4 March 2007 (UTC)


 * This is incorrect. Markov chains can have a non-discrete state space. However, the word "chain" is often reserved for discrete time! A search in Google Scholar for "continuous time markov chain" yields one third as many results as "continuous time markov process". The article should definitely be renamed! The name "chain" does not make sense for something that moves in continuous time on a continuous space. Calling a Poisson process a chain would be fine, but a Brownian motion is not a chain of events.


 * The state space is still assumed to be discrete in the article. A Markov process on a discrete space is generally referred to as a Markov chain.  The more general definition should be given here.  --129.7.128.30 (talk) 02:01, 5 January 2010 (UTC)


 * This is something that needs to be sorted out properly. The Markov chain article is presently (except possibly in the lead) only about discrete-time and discrete-space. The present version of this article is only about continuous-time discrete-space. And the present version of Markov process only has definitions suitable for discrete-space; similarly for Markov property. One possibility is to rename the present version of this article to something like "continuous time Markov chain" or "Markov chain (continuous time)" and to expand Markov process to properly cover continuous-(multivariate)-space. Or Markov process could be a simple overview with pointers to Markov chain, "continuous time Markov chain" (renamed from here) and to an entirely new version of "Continuous-time Markov process". Melcombe (talk) 10:48, 5 January 2010 (UTC)

It seems to me that the terms "jump process" and "embedded Markov chain" relate to the same things; maybe the corresponding paragraphs should be merged? 140.78.107.99 13:48, 25 April 2007 (UTC)
 * Good catch. When I wrote the EMC section, I hadn't heard that referred to as a "jump process" before, but from a quick search of papers, they look like they're used at least somewhat interchangeably, so I merged the two sections. I'm not familiar with the recent literature or subtleties between these two terms, so if anyone knows of any subtle differences between EMC and jump process, please edit accordingly. I see that jump process refers to a finance term, but it doesn't sound like it's the same as what is discussed in this article. Halcyonhazard 17:01, 26 April 2007 (UTC)

Strange example: {Bull Market, Bear Market, Recession}
The example given in the article, describing a three-state space consisting of {Bull Market, Bear Market, Recession}, seems rather strange. In their ordinary meaning, the "Recession" state is not mutually exclusive with the other two. It is certainly possible to have a Recession during a Bull Market or Bear Market. It would be better to define the three-state space as {Bull Market, Bear Market, Stagnant Market} or {Bull Market, Bear Market, Mixed Market}. —BarrelProof (talk) 20:03, 30 May 2013 (UTC)
 * I would make that change myself, but it requires re-drawing the figure, and I don't have an easy way to do that. Incidentally, the figure also violates the MOS:NUM guideline that says "Numbers between −1 and +1 require a leading zero (0.02, not .02)". —BarrelProof (talk) 20:12, 30 May 2013 (UTC)
 * Thanks for these suggestions, I've now updated the image, but seem to be suffering some awkward font issues. I'm unsure why at the moment. Previously I haven't had problems. Gareth Jones (talk) 21:53, 30 May 2013 (UTC)
 * It looks good to me! Thanks. —BarrelProof (talk) 22:29, 30 May 2013 (UTC)
 * I just noticed that there's a similar example for the discrete-time case in the Markov chain article. —BarrelProof (talk) 22:33, 30 May 2013 (UTC)
 * Well spotted, I've updated that too. Gareth Jones (talk) 23:10, 30 May 2013 (UTC)

Discrepancy correction
In the section about Embedded Markov chains, there seems to be a discrepancy between the definition given in terms of $$s_{ij}$$ and $$S = - D_Q^{-1}Q$$. In the first definition, the diagonal values of S are given as zero, but the second definition will have them as 1. I believe the first definition is correct, so we want instead that $$S = - D_Q^{-1}Q - I$$, which also makes more sense in the following lines when we try to find the kernel of $$(S-I)$$ for the steady states of the discrete time Markov process. If S is the transition probability matrix of the EMC, we should be trying to find the stationary states of $$S$$, not $$S-I$$. I have gone ahead and made this change. 76.120.32.59 (talk) 21:02, 3 June 2013 (UTC)Christopher

Note: I have not checked to see if this change affects the definition of \pi on the following line. This should be verified. 76.120.32.59 (talk) 21:04, 3 June 2013 (UTC)Christopher


 * You're right that zeroes on the diagonal are correct, but you need $$S = I - \left( \operatorname{diag}(Q) \right)^{-1} Q$$ to get them. I'll change this now. Gareth Jones (talk) 11:59, 4 June 2013 (UTC)
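For anyone following along, the corrected formula is easy to verify numerically. A minimal sketch (the generator matrix Q below is made up purely for illustration; its rows sum to zero with a negative diagonal, as any generator's must):

```python
import numpy as np

# Hypothetical generator matrix Q: rows sum to zero, diagonal negative.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])

D_inv = np.diag(1.0 / np.diag(Q))   # inverse of diag(Q)
S = np.eye(3) - D_inv @ Q           # jump-chain (EMC) transition matrix

print(np.diag(S))      # diagonal entries are zero, as the first definition requires
print(S.sum(axis=1))   # each row sums to 1, so S is a stochastic matrix
```

The off-diagonal entries come out as $$q_{ij}/(-q_{ii})$$, e.g. S[0, 1] = 2/3 here, which matches the elementwise definition of $$s_{ij}$$.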

History
Can anyone shed light on the history of CTMCs? Richard Weber's Markov Chain course notes contain a section titled "Historical notes" (p.52) where a quote from Youschkevitch suggests Markov's own work was limited to the discrete case ("For him the only real examples of the chains were literary texts, where the two states denoted the vowels and consonants"), but I haven't made any further progress. Gareth Jones (talk) 11:06, 10 June 2013 (UTC)

Formula for P(t) seems incorrect
It is written that


 * $$P(t) = e^{tQ}$$

has elements of the form


 * $$p_{ij}(t) = \delta_{ij} + \sum_{k=1}^\infty \frac{t^k q^k_{ij}}{k!}$$

which equals


 * $$p_{ij}(t) = \delta_{ij} + e^{tq_{ij}} - 1.$$

By definition of the matrix exponential, however:


 * $$P(t) = \sum_{k=0}^\infty \frac{t^k Q^k}{k!}.$$

And in general there is no simple elementwise expression, which is precisely why these matrix exponentials are hard to compute. So it seems wrong... bungalo (talk) 14:10, 21 July 2013 (UTC)

Yep, you are right, it is incorrect. I have taken it out of the article. — Preceding unsigned comment added by 75.102.81.97 (talk) 19:02, 28 December 2013 (UTC)

Rate instead of holding time.
> Each non-diagonal value can be computed as the product of the original state's holding time with the probability from the jump chain of moving to the given state.

This looks wrong to me. In the example, 3 is the product of 6 and 1/2, with 6 being the rate rather than the holding time (which is a random variable with a mean of 1/rate). 92.110.219.57 (talk) 19:59, 6 September 2022 (UTC)


 * You are correct. It should also be the expected holding time, rather than the holding time (which is a random variable). I've just fixed this. Malparti (talk) 12:20, 8 September 2022 (UTC)
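To make the corrected relationship concrete, here is a small sketch (the exit rates and jump-chain matrix below are made up, except that state 0 has rate 6 and jump probability 1/2, matching the 3 = 6 × 1/2 example discussed above):

```python
import numpy as np

rates = np.array([6.0, 2.0, 4.0])     # total exit rate of each state (= 1 / expected holding time)
S = np.array([[0.0,  0.5,  0.5],      # jump-chain (EMC) transition probabilities,
              [0.5,  0.0,  0.5],      # zero on the diagonal
              [0.25, 0.75, 0.0]])

Q = rates[:, None] * S                # off-diagonal entries: q_ij = rate_i * s_ij
np.fill_diagonal(Q, -rates)           # diagonal chosen so each row sums to zero

print(Q[0, 1])        # 3.0 = 6 * 1/2, as in the example
print(Q.sum(axis=1))  # rows sum to zero, as a generator must
```

Note it is the rate (equivalently, the reciprocal of the expected holding time) that multiplies the jump-chain probability, not the holding time itself.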