Talk:Algorithmically random sequence

Randomness vs. Representability?
I wonder a bit why we talk here about randomness when actually we are talking about "representation". Specifically, since it is mentioned that randomness is usually taken to mean "incompressible", the actual question at hand seems to be: "Can I represent a definite/concrete sequence of letters in an alphabet in terms of an algorithm?". But who claims that "algorithms" are the only way to represent such sequences? Maybe the only *known* way currently. (Who knows?)

And a clear separation from "statistical randomness" is also not given ... which imho is also not possible if that randomness is not defined thoroughly either. Which it seems it isn't.

It is interesting that this notion of "randomness" still has not died. It seems to be needed only to express some sort of uncertainty or ignorance -- that is, a lack of information. — Preceding unsigned comment added by 2001:9E8:32A3:6A00:10DB:6072:461D:3DAE (talk) 11:49, 11 May 2022 (UTC)

RAND is Sigma^0_2
It is correct to say that RAND is Sigma^0_2, because there is a Sigma^0_2 formula that defines the class of 1-random real numbers. It is not true that every random real is itself Sigma^0_2; there are 1-random reals of arbitrarily high Turing degree. CMummert 23:41, 23 June 2006 (UTC)
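To spell out one standard way to see this (a sketch via the Levin–Schnorr characterization, not part of the comment above): a real $$X$$ is 1-random iff

$$\exists c\, \forall n\; K(X \upharpoonright n) \geq n - c,$$

where $$K$$ is prefix-free Kolmogorov complexity. Since $$K$$ is approximable from above, the condition $$K(\sigma) \geq m$$ is $$\Pi^0_1$$ (it holds iff every stage-$$s$$ approximation satisfies $$K_s(\sigma) \geq m$$), so the displayed formula has the form $$\exists c\, \Pi^0_1$$ and is therefore $$\Sigma^0_2$$.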

Edit today
Here are some comments on the changes I made today CMummert 14:00, 24 June 2006 (UTC)
 * It is not NPOV to describe some things as very technical and other things as very easy. My impressions of easy and hard are in fact the opposite of the opinions the article used to have.  In any event, there is no reason to lecture the reader about how hard some idea is.  I removed all such comments.
 * I rewrote the Martin-Löf definition in topological language, which should be clearer.
 * I made other small changes.

Lebesgue measure
I don't think it's proper to call the measure of a cylinder Lebesgue measure. A cylinder isn't a subset of R^n. It's not internally consistent even with Wikipedia's current definition of Lebesgue measure.


 * I rephrased it to say fair coin measure. Everyone can edit, so you can make changes like that yourself to save time. CMummert 14:58, 27 July 2006 (UTC)


 * I think calling it Lebesgue measure is pretty standard. People often don't distinguish between a subset of the natural numbers, an infinite binary sequence, and the real number with said sequence as its binary expansion. So Lebesgue measure on Cantor space is really Lebesgue measure on the interval $$[0,1]$$, under suitable blurring of definitions. There is a slight issue with the fact that .1000...=.0111..., but such things have measure 0 anyway.


 * In any case, I'm using "Lebesgue measure" elsewhere, but can stop if this is felt to be an important distinction. Althai (talk) 22:54, 8 June 2007 (UTC)
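To make the identification in the comments above concrete (a minimal sketch; the function name is mine, not from the article): a finite binary string $$\sigma$$ determines a cylinder (the set of infinite sequences extending $$\sigma$$), and under the usual identification that cylinder corresponds to a subinterval of $$[0,1]$$ whose Lebesgue measure (its length) is the fair-coin measure $$2^{-|\sigma|}$$.

```python
from fractions import Fraction

def cylinder_interval(sigma: str) -> tuple[Fraction, Fraction]:
    """Map the cylinder [sigma] (all infinite binary extensions of the
    finite string sigma) to the corresponding subinterval of [0, 1]."""
    left = sum((Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(sigma)),
               Fraction(0))
    return left, left + Fraction(1, 2 ** len(sigma))

# The fair-coin measure of [sigma] equals the length of that interval.
lo, hi = cylinder_interval("101")
assert hi - lo == Fraction(1, 8)  # 2^(-3)
```

Exact rational arithmetic avoids the floating-point blurring at dyadic rationals (the .1000... = .0111... issue mentioned above).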

Complexity is prefix-free
I noticed that the article made no mention that the complexity used in the Kolmogorov complexity definition of randomness is prefix-free complexity, not plain complexity. This is important, since if you used plain complexity in the definition, there would be no random sequences at all, because you can use a trick to get some extra compression by using the length of your input program as additional information. On the other hand, I worry that I may have made the article harder to understand for a non-mathematician reader. If you are reading this and have any thoughts, feel free to change or revert if you think the added precision is not worth the tradeoff. Althai 00:37, 30 April 2007 (UTC)
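For concreteness, the two facts being contrasted can be stated as follows (standard results, see e.g. Downey and Hirschfeldt's "Algorithmic Randomness and Complexity"; they are not spelled out in the comment above). With prefix-free complexity $$K$$, a sequence $$x$$ is Martin-Löf random iff

$$\exists c\, \forall n\; K(x \upharpoonright n) \geq n - c.$$

With plain complexity $$C$$, however, every sequence $$x$$ satisfies

$$C(x \upharpoonright n) \leq n - \log_2 n + O(1) \quad \text{for infinitely many } n,$$

because a program can recover information about the input from its own length. So the analogous definition with $$C$$ in place of $$K$$ would be satisfied by no sequence at all.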

Please explain
Could we include an explanation of what w0 and w1 are in the condition for martingale functions? Kronocide (talk) 15:26, 8 September 2009 (UTC)
 * Better? skeptical scientist (talk) 19:59, 8 September 2009 (UTC)
 * Much, thanks! Kronocide (talk) 16:59, 10 September 2009 (UTC)
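To make the fairness condition being asked about concrete (a toy example of my own, not taken from the article): $$w0$$ and $$w1$$ denote the string $$w$$ extended by one more bit, and a martingale must satisfy $$d(w) = (d(w0) + d(w1))/2$$, i.e. the bets are fair.

```python
from fractions import Fraction

def d(w: str) -> Fraction:
    """A toy martingale: start with capital 1 and always wager half the
    current capital on the next bit being 1 (illustrative only)."""
    capital = Fraction(1)
    for bit in w:
        stake = capital / 2                 # amount wagered on "1"
        capital = capital + stake if bit == "1" else capital - stake
    return capital

# Fairness condition: d(w) = (d(w0) + d(w1)) / 2, where w0 and w1
# are w extended by the bit 0 or 1 respectively.
for w in ("", "0", "10", "110"):
    assert d(w) == (d(w + "0") + d(w + "1")) / 2
```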

About universality of martingale
There is a universal constructive martingale d. This martingale is universal in the sense that, given any constructive martingale d, there is some constant λ>0 so that for any string σ, $$\mathbf{d}(\sigma)\geq \lambda d(\sigma)$$

This is the definition of an optimal martingale, and no c.e. martingale is optimal; see "Algorithmic randomness and complexity". —Preceding unsigned comment added by コドボル (talk • contribs) 09:23, 26 May 2010 (UTC)

Betting
Martin-Löf randomness has since been shown to admit...random sequences should be incompressible, they should pass statistical tests for randomness, and it should be difficult to make money betting on them.

The nature of the randomness has no relevance to how easy it is to make money betting on it. This statement needs to be clearer to be true. I am not going to edit it, as it is 15 years since I did any algorithmic computer science, but these days, doing finance, I know that statement is untrue.

It is difficult to make money betting on the precise value of a sequence, but a sequence can fall within bounds, bounds that can be discovered and exploited in the gambling sense, while the values of the sequence remain random from the point of view as I understand it (Chaitin: the sequence is incompressible).

Gambling strategies are every bit as evolved, sophisticated and complex as AIT these days. So the language needs to be moderated. It may indeed be very easy to make money by betting on a random sequence.

Worik (talk) 20:46, 2 November 2010 (UTC)
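One way to see the formal sense in which "hard to make money" is meant (a sketch of my own; the function names are assumptions, not from the article): against unbiased bits, any betting strategy satisfying the martingale fairness condition has constant expected capital, so it can succeed only on a measure-zero set of sequences. A brute-force check for a simple fixed-fraction strategy:

```python
from fractions import Fraction
from itertools import product

def run_strategy(bits: str, bet_fraction: Fraction) -> Fraction:
    """Final capital after repeatedly wagering a fixed fraction of the
    current capital on 'next bit is 1' (a toy strategy), starting from 1."""
    capital = Fraction(1)
    for b in bits:
        stake = capital * bet_fraction
        capital += stake if b == "1" else -stake
    return capital

# Averaged over all equally likely length-n bit sequences, the final
# capital equals the starting capital: betting on fair bits has zero
# expected gain, whatever fraction is wagered.
n = 6
for frac in (Fraction(1, 4), Fraction(1, 2), Fraction(9, 10)):
    total = sum(run_strategy("".join(s), frac)
                for s in product("01", repeat=n))
    assert total / 2 ** n == 1
```

This does not contradict Worik's point: a sequence can be incompressible yet lie within exploitable bounds only if the bits are not actually fair, which is exactly the distinction the article's wording should make explicit.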

Introduction -- bases other than 2
The introduction used to say

"The definition can not be applied equally well to sequences on any finite set of characters, but naively applied in practice."

I thought this was unclear, suggesting there was something special about base 2, as opposed, say, to base 3 or 10. I replaced it by the following, which reflects the straightforward generalizability of algorithmic randomness to sequences on any finite alphabet. This parallels the theory of statistical randomness, where the notion of a fair coin can be generalized in a straightforward way to a fair n-sided die.

"The notion can be applied analogously to sequences on any finite alphabet (e.g. decimal digits)."CharlesHBennett (talk) 19:33, 6 July 2014 (UTC)