User:Cpiral/sandbox

> /Search box/ > /A/ > /B/ > /C/ > /D/ > /E/ > /F/ > /G/

Number of articles:

Number of pages:

Template:Linksto/doc

Here is the search:

Mods of that: (none of these work either)

(removed regexp)

(+ lowercased)

(+ add https)

(- section)

(- subpage)

(- https)

Regex part: (none of these work either)

(inside main EL)

(inside page#section of EL)

(NOT FOUND! by association? Get some highlighter light on the subject)

(Get some highlighter light on the subject. NOT FOUND!)

Regex that avoids words found in an EL

FOUND when it had no URL

This proposal includes moving the information in Stochastic representations to its own, more descriptive and inclusive, section under Properties.

But to succeed, such a proposal needs a clearly presented, well-thought-out plan for the entire layout. The article is a GA and has been for many years, and the sectioning, titling, inaccessibilities, and redundancies are largely related to representations; addressing them all at once is a big task. (Making two references to the list article Representations of e confirms my suspicion, and certainly the word "representations" in the title violates MOS:HEAD.) I will need to clarify to myself the philosophical reasons for the layout, and this will mean differentiating between property, characterization, application, representation, and so on. See Scientific peer review/E (mathematical constant).

I advocate promoting the stochastic nature of e itself. Stochastic means based on the theory of probability. But I think that because e is "everywhere", that randomness deserves a dance on the page, in a style consistent with casinos, banks, Google hiring practices, and many of the other accepted forms therein. Besides, this is a math article; why not put in a few links to other math articles concerning probability while we expose the stochastic nature of e in a meaningful way? Of course, this will require verbiage.

Thus I believe a section title of the article on e should have the word "stochastic" in it, though not as a subsection of mere representations of e. There are three problems with its location in the fourth section, Representations. 1) Arguably, it need not be sectioned there; its contents are just #5, right after representation #4. 2) It teases the intellect that sees a too-small section void of anything stochastic in it, anything about randomness and probability. 3) It distracts from the content and replaces focus with the urge to edit instead of read, because an elemental representation should be little more than a formula. Yet the formula uses not just standard symbols but esoteric ones concerning a vast and important discipline. So it needs much more than a formula; it needs its variables explained. There is one solution to these shared displeasures: rename it Stochastic and move it to the third section, Properties. As it stands it is a mere extension of a "class of worthy representations". It should stand tall, as a property.

The article on e, because it is a good article, deserves:
 * a lucid exposition of the stochastic property, with just as much mention of probability and randomness as mention of banking, casinos and Google's hiring practices elsewhere, if not more.
 * a math analysis that also offers notable revelations concerning the meaning of e
 * a set of wikilinks that tightens the wiki. (Prod those other articles.)

Cons of the current version
I can see how the current version might seem clear to a new reader who is an advanced mathematician skimming carelessly. I also sense a cognitive bias toward conserving the status quo, a pseudo-maintenance mode. Nevertheless, the cons show that the sparse contents fail at both the mere Representations role and the Properties role this section plays.
 * over-generalizes stochastic process: V and X are random fields (generalizations without the concept of time), but stochastic is inherently time-oriented
 * Xn is an ambiguous symbol
 * article-class accessibility is minimal, i.e. readability is low
 * needs to generalize the "uniformity" principle: discrete intervals; non-unit-sized intervals (the "pick a number, any number" idiom)

/A /B

Pros of the proposed version #3.94

 * fixes the cons
 * sized the same as other sections
 * peripheral content is comparable
 * more accessible
 * new wikilinks! random variate, random field, event space, probability space, vector space

Exegesis of documented recommendations:
 * The distinction between n and N maintains the temporal aspect of "stochastic": clearly, temporally differentiated pdfs (domains and ranges).
 * The consistent use of n and N precludes the temptations to convert N to a random variable (writing "N =" rather than "V ="), and to use n in the summation formula rather than N.
 * A general and simple introductory paragraph, with no formality.
 * A properly and notably sourced probabilistic approach ($$\textstyle \sum_{N=2}^\infty$$) that explains the same stochastic process by a different route and also arrives at e (but that is likewise not worthy of "Representations" content).

The number $e$ is a natural constant in many deterministic (above) and non-deterministic processes, such as the stopping rule. This section concerns the stochastic nature of $e$, where $e$ is the expected number of terms in a sequence of partial sums generated in time by $n$ random selections from some finite, zero-based interval until an Nth term exceeds the size of the interval. A list of 100 such sequences will then often have an average of about 2.7 terms. This holds for a finite, zero-based, continuous or discrete interval, as explained in the analysis below, where we end up applying the concept of an expected value to a random field composed of samples of N. The first random field we set up is the standard U(0,1), so the random variate (the domain) of some nth term is [0,1] and the Nth term is (1,2]. Let random variable Xn obtain these, keeping in mind for the next part of the stochastic process that the value (the range) of the samples, N, in the population will vary, according to the law of large numbers, over $[2,\infty)$.
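The process just described can be sketched as a quick Monte Carlo check (a minimal illustration assuming draws from the standard U(0,1); the function name, seed, and trial count are arbitrary):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def terms_until_exceed(limit=1.0):
    """Count uniform draws from [0, 1) until their running sum exceeds limit."""
    total, n = 0.0, 0
    while total <= limit:
        total += random.random()
        n += 1
    return n

# The sample mean of N over many sequences should land near e = 2.71828...
trials = [terms_until_exceed() for _ in range(100_000)]
print(sum(trials) / len(trials))
```

Averaging over only 100 sequences, as in the text, typically gives about 2.7; the larger run above narrows the estimate toward e.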

The next random field we set up is the sample space, V, such that
 * $$V = {\min \left \{ n \mid X_1+X_2+\cdots+X_n > 1 \right \} },$$

V is the random variable whose random field contains the entire population of samples of size N.

Now that these fields are set up, the final step in a process that will ascertain e is to take the expected value, E, of V, which will be exactly $e$:
 * $$E(V) = e.$$

A more visual, space-oriented approach transforms each sequence of partial sums, $$X_1+X_2+\cdots+X_n$$, into a vector sum, and thus transforms the random field of the standard uniform distribution, U(0,1), into various vector spaces. Each vector space represents every possible N-term sequence, one for when the Nth term was two, three, four, and so on, and the approach then finds that $e$ is the weighted sum of the probabilities of these event spaces. It uses the geometry of a unit square, cube, and hypercubes, whose contained space is always size one, to transform the stochastic formality above into an analytic formality
 * $$e = \sum_{N=2}^\infty { N (N-1)/(N!)} $$

by virtue of the fact that the probability density function of the probability space is, as usual, also size one.
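As a sanity check on the series (a sketch; the truncation point N = 20 is arbitrary), each term N(N-1)/N! equals 1/(N-2)!, so the partial sums converge to e very quickly:

```python
import math

# Partial sum of e = sum over N >= 2 of N(N-1)/N!; each term equals 1/(N-2)!.
total = 0.0
for N in range(2, 20):
    total += N * (N - 1) / math.factorial(N)

print(total, math.e)  # the two agree to near double precision by N = 20
```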

The number $e$ is a natural constant in many deterministic (above) and non-deterministic processes, such as the stopping rule. This section concerns the stochastic nature of $e$, where $e$ is the expected number of terms in a sequence of partial sums generated in time by $n$ random selections from some finite, zero-based interval until an Nth term exceeds the size of the interval. A list of 100 such sequences will then often have an average of about N = 2.7 terms. This holds for a finite, zero-based, continuous or discrete interval, as explained in the analysis below, where we start out by applying the concept of an expected value to some random fields we will set up for n and N. The first random field we set up is the standard U(0,1), where the random variates (the domains) of the nth terms are over [0,1], those of the (N-1)th term are over (0,1], and those of the Nth term over (1,2]. Let random variable Xn obtain these.

Note that if the nth term exactly equals the interval size, it is not guaranteed that the next random trial will be the last, because zero is part of the interval; but by the law of large numbers it is almost certain that the next random trial will be the last, n, which we call N, and so here N = n + 1; otherwise n and N are independent. The number of trials $n$ varies over $[1,\infty)$, and the range of the samples, N, in the population varies over $[2,\infty)$ according to the law of large numbers. n relates more to random values in the first part of the process, and N has more of a statistical nature in the second part, because each random variate of N is a sample in the population we must set up to find e by the temporal expectation.

The next random field we set up is the sample space, V, such that
 * $$V = {\min \left \{ n \mid X_1+X_2+\cdots+X_n > 1 \right \} },$$

V is the random variable whose random field contains the entire population of samples, N.

Now that these fields are set up, we make one temporal action to represent the stochastic; the final step in the process that will ascertain e is to take the expected value of V, which will be exactly $e$:
 * $$E(V) = e.$$

A more visual, space-oriented approach transforms each sequence of partial sums into a vector sum, and the standard uniform distribution into a vector space. It considers all N-term sequences where the Nth term was two, three, four, and so on, and then finds that $E(V) = e$ is the weighted sum of the probabilities of the event spaces. It uses the geometry of a unit square, cube, and hypercubes, whose contained space is always size one, to transform the stochastic formality above into an analytic formality
 * $$e = \sum_{n=2}^\infty { n (n-1)/(n!)} $$

by virtue of the fact that the probability density function of the probability space is also size one. The unit square represents any two-term sequence, the unit cube any three-term sequence, and so on. The event that an n-term sequence still totals 1 or less corresponds to the contained space of volume 1/n! "under" the corner adjacent to zero: size 1/2 for the lower triangle in the unit square, 1/6 for the pyramid at the zero corner of the unit cube, and so on. The probability that a total of 1 is exceeded within $n$ terms is the complementary event $$1-{1/(n!)}$$. The probability that a total of 1 is exceeded after $n$ terms but not before simplifies to $${(n-1)/(n!)}$$. The expected number of terms until a total of 1 is exceeded is therefore
 * $$e = \sum_{n=2}^\infty { n (n-1)/(n!)}$$

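The per-n probabilities $(n-1)/n!$ derived above can be checked against a re-simulation of the process (a sketch; the function name, seed, and trial count are illustrative):

```python
import math
import random

random.seed(1)

def terms_until_exceed():
    """Count uniform draws from [0, 1) until their running sum exceeds 1."""
    total, n = 0.0, 0
    while total <= 1.0:
        total += random.random()
        n += 1
    return n

# Empirical P(N = n) versus the derived (n-1)/n! for small n.
trials = [terms_until_exceed() for _ in range(200_000)]
for n in range(2, 6):
    empirical = trials.count(n) / len(trials)
    exact = (n - 1) / math.factorial(n)
    print(n, round(empirical, 4), round(exact, 4))
```

For n = 2 the exact value is 1/2, for n = 3 it is 1/3, matching the lower-triangle and pyramid volumes in the geometric argument.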
For example, using geometry, we convert any two-term sequence into a vector sum over the unit square, and any three-term sequence over the unit cube. The sample space for any two-term sequence is the square, and for any three-term sequence the cube. We use $[0,1]$ on the axes of these unit-sized shapes so that their spatial size (one) equals the sample space (which is one), and so that the sampled space becomes a random field. Any of the infinite possible sequences comprising exactly n trials that total 1 or less has an event (probability theory) probability $${1/(n!)}$$: 1/2 for the square, 1/6 for the cube, and so on. The probability that a total of 1 is exceeded within $n$ terms is the complementary event $$1-{1/(n!)}$$. The probability that a total of 1 is exceeded after $n$ terms but not before simplifies to $${(n-1)/(n!)}$$. The expected number of terms until a total of 1 is exceeded is therefore an exact probabilistic expression for $e$. Using calculus to transform the stochastic formalities above into an analytic formality
 * $$e = \sum_{n=2}^\infty { n (n-1)/(n!)}$$

derived from the unique stochastic process that would generate such trials.

A more visual approach uses geometry and then calculus to transform the stochastic formalities into an analytic formality. This approach considers all the two-term events, three-term events, and so on that exist in the infinite Vn distribution. The random field of each two-term event, each in its own probability space, is the lower triangle of the unit square; the sample space of each three-term event is the pyramid of sides 1 in the unit cube. The chance that some two-term or three-term event exceeds 1 sums to (1-1/2) + (1-1/6) = 1 1/3. Since the unit hypercube with sides 1 contains a zero-bound "unit hyperpyramid" of size 1/n!, the math simplifies to
 * $$e = \sum_{n=2}^\infty { n (n-1)/(n!)}$$.

Consider the terms of a sequence of partial sums generated by $n$ random selections from some finite, zero-based, continuous or discrete interval until an $N$th term exceeds the size of the interval. The expected value of N is $e$.

A more visual approach considers the random field of each N-term event, each in its own probability space, and then adds them. Here
 * $$e = \sum_{n=2}^\infty { n (n-1)/(n!)}$$.

The temporal version of this is e = E(N) where N is a random variable composed of random variables $X_1$, $X_2$, ..., drawn from the uniform distribution on [0, 1] such that
 * $$N = \min { \left \{ n \mid X_1+X_2+\cdots+X_n > 1 \right \} }.$$


Let $V$ be the least number $n$ such that the sum of the first $n$ samples exceeds 1:
 * $$V = \min { \left \{ n \mid X_1+X_2+\cdots+X_n > 1 \right \} }.$$

Then the expected value of $V$ is $e$. A more visual approach considers all the possible events where the $N$th term was two, three, four, and so on, and then finds that $e$ is the weighted sum of the respective probabilities of each.

If we convert any N-term sequence into a vector sum up to the $N$th term, each in its own probability space, and then add them, then for any n-term event the probability is $${1/(n!)}$$: 1/2 for the unit square, 1/6 for the unit cube, and so on. This holds because

Here, the sample space for any two-term sequence is the square, and for any three-term sequence the cube. We use $[0,1]$ on the axes of these unit-sized shapes so that their spatial size (one) equals the sample space (which is one), and so that the sampled space becomes a random field. Any of the infinite possible sequences comprising exactly n trials that total 1 or less has an event (probability theory) probability $${1/(n!)}$$: 1/2 for the square, 1/6 for the cube, and so on. The probability that a total of 1 is exceeded within $n$ terms is the complementary event $$1-{1/(n!)}$$. The probability that a total of 1 is exceeded after $n$ terms but not before simplifies to $${(n-1)/(n!)}$$. The expected number of terms until a total of 1 is exceeded is therefore the exact probabilistic expression for $e$
 * $$e = \sum_{n=2}^\infty { n (n-1)/(n!)}$$

derived from the unique stochastic process that would generate such trials.



In addition to analytical techniques and expressions involving $e$, there is a unique stochastic process that ascertains $e$.

Consider the terms of a sequence of partial sums generated by n random selections from the interval $[0,1]$ until an $N$th term exceeds 1. A population of 100 such samples will often have an average of about N = 2.7 terms. This computation holds for zero-based, continuous or discrete intervals. A large population from a large interval will have exactly $e$ terms on average. In other words, the mean number of trials needed for the sum of the trial values to exceed a uniform interval is $e$.
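The claim that the interval size does not matter can be spot-checked by rescaling (a sketch assuming continuous uniform draws; the sizes, seed, and trial counts are arbitrary):

```python
import random

random.seed(2)

def draws_to_exceed(size):
    """Draw uniform values from [0, size) until the running sum exceeds size."""
    total, n = 0.0, 0
    while total <= size:
        total += random.uniform(0.0, size)
        n += 1
    return n

# By scaling, the interval size cancels out: the mean stays near e.
means = []
for size in (1.0, 10.0, 1000.0):
    mean = sum(draws_to_exceed(size) for _ in range(50_000)) / 50_000
    means.append(mean)
    print(size, round(mean, 3))
```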


{{anchor|formal statement|canonical form}} More formally, if $n$-trial continuous random variables $X_1$, $X_2$, ..., $X_n$ from the standard uniform distribution form a sample of size $n$, limited such that
 * $$N = {\min \left \{ n \mid X_1+X_2+\cdots+X_n > 1 \right \} },$$

then the expected value of the discrete random variable $N$ is $e$.

{{reflist}}