Wikipedia:Reference desk/Archives/Mathematics/2021 February 10

= February 10 =

Expected values
On the page Expected value it is stated that if $$X$$ is a random variable defined on a probability space $$(\Omega,\Sigma,\operatorname{P})$$, then the expected value of $$X$$, denoted by $$\operatorname{E}[X]$$, is defined as the Lebesgue integral $$\operatorname{E}[X] = \int_\Omega X(\omega)\,d\operatorname{P}(\omega).$$ How does the definition for the case when $$X$$ is a random variable with a probability density function $$f(x)$$ (in which case the expected value is defined as $$\operatorname{E}[X] = \int_{\R} x f(x)\, dx$$) follow from the general definition? Thanks - Abdul Muhsy (talk) 11:00, 10 February 2021 (UTC)
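The two definitions in the question can be compared numerically. Below is a sketch of my own (not from the thread), taking $$X$$ exponentially distributed with rate 1 as an assumed concrete example, so that $$f(x) = e^{-x}$$ and the true mean is 1: the abstract integral over $$\Omega$$ is approximated by averaging sampled outcomes $$X(\omega)$$, and the density formula by a Riemann sum.

```python
import math
import random

# Assumed concrete example: X ~ Exponential(1), density f(x) = e^{-x} on [0, inf),
# so E[X] = 1. The abstract definition E[X] = integral of X over Omega dP is
# approximated by a Monte Carlo average of samples X(omega); the density formula
# E[X] = integral of x f(x) dx is approximated by a Riemann sum.

random.seed(0)
n = 200_000
monte_carlo = sum(random.expovariate(1.0) for _ in range(n)) / n

# Riemann sum of x * f(x) over [0, 50]; the tail beyond 50 is negligible.
dx = 0.001
density_integral = sum(
    x * math.exp(-x) * dx for x in (i * dx for i in range(int(50 / dx)))
)

print(round(monte_carlo, 2), round(density_integral, 2))
```

Both approximations converge to the same value, which is what the question asks to see proved in general.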


 * If a real-valued random variable has a probability density function $$f$$, it is the derivative of its cumulative distribution function $$F$$, so $$\int_S x\,dF(x) = \int_S x f(x)\,dx$$. The probability function $$\operatorname{P}$$ can then be equated with $$F$$. So $$\int_\R x\,d\operatorname{P} = \int_\R x\,dF = \int_\R xf(x)\,dx$$. --Lambiam 15:20, 10 February 2021 (UTC)
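The identity $$\int_S x\,dF = \int_S xf(x)\,dx$$ used in this reply can be checked numerically. A sketch of my own (not from the thread), again assuming the exponential distribution as a concrete case, where $$F(x) = 1 - e^{-x}$$ and $$f(x) = F'(x) = e^{-x}$$: a Riemann-Stieltjes sum against $$F$$ is compared with an ordinary Riemann sum against the density.

```python
import math

# Assumed concrete case: exponential distribution with F(x) = 1 - e^{-x}
# and density f(x) = F'(x) = e^{-x}. The Stieltjes sum
#   sum of x_i * (F(x_i + dx) - F(x_i))
# should agree with the ordinary Riemann sum
#   sum of x_i * f(x_i) * dx.

def F(x):
    return 1 - math.exp(-x)

def f(x):
    return math.exp(-x)

dx, upper = 0.001, 50.0
xs = [i * dx for i in range(int(upper / dx))]

stieltjes = sum(x * (F(x + dx) - F(x)) for x in xs)
riemann = sum(x * f(x) * dx for x in xs)

print(abs(stieltjes - riemann))
```

The two sums differ only by the discretization error, illustrating that integrating against $$dF$$ and against $$f(x)\,dx$$ gives the same result when $$f = F'$$.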


 * Can you please explain the first equality in your last equation a bit more? Thanks Abdul Muhsy (talk) 18:03, 10 February 2021 (UTC)


 * If $$\operatorname{P}=F$$ (see the sentence immediately before the equation), their differentials are also the same. --Lambiam 22:23, 10 February 2021 (UTC)


 * But I don't understand why $$\operatorname{P}=F$$, given that they have different domains (the sigma-field and the real numbers respectively), and also why $$\operatorname{E}[X] = \int_\Omega X\,d\operatorname{P}=\int_\R x\,d\operatorname{P}.$$ How does the integral on the probability space turn into an integral on the real line? Is it by first considering the pushforward measure, observing that the expectations of $$X$$ and of the identity function are equal, and then applying the Radon-Nikodym theorem to the pushforward measure and the Lebesgue measure? Which book contains a detailed proof? Abdul Muhsy (talk) 00:46, 11 February 2021 (UTC)
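The pushforward argument sketched in this question is indeed the standard route in measure-theoretic probability texts; written out, the two steps are:

```latex
% Let P_X = P \circ X^{-1} be the pushforward (distribution) of X on
% (\R, \mathcal{B}). The change-of-variables theorem for pushforward
% measures gives
\operatorname{E}[X]
  = \int_\Omega X(\omega)\,d\operatorname{P}(\omega)
  = \int_{\R} x\,d\operatorname{P}_X(x).
% If P_X is absolutely continuous with respect to Lebesgue measure
% \lambda, the Radon-Nikodym theorem yields a density
% f = d\operatorname{P}_X / d\lambda, and then
\int_{\R} x\,d\operatorname{P}_X(x) = \int_{\R} x f(x)\,dx.
```

The first equality is the general definition; the second is the change of variables for pushforwards applied to the identity function on $$\R$$; the third introduces the density via Radon-Nikodym, which is exactly the $$f$$ in the question.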


 * It is nothing deep. The event space $$\mathcal{F}$$ can be taken to be the Borel $$\sigma$$-algebra; then its distribution is a measure on $$\mathcal{F}$$. Given a probability distribution $$F,$$ the $$F$$-measure of an interval $$I=[x_0,x_1]$$ is $$F(I) = F(x_1)-F(x_0),$$ where the endpoints may be infinite. For an event $$\mathcal{I}$$ represented as a set of disjoint intervals, $$\operatorname{P}(\mathcal{I}) = \sum_{I\in\mathcal{I}}F(I).$$ We can also go the other way: $$F(x) = \operatorname{P}([-\infty,x]),$$ which shows that, as far as probability distributions on $$\R$$ are concerned, there is a one-to-one correspondence between distributions and probability functions. The lower-case variable $$x$$ corresponds to a possible outcome of $$X$$. If it is all (notationally) a bit confusing, this is because the theory was originally developed independently (and not always in the most general way) for discrete events and for real-valued random variables; the concept of probability space was developed afterwards to create a uniform framework capable of capturing these and more.  --Lambiam 10:39, 11 February 2021 (UTC)
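The correspondence described in this reply, that the $$F$$-measure of an interval $$[x_0,x_1]$$ is $$F(x_1)-F(x_0)$$ and agrees with the integral of the density over that interval, can be illustrated numerically. A sketch of my own (not from the thread), assuming the standard normal distribution as a concrete example, since its CDF is expressible through the error function:

```python
import math

# Assumed concrete example: the standard normal distribution. The F-measure
# of an interval [x0, x1] is F(x1) - F(x0); it should equal the integral of
# the density f over [x0, x1].

def F(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

x0, x1 = -1.0, 2.0
measure = F(x1) - F(x0)

# Midpoint-rule approximation of the integral of f over [x0, x1].
dx = 1e-4
integral = sum(f(x0 + (i + 0.5) * dx) for i in range(int((x1 - x0) / dx))) * dx

print(round(measure, 4), round(integral, 4))
```

The two numbers agree to within the discretization error, which is the interval case of the correspondence between the probability function $$\operatorname{P}$$ and the distribution $$F$$.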