Wikipedia:Reference desk/Archives/Mathematics/2015 March 16

= March 16 =

Subsets of the real numbers that are finite in every bounded interval
What's the term for a subset of the real numbers whose intersection with every bounded interval is finite (such as the integers, or any finite subset of the reals)? Being nowhere dense is a necessary condition for this, but clearly isn't sufficient, since [0,1] minus the dyadic rationals is an example from that article of a nowhere-dense set that isn't an example. (I don't know whether nowhere-density and unboundedness are sufficient in combination, but I suspect so.) Neon  Merlin  05:39, 16 March 2015 (UTC)
 * Discrete. --Trovatore (talk) 05:54, 16 March 2015 (UTC)
 * But $$\{ 1/n \; | \; n \in {\mathbb Z}^+ \}$$ is discrete but doesn't have a finite intersection with (say) the interval [0,1]. So discrete is not sufficient. Abecedare (talk) 07:46, 17 March 2015 (UTC)
 * OK, that's true. --Trovatore (talk) 18:17, 17 March 2015 (UTC)
 * It appears as if such a set is characterized by not having an ω-accumulation point (in ℝ). The linked article doesn't give a name for it. YohanN7 (talk) 08:33, 17 March 2015 (UTC)
 * Seems right.
 * Clearly necessary, since if S has an accumulation point, any bounded interval containing that point will have an infinite intersection with S.
 * And sufficient: consider an S that has no accumulation points, and assume there exists a bounded interval I s.t. $$S \cap I$$ is infinite. Then, by the Bolzano–Weierstrass theorem, the bounded infinite set $$S \cap I$$ has an accumulation point in ℝ $$\Rightarrow$$ S has an accumulation point in ℝ. Contradiction.
 * Abecedare (talk) 09:09, 17 March 2015 (UTC)
 * (You have to remove more than the dyadic rationals from [0,1] to obtain a nowhere dense set, see Nowhere dense set. YohanN7 (talk) 11:22, 16 March 2015 (UTC))
 * The set $$\{ 2^n \; | \; n \in \mathbb Z \}$$ is unbounded and nowhere dense but <s>not discrete (using Trovatore's term)</s> doesn't have a finite intersection with all bounded intervals. -- BenRG (talk) 07:07, 17 March 2015 (UTC)
 * All elements in that set are isolated points. It is thus a discrete set. YohanN7 (talk) 07:36, 17 March 2015 (UTC)
 * Sorry, fixed. -- BenRG (talk) 03:22, 18 March 2015 (UTC)
 * A set has the property described if and only if it has no limit points (by the Bolzano-Weierstrass theorem).  Sławomir Biały  (talk) 10:51, 17 March 2015 (UTC)
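The characterization reached above (finite intersection with every bounded interval, equivalently no accumulation point) can be poked at numerically. A minimal Python sketch, where `count_in_interval` is a hypothetical helper name of my own, contrasting the integers with the set $$\{ 1/n \; | \; n \in {\mathbb Z}^+ \}$$ from the thread:

```python
def count_in_interval(points, lo, hi):
    """Count how many of the given points fall inside [lo, hi]."""
    return sum(1 for x in points if lo <= x <= hi)

# The integers: only 0 and 1 land in [0, 1], no matter how many
# integers we enumerate -- the intersection stays finite.
integers = range(-1000, 1001)
assert count_in_interval(integers, 0, 1) == 2

# {1/n : n >= 1} accumulates at 0, so the count inside [0, 1]
# grows without bound as we take more elements of the set.
reciprocals = [1 / n for n in range(1, 1001)]
assert count_in_interval(reciprocals, 0, 1) == 1000
```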

Beta Function used in Beta distribution
Could anyone explain in layman's terms what the beta function really does, as said in Beta_function. I know that the gamma function is used to find the factorial of real numbers, but when it comes to the beta function I can't get what it really does. I am reading an article: Empirical Analysis, and it explains the Dirichlet distribution.

After looking for the prerequisites needed to study the Dirichlet distribution, as said in Prerequisites for Dirichlet distribution, I have reached Beta distribution in the tree. This is where the beta function comes into the beta distribution, and that's where I'm stuck. Could anyone help me? JUSTIN JOHNS (talk) 12:18, 16 March 2015 (UTC)
 * I'm not sure there's a meaningful way to answer this question. Some things have a simple, insightful explanation to go along with them. Others can't really be explained better than with a definition and some theorems; the intuition for such things comes from experience working with them. Expecting everything to be reducible to combinations of things you already understand leaves you with only a narrow set of concepts you can get exposed to. Hence the saying "in mathematics, you don't understand things, you just get used to them".
 * As for the beta function - it can be defined as either an integral or in terms of the gamma function, and the best thing to say about it is that, well, it's the normalizing factor of the beta distribution (a normalizing factor is what takes you from saying "the probability density is proportional to X" to an actual probability density function with a total probability mass of 1).
 * The beta distribution itself, of course, can be intuitively explained as the posterior distribution of the unknown probability underlying a process, after observing the outcomes of this process (assuming your prior is also beta). In other words, it's the conjugate prior family for Bernoulli trials. -- Meni Rosenfeld (talk) 12:32, 16 March 2015 (UTC)
 * Consider the expression
 * $$\binom n k p^k (1-p)^{n-k}$$
 * For fixed values of n and p it is the binomial distribution function of k.
 * For fixed values of n and k it is the likelihood function of p: an unnormalized beta distribution function. Bo Jacoby (talk) 15:19, 16 March 2015 (UTC).
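The dual reading of this one expression, as either a distribution over k or a likelihood in p, can be sketched in Python (the helper name `binom_term` is my own, not from the thread):

```python
from math import comb

def binom_term(n, k, p):
    """The expression C(n,k) p^k (1-p)^(n-k) discussed above."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 4

# Fixed n and p: a probability distribution over k (sums to 1).
p = 0.5
pmf = [binom_term(n, k, p) for k in range(n + 1)]
assert abs(sum(pmf) - 1) < 1e-12

# Fixed n and k: a likelihood function of p (it does NOT integrate
# to 1 without normalization -- the unnormalized beta distribution).
k = 3
likelihood = [binom_term(n, k, p) for p in (0.1, 0.5, 0.9)]
assert likelihood[2] > likelihood[0]  # p near k/n = 0.75 is more likely
```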

Thanks for your answer. Could you explain what you mean by "the probability density is proportional to X"? What really is X? Is it the random variable? If it's the random variable, would the probability density always be proportional to X? Did you mean the random variable X, or the probability associated with X? JUSTIN JOHNS (talk) 04:57, 17 March 2015 (UTC)
 * "X" here was just a placeholder. I wasn't sure if to say "proportional to X" or "proportional to this and that" or "proportional to ..." . In the case of the beta distribution, the probability density is proportional to $$p^{\alpha-1}(1-p)^{\beta-1}$$. In other distributions, the density is proportional to other things. You often define a distribution in terms of an expression the density is proportional to, and then you multiply by a normalizing factor equal to the inverse of the integral of this expression (so you end up with an expression with an integral of 1). -- Meni Rosenfeld (talk) 09:02, 17 March 2015 (UTC)

Okay, that's a good start on the beta distribution. Could you explain the binomial distribution in terms of its probability density and its normalizing factor? I know that the binomial distribution has two outcomes, success or failure. This might be a nice way to relate the binomial distribution to the beta distribution. JUSTIN JOHNS (talk) 11:20, 17 March 2015 (UTC)
 * The binomial distribution is
 * $$f(k)=\binom n k p^k (1-p)^{n-k}$$
 * where n (n≥0) is the number of trials, k (0≤k≤n) is the unknown number of successes, and p (0≤p≤1) is the success probability.
 * The unnormalized beta distribution is
 * $$f(p)=\binom n k p^k (1-p)^{n-k}$$
 * where n (n≥0) is the number of trials, k (0≤k≤n) is the number of successes, and p (0≤p≤1) is the unknown success probability. Bo Jacoby (talk) 11:49, 17 March 2015 (UTC).

That's really a good relation between the binomial and beta distributions. Why couldn't the number of successes be known in the binomial distribution? For example, you can see here: Binomial distribution, that k is known. Is there any normalization for the binomial distribution? JUSTIN JOHNS (talk) 03:47, 18 March 2015 (UTC)
 * If you know the number of successes k, then there is no need to compute the binomial distribution. The above formula for the binomial distribution is normalized because
 * $$\sum_{k=0}^n \binom n k p^k (1-p)^{n-k}=1$$
 * Bo Jacoby (talk) 08:12, 18 March 2015 (UTC).
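This normalization can be spot-checked in Python; it holds for every n and p by the binomial theorem, since (p + (1-p))^n = 1:

```python
from math import comb

# Verify sum_k C(n,k) p^k (1-p)^(n-k) = 1 for several sample values
# of n and p (the binomial theorem guarantees it for all of them).
for n in (1, 4, 10):
    for p in (0.0, 0.3, 0.5, 1.0):
        total = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(n + 1))
        assert abs(total - 1) < 1e-12
```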

I tried looking on the internet but couldn't find an example that uses the binomial distribution to find the number of successes. Could you show an example that uses the binomial distribution with k, the unknown number of successes? JUSTIN JOHNS (talk) 10:16, 18 March 2015 (UTC)
 * Look here: Binomial distribution. Bo Jacoby (talk) 18:40, 18 March 2015 (UTC).

Sorry to say, but the explanation given here: Binomial distribution, is too difficult to understand. Could you give me a simple example? JUSTIN JOHNS (talk) 03:59, 19 March 2015 (UTC)
 * Toss a coin four times. Consider the result HEAD to be a success (=1) and the result TAILS to be a failure (=0). Then the 16 possible outcomes are 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111. The possible number of successes are 0 1 1 2 1 2 2 3 1 2 2 3 2 3 3 4. (Count the 1s in each outcome). The distribution of the possible number of successes is 1 4 6 4 1. (Count the 0s, 1s, 2s, 3s, 4s). This distribution is the unnormalized binomial distribution
 * $$f(k)=\binom 4 k$$
 * for k=0 1 2 3 4. Bo Jacoby (talk) 04:20, 19 March 2015 (UTC).
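The enumeration in this example can be reproduced with a few lines of Python, tallying heads over all 16 outcomes:

```python
from itertools import product
from math import comb

# Enumerate all 2^4 outcomes of four coin tosses (0 = tails, 1 = heads)
# and tally the number of successes, reproducing the counts 1 4 6 4 1.
counts = [0] * 5
for outcome in product((0, 1), repeat=4):
    counts[sum(outcome)] += 1

assert counts == [1, 4, 6, 4, 1]
# These counts are exactly the binomial coefficients C(4, k).
assert counts == [comb(4, k) for k in range(5)]
```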

That's really a good example. Could you show how to normalize this distribution? JUSTIN JOHNS (talk) 05:29, 19 March 2015 (UTC) [this text formerly misplaced; restored upon archiving by scs]

Thanks for the effort you have taken in showing this example. Could you show how to normalize the distribution in the example you have given? JUSTIN JOHNS (talk) 05:32, 19 March 2015 (UTC)


 * Divide by 1+4+6+4+1=16. The general formula gives
 * $$\binom n k p^k (1-p)^{n-k}=\binom 4 k \left(\frac 1 2\right)^k \left(1-\frac 1 2\right)^{4-k}=\binom 4 k \frac 1 {16}$$
 * Bo Jacoby (talk) 09:24, 19 March 2015 (UTC).
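A quick Python check of this normalization step: dividing the counts 1 4 6 4 1 by 16 recovers the binomial pmf with n = 4, p = 1/2.

```python
from math import comb

# Normalize the counts 1 4 6 4 1 by their total, 1+4+6+4+1 = 16.
counts = [comb(4, k) for k in range(5)]
total = sum(counts)
assert total == 16

probs = [c / total for c in counts]
# The result matches C(4,k) (1/2)^k (1/2)^(4-k), and sums to 1.
assert probs == [comb(4, k) * 0.5**k * 0.5**(4 - k) for k in range(5)]
assert abs(sum(probs) - 1) < 1e-12
```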

You're great. Thanks for giving me such a deep explanation of the binomial distribution. Could you now tell me how the normalization takes place in the beta distribution? Does the 'unknown success probability p' in the beta distribution indicate that it is similar to the binomial distribution, but with an uncertain probability? Also, does the beta distribution look at the history of successes (k) to plot a probability distribution? JUSTIN JOHNS (talk) 10:28, 19 March 2015 (UTC)
 * An unnormalized beta distribution is (as said above)
 * $$f(p)=\binom n k p^k (1-p)^{n-k}$$
 * The normalization divisor is
 * $$\int_0^1 f(p)dp$$
 * So the credibility that the unknown success probability is located somewhere between the numbers p and p+dp is
 * $$\frac{f(p)dp}{\int_0^1 f(p)dp}=\frac{p^k(1-p)^{n-k}dp}{\int_0^1 p^k(1-p)^{n-k}dp}$$
 * Bo Jacoby (talk) 19:07, 19 March 2015 (UTC).
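The normalization integral above has the closed form ∫₀¹ p^k (1−p)^(n−k) dp = k!(n−k)!/(n+1)!, a standard beta-function identity. A Python sketch comparing it with a crude numerical integration (the helper names are my own):

```python
from math import factorial

def beta_norm_exact(n, k):
    """Closed form of the integral of p^k (1-p)^(n-k) over [0, 1]."""
    return factorial(k) * factorial(n - k) / factorial(n + 1)

def beta_norm_numeric(n, k, steps=100000):
    """Midpoint-rule approximation of the same integral."""
    h = 1 / steps
    return sum(((i + 0.5) * h) ** k * (1 - (i + 0.5) * h) ** (n - k)
               for i in range(steps)) * h

n, k = 4, 3
exact = beta_norm_exact(n, k)  # 3! * 1! / 5! = 6/120 = 0.05
assert abs(beta_norm_numeric(n, k) - exact) < 1e-6
```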

Trig half angle question
Hi folks, just a bit puzzled by the following:

$$ \sin \frac{\theta}{2} = \sgn \left(2 \pi - \theta + 4 \pi \left\lfloor \frac{\theta}{4\pi} \right\rfloor \right) \sqrt{\frac{1 \! - \! \cos \theta}{2}} $$

How does $$ \sgn \left(2 \pi - \theta + 4 \pi \left\lfloor \frac{\theta}{4\pi} \right\rfloor \right)$$ get derived? Letsbefiends (talk) 13:17, 16 March 2015 (UTC)
 * It's just a rather involved way of saying that $$ \sin \frac{\theta}{2}$$ is positive when $$\theta$$ is between zero and $$ 2 \pi$$, and negative between $$2 \pi$$ and $$4 \pi$$ (that's the $$2 \pi - \theta$$ part) and it has a period of $$ 4 \pi$$ (the $$ 4 \pi \left\lfloor \frac{\theta}{4\pi} \right\rfloor$$ part). Try graphing the expression inside the sgn function and it should become clear(er). AndrewWTaylor (talk) 14:48, 16 March 2015 (UTC)
 * Oh, and if you were asking about the sign function sgn(x), it just returns the sign of the number x (positive for x > 0, zero for x = 0, and negative for x < 0). If you didn't want to use it you could just write the various cases out explicitly, following what Andrew did above. Double sharp (talk) 21:51, 16 March 2015 (UTC)
 * Thanks! - Letsbefiends (talk) 02:30, 17 March 2015 (UTC)
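The identity, including the sign-correcting factor, can be verified numerically; a short Python sketch (with a hand-rolled `sgn`, since Python's standard library has no sign function):

```python
from math import sin, cos, sqrt, floor, pi

def sgn(x):
    """Sign function: +1 for x > 0, 0 for x = 0, -1 for x < 0."""
    return (x > 0) - (x < 0)

# Sweep a range of angles (the step avoids exact zeros of sin(theta/2))
# and compare both sides of the half-angle identity.
for i in range(-100, 101):
    theta = i * 0.1234
    lhs = sin(theta / 2)
    rhs = sgn(2 * pi - theta + 4 * pi * floor(theta / (4 * pi))) * \
          sqrt((1 - cos(theta)) / 2)
    assert abs(lhs - rhs) < 1e-9
```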

Geometric interpretation of standard deviation or variance on an arbitrary probability density function
Hello mathmos!

I made this graphic to help statistics students visualise the mode, median and mean of an arbitrary probability density function.

Is there a similar geometric analogue for its standard deviation or variance? I understand that the standard deviation is the distance between the mean and either inflection point for a normal distribution but what about an arbitrary PDF?

Thanks,
 * cmɢʟee⎆τaʟκ 09:44, 17 March 2015 (UTC)

Draw dots at μ+σ and μ−σ, where μ is the mean value and σ is the standard deviation. Bo Jacoby (talk) 11:26, 17 March 2015 (UTC).
 * That's how to draw it, but it's not a geometric interpretation. I don't know if there is an easily picturable geometric interpretation, but a physical interpretation of σ² is the moment of inertia about a vertical line at the mean. --RDBury (talk) 16:28, 17 March 2015 (UTC)
 * Yep, it's also good to think about standard deviation in terms of the radius of gyration - then you can do a fun demonstration of how that affects rotational stability with a chalkboard eraser (two axes are rather stable, the other is not). SemanticMantis (talk) 18:37, 17 March 2015 (UTC)
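The moment-of-inertia reading can be made concrete for a discrete sample; a minimal Python sketch (population variance, treating the data as unit point masses, with made-up sample values):

```python
from math import sqrt

# Unit masses placed at each data point; the variance is the second
# moment (moment of inertia) about the mean, and the standard
# deviation is the corresponding radius of gyration.
data = [1.0, 2.0, 2.0, 3.0, 7.0]
mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)
std_dev = sqrt(variance)

assert mean == 3.0
assert abs(variance - 4.4) < 1e-12
```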