Wikipedia:Reference desk/Archives/Mathematics/2008 August 14

= August 14 =

Euler's theorem
a and n are positive integers.

Claim: a is coprime to n if and only if $$a^{\varphi (n)} \equiv 1 \pmod{n}$$.

Prove or disprove. —Preceding unsigned comment added by 93.172.15.144 (talk) 09:56, 14 August 2008 (UTC)
 * The article you've linked to in the title has two proofs of the non-trivial implication. Algebraist 12:06, 14 August 2008 (UTC)
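For small values, both directions of the biconditional can be checked by brute force; a throwaway Python sketch (the naive `phi` helper is written here just for illustration):

```python
from math import gcd

def phi(n):
    """Naive Euler totient: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Check the biconditional for all small pairs (n >= 2).
for n in range(2, 30):
    for a in range(1, 30):
        assert (gcd(a, n) == 1) == (pow(a, phi(n), n) == 1)
print("claim verified for all tested pairs")
```

The converse direction holds because a·a^(φ(n)−1) ≡ 1 (mod n) exhibits a multiplicative inverse of a modulo n, which forces gcd(a, n) = 1.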

Deceptively distributed decimal digits
X and Y are independent continuous random variables both uniformly distributed between 0 and some upper limit a. Z is the leading (non-zero) digit in the decimal expansion of Y/X. So, for example, if X=2 and Y=5 then Z=2; if X=5 and Y=2 then Z=4. What is the probability that Z=1 ?

I expected Benford's law to apply, because I thought this was analogous to measuring an observable of magnitude Y in units of magnitude X. So I expected to find P(Z=1) = log10(2). But I think I have a geometric argument that shows that P(Z=1) is 1/3. Is this correct ? And, if so, why doesn't Benford's law apply ? Gandalf61 (talk) 10:17, 14 August 2008 (UTC)
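Since the upper limit a cancels in the ratio Y/X, one can take a = 1 and estimate the probability by simulation; a rough Monte Carlo sketch (seed and trial count are arbitrary choices):

```python
import random

def leading_digit(z, base=10):
    """First significant digit of z > 0 in the given base."""
    while z < 1:
        z *= base
    while z >= base:
        z /= base
    return int(z)

random.seed(0)
trials = 500_000
hits = sum(
    1 for _ in range(trials)
    if leading_digit(random.random() / random.random()) == 1
)
est = hits / trials
print(est)  # comes out near 1/3, noticeably above log10(2) ≈ 0.301
```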
 * The probability is independent of a, so set a=1. This gives some simplification. Within the unit square in the X,Y-plane consider the triangles for which 0.1≤X/Y<0.2, 1≤X/Y<2, 10≤X/Y<20, 100≤X/Y<200 etc and sum their areas. Is this what you did? Bo Jacoby (talk) 11:00, 14 August 2008 (UTC).


 * Yes, in essence that was my geometric argument that led to P(Z=1) = 1/3. Gandalf61 (talk) 11:10, 14 August 2008 (UTC)


 * Benford's law is just a rule of thumb, and I don't think there's any realistic situation in which it holds exactly. I'd expect it to hold approximately here, and it does: log10(2) ≈ 1/3. A precisely correct statement of the law would be that P(Z=1) is the integral from 0 to log10(2) of the density of the log10 of your random variable on $$\mathbb{R}/\mathbb{Z}$$. Often that density is roughly flat, so the integral comes out to about 0.301. -- BenRG (talk) 11:31, 14 August 2008 (UTC)


 * Hmmm. 1/3 doesn't seem very close to log10(2). And the difference gets worse if you increase the base. In hexadecimal, Benford's law says that P(Z=1) should be log16(2) = 1/4, whereas the geometric method gives P(Z=1) = 3/10. In fact, as the base increases, Benford's law says that P(Z=1) should tend towards 0 because


 * $$\lim_{b \rightarrow \infty} \log_b(2)=0$$


 * whereas with the geometric method, P(Z=1) tends to a limit of 1/4 (because the area of the largest triangle, between the lines Y=X and Y=2X, is always 1/4 and the areas of all the other triangles become negligible). So I still think that either the geometric method is incorrect or Benford's law doesn't apply - but I don't know which. Gandalf61 (talk) 12:58, 14 August 2008 (UTC)


 * Your geometric method is fine; it's a proof. Benford's law never applies in the sense that you mean—it's not a theorem (except in base 2) and can't be used to prove anything. It's generally the case that it gets less accurate for higher bases. For this problem it's exactly right (for a leading digit of 1) in bases 2 and 4 and off by less than 1% in base 3. -- BenRG (talk) 14:56, 14 August 2008 (UTC)
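These base-by-base claims are easy to tabulate by comparing the closed form (b+2)/(4(b−1)) from the triangle argument against log_b(2); a quick Python check:

```python
import math

def geometric(b):
    # closed form for P(Z = 1) from the triangle-area argument
    return (b + 2) / (4 * (b - 1))

def benford(b):
    return math.log(2, b)

for b in (2, 3, 4, 10, 16):
    print(b, geometric(b), benford(b))
```

The two values agree exactly at b = 2 and b = 4, and differ by less than 1% at b = 3.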


 * But Benford's law isn't just theoretical - Benford supported it with statistical evidence in his original paper, and MathWorld mentions other evidence as well. What I am trying to work out is whether Y/X is a good model for measuring an arbitrary observable in arbitrary units. What is the probability that the first significant digit in the base b expansion of an arbitrary observable measured in arbitrary units (for example, the speed of light in furlongs per fortnight) is 1 ? Is it logb(2) as per Benford ? Or is it (b+2)/4(b-1) as per the Y/X model ? Gandalf61 (talk) 15:31, 14 August 2008 (UTC)


 * Oh, I see what you mean. Benford's law is better. What you're seeing in this problem is edge effects arising from the particular cutoff you've imposed on X and Y. If X is a physical quantity and Y is a unit, it's probably more realistic to assume a uniform distribution on log X and log Y rather than X and Y. The leading-1 case of Benford's law says that the fractional part of log X − log Y will lie in the range [0, 0.301) about 30.1% of the time, which will be true if the fractional part of log X − log Y is uniformly distributed, which will be roughly true as long as X and Y are allowed to range over enough orders of magnitude. -- BenRG (talk) 18:35, 14 August 2008 (UTC)


 * Gandalf, the answer is P(Z=1) = 1/9. The nine triangles from 0.1 to 1.0 each have the same area. The nine triangles from 0.01 to 0.1 each have the same area. &c. So each of the nine possible leading digits from 1 to 9 has the same probability, 1/9. You do not even need to sum the geometric series. Bo Jacoby (talk) 14:18, 14 August 2008 (UTC).


 * Here is how I got P(Z=1) = 1/3. We have one sequence of triangles with bases down the right hand side of the square: 0.1X ≤ Y < 0.2X, 0.01X ≤ Y < 0.02X etc., with bases 1/10^n and height 1, so the total area of this sequence is 1/18. We have another sequence of triangles with bases along the top of the square: X ≤ Y < 2X, 10X ≤ Y < 20X etc., with bases 5/10^n and height 1, so the total area of this sequence is 5/18. And 1/18 + 5/18 = 1/3. Gandalf61 (talk) 14:35, 14 August 2008 (UTC)
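As a numerical sanity check, truncating the two geometric series after a few dozen terms reproduces these areas (throwaway Python):

```python
# Right-hand-side triangles: bases 1/10^n, height 1, so areas (1/2)(1/10^n)
right = sum(0.5 * 10.0**-n for n in range(1, 30))
# Top triangles: bases 5/10^n, height 1, so areas (1/2)(5/10^n)
top = sum(0.5 * 5 * 10.0**-n for n in range(1, 30))
print(right, top, right + top)  # 1/18, 5/18, and their sum 1/3
```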


 * You're correct, the total area is 1/3 (in general, $$\tfrac{b+2}{4(b-1)}$$). The nine triangles corresponding to different leading digits have different areas. -- BenRG (talk) 14:56, 14 August 2008 (UTC)
 * Sorry, I was too hasty. Bo Jacoby (talk) 17:13, 14 August 2008 (UTC).


 * When $$Y\approx1$$, Benford can't apply at all; the first digit is uniformly distributed because it's uniform on each of $$[0.1,1), [0.01,0.1)$$, etc. When $$Y=\frac12$$, on the other hand, you get a full half probability for 1 from $$X/Y\in[1,2)$$, and then uniform otherwise.  The pattern repeats at $$Y=\frac1{10},\frac1{20}$$; it only makes sense that the final probabilities resemble Benford's Law (since $$Y=\frac15$$ benefits 1…4 but not 5…9), but nothing more precise may be assumed because there is a variable amount of uniformity mixed in and because the relative favor given 1 and then 1/2 and then 1/2/3 is not logarithmically distributed.  --Tardis (talk) 18:14, 14 August 2008 (UTC)


 * As BenRG notes above, for Benford's law (in base b) to hold exactly for a random variable R, the variable S = (log R) mod (log b) must be uniformly distributed. If X and Y are both uniformly distributed between 0 and a, then the probability density and cumulative probability functions for X/Y and their logarithmic equivalents are
 * $$\begin{align} \frac{\mathrm{d}}{\mathrm{d}z}\;\mathrm{P}\left(\frac{X}{Y} < z\right) &= \begin{cases} 1/2 & \mbox{for } 0 \le z \le 1, \\ 1/(2z^2) & \mbox{for } z > 1, \end{cases} \\ \mathrm{P}\left(\frac{X}{Y} < z\right) &= \begin{cases} z/2 & \mbox{for } 0 \le z \le 1, \\ 1 - 1/(2z)  & \mbox{for } z > 1, \end{cases} \\ \mathrm{P}\left(\log \frac{X}{Y} < \zeta\right) &= \begin{cases} e^\zeta/2 & \mbox{for } \zeta \le 0, \\ 1 - e^{-\zeta}/2  & \mbox{for } \zeta > 0, \end{cases} \\ \frac{\mathrm{d}}{\mathrm{d}\zeta}\;\mathrm{P}\left(\log \frac{X}{Y} < \zeta\right) &= \begin{cases} e^\zeta/2 & \mbox{for } \zeta \le 0, \\ e^{-\zeta}/2  & \mbox{for } \zeta > 0. \end{cases}\end{align}$$
 * Thus, the logarithmic probability density peaks at 0, reflecting the fact that the construction makes it fairly likely that X and Y are of similar magnitude. Writing β = log b, the probability density function for log X/Y mod β is then
 * $$\frac{\mathrm{d}}{\mathrm{d}\eta}\;\mathrm{P}\left(\log \frac{X}{Y}\ \mbox{mod}\ \beta < \eta\right) = \frac 12 \sum_{k=1}^{\infty}\left( e^{-k\beta+\eta}+e^{(1-k)\beta-\eta} \right) = \frac 12 (e^\eta + e^{-\eta}) \sum_{k=1}^{\infty} e^{-k\beta} + \frac 12 e^{-\eta}.$$
 * This cannot be constant for any b; therefore Benford's law (at least in its extended, multi-digit form) will not hold exactly in any base for this distribution. The probability density function does become approximately constant (over its domain 0 ≤ η ≤ β) as β approaches 0, so the law will more or less hold for small b.  For large b, however, the probability density peaks sharply at the ends of the range; in particular, P(log X/Y mod β < η) tends (pointwise) to $$1/2 - e^{-\eta}/2$$ as β tends to infinity.  (The second peak at β moves out to infinity, taking half of the probability mass with it.)  Thus, as the geometric reasoning above suggests, P(log X/Y mod β < log 2) indeed approaches $$1/2 - e^{-\log 2}/2 = 1/4$$ for large β.  —Ilmari Karonen (talk) 22:16, 14 August 2008 (UTC)
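Integrating this wrapped density over [0, log 2) should reproduce the triangle-argument value (b+2)/(4(b−1)) for every base; a rough numerical sketch (truncation length and step count are arbitrary), taking the sums to start at k = 1:

```python
import math

def p_leading_one(b, terms=200, steps=50_000):
    """Midpoint-rule integral of the wrapped density of log(X/Y) mod log b
    over [0, log 2), with the geometric sum truncated after `terms` terms."""
    beta = math.log(b)
    s = sum(math.exp(-k * beta) for k in range(1, terms + 1))
    def density(eta):
        return 0.5 * (math.exp(eta) + math.exp(-eta)) * s + 0.5 * math.exp(-eta)
    h = math.log(2) / steps
    return h * sum(density((i + 0.5) * h) for i in range(steps))

for b in (4, 10, 16):
    print(b, p_leading_one(b), (b + 2) / (4 * (b - 1)))
```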

Thank you for all the responses. Obviously if we start with a uniform distribution for log X and log Y over a wide enough range then a Benford-type distribution for Z follows almost trivially. I was hoping that there might be a simple explanation of why Benford's law is empirically seen to apply in less restricted scenarios. As far as I know, the only explanation of Benford's law that does not start out by assuming an underlying uniform log distribution or something close to that is Hill's 1996 paper on samples from "random" distributions, which is a fairly technical explanation. Gandalf61 (talk) 09:21, 15 August 2008 (UTC)


 * Well, a slightly better explanation is that, even if the log distribution isn't that close to uniform, "rolling it up" modulo log b will tend to make it more uniform if b is small enough. (Even your sharply peaked example gets pretty close for small b, the relative error P(log X/Y mod log b < log 2) / (log_b 2) - 1 being, coincidentally, about b/100 for b up to 60 or so.)  In particular, if the variable is approximately log-normally distributed (which the central limit theorem says is likely for variables that are products of many independent underlying factors) with a standard (log-)deviation of more than a few orders of magnitude, then Benford's law is likely to hold quite well.  —Ilmari Karonen (talk) 12:46, 15 August 2008 (UTC)
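The log-normal claim is easy to illustrate by simulation; a sketch with an arbitrary seed, taking log10 of the variable to be normal with a standard deviation of 3 decades:

```python
import math
import random

def leading_digit(z):
    """First significant decimal digit of z > 0."""
    e = math.floor(math.log10(z))
    return int(z / 10.0**e)

random.seed(1)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if leading_digit(10.0 ** random.gauss(0, 3)) == 1
)
est = hits / trials
print(est)  # close to log10(2) ≈ 0.30103
```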

2012
What day of the week will 18th August 2012 fall on? —Preceding unsigned comment added by 86.135.111.151 (talk) 10:30, 14 August 2008 (UTC)
 * Saturday. You can double-click the clock in the Windows taskbar to bring up a calendar. See Calculating the day of the week. —Preceding unsigned comment added by 93.172.15.144 (talk) 10:47, 14 August 2008 (UTC)
 * You can also visit 2012 here, and click on the link to leap year starting on Sunday. -- Coneslayer (talk) 16:04, 14 August 2008 (UTC)
 * 86.135.111.151: What do you expect will happen on August 18, 2012? None of these, I hope?  --Bowlhover (talk) 19:09, 15 August 2008 (UTC)
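The lookup can also be done directly from a scripting language; in Python, for instance, the standard datetime module knows the Gregorian calendar:

```python
import datetime

d = datetime.date(2012, 8, 18)
print(d.strftime("%A"))  # "Saturday" in an English locale
print(d.weekday())       # 5, where Monday is 0
```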

A question about terminology in elementary category theory
How would you name, or what is the standard name for, the following property of a map h:x-->y in some category C:

(P) for any f:x-->x there exists a g:y-->y such that gh=hf.

In other words, if h is placed on both horizontal edges of a square diagram and f on the left edge, we can complete a commutative square by putting g on the right edge.

In fact an analogous property may be considered in any algebraic context, that is, f, g and h may be elements of a group or similar structure; but does this have a name? —Preceding unsigned comment added by 79.38.22.37 (talk) 13:48, 14 August 2008 (UTC)

How fast?
This is an odd question. I purchased The Flash (TV series) on DVD a while back, and one episode had the superhero running 10 miles in 30 seconds to escape a radio signal that would blow up a bomb. I'm sorry to say that I don't remember the formula for calculating speed. How fast is that anyway? --Ghostexorcist (talk) 19:52, 14 August 2008 (UTC)


 * Distance = (Speed) x (Time) and Speed = (Distance)/(Time). Since 30 seconds is 1/120 of an hour, the superhero was going at 10 ÷ (1/120) = 1200 mph.--El aprendelenguas (talk) 20:45, 14 August 2008 (UTC)
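In code form (trivial, but it makes the unit conversion explicit):

```python
miles = 10
seconds = 30
mph = miles * 3600 / seconds  # 3600 seconds in an hour
print(mph)  # 1200.0
```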


 * Which is still much slower than the speed of light, at which radio waves (being very low-frequency light) travel. --antilivedT 08:52, 15 August 2008 (UTC)