Wikipedia:Reference desk/Archives/Mathematics/2018 November 20

= November 20 =

Total multiplicative error after n/2 steps
I have some quantity and in each of $$\frac n2$$ steps I gain a multiplicative error of $$1 \pm n^{-2/3}$$. In what sense is the total error $$1 \pm n^{1/2} \cdot n^{-2/3}$$?

I would naively compute just the "maximum" error to be $$(1 \pm n^{-2/3})^{n/2} \approx 1 \pm n/2 \cdot n^{-2/3}$$. Why the square root? (The context is somewhat vague regarding the exact assumptions.) --37.122.157.27 (talk) 22:05, 20 November 2018 (UTC)


 * Statistically, the probability that all of the random errors go the same way is extremely low. The assumption is that some errors will go one way and some the other, so there will be some cancelling.  See standard error.  If you are dealing with a constant systematic error, then you are correct to calculate the maximum.  Dbfirs  22:30, 20 November 2018 (UTC)
 * But why specifically square root? --37.122.157.27 (talk)  — Preceding unsigned comment added by 176.12.239.38 (talk) 09:34, 21 November 2018 (UTC)
 * On average the error terms will add to zero, but the expected squared error of a sum of n independent terms grows proportionally to n. So for n terms the typical error grows approximately as the square root of n times the error of a single term.
 * By the way, for multiplicative terms it doesn't work exactly as you said. Kelly criterion is an interesting article on this and how it affects making money - though that really needs a bit of change when you can invest in a number of different things. Dmcq (talk) 13:31, 21 November 2018 (UTC)
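The square-root scaling described above can be checked numerically. Below is an illustrative Monte Carlo sketch (not part of the original thread; the function name and parameter values are my own choices): it multiplies n/2 factors of 1 ± n^(-2/3) with independent random signs and compares the RMS deviation of the product from 1 against sqrt(n/2)·n^(-2/3).

```python
import math
import random

def typical_error(n, trials=2000, seed=0):
    """RMS deviation from 1 of a product of n/2 factors (1 +/- n^(-2/3))
    with independent, equally likely random signs."""
    rng = random.Random(seed)
    eps = n ** (-2 / 3)
    devs = []
    for _ in range(trials):
        prod = 1.0
        for _ in range(n // 2):
            prod *= 1 + rng.choice((-1, 1)) * eps
        devs.append((prod - 1) ** 2)
    return math.sqrt(sum(devs) / trials)

n = 4096
rms = typical_error(n)
predicted = math.sqrt(n / 2) * n ** (-2 / 3)  # sqrt-of-steps scaling
print(rms, predicted)  # the two agree up to a small constant factor
```

With n = 4096 the naive first-order bound (n/2)·n^(-2/3) = n^(1/3)/2 = 8 wildly overstates the typical deviation, while the square-root prediction is close to the simulated value.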


 * I do not think the above answers are satisfactory in explaining the final result. I am also not sure that getting the probability distribution function of a product, even a product of independent normally-distributed variables and even for small standard deviations, is as simple as taking the product of (mean +/- std).
 * However, I can confidently say why your naive computation is incorrect. What you are implicitly doing is applying the asymptotic expansion $$(1+\epsilon)^N = 1 + N\epsilon + O(\epsilon^2)$$ (see big O notation if needed). For a fixed $$N$$, as $$\epsilon$$ grows smaller, you can neglect the second-order term if you only want a first-order expansion.
 * The problem is that for what you want to do, $$N$$ is not fixed but depends on $$\epsilon$$ (or the other way around, depending on how you see things). This makes the procedure unreliable: for instance, as $$N$$ grows large, the value of $$\left(1+\frac{a}{N}\right)^N$$ goes to $$e^a$$, not $$1+a$$.
 * Another way to spot the problem is to write the exact binomial expansion $$(1+\epsilon)^N = 1 + N\epsilon + \frac{N(N-1)}{2}\epsilon^2 + \frac{N(N-1)(N-2)}{6}\epsilon^3 + ...$$ For $$\epsilon=N^{-2/3}$$, as $$N$$ grows large, the first-order term $$N\epsilon \approx N^{1/3}$$ is small, not big, compared to the second-order term $$\frac{N(N-1)}{2}\epsilon^2 \approx N^{2/3}$$, which is itself small compared to the third-order term, and so on. Because we are talking about signed "errors" rather than a fixed $$\epsilon$$ this complicates things somewhat, but the point is that you cannot neglect the higher-order terms as easily as you would like to. Tigraan Click here to contact me 16:47, 21 November 2018 (UTC)
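The two observations above — that $$(1+a/N)^N$$ tends to $$e^a$$ rather than $$1+a$$, and that for $$\epsilon=N^{-2/3}$$ the successive binomial terms keep growing — can be verified with a few lines of Python (an illustrative sketch, not part of the original thread; the value N = 10^6 is an arbitrary choice):

```python
import math

a, N = 1.0, 10**6

# (1 + a/N)^N approaches e^a as N grows, far from the naive 1 + a = 2.
limit = (1 + a / N) ** N
print(limit, math.e)

# For eps = N^(-2/3), each binomial term C(N, k) * eps^k
# is larger than the one before it, so none can be neglected.
eps = N ** (-2 / 3)
terms = [math.comb(N, k) * eps**k for k in range(1, 5)]
print(terms)
```

Here the first four terms come out roughly 100, 5·10^3, 1.7·10^5, and 4.2·10^6, so the "small correction" terms dominate the first-order one.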