Wikipedia:Reference desk/Archives/Mathematics/2009 April 13

= April 13 =

Length of a Continued Fraction
Alright, I have several questions regarding the length of a continued fraction. Feel free to answer any, all, or none of them (though, preferably, if you intend to do the last, don't respond at all :). These questions are similar, but distinct.


 * 1) Does the length of the continued fraction expansion of a rational number say anything meaningful about the number itself? I can already guess that it is a rough measure of the complexity of the number, but I'm wondering if there's any significantly less vague property to speak of.
 * 2) Can the numerator/denominator form of a rational number be used to predict the length of that number's continued fraction, without actually calculating its expanded form?
 * 3) Would it be easier to predict the length if the fraction were given in reduced form?
 * 4) What about with the numerator and denominator decomposed into prime factors?

BTW, just to deflect any misunderstanding from question # 3, I am aware that converting a numerator/denominator fraction into its continued fraction form and back will automatically put it in reduced form.

All responses appreciated, --69.91.95.139 (talk) 01:46, 13 April 2009 (UTC)


 * Have you thought about the questions? What have you worked out with regards to each question? Please list some of your ideas. -- PS T  04:53, 13 April 2009 (UTC)


 * Hint: the length of the continued fraction expansion of a/b (which may or may not be in reduced form) is related to the number of steps required to find the greatest common divisor of a and b using the Euclidean algorithm - can you see why? Our Euclidean algorithm article contains some results on the "worst case" and average number of steps. Can you find a fraction a/b, where a and b are both less than 100, with a continued fraction expansion of length 10? Gandalf61 (talk) 05:49, 13 April 2009 (UTC)
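Gandalf61's hint can be checked by brute force. A minimal Python sketch (the function and variable names are my own) that computes the partial quotients by repeated division and searches for the longest expansion with both numbers below 100:

```python
def cf(a, b):
    """Partial quotients of a/b, computed by repeated division (a, b > 0)."""
    terms = []
    while b:
        terms.append(a // b)
        a, b = b, a % b
    return terms

# search every fraction a/b with 0 < a < b < 100 for the longest expansion
best = max(((a, b) for b in range(2, 100) for a in range(1, b)),
           key=lambda pair: len(cf(*pair)))
print(best, cf(*best))
```

The search turns up 55/89, a ratio of consecutive Fibonacci numbers, with the 10-term expansion [0; 1, 1, 1, 1, 1, 1, 1, 1, 2] - consistent with the Fibonacci worst case discussed in the Euclidean algorithm article.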


 * Is the last partial quotient allowed to be 1? —Tamfang (talk) 04:53, 14 April 2009 (UTC)


 * Oops, yes it was. So make that "...with a shortest continued fraction expansion of length 9". Gandalf61 (talk) 11:01, 14 April 2009 (UTC)


 * I've figured out the connection algebraically. Thanks for the tip, Gandalf61. I'll post my formulation here for the curious later. --69.91.95.139 (talk) 22:30, 14 April 2009 (UTC)


 * Let a and b be integers satisfying $$0 < a < b \,$$. The Euclidean algorithm and the derivation of the continued fraction both proceed by finding the unique integer n such that $$n \le b/a < n+1 \,$$, or equivalently $$na \le b < (n+1)a \,$$. (Strictly speaking, this is not how the Euclidean algorithm is defined, but I will use n in the Euclidean algorithm to aid in finding the modulus, which is effectively 'subtracting a from b as many times as possible'.)


 * Let's look at deriving the continued fraction form of b/a first.
 * $$b/a = n + (\frac{b}{a} - \frac{na}{a}) = n + \frac{b-na}{a} = n + \frac{1}{(\frac{a}{b-na})}$$
 * This is one step. If b = na, then the fraction portion can be left out, and the continued fraction expansion has ended. Otherwise, we know $$b < na + a \,$$, i.e. $$b-na < a \,$$, so the fraction in the lower denominator, $$\frac{a}{b-na}$$, is larger than 1, and the expansion continues.


 * Now take a look at the same step in the Euclidean algorithm. We're trying to find the GCD of a and b. In order to do so, the Euclidean algorithm tells us that GCD(a, b) = GCD(a, b-na).


 * That was also one step. We notice that this leaves the same pair of numbers as the fraction above. Thus, the continued fraction expansion and the Euclidean algorithm are doing essentially the same thing.


 * There is one difference to note: the continued fraction expansion ends sooner. When we reach a pair of the form a and b = na, the continued fraction ends, but the Euclidean algorithm continues one more step, to the pair (a, 0). Thus, the length of the continued fraction is one less than the number of steps it takes the Euclidean algorithm to compute the GCD of the numerator and denominator. This holds regardless of whether the fraction is given in reduced form, and giving the numerator and denominator in factored form does not help at all.
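The correspondence worked out above is easy to see in code; here is a small sketch (my own naming) in which the loop that emits the partial quotients is literally the Euclidean division loop:

```python
def cf_terms(a, b):
    """Partial quotients of a/b; each division is one Euclidean step."""
    terms = []
    while b:
        terms.append(a // b)   # the integer n from the derivation above
        a, b = b, a % b        # (a, b) -> (b, a - nb): one Euclidean reduction
    return terms

print(cf_terms(45, 16))  # [2, 1, 4, 3], i.e. 45/16 = 2 + 1/(1 + 1/(4 + 1/3))
```

Note the counting convention: here the final division (the one producing a zero remainder) both emits the last partial quotient and counts as a Euclidean step, so the two lengths coincide; the "one less" above comes from counting the extra reduction to the pair (a, 0) as its own step.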


 * That answers all my questions. Thanks again for the help. --69.91.95.139 (talk) 00:43, 15 April 2009 (UTC)


 * Hey! it's just me or in those formulas there is written "banana" everywhere... I think I'll go for another banana split! ;-) --pma (talk) 21:11, 15 April 2009 (UTC)

constructing a table
I want to let someone try, blindly, to guess with a machine how dice will land (while they are still rolling in the air). They can't see how the dice land, so I thought I would let them stop whenever they want, after 5 throws or 50, and after each number of throws I would tell them how well they would need to be predicting (the size of the effect) for this to show up at the 95%, 98%, and 99% confidence levels. Can this be done? How would my chart look? (And how would I calculate it?) Thanks! 94.27.151.13 (talk) 14:10, 13 April 2009 (UTC)


 * How many dice - 1 ? 10 ? 100 ? What exactly is the other person guessing ? What is the "effect" that you want to test for ? Is this another variation on the coin tossing questions that you were asking above ? And have you read hypothesis testing, normal distribution and Z-test yet ? Those articles will help you to answer your questions. Gandalf61 (talk) 15:53, 13 April 2009 (UTC)


 * The provided links were very difficult, so I am asking a simpler question!
 * Q. How many dice?
 * A. One standard six-sided die.
 * Q. What exactly is the other person guessing?
 * A. They claim a machine can deduce to some extent, from the observed position and spin of the die, on which face it will land. The machine cannot see the results (or what happens once the die falls below a certain height).


 * Q. What is the "effect" that you want to test for ?
 * A. Can the machine predict the die to any statistically significant extent over the number of throws the experiment consists of? (i.e., is the machine predicting the results of the fair die throws statistically better than I can by saying 1, 1, 1, 1, 1, etc.) The device-wrangler can stop after any number of throws (they don't see the results until the end) and I want them to be able to consult my chart to see when they want to stop...
 * Q. Is this another variation on the coin tossing questions that you were asking above ?
 * A. Not quite. I'm looking for a table that shows, for each number of throws, what the threshold is at a few different confidence levels for proving that their machine works to ANY statistically significant extent -- and the corresponding strength of the effect that this would demonstrate.
 * My reasoning is that the effect strength would have to be close to 100% for an effect to be proven in 6 die throws, so if they have just thrown 6 they would consult my chart and see that the machine would have to be very accurate to prove any efficacy in so few throws. By reading down the chart, they can find the number of throws corresponding to the effect strength they think they have -- if they think the machine works 1% better than chance, they would read down the list and see the 1% somewhere, maybe at 100 throws for 95% confidence and at 115 throws for 99% confidence, I don't know -- and I don't know how to calculate these numbers!
 * Could someone prepare such a chart for me or give me simple formulas so I can do so myself from any programming language? Thank you. 94.27.151.13 (talk) 18:33, 13 April 2009 (UTC)
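Such a chart can be computed from the binomial tail probability: with n throws of a fair die, pure guessing scores each throw with probability 1/6, and the threshold at confidence level 1 − α is the smallest hit count whose tail probability falls to α or below. A stdlib-only Python sketch (the function names and the sample sizes printed are my own choices):

```python
from math import comb

def tail_prob(n, k, p=1/6):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more lucky hits."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def threshold(n, alpha, p=1/6):
    """Smallest number of correct calls needed for significance at level alpha."""
    for k in range(n + 1):
        if tail_prob(n, k, p) <= alpha:
            return k
    return None  # n is too small: even a perfect score is not significant

print("throws  thresholds at 95% / 98% / 99%")
for n in (6, 12, 24, 48, 96):
    print(n, [threshold(n, a) for a in (0.05, 0.02, 0.01)])
```

Reading a row: after that many throws, the machine must have called at least the listed number of faces correctly to reject pure guessing at that confidence level; the implied effect strength is the required hit rate minus 1/6.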
 * Statistical power. Unless your machine can make perfect guesses you will never be able to set up your experiment such that you can be certain that a 95% confidence can will be reached.  Taemyr (talk) 06:04, 14 April 2009 (UTC)
 * When you said "you will never be able to set up your experiment such that you can be certain that a 95% confidence can be reached" my brain exploded. You owe me a new brain, Mister. And then you owe me an explanation. What could that statement possibly mean? 79.122.35.239 (talk) 10:55, 14 April 2009 (UTC)
 * Well, first off I made a typo. It should be "will" rather than "can". Other than that, as long as there is a non-zero possibility that your machine guesses wrong, it's possible that it guesses wrong on all throws of the dice. So you have the concept of Statistical power, which is the probability that an experiment detects an effect that is present. Power is in tension with confidence: if you want higher confidence you must either change the experiment or accept lower power. Taemyr (talk) 03:19, 16 April 2009 (UTC)

Gaussian Derivative
In Gaussian_function it says "Mathematically, the derivatives of the Gaussian function are the Hermite functions", but it seems like the actual derivative is the negative of a Hermite function. Where does the extra -1 factor come from? Is this an error in the article or am I missing something? Truthforitsownsake (talk) 16:00, 13 April 2009 (UTC)


 * OK, let's say "up to a multiplicative factor". Notice also that the definition of several special functions differs by some constant among the various authors, so you may even find a definition of Hermite functions without "extra -1". --pma (talk) 14:34, 14 April 2009 (UTC)

Polynomial question/thing
This question has always baffled me: f(x) is a monic polynomial of degree 6. It has the following values: f(0)=0, f(1)=1, f(2)=2, f(3)=3, f(4)=4, f(5)=5. What is the value of f(6)? Considering this is beyond my current working knowledge of polynomials, how would I even begin to solve this? Furthermore, what would the correct equation be of said polynomial, and how would I come up with it? Does this involve calculus of some sort? 141.153.216.159 (talk) 16:06, 13 April 2009 (UTC)


 * Follow the example at Lagrange polynomial. No calculus, just a bit of algebra. —Pt (T) 16:29, 13 April 2009 (UTC)


 * If we want a monic polynomial of degree 6, the best thing to do is to create an arbitrary monic polynomial of degree 6, $$f(x)=x^6+ax^5+bx^4+cx^3+dx^2+ex$$. As f(0)=0 there is obviously no constant term. There may be an easier way to solve this, but the method I used was to create a coefficient matrix for all five non-zero values f(x) takes that are provided. Using row operations, I transformed it to reduced row echelon form, whereby I could read off the coefficient values. The equation I got is $$f(x) = x^6-15x^5+85x^4-225x^3+274x^2-119x$$. f(6)=726. Readro (talk) 17:21, 13 April 2009 (UTC)
 * Messing around with matrices is a lot more complicated than Lagrange polynomials, I think. Algebraist 18:41, 13 April 2009 (UTC)
 * Indeed. And if one doesn't want to use the Lagrange polynomial formula explicitly, one may observe that f(x)-x is a sixth degree monic polynomial vanishing at x=0,1,2,3,4,5 : therefore f(x)=x(x-1)(x-2)(x-3)(x-4)(x-5)+x and f(6)=6!+6. --pma (talk) 19:58, 13 April 2009 (UTC)
 * That's a very nice and elegant argument. Wish I'd spotted that! Readro (talk) 21:13, 13 April 2009 (UTC)

Doing the computations with matrices is laborious, but understanding how they're done is what makes it obvious that there must be a solution. But pma's solution above is the best of those proposed here. Michael Hardy (talk) 21:24, 13 April 2009 (UTC)
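pma's factored form also makes the answer trivial to verify numerically; a quick sketch:

```python
def f(x):
    # pma's construction: f(x) - x vanishes at 0..5 and f is monic of degree 6
    prod = 1
    for k in range(6):
        prod *= x - k
    return prod + x

assert all(f(k) == k for k in range(6))  # matches the given values
print(f(6))  # 6! + 6 = 726
```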

statistics: are there real effects that are impossible to show statistically with very high confidence?
I am new to statistics and I was wondering: are there real physical effects that are impossible to demonstrate statistically with very high confidence, no matter what test you devise? I mean effects you can show with 98% confidence, but whose nature makes it impossible to devise an experiment at 99.9% confidence (perhaps because the 98% test cannot be repeated many times for some reason, or the sample space cannot be made much larger than what the 98% level requires)?

Thank you! 94.27.151.13 (talk) 18:54, 13 April 2009 (UTC)


 * Many measurements of cosmological parameters are subject to uncertainty from cosmic variance; that is, we can only observe a small fraction of the universe, which surely has slight statistical deviations from the whole universe. For example, the measurement of the Hubble constant is subject to errors from bulk flows in the visible universe.  In one of the best measurements of the Hubble constant, from the HST Key Project, the systematic error due to bulk flows is given as 5% (Table 14).  -- Coneslayer (talk) 19:16, 13 April 2009 (UTC)


 * Some economic effects are very difficult to show statistically. The reason is that (1) as with astronomy, it is not always possible to conduct controlled experiments, (2) as with fluid dynamics applied to small quantities of liquids, sometimes the number of people involved is small enough that individual idiosyncratic behavior drowns out the effect. Wikiant (talk) 19:34, 13 April 2009 (UTC)


 * Thank you, both respondents! These answers are just what I was looking for! :) -- follow-up question: In cases like this, what is the lowest confidence level that is even worth reporting results at (in a paper or any other source)? 95% as you said, or 90%, or even lower? Thanks! 94.27.151.13 (talk) 20:10, 13 April 2009 (UTC)


 * The appropriate confidence level depends on the cost of being wrong. In clinical medicine, there is a huge downside to erroneous measurement, so one would tend to demand high confidence levels. In market research or portfolio management, you can be wrong in many individual cases as long as you are right more often than wrong, so lower confidence levels are acceptable. The lowest I've seen (and that in market research) is 75%. Wikiant (talk) 22:17, 13 April 2009 (UTC)
 * thank you Wikiant, both for the concrete and qualified number, and especially for your explanation preceding it. Totally informative. 94.27.151.13 (talk) 22:35, 13 April 2009 (UTC)

Finding P
I'm not one to ask the REFDESK to do my homework, but I would appreciate the help. I need to find the "...probability distribution for the sum of the numbers given", all 2 through 10. Percentages are given for the outcomes, but I need the equation to "Find P (Sum is prime)." If I could get the equation, I can be on my way. Thank you. —Mr. E. Sánchez (that's me!)What I Do / What I Say 22:01, 13 April 2009 (UTC)


 * If you're saying that probabilities P(sum=2), P(sum=3) and so on are given, then all you need to do is add up the values for the primes. —Tamfang (talk) 04:46, 14 April 2009 (UTC)
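In code this is a one-liner; here is a sketch with a made-up distribution (the percentages below are hypothetical stand-ins for the ones actually given in the exercise):

```python
# hypothetical distribution over the sums 2..10; substitute the given percentages
P = {2: 0.05, 3: 0.10, 4: 0.10, 5: 0.15, 6: 0.20,
     7: 0.15, 8: 0.10, 9: 0.10, 10: 0.05}

primes = {2, 3, 5, 7}  # the primes among the possible sums 2..10
p_prime = sum(P[s] for s in P if s in primes)
print(p_prime)
```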


 * [[Image:Yes check.svg|20px]] Resolved. I see. Thank you! —Mr. E. Sánchez (that's me!)What I Do / What I Say 00:50, 15 April 2009 (UTC)

Connection between frequency modulation and Bessel functions
I am trying to understand the presence of spectral side-bands when the frequency of an oscillation itself oscillates. Most books on the subject assume I have much more or much less mathematical background than I do. It seems to ride on the following identities, which I can't see how to derive.


 * $$ \cos ( \beta \sin (p) ) = J_0 (\beta) + 2 J_2 (\beta) \cos ( 2 p) + 2 J_4 (\beta) \cos ( 4 p) + \cdots$$


 * $$ \sin ( \beta \sin (p) ) = 2 J_1 (\beta) \sin (p) + 2 J_3 (\beta) \sin ( 3 p) + 2 J_5 (\beta) \sin ( 5 p) + \cdots$$

Your help is most appreciated.128.223.130.198 (talk) 22:46, 13 April 2009 (UTC)


 * These are named Jacobi-Anger expansions and follow easily from the generating series of the Bessel functions; also have a look here and here, and ask for details if needed. Put $$\scriptstyle t=i\exp(ip)$$ in the generating series of the $$J_n(\beta)$$, and use $$\scriptstyle\frac{1}{2}\Big(t-\frac{1}{t}\Big)=i\cos(p)$$.--pma (talk) 13:12, 14 April 2009 (UTC)
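The expansions can also be checked numerically without a special-function library, using the standard integral representation $$J_n(\beta)=\frac{1}{\pi}\int_0^\pi \cos(n\tau-\beta\sin\tau)\,d\tau$$; the sin-series is taken in its usual Jacobi-Anger form, with $$\sin((2k+1)p)$$ factors. A Python sketch with midpoint quadrature (the parameter values are arbitrary choices of mine):

```python
from math import cos, sin, pi

def J(n, beta, steps=2000):
    """Bessel J_n via its integral representation, midpoint rule on [0, pi]."""
    h = pi / steps
    return sum(cos(n * (h * (i + 0.5)) - beta * sin(h * (i + 0.5)))
               for i in range(steps)) * h / pi

beta, p = 1.3, 0.7
# even orders reconstruct cos(beta*sin(p)); odd orders reconstruct sin(beta*sin(p))
cos_series = J(0, beta) + sum(2 * J(n, beta) * cos(n * p) for n in (2, 4, 6, 8))
sin_series = sum(2 * J(n, beta) * sin(n * p) for n in (1, 3, 5, 7))
print(abs(cos(beta * sin(p)) - cos_series))  # tiny truncation/quadrature error
print(abs(sin(beta * sin(p)) - sin_series))  # tiny truncation/quadrature error
```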


 * Thanks a lot, that did the trick. 128.223.23.171 (talk) 18:50, 15 April 2009 (UTC)