Wikipedia:Reference desk/Archives/Mathematics/2008 December 24

= December 24 =

Derivation details
How was the equation


 * $$\frac{d^2u}{d\theta^2} + u = -\frac{f(1 / u)}{h^2u^2}$$

from Orbit arrived at? I understand that all terms of the very first equation in that section were divided by $$r^2\left( \frac{d\theta}{dt} \right)^2$$, but why does $$d^2r/dt^2$$ yield $$d^2u/d \theta^2$$? --Bowlhover (talk) 00:13, 24 December 2008 (UTC)
 * See Kepler's law. Bo Jacoby (talk) 09:30, 24 December 2008 (UTC).
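 * In more detail, here is a sketch of the substitution, using the standard definitions $$u = 1/r$$ and $$h = r^2\,d\theta/dt$$ (constant for a central force) from the Orbit article:

```latex
% Chain rule with u = 1/r, rewriting dr/dt in terms of du/d\theta:
\frac{dr}{dt}
  = -\frac{1}{u^2}\,\frac{du}{d\theta}\,\frac{d\theta}{dt}
  = -\frac{1}{u^2}\,\frac{du}{d\theta}\,h u^2
  = -h\,\frac{du}{d\theta}
% Differentiating once more with respect to t:
\frac{d^2r}{dt^2}
  = -h\,\frac{d^2u}{d\theta^2}\,\frac{d\theta}{dt}
  = -h^2 u^2\,\frac{d^2u}{d\theta^2}
% So dividing the radial equation by r^2(d\theta/dt)^2 = h^2u^2
% turns the d^2r/dt^2 term into -d^2u/d\theta^2, which is where
% the d^2u/d\theta^2 in the quoted equation comes from.
```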

Möbius strip
Hi. I added more info to the article on Möbius strip. Please make sure it is correct. Also, I could not figure out an additional property of a strip with an odd number of half-twists when cut down the centre lengthwise: if the strip has been twisted once, the resulting strip will have four half-twists, or two full twists, and if that strip is cut in half again, it becomes two separate strips. Here's the problem, though. When I cut a strip with three half-twists in half lengthwise, it produced a single strip, but when I cut that strip in half again, it produced yet another single strip, indicating that the strip formed by cutting the original three-half-twist strip in half also had an odd number of half-twists (actually, it was supposed to have three half-twists plus an overhand knot when unravelled, which makes sense, and I wrote that in, but it also doesn't make sense)! So, did I do something wrong, or is there no rule on this, or does it alternate, or depend on the number of half-twists, or something else altogether? Thanks. ~ A H  1 (TCU) 01:04, 24 December 2008 (UTC)


 * Your edits appear to be original research. -hydnjo talk 02:31, 24 December 2008 (UTC)


 * I would not say it's original research, in that it is a well-known topic in popular mathematics (for instance GooglNo(moebius,scissors)>600,000 ). One could probably find a classic reference (Martin Gardner, for sure). As to your problem (how many strips result after a number of cuts), it is nothing more than asking what happens to the discrete set $$\{0,1,2,\dots,n\}$$ when you identify each $$k$$ with $$n-k$$ (and of course we may observe that cutting a strip in half twice is the same as cutting it in four). So paper strips and scissors make a nice stage setting, but the underlying fact is just very simple finite combinatorics. If you then ask how the resulting strips are linked together, that requires a bit of formalism from knot theory.--PMajer (talk) 08:16, 24 December 2008 (UTC)
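 * The identification argument above can be sketched in a few lines of code. In this hypothetical model (the function names and the gluing rule are my own rendering of the $$k \mapsto n-k$$ identification, not an established API), a strip with an odd number of half-twists, cut lengthwise into equal bands, glues band i at one end to band bands-1-i, while an even number of half-twists glues each band to itself; the number of loops produced is the number of cycles of that gluing map:

```python
def loops_after_cutting(half_twists, bands):
    """Count the closed loops obtained by cutting a strip with the given
    number of half-twists into `bands` equal lengthwise bands.

    Model (an assumption, following the {0,...,n} identification above):
    an odd number of half-twists glues band i to band bands-1-i,
    an even number glues band i to itself.
    """
    if half_twists % 2 == 0:
        glue = lambda i: i               # orientable: each band closes on itself
    else:
        glue = lambda i: bands - 1 - i   # Möbius-like: bands are swapped end to end
    seen, loops = set(), 0
    for i in range(bands):
        if i not in seen:
            loops += 1
            while i not in seen:         # follow the cycle of the gluing map
                seen.add(i)
                i = glue(i)
    return loops

# One centre cut (2 bands) of any odd-twisted strip gives a single loop;
# cutting that result down the middle again (4 bands of the original)
# gives two loops, so the second cut should have produced two strips.
print(loops_after_cutting(3, 2))  # 1
print(loops_after_cutting(3, 4))  # 2
print(loops_after_cutting(2, 2))  # 2
```

In this model, the single strip you got after the second cut would not be predicted; the combinatorics says cutting the once-cut three-half-twist strip again should separate it into two linked loops.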

Did you know that the Moebius strip is a quotient space? —Preceding unsigned comment added by Point-set topologist (talk • contribs) 12:00, 24 December 2008 (UTC)

Annuity
Season's Greetings! What is the formula for the accumulated amount and the present value of a savings-account annuity if I contribute more often than it compounds, and if I earn interest on the deposits made between compounding periods? Thanks in advance. --Mayfare (talk) 19:07, 24 December 2008 (UTC)


 * I recommend Stephen G. Kellison's book The Theory of Interest. Chapters 3 and 4 are especially relevant, but the whole book is enjoyable and helps put all of these different models into perspective.
 * For your specific question, I'll try to help here, but the book is probably better. I myself have been very surprised at the number of methods actually in use for calculating interest, and so would be nervous giving a specific formula lest I fail to communicate the subtleties correctly.  For instance, do you deposit regularly?  If so, when does the pay-out begin?  Does the lending agency know you will be depositing regularly, and if so, how will it calculate the partial-period interest (both simple and compound interest are commonly used within a single period)?  You may also be interested in what are called sinking funds, which are savings accounts where you deposit regularly and then expect a single pay-out at the end.  The amount of that single pay-out may in fact be exactly the "accumulated amount" you are looking for.  For the present value, I suspect you just want to discount the ending accumulated amount over whatever the time period is.
 * To be honest, though, I may have misunderstood the situation you are trying to describe. It seems strange to me that one would set up an annuity to pay out money while also paying in money.  I am assuming that the situation goes through two distinct stages: an initial stage where you slowly build up the fund (the sinking-fund stage), and then a final stage where the fund becomes an annuity that pays out regularly but is never increased except through interest on the remaining amount.  If this two-stage idea is what you are talking about, then you probably just need formulas for sinking funds during the first stage and for annuities during the second.  Of course, you'll need to know exactly how the savings institution plans to pay out interest for partial periods (and, probably very importantly, whether the interest rates are fixed or not; I would assume not during the building stage). JackSchmidt (talk) 22:09, 24 December 2008 (UTC)

I never knew that there are so many types of annuities. Sorry for the misunderstanding. For simplicity, assume a fixed-interest-rate sinking fund in which I deposit cash at regular intervals, both between and on compounding dates, and withdraw the entire investment as a lump sum. Thank you for recommending the book. I will read it once I get hold of it. --Mayfare (talk) 20:56, 25 December 2008 (UTC)


 * If the contribution between time t and t+dt is called x(t)dt, and the rate of interest is r(t,y), the present value y(t) satisfies the ordinary differential equation dy = (r y + x) dt. Bo Jacoby (talk) 10:01, 26 December 2008 (UTC).
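 * For a constant rate r and a constant contribution rate x, the equation dy = (ry + x)dt above has the closed form y(t) = (x/r)(e^(rt) − 1) when y(0) = 0, and a forward-Euler step makes a quick sanity check. This is only an illustrative sketch (daily steps standing in for "deposits more often than compounding"; all names are my own, not from any of the books mentioned):

```python
import math

def accumulated_value(rate, contribution, years, steps_per_year=365):
    """Integrate dy = (rate*y + contribution)*dt by forward Euler,
    starting from y(0) = 0 (continuous-deposit sinking-fund model)."""
    dt = 1.0 / steps_per_year
    y = 0.0
    for _ in range(int(years * steps_per_year)):
        y += (rate * y + contribution) * dt
    return y

rate, contribution, years = 0.05, 1200.0, 10   # e.g. $100/month at 5%/yr
numeric = accumulated_value(rate, contribution, years)
exact = contribution / rate * (math.exp(rate * years) - 1.0)
print(round(numeric), round(exact))  # agree to within a small fraction of a percent
```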

Stirling's approximation and asymptotics
I am studying a few particular positive integer valued sequences, and am trying to write down nice "asymptotic expressions" for them. I suspect understanding positive integer valued sequences is utterly impossible for the human mind, but I would expect to have gotten a little farther than I currently have.

I would like something written in a textbook form which describes the various approximations of a nice function like n! with some emphasis given to why the previous approximations have been improved by the new ones. Does anyone know of such a textbook?

Trying to construct such a thing has led me along this path:
 * (n-1)*log(2) < log(n!) < n*log(n) is the naive estimate
 * n*log(n) - n + log(n)/2 + O(1) < log(n!) < n*log(n) - n + log(n)/2 + O(1) is the DeMoivre result
 * n*log(n) - n + log(n)/2 + log(2π)/2 + o(1) < log(n!) < n*log(n) - n + log(n)/2 + log(2π)/2 + o(1) is the Stirling result
 * n*log(n) - n + log(n)/2 + log(2π)/2 + 1/(12n+1) < log(n!) < n*log(n) - n + log(n)/2 + log(2π)/2 + 1/(12n) is a more explicit Stirling result
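The explicit bounds in the last item (often attributed to Robbins) are easy to check numerically against log(n!), here via the log-gamma function; a quick sketch:

```python
import math

def stirling_core(n):
    # n*log(n) - n + log(n)/2 + log(2*pi)/2, shared by both bounds
    return n * math.log(n) - n + 0.5 * math.log(n) + 0.5 * math.log(2 * math.pi)

for n in (1, 2, 5, 10, 100, 1000):
    log_fact = math.lgamma(n + 1)          # log(n!)
    lower = stirling_core(n) + 1.0 / (12 * n + 1)
    upper = stirling_core(n) + 1.0 / (12 * n)
    assert lower < log_fact < upper
print("explicit Stirling bounds hold for the sampled n")
```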

Now up to here it seems ok: there is not much pattern to the madness, but it is not too hard to see an improvement at each stage. However, then Stirling's approximation takes two infinite journeys and suddenly nothing is clear any more. In particular, in the first case in many ways the more "precise" estimates are actually worse, and somehow the second sequence tries to fix this.
 * There is some sequence (the Stirling sequence) a_k such that n*log(n) - n + log(n)/2 + log(2π)/2 + Sum( a_k/n^k,k=1..K ) + o(1/n^K) < log(n!) < n*log(n) - n + log(n)/2 + log(2π)/2 + Sum( a_k/n^k,k=1..K ) + o(1/n^K), but the little oh term has some fairly pathological behavior that is only hinted at in an image in the article.
 * There is some other sequence of approximations that is not as pathological

So I am a little worried that there are two "right answers", and I tried to consider how unique an answer could be, and how to tell if an answer was any good. I spent a little time trying to figure out whether the complex logarithm is defined and/or analytic at infinity, and if so, whether using its Taylor expansion would allow me to remove the n*log(n) term and instead write the estimates as Laurent polynomials. Using just basic facts about the logarithm made it look somewhat hopeless, and so I wondered what sort of terms *should* be allowed. There is an article on asymptotic scales, but it doesn't exactly address the basic question of why they exist in human discourse.

It seems possible to me that asymptotics were just omitted from the famous "lies, damned lies, and asymptotics" quote, so that perhaps one just chooses an asymptotic scale to suit one's nefarious purposes, but I was hoping for something a little more definitive since I don't actually have any such nefarious purposes (or at least they were satisfied long ago). In other words, I have this function f(n), and I personally am happy with poly1(n) < log(f(n)) < poly2(n) where poly1 and poly2 are more or less unspecified, but I suspect people want something more precise, like poly1(n) + o(1) < log(f(n)) < poly1(n) + o(1). Maybe I can get to that point and maybe I can't. How do I tell how close I am, or whether my estimate is "better"? What if, as with Stirling's approximation, people want "better" ones; what is the shape of the next better one?

Is there some standard sequence of expressions that are the asymptotic approximations of a function? I've read through asymptotic expansion, but didn't get anything out of it. I would have liked it to be "Laurent polynomials of lower degree k as k goes to infinity", but I think log(n!) has no such approximation, since it needs that pesky n*log(n) term. What if it needed a 1/n^13*log(n) term, or a log(n)/(n^5*log(log(n))) term? This isn't completely outrageous, as one of my "other" estimates has log(n)/log(log(n))*n^3 in it, and I just have no idea how it compares to my polynomial bounds. Have I improved the bound, or just made the expression longer and harder to use?

At any rate, I basically just need some standard introduction to this stuff. I hope there is some mathematical answer, but if the answer is just a cultural tradition of choices of asymptotic scale, then I at least would like that tradition spelled out somewhat clearly in a citable source. JackSchmidt (talk) 21:55, 24 December 2008 (UTC)


 * Textbooks: I'm not a specialist, but if I understand your needs, Concrete Mathematics by Graham, Knuth & Patashnik may be a very interesting and enjoyable read, for it describes by example various techniques and tools; the recent Analytic Combinatorics by Flajolet & Sedgewick has a more foundational setting, and there is a whole part on deriving asymptotics via complex analysis (by the way, you can download this book for free from the authors' homepage until the end of this year, if I remember correctly). One powerful technique for deriving asymptotics for an integer sequence is working directly on its generating function: the idea is that you can get asymptotics for the coefficients of a power series from very little information about it, e.g. its functional or differential equations. If you think about it, even the radius of convergence may be seen as a first asymptotic estimate for the coefficients; one beautiful classical example is Schur's theorem in combinatorics.
 * As to the Stirling approximation, or more generally, asymptotic expansions, the idea is to get $$\scriptstyle f(x)\sim\sum_{k=0}^\infty c_k\phi_k(x)$$, meaning e.g. that for any n one has $$\scriptstyle f(x)=\sum_{k=0}^{n}c_k\phi_k(x)+o(\phi_n(x))$$ as $$\scriptstyle x\to\infty$$; therefore, for fixed n, the approximation gets better and better as x becomes larger. Nevertheless, for a fixed range of $$x$$ there is possibly an optimal n giving the best approximation there, and, as a matter of fact, the series may diverge as $$\scriptstyle n\to\infty$$ (this is indeed the case for the Stirling series), so larger n just give a worse approximation in that range. However, I would not call this phenomenon pathological; it is similar in nature to the non-convergence of the Taylor expansion of a $$\scriptstyle C^\infty$$ but non-analytic function. An asymptotic expansion at a finite point or at infinity just involves a notion of higher-order contact or tangency; convergence of the series is an additional fact that holds under additional hypotheses. So the choice of approximation really depends on the use we want to make of it. I have the impression I have not answered most of your questions, however. (I have in mind some other more specific texts, which also treat in general the problem of existence and uniqueness of the asymptotic expansion of a function with respect to a given asymptotic sequence $$\{\phi_k\}_k$$, but at the moment I remember none precisely :) ah yes, the ones quoted in the article, of course)--PMajer (talk) 11:56, 25 December 2008 (UTC)
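 * The optimal-truncation point mentioned above can be seen numerically: the terms B_2k/(2k(2k−1)n^(2k−1)) of the Stirling series for log(n!) shrink and then grow again. A sketch with the first few Bernoulli numbers hard-coded (the function name is my own):

```python
from fractions import Fraction

# Bernoulli numbers B_2, B_4, ..., B_16
B = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30),
     Fraction(5, 66), Fraction(-691, 2730), Fraction(7, 6), Fraction(-3617, 510)]

def stirling_terms(n, count=8):
    """|B_2k / (2k(2k-1) n^(2k-1))| for k = 1..count."""
    return [abs(float(B[k - 1] / (2 * k * (2 * k - 1) * n ** (2 * k - 1))))
            for k in range(1, count + 1)]

terms = stirling_terms(1)
print(terms)
# For n = 1 the term sizes fall until k = 4 and then grow without bound,
# so truncating there gives the best accuracy available in that range.
smallest = min(range(len(terms)), key=terms.__getitem__)
print("optimal truncation at k =", smallest + 1)  # prints k = 4
```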

See also here Count Iblis (talk) 00:39, 29 December 2008 (UTC)


 * Thanks for the replies. They are definitely helpful.
 * Unfortunately, I think they do not answer my basic question of "how does one compare two asymptotic approximations?" In particular, I don't really need help deriving asymptotics for a sequence whose terms are known, but rather in judging various asymptotics for a sequence whose terms will never be known exactly by humans.
 * The book of Flajolet and Sedgewick is interesting, and somewhat helpful. I am not counting pretty things, and there will not be a generating function.  What I am doing, however, is finding pretty things that give over- or under-counts, and so bounding the true thing in between.  Being able to estimate some of the pretty things more quickly will be helpful, though I have not had any trouble so far just looking them up.
 * Which references discuss existence and uniqueness of asymptotic expansions? There are lots of wikipedia articles and lots of references in them, virtually none of which were annotated or used for relevant inline citations.  I would be happy to be able to answer clearly the following two questions:
 * Is there exactly one sequence of real numbers c_k (k>0) such that for every positive integer K, there are two real numbers d_K,e_K > 0 such that for all n
 * d_K/n^(K+1) < log(n!) - (n*log(n) - n + log(n)/2 + log(2π)/2 + sum( c_k/n^k, k=1..K)) < e_K/n^(K+1)
 * Is there exactly one sequence of real numbers c_k (k>-2) such that for every integer K > -2, there are two real numbers d_K,e_K > 0 such that for all n
 * d_K/n^(K+1) < log(n!) - n*log(n) - sum( c_k/n^k, k=-1..K) < e_K/n^(K+1)
 * The hoped-for answers are yes for the first and no for the second.
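 * For what it's worth, the first coefficient in the first question can be recovered numerically: with the residual r(n) = log(n!) − (n log n − n + log(n)/2 + log(2π)/2), the product n·r(n) tends to 1/12. A quick sketch (names are my own):

```python
import math

def residual(n):
    """log(n!) minus the Stirling approximation through the constant term."""
    approx = n * math.log(n) - n + 0.5 * math.log(n) + 0.5 * math.log(2 * math.pi)
    return math.lgamma(n + 1) - approx

for n in (10, 100, 1000):
    print(n, n * residual(n))   # approaches 1/12 = 0.08333...
```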
 * The article of Boyd had my hopes up because it asks many of the same questions I ask, in much the same language, but unfortunately doesn't exactly answer them. It did, however, explain some fairly reasonable situations where my Laurent series (his power series) could not possibly answer reasonable questions.  It wasn't clear to me whether this was suggesting there cannot be one true asymptotic scale, or that no asymptotic scale would work.  I think it does partially answer one of the bolded questions: power series are definitely the standard asymptotic scale in engineering and physics.  Its heuristic treatment of error terms for the "optimally truncated" series was also helpful in explaining why people might find infinitely many approximations to the same function useful (I would summarize it as "so we can accelerate convergence"), though it did not particularly explain the more fundamental question of why they might find two approximations useful.
 * Thanks again to both for the references. The Boyd article was an enjoyable read, and I am still having a good time reading the F and S book, mostly from the standpoint of learning basic combinatorics.  Thanks in advance for existence/uniqueness/comparison references. JackSchmidt (talk) 03:14, 29 December 2008 (UTC)


 * As to the uniqueness issue, I would say yes. Suppose (say, in the first question) you had another such sequence $$c'_k$$, together with its $$d'_K$$ and $$e'_K$$; consider the least number $$\textstyle K$$ such that $$\textstyle c_K\neq c'_K$$. Subtracting, you get:
 * $$\textstyle \frac{d_K-e'_K}{n^{K+1}} < \frac{c'_K-c_K}{n^K}< \frac{e_K-d'_K}{n^{K+1}}$$,
 * which cannot be true for all $$\scriptstyle n\in\mathbb{N}\ $$ if $$\scriptstyle c_K-c'_K\neq0$$. Notice that this is in fact the argument for uniqueness of the asymptotic expansion with respect to any given $$\{\phi_k\}_k$$: by linearity everything reduces to uniqueness of the expansion of 0, and one just looks at the first nonzero coefficient, if any. I hope this is what you needed. Two more trivial remarks: in general, even if for a fixed asymptotic sequence $$\{\phi_k\}_k$$ you do have uniqueness of the coefficients $$c_k$$, there are possible variations of the sequence $$\phi_k$$ that may lose uniqueness in a wider context (but as an example of non-uniqueness of an asymptotic expansion that would be cheating). Also: it is true that power sequences are the most common asymptotic scales; in fact, very often one has forms like $$\phi_k(x)=\phi_0(x)x^k$$, and the asymptotic expansion of f is then written by dividing everything by the common factor $$\phi_0$$, thus taking the form of an expansion of $$\scriptstyle f/\phi_0$$ in plain powers. --PMajer (talk) 08:06, 29 December 2008 (UTC)