Wikipedia:Reference desk/Archives/Mathematics/2007 June 20

= June 20 =

Determine a Period for irregular time series data
I have time series data (taken at irregular intervals) - Julian Date and Object Brightness. The object's brightness varies over time.

I am now trying to get a Power Spectrum that, for a series of test Periods, provides me with the 'strength of fit' of each of those periods to the data.

All the papers I have read refer to the use of a Fourier series of the form: m(t) = C0 + Σ (aj*Cos(2*Pi*j*(t-E)/P) + bj*Sin(2*Pi*j*(t-E)/P)) {j=1 to J}

My math skills are basic, i.e. I can find A and B in X = A + B*Y where X and Y are supplied. I fear I may be out of my depth. Note that I have Mathematica 6 (if that helps).


 * Do you have reason to assume that the fluctuations of the brightness are periodic in time with a mixture of constant periods? Also, how dense are the data? If you plot the data you have, does it look like you could draw a smoothly sinuous curve through the points that does not require too much fantasy to "connect the dots"? In other words, if you left out some of the points and drew the curve, would it pass near the points left out? --Lambiam Talk  07:37, 20 June 2007 (UTC)
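For irregularly sampled brightness data, the standard tool for exactly this "strength of fit per trial period" question is the Lomb-Scargle periodogram, which least-squares-fits a sinusoid at each trial frequency. Below is a minimal pure-Python sketch; the function name and the synthetic 2.5-day test signal are illustrative, not from the discussion above. Mathematica 6 users could translate it directly.

```python
import math
import random

def lomb_scargle(t, y, periods):
    """Classic Lomb-Scargle power for each trial period, for
    irregularly sampled data (t, y)."""
    ybar = sum(y) / len(y)
    dy = [yi - ybar for yi in y]
    powers = []
    for period in periods:
        w = 2 * math.pi / period
        # Phase offset tau makes the result invariant to time shifts.
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        cterm = sum(d * c for d, c in zip(dy, cs)) ** 2 / sum(c * c for c in cs)
        sterm = sum(d * s for d, s in zip(dy, sn)) ** 2 / sum(s * s for s in sn)
        powers.append(0.5 * (cterm + sterm))
    return powers

# Synthetic test: a sinusoid with a 2.5-day period, sampled at
# 200 irregular epochs over 50 days.
random.seed(1)
t = sorted(random.uniform(0, 50) for _ in range(200))
y = [math.sin(2 * math.pi * ti / 2.5) for ti in t]
periods = [1.0 + 0.01 * k for k in range(901)]  # trial periods: 1 to 10 days
power = lomb_scargle(t, y, periods)
best = periods[power.index(max(power))]         # expect a peak near 2.5
```

The peak of the returned power array marks the best-fitting trial period.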

Just curious
Why is the Fibonacci sequence always written (except on Wikipedia) as starting with {1,1,2,3,...} instead of {0,1,1,2,...} ? After all, zero plus one is one. -Haikon 05:06, 20 June 2007 (UTC)


 * Because it's hard to draw a picture of zero of something. Because zero (as distinct from "not") is a rather advanced concept.  Because if we started with zero the layman would think "So I start by adding zero – where does that get me?".  Pick one ... &mdash;Tamfang 05:24, 20 June 2007 (UTC)


 * I've seen it presented both ways, and have always just taken it as a convention. If starting with 1 is more common, it could be caused in part by the fact that all of the examples under "Origins" in the Wikipedia article naturally start with 1. 209.189.245.114 06:19, 20 June 2007 (UTC)


 * The Fibonacci numbers have many remarkable properties, such as that F(5n) is a multiple of 5. Mathematicians have a strong tendency to start numbering the elements of a sequence at 1, as in a1, a2, ... . If you do that with the Fibonacci sequence starting at 0 (0, 1, ...), the expression of many of these properties becomes less pretty. Of course, the issue is resolved by counting like a0, a1, ..., as some would argue should be done all the time. For several sequences, including the Fibonacci sequence, you can actually extend the indexing to Z: F(−n) = (−1)^(n+1)·F(n), giving a two-sided infinite sequence ..., 2, −1, 1, 0, 1, 1, 2, ... . --Lambiam Talk  07:18, 20 June 2007 (UTC)
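Lambiam's two-sided extension fits in a few lines of code; the following is an illustrative Python snippet using the identity F(−n) = (−1)^(n+1)·F(n) from the comment above:

```python
def fib(n):
    """Fibonacci number for any integer index n, using the two-sided
    extension F(-n) = (-1)**(n+1) * F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

two_sided = [fib(k) for k in range(-4, 5)]
# -> [-3, 2, -1, 1, 0, 1, 1, 2, 3]
```

One can check that the defining recurrence F(n) = F(n−1) + F(n−2) still holds at every integer index, negative ones included.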


 * It is not; see, for example, Graham, Knuth, & Patashnik, Concrete Mathematics (ISBN 978-0-201-14236-5). But if we are counting rabbit mates as they reproduce (per Fibonacci himself), it makes sense to start with one pair. --KSmrqT 08:16, 20 June 2007 (UTC)


 * That is only the case when you assume that rabbits reproduce only starting from the second month. But if you take easter bunnies, which lay an egg every month starting from the first month and the eggs hatch after a month, then it makes sense to start with zero bunnies (and an egg).  :)  &#x2013; b_jonas 13:53, 21 June 2007 (UTC)


 * Our article defines F(0)=0, which is consistent with MathWorld and OEIS (to pick just two sources). Whether you start a list of Fibonacci numbers with F(0) or with F(1) is a minor point. Gandalf61 13:13, 20 June 2007 (UTC)


 * And a Google search gives lots of other sources including 0. PrimeHunter 13:26, 20 June 2007 (UTC)


 * I see. Thank you reference desk people. -Haikon 17:09, 20 June 2007 (UTC)

choosing a projection hyperplane
I have a set of 306 points in 11 dimensions. I want to display them in 2 or 3 dimensions without collisions. One way to do that, it seems to me, is to enumerate the 46665 lines joining the given points, and find the 8- or 9-dimensional hyperplane that maximizes its minimum angle with any of those lines; the projection space, then, is the 3- or 2-dimensional hyperplane perpendicular to that one. How would I go about finding the desired hyperplane? &mdash;Tamfang 05:36, 20 June 2007 (UTC)


 * Maybe there will be better suggestions, but unless you have excessive computing power available, finding the optimal solution appears to me not quite feasible. What I'd try is finding an acceptably good solution, as follows:
 * Pick many random directions in 11-D space (many = 10000 or 100000 or more, depending on factors like available time).
 * For each direction picked, compute the projections of the points to the lower-dimensional space.
 * Find the pair of nearest neighbours – using methods from computational geometry that are much faster than O(n^2), such as quadtrees.
 * Keep the B best (largest minimum separation), where B is some number like 100 or 200.
 * Finally, use gradient ascent, simulated annealing, or other local optimization methods to optimize each of the B retained directions, and take the best result. Possibly you can also take a linear local approximation of the objective function and use linear programming, but I haven't given that idea any thought, and even if it works it may be overkill.
 * The "coarse" phase 1 (many random directions) and the "refinement" phase 2 (local optimization) are separated here for clarity. However, since the probability that a new random direction beats any of the current B best diminishes quickly, you can in fact refine each addition to (or replacement of) the B best immediately without too much additional cost, thereby having a best result available at all times, which may make it easier to decide when to stop. --Lambiam Talk  08:07, 20 June 2007 (UTC)
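The coarse phase above can be sketched in a few dozen lines of Python. Projection directions are drawn as Gaussian vectors and orthonormalised by Gram-Schmidt (which is also one way to pick mutually perpendicular vectors in n-space), and the minimum pairwise separation is found by brute force rather than the faster computational-geometry methods mentioned. All function names and the toy 30-point data set are illustrative.

```python
import math
import random

def random_orthonormal_basis(dim, k):
    """k mutually perpendicular unit vectors in dim-space: draw
    Gaussian vectors (isotropic directions) and Gram-Schmidt them."""
    basis = []
    while len(basis) < k:
        v = [random.gauss(0, 1) for _ in range(dim)]
        for b in basis:  # subtract the projection onto each earlier vector
            d = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - d * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        if norm > 1e-9:  # discard (vanishingly rare) degenerate draws
            basis.append([vi / norm for vi in v])
    return basis

def min_separation(pts):
    """Smallest pairwise distance, brute force O(n^2)."""
    return min(math.dist(pts[i], pts[j])
               for i in range(len(pts)) for j in range(i + 1, len(pts)))

def best_projection(points, k=2, tries=100):
    """Coarse phase: keep the random k-D projection with the largest
    minimum separation between the projected points."""
    best_basis, best_sep = None, -1.0
    for _ in range(tries):
        basis = random_orthonormal_basis(len(points[0]), k)
        proj = [[sum(pi * bi for pi, bi in zip(p, b)) for b in basis]
                for p in points]
        sep = min_separation(proj)
        if sep > best_sep:
            best_basis, best_sep = basis, sep
    return best_basis, best_sep

# Toy data: 30 random points in 11 dimensions.
random.seed(0)
pts = [[random.uniform(-1, 1) for _ in range(11)] for _ in range(30)]
basis, sep = best_projection(pts, k=2, tries=100)
```

The retained bases would then be handed to the local-optimization refinement phase.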


 * Thanks. First question: how do I pick a multidimensional "direction"?  I know how to get three mutually perpendicular vectors in 3-space, but not in n-space!  &mdash;Tamfang 20:23, 20 June 2007 (UTC)
 * Take the values of n independent normally distributed random variables with mean 0 and variance 1. The resulting n-vector has an isotropically random direction. --Lambiam Talk  22:49, 20 June 2007 (UTC)
 * Independently random vectors won't do: I need a three-dimensional hyperplane, generated by three mutually perpendicular vectors. Oh, I guess I see now, I can choose a vector in 11-space and then choose another from the orthogonal 10-space ... &mdash;Tamfang 01:29, 21 June 2007 (UTC)


 * (A seedy-looking character mumbles something incomprehensible about the possible utility of principal components analysis, then shuffles off. You are in a maze of twisty little dimensions, all different.) --KSmrqT 16:38, 20 June 2007 (UTC)


 * Principal components analysis is what I tried first, and got 11 eigenvectors of roughly the same magnitude – all 11 dimensions are necessary. Now, accepting that any projection is going to lose most of the shape information, I'm looking instead for one that keeps it neat, i.e. a projection in which the images of the vertices don't collide.  &mdash;Tamfang 20:23, 20 June 2007 (UTC)

On yet another hand, I might simply place my objects in a 2D array and "anneal" (once I learn in more detail what that means) with the original proximity measure as the function to be optimized. &mdash;Tamfang 20:23, 20 June 2007 (UTC)


 * It might help us in giving advice if we understood what purpose this should serve. Geometric properties that are preserved by projection are that points that are very close remain close, that angles that are small remain small, and that collinear points remain collinear. However, that can hardly be why you want a projection: points and lines that were far apart or not collinear may now become so. If tidiness is the goal, can't you take just any projection and then move points that come too close a bit apart? --Lambiam Talk  22:49, 20 June 2007 (UTC)


 * Well .. you've seen drawings of hypercubes, right? A well-chosen projection has the properties I'm after.  I'd like it to be tidy and have the metric properties of a faithful projection.  Thanks for asking, though; the question does bring me down to reality a bit.  &mdash;Tamfang 01:29, 21 June 2007 (UTC)
 * In drawing a hypercube – or a 2D projection of a normal cube, for that matter – you want more than separation of the projected vertices. You also want vertices to remain separated from edges they are not incident with in the original. --Lambiam Talk  15:08, 21 June 2007 (UTC)

Polyhedral angles
If 3 plane faces meet at the vertex of a polyhedron, their angles at the vertex being alpha, beta and gamma, is there a standard relation between these angles? I'm thinking of something like their cosines having a constant product. What happens with 4 or more angles at a vertex?

This arose when I was considering the possibility of 4 triangles, with suitably matching sides, hinging together to form a tetrahedron. Considering one vertex, if one of the triangles is taken to be the base, its angle there must be less than the sum of the angles of the other two (with the same being true no matter which is taken to be the base, i.e. there is an analogy with the side lengths of a feasible triangle). My particular problem: if the base triangle has angle gamma at the vertex and the others have angles alpha and beta, what is the angle to the base of the ridge line between the other triangles?86.132.236.13 10:36, 20 June 2007 (UTC)
 * Relevant links include Defect (geometry). nadav (talk) 10:54, 20 June 2007 (UTC)


 * I'm not sure I'm correctly understanding the question. Between the angles α, β and γ, the only relation that's necessary for an arbitrary convex polyhedron is that α+β+γ<360 degrees. You can see this by cutting out some triangles from construction paper and pasting them together. Donald Hosek 16:44, 20 June 2007 (UTC)


 * I'm a little confused. Are these angles the angles between pairs of planes, or the angles between intersections of two planes on the third plane? In other words, is alpha equal to the angle between planes 1 and 2, or if we look at the polygon on plane 3, is alpha equal to the angle of its vertex at that point? There's a difference. In the first case, the only restriction I can think of is that alpha + beta + gamma < 540 degrees. In the second, as Donald Hosek says, it would be that alpha + beta + gamma < 360 degrees. Both of these say that the shape is too few degrees around at the vertex to lie in a plane, and must form a point. I'm afraid I don't understand the "base of the ridge line" question. Black Carrot 04:15, 21 June 2007 (UTC)


 * I'm the original poster, despite the changed number below. Regarding my first question, yes I now see that the only relation for the angles at a convex vertex is summing to less than 360 degrees. Regarding the second, let me try to be clearer. Three triangles meet at a polyhedral vertex, their angles there being α, β and γ (i.e. for each triangle, the angle between the two lines meeting at the vertex). At a convex vertex, it would seem to be necessary that each such angle is less than the sum of the others. Take the triangle with angle γ as the base, and define as the ridge the line common to the other triangles. What I want is the angle between the ridge and its perpendicular projection onto the base, in terms of α, β and γ.81.154.108.148 10:58, 21 June 2007 (UTC)


 * You're right, my mistake, each would indeed have to be less than the sum of the others, otherwise the figure would pull itself apart trying to wrap around, just like the triangle inequality. I think I have an answer now to your second question, but it's a bit involved. I'd prefer to tell you how I got it, as well as the result itself, because I think it's a safe bet you'll want more freedom of movement than angles-gives-angles. Here's the logic. Take the vertex in question to be facing you, the face with angle gamma to be the base, the face with angle alpha to be to the right, and the face with angle beta to be to the left. Then, I will call the unit vectors running along each edge A, B, and C. A is the vector along the base to the right, B is along the base to the left, and C is the ridge, with the vertex as the origin. Next, I work out the coordinates for each of these vectors. If you happen to know the coordinates of each vertex of the pyramid to begin with, you can skip this part, by just dividing each set of coordinates by the magnitude of the vector they describe. Make sure you've first set the vertex as <0,0,0>, with all the others shifting around to match. If you only have the angles, here's how you might work out the coordinates. Assume the X-axis is along A, making A = <1,0,0>. With both the base of the pyramid and the XY-plane flat against the table, we can find B. B, along with its components parallel to the X and Y axes, would form a right triangle, with gamma as one of the angles. Therefore, its coordinates would be <cos gamma, sin gamma, 0>. Now, C is trickier. First, I draw a circle in the air where it might be, then I narrow it down to a point. The easier circle to draw is around A. If C were directly above A, it would have coordinates <cos alpha, 0, sin alpha>, much like B. However, it can be anywhere along a circle from there, because the only restrictions for now are that it be angle alpha away from A, and it be of unit length. 
So, spinning it and introducing a dummy variable k, the temporary coordinates for C are
 * $$ \left \langle \cos a, k , \sqrt{ \sin ^2 a - k^2 } \right \rangle . $$
 * If you take the norm of this, you'll see it's guaranteed to have length 1. As the X value never changes, it's confined to a plane parallel to the YZ plane, and within that the radius of the circle will be sqrt( k^2 + (sqrt(sin^2 a - k^2))^2 ) = sin alpha. You could put +/- before each square root, but it seems reasonable to assume you want the solution above the table, not below it, meaning we've actually got a half-circle. Now, using the formula for dot product, for any vectors a and b, and the angle between them theta, a dot b = abs(a)abs(b)cos(theta). The dot product of B and C, then, should be equal to (1)(1)cos(beta), or just cos(beta). The dot product is the sum of the products of each of their respective components, so cos(alpha)cos(gamma) + ksin(gamma) + 0 = cos(beta). Solving for k to narrow the semicircle to a point, we get
 * $$ k = \frac{\cos b - \cos a \cos g}{\sin g} . $$
 * Sorry, I'm not sure how to put greek letters in formulas. Now, we can substitute that into the coordinates for C.
 * $$ \left \langle \cos a, \frac{\cos b - \cos a \cos g}{\sin g} , \sqrt{ \sin ^2 a - \left ( \frac{\cos b - \cos a \cos g}{\sin g} \right ) ^2 } \right \rangle . $$
 * Now that you can get A, B, and C, you can work out the angle between C and the plane containing A and B (the table) fairly easily. First, you take the cross product of A and B, to get their normal vector. Or, since we've set this particular one up along the axes, you could just take the vertical unit vector <0,0,1>. Then you use the formula for dot product again. Since a dot b = abs(a)abs(b)cos(theta), theta = arccos(a dot b / abs(a)abs(b)). To find the angle between C and the normal, take their dot product, divide by the product of their magnitudes, and take the arccosine of that. In this case, it would simplify to taking the arccosine of that really complicated term in C. Subtract the result from 90 degrees to get the angle between C and the table.
 * It's worth pointing out, by the way, that if I've made no mistakes, this should give you the shortest angle between the ridge and the plane containing gamma, regardless of whether that falls inside the pyramid. You may have to make adjustments if that's not what you want, for instance if the angles of the pyramid are obtuse and you want the internal angle. Black Carrot 16:58, 22 June 2007 (UTC)
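Black Carrot's construction can be packaged as a short function; the following is an illustrative Python sketch. A handy sanity check is the regular tetrahedron, where all face angles are 60 degrees and the edge-to-face angle is known to be arctan(sqrt(2)), about 54.7356 degrees.

```python
import math

def ridge_angle(alpha, beta, gamma):
    """Angle (radians) between the ridge edge C and the base plane,
    given the three face angles at the vertex (gamma on the base face),
    following the coordinate construction above."""
    k = (math.cos(beta) - math.cos(alpha) * math.cos(gamma)) / math.sin(gamma)
    cz = math.sqrt(math.sin(alpha) ** 2 - k ** 2)  # z-component of unit vector C
    # The angle to the base plane is 90 degrees minus the angle to the
    # normal <0,0,1>; since |C| = 1, that is just arcsin of C's z-component.
    return math.asin(cz)

# Sanity check: regular tetrahedron, all face angles 60 degrees.
a = math.radians(60)
deg = math.degrees(ridge_angle(a, a, a))  # -> ~54.7356
```

Plugging in other angle triples that satisfy the each-less-than-the-sum-of-the-others condition gives the corresponding ridge-to-base angles.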


 * Thanks for this - I'm just going away for a few days but will have a careful read through on returning.81.153.218.145 19:52, 22 June 2007 (UTC)

More complex analysis fun
Hi all, thanks for the help you've given me so far. I have another exercise I can't seem to figure out. (Again, this is not homework, this is preparation for a PhD qualifying examination.)
 * Let P(z) be a polynomial with real coefficients, say $$P(z) = a_nz^n + \cdots + a_1z + a_0$$, satisfying $$a_0 > a_1 > a_2 > \cdots > a_n > 0$$. Prove that all the zeros $$\zeta$$ of P(z) satisfy $$|\zeta| > 1$$.

At first, I thought it could be done by induction, since the base case n = 1 is so simple; but I can't even prove it for n = 2! Am I missing something easy here, or what? Any help would be appreciated. Thanks! –King Bee (&tau; • &gamma;) 12:30, 20 June 2007 (UTC)


 * I haven't found a solution, but here are some ideas: The statement is clearly correct for the real roots; the imaginary roots come in conjugate pairs. This allows you to write $$P(z) = (z-z_0)(z-\overline{z_0})Q(z)$$, which immediately solves the case $$n=2$$ and possibly simplifies other cases. Rouché's theorem might also be of use. -- Meni Rosenfeld (talk) 13:39, 20 June 2007 (UTC)


 * Meni - Thanks for the idea! However, I think I have a nice, elegant, clever solution, but I can't get the strict inequality to quite work out. (Not my idea; fellow graduate student pointed me in the right direction.)


 * Note that $$zp(z) = a_nz^{n+1} + \cdots + a_1z^2 + a_0z$$, and look at what $$zp(z) - p(z)$$ is; we obtain
 * $$(z-1)p(z) = a_nz^{n+1} + (a_{n-1} - a_n)z^n + (a_{n-2} - a_{n-1})z^{n-1} + \cdots + (a_0 - a_1)z - a_0$$.
 * Suppose by way of contradiction that there exists a zero of p(z) in the closed unit disk, say &alpha;. Then $$|\alpha| \leq 1$$. Substituting this in to the expression above, we get:
 * $$0 = a_n\alpha^{n+1} + (a_{n-1} - a_n)\alpha^n + (a_{n-2} - a_{n-1})\alpha^{n-1} + \cdots + (a_0 - a_1)\alpha - a_0$$.
 * Adding $$a_0$$ to both sides and then taking modulus (and using the triangle inequality and the fact that $$|\alpha| \leq 1$$), we obtain:
 * $$a_0 \leq |a_n| + |a_{n-1} - a_n| + |a_{n-2} - a_{n-1}| + \cdots + |a_0 - a_1|$$,
 * and since all of the differences above are already real and positive, we can remove the absolute value signs and we get a telescoping sum which sums to $$a_0$$. However, this leads me to $$a_0 \leq a_0$$, which is not a contradiction; should some inequality above be strict? I guess I can use this method to show that there aren't any zeros of p(z) inside the open unit disk, but then I would have to deal with the boundary case separately. Any help on that one? –King Bee (&tau; • &gamma;) 16:12, 20 June 2007 (UTC)
 * This is quite simple, actually. My hint is: The triangle inequality is strict unless all the summands are on the same ray from the origin. -- Meni Rosenfeld (talk) 16:38, 20 June 2007 (UTC)
 * *slaps forehead* Excellent! I am a fool, thanks for that tidbit. =) –King Bee (&tau; • &gamma;) 16:47, 20 June 2007 (UTC)
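The result being proved here is the Eneström-Kakeya theorem. For anyone who wants a numeric spot-check, here is an illustrative Python sketch using a hand-rolled Durand-Kerner root finder (a simple simultaneous iteration with no convergence guarantees, but adequate for a small example):

```python
def horner(coeffs, z):
    """Evaluate a polynomial (coefficients listed high to low) at z."""
    r = 0
    for c in coeffs:
        r = r * z + c
    return r

def durand_kerner(coeffs, iters=200):
    """All complex roots, via the Durand-Kerner simultaneous iteration."""
    p = [c / coeffs[0] for c in coeffs]  # normalise to a monic polynomial
    n = len(p) - 1
    zs = [(0.4 + 0.9j) ** i for i in range(n)]  # customary starting guesses
    for _ in range(iters):
        new = []
        for i in range(n):
            den = 1.0
            for j in range(n):
                if j != i:
                    den *= zs[i] - zs[j]
            new.append(zs[i] - horner(p, zs[i]) / den)
        zs = new
    return zs

# P(z) = z^3 + 2z^2 + 3z + 4 has coefficients 4 > 3 > 2 > 1 > 0, so by
# the claim just proved, every zero should satisfy |z| > 1.
roots = durand_kerner([1, 2, 3, 4])
min_mod = min(abs(z) for z in roots)  # expect > 1
```

The same check can be repeated for any coefficient list satisfying the strict decrease condition.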

The theoretical definition of decimal expansion
Hello.

I can't prove the following statement from my book (Principles of Mathematical Analysis - Walter Rudin).

Consider the following definition of decimals. We are assuming that we have the Dedekind definition of the reals.

Let x>0 be real. Let n_0 be the largest integer such that n_0 <= x. Having chosen n_0, n_1, ..., n_(k-1), let n_k be the largest integer such that n_0 + (n_1/10) + ... + (n_k/10^k) <= x.

Let E be the set of these numbers n_0 + (n_1/10) + ... + (n_k/10^k) (k=0,1,2,...)

Then x = sup E. The decimal expansion of x is n_0.n_1n_2n_3...

My problem is that I am not able to show that x = sup E. The difficulty is seeing why a number of the form x - e (e>0) can't be the supremum.

Suppose x-e is the supremum. By way of contradiction we must now have an element of E strictly greater than x-e. But elements of E grow by progressively smaller increments, so it's not clear how to produce such an element. For example, suppose that y = n_0 + (n_1/10) + ... + (n_k/10^k) for some k is some element of E. By hypothesis, since x-e is the supremum, y <= x-e. Taking the worst case y < x-e, we need to produce an element of E greater than x-e. For this we need to consider elements like y + (n_(k+1)/10^(k+1)), y + (n_(k+1)/10^(k+1)) + (n_(k+2)/10^(k+2)), etc. But as terms like (n_(k+1)/10^(k+1)) are very small, there is no guarantee that their addition to y will be enough to surpass x-e. So one can't claim from the above procedure that x-e is not the supremum. I have a gut feeling that there are some more things required in the proof of which I am not aware. Can anyone shed some light on them? Thanks. --Shahab 14:18, 20 June 2007 (UTC)


 * Tidying the notation up a bit will help. We have:
 * $$S_k = \sum_{i=0}^k\frac{n_i}{10^i}$$
 * By definition of $$n_k$$,
 * $$S_k \le x$$
 * $$S_k + 10^{-k} > x\;\!$$
 * $$E = \{S_k|k \ge 0\}$$
 * If $$e > 0\;\!$$ and $$x-e\;\!$$ is an upper bound for $$E\;\!$$, then there is some $$k\;\!$$ such that $$10^{-k} < e\;\!$$. Can you see how to proceed from here? -- Meni Rosenfeld (talk) 14:56, 20 June 2007 (UTC)


 * A further remark: the last step above would tend to use the so-called Archimedean property of the real numbers. However, to prove that is a bit similar to what you're asked to prove here, so it is not clear you are allowed to use it. Your introduction of e is a bit of a red herring. Instead of x−e, consider any u less than x, and examine why u can't be the supremum. Since u < x, you know that x and u have different Dedekind cuts. It means that there exists some rational number by which you can tell them apart. What do you know about that number? You should from this point not use anything but properties of integers and rational numbers. Does that help?  --Lambiam Talk  15:16, 20 June 2007 (UTC)
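The greedy digit construction and the two inequalities in Meni's reply can be checked mechanically with exact rational arithmetic; an illustrative Python sketch (names are mine, not from the discussion):

```python
from fractions import Fraction

def decimal_digits(x, k):
    """Greedy digits n_0, ..., n_k of x > 0: each n_i is the largest
    integer keeping the partial sum S_i <= x. Returns (digits, S_k)."""
    digits, s = [], Fraction(0)
    for i in range(k + 1):
        step = Fraction(1, 10 ** i)
        n = (x - s) // step  # largest integer n with s + n*step <= x
        digits.append(int(n))
        s += n * step
    return digits, s

x = Fraction(1, 3)
digits, s = decimal_digits(x, 5)
# digits -> [0, 3, 3, 3, 3, 3], and S_5 <= x < S_5 + 10**-5 as required
```

Exact Fractions are used rather than floats so that the defining inequalities S_k <= x < S_k + 10^-k hold exactly.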

Complex Analysis
I can't figure out how to do this question:

If f is analytic and $$|f(z)||f'(z)|\le 1$$ for each z then show that f must be constant. Thanks--Shahab 14:27, 20 June 2007 (UTC)


 * Hint: Consider $$g(z)=f(z)^2\;\!$$. -- Meni Rosenfeld (talk) 14:58, 20 June 2007 (UTC)

Thanks for the response. But I don't understand. As far as I can see, g' = 2ff', so g' is bounded and hence constant (by Liouville's theorem, since g' is also entire). But what then? Can you be more explicit?--Shahab 17:40, 20 June 2007 (UTC)


 * Maybe this will help: Since $$g'$$ is constant, that means that $$g$$ is at best a polynomial of degree 1. But $$g$$ was defined above as the square of a different analytic function...–King Bee (&tau; • &gamma;) 17:51, 20 June 2007 (UTC)


 * Still the fog persists in my mind. Is there any result that g' constant implies g is a polynomial of degree at most 1? Intuitively it is clear, but how to prove it? Again, if g(z) = az+b = f(z)^2, where is the contradiction? I am afraid that I must trouble you folks again.--Shahab 18:17, 20 June 2007 (UTC)


 * Yes, if $$g'$$ is constant, then $$g$$ is a polynomial of degree at most 1 (this you can cite freely; I doubt anyone would argue with you, but you can write up a proof if you wish). You have now that $$f^2$$ is either a linear function or a constant. Since $$f$$ is entire, which of these two is possible, and why? Just think about it for a while, and I think the light will come on eventually, and you'll be happy you did it yourself! –King Bee (&tau; • &gamma;) 18:25, 20 June 2007 (UTC)


 * Thanks for the response. I think I have got it now. From the Taylor expansion it follows that g' constant implies g is a polynomial of degree at most 1. Also $$f(z)=\sqrt{Az+B}$$ is not entire as the square root function isn't. So $$f(z)=\sqrt{C}$$, i.e. f is a constant. Cheers.--Shahab 18:57, 20 June 2007 (UTC)
 * You don't need Taylor for that. Think: g is an antiderivative of g'. If g'(z) = A, how can you find the general antiderivative? --Lambiam Talk  22:57, 20 June 2007 (UTC)


 * Note that there is some subtlety in that last part: Since every nonzero complex number has two square roots, and it is not clear which one is "better", it is best to avoid the notation $$\sqrt{}$$ unless you are completely sure you know what you are doing. "$$f(z) = \sqrt{Az+B}$$" is not a correct consequence of $$f(z)^2 = Az + B$$. The correct fact you should cite is that there is no entire function (or even a continuous one) whose square is the identity function, from which it is easy to show that there is no entire function whose square is $$Az+B$$ with $$A \neq 0$$, thus $$A=0$$. So $$f(z)^2$$ is constant - but this, by itself, still doesn't mean that f is. However, showing now that f is constant is not hard, using the fact that f is continuous. -- Meni Rosenfeld (talk) 19:27, 20 June 2007 (UTC)


 * Write $$f(z)=a+bz+z^2h(z)$$ for some series expansion h. Then


 * $$Az+B=g(z)=f^2(z)=a^2+2abz+b^2z^2+2az^2h(z)+2bz^3h(z)+z^4h^2(z)$$.


 * By uniqueness of series expansion, $$z^4h^2(z)=0$$, i.e. h=0. From $$Az+B=g(z)=f^2(z)=a^2+2abz+b^2z^2$$, we have $$b^2z^2=0$$, i.e. b=0. Therefore f=a is a constant. Is it acceptable? Twma 00:26, 21 June 2007 (UTC)

3 car purchases over a 3 year period - simple maths brain problem
My brain is not working very well maths-wise today and I'm trying to figure out something a little 'odd'. Basically, suppose I buy a car with a £7,000 loan at 0% APR (36 payments), own it for a year, sell it for £6,500, use the money to repay the loan, and take out a new £7,000 loan (same 0% APR, 36 payments) for another car; a year later I sell that one for £6,500, repay its loan, and take out a third £7,000 loan on the same terms. How much money would I have lost?

It is a 'scenario', so no running costs or bank charges for paying early, and I'm not bothered about loans being 'weighted' to pay more interest at the start. Sorry, my brain has packed in for the night due to poorliness/tiredness. What I want to know is how much money you would 'lose' over the ownership. My brain says £1,500 (£500 loss per car), but I suspect that's wrong and I'm not sure why at all - am I overcomplicating it? ny156uk 22:16, 20 June 2007 (UTC)


 * I think you only lost £1000 (£500 between the purchase of the 1st and 2nd cars and £500 between the 2nd and 3rd cars). Of course, you are then obligated to pay the £7000 loan on the third car, so have a total obligation of £8000. StuRat 04:53, 21 June 2007 (UTC)
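A quick tally of the cash flows under the stated assumptions (0% APR, 36 equal payments, sale and loan settlement after 12 months) agrees with this; an illustrative Python sketch:

```python
def cycle_cost(price=7000.0, resale=6500.0, term=36, months_kept=12):
    """Net cash cost of one buy-keep-a-year-sell cycle: borrow the full
    price at 0% APR, make equal monthly payments, then sell the car and
    settle the remaining loan balance from the proceeds."""
    monthly = price / term
    paid = monthly * months_kept        # installments made before selling
    balance = price - paid              # still owed when the car is sold
    cash_back = resale - balance        # left over after clearing the loan
    return paid - cash_back             # simplifies to price - resale

loss_on_sold_cars = 2 * cycle_cost()    # ~1000: £500 for each car sold
# The £7,000 loan on the third car is still outstanding, but you also
# still own that car, so it is an obligation rather than a realised loss.
```

At 0% APR the timing of payments washes out, so each cycle costs exactly the purchase-to-resale drop of £500.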