Wikipedia:Reference desk/Archives/Mathematics/2010 August 1

= August 1 =

General topology, ultrafilters and maximal ideals
Hi, I am not sure about these two definitions. 1. What is the difference between an ultrafilter and an ideal? 2. For example, if we take the power set of A={1,2,3} with the partial order of inclusion, what are the maximal ideals and what are the maximal filters? 3. Can a subset of P(A)=P({1,2,3}) be a maximal ideal without containing the empty set as an element? Thanks! —Preceding unsigned comment added by Topologia clalit (talk • contribs) 13:27, 1 August 2010 (UTC)
 * Ideals are closed downward, while filters are closed upward. Thus every ideal in P(A) contains the empty set, while no proper filter contains the empty set. An ultrafilter is a maximal proper filter. In P({1,2,3}), the maximal (proper) filters are the principal filters {{1},{1,2},{1,3},{1,2,3}}, {{2},{2,1},{2,3},{1,2,3}} and {{3},{3,1},{3,2},{3,1,2}} while the maximal (proper) ideals are the complements of the principal filters. Algebraist 13:50, 1 August 2010 (UTC)
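The finite case is small enough to check by brute force. Here is a quick sketch (mine, not from the thread) that enumerates every family of subsets of {1,2,3} and keeps the maximal proper filters; it recovers exactly the three principal ultrafilters listed above:

```python
from itertools import chain, combinations

A = frozenset({1, 2, 3})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(A), r) for r in range(len(A) + 1))]

def is_proper_filter(F):
    """Nonempty, upward closed, closed under intersection, excludes the empty set."""
    if not F or frozenset() in F:
        return False
    for x in F:
        for s in subsets:
            if x <= s and s not in F:
                return False        # not upward closed
        for y in F:
            if x & y not in F:
                return False        # not closed under intersection
    return True

# Enumerate all 2^8 = 256 families of subsets and keep the proper filters.
filters = []
for mask in range(1 << len(subsets)):
    F = frozenset(s for i, s in enumerate(subsets) if mask >> i & 1)
    if is_proper_filter(F):
        filters.append(F)

# Ultrafilters = maximal proper filters under inclusion.
ultrafilters = [F for F in filters if not any(F < G for G in filters)]
print(len(ultrafilters))            # 3: one principal ultrafilter per atom
```

Dually, the three maximal ideals are the complements of these three families in P(A), as Algebraist says.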

Mapping R3 onto itself
I'm just working on a question about basis transformations and the corresponding transformations of linear maps to agree with their basis and have come to a part that I don't really know where to begin; I think my problem is that I'm not a fan of thinking geometrically and this seems to need some geometrical thought to know what to do. I'm not asking for the answer, indeed I don't want the answer, just a helpful hint to set me on my way.

"The mapping $$\mathfrak{A}$$ of R3 onto itself is a reflection in the plane $$x_1 \sin \theta = x_2 \cos \theta$$. Find the matrix A of $$\mathfrak{A}$$ with respect to any basis of your choice, which should be specified."

I would offer you my working so far but unfortunately, there is none! Thanks. asyndeton  talk  13:42, 1 August 2010 (UTC)
 * First find the matrix of reflection about the plane $$z=0$$ in the standard basis. Then take advantage of their generous offer to use a basis of your choice, and figure out which basis will be easiest to work with. -- Meni Rosenfeld (talk) 15:36, 1 August 2010 (UTC)
 * Being as dim as I am, I am still somewhat confused, but anyway, I think this is what I'm looking for: $$\begin{bmatrix} \cos\theta & \sin\theta & 0\\ \sin\theta & -\cos\theta & 0\\ 0 & 0 & 1 \end{bmatrix}$$ Assuming that is, I don't understand the significance of using a basis of my choice; is there one particular basis that makes this (seemingly rather neat) matrix any neater? Why can I not just make life easier for myself and stick with the standard basis? asyndeton  talk  16:37, 1 August 2010 (UTC)
 * I'm afraid that matrix isn't right (with respect to the standard basis). Notice that $$(\cos\theta,\sin\theta,0)$$ is in the plane, so the map should take it to itself.  Unfortunately, it doesn't.  However, it does look like this matrix was made by following the usual means, but the final step was omitted.  So, this is almost there.
 * The reason choice of basis can make a difference is that the correct choice will make this as easy to write as the matrix for reflection in the xy-plane. —Preceding unsigned comment added by 203.97.79.114 (talk) 17:08, 1 August 2010 (UTC)


 * If you want to reflect in a given plane then you are asking that that plane be (pointwise) fixed. This means that the plane is an eigenspace of the reflection with eigenvalue 1. A vector perpendicular to this plane will be an eigenvector with eigenvalue −1, since you are reversing everything. So, take two linearly independent vectors in the plane in which you reflect, and a vector perpendicular to that plane, as your basis. The matrix will then be the diagonal matrix
 * $$\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array}\right] $$
 * This uses a simple fact that we can diagonalise a matrix if we take its eigenvectors as a basis. The diagonal entries will be the corresponding eigenvalues. (There are some technical considerations that mean it's not always so easy, but they don't cause any problems in this case. For example, you need to be able to decompose your space into the direct sum of eigenspaces. But that isn't a problem here.) — Fly by Night  ( talk )  17:41, 1 August 2010 (UTC)
 * I think I'm with you. I meant to ask you this about your answer to my previous question but it slipped my mind; what does 'pointwise' mean in this context? I also need to state the basis, so is all I need do to use the eigenvalues, i.e. the entries of the diagonal matrix you list, and then just apply Ax = tx, where t is the eigenvalue? asyndeton   talk  19:11, 1 August 2010 (UTC)
 * "Pointwise fixed" means every point in the set goes to itself. "Fixed as a set" means every point in the set goes to some point in the set, but not necessarily the same point. --Tango (talk) 19:16, 1 August 2010 (UTC)
 * I'd have used the word "invariant" for what you're calling "fixed as a set". Michael Hardy (talk) 19:44, 1 August 2010 (UTC)
 * So would I, but the word escaped me! I have seen "fixed (as a set)" used as a synonym for invariant, though. --Tango (talk) 19:55, 1 August 2010 (UTC)


 * Pointwise means that it fixes every point. The linear map ƒ(x,y) = (x, –y) fixes the line y = 0 pointwise, i.e. it fixes each and every point of that line. The linear map ƒ(x, y) = (2x, –y) fixes the line y = 0, but it doesn't fix all of the points. It moves the point (x,0) to the point (2x, 0). So it fixes the line but it doesn't fix the points on the line. As for the basis, well, you just need to choose two linearly independent vectors in the plane $$x_1 \sin \theta = x_2 \cos \theta$$, and one vector that's perpendicular to that. The vector (u, v, w) lies in the plane $$x_1 \sin \theta = x_2 \cos \theta$$ if and only if u.sin(θ) = v.cos(θ). The w-coordinate is free. So two linearly independent vectors would be (for example) a = (cos(θ), sin(θ), 0) and b = (cos(θ), sin(θ), 1). A perpendicular vector would be given by the cross product of a and b, namely a×b = (sin(θ), −cos(θ), 0). And this gives your final answer. The plane is spanned by a and b and is pointwise fixed by the reflection. The perpendicular is spanned by a×b and is pointwise reversed by the reflection. Your basis is {a, b, a×b} and the matrix of the reflection with respect to this basis is the one I gave above with 1, 1 and −1 along the diagonal. —  Fly by Night  ( talk )  20:23, 1 August 2010 (UTC)
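The construction above is easy to check numerically. A small sketch (mine, using NumPy; the angle 0.7 is an arbitrary choice): build the basis {a, b, a×b}, conjugate diag(1, 1, −1) back to the standard basis, and confirm the plane is fixed pointwise while the normal is reversed.

```python
import numpy as np

theta = 0.7                      # arbitrary; the construction is the same for any angle
a = np.array([np.cos(theta), np.sin(theta), 0.0])   # lies in the plane x1 sin(t) = x2 cos(t)
b = np.array([np.cos(theta), np.sin(theta), 1.0])   # also lies in the plane
n = np.cross(a, b)               # = (sin t, -cos t, 0), perpendicular to the plane

P = np.column_stack([a, b, n])   # columns are the chosen basis
D = np.diag([1.0, 1.0, -1.0])    # the reflection with respect to that basis
R = P @ D @ np.linalg.inv(P)     # the same reflection with respect to the standard basis

print(np.allclose(R @ a, a))     # True: the plane is fixed pointwise
print(np.allclose(R @ n, -n))    # True: the perpendicular is reversed
```

In the standard basis R comes out with cos 2θ and sin 2θ in the top-left block, i.e. the matrix proposed earlier was "almost there" with θ in place of 2θ.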

Ah, I see! Thank you, that was very clear and very helpful. asyndeton  talk  20:48, 1 August 2010 (UTC)

arithmetic-geometric means
Starting with 0 < a0 < b0, let

$$ \begin{align} a_{n+1} & = \sqrt{a_n b_n}, \\[12pt] b_{n+1} & = \tfrac{1}{2}(a_n + b_n), \end{align} $$

so that
 * $$ 0 < a_0 < a_1 < a_2 < \cdots < b_2 < b_1 < b_0 \, $$

and the common limit is the arithmetic-geometric mean. It is not hard to show how to define an, bn for negative indices n. For n = 1/2, if we picked any point between a0 and a1 and called it a1/2, and similarly for b, we'd get the right set of inequalities, but I'm not sure that would be consistent with a reasonable pair (ƒ, g) of functions that is a "square root" of the pair of operations used above, in the sense that
 * $$ \Big( f\big( f(a,b), g(a,b)\big),\ \  g\big( f(a,b), g(a,b)\big) \Big) = \left(\sqrt{ab},\  \tfrac12(a+b)\right). \, $$

Is there some reasonable way of defining an and bn for non-integer n that would have all the properties one would hope for? (Feel free to include arguments about which properties one should hope it would have.) Michael Hardy (talk) 18:01, 1 August 2010 (UTC)
 * PS: Just to be explicit, we would have
 * $$ a_{-1} = b_0 - \sqrt{b_0^2 - a_0^2},\qquad b_{-1} = b_0 + \sqrt{b_0^2 - a_0^2}. $$
 * Michael Hardy (talk) 18:10, 1 August 2010 (UTC)
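This is easy to confirm numerically, since one forward AGM step applied to (a−1, b−1) must return (a0, b0): the geometric mean of b0 ∓ √(b0² − a0²) is √(a0²) = a0, and the arithmetic mean is b0. A throwaway check (the starting values are just an example):

```python
import math

a0, b0 = 1.0, 5.0                      # any 0 < a0 < b0 works

# One step backwards, per the formulas above:
s = math.sqrt(b0**2 - a0**2)
a_m1, b_m1 = b0 - s, b0 + s

# One AGM step forward from (a_{-1}, b_{-1}) recovers (a0, b0):
# GM(b0-s, b0+s) = sqrt(b0^2 - s^2) = sqrt(a0^2) = a0, and AM = b0.
print(math.sqrt(a_m1 * b_m1), (a_m1 + b_m1) / 2)   # -> 1.0 5.0 (up to rounding)
```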
 * PPS: Actually my suspicion is that if we want to define the sequences for fractional indices that work only for one particular pair of starting values, we would have considerable freedom, but if we want them to work regardless of the pair of starting values, maybe there would be only one way. Michael Hardy (talk) 18:17, 1 August 2010 (UTC)
 * You need a function that slides smoothly from n=0 to n=1, giving the answers you require. At n=0: a0=sqrt(a0*a0), b0=0.5*(b0+b0). So how about a(n)=sqrt(a0*(b0*n+a0*(1-n))) and b(n)=0.5*(b0+(a0*n+b0*(1-n)))? You may wish to alter the sliding function used in a(n) to something more product-based rather than addition-based. When suitable functions are found, three applications of n=0.333 should presumably get you to one application of n=1. I've not tried these suggestions. -- SGBailey (talk) 20:53, 1 August 2010 (UTC)


 * It would have to be a strange function. The AGM converges quadratically so the square root function should converge with power about square root I think. The value can be determined approximately using that. Dmcq (talk) 22:19, 1 August 2010 (UTC)
 * Be precise and use correct English grammar. What do you mean by "converges quadratically", "should converge with power about square root", "the value can be determined approximately using that"? It really is hard to decipher what you are saying. PS  T  07:29, 2 August 2010 (UTC)

SGBailey: Your proposed function doesn't work at all. Not even close. I set
 * a0 <- 1; b0 <- 5.

Then I should obviously get
 * a1 = sqrt(5) = 2.236..., b1 = 3.

So then I applied your functions with n = 1/2:
 * ah <- sqrt(a0*(b0 + a0)/2)
 * bh <- (1/2)*(b0 + (a0 + b0)/2)

And then applied them again:
 * a1 <- sqrt(ah*(bh + ah)/2)
 * b1 <- (1/2)*(bh + (ah + bh)/2)

And I got
 * a1 = 2.228..., b1 = 3.433...

So you'd need to try again to get this. Michael Hardy (talk) 01:09, 2 August 2010 (UTC) ...and Dmcq, I am not at all optimistic about your answer making sense. Michael Hardy (talk) 01:10, 2 August 2010 (UTC)
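For anyone who wants to reproduce the check above, here it is as a short script (my transcription of it; the helper names are mine):

```python
import math

a0, b0 = 1.0, 5.0

# One true AGM step:
a1_true, b1_true = math.sqrt(a0 * b0), (a0 + b0) / 2    # 2.236..., 3

# SGBailey's proposed functions with n = 1/2, applied twice:
def a_half(a, b): return math.sqrt(a * (b * 0.5 + a * 0.5))
def b_half(a, b): return 0.5 * (b + (a * 0.5 + b * 0.5))

ah, bh = a_half(a0, b0), b_half(a0, b0)
a1, b1 = a_half(ah, bh), b_half(ah, bh)
print(a1, b1)          # 2.228..., 3.433... -- not (2.236..., 3)
```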

So putting it more tersely, for
 * $$ 0 < a < b, \, $$

the question is whether the mapping
 * $$ (a,b) \mapsto (\mathrm{GM}, \mathrm{AM}) \, $$

has a compositional nth root, where GM and AM are respectively the geometric and arithmetic means of a and b. And to what extent is the answer unique? Michael Hardy (talk) 01:23, 2 August 2010 (UTC)


 * If we set
 * $$M = AGM(a_0,b_0)$$
 * $$2e_n = b_n - a_n$$
 * $$a_n \approx M - e_n$$
 * Then as far as I can see from approximating the square root
 * $$e_{n+1} \approx \frac{e_n^2}{4 M}$$
 * so the convergence is quadratic. As far as I can make out we have approximately
 * $$e_{n+x} \approx 4M \left( {\frac{e_n}{4M}} \right) ^{2^x}$$
 * We can get an accurate approximation of an intermediate value for large n and then work backwards. Dmcq (talk) 08:44, 2 August 2010 (UTC)
 * p.s. I'd look at the log of $$a_n-M$$ and see if there's a series that looks like that if there's the foggiest hope the result is anywhere nice. Dmcq (talk) 10:40, 2 August 2010 (UTC)


 * I tried out this algorithm: for $$a,b=1,2$$ I iterated twice and then used the approximation to $$e_{n+x}$$ above with $$x=1$$. Using $$M-e$$ and $$M+e$$ and going back twice, I got the square root of 2 and 1.5 to 12 decimal places. I had to use
 * $$M-e+\tfrac{e^2}{4M}, M+e+\tfrac{e^2}{4M}$$
 * to get the values at both ends of the range accurate to 12 places. This is just a correction which doesn't matter for higher numbers of iterations, but for more than 2 iterations double precision arithmetic hasn't got enough precision.


 * It is obvious this gives nice smooth functions for f and g, and I don't see how you'll find anything nicer - though of course one can always do something like just stick in a straight line at the start and iterate that. And as for expressing them cleanly, one iteration would approximately cut b-a down by raising it to the square root of 2 power - which doesn't sound like anything I'm very familiar with! Dmcq (talk) 09:06, 3 August 2010 (UTC)
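The scheme above can be sketched in code, with the caveat that this is my reading of it: I take e_n to be the half-width (b_n − a_n)/2, jump along the approximation e_{n+x} ≈ 4M(e_n/4M)^(2^x), and step back with the a−1, b−1 formulas from earlier in the thread. As a consistency check, iterating forward twice from (1, 2), jumping with x = 1, and stepping back twice should approximately recover (a1, b1) = (√2, 3/2):

```python
import math

def agm(a, b, tol=1e-15):
    """Common limit of the arithmetic-geometric iteration."""
    while b - a > tol:
        a, b = math.sqrt(a * b), (a + b) / 2
    return (a + b) / 2

a, b = 1.0, 2.0
M = agm(a, b)

# Two forward AGM steps:
for _ in range(2):
    a, b = math.sqrt(a * b), (a + b) / 2

# Jump one step via e_{n+x} ~ 4M (e_n / 4M)^(2^x) with x = 1,
# taking e_n as the half-width (b_n - a_n)/2:
e = (b - a) / 2
e = 4 * M * (e / (4 * M)) ** 2
a, b = M - e + e**2 / (4 * M), M + e + e**2 / (4 * M)   # with the small correction term

# Two backward steps should land back at n = 1, i.e. (sqrt 2, 3/2):
for _ in range(2):
    s = math.sqrt(b * b - a * a)
    a, b = b - s, b + s

print(a, b)     # close to 1.41421356..., 1.5
```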

OK, I'll see if I can digest the above this afternoon. I posted this question to mathoverflow and so far it's gotten two up votes but no answers. Michael Hardy (talk) 16:49, 3 August 2010 (UTC)

Summation Convention
This is a problem that has been eluding me for a while and every time I try it, I just get the same answer.

The question is about a 3x3 matrix that represents the map $$\mathfrak{A}$$: R3 → R3 given by $$\mathbf{x'} = (\mathbf{x}\cdot\mathbf{n})\mathbf{n} + \mathbf{n} \times \mathbf{x}$$, where n is some unit vector. We have $$R_{ij} = n_{i}n_{j} - \varepsilon_{ijk}n_{k} $$, where R represents the map as a matrix. We are then asked for the entries of the transpose matrix, which are $$R^T_{ij} = n_{i}n_{j} + \varepsilon_{ijk}n_{k}$$. We then have to show that $$RR^T = I$$. My approach to this is to rewrite R as $$R_{ip} = n_{i}n_{p} - \varepsilon_{ipk}n_{k} $$ and the transpose as $$R^T_{pj} = n_{p}n_{j} + \varepsilon_{pjk}n_{k}$$, and then I aim to show that $$R_{ip}R^T_{pj} = \delta_{ij}$$, but each and every time I try this I end up with $$R_{ip}R^T_{pj} = -2\delta_{ij}$$ or some variant on it. I would normally suspect that a variety of algebraic blunders are to blame, but in this case there must be something fundamentally wrong with my method, as it goes wrong the same way each time. Anyone have any ideas? Thanks. asyndeton  talk  19:01, 1 August 2010 (UTC)
 * A good strategy for this sort of quandary is to pick a concrete example and follow it through. You should pick the simplest where it doesn't trivialize, which in this case would be 3x3 matrices.  Write out the equations longhand (that is, expanded out rather than as a summation) and see where they differ from what happens as a summation.  That should show you where the error is.  --Trovatore (talk) 20:02, 1 August 2010 (UTC)
 * I was hoping that there might be some obvious problem with my method that I've been missing, eg my choice of indices for R and its transpose, that someone else might see immediately and save me having to do this (I'd had the idea before but thought that it seemed like quite a slog and would just provide me with more places to make mistakes) but I suppose it's worth a shot. asyndeton   talk  20:45, 1 August 2010 (UTC)
 * It shouldn't be that bad. Your difference from the expected result is only on the main diagonal, and it's consistent on the main diagonal.  So you should be able to find the mistake just by writing out the two expressions for one element on the main diagonal. --Trovatore (talk) 21:34, 1 August 2010 (UTC)
 * I really am useless. It looks like my mistake was in using the index k in both R and its transpose and ending up with terms that had four terms with a k index. Relabeling one pair of 'k's as a pair of 'q's seems to make all work out. Thanks. asyndeton   talk  21:53, 1 August 2010 (UTC)
 * Do you know anything about the vector n? When I try and multiply it out, the n's don't cancel (it's entirely possible I made a mistake, but I doubt it - there is only one term of order 4 (in n), so there is nothing it could cancel with), so it won't be the identity for arbitrary n. --Tango (talk) 20:21, 1 August 2010 (UTC)
 * My apologies for being so unclear. I've missed out details that I shouldn't have. The question is about a 3x3 matrix that represents the map $$\mathfrak{A}$$: R3 → R3 given by $$\mathbf{x'} = (\mathbf{x}\cdot\mathbf{n})\mathbf{n} + \mathbf{n} \times \mathbf{x}$$ where n is some unit vector. I have added this into the original question. asyndeton   talk  20:41, 1 August 2010 (UTC)


 * In case anyone fancies some extra background reading: this map is a specific case of Rodrigues' rotation formula. 129.234.53.239 (talk) 19:01, 2 August 2010 (UTC)
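As a final sanity check, the identity $$RR^T = I$$ can be verified numerically for a unit vector n (a sketch using NumPy's einsum with a hand-built Levi-Civita symbol; none of this is from the thread, and the particular n is arbitrary):

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]: +1 on even permutations, -1 on odd.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

n = np.array([2.0, -1.0, 2.0])
n /= np.linalg.norm(n)                      # n must be a unit vector

# R_ij = n_i n_j - eps_ijk n_k, as in the question.
R = np.outer(n, n) - np.einsum('ijk,k->ij', eps, n)
print(np.allclose(R @ R.T, np.eye(3)))      # True
```

Note that the identity needs n to be a unit vector, which answers the side question above about what is known about n: without |n| = 1, the n·n terms in the expansion do not collapse to 1.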