Wikipedia:Reference desk/Archives/Mathematics/2010 October 26

= October 26 =

Monomial Ordering
How and why does monomial ordering affect the quotient of a polynomial ring by an ideal? For example, take a polynomial ring over a field of characteristic zero, with variables x and y, and with the so-called degree reverse lexicographical monomial ordering. If we calculate
 * $$ {\mathbf K}[x,y] /\! \left\langle x+x^2,y^2\right\rangle \cong {\mathbf K}\!\left\langle 1, x, y, xy, y^2, xy^2 \right\rangle . $$

However, if we use the so-called negative degree reverse lexicographical monomial ordering, then we have
 * $$ {\mathbf K}[x,y] /\! \left\langle x+x^2,y^2\right\rangle \cong {\mathbf K}\!\left\langle 1, x, y \right\rangle . $$

I understand the first one, but I don't understand why terms are missing in the second one. I used SINGULAR to do the second calculation. More information on SINGULAR's monomial orderings can be found at this page. It seems that the second line involves the localized polynomial ring, whatever that might be. Any suggestions? — Fly by Night  (  talk  )  13:04, 26 October 2010 (UTC)


 * How do you understand the first one? It seems to me that $$y^2$$ and $$xy^2$$ shouldn't be there, if it's an ordinary ideal with two generators, and an ordinary ring quotient. –Henning Makholm (talk) 13:21, 26 October 2010 (UTC)


 * (e/c) I don't understand either, as far as I can see the result should be $$\mathbf K\langle1,x,y,xy\rangle$$. The quotient of a ring by an ideal depends only on the ring and the ideal, nothing else; monomial ordering only comes into the picture when looking for a basis.—Emil J. 13:26, 26 October 2010 (UTC)


 * There's a typo in my post; that's all. It should have been y^3 instead of y^2. I deleted this question, but Henning Makholm seems to have reinstated it. I removed the question because I found the answer. The two monomial orderings change the calculations. One gives you the whole polynomial ring as the numerator and the other gives you the localized polynomial ring. It's all in the link I posted. Anyway, thanks all the same. — Fly by Night  (  talk  )  14:14, 26 October 2010 (UTC)


 * Maybe you already found a source for what's going on, in which case you can ignore this, but here goes anyway. The quotient ring K[x,y]/⟨x^2+x, y^2⟩ has dimension 4 (the total multiplicity of the roots of these polynomials).  In reverse lex order, the monomial basis is 1, x, y, xy.  For the local version, you're no longer dealing with K[x,y], but with its localization at the origin (in other words, its localization at the maximal ideal ⟨x, y⟩).  This ring consists of all rational functions which are defined at the origin.  In this setting every polynomial with a non-zero constant term is a unit.  The ideal ⟨x^2+x, y^2⟩ ⊂ K[x,y] extends to the local ring, but there it behaves a bit differently.  In particular we can only see the properties of the ideal near the origin (this is where the term "localization" comes from).  x^2+x = x(1+x), but 1+x is a unit, so ⟨x^2+x, y^2⟩ = ⟨x, y^2⟩.  That's why Loc K[x,y]/⟨x, y^2⟩ only has dimension 2 (the multiplicity of the root at the origin).  In negative reverse lex order the monomial basis of this quotient ring is 1, y.  The monomial order chosen can affect the monomial basis of the quotient (and equivalently the Gröbner basis of the ideal), but it won't change the dimension.  However, global orderings (where higher-degree terms are larger) only make sense in the context of the global ring, while local orderings (where lower-degree terms are larger) only make sense in the context of the local ring, so SINGULAR deduces which one of those you want to be working in based on your choice of a global or local ordering. Rckrone (talk) 05:18, 27 October 2010 (UTC)
 * What a fantastic answer: superb! Thanks a lot Rckrone. — Fly by Night  (  talk  )  08:49, 27 October 2010 (UTC)
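Rckrone's global computation can be checked with a general-purpose computer algebra system. Below is a sketch using SymPy, which supports only global orderings, so the local-ring calculation still needs SINGULAR itself. The monomials that reduce to themselves modulo the Gröbner basis are exactly the standard monomials spanning the quotient:

```python
from sympy import symbols, groebner, S

x, y = symbols('x y')

# the ideal from Rckrone's reply: <x^2 + x, y^2> in K[x, y], degrevlex order
G = groebner([x**2 + x, y**2], x, y, order='grevlex')

# a monomial is a standard monomial iff it reduces to itself modulo G
candidates = [S.One, x, y, x*y, x**2, y**2, x**2*y, x*y**2]
std = [m for m in candidates if G.reduce(m)[1] == m]
print(std)  # [1, x, y, x*y] -- the quotient has dimension 4
```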

Averaging points on a sphere
If I have a cluster of points on a sphere, is there some way of averaging them analytically? In other words, is there any method other than brute-force computation to find the point that minimises the total distance to those points (in terms of angular displacement, obviously - the point should be on the surface of the sphere itself, so simply finding the Euclidean distance wouldn't be much use)? If it helps, the particular case I'm looking at only involves a small section of a sphere, which should avoid the weird effects you'd get if you tried to average antipodal points. Thanks! Laïka 17:52, 26 October 2010 (UTC)


 * You can use the vector cosine to easily calculate the angle from each axis to the point. Average the angles to place a new point at an average angle to each axis. --  k a i n a w ™ 17:55, 26 October 2010 (UTC)


 * Unless you care about which exact kind of average you find for widely separated points, an easy thing to do would be to average the points in 3D and normalize the resulting vector. If you try to average exact antipodes (so the 3D sum ends up being zero) it's not quite well-defined what one would want the operation to yield anyway. –Henning Makholm (talk) 18:35, 26 October 2010 (UTC)
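The average-then-normalize approach described above is nearly a one-liner in practice. A minimal sketch, assuming the points are given as unit vectors in 3D:

```python
import numpy as np

def spherical_mean(points):
    """Average unit vectors in 3D, then project the result back onto the
    sphere. Raises if the 3D mean is (numerically) zero, as happens for
    exactly antipodal points, where the answer is not well-defined."""
    m = np.mean(points, axis=0)
    n = np.linalg.norm(m)
    if n < 1e-12:
        raise ValueError("mean direction undefined")
    return m / n

# two unit vectors 10 degrees apart in the x-y plane
a = np.array([np.cos(np.radians(5)), np.sin(np.radians(5)), 0.0])
b = np.array([np.cos(np.radians(-5)), np.sin(np.radians(-5)), 0.0])
print(spherical_mean([a, b]))  # approximately [1, 0, 0]
```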
 * Life might be easier if you convert to spherical coordinates first. Or, you can use Euler angles to give precedence to a particular rotation.  Nimur (talk) 18:45, 26 October 2010 (UTC)
 * How does this problem become easier in spherical coordinates? –Henning Makholm (talk) 18:49, 26 October 2010 (UTC)
 * Normalization is free. Nimur (talk) 19:20, 26 October 2010 (UTC)
 * But vector addition is very expensive. –Henning Makholm (talk) 20:45, 26 October 2010 (UTC)
 * Ah, should have mentioned, I'm working in spherical co-ordinates anyway. Laïka  18:53, 26 October 2010 (UTC)
 * To compute the cartesian arithmetic mean in this case, it doesn't seem to suffice to compute the mean of the spherical angles--take a couple of points in the x-y plane, one at 5 degrees and the other at 355 degrees. Is there any way to avoid lots of conversions that are equivalent to just converting to cartesian anyway? 67.158.43.41 (talk) 22:17, 26 October 2010 (UTC)
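The 5°/355° example above can be made concrete: averaging the raw azimuth angles points the wrong way entirely, while averaging the corresponding unit vectors (which is indeed equivalent to a round trip through Cartesian coordinates) gives the expected answer:

```python
import math

angles = [5.0, 355.0]  # azimuths in degrees, in the x-y plane

naive = sum(angles) / len(angles)  # 180.0 -- the wrong way entirely

# average the corresponding unit vectors, then take the angle back
sx = sum(math.cos(math.radians(a)) for a in angles)
sy = sum(math.sin(math.radians(a)) for a in angles)
vector_mean = math.degrees(math.atan2(sy, sx)) % 360.0
print(naive, vector_mean)  # 180.0 and (up to rounding) 0.0
```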

There is quite a bit of work on how to do this correctly. Directional statistics is the field where this is discussed, and "Statistics of Directional Data" by Mardia and Jupp is probably a good introduction.--Salix (talk): 19:08, 26 October 2010 (UTC)

A point that minimizes the sum of the distances is not what an average is. The average of a list of numbers is the number that minimizes the sum of the squares of the distances from them. The number that minimizes the sum of the distances is the median. Michael Hardy (talk) 22:54, 28 October 2010 (UTC)
 * This might also be relevant here. Michael Hardy (talk) 22:55, 28 October 2010 (UTC)

prove that a + b will have the same value as b + a
How do you do that? What if b is negative. Then a + -b is the same as a - b. So if a = 5 and b = 3, it's easy: 5-3. But let's look at b + a. Now it's -3 + 5, then you have to start counting back from 3, (you count -2, -1, 0) and then go forward. (...1, 2, finishing the 5). So in this case b + a would be 3 + 2, which is -b + (|a|-|b|). And there are a bewildering number of other similar arrangements (both negative? only b negative, only a negative, etc). Then there's zero. After all is said and done, I'm not even sure if a+b = b+a at all. (Like if they are both zero -- is 0+0=0+0; when I try to verify that I just get divide-by-zeros, so maybe it's not a valid operation at all!!)

Please help. Is a+b = b+a for ALL cases??? How do you prove that??? Thanks a million!!! 84.153.187.197 (talk) 18:50, 26 October 2010 (UTC)

Addition is commutative, subtraction is not. I think that you are getting confused between the two. On a number line addition means "first do this, then do that". Positives move the counter to the right, negatives to the left. So A+B means count A then count B. If A is 5 and B = -3, A+B means start at zero and move 5 to the right, then move 3 to the left. The answer is where you end up. For B+A it means move 3 to the left then 5 to the right, and, oh look, you get the same answer. I don't know if it can be proved or if it is just observed that addition is commutative. Theresa Knott | Hasten to trek 18:59, 26 October 2010 (UTC)


 * Of course it can be proved, but how to prove it depends heavily on what your basic definitions and axioms look like. (Such as, what is a negative number anyway?) With the definitions in Integer, commutativity of integer addition follows immediately from commutativity for addition of natural numbers. For natural numbers, you can either say, well, it's obvious, or prove it from the Peano axioms using induction. –Henning Makholm (talk) 19:16, 26 October 2010 (UTC)
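To make the Peano-axioms route concrete: with the usual recursive definition of addition, commutativity is a theorem proved by induction. A toy model in Python can at least spot-check the definition on small naturals (a sanity check, not a proof):

```python
Z = 'Z'                      # zero
def S(n): return ('S', n)    # successor

def add(a, b):
    """Peano addition: a + 0 = a,  a + S(b) = S(a + b)."""
    return a if b == Z else S(add(a, b[1]))

def from_int(k):
    """Build the Peano numeral for a non-negative Python int."""
    n = Z
    for _ in range(k):
        n = S(n)
    return n

# spot-check commutativity on small naturals; the real proof is by induction
assert all(add(from_int(i), from_int(j)) == add(from_int(j), from_int(i))
           for i in range(6) for j in range(6))
print("a + b == b + a for all checked pairs")
```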

Proof by pictures?: For adding two positive integers, a + b, the picture is a dots on the left and b dots on the right. Count up all the dots to compute a + b. If I rotate the picture 180 degrees, now all the dots that were on the right are on the left, and vice versa. So now the total number of dots represents b + a, but since I haven't erased any dots, or drawn any extras, this count is the same as the count I got before.

One could extend this to the sum of a positive integer and a negative integer by drawing o's for the positive number and x's for the negative integer. Connect an o with an x using a line to "cancel" them against each other, and repeat until you've run out of o's or x's. The count of o's or x's that survive gives you the result of the sum. Rotate the picture 180 degrees and, again you get that a + b = b + a. Quantling (talk) 20:32, 26 October 2010 (UTC)


 * I "define" addition similarly. I imagine counting clearly distinct stones on a beach. For some total number of stones, we can clearly group the stones into two piles, call one pile, say, "8" and the other "3" depending on how many stones are in each pile. We may reconstruct the total pile by either creating an "8" pile and then setting a "3" pile next to it, or by reversing this order. There we have commutativity from a few physical properties--conservation of stones, and the ability to move them about in space from pile to pile using different timings, along with our ability to recognize piles and stones. These arguments can give the basic properties of addition of natural numbers, where integer properties follow as Henning Makholm mentioned. The more rigorous approach of Peano Arithmetic (or similar), I would say, relies on an implicit connection between the axioms and reality, where the user of those axioms is at all interested in them because they list a sort of minimal set of intuitively obvious properties that a physical definition of addition satisfies. For instance, I believe PA "mirrors" my intuitive definition of addition. If I didn't, I wouldn't be able to add, and I wouldn't be able to trust calculators, so I wouldn't be able to pay for both an item and shipping--which is a stupid place to be. There's not really much more you can do than accept some axioms as given. 67.158.43.41 (talk) 06:00, 27 October 2010 (UTC)

While mathematicians will address the commutativity directly, it is very reasonable and logical of you to consider the situation on a case-by-case basis. Just what are your "bewildering number of ... arrangements"? Well, if a < 0 then b < a, or b = a, or a < b < 0, or b = 0, or b > 0, for five arrangements. (And yes, I am picturing a number line as I write this.) If a = 0 then b < a, or b = a, or b > a, for three arrangements. If a > 0 then b < 0, or b = 0, or 0 < b < a, or b = a, or b > a, for five arrangements. So, there are only thirteen arrangements, which is not all that many for you to check, case by case, on a number line to see that it really does work. Compare this to the proof of the four color theorem, where the initial proof required examining 1,936 separate cases. I think that you may be experiencing one of the wonders of mathematics. When an equality is proven, then it will hold true. There is a certain, never-diminishing joy which occurs when the result is obtained through a previously unanticipated means, and the equality still holds. You knew that it would (as long as the logical basis for mathematics was sound), but it brings joy nonetheless. Enjoy! -- 124.157.254.112 (talk) 21:42, 26 October 2010 (UTC)


 * See Proofs involving the addition of natural numbers.
 * —Wavelength (talk) 22:03, 26 October 2010 (UTC)
 * Note that the OP asked about addition where at least one number is negative, having no difficulty when only the natural numbers are considered. Taemyr (talk) 00:02, 28 October 2010 (UTC)
 * I disagree. It mentions 0+0 specifically, and 0 isn't negative. (0 is arguably not a natural.) For that case, 0+0 = 0+0 by hand-wavey rules of substitution I've never actually heard clearly articulated. In some sense, one 0 is indistinguishable from another in that we simply call them all "equal", whatever that means. By assumption the result of the two-argument addition function + is +(0, 0) = 0 (where +(0, 0) is using prefix instead of infix notation). We then have 0+0=0=0+0, so 0+0=0+0 follows from another assumption, the transitivity (and I suppose symmetry) of equality. One might wish to define a sort of structural equality, where two structurally identical formulas are always equal. That seems problematic for some types of definitions: like the function which returns the number of times it's been evaluated. Mathematical functions, at least, are static, though, so I'd be willing to accept such structural equality in some cases. This is a surprisingly interesting question! 67.158.43.41 (talk) 10:21, 28 October 2010 (UTC)
 * This book gives a proof: Edmund Landau, Foundations of Analysis. (orig. title: Grundlagen der Analysis) – b_jonas 13:02, 29 October 2010 (UTC)

Sounds like you're a mathematician or logician in the making; if so, may I recommend An Introduction to Symbolic Logic, by Susanne Langer? It's a gem, a delightful book, and very readable by any intelligent student with no background in logic or mathematics. The latest Dover edition is entirely sufficient (you don't need the expensive reprinting by a new publisher) and it should be available used for five dollars or so, online.

Some directions to rigorous proofs have been given above. If you'd like a more immediately intuitive demonstration, maybe I can help. But the answer to your question, like any proof, depends on the assumptions you're willing to start from. Will you grant the following assumptions?


 * 1) addition is associative, e.g. (x+y) + z = x + (y+z), and regrouping with subtraction works the same way, e.g. (x+y) - z = x + (y-z).
 * 2) a well-formed proposition is either true or false.
 * 3) when the same number is subtracted from both sides of an inequality, the two sides remain unequal.
 * 4) when you take a first number, add a second one, then subtract the first, you get the second one, i.e. (x+y)-x = y.

Actually, we can't really call the following a "proof", because that last statement above, at least, is an informal assumption. So let's call what follows an "intuitive demonstration".

(The demonstration is by the reduction to absurdity method. If you're not familiar with the method, it's extremely useful. It's based on the fact that a true statement will have only true consequences. If some statement has false consequences, then the original statement itself is false. You start by assuming the opposite of what you actually want to prove, and then show, through a series of allowed steps, that this opposite assumption implies a false consequence, a contradiction, or (same) "absurdity". If all your steps are allowed ones, then the only conclusion is that the original "opposite" assumption was itself false.)

We begin by considering the statement that's the "opposite" of the one you want to prove. That is, we consider the inequality a+b <> b+a and see what follows from it, by allowed steps:

a + b <> b + a

(a+b) - b <> (b+a) - b

a + (b - b) <> a     (note: this step relies on the unproved, intuitive notion that (b+a)-b = a)

a <> a

But the statement "a <> a" is false ("absurd") for all values of a. Since all of the three preceding steps are allowed by our assumptions, the only conclusion must be that the original statement, a + b <> b + a, is false. And if it's false that two terms are unequal, the only alternative is that they are equal, i.e. the only alternative possibility is that a + b = b + a.

It's common to write something like "proof by reductio ad absurdum" at the end of such a demonstration, btw. Hope that helps. Cheers, –  OhioStandard  (talk) 07:06, 1 November 2010 (UTC)


 * I don't mean to nitpick, but the final condition, (x+y)-x = y is equivalent to ((x+y)-x)+x = (y)+x or (x+y)+(-x+x) = (x+y)+(0) = x+y = y+x, allowing associativity and the behavior of 0, which you actually implicitly used as well. That is, you're assuming something equivalent to what you wanted to prove, but which is less intuitive (in my opinion). It's unfortunate since the rest of the post illustrates a number of nice basic proof techniques so well. I would almost be inclined to reason in the reverse direction, from a+b = b+a to (a+b)-a = b, since the same points may be illustrated. 67.158.43.41 (talk) 07:40, 1 November 2010 (UTC)


 * No, not "nitpicking" at all; you're just using correct principles. God is in the details in logic and mathematics, after all, so thank you. I had actually been aware that I was using the "to be proved" in this "demonstration" (I can't call it a proof) I gave, and considered making that explicit, but I wasn't sure whether the OP was actually seeking a real proof, or just an intuitive assist. I felt the "take a first number, add a second number, then subtract the first one" might feel intuitive enough to satisfy the OP, if that's all he might have been seeking. But you're of course correct that (x+y)-x = y and x+y = y+x are equivalent statements. I just thought it might make the OP feel better to have a look at it this way. But it's worth stressing that this can't be considered a proof at all, and I appreciate your doing so. I probably should have been more clear about that. I have the Landau book (somewhere) that Jonas mentioned above, perhaps I'll take a look, too, at what he has to say. Good stuff, this, and I'm always very impressed at the extraordinary degree of mathematical sophistication represented here. Thanks again for your comment, very much. Best, –  OhioStandard  (talk) 08:50, 1 November 2010 (UTC)


 * I just realized how dense I was being, above. I shouldn't try to answer ref desk questions while sleepy, it appears. Of course all the running about the room shouting reductio ad absurdum was unnecessary. If the OP was willing to accept the "take a first number, add a second number, then subtract the first one" or (same) (a+b)-a = b premise on an intuitive basis, then just adding "a" to both sides would have done the trick, as you showed above. It would have "done the trick" for an intuitive acceptance, I mean; again, no sort of proof. Thanks, –  OhioStandard  (talk) 09:52, 1 November 2010 (UTC)


 * I figured the rest of the post was worth the meandering through inefficiency :). It might be cleaner to use a series of equalities, but it's almost certainly not clearer for novices. 67.158.43.41 (talk) 21:24, 1 November 2010 (UTC)


 * Hmm ... I also just noticed that above I incorrectly gave "a + b < > b + a" as the "opposite" of the sentence "a + b = b + a". The use of the informal word "opposite" was careless; the word I should have used was "negation". If I'd used that correct and precisely-defined word, negation, I might have noticed the implicit quantifier "for any (a,b)" that precedes "a + b = b + a", and then realized that the correct negation of the equality isn't "for any (a,b) a + b < > b + a" as I tacitly assumed, but rather "there exists at least one (a,b) such that a + b < > b + a". –  OhioStandard  (talk) 15:16, 1 November 2010 (UTC)

Linear transformation (map) problem, shortcut?
Given a basis B = [a, b, c, ...] of R^n, the images T(a), T(b), T(c), ... (where T is an unknown transformation from R^n to R^(n+1)), and u (a vector in R^n),

find: T(u), a basis for the kernel, and a basis for the range.

I think I understand one way to solve this class of problem. T can be represented by a matrix of known dimensions; I could do the multiplications T(a), T(b), T(c), ... with the entries of T as unknowns, come up with a system of equations, and thus find a representation of T. Is this necessary? Is there a way to solve this class of problem without finding T explicitly? --hacky (talk) 21:33, 26 October 2010 (UTC)
 * A better way is to decompose u into a linear combination of your basis vectors. You can then apply linearity to compute T's effect on u. For instance, if you find
 * $$u = \sum_{i=1}^n \alpha_i a_i\,\,\,\text{where}\,\,\,\alpha_i \in \mathbb{R}$$
 * (where I've written your basis as B=[a1, a2, ..., an]), you have
 * $$T(u) = T(\sum_{i=1}^n \alpha_i a_i) = \sum_{i=1}^n T(\alpha_i a_i) = \sum_{i=1}^n \alpha_i T(a_i)$$.
 * Decomposing u can be done in a number of ways. You might be given u as a linear combination of basis vectors directly. You might be given u in another basis, in which case you could apply a change of basis transformation. 67.158.43.41 (talk) 22:31, 26 October 2010 (UTC)
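A numerical sketch of the decompose-then-apply-linearity procedure above, with a made-up basis of R^2 and made-up images in R^3 (the data here is purely illustrative):

```python
import numpy as np

# made-up basis of R^2 and made-up images T(a_1), T(a_2) in R^3
a1, a2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
Ta1, Ta2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0])

u = np.array([3.0, 1.0])

# write u = alpha_1 a_1 + alpha_2 a_2 by solving the linear system B alpha = u
B = np.column_stack([a1, a2])
alpha = np.linalg.solve(B, u)          # -> [2, 1]

# linearity: T(u) = alpha_1 T(a_1) + alpha_2 T(a_2); no matrix for T needed
Tu = alpha[0] * Ta1 + alpha[1] * Ta2
print(Tu)                              # -> [2. 1. 4.]
```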
 * Also, $${T(a_i)}$$ should itself be almost a basis for the range of T. However, it might not be linearly independent, so not really a true basis. I believe Gram-Schmidt Orthogonalization (or a slightly modified form, which can deal with intermediate 0's) should cull out the extra vectors while producing a nice orthonormal basis for the range. The 0's should give information on the null space as well.... 67.158.43.41 (talk) 03:46, 27 October 2010 (UTC)
 * There's a better way to find the matrix for T than how you described. Suppose M is the matrix for T.  Then Ma = T(a), Mb = T(b), etc.  So if [a,b,c,...] is the matrix with the elements of B as the columns, and [T(a), T(b),T(c),...] is the matrix with their images as the columns, M[a,b,c,...] = [T(a),T(b),T(c),...].  You can solve this by right multiplying by [a,b,c,...] inverse, or better yet, making the augmented matrix [a,b,c,...|T(a),T(b),T(c),...] and performing Gaussian elimination.--130.195.2.100 (talk) 01:29, 27 October 2010 (UTC)
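The matrix-recovery method in the previous reply can be sketched numerically as well, again with made-up data (a basis of R^2 and images in R^3); `np.linalg.solve` plays the role of the Gaussian elimination:

```python
import numpy as np

# made-up basis of R^2 (columns a, b) and images T(a), T(b) in R^3
A  = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
TA = np.column_stack([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]])

# M A = TA, so M = TA A^{-1}; solving A^T M^T = TA^T avoids an explicit inverse
M = np.linalg.solve(A.T, TA.T).T
print(M @ np.array([3.0, 1.0]))  # T applied to u = [3, 1]
```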
 * It is possible that you didn't want these answers. If I understand your question, you wanted to know if there was a simpler method than one using matrices. The answers so far discuss simpler ways to use matrices.
 * The reason is, that in many or most situations, matrix calculations do provide the simplest method. There are also numerous situations, where they don't, but these usually will not involve arbitrary transforms. If your unknown transform derives from some known situation, where you possess extra information, then there may well be a faster way. If not, I'm inclined to answer your question thus:
 * No, there isn't; thus, you should concentrate on learning efficient matrix calculation methods, in the first place. JoergenB (talk) 20:14, 29 October 2010 (UTC)

physical constants as reported in Wikipedia
Why are Wikipedia physical constants displayed with parens around the last two digits? —Preceding unsigned comment added by J 0JFCfrmAyw59oVFk (talk • contribs) 22:09, 26 October 2010 (UTC)
 * They're reporting the measurement uncertainty in current measurements. For instance, 1.34(12) means the value is known to lie within 1.34-0.12 and 1.34+0.12. 67.158.43.41 (talk) 22:23, 26 October 2010 (UTC)
 * I haven't seen this in articles, but if it is enWP practice, then there should be a link to an explanation. Perhaps some template should yield "1.34(12)" or something.—msh210 ℠ 14:41, 27 October 2010 (UTC)
 * The physical constant article uses that notation extensively. 67.158.43.41 (talk) 22:47, 27 October 2010 (UTC)


 * As a layman, I've always assumed that 1.34(12) meant that the best estimate or measurement was 1.3412, but the last of these digits is not known to be accurate. Different from 1.34±0.12. Have I been wrong?  --  Tom N (tcncv) talk/contrib 04:55, 28 October 2010 (UTC)
 * I'm afraid so. See Uncertainty. It's also wrong to say that 1.34(12) means the value is known to lie within 1.34-0.12 and 1.34+0.12. The value 0.12 is an estimate of the standard error. Qwfp (talk) 06:48, 28 October 2010 (UTC)
 * I should have mentioned I was glossing over the uncertainty of the bounds. I figured the OP wouldn't mind the difference, but it's a good thing to mention that nothing is certain in experimental physics. 67.158.43.41 (talk) 10:05, 28 October 2010 (UTC)
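For the record, the concise notation discussed above can be expanded mechanically. A small sketch (per the discussion, the parenthesised digits give the standard uncertainty in the last digits of the value; the simple decimal form only, no exponents):

```python
import re

def parse_concise(s):
    """Parse concise notation like '1.34(12)' into (value, standard_uncertainty).
    The digits in parentheses apply to the last decimal places of the value."""
    m = re.fullmatch(r'(-?\d+\.(\d+))\((\d+)\)', s)
    if m is None:
        raise ValueError("unrecognised format: %r" % s)
    value = float(m.group(1))
    uncertainty = int(m.group(3)) * 10.0 ** -len(m.group(2))
    return value, uncertainty

print(parse_concise('1.34(12)'))  # (1.34, 0.12)
```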