Wikipedia:Reference desk/Archives/Mathematics/2008 November 23

= November 23 =

Sum of Roots
Hello. What is the significance of the sum of roots of a tangent quadratic equation? For example, the sum for $$\tan^2 x - 9\tan x + 1 = 0$$, 0 < x < 2π, is 3π, and for $$\tan^2 x - 5\tan x + 6 = 0$$ it is 3.5π. The sums are well-rounded numbers. Thanks in advance. --Mayfare (talk) 03:36, 23 November 2008 (UTC)


 * Recall Viète's formulas, and also the formula for the tangent of a sum:
 * $$\tan (a + b) = \frac {\tan(a) + \tan(b)}{1 - \tan(a) \tan(b)}$$
 * So, if a and b are the roots of your second tangent quadratic equation, then $$\tan(a) + \tan(b) = 5$$ and $$\tan(a)\tan(b) = 6$$, so that $$\tan(a + b) = \frac{5}{1 - 6} = -1$$, and a + b is an inverse tangent of -1. Eric.  131.215.158.213 (talk) 05:06, 23 November 2008 (UTC)


 * Let me check these examples numerically, I think I made a mistake somewhere.... Eric.  131.215.158.213 (talk) 05:06, 23 November 2008 (UTC)


 * The sums I got were not 3π and 3.5π, but 0.5π + nπ and 0.75π + nπ (n an integer, with n = 0 for the solutions that satisfy 0 < x < π -- tangent has period π, not period 2π). Possibly I made an error, but could you check where you got those values from?  Eric.  131.215.158.213 (talk) 05:24, 23 November 2008 (UTC)

The second root is π more than the first for tangent according to the CAST rule. Since there are four roots, the sum of roots includes 2π. --Mayfare (talk) 06:09, 23 November 2008 (UTC)


 * Ah, now I understand what you mean. It seems strange though to look at solutions on the interval (0,2π) instead of either (0,π) or $$(-\infty,\infty)$$.  Regardless, does my explanation above make sense?  Eric.  131.215.158.213 (talk) 08:34, 23 November 2008 (UTC)
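The sums can be checked numerically from Eric's explanation: each root t of the quadratic in tan x yields two solutions, x and x + π, on (0, 2π). A sketch in Python (the function name is mine):

```python
import math

def tan_root_sum(b, c):
    """Sum of all solutions of tan^2(x) + b*tan(x) + c = 0 on (0, 2*pi).

    Each real root t of the quadratic gives tan(x) = t, which has two
    solutions per 2*pi interval because tan has period pi.
    """
    disc = b * b - 4 * c
    assert disc >= 0, "quadratic has no real roots"
    total = 0.0
    for t in ((-b + math.sqrt(disc)) / 2, (-b - math.sqrt(disc)) / 2):
        x = math.atan(t)            # principal value in (-pi/2, pi/2)
        if x <= 0:
            x += math.pi            # shift into (0, pi)
        total += x + (x + math.pi)  # the two solutions in (0, 2*pi)
    return total

print(round(tan_root_sum(-9, 1) / math.pi, 9))   # 3.0
print(round(tan_root_sum(-5, 6) / math.pi, 9))   # 3.5
```

The first sum is exactly 3π because the two roots multiply to 1, so their arctangents sum to π/2; the second is 3.5π because arctan 2 + arctan 3 = 3π/4.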

Numbers with the same numerals
Hi,

I am trying to figure out what the term is for a number that contains all the same numerals, like 22, 55, 777, etc.?

Cheers!! - (Demeach) —Preceding unsigned comment added by Demeach (talk • contribs) 04:52, 23 November 2008 (UTC)


 * They're called repdigits. —Preceding unsigned comment added by 86.134.53.78 (talk) 05:16, 23 November 2008 (UTC)

See also palindrome

Topology Expert (talk) 05:39, 23 November 2008 (UTC)


 * OEIS can also tell you that. – b_jonas 20:41, 23 November 2008 (UTC)
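A repdigit test is a one-liner; a sketch in Python (the function name is mine):

```python
def is_repdigit(n: int) -> bool:
    """True if all decimal digits of n are the same (e.g. 22, 55, 777)."""
    return len(set(str(abs(n)))) == 1

# Multi-digit repdigits below 800:
print([n for n in range(10, 800) if is_repdigit(n)])
# [11, 22, 33, 44, 55, 66, 77, 88, 99, 111, 222, 333, 444, 555, 666, 777]
```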

The Thinking Person's (Re-)Introduction to Probability
I want to improve my understanding of statistical procedures. For this purpose, I don't just want to improve my rote learning of which situation calls for which recipe; I actually want to understand what I'm doing. To this end, I have bought various books, and have decided that I must actually understand probability stuff -- not just read it and kind-of-understand it, but actually tackle problems and get them right. I am now slowly working through the first part of the Dover reprint of Bulmer's Principles of Statistics. I like it a lot: it's concise (I loathe the kind of book epitomized by Gödel, Escher, Bach), compact and light, and literate (and cheap) and has interesting examples, and its cover doesn't address me as a "Dummy", "Idiot", or "Drooling Cretin". One problem is that the "exercises" (and also, for those who dare, the "problems") merely have "answers" in the back, not actual solutions. (Sometimes I'm right; but when I'm wrong I haven't a clue why. Today I even did a "problem" -- worked out the probability of some poker hands -- but Bulmer doesn't deign to provide the answers, let alone an explanation for them.) Another is that I fear the second part will advance too quickly for my fading memory of mathematics.

Books on "mathematical statistics" at first seem to offer what I'm looking for. But in mathematics, unlike architecture, linguistics, etc., "introductory" books seem to assume at the very least that the reader's fresh out of high school with good results. My results weren't bad but that was ages ago; "rusty" would be an understatement. (Oh, as for the notion of learning from some mammoth online encyclopedia somewhere, one hurdle among many is that I'm not even familiar with the notation for "Mathematical expression of frequency" here.) Any book recommendations? -- Hoary (talk) 13:14, 23 November 2008 (UTC)


 * I highly recommend Contemporary Business Statistics with Microsoft Excel by Anderson, Sweeney and Williams (ISBN: 978-0324020830). The applications are all business-oriented. However, the book is well written and, because it makes heavy use of Excel, it is much easier to get an intuition for what you are doing because you can see the data and all the steps to the problem laid out in front of you. Wikiant (talk) 14:32, 23 November 2008 (UTC)


 * Thank you, but that sounds similar to a number of practical stats books I've seen. Of course Excel (or in my case OpenOffice) is much better than SPSS (or in my case perhaps PSPP, though I haven't bothered to install that): the stats books that I bought or considered buying do no more than mention any of these programs; instead, they actually give the formulae or algorithms and I use OpenOffice to type the stuff in. (Eventually I intend to learn R, but there's no rush.)


 * But I want to know my way around probability. It's partly a matter of enjoyment. Some people listen to their iPod in the daily commute; me, recently I've been enjoying puzzles (even when I fail at them). Here's Bulmer: A yarborough [...] is a hand of 13 cards containing no card higher than 9. It is said to be so called from an Earl of Yarborough who used to bet 1000 to 1 against its occurrence. Did he have a good bet? Or again: If traffic passes a certain point at random at a rate of 2 vehicles a minute, what is the chance that an observer will see no vehicles in a period of 45 seconds? Perhaps I'm perverse, but I like this stuff.


 * And here's the killer: Amazon says Contemporary Business Statistics with Microsoft Excel -- which does sound intelligent and helpful and at "40 used from $0.01" is priced reasonably -- has 744 pages and weighs 3 pounds. (Bulmer's has about 250 pages and is a reprint of something from back in the good old days when publishers hadn't thought of "sidebars" and suchlike gimmickry, so it's compact enough to slip into my jacket pocket.) -- Hoary (talk) 15:41, 23 November 2008 (UTC)
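Both of Bulmer's puzzles quoted above can be worked in a few lines; a sketch in Python (the worked answers are mine, not Bulmer's): a yarborough draws all 13 cards from the 32 cards ranked 9 or below, and the traffic question is a Poisson count with mean 2 × 0.75 = 1.5.

```python
import math

# Yarborough: all 13 cards from the 8 low ranks (2-9), i.e. 32 of 52 cards.
p_yarborough = math.comb(32, 13) / math.comb(52, 13)
print(round(1 / p_yarborough, 1))   # 1828.0 -- roughly 1827-to-1 odds against

# Traffic: Poisson(lambda = 2 per min * 0.75 min), probability of zero arrivals.
p_none = math.exp(-1.5)
print(round(p_none, 4))             # 0.2231
```

Since the true odds against a yarborough are about 1827 to 1, laying only 1000 to 1 against it was a profitable bet for the Earl.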

If a problem is too hard, try a simpler one. A poker hand is five cards out of fifty-two. Try a 'minipoker' hand, say, two cards out of eight: 1 2 3 4 of hearts and clubs. Count the possible hands. (You should get 28). Which hands are pairs and which are not pairs? When you can do that by counting, you understand that some counting is saved by computing. Then try three cards out of eight. Bo Jacoby (talk) 15:24, 23 November 2008 (UTC).


 * Thank you, but I think I solved that particular problem. (However, it's on a scrap of paper in my bag, in another room, so I'll spare you.) -- Hoary (talk) 15:41, 23 November 2008 (UTC)

Dover's website offers a range of books, reasonably priced. I'll root around there and I should find something. -- Hoary (talk) 23:54, 23 November 2008 (UTC)

Perhaps you would like How To Lie With Statistics, by Huff. It is often recommended and gets good reviews. And The Pleasures Of Counting by T W Korner is very readable and literate. 89.243.205.213 (talk) 17:22, 26 November 2008 (UTC)

eigenvalues
When presented with an equation of the form $$f(x) = a_1x_1^2 + a_2x_2^2 + a_3x_3^2 + a_4x_1x_2 + a_5x_1x_3 + a_6x_2x_3$$, it can be rewritten in the form $$f = \mathbf{x^TAx}$$ where $$\mathbf{x}= \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} $$ and A is some 3×3 matrix. This can then be transformed to the form $$f = \mathbf{x'^T \Lambda x'}$$ where $$\mathbf{ \Lambda}$$ is the diagonal matrix constructed from the eigenvalues of $$\mathbf{A}$$, such that $$\mathbf{ \Lambda_{ii}}= \lambda_i$$.

I have two questions regarding this:
 * 1) How do you know which eigenvalue is which once you have obtained them? In other words, what order do they go in down the diagonal of $$\mathbf{ \Lambda}$$?
 * 2) How is the relation between $$\mathbf{x_i}$$ and $$\mathbf{x_i'}$$ obtained?

thank you. —Preceding unsigned comment added by 129.67.39.18 (talk) 15:31, 23 November 2008 (UTC)
 * 1: You can put them in any order (you'll get different x', of course). 2: See Diagonalizable matrix. Algebraist 15:34, 23 November 2008 (UTC)


 * I presume you mean $$f(x) = a_1x_1^2 + a_2x_2^2 + ...$$ →86.132.232.116 (talk) 16:13, 23 November 2008 (UTC)
 * yes. changed.


 * You have to be careful, as you sometimes can't turn a matrix into diagonal form if you have repeated eigenvalues. In fact I see you'll have to look at Diagonalizable matrix about that. Dmcq (talk) 19:38, 23 November 2008 (UTC)
 * That's not a problem here, since we can always choose A to be symmetric, and symmetric matrices are always diagonable (in fact diagonable by an orthogonal transformation). Algebraist 19:47, 23 November 2008 (UTC)


 * Under the circumstances, A can always be taken to be a symmetric matrix, in which case the spectral theorem can be applied: any such matrix is diagonalizable, with real eigenvalues, and mutually orthogonal eigenvectors.  siℓℓy rabbit  (  talk  ) 19:47, 23 November 2008 (UTC)


 * Very true, and silly of me. Thanks. I should think before typing. Dmcq (talk) 20:01, 23 November 2008 (UTC)
 * In fact I wonder if there's an interesting problem in there somewhere about all the various matrices giving the same equation above? Might as well look for an opportunity :) Dmcq (talk) 20:10, 23 November 2008 (UTC)
 * That equation determines A up to addition of an element of the special orthogonal Lie algebra (aka a skewsymmetric matrix). Algebraist 20:16, 23 November 2008 (UTC)
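Algebraist's answers above can be illustrated numerically with NumPy (a sketch; the six coefficients are made up): the symmetric choice of A splits each cross-term coefficient in half, and the change of variables is x' = Qᵀx for the orthogonal Q of eigenvectors.

```python
import numpy as np

# f(x) = a1*x1^2 + a2*x2^2 + a3*x3^2 + a4*x1*x2 + a5*x1*x3 + a6*x2*x3
a1, a2, a3, a4, a5, a6 = 1.0, 2.0, 3.0, 0.5, -1.0, 0.25   # arbitrary example values

# Symmetric A: each off-diagonal coefficient is split in half.
A = np.array([[a1,     a4 / 2, a5 / 2],
              [a4 / 2, a2,     a6 / 2],
              [a5 / 2, a6 / 2, a3    ]])

eigvals, Q = np.linalg.eigh(A)   # A = Q @ diag(eigvals) @ Q.T, Q orthogonal
Lam = np.diag(eigvals)           # eigh sorts ascending, but any ordering works

x = np.array([1.0, -2.0, 0.5])
x_prime = Q.T @ x                # question 2: x' = Q^T x

f_direct = x @ A @ x
f_diag = x_prime @ Lam @ x_prime
print(np.isclose(f_direct, f_diag))   # True
```

Permuting the eigenvalues just permutes the columns of Q, and hence the components of x'; the value of f is unchanged.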

maths
What maths done in 1945 has only been done then, never before or since? —Preceding unsigned comment added by 86.137.64.28 (talk) 18:06, 23 November 2008 (UTC)


 * The only thing that springs to mind that happened in 1945 but never before or since is the use of nuclear weapons on people, so perhaps some of the maths related to effects of that? I'm not really sure of any maths that would be done for that that wouldn't be done for tests that have happened before and since, though, or at least done theoretically. Why do you ask? The context might help. --Tango (talk) 18:39, 23 November 2008 (UTC)


 * There was also some early work done on rockets (Germany and the Soviet Union) and jet airplanes (Germany and England) which may have become obsolete after the war, as newer techniques made the old methods no longer useful. I believe the Germans also experimented with rocket-powered planes. StuRat (talk) 22:54, 23 November 2008 (UTC)

Quick Parabolic Interpolation
Suppose I have two or three points of data regarding a parabola. The y-axis data is known to be high-resolution, and the x-axis data is known to be low-resolution, as per the linked drawing. In said drawing, the sensor bars have many possible output voltages (high y-resolution) but are very imprecise in their x-resolution, since each bar is fairly thick. What is the simplest way to calculate the absolute minimum point of the resulting parabola, based on the relative y-values of each bar?

If you would like more context - I'm trying to replicate the calculation performed here

Magus (talk) 21:07, 23 November 2008 (UTC)


 * Are you sure you really want a parabola? Those 5 bars seem like they would fit a sine wave much more closely. StuRat (talk) 22:51, 23 November 2008 (UTC)


 * Yes - the ultimate goal is to be able to precisely determine the exact position of a finger depressing the sensors, at a higher resolution than the width of the bars will allow - to do that, I have to determine the minimum of the parabola. A sine wave would be great if I needed to know depth and width of the finger, but I need to know depth and precise location of the center of the finger. Magus (talk) 23:27, 23 November 2008 (UTC)


 * Maybe use the center of mass formula to locate the center? That is, $$x = (I_1 + 2 I_2 + 3 I_3 + 4 I_4 + 5 I_5) / (I_1 + I_2 + I_3 + I_4 + I_5)$$ where $$I_k$$ is the pressure on bar $$k$$. Fredrik Johansson 23:46, 23 November 2008 (UTC)
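Fredrik's center-of-mass estimate, sketched in Python (the five bar readings are made up):

```python
# Hypothetical pressure readings from the five sensor bars (positions 1..5).
intensities = [0.2, 0.5, 0.9, 0.5, 0.2]

# Intensity-weighted average position: the "center of mass" of the press.
center = sum(k * I for k, I in enumerate(intensities, start=1)) / sum(intensities)
print(round(center, 9))   # 3.0 -- dead center for this symmetric profile
```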


 * I may be mistaken, but I think the stupid answer is the right one here: take your data to be centered on your sensors, interpolate, and find the minimum. Since what you're measuring is wider than even your broad sensors, it doesn't make much sense to try to further localize its effect on one of them; its varied effects on adjacent sensors are your sub-sensor-resolution data.  If you knew the data were precisely parabolic, but you didn't have precise x coordinates, you would have a more complicated constraint-satisfaction problem that would in this case yield an interval of possible minima.  (Then you would presumably use its midpoint.)  But this is data from a real sensor, and so you want the noise-suppressing effects of the normal interpolation anyway.  Of course, if testing reveals that sensor #4, say, is more sensitive on its left side than its right, or is simply more sensitive than the others, you could make your model more complicated to handle that.  Doing so properly would however require a good model for the shape of the sensor response; the parabola works here because you're looking only for even symmetry.  --Tardis (talk) 18:27, 24 November 2008 (UTC)
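Tardis's suggestion can be sketched directly: center each reading on its sensor, fit a least-squares parabola, and take its vertex (the sensor positions and readings below are hypothetical):

```python
import numpy as np

positions = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # sensor centers, in bar widths
readings  = np.array([0.31, 0.51, 0.80, 0.98, 0.80]) # hypothetical pressure readings

a, b, c = np.polyfit(positions, readings, 2)   # least-squares parabola, a*x^2 + b*x + c
vertex = -b / (2 * a)                          # sub-sensor-resolution position estimate
print(round(vertex, 3))   # 1.167 -- between the fourth and fifth sensors
```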


 * That would give a resolution for the result of one bar-width. You may be able to do better by taking the other points into account. The classical way would be to use a least squares approach: construct a model which depends on the x-position and gives results for each sensor. Then vary the x-value and find the x-value which minimises the sum of squared differences between the values predicted by the model and the actual sensor readings from the finger. A parabola looks a bit off; the pictures look more like a bell curve which tails off for sensors far away from the finger. --Salix (talk): 21:18, 24 November 2008 (UTC)
 * I think you get much better than one bar-width. For instance, suppose that a finger of width ω applies a pressure of $$p(\delta):=\frac1{1+\left(\delta/\omega\right)^2}$$ at a distance of δ from its center (deliberately not actually quadratic), and that a sensor at x with width w reports the total force on it, which is $$s(x,w,f):=\int_{x-w/2}^{x+w/2} p(\xi-f)\,d\xi$$ for a finger centered at f.  Then for a finger with $$\omega=2$$ and 5 sensors at -2,…,2 each with width 1, for $$|f|<1$$ the vertex of the quadratic fit to $$\{(-2,s(-2,1,f)),\ldots,(2,s(2,1,f))\}$$ is always within 0.174 of f, and is within about 0.1 for $$|f|<\frac12$$.  As f gets larger, eventually the estimate breaks down because one half of the pressure is past the sensors and what remains is not at all parabolic.  But if we widen the pressure ($$\omega=7$$), the parabola's vertex moves only to 4.07 even for the finger at the very edge of the board at $$f=5/2$$.
 * Certainly if we instead have the sensors sample $$p(\xi-f)$$ at one point ξ in their interval, and then we suppose that they might conspire against us and at one moment use their left endpoints and at another use their right, we cannot do better than an error of $$\pm w$$. But for a real physical object that has no such malice, we really can do better without a sophisticated model. --Tardis (talk) 17:50, 26 November 2008 (UTC)
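Tardis's figures can be reproduced numerically: p has antiderivative ω·arctan(δ/ω), so each sensor reading s(x, w, f) is available in closed form (a sketch using his stated parameters ω = 2 and five width-1 sensors at -2,…,2):

```python
import numpy as np

def s(x, w, f, omega=2.0):
    """Exact sensor reading: integral of p(xi - f) over [x - w/2, x + w/2],
    where p(d) = 1 / (1 + (d/omega)^2) has antiderivative omega*arctan(d/omega)."""
    return omega * (np.arctan((x + w / 2 - f) / omega)
                    - np.arctan((x - w / 2 - f) / omega))

sensors = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # five sensors of width 1

worst = 0.0
for f in np.linspace(-1, 1, 201):                 # finger centers with |f| <= 1
    a, b, c = np.polyfit(sensors, s(sensors, 1.0, f), 2)
    worst = max(worst, abs(-b / (2 * a) - f))     # vertex error for this f
print(round(worst, 3))   # 0.174, matching the bound quoted above
```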