Wikipedia:Reference desk/Archives/Mathematics/2010 September 2

= September 2 =

== find scalar values a, b and c ==
Given $$\mathbf{A} = \begin{bmatrix} 2 & -5 \\ 3 & 1 \\ \end{bmatrix}$$, find scalar values a, b, c (NOT ALL ZERO) for which $$a\mathbf{I}+b\mathbf{A}+c\mathbf{A}^2=\mathbf{O}$$

I get these 4 simultaneous equations in 3 unknowns:

$$a+2b-11c=0$$

$$-5b-15c=0$$

$$3b+9c=0$$

$$a+b-14c=0$$

What do I do now? Wikinv (talk) 01:41, 2 September 2010 (UTC)
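The system of equations can be reproduced mechanically. A minimal sketch (assuming NumPy is available): compute $$\mathbf{A}^2$$ and read off one linear equation in (a, b, c) per matrix entry.

```python
import numpy as np

# The matrix from the question.
A = np.array([[2, -5],
              [3,  1]])

A2 = A @ A  # matrix square
print(A2)
# [[-11 -15]
#  [  9 -14]]

# Setting each entry of a*I + b*A + c*A^2 to zero gives the four equations:
#   (1,1):  a + 2b - 11c = 0
#   (1,2):      -5b - 15c = 0
#   (2,1):       3b +  9c = 0
#   (2,2):  a +  b - 14c = 0
```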


 * Check your initial work. I believe one of the four equations is wrong.  Next, take a look at Simultaneous equations for various techniques for solving these.  One approach is to rearrange one of the equations to isolate one of the unknowns (i.e., $$a = ...$$) and then substitute the results into the next equation.  After the second iteration, you should have a value for one of the unknowns.  Repeat the process until you've solved them all.  --  Tom N (tcncv) talk/contrib 02:09, 2 September 2010 (UTC)
 * After attempting to solve it myself, I found that there's a gotcha in those equations. The problem has a solution, but that solution may not be unique.  I assume this is homework, so I will not give you too obvious a hint.  Write back if you need additional help.  --  Tom N (tcncv) talk/contrib 02:33, 2 September 2010 (UTC)
 * Fixed the equation. I was aware that there are multiple solutions, indeed that is why I don't know how to solve it.--Wikinv (talk) 07:05, 2 September 2010 (UTC)
 * Having fixed the equation, though, the solution becomes quite easy by inspection, but is there an analytic way of solving it?--Wikinv (talk) 07:09, 2 September 2010 (UTC)
 * Just realised that the second and third equations are in fact exactly the same. Furthermore, it is evident that there is an infinite number of solutions (that's why it was so easy to solve by inspection!)--Wikinv (talk) 07:14, 2 September 2010 (UTC)
 * Observe that if (a,b,c) is a nonzero solution to the equation $$\scriptstyle a\mathbf{I}+b\mathbf{A}+c\mathbf{A}^2=\mathbf{O}$$ where $$\scriptstyle \mathbf{A}$$ is any square matrix, and k is a nonzero number, then (ka,kb,kc) is a nonzero solution too. Bo Jacoby (talk) 05:15, 2 September 2010 (UTC).


 * In general, the equation $$a\mathbf{I}+b\mathbf{A}+c\mathbf{A}^2=\mathbf{O}$$ has a one-dimensional space of solutions (a, b, c) for any 2x2 matrix $$\mathbf{A}$$. To see this, let
 * $$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{bmatrix} \text{ and } \mathbf{A'} = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \\ \end{bmatrix}$$
 * Then
 * $$\mathbf{A'A} = \det(\mathbf{A})\mathbf{I}$$
 * but
 * $$\mathbf{A'} = \mathrm{tr}(\mathbf{A})\mathbf{I}-\mathbf{A}$$
 * so
 * $$(\mathrm{tr}(\mathbf{A})\mathbf{I}-\mathbf{A})\mathbf{A} = \det(\mathbf{A})\mathbf{I}$$
 * $$\Rightarrow \det(\mathbf{A})\mathbf{I} -\mathrm{tr}(\mathbf{A})\mathbf{A} + \mathbf{A}^2 = \mathbf{O}$$
 * so
 * $$(a,b,c) = (\det(\mathbf{A}), -\mathrm{tr}(\mathbf{A}), 1)$$
 * or (as Bo says) any multiple of this. Gandalf61 (talk) 08:54, 2 September 2010 (UTC)
 * Alternatively, just take the coefficients of the characteristic polynomial. —Preceding unsigned comment added by 203.97.79.114 (talk) 09:11, 2 September 2010 (UTC)
 * See Gaussian elimination. Properly used, it also works for systems of linear equations that have no solutions, multiple solutions, and/or redundant equations. I don't know if our articles cover the details involved, but if not, any introductory linear algebra book will. -- Meni Rosenfeld (talk) 09:17, 2 September 2010 (UTC)
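The suggestions above can be made concrete with a short sketch (assuming SymPy is available; its rref and nullspace methods mechanize Gaussian elimination). It row-reduces the system, extracts the one-dimensional solution space, and cross-checks against the characteristic polynomial.

```python
from sympy import Matrix

# Coefficient matrix of the four equations in the unknowns (a, b, c).
M = Matrix([[1,  2, -11],
            [0, -5, -15],
            [0,  3,   9],
            [1,  1, -14]])

# rref() performs Gaussian elimination; two rows vanish, leaving a
# one-dimensional space of solutions.
reduced, pivots = M.rref()

basis = M.nullspace()[0]  # every solution is a scalar multiple of this
print(basis.T)            # Matrix([[17, -3, 1]])

# Cross-check: the characteristic polynomial of A has coefficients
# (1, -tr(A), det(A)), matching (c, b, a) up to the common scale factor.
A = Matrix([[2, -5], [3, 1]])
print(A.charpoly().all_coeffs())  # [1, -3, 17]
```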

== Numerical analysis notation ==
I was reviewing my numerical analysis textbook, and stumbled across a notation I didn't remember, concerning propagation of errors. For background, the question is, assuming that $$f : \mathbb{R} \rightarrow \mathbb{R}$$ is a differentiable function, and that $$\overline x$$ is an approximation of $$x \in \mathbb{R}$$, what is an upper bound on the uncertainty $$|f(\overline x) - f(x)|$$? Now, the mean value theorem directly gives that there is a $$\xi$$ between $$x$$ and $$\overline x$$ such that $$|f(\overline x) - f(x)| = |f'(\xi)| |\overline x - x|$$. We know that $$|\overline x - x| \leq M$$, but we do not know an upper bound on $$\textstyle |f'(\xi)|$$.

Now, what the book does here is that it replaces the unknown $$\textstyle f'(\xi)$$ with the known $$f'(\overline x)$$. The idea, presumably, is that since $$|\overline x - \xi|$$ is small, then so is $$|f'(\overline x) - f'(\xi)|$$. The book tacitly makes this assumption, mind you, without requiring a single fact about $$\textstyle f'$$—not even that it be continuous! Of course, it is not then necessarily true that $$|f(\overline x) - f(x)| \leq |f'(\overline x)| M$$, and the book does not claim so, but replaces the $$\leq$$ sign with a $$\lesssim$$ sign, telling the reader to read it as "less than or approximately equal to"—I would say it seems reasonable to interpret this as "an inequality that almost certainly holds unless the function is too irregular". It feels kind of sloppy though. Is it just my book or is this common notation? Is this a case of "since we're dealing with applications, we don't have to care about badly-behaving functions"? 85.226.206.114 (talk) 07:57, 2 September 2010 (UTC)
 * Trying to find an estimate for $$\textstyle f'(\xi)$$ gets you back to the original problem, but with the derivative of the function. So this estimate will be in terms of $$\textstyle f''$$, and if you try to estimate that it will be in terms of $$\textstyle f'''$$, etc. But each time you're multiplying the estimate by $$|\overline x - x|$$, which is assumed to be small. So I think the interpretation of $$\lesssim$$ is "less than or equal, up to a quantity that is small compared to the other quantities." You can construct examples where the derivatives get very large or infinite so that the error term is still large even with the $$|\overline x - x|$$ factors, but these aren't encountered often in practice. The $$\lesssim$$ notation is a bit vague perhaps, but if you want more precise notation try a different numerical analysis book; there is no shortage of them.--RDBury (talk) 15:05, 2 September 2010 (UTC)


 * All right, thanks! 85.226.205.5 (talk) 08:42, 3 September 2010 (UTC)
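A concrete illustration of why $$\lesssim$$ rather than $$\leq$$ is needed, using the hypothetical choice $$f(x) = e^x$$ (my own example, not from the book): when $$f'$$ is increasing and $$\overline x < x$$, the intermediate point satisfies $$f'(\xi) > f'(\overline x)$$, so the nominal bound is violated, but only barely.

```python
import math

# Hypothetical example: f(x) = exp(x), true value x, approximation xbar,
# with |xbar - x| <= M. Since f' is increasing and xbar < x, the
# intermediate point xi from the mean value theorem has f'(xi) > f'(xbar).
x, xbar, M = 1.001, 1.0, 0.001

actual = abs(math.exp(xbar) - math.exp(x))  # |f(xbar) - f(x)| = |f'(xi)| |xbar - x|
bound = abs(math.exp(xbar)) * M             # |f'(xbar)| * M

print(actual > bound)  # True: the strict <= version fails here...
print(actual / bound)  # ...but only by about 0.05%, hence the <~ sign
```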

== Drawing 45-45-90 triangles on spheres ==
Can you draw a 45-45-90 triangle on a sphere so all three sides have whole-number lengths? 20.137.18.50 (talk) 16:31, 2 September 2010 (UTC)
 * You cannot draw a 45–45–90 triangle on a sphere at all. The sum of angles of any triangle on a sphere is strictly more than 180°, see spherical geometry.—Emil J. 16:36, 2 September 2010 (UTC)


 * You can, however, draw an isosceles right triangle on a sphere so that all three sides have whole-number lengths. Will that do? -- ToET 16:53, 5 September 2010 (UTC)
 * Doesn't this require the radius of the sphere to be an integer multiple of 2/π?—Emil J. 12:27, 6 September 2010 (UTC)
 * Only for 90-90-90 equilateral right spherical triangles. For isosceles right spherical triangles with other leg : hypotenuse ratios, other restrictions on the radius will hold.  -- 124.157.234.26 (talk) 06:39, 7 September 2010 (UTC)
 * Also of interest here might be The Right Right Triangle on the Sphere, a 2008 paper from The College Mathematics Journal in which:
 * The question explored here is whether having a 90° angle is the most fruitful analogue in spherical geometry to right triangles in Euclidean geometry. A strong case is made for the property of a triangle with one angle equal to the sum of the other two.
 * I don't have access to this paper beyond its abstract, and am not aware of a related Wikipedia article. -- 124.157.234.26 (talk) 03:53, 7 September 2010 (UTC)
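For anyone curious how such radius restrictions arise: on a sphere of radius R the spherical Pythagorean theorem reads $$\cos(c/R) = \cos(a/R)\cos(b/R)$$, so for chosen whole-number legs and hypotenuse one can search for a radius that satisfies it. A sketch with the hypothetical choice of legs 3, 3 and hypotenuse 4 (plain bisection; the specific numbers are my own, not from the paper above):

```python
import math

# Spherical Pythagorean theorem: cos(c/R) = cos(a/R) * cos(b/R).
# Hypothetical isosceles right triangle: legs a = b = 3, hypotenuse c = 4.
a, c = 3.0, 4.0

def f(R):
    """Residual of the spherical Pythagorean theorem at radius R."""
    return math.cos(c / R) - math.cos(a / R) ** 2

# f is negative at R = 2 and positive at R = 10, so bisect for a root.
lo, hi = 2.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

R = (lo + hi) / 2
print(R, f(R))  # a radius admitting the 3-3-4 triangle; residual ~ 0
```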