Wikipedia:Reference desk/Archives/Mathematics/2008 February 19

= February 19 =

Singular Value Decomposition & Hermitian Matrices
This is a question posed in my class and the teacher himself could not figure it out either. The question is: given the singular value decomposition of an m×m square matrix $$A=U \Sigma V^*$$, where the * represents the complex conjugate transpose of a matrix, is there anything that we can say about the eigenvalue decomposition (diagonalization) of the 2m×2m Hermitian matrix $$B=\begin{bmatrix}0 & A^*\\A & 0\end{bmatrix}$$? Can B's eigenvalue decomposition be written in terms of $$U,\Sigma,$$ and $$V$$? I have also tried a few numerical examples in Matlab, and it appears to me that the two are completely unrelated. Any help would be appreciated. A Real Kaiser (talk) 06:10, 19 February 2008 (UTC)
 * If $$A=U \Sigma V^*$$, then $$U^*AV=\Sigma$$. As $$\Sigma$$ is self-adjoint, we get $$V^*A^*U=(U^*AV)^* = \Sigma$$ as well.  Now compute:
 * $$\begin{pmatrix}0 & U^*\\V^* & 0\end{pmatrix}\begin{pmatrix}0 & A^*\\A  & 0\end{pmatrix}\begin{pmatrix}V & 0\\0  & U\end{pmatrix} = \begin{pmatrix} U^*AV & 0 \\ 0 & V^*A^*U\end{pmatrix} = \begin{pmatrix}\Sigma & 0 \\ 0 & \Sigma \end{pmatrix}.$$
 * Moving the unitaries (one can check that those block matrices on the left and right of your $$B$$ are unitary) to the right gives the singular value decomposition. You have the absolute values of the eigenvalues at this point, but I'm not sure exactly what you mean by eigenvalue decomposition.  Do you mean diagonalizing B? J Elliot (talk) 07:13, 19 February 2008 (UTC)
 * The eigenvalue decomposition goes similarly. For motivation, find the eigenvalue decomposition of [0,1;1,0] (taking A to be the 1×1 identity matrix).  A=USV*, so A*=VSU*, AV=US, A*U=VS, so [0,A*;A,0][V;U] = [VS;US], so [V;U] is an "eigenvector" with eigenvalues S, and similarly [0,A*;A,0][V;-U]=[-VS;US], so [V;-U] is an "eigenvector" with eigenvalues -S.  Since our matrix is Hermitian, we want our eigenbasis to be unitary, so we'll divide the eigenvectors by their norm, sqrt(2) (that is, multiply by 1/sqrt(2)).  Putting it all together gives:
 * $$\begin{bmatrix}0&A^*\\A&0\end{bmatrix} = \left( \frac{1}{\sqrt{2}} \begin{bmatrix}V&V\\U&-U\end{bmatrix} \right) \cdot \begin{bmatrix}\Sigma&0\\0&-\Sigma\end{bmatrix} \cdot \left( \frac{1}{\sqrt{2}} \begin{bmatrix}V^*&U^*\\V^*&-U^*\end{bmatrix} \right)$$
 * So the eigenvalues of B are plus or minus the singular values of A. JackSchmidt (talk) 15:53, 19 February 2008 (UTC)
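Since the question mentions Matlab experiments, here is an analogous numerical check in Python/NumPy (my addition, not code from the thread) confirming JackSchmidt's formula: the block matrix $$W = \tfrac{1}{\sqrt{2}}\begin{bmatrix}V&V\\U&-U\end{bmatrix}$$ is unitary and diagonalizes B with eigenvalues ±Σ.

```python
import numpy as np

# Build a random complex m x m matrix A, form B = [[0, A*], [A, 0]], and
# check that W = (1/sqrt(2)) [[V, V], [U, -U]] is unitary and that
# W* B W = diag(Sigma, -Sigma), as derived in the thread above.
rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

U, s, Vh = np.linalg.svd(A)          # A = U @ diag(s) @ Vh, where Vh = V*
V = Vh.conj().T

B = np.block([[np.zeros((m, m)), A.conj().T],
              [A, np.zeros((m, m))]])

W = np.block([[V, V], [U, -U]]) / np.sqrt(2)
D = W.conj().T @ B @ W               # should be diag(s, -s)

expected = np.diag(np.concatenate([s, -s]))
print(np.allclose(W.conj().T @ W, np.eye(2 * m)))   # W is unitary
print(np.allclose(D, expected))                      # eigenvalues are +/- s
```

Running this prints `True` twice: the eigenvalues of B are exactly the singular values of A and their negatives.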

Thanks guys, that makes a lot more sense. But I still have a follow-up question. You have shown that [V;U] and [V;-U] are eigenvectors with respect to S and -S, but how do we know that those are the only eigenvalues? What if S^2 or -2S^5 also turn out to be eigenvalues of our matrix B? How can we conclude that S and -S are the ONLY eigenvalues? A Real Kaiser (talk) 23:53, 19 February 2008 (UTC)


 * Sorry, I spoke too informally. U and V are actually m × m matrices, and S is a diagonal matrix whose m diagonal entries are the singular values.  [V,V;U,-U] has full rank (because it is unitary), so its columns are 2m independent eigenvectors for B, and S,-S gives the 2m eigenvalues.  Since a 2m × 2m matrix has exactly 2m eigenvalues counted with multiplicity, there is no room for any others.  The informal language was just to indicate how block matrices can simplify things. JackSchmidt (talk) 00:58, 20 February 2008 (UTC)

fitting a conic
I want to fit a general parabola (of unknown size and orientation) roughly to a given set of points in the plane. My first thought was to seek the coefficients that minimize the sum of the squares of $$ A {x_i}^2 + 2 \sqrt{AC} x_i y_i + C {y_i}^2 + D x_i + E y_i + 1 $$; to make the problem more linear, I then sought to settle for a general conic, $$ A {x_i}^2 + 2 B x_i y_i + C {y_i}^2 + D x_i + E y_i + 1 $$; but then it occurs to me that this penalizes those curves that go near the origin.

My next idea is to consider the family of cones tangent to some plane $$z = z_0 \ne 0$$; I'm not sure what to minimize.

Anyone know a better way? —Tamfang (talk) 06:24, 19 February 2008 (UTC)
 * Minimize the sum of squares of $$ A {x_i}^2 + 2 B x_i y_i + C {y_i}^2 + D x_i + E y_i + F $$ using some other normalizing condition than $$ F=1 \,$$ (which excludes curves through the origin). Bo Jacoby (talk) 07:50, 19 February 2008 (UTC).
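One standard normalizing condition of the kind Bo Jacoby suggests (my formulation, not spelled out in the thread) is to require the coefficient vector to have unit norm; the minimizer is then the right singular vector of the design matrix belonging to the smallest singular value. A minimal sketch, with the helper name `fit_conic` being my own:

```python
import numpy as np

# Fit a general conic  A x^2 + 2B xy + C y^2 + D x + E y + F = 0
# by minimizing the sum of squares of the left-hand side subject to
# ||(A, B, C, D, E, F)|| = 1, which (unlike fixing F = 1) allows
# curves through the origin.
def fit_conic(x, y):
    M = np.column_stack([x**2, 2*x*y, y**2, x, y, np.ones_like(x)])
    _, _, Vh = np.linalg.svd(M)
    return Vh[-1]                    # coefficients (A, B, C, D, E, F)

# Points on the parabola y = x^2, which passes through the origin,
# so the F = 1 normalization would exclude the true curve.
x = np.linspace(-2, 2, 20)
y = x**2
A, B, C, D, E, F = fit_conic(x, y)
residual = A*x**2 + 2*B*x*y + C*y**2 + D*x + E*y + F
print(np.allclose(residual, 0))      # exact fit: residuals vanish
```

This recovers (up to sign and scale) the coefficients of x² − y = 0, which the F = 1 normalization could not represent.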


 * You could try an iterative approach. Define for brevity
 * $$F(x,y) = A x^2 + 2 B x y + C y^2 + D x + E y + 1$$
 * and
 * $$F_i = F(x_i,y_i).\,$$
 * Given estimates for the coefficients A, B, etcetera, you can determine the distance $$e_i$$ of each point $$(x_i,y_i)$$ to the curve determined by $$F(x,y) = 0$$. If we give weight $$w_i$$ to the term $$F_i^2$$ in a weighted sum of squares, we want the weighted square to come out like $$e_i^2$$, which suggests setting
 * $$w_i = \frac{e_i^2}{F_i^2}$$
 * as the weights for a next iteration.
 * Instead of determining the values $$e_i$$ exactly, which is computationally expensive, you can approximate them by using the linear approximation
 * $$F(x+\Delta x,y+\Delta y) \approx F(x,y) + \frac{\partial}{\partial x}F(x,y)\cdot\Delta x + \frac{\partial}{\partial y}F(x,y)\cdot\Delta y.$$
 * Applied to the point (xi,yi), we write this as
 * $$F(x_i+\Delta x,y_i+\Delta y) \approx F_i + \frac{\partial}{\partial x}F_i\cdot\Delta x + \frac{\partial}{\partial y}F_i\cdot\Delta y.$$
 * The least value of $$(\Delta x)^2 + (\Delta y)^2$$ for which the right-hand side can vanish, which provides an estimate for $$e_i^2$$, is then given by
 * $$(\Delta x)^2 + (\Delta y)^2 = \frac{F_i^2}{\left(\frac{\partial}{\partial x}F_i\right)^2 + \left(\frac{\partial}{\partial y}F_i\right)^2}\,.$$
 * So for the weights for the next iteration, you can then use
 * $$w_i = \left(\left(\frac{\partial}{\partial x}F_i\right)^2 + \left(\frac{\partial}{\partial y}F_i\right)^2\right)^{-1}\,.$$
 * Since the value being inverted can become arbitrarily small and even vanish, you must exercise caution not to make this numerically unstable, and put limits on the size of the weights. --Lambiam 08:49, 21 February 2008 (UTC)
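A sketch of this iteratively reweighted scheme (my own arrangement, not code from the thread; the helper name `fit_conic_weighted` and the weight cap `w_max` are assumptions): fit the conic with a unit-norm coefficient vector, then reweight each point by 1/|∇F|² so that $$w_i F_i^2$$ approximates the squared distance $$e_i^2$$.

```python
import numpy as np

# Iteratively reweighted conic fit: each pass solves a weighted linear
# least-squares problem with ||coefficients|| = 1, then updates the
# weights to 1 / |grad F|^2 at each data point.
def fit_conic_weighted(x, y, iterations=5, w_max=1e6):
    coeffs = None
    w = np.ones_like(x)
    for _ in range(iterations):
        M = np.column_stack([x**2, 2*x*y, y**2, x, y, np.ones_like(x)])
        _, _, Vh = np.linalg.svd(np.sqrt(w)[:, None] * M)
        coeffs = Vh[-1]
        a, b, c, d, e, f = coeffs
        # gradient of F(x, y) = a x^2 + 2b xy + c y^2 + d x + e y + f
        Fx = 2*a*x + 2*b*y + d
        Fy = 2*b*x + 2*c*y + e
        # the gradient can vanish, so cap the weights for stability,
        # as the caution above advises
        w = 1.0 / np.maximum(Fx**2 + Fy**2, 1.0 / w_max)
    return coeffs

# Points on the parabola y = x^2: the fit should recover x^2 - y = 0.
x = np.linspace(-2, 2, 30)
y = x**2
a, b, c, d, e, f = fit_conic_weighted(x, y)
residual = a*x**2 + 2*b*x*y + c*y**2 + d*x + e*y + f
print(np.max(np.abs(residual)))      # essentially zero for exact data
```

On noisy data the weights shift emphasis toward geometric (point-to-curve) distance rather than raw algebraic residual, which is the point of the reweighting.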

Rationalising Surds
I was going over some of my notes and I tried this one, but my answer isn't the same as in the book. I'm not sure what I'm doing wrong. Rationalise the denominator: $$\frac{1}{\sqrt5 - \sqrt7}$$ $$=\frac{1}{\sqrt5 - \sqrt7}$$ $$\times\frac{\sqrt5 + \sqrt7}{\sqrt5 + \sqrt7}$$ $$=-\frac{\sqrt5 + \sqrt7}{2}$$ Kingpomba (talk) 11:26, 19 February 2008 (UTC)


 * Looks fine to me and quick calculator check confirms your answer. Could also be written as $$\frac{\sqrt5 + \sqrt7}{-2}$$ or $$-\left(\frac{\sqrt5 + \sqrt7}{2}\right)$$. What does your book say ? Gandalf61 (talk) 11:47, 19 February 2008 (UTC)

it says: $$\frac{1}{2}$$ $$(\sqrt5 + \sqrt7)$$ (hmm, I got a different answer on paper, and I understand how the 1/2 thing works; I guess typing it out on Wikipedia helped me solve it. Well, cheers anyway =] Kingpomba (talk) 11:57, 19 February 2008 (UTC).
 * Are you sure that's what it says? There should be a minus sign in front, shouldn't there? --Tango (talk) 12:54, 19 February 2008 (UTC)


 * Here's a quick sanity check: 5 < 7, so sqrt(5) < sqrt(7) (by the properties of the square root), and hence sqrt(5) - sqrt(7) < 0. Thus the denominator of the original fraction is negative, meaning the whole fraction is negative. Confusing Manifestation (Say hi!) 23:47, 19 February 2008 (UTC)
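The sanity check above is easy to confirm numerically (a small check of my own, not part of the thread):

```python
import math

# 1 / (sqrt(5) - sqrt(7)) should equal -(sqrt(5) + sqrt(7)) / 2,
# and both should be negative, as the sign argument above predicts.
lhs = 1 / (math.sqrt(5) - math.sqrt(7))
rhs = -(math.sqrt(5) + math.sqrt(7)) / 2
print(math.isclose(lhs, rhs))        # True
print(lhs < 0)                       # True: the fraction is negative
```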