Wikipedia:Reference desk/Archives/Mathematics/2006 August 9

Let's play with algebra
In what other ways can I express this inequality: $$ \| \mathbf{Ad} \|_2 \geq \epsilon \|\mathbf{d} \|_2$$ where $$\mathbf{A}$$ is a matrix, $$\mathbf{d}$$ is a vector, and $$\epsilon$$ is a positive scalar? Is there something I can say about the relationship of $$\mathbf{d}$$ to the nullspace of $$\mathbf{A}$$? I also tried playing with the triangle inequality, but that didn't get me anywhere. Is there a way to express the inequality linearly (strangely enough, in the elements of $$\mathbf{A}$$), or otherwise in a form suitable for use as a constraint in an optimization problem over $$\mathbf{A}$$?
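For concreteness, here is a small numpy sketch of the check I have in mind (the particular $$\mathbf{A}$$, $$\mathbf{d}$$, and $$\epsilon$$ are made up, and the singular-value test at the end is only my guess at the every-$$\mathbf{d}$$ version of the condition):

 import numpy as np
 
 A = np.array([[2.0, 0.0],
               [1.0, 1.0],
               [0.0, 0.5]])   # example tall matrix (more rows than columns)
 d = np.array([1.0, -2.0])    # example vector
 eps = 0.4
 
 lhs = np.linalg.norm(A @ d)      # ||A d||_2
 rhs = eps * np.linalg.norm(d)    # eps * ||d||_2
 print(lhs >= rhs)                # the inequality for this particular d
 
 # If the inequality is to hold for *every* d, I believe that is the same
 # as requiring the smallest singular value of A to be at least eps, since
 # the minimum of ||A d||_2 / ||d||_2 over nonzero d equals sigma_min(A)
 # when A has at least as many rows as columns.
 sigma_min = np.linalg.svd(A, compute_uv=False).min()
 print(sigma_min >= eps)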

What if $$\mathbf{d}$$ is the gradient of a function? I have this idea that if we're projecting a function $$f(\mathbf{y})$$ into a new function $$g(\mathbf{x})=f(\mathbf{Ax})$$, then since the chain rule gives $$\nabla g(\mathbf{x}) = \mathbf{A}^T \nabla f(\mathbf{Ax})$$, ensuring that $$ \| \mathbf{A}^T \nabla f(\mathbf{y}) \|_2 \geq \epsilon \| \nabla f(\mathbf{y})\|_2$$ will ensure that critical points of $$g(\mathbf{x})$$ correspond, via $$\mathbf{y}=\mathbf{Ax}$$, to critical points of $$f(\mathbf{y})$$, and that generally if we're performing optimization we can make some progress in minimizing $$f(\mathbf{y})$$ by performing optimization over $$g(\mathbf{x})$$ and then setting $$\mathbf{y}_*=\mathbf{Ax}_*$$, then maybe starting again. Any graphical or intuitive understanding of what this inequality would mean, or a better one to choose to accomplish that purpose, would be helpful.
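To make that concrete, here is a tiny numpy sketch with a made-up convex quadratic $$f$$ (the matrices and $$\epsilon$$ are purely illustrative):

 import numpy as np
 
 Q = np.array([[3.0, 1.0],
               [1.0, 2.0]])   # SPD, so f(y) = 0.5 y^T Q y is convex
 def grad_f(y):
     return Q @ y             # gradient of the quadratic
 
 A = np.array([[1.0, 0.5],
               [0.0, 1.0]])
 eps = 0.3
 
 x = np.array([1.0, -1.0])
 y = A @ x                    # g(x) = f(Ax), so grad g(x) = A^T grad f(Ax)
 gf = grad_f(y)
 
 # the condition at this point: ||A^T grad f(y)||_2 >= eps ||grad f(y)||_2
 print(np.linalg.norm(A.T @ gf) >= eps * np.linalg.norm(gf))
 # If this holds wherever grad g(x) = 0, then grad f(Ax) = 0 there too.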

If I have second-order information, would I do better setting $$\mathbf{d}=\mathbf{H}^{-1}\nabla f(\mathbf{y})$$ (a Newton's method step), where $$\mathbf{H}$$ is the Hessian matrix or some approximation to it?
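Again as a sketch, the same test applied to a Newton direction for the quadratic above (so the Hessian is exactly $$\mathbf{Q}$$; everything here is made up for illustration):

 import numpy as np
 
 Q = np.array([[3.0, 1.0],
               [1.0, 2.0]])
 A = np.array([[1.0, 0.5],
               [0.0, 1.0]])
 eps = 0.3
 
 y = np.array([2.0, 1.0])
 grad = Q @ y                  # grad f(y) for f(y) = 0.5 y^T Q y
 H = Q                         # Hessian of the quadratic
 d = np.linalg.solve(H, grad)  # Newton step d = H^{-1} grad f(y)
 
 # plug the Newton direction into the original inequality
 print(np.linalg.norm(A @ d) >= eps * np.linalg.norm(d))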

I know my question is confusing. Please just answer whatever small parts you can and go off on any tangents you think might be helpful. 18.252.5.40 08:30, 9 August 2006 (UTC)


 * The obvious first suggestion is to read our article on matrix norms. --KSmrqT 09:25, 9 August 2006 (UTC)