Talk:Sherman–Morrison formula

Invertibility?
Is there an easy condition that asserts that the inverse exists? Haseldon 08:59, 18 February 2007 (UTC)


 * I'm pretty sure it exists if and only if $$1 + vA^{-1}u \neq 0$$  but unfortunately I don't have a proof here. Ocolon 09:50, 18 February 2007 (UTC)


 * .. and $$A^{-1}$$ must also exist for the claim to make sense. For the proof, I think it suffices to note that the proof given in the article shows that the inverse exists as long as all expressions are defined; this is the case assuming that $$1 + vA^{-1}u \neq 0$$. Haseldon 12:26, 18 February 2007 (UTC)
 * I added these assumptions to the text. Haseldon 12:33, 18 February 2007 (UTC)

Name?
Where does the name come from? --RainerBlome 23:54, 27 August 2005 (UTC)


 * Good question. The article should begin by saying


 * In mathematics, in particular linear algebra, the Sherman-Morrison formula [PFTV92], named after XXXXX and XXXXX, computes the inverse of ...


 * for suitable values of XXXXX. Michael Hardy 22:43, 28 August 2005 (UTC)


 * Fixed, the relevant references are now in the article. --RainerBlome 21:49, 17 September 2007 (UTC)

In a publication that I used, the identity is also called the "Bartlett Identity" (literally "Bartletts Identität", as the publication is in German), citing. Can anyone verify this? --RainerBlome 10:26, 17 September 2007 (UTC)

BEFORE:


$$\begin{matrix} XY &=&     (A &+& uv)(A^{-1} &-& {A^{-1}uvA^{-1} \over 1 + vA^{-1}u}) \\
&=& AA^{-1} &+& uvA^{-1}  &-& {AA^{-1}uvA^{-1} + uvA^{-1}uvA^{-1} \over 1 + vA^{-1}u} \\
&=&      I &+&  uvA^{-1}  &-& {uvA^{-1} + uvA^{-1}uvA^{-1} \over 1 + vA^{-1}u} \\
&=&      I &+&  uvA^{-1}  &-& {(1 + vA^{-1}u) uvA^{-1} \over 1 + vA^{-1}u} \\
&=&      I &+&  uvA^{-1}  &-& uvA^{-1} \\
&=&      I \end{matrix}$$

AFTER:

 * $$XY = (A + uv)\left( A^{-1} - {A^{-1} uv A^{-1} \over 1 + vA^{-1}u}\right)$$


 * $$= AA^{-1} + uvA^{-1} - {AA^{-1}uvA^{-1} + uvA^{-1}uvA^{-1} \over 1 + vA^{-1}u}$$


 * $$= I + uvA^{-1} - {uvA^{-1} + uvA^{-1}uvA^{-1} \over 1 + vA^{-1}u}$$


 * $$= I + uvA^{-1} - {(1 + vA^{-1}u) uvA^{-1} \over 1 + vA^{-1}u}$$


 * $$= I + uvA^{-1} - uvA^{-1}\,$$


 * $$= I.$$

Although it's nice to have the "="s nicely aligned, various other aspects of the alignment in the \matrix version (i.e. the "BEFORE" version) of this display look bad. In particular, the fractions on the left should not all get centered the way they are. Also, the lines are too close together; it makes the fractions hard to read. Finally, the right and left parentheses are not big enough in some cases. Contrast the following:


 * $$({1 \over 2})+3$$


 * $$\left({1 \over 2}\right)+3$$

(To see what makes the difference, click on "edit this page" and see what I typed here.) Michael Hardy 22:38, 28 August 2005 (UTC)
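Independently of how the derivation is displayed, the identity it proves can be sanity-checked numerically. A minimal sketch in plain Python on a 2×2 example (the matrix $$A$$ and the vectors $$u$$, $$v$$ below are our own illustrative choices, not from the article):

```python
# Check (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
# on a small 2x2 example, using only plain Python.

def inv2(M):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 1.0], [2.0, 3.0]]
u = [1.0, 2.0]          # column vector
v = [3.0, 1.0]          # row vector (v^T in the article's notation)

Ainv = inv2(A)

# Direct inverse of the rank-one update A + u v^T
B = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
Binv_direct = inv2(B)

# Sherman-Morrison formula, built from the same pieces as the derivation
Ainv_u = [sum(Ainv[i][k] * u[k] for k in range(2)) for i in range(2)]   # A^{-1} u
vT_Ainv = [sum(v[k] * Ainv[k][j] for k in range(2)) for j in range(2)]  # v^T A^{-1}
denom = 1.0 + sum(v[k] * Ainv_u[k] for k in range(2))                   # 1 + v^T A^{-1} u
Binv_sm = [[Ainv[i][j] - Ainv_u[i] * vT_Ainv[j] / denom for j in range(2)]
           for i in range(2)]

assert all(abs(Binv_direct[i][j] - Binv_sm[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Here $$\det(A) = 10$$ and $$1 + vA^{-1}u = 1.9 \neq 0$$, so both inverses exist and agree.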

Numerical Complexity
Assuming that $$A^{-1}$$ is an $$n \times n$$ matrix, for arbitrary update vectors $$u$$ and $$v$$, the numerical complexity is
 * $$(n+1)V(n) + nV(n) + nM + n^2M + n^2S + 1A + 1D$$,

where
 * $$V(n)$$ denotes the complexity of a scalar product of two vectors of size $$n$$,
 * $$M$$ the complexity of a multiplication of two scalars,
 * $$D$$ the complexity of a division of two scalars,
 * $$S$$ the complexity of a subtraction of two scalars, and
 * $$A$$ the complexity of an addition of two scalars.

Since $$V(n)$$ is on the order of $$nM+1A$$, and $$S$$ is roughly $$A$$, this amounts to $$(n^2+2n+2)A + (3n^2+2n)M +D$$.

For a single-nonzero-element vector $$u$$ and an arbitrary vector $$v$$ (only one row of $$A$$ is updated), the numerical complexity is
 * $$ (n+1)V(n) + nM + n^2M + n^2S + A + D$$,

which amounts to $$(2n^2+2n)M + (n^2+n+2)A + D$$.

For a single-nonzero-element vector $$v$$ and an arbitrary vector $$u$$ (only one column of $$A$$ is updated), the numerical complexity is
 * $$ (n+1)V(n) + nM + n^2M + n^2S + A + D$$,

which amounts to $$(2n^2+2n)M + (n^2+n+2)A + D$$.

For single-nonzero-element vectors $$u$$ and $$v$$ (only one element of $$A$$ is updated), the numerical complexity is
 * $$ n^2M + nM + n^2S + A + D$$,

which amounts to $$(n^2+n)M + (n^2+1)A + D$$.

In each case, $$O(n^2)$$ multiplications are needed. However, the single-row-update and single-column-update cases require twice as many multiplications as the single-element-update case, and the general case requires three times as many. This agrees with the note on relative algorithm speeds given at. Can someone provide a better reference for this? —Preceding unsigned comment added by RainerBlome (talk • contribs) 11:39, 17 September 2007 (UTC)
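To make the $$O(n^2)$$ claim above concrete, here is a plain-Python sketch of the general rank-one update that counts scalar multiplications as it goes. The function name and the way the counter is attributed are our own; the exact constant also depends on how the division by the denominator is arranged, so this straightforward version counts $$3n^2+n$$ multiplications plus $$n^2$$ divisions, close to but not identical with the $$(3n^2+2n)M + D$$ count above:

```python
# Operation count for the general rank-one inverse update
# (A + u v^T)^{-1} computed from A^{-1}. Plain Python, no libraries.

def sm_update(Ainv, u, v):
    """Return ((A + u v^T)^{-1}, number of scalar multiplications used)."""
    n = len(Ainv)
    mults = 0
    w = []                                   # w = A^{-1} u : n dot products
    for i in range(n):
        s = 0.0
        for k in range(n):
            s += Ainv[i][k] * u[k]; mults += 1
        w.append(s)
    z = []                                   # z = v^T A^{-1} : n dot products
    for j in range(n):
        s = 0.0
        for k in range(n):
            s += v[k] * Ainv[k][j]; mults += 1
        z.append(s)
    denom = 1.0                              # 1 + v^T A^{-1} u
    for k in range(n):
        denom += v[k] * w[k]; mults += 1
    out = [[Ainv[i][j] - w[i] * z[j] / denom for j in range(n)]
           for i in range(n)]                # n^2 multiplications (and divisions)
    mults += n * n
    return out, mults

Ainv = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
out, m = sm_update(Ainv, [1.0] * 5, [0.5] * 5)
assert m == 3 * 5 * 5 + 5                    # 3n^2 + n multiplications here
```

The point of the count is the contrast with recomputing the inverse from scratch, which costs $$O(n^3)$$.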

Format of references
Jmath666, why did you remove the publisher from the refs? --RainerBlome 19:56, 30 September 2007 (UTC)


 * The standard practice in scientific literature is not to list the journal publisher, only the journal title. The publisher is usually listed for books, but not for journals. A quick look at some math book or paper will confirm this. Jmath666 20:14, 30 September 2007 (UTC)

True, from the sample that I took. Speaking of the standard practice, Jmath666, you tend to add "MR" links to references. Currently, "MR" links are not a standard component of references. And I am inclined to think that they should not be. I wonder how large a fraction of Wikipedia readers can actually make use of these links. It is my impression that these links are only really helpful for MathSciNet subscribers. Considering the price of such subscriptions (on the order of 1000 USD), I assume that the number of subscribers is very small, compared to the number of article readers. To me, these links feel somewhat like links to a particular merchant's pages, and therefore inappropriate. If the article in question is freely available online, that should be linked to. Otherwise, the reader may use her favorite search engine to seek a commercial source. With books, the ISBN linking mechanism is merchant-neutral. Unfortunately, there is no such vendor-neutral numbering scheme for articles. Therefore I suggest removing these links.

Furthermore, the MR links in this article look like dead ends to me, and at most serve to confirm that the citations themselves are correct. Other online resources, for example JSTOR, are better suited to confirm citations. What do you think? --RainerBlome 10:09, 3 October 2007 (UTC)


 * I have seen MR links here so I started doing that myself. Virtually all professional mathematicians (at least in the US, and most in Europe) have access through their institution (controlled by IP number), so they get the review and BibTeX entry; others can at least verify the reference and often link to fulltext (same as DOI; unfortunately fulltext costs too unless you work for an academic institution that has a subscription). In some AMS journals and on Wikipedia, MR has become a de-facto way to uniquely identify/verify math references. This is a good thing. See for example the MR references to Woodbury reports at Woodbury matrix identity. From a European perspective, it might be valuable to add ZBL links instead of removing MR links (and ZBL gives you a free review). There are really only two sources. Maybe in time some sort of mechanism for linking references to databases will develop - it is becoming a bit unwieldy, papers may have DOI, MR, ZBL, JSTOR, ISBN, and preprint links, even if the info is valuable. Maybe you might want to start some effort in that direction. If you still want to start removing MR links, you should do that consistently from all of Wikipedia and not only (some of) those that I added, and, in any case, please take it to Wikipedia_talk:WikiProject Mathematics and build some consensus first. Thanks. Jmath666 16:16, 3 October 2007 (UTC)


 * Unfortunately DOI cannot replace MR because many articles, esp. old or obscure ones do not have DOI. It would be a worthy project to improve and unify the links of references to bibliographic databases similarly as ISBN though it may not be as simple as the treatment of ISBN. I do not think that removing MR links is a good solution, though. Jmath666 16:39, 3 October 2007 (UTC)


 * Students of academic institutions that subscribe to MathSciNet have access also, even if they may not know it. Because access is IP number controlled, the link will resolve to the full entry from any computer on campus. This is a good thing, students learn about MathSciNet, a professional necessity. I find MathSciNet indispensable for my work, including occasional research of sources for Wikipedia. Jmath666 18:24, 3 October 2007 (UTC)
 * There is Template:Cite journal. There are ways to modify templates to add fields (like MR or ZBL) and to hide the ugliness and only expand on click. This might be interesting. Jmath666 03:17, 6 October 2007 (UTC)

Vector size
Should it be noted that $$uv^T$$ must be an $$n \times n$$ matrix consistent with $$A^{-1}$$, or are we to assume that qualification is present? Basically I am asking if the size of $$u$$ and $$v$$ with respect to $$A$$ must be given. —Preceding unsigned comment added by I5kfun (talk • contribs) 04:48, 5 February 2009 (UTC)

Well-definedness
Does $$v^{T}(A^{-1}u)=(v^TA^{-1})u$$? As operators the first one is the only sensible one, but since we are dealing with matrices, the second makes sense also. If they are not equal, which is correct? Watson Ladd (talk) 14:43, 24 December 2009 (UTC)


 * No need for the parentheses, these are equal because the multiplication of matrices is associative. Jmath666 (talk) 05:43, 25 December 2009 (UTC)

Modified a part of the proof
Once we have shown $$ XY = I_d $$ it is useless (and misleading) to also check $$ YX = I_d $$. It is a classical result of linear algebra that for square matrices, $$ AB = I_d $$ implies $$ B = A^{-1} $$. I removed the proof of $$ YX = I_d $$. Also, for the reverse implication, it suffices (and is more elegant) to show that if $$ 1 + v^T A^{-1}u = 0 $$ then $$ A + uv^T $$ is not invertible: one can easily see that $$ A^{-1}u $$ is a non-trivial element of the kernel of this matrix. —Preceding unsigned comment added by 129.104.4.248 (talk) 17:20, 25 February 2019
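The kernel argument in the comment above can be written out in one line. Assuming $$ 1 + v^T A^{-1} u = 0 $$ and $$ u \neq 0 $$:

```latex
(A + uv^T)\,(A^{-1}u)
  = u + u\,(v^T A^{-1} u)
  = \left(1 + v^T A^{-1} u\right) u
  = 0 ,
```

and $$ A^{-1}u \neq 0 $$ because $$ A^{-1} $$ is invertible, so $$ A^{-1}u $$ is indeed a non-trivial kernel element and $$ A + uv^T $$ is singular.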