Talk:Eigenvalues and eigenvectors/Archive 3

Combination of "stretches"
Section §Overview has been edited to read
 * If two-dimensional space is visualized as a rubber sheet, a linear map with two eigenvectors would be a stretching along two directions corresponding to the eigenvectors.

This sounds like a definition, but is too ill-defined. It could describe two stretches, applied one after the other, neither of which preserves any eigenvectors of the other. In particular, it gives no sense of the crucial defining property of an eigenvector: that its direction is preserved by the map. —Quondum 17:43, 19 March 2015 (UTC)
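The "direction is preserved" property described here can be made concrete with a small numerical sketch (the diagonal matrix below is an illustrative choice, not taken from the article):

```python
import numpy as np

# Illustrative map (not from the article): stretch by 2 along x and by 3 along y.
A = np.diag([2.0, 3.0])

# The standard basis vectors are eigenvectors: their directions are preserved.
v = np.array([1.0, 0.0])
Av = A @ v
assert np.allclose(Av, 2.0 * v)  # scaled by the eigenvalue 2, same direction

# A generic vector is not an eigenvector: A w is not parallel to w.
w = np.array([1.0, 1.0])
Aw = A @ w
assert abs(Aw[0] * w[1] - Aw[1] * w[0]) > 1e-9  # nonzero 2D cross product
```

A combination of two such stretches along non-eigenvector directions would not, in general, leave any direction fixed, which is the point made above.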


 * I gave it a shot, if you find it's not an improvement, simply revert. Purgy (talk) 16:55, 23 June 2015 (UTC)

Significance and meaning
We explain what they are (stretches, etc.)... but what is their actual significance? We don't say this at all; can someone add something on it, thanks!

By this, I mean to include two kinds of "significance":
 * why do they matter, and why do they come up in these fields?
 * what sorts of things do they signify in those fields and uses where they can be said to "signify" something?

Can someone add a section on "significance and meaning" to cover this? Thanks! FT2 (Talk) 12:21, 23 June 2015 (UTC)


 * There are some hints at the end of the *Overview*; axes of rotations and inertia are mentioned, ... I think these are almost ubiquitous, and one section on "Significance and meaning" wouldn't do the trick. Purgy (talk) 16:59, 23 June 2015 (UTC)

Using a hyphen in "Two-dimensional example" but not in "Three dimensional example"
I think we should be consistent, but I do not know which one is correct. — Preceding unsigned comment added by 155.4.131.254 (talk) 11:16, 24 April 2016 (UTC)
 * Good catch! I have added the hyphens, also for "infinite-dimensional". Favonian (talk) 11:22, 24 April 2016 (UTC)

Typo
where |\Psi_E\rangle is an eigenstate of H. It is a self adjoint operator, the infinite dimensional analog of Hermitian matrices (see Observable).

"It is" should be "H is" — Preceding unsigned comment added by 129.67.118.226 (talk) 23:56, 30 November 2013 (UTC)

Assessment comment
Substituted at 05:09, 13 May 2016 (UTC)

General definition of eigenvector in the lead
In the opening three sentences of the article, there seems to be disagreement about two points:


 * 1) Mapping vs. equation notation for the formula giving the general definition of an eigenvector
 * 2) Whether to mention the vector space V and the scalar field F in the first or second sentence

1) Mapping vs. equation

First, unfortunately I don't have a relevant textbook handy, but referring to several course lecture notes available online we see the general definition using the notation:
 * $$T:V \mapsto V,$$
 * $$T(\mathbf{v}) = \lambda \mathbf{v}.$$


 * "Eigenvalues, eigenvectors, and eigenspaces of linear operators", Clark University, page 1, Definition 1
 * "Eigenvalues and Eigenvectors", Stony Brook University, page 1
 * "Eigenvalues and Eigenvectors", Dartmouth University, slide 9
 * "Eigenvalues and Eigenvectors", UC Davis, page 2, Definition 2

Second, the linear map article generally sticks to a convention where the mapping notation is used to denote the vector spaces that a transformation maps between, while parentheses and an equal sign are used to denote applying a transformation to a vector and getting a result. For example, in the section "Definition and first consequences", f:V→W, f(αx)=αf(x).

Third, assuming either notation is acceptable, I think it makes sense to choose the notation that more obviously parallels the matrix version of the same equation, so the link between them is clearer in the lead and overview.

Fourth, whatever notation is used here should match the notation used in the Overview and in the General Definition later in the article so the discussion is easy to follow as you move through the article.

2) Mentioning the vector space and field in the first or second sentence

Earlier on this talk page there was a discussion about going light on the math-speak early in the article. Many of this article's thousands of daily visitors, especially visitors who are here to read about matrices specifically and visitors learning about eigenvectors for the first time, may not know what a vector space and a field are. Additionally, some search engines pick up the opening sentence as the snippet of the article to preview on the search results page, which shows a limited number of characters and reaches an even wider audience. With these in mind, my intention was to simplify and shorten the language in the first sentence to make it more broadly accessible, then introduce the more formal terms in the second sentence. If the Wikipedia community would rather see a mathematically precise definition starting in sentence one, no big deal to me.

Error9312 (talk) 04:57, 18 August 2016 (UTC)


 * As I see it, you disagree with the opening three sentences of the article in their previous version. The sentences themselves are imho perfectly consistent, and reflect a contemporary mathematical view.
 * Of course, one might want to discuss the amount of mathematical fine print mentioned in the lede, but I strongly oppose stepping back to adhering to the matrix view. Matrices are an extremely important means for any real-world calculations, but impose strong restrictions on mathematical generality; especially, they impede any independence of basis selection.
 * Please, may I point you to a subtlety ($$\mapsto$$ vs. $$\rightarrow$$) in your citation of the mapping notation: while $$T:v \mapsto T(v)$$ and $$T:V \rightarrow V$$ are fine, $$T:V \mapsto V$$ in this here context is not. It would possibly describe the identity map on a set of vector spaces, whereas the first version allows for the equation $$T(v)= \lambda v$$, and the second version describes domain and co-domain of the map $$T$$.
 * I plead for reinstating the more modern mapping view, rather pointing to matrices as examples, than placing them in front row. Breaking up with accustomed views generally pays the rent in mathematics.
 * I am unsure as to which degree vector spaces and their respective scalar fields are well known notions to those looking up eigenvalues and eigenvectors, and thus should be omitted in ledes. Purgy (talk) 07:23, 18 August 2016 (UTC)
 * Although I agree with Purgy regarding $$\mapsto$$ vs. $$\rightarrow$$, I prefer Error9312's version of the first few sentences. The first sentence in particular should be as jargon-free as possible without being flat-out wrong. McKay (talk) 07:56, 18 August 2016 (UTC)
 * I agree with McKay. However, I also agree with Purgy that I do not think focusing on matrices really simplifies anything.  If there were a way of de-emphasizing the field and vector space, I would be fine with that.  Can't we just say something like this: "Let T be a linear transformation.  Then a non-zero vector v is an eigenvector if $$Tv=\lambda v$$ for some scalar λ."  This leaves out some details, but that's generally fine in the lead.   Sławomir Biały  (talk) 11:25, 18 August 2016 (UTC)
 * Didn't catch the $$\mapsto$$ vs. $$\rightarrow$$ typo last night. Thanks for pointing it out. The typo doesn't invalidate my points above supporting use of the equation notation instead of using $$\mapsto$$. I'm fine with switching to more modern notation as long as it has sources to cite and the notation is consistent within the article and preferably with closely related articles, too.
 * Regarding matrices, just to be clear, I'm not advocating focusing only on eigenvalues and eigenvectors of matrices in the lead and the overview as it was before March 2016. Rather, I'm arguing that the parallels between the general case and the matrix case are clearer when the equation notation is used for both,
 * $$T(\mathbf{v}) = \lambda \mathbf{v}, $$
 * $$Av = \lambda v,$$
 * rather than using different notation for each,
 * $$T:\mathbf{v} \mapsto \lambda \mathbf{v}, $$
 * $$Av = \lambda v.$$
 * Sławomir, we could also use $$Tv=\lambda v$$, but that notation wouldn't be consistent with the linear map article or the sources I'm familiar with for this topic.
 * Whatever we settle on, it should be consistent across the lead, overview, and general definition sections. A compromise might be to use the equation notation in all three, then in the general definition section add a sentence stating that the $$\mapsto$$ notation is also valid and citing a source to back it up. Error9312 (talk) 18:17, 18 August 2016 (UTC)
 * I don't object to writing $$T(\mathbf v)$$. However, as far as I am aware, writing $$T(v)$$ for linear transformations and $$Av$$ for matrices is not a distinction that is generally supported by sources, and I don't think it is helpful to insist on that distinction here.  Indeed, many authors write $$Tv$$ for the action of a linear transformation on a vector.  Some even prefer, for their own mysterious reasons, to denote this $$vT$$.  Anyway, generally speaking, consistency between different Wikipedia articles is too much to hope for, and we usually just ask that an article be internally consistent as far as is possible.   Sławomir Biały  (talk) 20:28, 18 August 2016 (UTC)


 * I do object to ...
 * ... inconsistency in notation throughout one given Wikipedia article, favouring more modern notation (evolution!), which has reached general acceptance throughout the whole pertinent literature: the \rightarrow specifying domain and co-domain, the \mapsto defining properties or notation of the map.
 * ... blurring the difference between
 * $$T:\mathbf{v}\mapsto \lambda \mathbf{v}, \quad $$ and
 * $$Av = \lambda v,$$
 * ... as the second employs an additional (representational) object $$A$$ to express the desired relations, and refers to an additional operation (matrix multiplication). I am indecisive whether to write $$T(\mathbf v)$$ or $$T\mathbf v$$. A subtle discrimination between variables and distinct values, based on this notational difference, is in my experience only rarely found. Operating from the left or the right, respectively, are widespread diversifications in math notation. :)
 * ... calling matrices "in parallel" to "linear maps". In this here context I prefer to see them as "specific representations" of linear maps, which impede the important concept of basis independence, are of extreme importance in numerical calculations only, and additionally are only appropriate for finite-dimensional spaces. They should be treated in the article for that importance and for historical, or heuristic, reasons, but should not clutter the foundations too much.
 * I see "eigenspaces" as properties of linear maps, living in arbitrary vector spaces (modules?), and they should be treated as such, without fencing them in by revering historical habits and education. Finally, I do not think that a good deal of the originally presented points are not affected by this discussion. Purgy (talk) 08:39, 19 August 2016 (UTC)
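The relation at the center of this exchange, that a matrix representation $$A$$ of a linear map $$T$$ satisfies the same defining equation $$Av = \lambda v$$ as the abstract $$T(\mathbf{v}) = \lambda \mathbf{v}$$, can be checked numerically; a minimal sketch with an illustrative matrix (assuming NumPy is available):

```python
import numpy as np

# Illustrative matrix representing a linear map T on R^2 in the standard basis.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each pair satisfies the defining equation A v = lambda v,
# i.e. T(v) = lambda v for the map T that A represents.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

The check is basis-dependent only in the sense that $$A$$ is; the eigenvalues themselves are properties of the represented map.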

Suggestion for a new lede
How about the following (meanwhile withdrawn):

______________________________________________

In linear algebra, an eigenvector or characteristic vector of a linear transformation $$T$$ is a non-zero vector that does not change its direction under application of this linear transformation. In other words, if $$\mathbf{v}$$ is a non-zero vector, then it is an eigenvector of a linear transformation $$T$$ exactly if $$T(\mathbf{v})$$ is a scalar multiple of $$\mathbf{v}$$. This condition can be written as the mapping
 * $$T:\mathbf{v}\mapsto \lambda \mathbf{v},$$

where $$\lambda$$ is a scaling factor, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector $$\mathbf{v}.$$

There is a one-to-one correspondence between n by n square matrices and linear transformations from an n-dimensional vector space to itself. So for finite-dimensional vector spaces, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations or the language of matrices. Such a linear transformation $$T$$ can be uniquely represented as an n by n square matrix $$A$$, and the vector $$\mathbf{v}$$ by a column vector, which is an n by 1 matrix. The above mapping is then rendered as a matrix multiplication on the left-hand side and as a scaling of the column vector on the right-hand side in the defining equation
 * $$A\mathbf{v} = \lambda \mathbf{v},$$

which holds for eigenvectors $$\mathbf{v}$$ and corresponding eigenvalues $$\lambda,$$ belonging to the linear transformation $$T$$ represented by the matrix $$A.$$ Therefore these are usually called the eigenvectors and eigenvalues of the matrix.

Geometrically, an eigenvector corresponding to a real, nonzero eigenvalue points in a direction that is stretched by the transformation and the eigenvalue is the factor of this stretching. If the eigenvalue is negative, the direction is reversed.

__________________________________

Please, comment. Purgy (talk) 16:45, 21 August 2016 (UTC)


 * I'm not sure it is better than the current lead, and there are several things that are worse. Firstly, the notation $$\mapsto$$ is arguably not used correctly, or at least is misleading.  When we write $$f:x\mapsto f(x)$$ we usually interpret this as a lambda expression, not something only true for a particular value of x.  Secondly, I don't see how mentioning a one-to-one correspondence is helpful.  What we actually mean (and the current lead says) is that the linear transformation is represented by the matrix.  This is much stronger than "one-to-one correspondence".  I'm not clear what else is different about the proposed lead.   Sławomir Biały  (talk) 18:10, 21 August 2016 (UTC)


 * While I can appreciate your reservations to the \mapsto, I lack understanding for "representing" being stronger than "one-to-one" in this here context, if one wants to avoid "isomorphisms" or the like. In my effort I tried to collect and compress the current content under the premise of minimized changes. I am not shy to confess that I am eager to repress the general use of language of matrices in math articles wherever they impose their, partly mentioned, native restrictions. Would not leaving out the one-to-one connection weaken the matrix position in even finite dimensional environments still more? In trying to contribute to improvement of this article, I certainly will never fight for some specific content. Purgy (talk) 05:43, 22 August 2016 (UTC)


 * What is meant is that the linear transformation is represented as a matrix, not that there is a one-to-one correspondence between the set of linear transformations and the set of matrices. Representation is the mathematically correct term here.   Sławomir Biały  (talk) 09:48, 22 August 2016 (UTC)

little text error
In the section: Algebraic multiplicity second paragraph: Whereas Equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, I had to read it a couple of times to realize the word of was missing.

I did a little linear math 40 years ago and I enjoy reading this text! --Caretta.nl (talk) 10:18, 6 July 2017 (UTC)
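The factorization described in the quoted sentence can be checked symbolically; a minimal SymPy sketch with an illustrative matrix having a repeated eigenvalue:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Illustrative matrix: eigenvalue 2 with algebraic multiplicity 2,
# eigenvalue 5 with algebraic multiplicity 1.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 5]])

# Characteristic polynomial: a product of n = 3 linear terms, which groups into
# d = 2 distinct factors, each raised to the power of its algebraic multiplicity.
p = A.charpoly(lam).as_expr()
assert sp.expand(p - (lam - 2)**2 * (lam - 5)) == 0
```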


 * Thanks. :) Purgy (talk) 12:15, 6 July 2017 (UTC)

Eigenvalues and eigenfunctions of differential operators
At the beginning of this section it is mentioned that "eigenvectors and eigenvalues make sense also in infinite-dimensional Hilbert or Banach vector spaces". However, it seems to me that this is misleading: they make sense in any (finite or infinite) vector space regardless of the topology. — Preceding unsigned comment added by 147.122.31.17 (talk) 13:31, 18 July 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Eigenvalues and eigenvectors. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20151101101339/https://www.lsa.umich.edu/UMICH/math/Home/Undergrad/Ugrad_Courses.pdf to https://www.lsa.umich.edu/UMICH/math/Home/Undergrad/Ugrad_Courses.pdf
 * Added archive https://web.archive.org/web/20100325112901/http://khanexercises.appspot.com/video?v=PhfbEr2btGQ to http://khanexercises.appspot.com/video?v=PhfbEr2btGQ

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 11:05, 18 September 2017 (UTC)

Colloquial definition in lead
The recently modified interpretation of the definition in the lead, "In the graphic setting of real vector spaces the direction of an eigenvector does not change, or is exactly reversed, under this transformation, just its length may be arbitrarily affected" is improper for several reasons. No matter how this is reworded, it will still have problems. I recommend removing it altogether. The first sentence, "In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that only changes by an overall scale when that linear transformation is applied to it" is enough to give the reader a feel for what an eigenvector is, without any inconsistencies, before reading the technical definition.—Anita5192 (talk) 16:52, 16 October 2017 (UTC)


 * The two edits before mine triggered my action: I felt compliant with the perception that scaling by (-1) does not leave the direction of a vector unchanged, and I wanted to add a visually supporting formulation to the formally fully correct version of scaling the vector. Not sure about the effects of my newbie support, I added, fully intentionally, a "?" to my efforts. Since there are recommendations of removal, I will undo my edit. Perhaps someone finds a better way to support the elementary graphic geometric aspects of real EVs. Purgy (talk) 17:27, 16 October 2017 (UTC)


 * I was not faulting you. I think all the recent edits were performed in good faith.  However, regardless of everyone's good intentions, I think the first and third sentence are enough.  I don't believe we need to refer to specific vector spaces to give an intuitive feel to something very general.  Regards.—Anita5192 (talk) 17:40, 16 October 2017 (UTC)


 * I think something like this would work better as an image caption, much like the image in the overview section (although I also think a better illustration should be found for this purpose). But, as text, "graphic setting of real vector spaces" strikes me as likely to be confusing to the target audience.   Sławomir Biały  (talk) 22:49, 16 October 2017 (UTC)


 * I never felt faulted in the slightest way, and I certainly see the confusing aspects in my weak formulations, but I have no grasp on their improper-ness, and I still think that eliminating the previous remarks about direction (which still appear later on) makes the learning curve required for the lede steeper. Maybe an animation would be best. Purgy (talk) 07:42, 17 October 2017 (UTC)


 * What I thought were improper were several concepts either not yet defined, or not well defined, e.g., 1. graphic setting, 2. direction, 3. reversed, 4. length. I also thought it unrealistic to restrict the topic to real vector spaces.  E.g., in $Z_{2}$, the only vector that could be an eigenvector is 1, and the only scalar multiples of 1 are 1 and 0.  The terms direction and reverse are only trivially represented, at best.—Anita5192 (talk) 17:52, 17 October 2017 (UTC)

Adding Application to maxima-minima of multivariable functions
Eigenvalues are used in determining if a point is a local maximum, minimum, or saddle point by calculating the eigenvalues of the Hessian matrix; the full article on this is Second partial derivative test. — Preceding unsigned comment added by Loneather (talk • contribs) 10:58, 6 December 2017 (UTC)
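A minimal sketch of the test described above, using an illustrative function with a saddle point (assuming NumPy is available):

```python
import numpy as np

# Second partial derivative test for f(x, y) = x**2 - y**2 at its critical point (0, 0).
# The Hessian of this particular f is constant:
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigs = np.linalg.eigvalsh(H)  # real eigenvalues of a symmetric matrix, ascending

if np.all(eigs > 0):
    kind = "local minimum"
elif np.all(eigs < 0):
    kind = "local maximum"
elif eigs.min() < 0 < eigs.max():
    kind = "saddle point"
else:
    kind = "inconclusive"  # some eigenvalue is zero

# Mixed signs (eigenvalues -2 and 2) identify a saddle point at (0, 0).
```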

Etymology
The article claims that the German prefix "eigen-" means "proper" or "characteristic". I don't want to simply edit that statement, because it is linked to a source, but actually the main meaning of "eigen" is "own" (as in "my own", not as a verb): "mein eigenes Haus" = "my own house". So "Eigenwert" (eigenvalue) means something like "its very own value". Unless I'm missing a special meaning of "proper" (I'm German), this translation appears inappropriate to me. "Characteristic" fits better, but the main meaning "own" should be mentioned first in my opinion. 217.248.11.10 (talk) 21:24, 15 March 2018 (UTC)


 * The source cited reads:

eigen, adj. (Dat.) proper, inherent; own, individual, special; specific, peculiar, characteristic; spontaneous; nice, delicate, particular, exact; odd, strange, curious; ticklish;…
 * Eigenvectors are also called characteristic vectors in some textbooks. The "own" meaning is not the most relevant here.—Anita5192 (talk) 21:52, 15 March 2018 (UTC)


 * If you look beyond an English-German dictionary (Grimm or Adelung), you will find the Greek root ἔχειν, confirming the meanings of "property" and "ownership" as the core meaning of "eigen", used as a prefix, as adjective, or even as verb ("eignen") in a field with these notions as its center. Former ages, where personal property determined the perceived personality to a greater extent, already coined the view that the "property one owns" makes up (to a good deal) the "character" of a person. As I perceive it, the translations of "eigen" to "own", "characteristic", "specific", ... immediately hit the spot. Maybe the intended meaning of "proper" in this context, prefixed to the noun ("proper value"), is less immediate to a non-native speaker, compared to the postfixed use ("value proper"). Other translations, given in the source, result from a (factual) slight shift in meaning to the pejorative side ("peculiar", "strange" (= "curious"), ..., but still "characteristic"); others are -say- rare, if not curious ("spontaneous", "exact", "ticklish").


 * As a native German speaker (I'm Austrian) I would -unauthorized, but spontaneously- prefer to use "own" wrt material goods, but abstract conceptions are "proper" to me. Honestly, I do think that linear maps "own" their "eigen"values, in the same sense as I "own" (without any rights) my mental conceptions of meanings. So maybe changing the order of "own" and "proper" is merited, or it is not. :D Cheers, Purgy (talk) 08:28, 16 March 2018 (UTC)


 * The source cited supports the possible meanings of the prefix, but not the specific choices of "proper" and "characteristic." The editor who first inserted this evidently left no source or explanation for his or her choice of meanings.  Most textbooks refer to "eigenvectors" as "characteristic vectors," but do not use other meanings.  I would like to know the history of the term "eigenvector" and why the prefix "eigen–" was chosen, but none of my sources address this.  The German article at [] indicates that "eigen–" means "characteristic quantities," and dates to a publication by David Hilbert in 1904.  Perhaps we should remove the prefix "proper" from the article.—Anita5192 (talk) 18:08, 16 March 2018 (UTC)

Historical origin of the use of lambda for eigenvalues?
My guess: it comes from the early works of linear algebra, where eigenvalues and eigenvectors arose from analyzing wave equations; there lambda would be used for wavelength, and different modes (eigenvectors) would correspond to various special solutions that can be linearly combined. It was then set in stone when essentially the same was done to Schrödinger's equation in the Hamiltonian formulation of QM. Unfortunately it is hard to find sources on when the lambda symbol became popular for eigenvalues and what the real origin of this popularity is. 2A02:168:2000:5B:94CB:836:78C3:226E (talk) 12:17, 24 June 2020 (UTC)

Lead is now ineffective, and possibly wrong
"an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes by a scalar factor when that linear transformation is applied to it. "

This fails to make clear that the salient feature of an eigenvector is that it is a vector in the direction in which the linear transformation applies no rotation. As it stands:
 * 1) the description is incorrect in that it doesn't exclude all the directions in which the linear transformation applies a scalar factor and a rotation.
 * 2) It does rule out genuine eigenvectors whose eigenvalue happens to be one.

I suggest some rewording that eliminates these incorrect aspects, and makes clear that eigenvector is about the direction of non-rotation, rather than whether or not there is scaling. Gwideman (talk) 14:11, 22 February 2021 (UTC)


 * I don't see anything wrong with the definition above. In other directions a linear transformation need not be a rotation; it could, for example, be a shear. The definition need not exclude other directions; the definition is about what happens to an eigenvector—not what happens to other vectors. It does not rule out eigenvalues of one; one is a valid eigenvalue and is encompassed by the definition above.—Anita5192 (talk) 17:03, 22 February 2021 (UTC)


 * From the definition section:
 * "If $T$ is a linear transformation from a vector space $V$ over a field $F$ into itself and $v$ is a nonzero vector in $V$, then $v$ is an eigenvector of $T$ if $T(v)$ is a scalar multiple of $v$. This can be written as
 * $$T(\mathbf{v}) = \lambda \mathbf{v},$$
 * where $λ$ is a scalar in $F$, known as the eigenvalue "
 * This is not the same as "changes by a scalar factor". It is the same as "changes only by a scalar factor, or remains unchanged".
 * To answer your points:
 * "the definition is about what happens to an eigenvector—not what happens to other vectors." Of course it's also about other vectors! We're trying to state criteria by which all those other vectors fail to qualify as eigenvectors.
 * "In other directions a linear transformation need not be a rotation; it could, for example, be a sheer." Shear describes the transformation of the plane (for 2D), not the transformation of an individual vector. When a shear is applied, most vectors rotate. Eigenvector identifies the ones that do not. There is an excellent visualization on YouTube channel 3Blue1Brown titled "Eigenvectors and eigenvalues | Essence of linear algebra, chapter 14" starting at 2:59, and at 3:12 "any other vector is going to get rotated". (Sorry, Wikipedia blocked the URL.)
 * "It does not rule out eigenvalues of one". The word changes rules out the scalar being 1. Gwideman (talk) 11:09, 1 March 2021 (UTC)
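The shear example referenced above can be checked numerically; a minimal sketch (assuming NumPy is available) using the standard horizontal shear matrix:

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # horizontal shear of the plane

# [1, 0] lies on the shear axis: it is an eigenvector with eigenvalue 1,
# so its direction (and here even its length) is unchanged.
v = np.array([1.0, 0.0])
assert np.allclose(S @ v, v)

# A vector off that axis is rotated: S w is not parallel to w.
w = np.array([0.0, 1.0])
Sw = S @ w
assert abs(Sw[0] * w[1] - Sw[1] * w[0]) > 1e-9  # nonzero 2D cross product
```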


 * I think the fact that the scalar could be one is a moot point. That is, it could be argued semantically that "changing" by a factor of one is not really "changing." Nonetheless I have reworded the lead slightly to clarify this.—Anita5192 (talk) 17:17, 1 March 2021 (UTC)

Eigenvalues and the characteristic polynomial
The characteristic polynomial will only be monic if using the def

$$p_A(\lambda)=\det(\lambda I-A)$$

otherwise the def needs to be

$$(-1)^n p_A(\lambda)=\det(A - \lambda I)$$

where A is an n×n matrix.

See https://en.wikipedia.org/wiki/Characteristic_polynomial

Therefore I have changed the definitions in that section.

In my opinion, $$\lambda I-A$$ should be used throughout this article. For finding eigenvalues it does not matter, but it will in other cases. And I believe it is better to have the correct form from the start when beginning maths.

Edit: At least as far as I know, there is no case where the $$A-\lambda I$$ would be a preferred, except for not having to do as many minus signs in ones equations :)

Mudthomas (talk) 13:09, 25 January 2022 (UTC)
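The sign issue can be illustrated symbolically; a minimal SymPy sketch with an illustrative 3 by 3 matrix (odd n, so the two conventions visibly differ by a sign):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.diag(1, 2, 3)  # illustrative matrix with odd n = 3
n = A.rows

p_monic = sp.expand((lam * sp.eye(n) - A).det())  # det(lambda I - A)
p_other = sp.expand((A - lam * sp.eye(n)).det())  # det(A - lambda I)

# The two conventions differ by a factor of (-1)^n;
# only det(lambda I - A) is monic for odd n.
assert sp.Poly(p_monic, lam).LC() == 1
assert sp.expand(p_monic - (-1)**n * p_other) == 0

# Both conventions yield the same roots, i.e. the same eigenvalues.
assert sp.roots(p_monic, lam) == sp.roots(p_other, lam)
```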


 * I reverted your change of order per BRD. There is no need for the characteristic equation to be monic. Most textbooks use $$(A-\lambda I)$$.—Anita5192 (talk) 16:23, 25 January 2022 (UTC)


 * Quoting the article on the characteristic polynomial, linked from the relevant section: "The characteristic polynomial $$p_A(t)$$ of a $$n \times n$$ matrix is monic (its leading coefficient is $$1$$) and its degree is $$n$$."
 * Edit: While I do not disagree that most textbook use $$(A-\lambda I)$$, I do not believe that they should :) -Mudthomas (talk) 19:45, 25 January 2022 (UTC)


 * Quoting When to use or avoid "other stuff exists" arguments, "In Wikipedia discussions, editors point to similarities across the project as reasons to keep, delete, or create a particular type of content, article or policy. These 'other stuff exists' arguments can be valid or invalid." Although the Characteristic polynomial article indicates that the polynomial is monic, I have never seen this in a reputable source.—Anita5192 (talk) 21:30, 25 January 2022 (UTC)


 * I'm pretty sure I remember it from my courses in both ODE and Numerical linear algebra, but I wouldn't bet my life on it. I'll be back if I find corroborating sources! Mudthomas (talk) 21:39, 25 January 2022 (UTC)
 * From [Lloyd N. Trefethen and David Bau III, Numerical Linear Algebra, SIAM, Philadelphia, ISBN 0-89871-361-7, page 183:]
 * "The characteristic polynomial of $$A \in \mathbb{C}^{m \times m}$$, denoted by $$p_A$$ or simply $$p$$, is the degree $$m$$ polynomial defined by $$p_A(z) = \det(zI-A).$$ Thanks to the placement of the minus sign, $$p$$ is monic: the coefficient of its degree $$m$$ term is 1."
 * Furthermore the article on Characteristic polynomial cites the source [Steven Roman (1992). Advanced linear algebra (2 ed.). Springer. p. 137. ISBN 3540978372.] while the section here has NO source, reputable or otherwise. -Mudthomas (talk) 08:36, 26 January 2022 (UTC)