Talk:Vector space/Archive 5

From Wikipedia, the free encyclopedia

Convex analysis section

Although this should be mentioned, it is the articles real coordinate space and affine space that have to consider this in such detail. As a whole section, it is off-topic here, because convexity requires an ordered field (or, with some adjustment, a valued field). “Vector space” ≠ “vector space over ℝ”. Incnis Mrsi (talk) 11:52, 23 April 2013 (UTC)

I'll remove this section, which is unsourced and jeopardizes the GA status. Kiefer.Wolfowitz 16:17, 1 May 2013 (UTC)
It was not a very good section. Should convex analysis be added to the See Also links? --JBL (talk) 19:22, 1 May 2013 (UTC)

Diagrams

Here are two images which show the main idea of basis vectors - namely taking linear combinations of them to obtain new vectors and that a vector can be represented in more than one basis.

A linear combination of one basis set of vectors (purple) obtains new vectors (yellow). If they are linearly independent, these form a new basis set. The linear combinations relating the first set to the other extend to a linear transformation, called the change of basis.
A vector (red arrow) can be represented in two different bases (purple and yellow arrows).

Feel free to take/leave, edit the captions, request labeling (which I left out to reduce clutter and to see what it would be like without), move them to another article, complain, etc. Best, M∧Ŝc2ħεИτlk 17:20, 25 April 2013 (UTC)

I prefer the traditional parallelepiped design. Maschen’s drawings suggest a Riemannian curvature-style non-commutativity of translations, something that defeats a proper cognition of the concept of basis. Incnis Mrsi (talk) 10:13, 26 April 2013 (UTC)
Parallelepipeds are understandable but I thought at first the arrows would be simple and enough. I'll fix later. M∧Ŝc2ħεИτlk 16:41, 26 April 2013 (UTC)
Done. Better? M∧Ŝc2ħεИτlk 22:49, 26 April 2013 (UTC)
No: currently, the parallelepipeds are ugly. I would recommend drawing all edges (not only the 3 which have to bear arrows), and filling the sides with transparency. The latter is a cheap but moderately effective approach, which is IMHO underestimated by the majority of polyhedron drawers. Incnis Mrsi (talk) 05:30, 27 April 2013 (UTC)
Orientation of a 3-volume on its boundary.
Right; draw all edges and make the sides transparent, something like this (minus all the circulations) --->
I'll get to that soon. M∧Ŝc2ħεИτlk 06:52, 27 April 2013 (UTC)
Done. Better? M∧Ŝc2ħεИτlk 09:27, 27 April 2013 (UTC)
Nice, but I’d suggest a more reasonable use of colours. Vectors and adjacent sides in exactly the same colour is definitely not a good design. Incnis Mrsi (talk) 09:35, 27 April 2013 (UTC)
BTW, what do the orange crooked arrows in File:3d_basis_addition.svg mean? I do not understand the idea behind this series of four pictures. Incnis Mrsi (talk) 09:35, 27 April 2013 (UTC)
I recoloured. The point of having green arrows in a green space for one basis, and similarly blue for another basis, was to show that the bases span the space, with the colour correspondence between bases and the space. Also, the cyclic yellow/orange arrows (now red) are supposed to show that new bases can be found from old ones. The yellow basis can obtain the purple one, and vice versa. M∧Ŝc2ħεИτlk 12:44, 27 April 2013 (UTC)
All right, except the name of File:3d_basis_addition.svg. Does it depict two bases and their mutual representations? Then the name should be changed, and IMHO the corresponding mutually inverse matrices should be presented explicitly on the image description page. A good job, anyway. Incnis Mrsi (talk) 12:54, 27 April 2013 (UTC)
Sorry, I didn't realize your new reply... How about moving File:3d_basis_addition.svg to File:3d basis linear combinations.svg, File:3d basis transformation.svg, or File:3d basis change.svg? I can't think of anything better, that's the whole point of the diagram... Feel free to move it. Thanks for the continuous feedback. M∧Ŝc2ħεИτlk 13:09, 27 April 2013 (UTC)
It is apparently solved by now. Incnis Mrsi (talk) 08:29, 4 May 2013 (UTC)
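The mutually inverse change-of-basis matrices mentioned above can be sketched numerically; the matrices and coordinates below are made-up illustrations, not the values depicted in the image:

```python
import numpy as np

# Columns of B_old are the old (purple) basis vectors in standard coordinates.
B_old = np.eye(3)

# A new basis: each column of P expresses a new basis vector as a linear
# combination of the old ones.  P is the change-of-basis matrix.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
B_new = B_old @ P

# The transformation back is the inverse matrix; the two are mutually inverse.
P_inv = np.linalg.inv(P)
assert np.allclose(P @ P_inv, np.eye(3))

# A fixed vector has different coordinates in the two bases, but both
# coordinate lists describe the same underlying vector.
v = np.array([2.0, 3.0, 5.0])   # coordinates in the old basis
v_new = P_inv @ v               # coordinates in the new basis
assert np.allclose(B_new @ v_new, B_old @ v)
```

This is exactly the content of the captions: linear combinations of one basis yield the other, and the two relating matrices are inverses of each other.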

I'd drop the intensity of the colours of the sides a bit so the vectors are more dominant.--Salix (talk): 13:25, 27 April 2013 (UTC)

OK. Since a lot more feedback is continuously appearing than I expected, here and elsewhere, I'll wait another day for more opinions in case there are any, then do all the proposed changes at once. Thanks in advance. M∧Ŝc2ħεИτlk 13:40, 27 April 2013 (UTC)
3d basis transformation

Since File:3d_basis_addition.svg has changed plenty of times and the name could be better, once and for all I have uploaded a replacement, File:3d basis transformation.svg, which should be clear and correct, satisfying all concerns. M∧Ŝc2ħεИτlk 08:16, 4 May 2013 (UTC)

One more quibble: it is fairly consistent that the parallelepipeds of the purple basis are purple. Thereby the parallelepipeds of the red basis have to be red, and no other colour. Incnis Mrsi (talk) 08:29, 4 May 2013 (UTC)
The parallelepipeds are a light blue, the bases are red and purple (deliberately dark colours so they stand out). M∧Ŝc2ħεИτlk 08:43, 4 May 2013 (UTC)
A linear combination of one basis set of vectors (purple) obtains new vectors (red). If they are linearly independent, these form a new basis set. The linear combinations relating the first set to the other extend to a linear transformation, called the change of basis.
A vector can be represented in two different bases (purple and red arrows).
Here are the purple/red recoloured versions. (The "2" in the names is because I didn't want to overwrite the other versions - if these newest ones are preferred then the others can just be deleted, since they will not be of any use.) M∧Ŝc2ħεИτlk 09:10, 4 May 2013 (UTC)

Adjusting Hermann Grassmann's time frame

In this article Grassmann is given a side swipe, so to speak. This is in fact how it went historically, but in actuality the Grassmanns made an influential contribution dating from around 1827 to 1844. Peano's work is wholly influenced and inspired by Hermann Grassmann's Ausdehnungslehre of 1844, which in turn drew heavily on Justus Grassmann's insights of 1827. The 1862 version of the Ausdehnungslehre was a redaction overseen by Robert Grassmann in unwilling cooperation with Hermann, after which Hermann's earlier work became accessible to a wider local audience! Peano seemed to get it in the 1840s! This is the section I am referring to:

History

Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, Descartes and Fermat founded analytic geometry by equating solutions to an equation of two variables with points on a plane curve.[1] To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines and planes, which are predecessors of vectors.[2] This work was made use of in the conception of barycentric coordinates by Möbius in 1827.[3] The foundation of the definition of vectors was Bellavitis' notion of the bipoint, an oriented segment one of whose ends is the origin and the other one a target. Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions and biquaternions by the latter.[4] They are elements in R2, R4, and R8; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows for a harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations.[5] In his work, the concepts of linear independence and dimension, as well as scalar products are present. Actually Grassmann's 1844 work exceeds the framework of vector spaces, since his considering multiplication, too, led him to what are today called algebras. Peano was the first to give the modern definition of vector spaces and linear maps in 1888.[6]

An important development of vector spaces is due to the construction of function spaces by Lebesgue. This was later formalized by Banach and Hilbert, around 1920.[7] At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.[8] Vector spaces, including infinite-dimensional ones, then became a firmly established notion, and many mathematical branches started making use of this concept.

Jehovajah (talk) 08:41, 14 June 2013 (UTC)

The story here is very different from that in Euclidean vector#History. A good reference I've found is
Michael J. Crowe, A History of Vector Analysis; see also his lecture notes on the subject.
--Salix (talk): 09:14, 14 June 2013 (UTC)

References

  1. ^ Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91
  2. ^ Bolzano 1804
  3. ^ Möbius 1827
  4. ^ Hamilton 1853
  5. ^ Grassmann 2000
  6. ^ Peano 1888, ch. IX
  7. ^ Banach 1922
  8. ^ Dorier 1995, Moore 1995

Is it worth mentioning that more general definitions exist? From Abstract Algebra by Pierre Antoine Grillet (GTM 242):

A vector space is a unital module over a division ring

For one thing, this allows for vector spaces over the non-commutative field of quaternions, which I have seen being referred to as true vector spaces elsewhere. YohanN7 (talk) 10:32, 25 March 2014 (UTC)
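As a concrete sketch of why the quaternions qualify under Grillet's definition (my own illustration, not code from any of the cited texts): multiplication is noncommutative, yet every nonzero element has a two-sided inverse, so H is a division ring, and left and right scalar multiplication on a quaternionic vector space genuinely differ.

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# Noncommutative: i*j = k but j*i = -k, so multiplying a "vector" j by the
# "scalar" i on the left and on the right gives different results.
assert qmul(i, j) == (0, 0, 0, 1)
assert qmul(j, i) == (0, 0, 0, -1)

def qinv(a):
    """Inverse of a nonzero quaternion: conjugate over squared norm."""
    w, x, y, z = a
    n = w*w + x*x + y*y + z*z
    return (w/n, -x/n, -y/n, -z/n)

# Every nonzero element is invertible, so H is a division ring, not a field.
q = (1, 1, 1, 1)
assert qmul(q, qinv(q)) == (1.0, 0.0, 0.0, 0.0)
```

The left/right distinction mentioned below (as with left and right modules) comes exactly from this failure of commutativity.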

During my ongoing work on noncommutative geometry I frequently discuss right and left modules. But I have never heard of right and left vector spaces; I know only (ambilateral) vector spaces. It is hardly possible to include every minority opinion in an article. Incnis Mrsi (talk) 12:33, 25 March 2014 (UTC)
It's not a matter of opinion. It is a matter of what can be found (as definitions) in reliable (and as in this case, reputable) sources. While you mention it, left and right vector spaces aren't unheard of either, just like left or right modules. YohanN7 (talk) 12:59, 25 March 2014 (UTC)
And Hungerford's Algebra (GTM 73) gives the same definition. That exhausts my supply of pure math algebra texts. The definition is also used in Lie Groups - An Introduction Through Linear Groups by Wulf Rossmann. It can't be all that uncommon or all that fringe. YohanN7 (talk) 14:22, 25 March 2014 (UTC)
You can add to the list Algebra by M. Isaacs (see pg 185), First course in noncommutative rings by T.Y. Lam (see page 4), both of Jacobson's abstract algebra books on multiple pages, as well as two less-well-known works by respected authors Geometric algebra by E. Artin and Linear algebra and geometry by Kaplansky. It is definitely a well-established usage in abstract algebra. However, that said, there are mountains of linear algebra books which will only use fields. This is justified since talking about bilinear forms does not go well with division rings, and such books are bound to cover bilinear forms.
I'm not entirely convinced that the main definition of "vector space" should use division ring. I definitely think this expanded usage deserves mention, but staying with "field" would better reflect the bulk of the literature, and would better satisfy the needs of readers who aren't very high in mathematics. Rschwieb (talk) 13:05, 26 March 2014 (UTC)
IMO, this terminology question has to be related to the use of "division ring" vs. "skew field" or "non-commutative field". I guess (I have not verified in the cited books) that most of the authors who talk of "vector spaces" in the non-commutative case use "skew field" or "noncommutative field". In other words, the phrase "vector space over a division ring" seems much less common than "vector space over a skew (or non-commutative) field". Again, this would deserve to be checked. This being said, as we do not use "field" in the non-commutative case, I agree with the conclusion of Rschwieb. D.Lazard (talk) 14:53, 26 March 2014 (UTC)
This collective conclusion would be very nice if included in the article. Or perhaps in an article devoted to the topic (it seems like a particularly notable type of module), or failing that, in a section of Module (mathematics). —Quondum 15:14, 26 March 2014 (UTC)

Is it okay then if I add a sentence (at most two) mentioning this generalization? I'll mention it in terms of non-commutative fields and make division rings a parenthetical remark. Quaternions will be mentioned as an example. Quondum's remark came while I was previewing. The remark would go into every affected article (e.g. Field (mathematics) should mention the terminology of non-commutative fields). YohanN7 (talk) 15:59, 26 March 2014 (UTC)

Perhaps a short section titled "Vector spaces over skew fields, or non-commutative fields, or division rings" lower down in the article? --JBL (talk) 16:29, 26 March 2014 (UTC)
An entire section seems like overkill. Why not just a note with refs at the end of the definition section? Rschwieb (talk) 16:32, 26 March 2014 (UTC)
We must not forget that a large part of the article generalizes verbatim to the non-commutative case, namely everything about bases, dimension, linear maps, matrices (in fact, sections 4, 5.1, first half of 5.2, 6.1 and 6.2), and also Gaussian elimination and row echelon form. The properties that do not generalize are the bilinear and multi-linear ones (determinant, tensor product, ...). The reader must also be informed of that. D.Lazard (talk) 17:05, 26 March 2014 (UTC)
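To make D.Lazard's point concrete, here is a minimal sketch (my own, not from the article) of row reduction to echelon form over ℚ using exact fractions. Note that only addition, multiplication, and division by nonzero scalars are used, which is why the procedure carries over to skew fields (with appropriate care about the side on which scalars act), whereas determinants do not.

```python
from fractions import Fraction as F

def row_echelon(M):
    """Reduce a matrix (list of rows of Fractions) to row echelon form,
    using only the four field operations."""
    M = [row[:] for row in M]           # work on a copy
    r = 0
    for c in range(len(M[0])):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate all entries below the pivot.
        for i in range(r + 1, len(M)):
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[F(2), F(1)], [F(4), F(3)]]
E = row_echelon(A)
assert E[1][0] == 0      # the below-pivot entry has been eliminated exactly
```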
Good point. This is a GA article, and shouldn't be messed around with too much. I vote for a couple of sentences at the bottom of the "Definition section". Some authors... YohanN7 (talk) 17:20, 26 March 2014 (UTC)
And now I really see your point, which should mean a sub-section! YohanN7 (talk) 17:27, 26 March 2014 (UTC)
I would like to point out that in projective geometry it is almost mandatory to define vector spaces over skewfields (and yes, we do talk about right and left vector spaces). Desarguesian planes are those that are defined in the usual way from vector spaces over skewfields, while Pappian planes come from vector spaces over fields. Skewfield does seem to be the preferred term (over division ring and non-commutative field), but I have seen the other terms used by geometers from time to time. Bill Cherowitzo (talk) 04:43, 27 March 2014 (UTC)
Agreed on most points, especially the role of division rings in non-Desarguesian geometry (which I've had the great pleasure of learning about this past year). However, in my experience "skew field" is not preferred over "division ring", and ngrams seems to corroborate that. Rschwieb (talk) 17:03, 27 March 2014 (UTC)
@D.Lazard: I really like the approach you described, and I'm not categorically opposed to a section. I just didn't want a section devoted to saying "some places commutativity is dropped" and nothing else :) Rschwieb (talk) 17:37, 27 March 2014 (UTC)
While I don't think it is particularly crucial, I should have said that skewfield was preferred by geometers. One thing that the n-gram viewer doesn't provide is "who is saying what". I would hazard a guess that the spike in that graph in the late 40's and early 50's is due almost entirely to geometers (it was a very active time period for this type of geometry). Of course the books/articles that were written then are now "classics" and the terminology lives on in them, but the field is fairly well mined and you don't see as much new work coming from those who use this language. Bill Cherowitzo (talk) 18:01, 27 March 2014 (UTC)

Is a "component" a vector?

It would be useful to say whether a "component" of a vector V is a vector (in the same vector space as V) or whether "component" refers to a real number, or member of the abstract field used in defining the vector space. Perhaps "component" is used ambiguously to mean either a vector in the same space as V or a real number that appears as an entry in a coordinate representation of V. If there is such an ambiguity, it would be useful to point this out.

Tashiro (talk) 21:11, 26 November 2014 (UTC)

This is a good observation. This article uses the term component in the second sense only, but I agree that an article such as this should define the term explicitly, and I would like to see the first sense defined here as well, since that is the "correct" sense, the second sense being better called a basis coefficient (a basis coefficient multiplied by its basis vector yields the component of a vector in the direction of that basis vector when decomposed suitably). It should also be noted that Component of a vector redirects to this article, which essentially mandates that the term be defined here. —Quondum 22:56, 26 November 2014 (UTC)
The term "component" is explicitly defined in Basis (linear algebra). IMO, Component of a vector must be redirected there (and I will do that), as the term "component" or "coordinate" is meaningful only when a basis has been chosen. However, these terms also have to be defined here, in the section "Basis and dimension". D.Lazard (talk) 10:00, 27 November 2014 (UTC)
This definition seems to fit with "A vector of dimension n is an ordered collection of n elements, which are called components." While it may be the dominant use, it is reminiscent of the dichotomy of the imaginary part of a complex number, and of a quaternion. Surely we should also define the use that interprets the components as vectors? —Quondum 16:12, 27 November 2014 (UTC)
Depends, are there sources for that use? TR 17:08, 27 November 2014 (UTC)
It is admittedly a minority use, but examples can be found, e.g. "When a vector is resolved into two vectors which are perpendicular to each other, the two vectors are called rectangular vector components, or more briefly the vector components, of the original vector." and "These two orthogonal forces, w1 and w2, are vector components of F." I also see several explicit references to "scalar components", thus distinguishing between scalar components and vector components. I realize that the one I've found might not be particularly notable, and that authors such as Élie Cartan use the term component to mean scalar component. —Quondum 02:30, 28 November 2014 (UTC)
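The distinction drawn in those quotes can be made concrete with a small made-up numeric example (the vector F and basis here are my own, not from the sources cited):

```python
import numpy as np

F = np.array([3.0, 4.0])       # the original vector
e1 = np.array([1.0, 0.0])      # an orthonormal basis
e2 = np.array([0.0, 1.0])

# Scalar components: the numbers appearing in the basis expansion of F.
s1, s2 = F @ e1, F @ e2        # 3.0 and 4.0

# Vector components: each scalar component times its basis vector; these are
# themselves vectors in the same space, perpendicular to each other, and
# they sum back to the original vector.
F1, F2 = s1 * e1, s2 * e2
assert np.allclose(F1 + F2, F)
assert F1 @ F2 == 0.0          # the two vector components are orthogonal
```

So "component" in the first sense names F1 and F2, while in the second sense it names the scalars s1 and s2.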
On further thought, I wonder whether the terminology splits on lines of discipline? For example, in especially engineering and possibly in physics, the idea of a decomposition into vector components may be fairly well-known, but not in mathematics. —Quondum 02:34, 28 November 2014 (UTC)
In fact, we have two redirects, with different targets Component of a vector and Vector component, and only one is cited in the dab page component. There is also the term "homogeneous component", which is used without links in several articles, and defined in homogeneous polynomial. In fact, all these meanings refer to the same broad concept, the components of an element of a direct sum, the case of scalar components corresponding to the case where a unit or basis vector has been chosen in each summand of the direct sum. Unfortunately, the term "component" is not defined in Direct sum nor in Direct sum of vector spaces. IMO, these two latter article must be edited to add a definition of "component", Vector component must be added to component, and component (direct sum) must be created as a redirect to Direct sum. Here, only a slight modification is needed to include links to Vector component and component (direct sum). D.Lazard (talk) 10:07, 28 November 2014 (UTC)
I see too that Scalar component redirects to Vector projection, with the term not appearing at all in the article (it should appear in bold). I like the introduction of the concept of a decomposition as a direct sum: it gives a suitable general framework in which to define that general use of term. There is clearly a bit of cleaning up needed. —Quondum 22:23, 28 November 2014 (UTC)

Lay Access to Mathematics

I have enjoyed reading Wikipedia math articles over the years and have been very glad to watch the quality of the articles, and their inclusion of the lay readership, consistently improve with time. There is no math concept which cannot be explained to an interested non-mathematician, although clearly many subjects can be explained only superficially. This cursory explanation for lay folks can often take the form of a one- or two-sentence introductory remark at the beginning of an article. It is immensely helpful to non-mathematicians who are learning, and it in no way inhibits the subsequent sentences from becoming as technical as is necessary to be accurate and helpful to the most gifted and learned mathematician.

So... thank you for this and other very well written articles.

At the end of the second paragraph I read ... "vectors are best thought of as abstract mathematical objects with particular properties, which in some cases can be visualized as arrows." At this point it would be much appreciated if an example of an alternative visualization for vectors was given. I can't even think of an alternative right now.

Again, thanks to all you mathematicians out there who have created this and other excellent articles for all of us. — Preceding unsigned comment added by 97.125.84.64 (talk) 18:31, 11 February 2015 (UTC)

This particular article (or at least its lead) is well worth fine-tuning to lay intuition. I've removed the effectively redundant phrase "which in some cases can be visualized as arrows". Providing an alternative visualization is probably not for the lead; my instinct for an alternative would be to give an example of a vector space over a finite field, but that is getting too abstract. If anything, one might try to instill some intuitions, such as that distances and angles have no meaning at all in a vector space without an inner product, but whether two vectors are parallel (one is a scalar multiple of the other) is always meaningful. —Quondum 20:07, 11 February 2015 (UTC)
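Quondum's point that "parallel" needs no inner product can be sketched as follows (a hypothetical helper of my own, not code from the article): deciding whether one vector is a scalar multiple of another uses only the field operations, never lengths or angles.

```python
from fractions import Fraction

def is_scalar_multiple(u, v):
    """True if v = c*u for some scalar c, using only field arithmetic
    on the coordinates -- no norms or angles required."""
    for ui, vi in zip(u, v):
        if ui != 0:
            c = Fraction(vi, ui)    # candidate scalar from first nonzero slot
            return all(vj == c * uj for uj, vj in zip(u, v))
    # u is the zero vector: only the zero vector is a multiple of it.
    return all(vi == 0 for vi in v)

u = (2, 4, 6)
assert is_scalar_multiple(u, (3, 6, 9))        # (3,6,9) = (3/2) * u
assert not is_scalar_multiple(u, (3, 6, 10))   # not proportional
```

Nothing here mentions a dot product, which is exactly why parallelism survives in a bare vector space while distance and angle do not.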
I suggest we add back a sentence: When an inner product is present, which is often the case, lengths and angles become meaningful and one can, and should, again think of vectors as arrows. It is in fact explicitly recommended, even in graduate texts, to visualize them this way whenever possible. Nothing is best thought of abstractly if there is a concrete visualization. I'll make an attempt right away. YohanN7 (talk) 22:13, 11 February 2015 (UTC)
Please, can we aim more for simplicity? The sentences "Vectors in general vector spaces don't necessarily have to be arrow-like objects as they appear in the mentioned examples because the concepts of length and direction don't always have a meaning. In such cases, vectors are best thought of as abstract mathematical objects, possessing only the properties guaranteed by the defining properties of a vector space. However, if an inner product is present on the vector space, which is often the case like with the dot product in ℝ3, lengths and angles become meaningful again and one can and should visualize vectors as arrows." are very terminology- and pedantry-heavy; it is not necessary in the lead to cover precisely all possible situations. Readers to whom it matters won't have any problem navigating the article even if it doesn't distinguish "vector spaces with inner product" as a special case in the lead section, but readers who have taken a standard 200-level linear algebra class will be completely baffled by this tangent. --JBL (talk) 22:59, 11 February 2015 (UTC)
I think Y's observation applies: geometric (arrow) visualization helps even for thinking of the more abstract case, though abstractly. So the added text should perhaps not simply be dumped. At the same time, the lead is too complex, as Joel says (and already was before the recent edits), so this suggests moving the whole discussion/tangent from the lead into the body. —Quondum 23:12, 11 February 2015 (UTC)
(edit conflict) I agree with JBL. Moreover, the explanation in the quoted sentence is mathematically wrong: it is not because of the existence of a dot product that vectors are represented by arrows; in an affine space there is no dot product, and it is very common to represent translation vectors by arrows. In fact, arrow visualization is related to geometry, and is only used in the context of geometry. Nobody tries to visualize the elements of the vector space of polynomials, or of continuous functions, by arrows. I suggest replacing the quoted sentence by Vectors in general vector spaces may not always be visualized as arrow-like objects as they appear in the mentioned examples. Therefore vectors are best thought of as abstract mathematical objects, possessing only the properties guaranteed by the defining properties of a vector space. This allows the properties of vector spaces to be applied in many different contexts. For example, polynomials and continuous functions form vector spaces. D.Lazard (talk) 23:33, 11 February 2015 (UTC)
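D.Lazard's example of polynomials forming a vector space can be sketched with coefficient lists (an illustrative toy of my own, not any library's API): addition and scaling act coefficientwise, with no arrow picture anywhere in sight.

```python
def poly_add(p, q):
    """Add two polynomials given as coefficient lists (lowest degree first)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))     # pad the shorter polynomial with zeros
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(c, p):
    """Multiply a polynomial by a scalar."""
    return [c * a for a in p]

p = [1, 0, 2]    # 1 + 2x^2
q = [0, 3]       # 3x
assert poly_add(p, q) == [1, 3, 2]       # 1 + 3x + 2x^2
assert poly_scale(2, p) == [2, 0, 4]     # 2 + 4x^2

# Vector space axioms such as distributivity hold coefficientwise:
assert poly_scale(2, poly_add(p, q)) == poly_add(poly_scale(2, p), poly_scale(2, q))
```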
The last sentence, pointing out examples of elements of vector spaces, really belongs in the lead somewhere. (I'd use polynomials and matrices as examples.) Beyond that I don't really agree. It doesn't say "arrows if and only if dot product". Also, I'd guess based on experience that mathematicians can be divided into two classes: those that think of a vector as an arrow and say so, and those that think of vectors as arrows but don't admit it. It is too powerful an intuitive tool to let go of. For instance, John M. Lee, Introduction to Topological Manifolds, urges the reader to think of tangent vectors in a manifold (usually modeled as derivations at a point) as arrows, and that book has plenty of arrows in its figures. So it is not a high-school-only concept.

I returned to this thread after only a day had passed to find all the above additions, all of which have merit. So I did what I had not done before and read the entire talk page discussion specific to the lead of this article. I was glad to find at least one specific solicitation of lay impressions regarding the lead. Therefore, here are a couple of example quotes from this talk page which are extremely good for my lay intuition and immediate learning.

"A vector space is a set" GeometryGirl (talk) 13:44, 22 January 2009 .... If true, these few words are a great inclusion in the lead when combined with Jakob.scholbach's quote immediately below.

"(the objects in the collection are vectors, the collection of vectors is the vector space). ... Jakob.scholbach (talk) 23:22, 22 January 2009" .... This is very clear and concise to me. Is it acceptable to mathematicians other than the author? If so, it's great!

"In mathematics, a vector space is an algebraic structure consisting of a set of vectors together with two operations, addition and scaling. These vectors track one or more quantities, such as displacement (the distance travelled and in which direction), or force. To qualify as a vector space, the set and operations must satisfy a few conditions called axioms. ... Alksentrs (talk) 17:55, 24 January 2009" .... Very well written. Again, combine this with Jakob.scholbach's statement letting us know it is the actual collection which is referred to as the "space".

"...it is not the purpose of the lead to teach readers what a vector space is, but to whet their appetite to learn more." Geometry guy 23:15, 24 January 2009

My purpose in pasting the above quotes is NOT to instruct or even suggest how this article be written, as you all are doing great without my help. My sole purpose is to give the mathematicians writing this and other articles a couple of concrete examples, pulled from this talk page, of very short, concise statements which work very well for lay readers like myself. I include Geometry guy's statement about the purpose of the lead not being to teach the reader the entirety of the topic because it is of course absolutely true. As a lay reader I don't expect to understand the entirety immediately. What I hope for is a quick, accurate statement which helps me and does not grate on mathematicians because it is essentially wrong in its simplicity. Are these conflicting qualities? No, only hard to achieve, but this is why a great lead is so difficult. So... as I say, assuming Jakob.scholbach's quote above is acceptable to other mathematicians, its inclusion in a lead would be beneficial, because it is exactly the type of statement which embodies the necessary lead qualities, including extreme brevity, sufficient generality, usefulness to lay readers, and accuracy as judged by the acceptance of the statement by learned mathematicians.

Regarding visualizing vectors: someone above says it's better to visualize concretely whenever possible. So long as the concrete visualization does not sacrifice accuracy, I agree, because a concrete visualization often embodies deeper understanding. Being able to picture concretely the relationship between proximity to a mass and the gravity experienced from that mass is deeper than understanding only the associated formula. So I ask: do any of you conceive of vectors concretely in any easily describable way which could be included in a lead... other than as arrows? I found no reference whatever to any other visualization. I ask because such an alternative visualization, if available, would be exactly the type of mind-bending thought most likely to ... "whet their appetite to learn more." (Even if the explanation for the alternate visualization has to wait ... what a great hook!)

Hope this kind of input is useful. — Preceding unsigned comment added by 97.125.84.64 (talk) 05:03, 12 February 2015 (UTC)

Returning to the original post, there are sensible geometric interpretations when at least some additional structure exists. A norm provides geometrical interpretation, a metric almost as much. Already a locally convex vector space has some geometric structure worth mentioning. (Specifically, John B. Conway, A Course in Functional Analysis, devotes a section to geometrical interpretations of the Hahn-Banach theorem in locally convex spaces.) I have no idea how such things could be described in the article though (not in the lead, of course). But it is important, because nobody thinks (whatever they may say) of these spaces in terms of the (rather complex) axioms when seeking inspiration for the next research paper. YohanN7 (talk) 08:14, 12 February 2015 (UTC)

I've put back in the old lead. I disagree rather strongly with the attempts to mathematicize the first paragraph. Also, the problem with visualizing vectors as arrows has nothing to do with "magnitude and direction". Indeed, an arrow is just a directed line segment: this makes sense in any affine space. So vectors, at least over the real numbers, can be visualized as arrows. It probably doesn't make much sense to formally define vectors that way, but so what? One is still free to "visualize" vectors or not, depending on what is most natural for the application. Sławomir Biały (talk) 12:20, 12 February 2015 (UTC)

This version is fine. It does get the point across that arrows are still okay in some cases. This was missing in the version I edited, and I attempted to demathematicize rather than mathematicize it, moving from the purely abstract to leaving in some hope of visualization. After all, it is no coincidence that in all educational systems I am aware of, vectors are introduced as arrows at first. But I don't understand the comparisons with affine spaces; others made that comparison above too. No attempt was made to define either arrows or vector spaces in terms of arrows. YohanN7 (talk) 12:44, 12 February 2015 (UTC)
Sławomir, I'm missing what you mean by "attempts to mathematicize the first paragraph", where I assume you are referring to this edit. I considered it merely a reduction of verbosity to fairly layman-oriented terms, such as "set". The terseness should be an aid: no need for someone to process all those words (e.g. "mathematical structure formed by a collection of elements" instead of "set"). The lead as a whole definitely needs severe trimming; IMO it is twice the length it should be. But as you wish. —Quondum 14:06, 12 February 2015 (UTC)
I object to introducing vector spaces as a set with operations in the first sentence. It is just possible that some readers may not be comfortable with that level of abstraction. Also, I fail to see how excess verbosity plays any role here. The old lead was several hundred bytes shorter than the one after the recent bout of edits. The old version of the lead had been massively worked on, in response to peer review and comments made at both GA and FAC. It does not need to be rewritten, and its length is appropriate for an article of this size (compare to leads of other GA class articles). Sławomir Biały (talk) 16:03, 12 February 2015 (UTC)
Let's separate it out. Part is replacing the definition of a set with "set", and part is about the operators. Are you saying "set" is making it more abstract? (I think it is not, for the average nonmathematician: "collection" is another term that effectively means the same, but is no more familiar.) Or is it the changes relating to the attached operators? Here I might agree with you that the language of the edit might be considered more abstract. —Quondum 21:45, 12 February 2015 (UTC)
The main issue isn't so much the word set, or the notion of operation, as that these concepts aren't really explained. Your version of the lead says a vector space is a "set of vectors". That's fine, but is it defining a vector space as a set of vectors? Or is it something else? It's not clear. We could say "a vector space is a set whose elements are called vectors in this context", but that would be essentially the same as what is currently written, with the trivial substitution of "set" for "collection". The problem with "operations" is that this too is not explained. When we say that there is an operation of vector addition, we mean that two vectors can be added together. When we say that there is an operation of scalar multiplication, we mean that vectors can be multiplied (scaled!) by numbers, that are called scalars. These things need to be spelled out, I think. Sławomir Biały (talk) 11:41, 13 February 2015 (UTC)

Introduction and Definition

The first example of that section fails to note that its vector space is 2D. It is, imho, worth noting. The second example is also 2D, and seems mostly redundant. I question the use of two 2D examples - why not have, as a second example, a triple or quadruple of reals? This gives the impression that vector spaces are typically 2D, which is a mistake. Anyway, let's get down to the real problem with the first example: for two vectors you use a parallelogram to construct their sum. OK. Let me take the two vectors v and v (that is, the same vector twice) and see what happens when I place both starting points at the origin; where do you suppose I will end up? Exactly at the identical point as with a single vector, v! That is, your first example claiming that 2w is equal to w+w has the problem that the parallelogram is NOT a general method for the addition of vectors, and requires the vectors to be DIFFERENT. This seems to me not a trivial issue, since given a general vector addition v+w, unless you happen to know that v ≠ w, you're left without a method (the first example actually shows two contradictory (at face value) methods of addition). So, either the qualifications required for the parallelogram method should be stated or the diagram should remove the 2w = w+w claim, or both. I know it's true that 2w = w+w, but the point is that you can't get there by using the parallelogram method, and so it confuses the issue. There's a clear problem with the elementary (pedagogical) literature in that sometimes vectors all originate from 0 and sometimes are placed head to toe. IF a vector is allowed to extend from any point other than the origin to another point, then it 'exists' in a space different from its vector space ... right? Apples and oranges. Abitslow (talk) 13:49, 26 March 2015 (UTC)

The first two examples come before the definition in order to motivate it. It is therefore normal that they are very elementary. More sophisticated examples are given below in section "Example". Nevertheless, you raise two good points: firstly, in the first example one has to say that the parallelogram rule does not apply to collinear vectors. Secondly, one has to make explicit that the first example reduces to the second one if one represents vectors by their Cartesian coordinates. D.Lazard (talk) 15:40, 26 March 2015 (UTC)
I think the first example could be rewritten so that vector addition is tail to head, rather than using parallelograms. I don't think two examples as such are needed. These used to be combined in a single "motivation" section, which was still pretty weak, but marginally better than dignifying each example with its own level 2 section heading. But rather than quibble about details like that, it seems like we might want to consider doing things rather differently. Surely we can introduce vectors in a better way than assailing the reader with two-dimensional platitudes. Sławomir Biały (talk) 16:23, 26 March 2015 (UTC)
I'm rather surprised that Abitslow's claim that the parallelogram method produces the original vector was not objected to. When constructed according to the definition (opposite sides parallel and of equal length), this gives the expected result: v + v = 2v. Both methods (parallelogram and head-to-toe) always give the correct result. —Quondum 17:06, 26 March 2015 (UTC)
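For what it is worth, the degenerate case is easy to sanity-check in coordinates (a quick numeric sketch, not from the article; the parallelogram construction computes componentwise addition, and the names here are illustrative):

```python
# Vector addition in R^2 done componentwise, which is what the
# parallelogram construction computes geometrically.
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

v = (3.0, 1.0)

# The "degenerate parallelogram" case: adding a vector to itself
# still gives the expected result 2v, not v.
assert add(v, v) == scale(2, v)
print(add(v, v))  # (6.0, 2.0)
```

This does not settle the pedagogical question of how to draw the degenerate parallelogram, only that the two constructions agree numerically.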
You are right, but, IMO, understanding what a degenerate parallelogram is would be too technical here. D.Lazard (talk) 17:19, 26 March 2015 (UTC)
It is more intuitive to use the head-to-toe construction, and it works nicely for all cases, though one does technically have to define the equivalence class of arrows with various "toes". The parallelogram construction has the advantage that all the vectors are rooted at the origin, which is more reflective of a vector space's structure. I remember a nagging disquiet of something "not quite complete" when I first learned about vector addition using the head-to-toe method. We can simply treat the degenerate example explicitly, without referring to the parallelogram. —Quondum 17:35, 26 March 2015 (UTC)
But can't this also be a "teaching moment" from the point of view of the article. "Head to toe" is "Toe to head" is the commutative law. The picture one draws then is a parallelogram, so the fact of Euclidean geometry that the opposite sides of a parallelogram are congruent gives the commutative law. (One could also draw pictures associated with the other vector space axioms. E.g., compatibility of the scalars using similar triangles.) Sławomir Biały (talk) 19:20, 26 March 2015 (UTC)
I have no objection, but in this spirit, if the head-to-toe image is used, the intuition about freely floating identical vectors being the same vector should be mentioned explicitly, and that they should all be translated to the origin to obtain the "actual" vector. I suppose I'm trying to address the slight dissonance between a vector space and a homogeneous space that is so often glossed over. I see that the failure to clearly distinguish between the two is prevalent in WP, e.g. in Euclidean space. —Quondum 20:46, 26 March 2015 (UTC)
Ok, fair enough. The vectors in the homogeneous space version are sometimes defined as equivalence classes of arrows, with two arrows being equivalent if they are opposite edges of a parallelogram and point in the same direction. But if we were to adopt this perspective explicitly, then I think probably my observation about Euclidean geometry giving the commutative law becomes more or less a tautology. Anyway, I think it is not desirable to dwell too much here on such foundational issues to do with directed segments in the plane. It seems to me that we should be able to get away with saying "place the tail of w at the head of v" without too much concern over rigor, possibly supported by a link to Euclidean vector in a footnote, where such issues are discussed at slightly greater length (though, again, perhaps the treatment there is also not quite ideal, partly due to the need of Wikipedia to summarize what is in different sources, with different conventions). Sławomir Biały (talk) 13:32, 27 March 2015 (UTC)
Yes, simply using word choice to imply that the vector must be moved to the head-to-toe position as part of the process of addition (thus boosting the intuition that the "actual" vector is still at the origin) is probably the best way to go. Diagrams might need to depict both the original and the displaced vector. —Quondum 14:58, 27 March 2015 (UTC)
Not too sure really that there is much intuition about vectors floating around actually being "the same". It is just taught that way initially, i.e. one can think of them being the same. The mathematical model justifying the idea is the manifold version of vector spaces with tangent spaces at each point and parallel transport between them. Vectors parallel transported are canonically mapped to each other. I'm not trying to push any kind of idea about article content here, just suggesting that some more afterthought could go in before edits. Maybe a mention that the operation of "placing head to toe" can be formalized. YohanN7 (talk) 15:27, 27 March 2015 (UTC)

Assessment comment

The comment(s) below were originally left at Talk:Vector space/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

Potential Good Article nominee. Tompw 14:15, 5 October 2006 (UTC)

Substituted at 20:13, 26 September 2016 (UTC)

Redundancy of Axioms

This section pertains to some recent activity on notes on the redundancy of the vector space axioms. The notes are currently not present in the article. Here I will explain on a rigorous basis why commutativity is within the logical span of the other axioms (see what I did there?). I feel that once this is established, assuming my argument (well, I got it from Planet Math) is found convincing, it falls under the math calculation exception to the OR rule, i.e., it needs no citation because no reasonable person could object. Anyhow, here it is: it was brought up that uniqueness of inverses in a semigroup, which is true, was used in the article in the aforementioned note, but I'll make an argument that clearly does not use it.

Here is the argument: we want to show that, given the other seven axioms for a set V, it follows that for all (x,y) in V, x + y = y + x. We proceed as follows: we use the fact that we have an element 0 such that x + 0 = x for all x in V. And we have that for any x in V we can find at least one element -x such that x + (-x) = 0, where we let 0 denote one chosen identity element from here on (there is in fact necessarily only one, but let's not go into that just now). Anyhow, we use the fact that scalar multiplication distributes over field addition to derive 0x = (0 + 0)x = 0x + 0x, where 0 is the additive identity in our field. Now, we simply add, on the right of both sides, one potential -(0x) (there could in principle be multiple ones, but there aren't). By the axioms for invertibility, identity, and associativity, we end up with 0x + -(0x) = 0x + 0x + -(0x) reducing to 0 = 0x. Now, letting 1 be the multiplicative identity in the underlying field and -1 an additive inverse thereof, we can consider once again 0 = 0x = (1 + -1)x = 1x + (-1)x, which, using the axiom 1x = x (which also happens to be redundant, but I digress), becomes 0 = x + (-1)x. This shows that (-1)x is one out of potentially many right inverses for x. I know this next bit is long, but it's all true, so bear with me please.

It's the Planet Math proof, in more detail, that a right inverse is a left inverse: we let -x be a right inverse for x. Next, we take -x + x = -x + x + 0 = -x + x + (-x + -(-x)), where we define -(-x) to be one right inverse of -x, which exists by our invertibility axiom. Now this is just, by canceling the two middle terms to 0 (associativity) and then getting rid of 0, the identity, -x + -(-x), which we already know is 0. Thus, -x + x = 0, so any one right inverse is also a left inverse for any element x. Nowhere did we assume anything like -(-x) = x, for this wouldn't mean anything, because - hasn't even been defined as a function! Thus we don't need anything about uniqueness. All throughout we're just saying "let's let - such and such be a right inverse of such and such" pursuant to the axiom for invertibility.

Anyhow, going back to the paragraph before that last, we can now state that (-1)x is a left as well as right inverse for any x. Thus, we take y + x = y + x + 0 = y + x + ( (-1)(x + y) + (x + y)) = y + x + (-1)x + (-1)y + x + y. This all follows from our proven schema that minus one times any vector is a left and right additive inverse of that vector. We can now cancel the middle x and (-1)x terms to 0 for the same exact reason, and the 0 goes away obviously, and then we cancel the resulting y and (-1)y for that reason yet again, and we are left with x + y. Thus, I have shown not only that commutativity follows from the other axioms but also that a left inverse is always a right inverse in fact, in any semigroup. If you have any objections, Slawekb, please be kind enough to state exactly which step perturbs you and on what grounds, instead of vaguely accusing me of using uniqueness of inverses. Thanks! David815 (talk) 21:48, 23 April 2015 (UTC)
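The identities used in the argument above can at least be sanity-checked in one concrete model, R^2 over the reals (a numeric sketch only; of course this checks the identities, not the claimed redundancy of the axiom):

```python
# A concrete check, in R^2, of the identities used in the argument:
# 0x = 0, (-1)x is a right AND left additive inverse of x, and x + y = y + x.
def add(v, w): return tuple(a + b for a, b in zip(v, w))
def smul(c, v): return tuple(c * a for a in v)

x, y = (2.0, -1.0), (0.5, 4.0)
zero = (0.0, 0.0)

assert smul(0, x) == zero              # 0x = 0
assert add(x, smul(-1, x)) == zero     # x + (-1)x = 0  (right inverse)
assert add(smul(-1, x), x) == zero     # (-1)x + x = 0  (left inverse)
assert add(x, y) == add(y, x)          # commutativity
```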

Ok, the argument is pretty convincing. But we still need a reliable source to cite. Sławomir Biały (talk) 22:49, 23 April 2015 (UTC)

Fair enough. Someday I'll get it published and then we can cite that. David815 (talk) 15:44, 26 April 2015 (UTC)

Actually, I think that this is a gap that needs filling: results that mathematicians usually do not bother about but which need rigorous treatment, and that absorb lots of energy every time someone looks at it because it is unfamiliar; a notable publication could serve as a reference to fill some of this gap. In mathematics, "Is there a generalization in this direction?" is an important line of questioning. If no texts answer a specific instance, someone will ask it again, and do all the work again to determine that no, dropping commutativity of vector addition does not lead to a (nontrivial) generalization. One would have to re-ask exactly the same question in the context of modules.
Sławomir did not directly answer the question about WP:CALC, although he did by implication. In mathematics articles, we allow many "sufficiently simple" demonstrations of correctness without citation. A criterion might be that it must be obviously correct and not a new result, but one so accepted that most authors would simply use the result without proving it or citing references for it. This case has an "only partially satisfies the criteria" feel about it. —Quondum 19:04, 26 April 2015 (UTC)
Oh, a nitpick: "the axiom 1x = x (which also happens to be redundant ...)" is incorrect. It is not redundant. It could be replaced with the axiom 1x = 0 without contradicting the other axioms. —Quondum 19:11, 26 April 2015 (UTC)
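Quondum's nitpick can be illustrated concretely (a hedged sketch, not from the discussion: take R^2 with the usual addition but a deliberately degenerate scalar action that sends everything to zero; the other scalar-multiplication axioms survive while 1x = x fails):

```python
# A toy structure showing that "1x = x" is independent of the other
# axioms: R^2 with the usual addition, but scalar multiplication
# defined to send every vector to the zero vector.
def add(v, w): return tuple(a + b for a, b in zip(v, w))
def smul(c, v): return (0.0, 0.0)       # degenerate scalar action

x, y = (1.0, 2.0), (3.0, -1.0)
a, b = 2.0, 5.0

assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))  # a(x+y) = ax + ay
assert smul(a + b, x) == add(smul(a, x), smul(b, x))      # (a+b)x = ax + bx
assert smul(a, smul(b, x)) == smul(a * b, x)              # a(bx) = (ab)x
assert smul(1, x) != x                                    # ...but 1x ≠ x
```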
Redundancy of axioms is not unusual. Even set theory (ZFC) is redundant. In textbooks it is often relegated to the problem set. (Though I have not seen this particular one.) I think we should not treat it at all here, and certainly not include proofs. YohanN7 (talk) 10:35, 28 April 2015 (UTC)
I agree -- interesting, but not a good fit in an encyclopedic article of this sort. --JBL (talk) 00:41, 29 April 2015 (UTC)

Clarifying ambiguity regarding scalar field

The article as written is ambiguous. It says basically "a vector space requires numbers/scalars, there are vector spaces for any field", which is like saying "a country requires a leader, and there is a country for every president". Just as there are countries whose leader is not a president, this leaves open the possibility of there being vector spaces whose scalars don't form a field (integers only, for example). Modules exist with this property but not vector spaces; the definition of a vector space is inextricable from that of a field. For every field there is a vector space and a vector space requires a field. I've tried to fix this twice, and both times it was undone right away. The first objection was that modules shouldn't come up in the first paragraph, which is fair, the first paragraph is for introducing/defining the subject, and describing generalizations should come later in the article. But instead of fixing it, the individual just undid the change. My second attempt to resolve the ambiguity left out modules, but was rejected because presumably the individual is so familiar with the topic they don't see how this is ambiguous for those who aren't. --Yoda of Borg () 13:11, 15 November 2015 (UTC)

Why is it that some editors believe that the first sentence of the article needs to answer every conceivable question every possible reader of an article might have? That is not true: many readers are generally expected to be able to read more than one sentence of an article. The article has 100kb of text in which to answer the readers' questions, including a separate "Definition" section, which contains a complete and unambiguous definition of the topic.
As for how to structure the lead of the article, as I see it, in this case, if a reader is only willing to read the first sentence, then the distinction between a vector space and a module over the integers is irrelevant. When structuring a lead, it is helpful to keep in mind several different classes of target reader. One target reader is someone with no mathematical background. It's enough for such a reader to know that you can add two elements of a vector space, and scale them by numbers. That's what the first sentence says. The second sentence is intended for readers with a little more mathematical sophistication: those who have heard of real numbers, rational numbers, and possibly even fields. This clarifies what "scale" means in the first sentence. There is no need to do this in the first sentence, because the second sentence already has that job! A basic principle of writing clear articles: don't overcrowd the first sentence. It already has enough to do without cramming lots of extra information into it. More mathematically sophisticated readers are, of course, expected to be able to read more than the first two sentences, possibly even as far down as the table of contents (!) where they can see a helpfully organized list of topics that appear in the article, including the "Definition", "Examples", "Basic constructions", and so on.
Anyway, it's not as if the proposed revision resolves the mathematical ambiguity. The first few sentences of an article are seldom intended to contain a formal mathematical definition. Thirty or so pithy English words, regardless of their configuration, cannot express the mathematical concept of a vector space. The formal definition, as it currently appears in the article, is:

A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors. Elements of F are commonly called scalars. The first operation, called vector addition or simply addition, takes any two vectors v and w and assigns to them a third vector which is commonly written as v + w, and called the sum of these two vectors. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av.

[followed by the list of eight axioms]. There's no way to cram this definition into the first sentence of the article. Sławomir
Biały
13:51, 15 November 2015 (UTC)
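The eight axioms in the definition quoted above are also easy to check mechanically in the prototypical example, R^2 with componentwise operations (a hedged numeric sketch for a few sample vectors and scalars; the values are chosen to avoid floating-point rounding issues):

```python
# A mechanical spot-check of the eight vector-space axioms for R^2
# with componentwise operations, on a few sample vectors and scalars.
def add(v, w): return tuple(p + q for p, q in zip(v, w))
def smul(c, v): return tuple(c * p for p in v)

u, v, w = (1.0, 2.0), (3.0, -1.0), (-2.0, 0.5)
a, b = 2.0, 3.0
zero = (0.0, 0.0)
neg = lambda x: smul(-1, x)

assert add(add(u, v), w) == add(u, add(v, w))             # associativity of addition
assert add(u, v) == add(v, u)                             # commutativity of addition
assert add(v, zero) == v                                  # identity element
assert add(v, neg(v)) == zero                             # inverse elements
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # a(u+v) = au + av
assert smul(a + b, v) == add(smul(a, v), smul(b, v))      # (a+b)v = av + bv
assert smul(a, smul(b, v)) == smul(a * b, v)              # a(bv) = (ab)v
assert smul(1, v) == v                                    # identity of scalar mult.
```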

(edit conflict)

I am on mobile, so please excuse not having written before and the shortness of this reply. I think "the definition of a vector space is inextricable from that of a field" is wrong, in practice: the vast majority of people who learn what a v.s. is never learn what an abstract field is -- they take a computational class in which everything happens over Q or R or C, and usually it is common to do things like pass to a field extension without even noting that it is happening. Adding the words "field" and "module" early in the article makes it less readable by that audience, who should be able to read the beginning of the article without a problem. Only people who already are very familiar with the objects in question (and so are not at risk of confusion) are in a position to raise this kind of very technical objection. --JBL (talk) 13:59, 15 November 2015 (UTC)
There is a historical footnote which provides another reason not to be too precise in the opening sections. There were several prominent authors who did not include the commutativity of multiplication in their definition of a field (Emil Artin and Reinhold Baer come immediately to mind). For them, a field could be a skewfield (more commonly known as a division ring today). Thus, when they talked about vector spaces, they were including modules over division rings as vector spaces. Among projective geometers this practice was continued and you see the definitions of right- or left-vector spaces defined over skewfields as fairly standard terminology. Bill Cherowitzo (talk) 18:18, 15 November 2015 (UTC)

Two binary ops

Hey, @Slawekb:! Thanks for picking me up on my incorrect edit. From the article:

"A vector space over a field F is a set V together with two operations…"
"The requirement that vector addition and scalar multiplication be binary operations…"

I agree that only one operation is binary (vector addition obviously, since scalar multiplication takes a field element and a vector and spits out a vector), but there are a good number of editors of maths articles who have studied more maths in more detail than I have (I'm a physics boy), so I am hesitant to correct things. Field multiplication and vector addition are binary ops; scalar multiplication is not. This is my understanding, but the second emboldened statement above seems to be wrong, since it says that scalar multiplication is a binary operation. Your thoughts? --BowlAndSpoon (talk) 17:31, 29 May 2016 (UTC)
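The signature difference being discussed can be spelled out with type annotations (a sketch under the common convention; some authors instead call scalar multiplication an "external binary operation", so the terminology varies by source):

```python
# Vector addition is a binary operation ON V (it maps V × V → V),
# while scalar multiplication pairs a field element with a vector
# (it maps F × V → V), so its two arguments come from different sets.
from typing import Tuple

Vector = Tuple[float, float]
Scalar = float

def vector_add(v: Vector, w: Vector) -> Vector:   # V × V → V
    return (v[0] + w[0], v[1] + w[1])

def scalar_mul(a: Scalar, v: Vector) -> Vector:   # F × V → V
    return (a * v[0], a * v[1])

print(vector_add((1.0, 2.0), (3.0, 4.0)))  # (4.0, 6.0)
print(scalar_mul(2.0, (1.0, 2.0)))         # (2.0, 4.0)
```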

Similar titles of article-sections

  • Re this edit: It is not so much a matter of style (hence covered by the WP:FNNR rule: "Editors may use any section title that they choose.") as it is a matter of clarity. WP:FNNR says that two possible article-sections are "explanatory footnotes" that give information which is too detailed or awkward to be in the body of the article and "citation footnotes" that connect specific material in the article with specific sources. Usually, you can tell which is which by glancing at the contents. In "Vector space", "explanatory footnotes" were called "Notes", while "citation footnotes" were called "Footnotes". It was not clear which is which (which section contained "explanatory footnotes" and which section contained "citation footnotes"). --Omnipaedista (talk) 11:31, 1 January 2017 (UTC)

Vector redirect

I really dislike it that vector (mathematics) redirects here. Matrix (mathematics) gets its own page, and vector is similarly primitive.

When you encounter a page such as row and column vectors, which ought to be a topic of vector (mathematics), but seems ridiculous as a topic of vector space, it becomes apparent that this misguided parsimony is missing something important under the kilt. — MaxEnt 20:49, 30 January 2017 (UTC)

You're right, I think; am in support of the above. This really should have its own article, directed at a basic, nontechnical/nonspecialist audience. Ema--or (talk) 03:43, 19 September 2017 (UTC) PS So, I redirected the redirect. (It) Now goes to vector (mathematics and physics). Ema--or (talk) 03:49, 19 September 2017 (UTC)
I'm not sure that I understand. If you're looking up "vector_(mathematics)" you're going to get the most general topic there is. As you ought to. If you want to look up specific kinds of vectors, as you've pointed out, you already go to specific other pages. What is the issue then? 50.35.97.238 (talk) 20:01, 6 November 2018 (UTC)

Please check for me

I read "Not to be confused with Vector field." and "The simplest example of a vector space over a field F is the field itself". I am not good in English, so I cannot determine if there is a contradiction or not (a particular case?). Thank you for your effort. Magnon86 (talk) 10:46, 27 July 2018 (UTC)magnon86

No -- they are two different technical senses of the word "field". --JBL (talk) 11:40, 27 July 2018 (UTC)
I think the point is that fields are vectors in their own right. Real numbers, for example, are themselves vectors. Functions are vectors. Polynomials are vectors. Real numbers can be treated as the scalar field and the set of vectors at the same time; thus making the Reals a vector space over the field of Reals. 50.35.97.238 (talk) 20:05, 6 November 2018 (UTC)
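The point about a field being a vector space over itself can be made concrete (a hedged sketch: here "vectors" and "scalars" are both just real numbers, vector addition is field addition, and the scalar action is field multiplication):

```python
# R viewed as a 1-dimensional vector space over itself: the
# vector-space operations are just the field operations of R.
def add(v, w): return v + w        # vector addition = field addition
def smul(a, v): return a * v       # scalar action = field multiplication

v, w, a, b = 2.0, 3.0, 5.0, 7.0

assert smul(a, add(v, w)) == add(smul(a, v), smul(a, w))  # a(v+w) = av + aw
assert smul(a + b, v) == add(smul(a, v), smul(b, v))      # (a+b)v = av + bv
assert smul(a, smul(b, v)) == smul(a * b, v)              # a(bv) = (ab)v
assert smul(1.0, v) == v                                  # 1v = v
```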

Dissatisfactory Definition

This discussion is specific to the subsection titled "definition". I find that the definition is unnecessarily wordy, convoluted, and in some places redundant. But more importantly, it's not general enough and I find it far too specific. There are a few other problems with it too. I considered rewriting this but I know the pretentiousness of math contributors on wikipedia who have a stranglehold and monopoly on contributions and would undo any changes I make. And fight me tooth and nail no matter what. So let's have a discussion about it instead. I will try to enumerate some of the problems I see.

  • We don't need to reiterate the closure of a binary operation when we've already written the notation F:AxA->A.
  • We don't need to reiterate the closure of a binary operation when we've already established something is a field, or group, etc.
  • My recommendation: simply say (F,+,*) is a field and (V,+) is an Abelian group. In this way the first bullet point would simplify and about half of the so-called "axiomatic" properties in the table would disappear.
  • I'm concerned about the use of 1 and 0 as symbols, as it tends to imply numbers and distracts from generality.
  • I'm concerned about the use of the + operator to mean both field addition of scalars and vector addition. We have blurred the line between the two by not explicating the distinction. Notice the scalar field operators were never specified. For an article this wordy, I suspect this was done deliberately because someone didn't know how to deal with it. Or else perhaps they take too much on faith themselves, didn't notice, and thus probably shouldn't have authored it in the first place.
    • One might take note that in the property "distributivity of scalar multiplication with respect to field addition", we go from field addition on the left-hand side to vector addition on the right-hand side, thus this is not a true distributivity property. You have to conflate the two operators to make it work. Perhaps mention of an isomorphism is appropriate here.
  • I'd like to see the use of the word "automata" in reference to the scalar multiplication, and I'd like to see the use of "left" and "right" distributions where appropriate.
  • Technically speaking, if you take a look at the article on "distributivity", it is defined on single sets with two operators, and not on operators between two sets. So something is generally fishy about this. Seems to me the use of the word "distributivity" isn't appropriate whatsoever.

Now, I could be wrong or have limited awareness myself. But if so, this section is generally lacking in links, and what links do exist don't shed satisfactory light on the topic. In the case of distributivity, there is in fact a direct contradiction, it seems.

50.35.97.238 (talk) 19:35, 6 November 2018 (UTC)

In other words, you object to the idea of presenting the material in the way that it is presented in multitudes of reliable sources, and would prefer to present it in a way that you have invented that is incomprehensible to anyone who doesn't already understand mathematics at a graduate level. This is not going to happen. --JBL (talk) 20:58, 6 November 2018 (UTC)
I agree. What sources disambiguate between "+" in the field and "+" in the vector space? Not even Bourbaki goes to these extreme and unnecessary lengths. It would be wholly inappropriate in a general-purpose encyclopedia article, whose target audience includes mostly non-mathematicians. Sławomir Biały (talk) 13:27, 8 November 2018 (UTC)

50.35.97.238 You make some good points, and then spoil them by insulting your fellow editors. I'll ignore the insults and just respond to your points.

The article is written for the general reader (who is probably most often a college undergraduate). Therefore it needs to spell out things that a mathematician already knows.

The article must follow the customs of current mathematical usage, which include the use of + for addition in both the field of scalars and the group of vectors, and the use of 0 and 1 for additive and multiplicative identities. The 0 is usually written in bold face in the case of the zero vector.

Your other points are technically correct, but are a subject for a more advanced article. This article is for the general reader, and reflects common usage. An example of common usage that is not technically correct: technically we should say that the numeral 2 represents a number which, when added to itself, using the usual addition in the ring of integers, produces a number represented by the numeral 4; instead we say 2 + 2 = 4. Rick Norwood (talk) 12:49, 7 November 2018 (UTC)

Recent edits

I agree with the sentiment that the current status of the history section is not great. I suggest reverting it to, say, the version (of this section) from the (failed) FA nomination back in 2009. Any objections?

Another thing I disagree with is this (good faith!) edit by Pifvyubjwm: the note is both unencyclopedic and unnecessary, in my mind. I would just remove it. Any objections? Jakob.scholbach (talk) 07:42, 28 October 2019 (UTC)

Since no-one objected, I have implemented these changes. Jakob.scholbach (talk) 07:32, 31 October 2019 (UTC)