Talk:Tensor/Archive 5

Scalars and vectors are distinct from tensors
While it is usually convenient to ignore the difference, scalars and vectors are not tensors, especially if you use the definition of a tensor as a multi-linear map. Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:28, 26 March 2015 (UTC)


 * You have, of course, a source for that claim? Andy Dingley (talk) 13:42, 26 March 2015 (UTC)


 * Oddly enough, that definition appears in the very article under dispute, Tensor: with that definition of tensor, tensors of rank 1 over a vector space $$V$$ are elements of $$V^*$$ or $$V^{**}$$, not of $$V$$. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:14, 26 March 2015 (UTC)


 * The article concerns the finite-dimensional case, where $$V$$ and $$V^{**}$$ are naturally identified. See Tensor for details.   Sławomir Biały  (talk) 11:21, 27 March 2015 (UTC)


 * The issue is not whether there is a canonical isomorphism; the issue is whether they are the same. There are three common, and different, ways of defining a vector on a manifold, and there are canonical isomorphisms among them. Would you claim that they are the same? It's common and legitimate to state at the beginning that an exposition will ignore a technicality where the meaning is clear; it is not legitimate to deny the existence of the technicality. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:29, 31 March 2015 (UTC)


 * The article does not claim that the three definitions of tensors are the same, so I don't see how this is an issue. If you can find sources that point out that there is some issue in making the double-duality identification, as is common in linear algebra, then present those.  Otherwise, this truly is splitting hairs beyond the point of idiocy.  You will next claim that, because there are different constructions of real numbers, there is no such thing as "the set of real numbers", because real numbers that are equivalence classes of Cauchy sequences are not "the same" as real numbers that are Dedekind cuts.  In mathematics, we routinely identify objects that are naturally isomorphic.  In linear algebra, it is standard to identify a vector space and its double dual (so, yes, they "are the same", because the category is always taken "up to natural equivalence").  But this article is certainly not the place to belabor such elementary points regarding the modern practice of mathematics.  And anyway, to point out that there is some such subtle (and, actually, irrelevant) distinction when, by far, the vast majority of sources do not, would require sources of exceptionally high quality.  So far, you haven't given any, despite being asked for them.    Sławomir Biały  (talk) 00:58, 1 April 2015 (UTC)


 * Now you are not only engaging in additional ad hominem arguments, e.g., idiocy, but lying about what I would say. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:23, 1 April 2015 (UTC)


 * Chatul, you have requested mediation relating to this matter, yet you are behaving unacceptably. Sławomir may have been harsh in his handling of you, but he has now made a significant effort to respond to your concern. Desist. —Quondum 20:53, 1 April 2015 (UTC)


 * Scalars are "multilinear maps" of degree zero. Vectors are multilinear maps of degree one on the dual space.  (The definition of tensors as multilinear maps usually requires the double duality identifications.)  Bluntly, any definition of tensors that excludes vectors and scalars is simply wrong.  One "fails the course" by making ridiculous claims like this.  Sławomir Biały  (talk) 14:08, 26 March 2015 (UTC)


 * See above. Bluntly, your appeal to an ad hominem argument and your refusal to discuss the issue before deleting my correction suggest that it is time to request formal mediation. As to failing the course, I had no trouble getting a masters in Mathematics, TYVM. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:14, 26 March 2015 (UTC)


 * "fails the course" is a figure of speech. To claim that there is a distinction is splitting hairs beyond the level of interest of a mathematician. To make an issue of it in WP would be to confuse the matter beyond the comprehension of the average WP reader. And why should one make a claim based on a narrow interpretation of one definition?  What of objects that transform under certain rules?  What of objects that are equivalent up to isomorphism?  You cannot deny that there are elements of a tensor algebra that are also vectors and scalars. You'd have to make a case as to how to classify these separately from – wait for it – vectors and scalars. —Quondum 21:27, 26 March 2015 (UTC)


 * I would have failed my orals had I ignored distinctions between different mathematical systems.


 * As to objects that transform under certain rules, would you claim that covariant and contravariant vectors in a Riemannian manifold are the same, even though they transform differently? There is a canonical isomorphism.


 * Objects that are equivalent under isomorphisms are just that; it doesn't make them the same. Take homology and cohomology groups; there are many inequivalent ways to define them, but subject to certain assumptions they are all naturally isomorphic.


 * And don't tell me what I can and cannot deny. I can deny anything that is false, and the Devil is in the details. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:29, 31 March 2015 (UTC)


 * This is a non-sequitur. Different cohomology theories are genuinely different.  Sheaf cohomology is a completely different cohomology theory than simplicial cohomology.  They are not naturally isomorphic.  It is standard practice in linear algebra to identify a finite-dimensional vector space with its double dual.  In Riemannian geometry, it is not standard practice to identify vectors and covectors, not so much because it is not possible to do so, but because vectors and covectors have different properties that are rather important to the subject, but that has nothing to do with standard practice in linear algebra, where V and V** have no properties that distinguish one from the other.  And anyway, a version of the lead which says that vectors and scalars are not tensors misleads far more than it aids in understanding the concept.  (It would be like insisting that the integer 3 is different from the real number 3, see below.)  I find it unfathomable that an editor sincerely acting in good faith is apparently unable to acknowledge this.  But there it is.  You aren't going to convince anyone by trotting out your own rather middling credentials in mathematics and engaging in petty point-scoring.  Go ahead and start an RfC, although there seems to be little point because it has no chance of success.  Otherwise, you've clearly lost this argument.  WP:STICK.   Sławomir Biały  (talk) 01:24, 1 April 2015 (UTC)


 * Yes, "Different cohomology theories are genuinely different", but cohomology theories on the same space are canonically equivalent if they satisfy the Eilenberg–Steenrod axioms. Did you genuinely not notice "subject to certain assumptions", or did you choose to ignore it? I think that one of us is trolling, but it isn't me. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:23, 1 April 2015 (UTC)


 * First of all, this is still a non-sequitur, regardless of whether one imposes the Eilenberg-Steenrod axioms. (And then, as far as I know from reading recent literature on the subject, it is rather rare to see someone indicate which homology theory one is working with, which seems to suggest that they are identified). But regardless, what does identification or non-identification of different homology theories have to do with identification of a finite-dimensional vector space and its double dual?
 * If you just mean that there is some obscure analogy between the situation in topology and that in linear algebra, why is that same analogy not present in the issue of different constructions of the real numbers? You said I was "lying" above when I imputed to you the view that there are different kinds of real numbers depending on the model.  Presumably, because I was "lying", you then agree with the negation: that these different constructions are identified in practice.  So, let me just summarize what I think your views are, as far as you have been able to communicate them: (1) different homology theories that are naturally isomorphic are not usually identified, (2) there are other "different but naturally isomorphic" objects in mathematics that are identified (the real numbers, for example), (3) a vector space and its double dual are not usually identified in linear algebra.
 * Assuming I have this right, I do not think (1) can be used as an argument for (3). Point (2) shows, if nothing else, that what mathematical traditions persist in some areas of mathematics (e.g., algebraic topology) may not in others (e.g., linear algebra).  So unless there is some sense in which point (1) relates in a substantive and direct manner to the question under consideration ("are vectors tensors?"), it seems just to be a red herring.   Sławomir Biały  (talk) 12:41, 2 April 2015 (UTC)


 * Right. Pointing out that scalars are multilinear maps of degree zero and that vectors are linear maps on the dual space is "ad hominem".  Whereas trotting out one's credentials amounts to "discuss[ing] the issue".  Please return to whatever whacko universe that makes sense in.   Sławomir Biały  (talk) 22:56, 26 March 2015 (UTC)


 * That's the dumbest thing you've written yet. It is clear to the meanest intellect that it was "fails the course" that was ad hominem, along with "troll" and "whacko". — Preceding unsigned comment added by Chatul (talk • contribs)


 * Well, I for one am confused why someone would trot out their (rather middling) credentials yet again in a discussion, particularly whilst accusing others of making ad hominem arguments. People who live in glass houses.   Sławomir Biały  (talk) 00:58, 1 April 2015 (UTC)


 * Similarly, is the real number 3 also a complex number, or is it merely identified with the complex number 3 + 0 i? Mgnbar (talk) 22:28, 26 March 2015 (UTC)
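The identification Mgnbar alludes to is mirrored in how programming languages treat their numeric tower: a minimal Python illustration (purely an analogy for "identified but of distinct type", not a claim about the mathematics):

```python
# int, float and complex are distinct types, yet Python identifies
# 3, 3.0 and 3 + 0j under equality -- identification of canonically
# corresponding values, not literal sameness of type.
print(3 == 3.0 == 3 + 0j)        # -> True
print(type(3) is type(3 + 0j))   # -> False: the types remain distinct
```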

Folks, it's pretty clear that Chatul is just trolling us again. Chatul, if you actually have some sources to discuss, or you make a valid point, supported by reasons, go right ahead. But until you do that, I plan to file this under WP:DENY. Sławomir Biały (talk) 23:21, 26 March 2015 (UTC)


 * There you go again. I wasn't trolling then and I'm not trolling now, but I'm beginning to suspect that you are. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:29, 31 March 2015 (UTC)


 * Troll is as troll does. Do you have any suggestions for improving the article, supported by references?  Or are you just wasting everyone's time?   Sławomir Biały  (talk) 11:22, 1 April 2015 (UTC)

In all fairness, a slightly slippery argument is needed to establish that scalars are multilinear maps. They "vacuously" satisfy the definition - or whatever it is called, but this is not important here. What is important is that the argument is made in most math and physics texts (that I can recall anyway). YohanN7 (talk) 06:42, 27 March 2015 (UTC)


 * The authors of the citation below say almost exactly what I wrote in the disputed text, and their usage is common.
 * — Preceding unsigned comment added by Chatul (talk • contribs)


 * No, that source contradicts you. It says that the two spaces are identified (see where I have already cited it below).  You have insisted that exactly the opposite is true.   Sławomir Biały  (talk) 20:14, 12 April 2015 (UTC)

Vectors are tensors
Sources:
 * Bowen and Wang "Introduction to vectors and tensors" p. 218 "$$T^1(V)=V,\quad T_1(V)=V^*$$ Here we have made use of the identification of V with V** as explained in Section 32."
 * Borisenko and Tarapov "Vector and tensor analysis with applications", p. 61: "2.3. First-order tensors (vectors)"
 * Kobayashi and Nomizu, "Foundations of differential geometry, volume 1", p. 20: "$$T^1$$ is nothing but V"
 * Lee, "Introduction to smooth manifolds", p. 180: "Clearly there are natural identifications $$T^0M=T_0M=M\times R, T^1M=T^*M, T_1M=TM$$."
 * J Schouten, "Tensor analysis for physicists", p. 17: "p=0, q=0 gives a scalar; p=1, q=0 a contravariant and p=0, q=1 a covariant vector."
 * L. P. Eisenhart, "An introduction to differential geometry with use of tensor calculus", p. 89: "A contravariant vector is a contravariant tensor of the first order."
 * Marsden and Ratiu, "Manifolds, Tensor Analysis and Applications", p. 340: "$$T_0^1(E)=E$$... and make the convention $$T_0^0(E;F)=F$$"
 * Arfken and Weber, "Mathematical methods for physicists", p. 131: "A scalar is specified by one real number and is a tensor of rank zero. ... In three dimensional space, a vector ... is a tensor of rank one."
 * Hawking and Ellis, "The Large-scale structure of space-time", p. 18: "In particular, $$T_0^1(p)=T_p$$"

etc. -- Sławomir Biały (talk) 11:58, 1 April 2015 (UTC)

Components of a linear map
Regarding this editorial dispute:

---

(prev | cur) [https://en.wikipedia.org/w/index.php?title=Tensor&oldid=656016445 11 April 2015, 20:08]‎ Slawekb (Talk | contribs)‎. . (49,045 bytes) (-505)‎. . (Ok... reverted to before badly confused addition.) (undo this edit | thank)

(prev | cur) [https://en.wikipedia.org/w/index.php?title=Tensor&oldid=656014373 11 April 2015, 19:52]‎ LokiClock (Talk | contribs)‎. . (49,550 bytes) (-14)‎. . (→‎As multidimensional arrays: Correction - insufficient information; new article chosen for reference) (undo this edit)

(prev | cur) [https://en.wikipedia.org/w/index.php?title=Tensor&oldid=656013731 11 April 2015, 19:48]‎ LokiClock (Talk | contribs)‎. . (49,564 bytes) (+429)‎. . (Undid revision 656007924 by Slawekb (talk) Extended. This simplification does not tie the correspondence between multidimensional arrays and tensors to any behavior they share, and is superfluous.) (undo this edit)

(prev | cur) [https://en.wikipedia.org/w/index.php?title=Tensor&oldid=656007924 11 April 2015, 19:00]‎ Slawekb (Talk | contribs)‎. . (49,135 bytes) (-346)‎. . (→‎As multidimensional arrays: This recent addition seemed confused, and possibly wrong. Simplified.) (undo this edit | thank)

(prev | cur) [https://en.wikipedia.org/w/index.php?title=Tensor&oldid=656000889 11 April 2015, 18:05]‎ LokiClock (Talk | contribs)‎. . (49,481 bytes) (+436)‎. . (→‎As multidimensional arrays: Trying to lower the barrier for entry a little by explaining motivational context.) (undo this edit)

---

For K the field of scalars, we have

$$\operatorname{Hom}(V,V) \cong \operatorname{Hom}(V,V^{**}) = \operatorname{Hom}(V,\operatorname{Hom}(V^*,K)) \doteq \operatorname{Hom}(V \otimes V^*,K)$$

giving us an isomorphism between the linear endomorphisms and the type-(1,1) tensors. I'll ignore the isomorphism from $$(V \otimes V^*)^*$$ to $$V^* \otimes V$$, since I'm using the article's definition by multilinear maps section for its definition of the components of a tensor.

Given bases $$(e_i)_{i \in I}$$ and $$(\varepsilon_j)_{j \in J}$$ for V, we can obtain from any $$f \in \operatorname{Hom}(V,V)$$ the components of the matrix with respect to those bases, as described in the article I linked to in my most recent edit, by taking the projections of each $$f(e_i):i \in I$$ onto each $$\varepsilon_j : j \in J$$ (i.e., taking the inner product of the two vectors). That is, we take the $$\varepsilon^j(f(e_i)) : i \in I, j \in J,$$ where $$(\varepsilon^j \in V^*)_{j \in J}$$ is the dual basis for V* as described at dual space. I am writing it this way, and not assuming $$\varepsilon=e, I=J$$, so that one does not have to assume bases themselves have canonical duals, in accord with the article's style; in that case you would merely assume $$(\varepsilon_j)_{j \in J}$$ is a basis for V*, adjust isomorphisms to homomorphisms if necessary, etc.

Given that we have these bases $$(e_i)_{i \in I}$$ for V and $$(\varepsilon^j)_{j \in J}$$ for V*, we can take the value of their tensor product under the map in $$\operatorname{Hom}(V \otimes V^*,K)$$ that $$f$$ is sent to under the natural transformations given above. According to the definition in the multilinear maps section, these are the components of the tensor $$f$$ is sent to by the natural transformation. Therefore, if the components of that tensor are equal to the entries of the matrix described in my addition, then as described in my addition, the components of the tensor assigned to $$f$$ by the natural transformations with respect to the basis $$(e_i)_{i \in I}$$ and dual basis $$(\varepsilon^j)_{j \in J}$$ generalize or at least are defined so as to agree with the derivation of the entries of the matrix representation of $$f$$ with respect to the domain basis $$(e_i)_{i \in I}$$ and codomain basis $$(\varepsilon_j)_{j \in J}.$$

"There is a natural homomorphism Ψ from V into the double dual V**, defined by (Ψ(v))(φ) = φ(v) for all v ∈ V, φ ∈ V*."

$$f \mapsto ( f': v \mapsto \Psi(f(v)) )$$ where $$\forall \varphi \in V^*, (\Psi(f(v)))(\varphi) = \varphi(f(v))$$ That is, compose each $$f \in \operatorname{Hom}(V,V)$$ with the natural isomorphism from V to V**.

Then apply the tensor-hom adjunction to $$f'$$. That is, take

$$f': v \mapsto ( \varphi \mapsto \varphi(f(v)) )$$

to

$$f'': v \otimes \varphi \mapsto \varphi(f(v)).$$

Evaluating $$f''$$ at $$e_i \otimes \varepsilon^j,$$ we obtain $$\varepsilon^j(f(e_i)),$$ which is by this article's definition the component of the tensor corresponding to $$f$$ w.r.t. $$e, \varepsilon,$$ but also the component of $$f$$ w.r.t. $$e, \varepsilon$$ as derived in the article I linked to in my most recent edit. ᛭ LokiClock (talk) 03:22, 14 April 2015 (UTC)
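The component formula $$\varepsilon^j(f(e_i))$$ above can be sanity-checked numerically. A small NumPy sketch (the matrix A is a made-up example, and the standard basis of R^3 is used so that the dual pairing is just coordinate extraction):

```python
import numpy as np

# A linear map f : R^3 -> R^3, given by an arbitrary, made-up matrix A
# in the standard basis.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

def f(v):
    return A @ v

# Standard basis e_i of V = R^3; the dual basis eps^j acts by eps^j(v) = v[j].
e = np.eye(3)

# Components of the (1,1)-tensor f'' : v (x) phi -> phi(f(v)),
# i.e. T[j, i] = eps^j(f(e_i)).
T = np.array([[f(e[i])[j] for i in range(3)] for j in range(3)])

# They coincide with the entries of the matrix of f in that basis.
print(np.allclose(T, A))   # -> True
```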


 * It is unclear what you are trying to say. I don't think that what you were adding is mathematically incorrect, but it did not help to elucidate anything IMO.  The preceding statement "Just as a vector with respect to a given basis is represented by an array of one dimension, any tensor with respect to a basis is represented by a multidimensional array" already was general; following this with a statement about order-2 tensors only and with a lot of mathematical language is not adding information, and would simply be confusing to the average reader.  Perhaps you can say here what you were trying to achieve with the addition that is not already said in simpler language and full generality (for a finite-dimensional vector space)? —Quondum 05:06, 14 April 2015 (UTC)


 * This seems to be a very convoluted way of describing the matrix of a linear transformation with respect to a basis. I had offered that simplification to the article, but it was reverted.  So I reverted to before the disputed text was added in the first place.   Sławomir Biały  (talk) 10:54, 14 April 2015 (UTC)

Perhaps the best way to explain what I hope the article to say is to build on Slawekb's version:


 * For example, the matrix of a linear transformation in a basis is a two-dimensional array.

Assume a reader that doesn't know what a linear map is in practice, but knows how to multiply a coordinate vector by a matrix. They may be interested in how to view a particular matrix they have, like a rotation matrix, as a coordinate-free object. This reader can memorize the statement that a linear transformation is a coordinate-free version of a matrix, which is loose, but deducible for example from the table of tensors by type. This allows the above statement to be interpreted as a tautology - a matrix is a 2D array, and this matrix I have is a linear transformation in some basis. This is just comparing like with like, using vectors and their coordinates, and doesn't provide the background about the relationship between linear maps and matrices that a reader should have before approaching the multidimensional arrays definition used in this article. If instead we make "matrix of a linear transformation" a link, then as if reading a technical term made of familiar words, the reader can infer that there is a separate topic, the particular hyperlinked connection between the two.


 * For example, the matrix of a linear transformation in a basis is a two-dimensional array.

This separate topic is the foundations of matrices in linear algebra. But the above sentence does not indicate that this, rather than being some more special class of things that matrices symbolize in a basis-dependent way (like a stress tensor), is the usual use of matrices. This can determine whether the reader looks at this external information as a helpful lead on understanding the subject. So I suggest this:


 * For example, the matrix of a linear transformation in a basis is a two-dimensional array, whence comes matrix multiplication.

I think what I just said addresses why the statement being about order-2 tensors only wouldn't confuse the reader. Moreover, just because you can talk about something in full generality doesn't mean you should only talk about it in full generality. It's important to cover linear maps and vectors in particular because not only is linear algebra about both objects and arrows, but in applications linear algebra might be done entirely by manipulating coordinate vectors by applying matrices to them, and without those matrices being coordinate-free, the only value to having vectors defined in a coordinate-free way is an abstract comfort with the concept that doesn't extend to what it must mean to have a corresponding coordinate-free version of a matrix.

The ambiguities that affect understanding every part of the article created by an informal understanding of the subject are confronted here by opportunistically deferring the reader to the foundations. For the purposes of moving from coordinate-based to coordinate-free linear algebra, there's the problem in the gap between concepts and their formal basis of when to identify spaces, and why for example the dual space is not identified with the original, which is clarified by learning how to take or preserve a relative point of view to linear algebra. It should be clear that whether the two spaces are the same space before and after a linear map has no influence on the ability to treat a matrix in a coordinate-free way, that the sense in which matrices are the arrays of coordinates for a linear map is immune to the considerations of identification of domain and codomain, that this does not mean identifying the values of vectors under a linear map such as a change of basis, and it's not required to do the latter to formulate a concept of coordinates. The section at the link to linear map indicates this through the explicit use of a basis for both the domain and the codomain to construct the matrix for the linear map.

Perhaps this makes it clearer why the linear map case is the one to address specifically, as the other definitions of the article ultimately depend on this common understanding. The characterization of matrix multiplication as directly connected to the linear map case has the side-effect of addressing the issue of what a matrix means in mathematics as opposed to a multidimensional array, beyond that matrices are specifically 2-index - If every 2-index tensor has a 2-index array of components, but there is more than one kind of 2-index tensor, what kind of tensor does my matrix encode? I use matrix multiplication in order to compose transformation matrices, so likely a linear map, not a bivector or bilinear form. Again, this is an opportunity to defer the reader to the foundations. ᛭ LokiClock (talk) 23:32, 19 April 2015 (UTC)


 * I don't really agree that linear transformations are key, or in any way more central to this understanding of the correspondence between multidimensional arrays of components and tensors. Even in the 2nd order case, the linear transformation becomes strained – e.g. we do not think of the electromagnetic tensor as a linear transformation. And any attempt to ease the strain on the reader should not be in the definition: it breaks the purpose and flow. If this is to be addressed, we need to find a separate section to put the relationships between abstract and matrix-based linear algebra into. —Quondum 03:17, 20 April 2015 (UTC)


 * The location is not really appropriate for pointing out some subtlety about how the valence of the tensor corresponding to a linear transformation is consistent with matrix multiplication. Something along these lines could be added after the concept of valence has been introduced.   Sławomir Biały  (talk) 12:01, 20 April 2015 (UTC)

Covariant derivative of a tensor density
Those interested in tensors, and in the logical completeness of assertions about tensors, may be interested in opining at. It seems more input from someone with greater familiarity than mine with the subject would be helpful. —Quondum 15:04, 29 April 2015 (UTC)

Ambiguous terminology: Rank and Order
Rank and Order are the same thing. The article doesn't make this clear, and it's confusing. — Preceding unsigned comment added by 190.24.189.75 (talk) 12:55, 29 May 2015 (UTC)


 * From the article:
 * The total number of indices required to uniquely select each component is equal to the dimension of the array, and is called the order, degree or rank of the tensor. Footnote: This article uses the term order, since the term rank has a different meaning in the context of matrices and tensors.
 * -- Sławomir Biały (talk) 12:58, 29 May 2015 (UTC)


 * Also from the article:
 * There is a plethora of different terminology for this around. The terms "order", "type", "rank", "valence", and "degree" are in use for the same concept. This article uses the term "order" or "total order" for the total dimension of the array (or its generalisation in other definitions) m in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (n, m − n) will also be referred to as a "(n, m − n)" tensor for short.
 * -- Sławomir Biały (talk) 13:01, 29 May 2015 (UTC)

RFC: is V = V**?
Should Tensor state that tensors of rank 1 are vectors, given that $$V$$ and $$V^{**}$$ are distinct vector spaces? Standard usage where the author does not wish to distinguish objects with canonical isomorphisms is to state initially that although they are different, he will ignore the differences for the sake of simplicity.

Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:32, 12 April 2015 (UTC)


 * Comment. This RfC is not a neutral summary of the dispute in question.  Here, as I see it, is a summary of the dispute.  A few weeks ago, Chatul committed this edit, stating that vectors and scalars are not tensors.  This edit was reverted by User:Andy Dingley, with the edit summary "No, they really are. (first rank and zero rank tensors)", a sentiment that I (and most sources) agree with.  Chatul reverted Dingley, with the edit summary "No, they are not the same - see talk page".  Then, on the talk page, we see: "Scalars and vectors are distinct from tensors.  While it is usually convenient to ignore the difference, scalars and vectors are not tensors, especially if you use the definition of a tensor as a multi-linear map."  Dingley then restored the content, with a reference that very clearly and directly supported the statement.  Chatul placed a tag on the article.  I removed the tag.  (Editors placing tags like these are expected, at a minimum, to come up with sources that clearly and directly support their point of view, particularly in the light of countervailing sources.  These aren't to be used for something like "there is a point of view reflected in the article that I personally disagree with".)
 * Chatul was asked for a reference that supports his perspective. Instead of providing such a source, Chatul argued above that in mathematics generally, it is important not to identify objects that are canonically isomorphic.  He used the example of homology and cohomology theories in algebraic topology that, in good cases, can sometimes be identified.  I rebutted that this was a red herring, since whether homology or cohomology theories can be identified has nothing to do with the subject of this article.
 * Following that, I gave a long list of references that supported that the identification of a vector space and its double dual is indeed standard throughout the subject. For ease of reference, here are the statements as they appear in various books on the subject:
 * Bowen and Wang "Introduction to vectors and tensors" p. 218 "$$T^1(V)=V,\quad T_1(V)=V^*$$ Here we have made use of the identification of V with V** as explained in Section 32."
 * Borisenko and Tarapov "Vector and tensor analysis with applications", p. 61: "2.3. First-order tensors (vectors)"
 * Kobayashi and Nomizu, "Foundations of differential geometry, volume 1", p. 20: "$$T^1$$ is nothing but V"
 * Lee, "Introduction to smooth manifolds", p. 180: "Clearly there are natural identifications $$T^0M=T_0M=M\times R, T^1M=T^*M, T_1M=TM$$."
 * J Schouten, "Tensor analysis for physicists", p. 17: "p=0, q=0 gives a scalar; p=1, q=0 a contravariant and p=0, q=1 a covariant vector."
 * L. P. Eisenhart, "An introduction to differential geometry with use of tensor calculus", p. 89: "A contravariant vector is a contravariant tensor of the first order."
 * Marsden and Ratiu, "Manifolds, Tensor Analysis and Applications", p. 340: "$$T_0^1(E)=E$$... and make the convention $$T_0^0(E;F)=F$$"
 * Arfken and Weber, "Mathematical methods for physicists", p. 131: "A scalar is specified by one real number and is a tensor of rank zero. ... In three dimensional space, a vector ... is a tensor of rank one."
 * Hawking and Ellis, "The Large-scale structure of space-time", p. 18: "In particular, $$T_0^1(p)=T_p$$"
 * Now, to the crux of the matter, even if, in some models, it is true that a vector space and its double dual are not identified, overwhelmingly they are in practice. Even the book that Chatul cites does not directly support his views here: "We shall adopt such a convention here and identify any $$v \in V$$ as a linear function on $$V^*$$."  This is the convention throughout linear algebra, mathematical physics, differential geometry, etc.  There is no need to say that "vectors are not tensors".  That just conveys a view that is unnecessarily confusing to the average reader, and not in agreement with standard practices in the literature.  Anyone for whom such a statement is likely to be meaningful is presumably also clever enough to read the discussion of such nuances in the text of the article, especially the footnote that reads: "The double duality isomorphism, for instance, is used to identify V with the double dual space V**, which consists of multilinear forms of degree one on V*. It is typical in linear algebra to identify spaces that are naturally isomorphic, treating them as the same space."  But for the average reader, such minutiae are extremely unimportant marginalia (as evidenced by the distinct lack of discussion in the literature).   Sławomir Biały  (talk) 21:41, 12 April 2015 (UTC)
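For readers following the thread, the identification invoked by the sources quoted above is the standard evaluation map (stated here for reference; this is textbook linear algebra, not anything specific to the dispute):

```latex
\Psi \colon V \to V^{**}, \qquad
\bigl(\Psi(v)\bigr)(\varphi) = \varphi(v)
\quad \text{for all } v \in V,\ \varphi \in V^{*}.
```

$$\Psi$$ is injective for any vector space; when $$V$$ is finite-dimensional, $$\dim V^{**} = \dim V^{*} = \dim V$$, so $$\Psi$$ is an isomorphism, and it is natural in $$V$$, which is what licenses writing $$V = V^{**}$$.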


 * Opinion – In this article, it is sufficient to make the reader aware of any such distinction in a footnote. Text highlighting the distinction between V and V∗∗ should not occur in the lead, since it is distracting and unnecessarily confusing for the average reader of this article. Any discussion of the matter belongs elsewhere, e.g. in Dual space. —Quondum 00:23, 13 April 2015 (UTC)
 * To clarify, are you saying that this sentence in the lead should have a footnote but not be reworded? "A vector can be represented as a 1-dimensional array and is a 1st-order tensor. Scalars are single numbers and are thus 0th-order tensors."


 * My preference would be to change is to can be treated as, but a footnote certainly satisfies my objection. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:31, 13 April 2015 (UTC)
 * I would not be happy with the footnote as you've given here. Firstly, the distinction between V and V∗∗ is not the same thing, because if one is making this sort of distinction, "elements of V" and "vectors" are not the same thing, and "scalars" in the context of tensors is not the same thing as "the underlying field of V". Secondly, all that this article should do, even in a footnote, is to say that the tensors of order 0 T0(V) are referred to as "scalars", and the tensors of order 1 T1(V) are referred to as "vectors". It could also mention that there is a canonical embedding of the underlying field and of V respectively that identifies them with these tensors. I think that perhaps this hinges around the precise way in which the words "scalar" and "vector" are to be interpreted, and I'm suggesting a particular widely understood meaning in this context. —Quondum 16:29, 13 April 2015 (UTC)
 * That might be reasonable for Tensors, but with Tensors the elements of $$V^{**}$$ are tensors of order 1 distinct from the elements of $$V$$. Changing the text to "treated as" or adding a footnote would resolve the issue.
 * Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:28, 14 April 2015 (UTC)
 * There is already a footnote in the article that explains this. See above, where I quote the footnote at length.   Sławomir Biały  (talk) 21:42, 14 April 2015 (UTC)


 * I'm sorry, I seem to have confused myself above. The objection in my previous comment seems to have been in reaction to something that I misread or read over-hastily; I'm having difficulty understanding what I wrote.
 * Starting again: I see no benefit to the rewording from "is" to "can be treated as", since it implies a particular definition of the word "vector", which has not been pinned down that precisely, and in the context I do not see any benefit in doing so. Even using the definition as a linear map, we run into problems without assuming the "is", such as that vectors are identifiable with, but are distinct from, the objects obtained when they are regarded as (a) maps from scalars to vectors, (b) maps from covectors to scalars or (c) objects mapped by a covector onto scalars. Add all the variations for tensors as linear maps (according to what they are acting on), and you end up with a real mess. So it is simplest to define tensors as elements of spaces that map to linear maps, rather than as being linear maps. For example, a tensor product is not in general a linear map in this sense at all, but there is a canonical isomorphism. So, if one wishes to approach this in a rigorously consistent way, it is far more complicated than some wording or a footnote change. —Quondum 22:24, 14 April 2015 (UTC)


 * I'm trying to adhere to Neutral point of view by crafting text that acknowledges the multiple definitions. I can live with any text that differentiates between rigorous usage and pragmatic usages. The issue is not which definition to use, but rather whether to briefly acknowledge the pragmatic status of the identifications at issue. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:28, 28 May 2015 (UTC)


 * Wikipedia needs an article about tensors over finite-dimensional vector spaces. In the future that might be something like "Tensor (finite dimensional case)", but currently this article is it.


 * For finite-dimensional vector spaces this identification V = V** is canonical. As I understand it, the preponderance of references make use of this identification without a lot of discussion. So this article should mention, but not dwell upon or emphasize, the identification. A footnote is adequate. That said, the current text that references the V = V** footnote should be improved. Mgnbar (talk) 00:10, 15 April 2015 (UTC)


 * The primary topic is this article. An approach more compliant with our guidelines would be to have an article tensors in infinite dimensions, rather than disambiguate a title that is not actually ambiguous.  There are sections of this article that address tensors in infinite dimensions that point to relevant subtopics on Wikipedia.   Sławomir Biały  (talk) 11:58, 15 April 2015 (UTC)


 * I was talking about this article. My point was that this article is primarily about tensors over finite-dimensional vector spaces (despite its name, which might suggest something more general to some editors). Therefore this article should use the V = V** identity liberally. I think that you and I agree. Mgnbar (talk) 13:57, 15 April 2015 (UTC)


 * I too am in agreement. I like Mgnbar's characterization. Making clear in the article (in footnotes or otherwise) what its scope is and necessary identifications seem like a good idea. —Quondum 14:05, 15 April 2015 (UTC)


 * The question is: should the lead be written for users who don't know what tensors are and are hoping to find out, or for users who already know and are hoping to read something pedantically correct? As a user closer to the first type, I agree with Mgnbar. Maproom (talk) 06:48, 20 April 2015 (UTC)

 * Opinion- The bot summoned me. I agree with User:Sławomir Biały regarding the article title, i.e. don't disambiguate an unambiguous title. Remember, this is about Wikipedia naming conventions.  Your frame of reference is within-Wikipedia, and Wikipedia readers are your audience. Next, after reading about half of the entire article talk page, I came to the conclusion that other editors and IP commenting readers are clamoring for a basic, not a nuanced, description of tensors. That is why I also agree with Sławomir Biały regarding the identification of finite-dimensional vector spaces as V = V** AND with what Quondum said here:

"In this article, it is sufficient to make the reader aware of any such distinction in a footnote. Text highlighting the distinction between V and V∗∗ should not occur in the lead, since it is distracting and unnecessarily confusing for the average reader of this article. Any discussion of the matter belongs elsewhere, e.g. in Dual space."

 * Finally, I would not agree with the prior proposed rewording from "is" to "can be treated as", as it will more likely confuse rather than increase the understanding of the intended audience for this Wikipedia article. Vectors are tensors in finite-dimensional space, which is what this article is about. I'm sorry that my wording has such a brusque tone. That isn't my intent (I'm in a hurry).--FeralOink (talk) 01:37, 25 April 2015 (UTC)


 * Opinion- The bot summoned me too and I substantially agree with FeralOink. I am however no mathematician and my view of this matter is partly different.
 * Firstly, the lede is far too extensive. I realise that it is popular to say that the lede should have such and such a ratio to the size of the article, but that is hooey; the point of the lede is to enable the reader to tell within a short time whether he might expect to find it rewarding to read on. If his reaction is "Huuuhhh???" he knows not to, and the same is true if he says "Oh that!", or even just "Oh." No time lost; the purpose of the lede has been achieved economically and neatly. The body of the article is intended for readers who reacted to the lede with any variation on "Oh really? Well then, let's see...!" or "Hmmm... yes. Then how did that relate to..." or even "Rubbish, how could that be?" Details and arguments that belong in the body sections, or that confuse the issue by duplicating or contradicting body text and demanding maintenance in step with the body, have no place in the lede. In practice that means that most ledes, decidedly including this one, are far too cumbersome and are obstacles to readers rather than aids. IMO the current first paragraph, perhaps plus the first sentence of the second paragraph, with a bit of editing, would be quite adequate for a lede. The rest of the material in the present lede could be dispersed through the body text or put into an introductory section.


 * Compressing the lede seems reasonable. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:28, 28 May 2015 (UTC)


 * As for the question of whether a tensor is a vector or not, that is partly a matter of semantics and partly a matter of empirical logic. It has precious little to do with the topic and nothing to do with the lede. Semantically, if a mathematician wishes to regard a given string of values as a (first-order) tensor, that might suit his personal semantic purposes because of the contexts in which he uses the string, but logically it does not put him into any coordinate set calculated to forbid another practitioner to call it a vector. If I encounter (3,1,4) or indeed (a, 1,c) and say "Ah, a vector! It might represent say, a (class of) 3D position," then who is to forbid me to use it to represent a position, and on what basis?  The basis that someone else wants to use it to represent a direction?  I don't THINK so! And the fact that the same string represents certain classes of order-1 tensor similarly does not disqualify it from being a vector, or FTM vice versa.
 * Now, all this about footnotes etc is hardly better. If there indeed are certain contexts in which it is important to remember, when working with order-1 tensors, that certain classes of vectorial operations would not be valid, then that needs to be said explicitly and in the place in the text where it is relevant. A footnote is for saying things that don't belong in the text and as such is to be treated as a last resort, not as a lazy substitute for a parenthetical remark. Most footnotes simply need to be properly put into context in the text and clearly stated. Many footnotes are hangovers from conventions suited to hard-copy books and have no functional place in soft copy. Nothing in the constraints on operations on tensors has anything to do with what else a given string of values might be used for, whether those happen to be numbers, symbols, names, or any other positional values meaningful in context, such as in a state vector, and hence the constraints on what may be (called) a tensor do not put any constraints on whether it may be a vector. To argue the contrary would make as much sense as saying that an elongated wooden object taken from a tree is not a stick because I have chosen to use it as a pointer. JonRichfield (talk) 07:44, 28 April 2015 (UTC)
 * "the point of the lede is to enable the reader to tell within a short time whether he might expect to find it rewarding to read on." Wrong.  Please see WP:LEAD.  "In practice that means that most ledes, decidedly including this one, are far too cumbersome and are obstacles to readers rather than aids..."  Well, many leads follow our guidelines.  You seem to be saying that the guidelines are wrong, and should be changed.  You're entitled to such a view, of course, but until you have successfully changed the guidelines, this article also should adhere to them.   Sławomir Biały  (talk) 11:20, 28 April 2015 (UTC)
 * @User:Sławomir Biały wrong again. I said and implied nothing of the kind. Guidelines are for when one needs guidance; they are not prescriptions for how to serve readers' needs. That is reflected in this very guideline: The lead should be able to stand alone as a concise overview. It should define the topic, establish context, explain why the topic is notable, and summarize the most important points... The lead is the first part of the article most people read, and many only read the lead... it should ideally contain no more than four well-composed paragraphs... (my emphasis)  Too many editors seem to think that means that you need four paragraphs (after all, it is the guideline; you can't argue with that...!) even if you have to cram ten topics into the four paragraphs though that means four full pages, or split one topic into four if one topic is all you can think of to say. If an editor cannot tell what "well-composed" means, perhaps an appeal for assistance might help. Similarly if one cannot tell what would be most valuable to the readers who ...only read the lead... a little help from one's friends might be in order. Any user who wants to read more will come to no harm in reading past the end of a lede sufficient for such purposes, if he finds himself in a full introductory section that goes far beyond what a casually uninterested reader could or should want to follow. Many a lede could adequately fit into a single clear sentence. Not many should need more than half a screenful. I agree with you that "this article also should adhere to" the guidelines; anyone reading it may see that although it has four carefully-titrated paragraphs, the current lede does not so adhere; it certainly violates the functional intentions. JonRichfield (talk) 02:44, 2 May 2015 (UTC)
 * Nonsense. The lead of this article is four well composed paragraphs.  They define the subject for a nonspecialist and summarize its most important aspects.  That's exactly what a lead is meant to do per our guidelines.   Sławomir Biały  (talk) 13:21, 2 May 2015 (UTC)


 * Yet another Opinion A footnote in a lede is a gesture of futility. The very notion of a footnote is questionable in a soft-copy medium; it is an inconvenience to the reader at best and it is hard to think of an example even outside a lede where it is helpful, let alone necessary. If it is a short footnote it belongs in a simple competently written parenthetical remark. If it is a relevant and long footnote that would be inappropriate to include in context, it should be the target of a link. For example consider: "Vectors and scalars themselves are also tensors.[Note 1]". In a lede that is counter-functional; it adds nothing to the reader's understanding and breaks the coherence of the text. To add insult, the added footnote is less than a line and wouldn't even have been out of place in the mutilated statement with the footnote. If (as seems reasonable in context) the author feels that the material is relevant, it could better read something like:  Tensors may be regarded as a generalisation of vectors, much as vectors may be regarded as a generalisation of scalars; in suitable contexts one may regard a scalar as a zero rank tensor, and a vector as a first rank tensor. A reader who understands that is not inconvenienced and can see what the intention of the terminology is, and a reader who does not yet understand and wishes to know, now has the links accessible that can clarify technical points obscure to him. Or he can press the information overload button and go somewhere else. After all, not everyone who reads an encyclopaedia and does not know what a tensor (or indeed a vector) is, is equipped to know it. If you don't realise that, you have been living too sheltered an academic life. Furthermore, I have had a look at all the footnotes in the article, and on similar grounds, not a solitary one is justified, in or out of the lede. Please re-think! Rattle my cage if you would like participation. JonRichfield (talk) 03:28, 2 May 2015 (UTC)


 * Your suggested wording would satisfy my objections. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:28, 28 May 2015 (UTC)


 * Here is not the place to discuss the merits of footnotes, but briefly, they fill the same function as in hard media. I agree that the content of some of the footnotes should be inline. To be precise, notes 1-4 could well be inline while 5-8 could remain as footnotes. (Part of their purpose, by the way, is to relieve the reader from information overload.) — Preceding unsigned comment added by YohanN7 (talk • contribs) 06:08, 2 May 2015

Suggestion: Transpose the table in Examples section
For those of us familiar with linear algebra, and trying to understand more general tensors, paired indices (n,m) always index first the rows and then the columns of a matrix. When viewing matrices as operators / linear functions, one reads from right to left as one does with functions and sees m inputs and n outputs. Putting these together, the columns are always associated with the input and the rows associated with the output.

If I'm understanding the type of a tensor correctly, a (0,2) tensor (like an inner product) takes in an input with 2 indices (like 2 or more vectors, or one matrix) and gives an output with 0 indices (a scalar). This is compatible with reading from right to left, going from input to output as it should be. But this table is laid out as the transpose to such (ingrained) perspectives. When reading the table, the right-most index (m) iterates over the rows, and the left-most index (n) iterates over the columns. When looking for a (0,2) tensor, I would look for it in row 0 and column 2. Unless there is some strict convention where this table has always been laid out this way (in the literature, in texts), which I doubt there is, I suggest it be transposed. -- Yoda of Borg (✉) 16:34, 15 November 2015 (UTC)
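To make the input/output bookkeeping concrete, here is a toy Python sketch (my own, with made-up numbers, not part of the table under discussion): a (0,2) tensor such as an inner product consumes two vectors and returns a scalar, while a (1,1) tensor consumes one vector and returns a vector.

```python
# Toy illustration of tensor types as multilinear maps (sketch only).

def inner(u, v):
    """A type-(0,2) tensor on R^2: two vector inputs, one scalar output."""
    return sum(ui * vi for ui, vi in zip(u, v))

def apply_11(T, v):
    """A type-(1,1) tensor as a matrix: one vector input, one vector output."""
    return tuple(sum(row[j] * v[j] for j in range(len(v))) for row in T)

u, v = (3, 1), (2, 5)
assert inner(u, v) == 11           # scalar out: 3*2 + 1*5

T = ((1, 2), (0, 1))
assert apply_11(T, v) == (12, 5)   # vector out
```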


 * I was the editor who added the (first version of the) table long ago. And I can't recall any clear rationale for having it in that orientation. So I just transposed it for you. Please check it for errors. Mgnbar (talk) 17:55, 15 November 2015 (UTC)

Introductory definition
"Tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors."

I am not a mathematician, so may be missing some nuance here, but this definition seems to be an infinite loop and says, in effect:

"Tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other geometric objects that describe linear relations between geometric vectors, scalars, and other geometric objects that describe linear relations between geometric vectors, scalars, and other..." etc. ad infinitum.

If it's my lack of mathematics that is the problem, could this be re-written so that I and other non-mathematicians can understand it? Hundovir (talk) 17:19, 11 October 2015 (UTC)


 * The first sentence is not intended as a formal definition. Several different definitions of tensors appear in the definition section of the article.  Sławomir Biały 17:26, 11 October 2015 (UTC)


 * This is a frequent complaint. The opening sentence for this article is difficult to write. For what it's worth, the easily-overlooked word "other" is crucial to avoiding the infinite loop. And the core content of the sentence is the easily-overlooked word "linear". Mgnbar (talk) 18:06, 11 October 2015 (UTC)


 * I am not sure why you think that such an infinite loop would be problematic. Linear maps between spaces of tensors are again tensors. It seems to me that you (as a self-described non-mathematician) understood the sentence just fine. TR 15:43, 12 October 2015 (UTC)


 * It is problematic because you can't define a word with the word itself. It doesn't matter if the word "other" is added or not.   Watch:  a cube is a shape that is similar to other cube-like objects.   ... Not really helpful.  — Preceding unsigned comment added by 74.197.144.134 (talk) 17:06, 28 December 2015 (UTC)


 * Who is defining the word in terms of itself? The first sentence is not the definition of "tensor".  Various actual definitions appear in the section of the article labelled "Definition".   Sławomir Biały 17:41, 28 December 2015 (UTC)


 * OK, since people are taking the '#%*× you' approach to any criticisms of this, let me ask: who is this written for? The "definition" seems to be written for someone that knows what it is already.  When you go to an article like this and people get brushed off for commenting that it is written assuming a degree in the subject, it seems like a wiki-circle jerk instead of anything remotely helpful.  — Preceding unsigned comment added by 74.197.144.134 (talk) 15:06, 29 December 2015 (UTC)


 * Your criticism was that the first sentence was a circular definition. I was just pointing out that it is not a definition.  If you want to read any of the three (!) definitions in the article, they are there for you.  If you don't want to read them, that's fine too.  But then your criticism is that: "The definition is too hard for me to understand, and the first sentence is not a definition."  Yes, we already know this.
 * As for the intended readership, unfortunately, this is an encyclopedia article, not a tutorial. If you want a tutorial on the subject, there are textbooks on linear algebra where you can learn the basics of vectors and transformations.  Then you can grab a book on tensor analysis.  It is quite typical that mathematical definitions require some background in order to be fully appreciated.   Sławomir Biały 15:16, 29 December 2015 (UTC)


 * 74.197.144.134, the editors here do try to write clear, understandable articles. And we appreciate constructive feedback from readers. If we could write a brief introduction that satisfied rigor, motivation, and applications, then we would. But we can't. So instead we have separate sections for those. Many Wikipedia articles on advanced math topics are like this. To learn more you might read the policies Wikipedia is not a textbook and Manual of Style/Lead section. Regards -- Mgnbar (talk) 22:08, 29 December 2015 (UTC)

Matrix actions
In the section "As multidimensional arrays", numerical matrices act on the right of lists of basis vectors, because we tend to think of abstract vectors as "columns". Thus, to change the basis $$[\mathbf{e}_1\ \mathbf{e}_2\ \cdots \mathbf{e}_n]$$, we must multiply this on the right by a numerical matrix. This is consistent with viewing a basis as an isomorphism from $$\mathbb R^n$$ to the vector space, with the action of numerical matrices given by right-composing with an element of GL(n). It is also consistent with the description given at covariance and contravariance of vectors. tl;dr, the correct order for the change of basis action on a linear transformation is $$\hat{T}=R^{-1}TR$$.  Sławomir Biały 11:14, 3 March 2016 (UTC)
 * It was requested that I give a reference for writing the change of basis in this way, with the GL(n) action on the right instead of the left. I refer, for instance, to the books "Lectures on differential geometry" by Shlomo Sternberg, or Kobayashi and Nomizu, "Foundations of differential geometry".   Sławomir Biały 18:41, 3 March 2016 (UTC)
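The rule $$\hat{T}=R^{-1}TR$$ can also be sanity-checked numerically. The following pure-Python sketch uses toy 2×2 matrices of my own choosing (not from the books cited above) and confirms that the transformed matrix computes the same linear map once vector components are converted by $$x' = R^{-1}x$$.

```python
# Numerical check of the change-of-basis rule T' = R^-1 T R (2x2 toy example).

def mul(A, B):
    """Multiply two 2x2 matrices stored as tuples of rows."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matvec(A, x):
    """Apply a 2x2 matrix to a component vector."""
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

R     = ((1, 1), (0, 1))       # change-of-basis matrix (det = 1, easy inverse)
R_inv = ((1, -1), (0, 1))
T     = ((2, 3), (5, 7))       # matrix of a linear map in the old basis

T_new = mul(mul(R_inv, T), R)  # the quoted transformation law

# Applying T in old components and then converting agrees with converting
# first and applying T_new in the new components.
x = (1, 2)
assert matvec(R_inv, matvec(T, x)) == matvec(T_new, matvec(R_inv, x))
```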


 * According to my book, for another frame field $$S'$$, the change of basis is


 * $$S'=A\cdot S$$,

where
 * $$A=\begin{bmatrix} a_1^1&\cdots &a_1^q\\ \vdots& &\vdots\\ a_q^1&\cdots& a_q^q \end{bmatrix},\quad S=\begin{bmatrix} s_1\\ \vdots\\ s_q \end{bmatrix},$$

$$\{s_i\} $$ are vectors, i.e. a frame field. Under this setting, for a (1,1)-tensor $$T$$, after a change of basis, we get a new (1,1)-tensor $$\hat{T}$$; if we write it in matrix form, we should have
 * $$\hat{T}=A\cdot T\cdot A^{-1}.$$

This is in chapter 4 of the book "Lectures on Differential Geometry" by Shiing-Shen Chern. In his book, p. 108, he also writes clearly that, for a curvature matrix $$\Omega=d\omega-\omega\wedge\omega$$, after a change of basis, there is
 * $$\Omega'=A\cdot\Omega\cdot A^{-1}.$$

And $$\Omega(X\wedge Y)=D_X D_Y - D_Y D_X - D_{[X,Y]}$$ is well known to be a (1,1)-tensor. Wttwcl (talk) 19:25, 3 March 2016 (UTC)
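If it helps the discussion: the two conventions appear to differ only by transposition. Writing the frame as a column of vectors, Chern's $$A$$ corresponds to $$R^T$$ in the row-of-vectors convention, and the component matrices of the two conventions are mutual transposes, so $$A\cdot T^T\cdot A^{-1} = (R^{-1}\cdot T\cdot R)^T$$. A toy pure-Python check (the identification $$A=R^T$$ and all numbers are my own assumptions, offered only as an illustration):

```python
# Toy check that the two change-of-basis conventions are transposes of
# each other: A * T^T * A^-1 == (R^-1 * T * R)^T when A = R^T.

def mul(A, B):
    """Multiply two 2x2 matrices stored as tuples of rows."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def tr(A):
    """Transpose of a 2x2 matrix."""
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

R     = ((1, 1), (0, 1))
R_inv = ((1, -1), (0, 1))
T     = ((2, 3), (5, 7))

A, A_inv = tr(R), tr(R_inv)            # column-frame convention: A = R^T

lhs = mul(mul(A, tr(T)), A_inv)        # A · T^T · A^-1
rhs = tr(mul(mul(R_inv, T), R))        # (R^-1 · T · R)^T
assert lhs == rhs
```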
 * That seems idiosyncratic. By definition, a basis of a vector space V is a linear isomorphism $$ f:R^n\to V$$.  If g is a change of basis matrix (that is, an element of the group GL(n)) then we would normally put g on the right of f, as in $$ f\circ g $$.  You're arguing that we should actually write g on the left of f.  It's true that some authors do function composition in the opposite order, but that is unusual, and I don't think it is helpful in an encyclopedia article.  And it is actually wrong per the definition given in the article, which explicitly uses the more usual conventions for composites of linear maps.   Sławomir Biały 22:08, 3 March 2016 (UTC)


 * That's a definition; the more common definition is that a basis is a sequence of linearly independent vectors spanning the entire space, except in contexts where it is necessary to distinguish a Hamel basis from a Schauder basis. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:07, 7 March 2016 (UTC)


 * Yes, that's what I said. It's a linear isomorphism from $$R^n$$ to V.  That is a surjective (=spanning) monomorphism (=linearly independent).  The key point is that it is from $$R^n$$ to V, rather than the other way around (it would then be a dual basis).  Thinking of bases as linear isomorphisms in this way is important in the context of defining tensors, because that is precisely how one defines notions like "components in a basis", which is why I've opted here to define things in this way very directly.  Refer also to the aforementioned books by Kobayashi and Nomizu, Wells, etc.
 * In any case, if we agree that linear transformations are to act on the left of elements of V, as per the usual conventions, then we are forced to make numerical matrices act on the right of basis sequences. That is, a basis is a "row vector" whose individual elements are vectors in V.  The above discussion just expresses this fact in a cleaner abstract language which makes it much clearer that the choice of action on the right is actually canonical and natural.   Sławomir Biały 20:33, 7 March 2016 (UTC)


 * No, it (FSVO "it") is not what you said. What you said was that it was defined as an isomorphism and what I said was that the definition in terms of sequences was more common. The two are, of course, equivalent. Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:02, 30 March 2016 (UTC)


 * The point is that the group GL(n) acts naturally on the right of bases, if the group GL(V) acts on the left. It is not merely "equivalent" to describe bases in this way, but "naturally equivalent".  That is, the two approaches describe precisely the same mathematical structure, and so it is common in practice to regard the definitions "a basis is a list of n linearly independent spanning vectors" and "a basis is a linear isomorphism of $$\mathbb R^n$$ to V" as the same.  There is a natural equivalence between the two functors: A list of n vectors in V defines a linear transformation from $$\mathbb R^n$$ to V, and vice-versa.  If we agree that this functor should be covariant with respect to the action of the group GL(n), and that the two definitions are "naturally equivalent", then the group GL(n) acts naturally on the right of bases.  A different way of specifying a "basis" would be to give n linearly independent coordinate functions (that is, a linear isomorphism $$V\to\mathbb R^n$$).  This is a contravariant functor, on the category of vector spaces, and so not naturally equivalent to the covariant functor that assigns a basis to a vector space.  (Probably the easiest way to see this is in infinite dimensional vector spaces, where there are a great many more linearly independent linear forms than there are linearly independent sets of vectors.)   Sławomir Biały 18:44, 30 March 2016 (UTC)
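For readers following along, the right action can be seen very concretely in coordinates. In this sketch (my own toy example, not from the sources mentioned) a basis of $$\mathbb R^2$$ is stored as a matrix whose columns are the basis vectors; changing basis by $$g \in GL(2)$$ multiplies on the right, while the coordinate map, being the inverse, picks up $$g^{-1}$$ on the left, i.e. transforms contravariantly.

```python
# A basis as a matrix B (columns = basis vectors); GL(n) acts on the right
# of bases, and on the left (by inverses) of coordinate maps.  Toy 2x2 sketch.

def mul(A, B):
    """Multiply two 2x2 matrices stored as tuples of rows."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matvec(A, x):
    """Apply a 2x2 matrix to a component vector."""
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

B     = ((2, 0), (0, 1))       # basis: columns (2,0) and (0,1)
B_inv = ((0.5, 0), (0, 1))     # its coordinate map (the inverse matrix)
g     = ((1, 1), (0, 1))       # an element of GL(2)
g_inv = ((1, -1), (0, 1))

B_new     = mul(B, g)          # bases transform on the right ...
B_new_inv = mul(g_inv, B_inv)  # ... coordinate maps on the left, contravariantly

# The new coordinate map still inverts the new basis map.
v = (6, 5)
assert matvec(B_new, matvec(B_new_inv, v)) == v
```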

Other
Please fix this sentence (has no verb): "The components of a more general tensor transform by some combination of covariant and contravariant transformations, with one transformation law for each index." — Preceding unsigned comment added by 88.11.24.124 (talk) 01:03, 19 May 2016 (UTC)


 * Sławomir Biały (talk) 01:04, 19 May 2016 (UTC)

Organized
Shouldn't the word "organized" in "organized multidimensional array" be deleted from the definition? The word has no meaning in this context. All mathematical objects are "organized" in some way and there is no mathematical term "organized" in any textbook that I can find. This trigram was probably mistakenly copied from a computer science lecture or textbook where there was originally a comma between "organized, multidimensional array". Hobsonlane (talk) 00:12, 16 September 2016 (UTC)


 * To clarify, you're talking about the intro, not the Definition section. Yes, "organized" should be deleted. In the future, feel free to be bold and make such small changes yourself. Mgnbar (talk) 02:18, 16 September 2016 (UTC)


 * I agree the sentence seems odd. I believe the original wording was something like "a tensor can be organized as a multidimensional array", with organized the main verb.  But now as an adjective, it doesn't make a lot of sense.   S ławomir Biały 23:03, 16 September 2016 (UTC)

Proposal: less fluffy introductory sentence
Tensors are maps between products of vector spaces that are linear in each of their arguments.

I prefer this for a couple of reasons:

- It makes complete sense for an algebraist to discuss tensors without using any geometric interpretation.

- As someone who works extensively in geometry and topology I have no clue what the terms "geometric vector", "other geometric objects" mean. This is annoyingly imprecise and comes off as rather pretentious.

- The entire introduction appears to be written for/by engineers and physicists. If such an article must exist can we have "Tensor" contain a precise discussion of multilinear maps between vector spaces, the classification etc. and add an entry like "Geometric Tensor" to describe the usage in engineering and physics? — Preceding unsigned comment added by 71.145.211.63 (talk) 01:37, 10 February 2017 (UTC)


 * I agree with the criticism. YohanN7 (talk) 08:59, 10 February 2017 (UTC)


 * I have restored the lead of the article. It had been moved into a separate section, with edits incorrectly marked as minor.   Sławomir Biały  (talk) 11:06, 10 February 2017 (UTC)

Split "definition: multidimensional arrays" into a motivation and proper definition
I'm trying to make sense of the tensor, but a couple of problems get in the way:
 * 1) covariant/contravariant indices are discussed at length before any intuition on what they are is given (at least I can't infer it)
 * 2) products such as $$e_j R^j_i$$ are used without any intuition on what they mean
 * 3) the section doesn't seem to provide any proper definitions: I see a big intermix of undefined concepts (neither properly defined nor intuitively defined) used to explain the concepts themselves

I would suggest splitting this section up into an informal introduction where concepts such as covariant/contravariant/tensor are explained, and a formal definition where they are defined. — Preceding unsigned comment added by TheZuza777 (talk • contribs) 17:45, 22 June 2017 (UTC)

Blackboard Bold
user:Sławomir Biały recently reverted an edit that changed R from boldface to blackboard bold, with an explanation of consistency. However, \mathbf{R} appears in the article only once. Further, the use of blackboard bold for, e.g., C, I, N, Z, Q, R, is conventional. Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:14, 12 July 2017 (UTC)


 * Both bold and blackboard bold appear to be "conventional". In WP math articles, bold seems to dominate.  There does not seem to be a clear reason to prefer BB; also, not all fonts support it, which then relies on browsers intelligently substituting some other font.  This would suggest that in this context, simple bold should be preferred.  Refer to  and .  —Quondum 17:53, 12 July 2017 (UTC)
 * Incorrect, the article uses ordinary bold in one other place. Changing from one style to the other should be done in both places, or not at all, and only after a discussion of why an exception should be made to the usual convention at WP:MSM.  Sławomir Biały  (talk) 21:45, 12 July 2017 (UTC)
 * Yes and no. The markup \mathbf{R} appears only once, but I failed to note that the markup  R  for the Reals also appears. Sorry for the confusion.


 * As a side note,  R  is used both for the Reals and for matrix elements, a potential source of confusion. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:10, 13 July 2017 (UTC)

Field of scalars
As presently written, the article assumes that the field of scalars is the Reals. The article seems to be more oriented to Physics than to Mathematics, but even in Physics tensor products with complex scalars are relevant, e.g., Fock space. Also, the article mentions spinors. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:20, 13 July 2017 (UTC)


 * Tensors on complex manifolds could be mentioned in the generalizations section. Spinors and elements of a Fock space (or other tensor products) are not tensors in the sense of this article.  Sławomir Biały  (talk) 17:19, 13 July 2017 (UTC)


 * Then the lede needs to be rewritten, because it refers to tensors in Mathematics. In Mathematics a Tensor is an algebraic entity and its field of scalars need not be $$\mathbf{R}$$. Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:44, 19 July 2017 (UTC)


 * I disagree that the use of the word "tensor" in mathematics is primarily an algebraic one. (In fact, this edit of yours is actually incompatible with that belief.)  Usually, when we talk of tensors, we have a smooth structure in mind.   Sławomir Biały  (talk) 17:55, 19 July 2017 (UTC)


 * How is adding the phrase "and Fréchet manifolds" to the existing text of Tensor incompatible with the belief that a tensor is an algebraic entity? Would you have preferred that I instead remove "Tensors thus live naturally on Banach manifolds"? Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:38, 24 July 2017 (UTC)


 * Tensor products of Fréchet spaces are topological, not algebraic.  Sławomir Biały  (talk) 14:35, 3 August 2017 (UTC)


 * I also disagree with your assertion that "In Mathematics a Tensor is an algebraic entity". In mathematics, the word "tensor" is only rarely used to describe an element of a generic tensor product of spaces. It is almost exclusively used for the linear algebra concept that is the inspiration for the more general "tensor product" concept in algebra. Also, where does this article assume vector fields over the real numbers? TR 07:53, 20 July 2017 (UTC)


 * Tensor is not Tensor field. I'm not sure what you mean by "the linear algebra concept that is the inspiration for the more general 'tensor product' concept in algebra", or how it differs from the tensor product. Whether you define it as a quotient space of sums of formal products or as a space of multi-linear maps, you get a space of tensors, and both definitions are in the purview of Linear Algebra. Even where the primary interest is in tensors over $$\mathbb{R}$$, the better authors start with a more general treatment; even some texts aimed at Engineering students allude to the complex case. Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:38, 24 July 2017 (UTC)
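 * For concreteness, the two standard constructions referred to above can be sketched as follows (standard linear algebra over an arbitrary field $$F$$; the notation here is mine, not from the article):

```latex
% Construction 1: quotient of the free vector space on formal products
V \otimes W \;=\; \operatorname{Free}_F(V \times W) \,\big/\, N,
\quad\text{where } N \text{ is spanned by the elements}
\begin{aligned}
&(v_1 + v_2,\, w) - (v_1, w) - (v_2, w),\\
&(v,\, w_1 + w_2) - (v, w_1) - (v, w_2),\\
&(\lambda v,\, w) - \lambda (v, w),\qquad (v,\, \lambda w) - \lambda (v, w).
\end{aligned}

% Construction 2: multilinear maps (type (m, n) tensors on V)
T^m_n(V) \;=\; \bigl\{\, T \colon
\underbrace{V^* \times \cdots \times V^*}_{m} \times
\underbrace{V \times \cdots \times V}_{n} \to F
\;\bigm|\; T \text{ multilinear} \,\bigr\}.
```

In the finite-dimensional case the two constructions yield naturally isomorphic spaces, which is why either may serve as "the" definition.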


 * You ask "where does this article assume vector fields over the real numbers?" Try Tensor. Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:38, 24 July 2017 (UTC)


 * One can have tensors at a point, yes. These transform by the Jacobian at the point.   Sławomir Biały  (talk) 14:37, 3 August 2017 (UTC)


 * What Jacobian? The article is not about tensor fields. The hatnote "This article is about tensors on a single vector space. For tensor fields, see Tensor field." is quite clear. Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:19, 4 August 2017 (UTC)


 * The Jacobian of the coordinate transformation. For example, spherical coordinates.   Sławomir Biały  (talk) 19:14, 4 August 2017 (UTC)


 * Please read the hatnote. There are no coordinate transformations, because there is no underlying manifold. Coordinate transformations are relevant to tensor fields, not to tensors. Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:19, 8 August 2017 (UTC)


 * Yes I'm talking about tensors, not tensor fields. Overwhelmingly, one is concerned with tensors under smooth diffeomorphisms.  See the chapter on tensors, for example, in Aris' "Vectors, tensors, and the basic equations of fluid mechanics".  I oppose any proposal to abstract tensors away to the tensor algebra of a vector space.  That is the subject of a different article.   Sławomir Biały 18:52, 8 August 2017 (UTC)


 * Your comments are only relevant to tensor fields, not to tensors. For tensor fields one is certainly concerned with the transformation of components under diffeomorphisms, but not for tensors on a single vector space, the phrase used in the hatnote. Shmuel (Seymour J.) Metz Username:Chatul (talk) 19:07, 8 August 2017 (UTC)


 * How so? The stress tensor at a single point will change by the Jacobian of the coordinate transformation.  There is nothing about tensor fields.   Sławomir Biały 19:19, 8 August 2017 (UTC)
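 * As a sketch of the transformation law being invoked here (the standard change-of-coordinates rule for a tensor at a single point; the index notation is assumed, not taken from the discussion): for a type-(1,1) tensor with components $$T^k_{\;l}$$ at a point $$p$$,

```latex
\hat{T}^{i}_{\;j}
\;=\;
\frac{\partial \hat{x}^{i}}{\partial x^{k}}\bigg|_{p}\,
\frac{\partial x^{l}}{\partial \hat{x}^{j}}\bigg|_{p}\,
T^{k}_{\;l},
```

where both Jacobian factors are evaluated only at $$p$$, so the rule makes sense for a tensor at a single point without reference to a tensor field.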


 * There is no "at a single point", Jacobian, or coordinate transformation, because the article is not about tensor fields. Again, read the hatnote, which I have quoted multiple times. Shmuel (Seymour J.) Metz Username:Chatul (talk) 22:52, 9 August 2017 (UTC)


 * See example below.  Sławomir Biały  (talk) 11:09, 12 August 2017 (UTC)

I think the only assumption that the scalars are real numbers is in the formula in the section "As multilinear maps". At the same time, there was a confusing section lower down "Tensors on complex manifolds", while this article is about tensors on a single vector space, not tensor fields on a manifold. I edited that lower section some, so that it now refers to complex vector spaces, but perhaps the generalization to complex and other fields could be incorporated in the "As multilinear maps" section itself. — Carl (CBM · talk) 23:18, 9 August 2017 (UTC)


 * The "As multilinear maps" section should probably be more inclusive towards other fields. On the other hand, I would be very hesitant to go the usual mathematics literature route of phrasing everything with respect to some arbitrary field. That is exactly the sort of thing that makes this type of article hard to read for readers with relatively little mathematics background. Maybe a short remark at the end of the section, explaining that the definitions can be straightforwardly extended to vector spaces over arbitrary fields, would suffice? TR 12:40, 15 August 2017 (UTC)
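 * A minimal sketch of what such a remark would cover (the extension is standard; $$F$$ here denotes an arbitrary field, a symbol of my choosing): a type-(m, n) tensor becomes a map

```latex
T \colon
\underbrace{V^* \times \cdots \times V^*}_{m} \times
\underbrace{V \times \cdots \times V}_{n}
\to F,
\qquad
T(\ldots,\, \lambda u + \mu v,\, \ldots)
= \lambda\, T(\ldots, u, \ldots) + \mu\, T(\ldots, v, \ldots),
\quad \lambda, \mu \in F,
```

i.e. the definition in the "As multilinear maps" section carries over verbatim with $$\mathbb{R}$$ replaced by $$F$$ (for instance $$\mathbb{C}$$).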