Talk:Tensor/Archive 2

Merge done
I have taken down the notice for a merge of the rest of tensor (intrinsic definition), since I don't think it would improve this article to take in more from that one.

I'm a little concerned that the "Cartesian tensor" perspective is not addressed explicitly, and may have entered in some places by the back door. Charles Matthews (talk) 14:35, 2 February 2010 (UTC)

Merging
I'm starting on cleaning up the article, and beginning merging, and also reconsidering the logical flow, while introducing inline citations from the Encyclopedia of Mathematics (many relevant articles, ducking the unified "tensor" article). Bear with me as this takes shape now. It's already a long article, nearly all needing some rewriting. Charles Matthews (talk) 23:33, 31 December 2009 (UTC)


 * Excellent. Thank you very much for stepping up Charles.  Let me know if there is anything I can do to assist, besides bearing with you.  Sławomir Biały  (talk) 00:52, 1 January 2010 (UTC)


 * So far so good. I have made three of the suggested merges, leaving tensor (intrinsic definition) which has had some useful material copied to here, but where other parts seem not to fit well. I have split out the new page tensor software. If you could look over in particular the remarks on tensor densities, as a check of my explanation, that would be a help. Obviously the applications section is still rough, also. But the article as a whole appears more coherent now. Charles Matthews (talk) 09:57, 1 January 2010 (UTC)


 * Fabulous job. It looks much better.  The explanation of the tensor densities section looks mostly correct, except that it appears undecided about whether the basic densities transform by the Jacobian or its absolute value.  I am of the opinion that "densities" should transform by a power of the absolute value of the Jacobian (I know there are other conventions that could benefit from a more nuanced treatment).  But the last sentence seems misleading from this perspective: "In the orientable case non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight is not restricted to integer values."  Non-integral densities always make sense, not just on orientable manifolds.   Sławomir Biały  (talk) 12:52, 5 January 2010 (UTC)


 * OK, thanks, I have gone into the tensor density section and moved it around so that (I hope) it now covers your point. Charles Matthews (talk) 13:12, 5 January 2010 (UTC)
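
To pin down the convention under discussion (a sketch, stating only the version Sławomir describes, not the article's final wording): if $$J$$ denotes the Jacobian matrix of the coordinate change, a scalar density of weight $$w$$ transforms as

```latex
\hat{\rho}(\hat{x}) \;=\; \left|\det J\right|^{w}\,\rho(x),
\qquad
J^{i}{}_{j} \;=\; \frac{\partial x^{i}}{\partial \hat{x}^{j}} .
```

Since $$\left|\det J\right| > 0$$ regardless of orientation, non-integral weights $$w$$ make sense on any manifold under this convention; it is the variant convention using $$\det J$$ itself, without the absolute value, that ties non-integral weights to orientability.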


 * It's looking quite swell! Where have the contents of the two sections of the old classical treatment article gone, though? Obviously a lot will be missing, like the specific concern with spacetime, but I'd like to know if the other stuff found a place somewhere or if it was just cut. LokiClock (talk) 09:25, 15 January 2010 (UTC)


 * The talk about general relativity didn't seem like it had a place here: the use of physical language such as "event" really lends nothing to the mathematics in that instance. And the tensor field notation is just a special case. Charles Matthews (talk) 09:50, 15 January 2010 (UTC)


 * Mmm, I see now the bits I found important are under "compatibility" and "compatibility for tensor fields." LokiClock (talk) 16:49, 3 February 2010 (UTC)

Question
If V and W are vector spaces, is any element of $$V\otimes W$$ then considered to be a tensor? If so, the article needs to be more general. TimothyRias (talk) 16:09, 3 February 2010 (UTC)
 * As far as I know, yes. More general how? LokiClock (talk) 16:27, 3 February 2010 (UTC)


 * But this article does not need to be more general. As it says, the usage for "tensor on a vector space" uses a space and its dual to build up tensors. You could also argue that the tensor product of two modules consists of tensors. The point is alluded to, in fact; but not covered in detail. Charles Matthews (talk) 16:45, 3 February 2010 (UTC)


 * Well, as the article is currently written it gives the impression that a linear map V->W, represented by a not necessarily square matrix, is not a tensor. This is both untrue and potentially confusing, since a lot of people will come here thinking of tensors as generalisations of matrices/linear maps. TimothyRias (talk) 09:49, 4 February 2010 (UTC)


 * For pedants, it is a "tensor on a vector space" where the vector space is the direct sum of V and W. It would be worth pointing out the way this fits non-square (cuboidal etc.) tensors in with the definitions: it's the fairly obvious point that different index sets can be used if regarded as disjoint subsets of a bigger index set, but that explanation perhaps descends to excessive detail. Charles Matthews (talk) 08:19, 12 February 2010 (UTC)

The article should probably go with what books say about tensors, rather than speculating about how the notion might be generalized. Based on a glancing familiarity with several of the sources currently in the article, an element of the tensor product $$V\otimes W$$ does not seem to be something that would usually be known as a tensor (barring constructions like those mentioned by Charles). We have the article tensor product, which is about tensor products; these are different from tensors, though closely related to them. Sławomir Biały (talk) 12:48, 12 February 2010 (UTC)


 * As far as I can tell most literature does in fact restrict to tensors on a (single) vector space, enough so that it justifies taking that as the main subject. If we ever come across some evidence of notable more general use we can add it in a "generalisations" section. That being said we might want to be more explicit in saying that rank/degree 2 tensors are represented by square matrices. TimothyRias (talk) 14:55, 12 February 2010 (UTC)
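
As an aside on the point discussed above, here is a minimal sketch in plain Python (the helper name `apply_linear_map` is hypothetical, not from the article) of how a linear map V -> W shows up in components as a non-square array, one index running over a basis of W and one over a basis of V:

```python
# A linear map f: V -> W (dim V = 3, dim W = 2) has components f[i][j],
# with i indexing a basis of W and j a basis of V -- a non-square array.
def apply_linear_map(f, v):
    """Contract the map's V-index against the vector's components."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in f]

f = [[1.0, 0.0, 2.0],
     [0.0, 3.0, 1.0]]   # 2x3 array: components of an element of W (x) V*
v = [1.0, 1.0, 1.0]     # components of a vector in V
w = apply_linear_map(f, v)   # -> [3.0, 4.0]
```

Whether one calls such an array a tensor outright, or first embeds V and W in a direct sum as Charles suggests, is exactly the terminological question of this thread.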

Order or degree
I just removed a citation needed tag from the statement that the order of a tensor is its number of indices, since this isn't really something requiring a citation: rather it is an entirely standard basic fact of tensor analysis that can be found in any book on the subject (assuming, that is, one understands what is being described). But perhaps there is some question about whether this number should be called the order, degree, or total (or general) valency of the tensor. Dodson and Poston use "degree". The Encyclopedia of Mathematics uses "order" here and "general valency" here. I know the term "rank" is also in wide use, but as has already been amply pointed out, this term should be avoided because of potential confusion. I would support changing it to "degree" if that is less likely to cause upset, but I also have no issue with order, even though that seems to be in less common usage in the circles that I move in. Sławomir Biały (talk) 01:40, 12 February 2010 (UTC)
 * Synonyms should be mentioned and the choice of terminology for the article with them. General/total valency does have the upshot of the relation with valency. LokiClock (talk) 04:08, 12 February 2010 (UTC)


 * Well, you are pushing the article down a road that doesn't lend to any greater readability, in fact. This type of point really belongs in footnotes. Charles Matthews (talk) 08:21, 12 February 2010 (UTC)


 * Of all the possibilities, "general valency" is the least of my preferences in the lead. Consider how uninviting and awkward the second paragraph of the lead would be if every instance of the word "order" were replaced with the term "general valency". A footnote seems best, if we are to mention these other variations at all: there do seem to be as many such slight variations as there are authors in the subject. Regardless of the issue of whether and how to mention these variations in terminology, the article does need to settle on some terminology in order to discuss these things.  The purpose of my original post was to see if there was any support for changing "order" to "degree".  So far, I reckon there is not consensus to change to "general valency".   Sławomir Biały  (talk) 11:36, 12 February 2010 (UTC)


 * Valency -noun: the capacity of one person or thing to react with or affect another in some special way, as by attraction or the facilitation of a function or activity. - It's fairly obvious how that applies to the electron shells of an atom, but how that word ever came to refer to the order of a tensor is lost on me, if not merely by some gross contortion of language that should seldom if at all be repeated. IMHO.  Also, that was fun to say.  Kevin Baastalk 22:22, 12 February 2010 (UTC)


 * People won't stop to read the footnotes when they're reading the article, because they have no indication of the importance of the footnote to their understanding of the subject (in practice, i.e. understanding terminology used in papers) without reading the footnote. Tucking the information away effectively means not presenting it at all. If people don't see the terms defined within the text, they'll assume they're not defined at all. You can't remember "having read about that somewhere" when you see the term crop up elsewhere if you never read it to begin with. Regardless, for the sake of referability, the synonyms should be mentioned along with the choice of terminology. To be clear, I've never seen the term before, and I do not have a strong opinion in favor of the term. Obviously it doesn't help with readability. I was just trying to think of possible upshots to suggest. Really, the same effect is had as long as the term is mentioned. LokiClock (talk) 01:47, 14 February 2010 (UTC)
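
Whatever term is settled on (order, degree, or valency), the quantity itself is concrete: the number of indices needed to address a component. A minimal sketch in plain Python (the helper name `order` is hypothetical), treating nested lists as component arrays:

```python
# The "order"/"degree" under discussion is the number of indices needed
# to address a component -- here, the nesting depth of a list-of-lists.
def order(t):
    """Count index slots: 0 for a scalar, 1 for a vector, 2 for a matrix..."""
    n = 0
    while isinstance(t, list):
        t = t[0]
        n += 1
    return n

assert order(5.0) == 0              # scalar
assert order([1.0, 2.0]) == 1       # vector components
assert order([[1.0], [2.0]]) == 2   # matrix: order-2 tensor components
```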

Covariant vs. Contravariant
The article seems to mix these up. I'm used to "normal" vectors (such as elements of the tangent space at a point) being called contravariant, and their duals (covectors/1-forms) being called covariant. This seems to be in line with what the Covariance and contravariance of vectors article is saying. I'll try to fix some of the cases I've come across, but some extra eyes are welcome. TimothyRias (talk) 15:10, 12 February 2010 (UTC)


 * My understanding is covariant is like you're putting a new system of measurement on top of a pre-existing one, and contravariant is you're actually measuring something with a particular measuring system. covariant: vector of measuring + vector of conversion ("co-variant") = new vector of measuring.  contravariant = vector of measuring + vector of thing to be measured ("contra-variant") = result of measurement.  sounds the same as what you describe. Kevin Baastalk 16:43, 12 February 2010 (UTC)
 * Though looking at that edit, the usage of "covariant" and "contravariant" there isn't meaningful to me. co- and contra- are prefixes that describe a relationship among two different things.  (namely "with" and "against")  (in this case, one tensor in respect to another.)  that para seems to suggest otherwise; that it's an intrinsic quality and simply a matter of top or bottom indices.  that seems "intrinsically" wrong to me.  hence my brain goes on "ignore" mode when i read over that paragraph. Kevin Baastalk 17:02, 12 February 2010 (UTC)
 * Of course, these articles all look like a bunch of hopelessly inaccessible esoteric jargon to me now, so I've kind of given up on them anyway. I know the difference between contravariant and covariant.  If someone who doesn't wants to find out, I suppose they'll just have to go someplace else.  Kevin Baastalk 17:14, 12 February 2010 (UTC)
 * I think we would be better off reserving the terms co and contravariant to describe the transformation law that the components undergo. Thus it is meaningful to say that the components of a tangent vector relative to a basis of coordinate derivatives are contravariant, but fairly meaningless to say that the vector itself is contravariant.  Referring to the vector itself as contravariant is an abuse of language, but one that is obviously very popular.  This may reflect a certain difference of cultures, but it is heartening to see that Kevin (an engineer?) agrees with me.   Sławomir Biały  (talk) 18:17, 12 February 2010 (UTC)


 * Let me be clear - I understand that a basis generally has its indices on the bottom, and that therefore the parts of a tensor that are covariant to said basis would likewise be on the bottom, while the contravariant ones would be on top; but that is not the case in general, and neither is "top" or "bottom" indices the proper meaning of the terms. Now while I understand that it makes things simpler to use that convention wherever possible, we should be wary lest we encourage the reader to confuse a convention of notation with the definition of a spatial relationship.  I.e., they should understand that "covariance" isn't simply having lower indices, and isn't an innate property of a tensor (or vector), but a description of a relationship between two mathematical objects.  Kevin Baastalk 22:42, 12 February 2010 (UTC)
 * I don't think the idea is that we call tensors with lower indices covariant, but that we write the components of a tensor that transforms covariantly with lower indices. TimothyRias (talk) 23:34, 12 February 2010 (UTC)
 * You missed my entire point. It's the same thing whether you call it that before or after you put it there.  My left is not always north. And whether it's my left or my right depends not on which direction I'm facing but which side of me you are on.  My point of all that is that there really is no such thing as "a tensor that transforms covariantly".  As I said, covariance or contravariance is not intrinsic but a property of a relationship between two objects.   The lower indices are only covariant to the lower indices of a tensor being multiplied by that tensor.  They are contravariant to the upper indices.  So it is simply wrong to say that they are always co- or always contra-.  All components of a tensor "transform covariantly" with respect to some tensors put in certain mathematical relationships to them. None are covariant to all.  Roughly speaking, it's not a matter of which side they're on; it's a matter of whether they are on the same side or different sides. So not only is it an abuse of language, it is also logically false.  Kevin Baastalk 14:55, 15 February 2010 (UTC)
 * The w.r.t. of the terms covariant and contravariant in the context of tensors is simply the chosen basis on your vector space. The components of a vector must transform contravariantly to the basis in order for the vector itself to be invariant. TimothyRias (talk) 15:08, 15 February 2010 (UTC)
 * Yes. Key point here is to the basis.  "Covariant" and "contravariant" are always only meaningful with respect to something else (often the basis).  (Though regarding "w.r.t.": not always.  The "w.r.t." is whatever the mathematical formula says it is.)  As Sławomir Biały put it. "I think we would be better off reserving the terms co and contravariant to describe the transformation law that the components undergo.  Thus it is meaningful to say that the components of a tangent vector relative to a basis of coordinate derivatives are contravariant, but fairly meaningless to say that the vector [or tensor, or component of a tensor] itself is contravariant [or covariant]." Kevin Baastalk 16:25, 15 February 2010 (UTC)
 * But components are always w.r.t. some specific basis. For any tensor the components will always transform in the same way under basis transformations, either covariantly or contravariantly to the transformation. As such it IS a meaningful statement to say that "the components of the tensor transform covariantly", which by some abuse of language is shortened to "the tensor is covariant". Of course, the tensor itself is coordinate/basis independent and thus strictly speaking it is invariant. TimothyRias (talk) 16:49, 15 February 2010 (UTC)
 * A tensor (not necessarily universally, but in many cases, and certainly conceptually) is part of a chain of differentiation/integration, both of which are only meaningful with respect to the-thing-that-one-is-differentiating-with-respect-to or the-region-that-one-is-integrating-over (notwithstanding indefinite integrals and o.d.e.s). Co- and contra-variance are descriptions of these relationships.  Since these relationships (differentiations/integrations) only exist (are manifest) from combination (multiplication/division/etc.) of tensors/vectors/what-have-you, these descriptions are only meaningful in that context.   I hope that wasn't too confusing.
 * I tried to work with your phraseology "the components of the tensor transform covariantly" to put what I'm trying to say in your word/syntax choices, but it seems that phraseology simply is not amenable to the mathematical logic which I mean to describe, and which Charles already has: components of a tensor that are covariant to certain vectors are not covariant to their duals, so to say that they "transform covariantly" (apart from straining grammar) is demonstrably false, regardless of the tensor and regardless of the component. Kevin Baastalk 18:31, 15 February 2010 (UTC)

Actually, Kevin, I don't think you understand the issue, nor what Sławomir was saying. It's really not about choice of basis - it's absolutely not about that. It's about which space you look on as the "original" vector space, and which the "dual". What is covariant for the original is contravariant for the dual, and vice versa. If we are clear what are the vectors and what are the covectors, all the rest follows. Sławomir was making the point that there is one choice to make, but choosing a basis is not what it is. Charles Matthews (talk) 16:51, 15 February 2010 (UTC)
 * @Charles: "It's about which space you look on as the "original" vector space, and which the "dual". What is covariant for the original is contravariant for the dual, and vice versa." Yes, that's what I'm saying.
 * @Timothy: "But components are always w.r.t. some specific basis." The basis is not a part of the tensor, if that's what you're saying.  And to say "this tensor transforms ALL bases to this coordinate space (or what have you)" is either wrong or meaningless. A tensor can transform one coordinate space to another.  But start with a different coordinate space and you won't get the same result.  To get the same result from starting with a different space you need a different tensor.  A tensor is a transformation law, not the thing transformed from or to; it is entirely separate from that.  To define what is "co" and what is "contra" you always need to start with some space (hypothetical or what have you) and define it from there.  If, for example, you start with the dual space of that, as Charles noted, all the co- and contra- are going to be flipped.  So you see the co- and contra- are determined not by the tensor itself, but by its relationship to whatever space (or what have you) you choose as your origin. Kevin Baastalk 17:10, 15 February 2010 (UTC)


 * Sorry, not too much of that even makes sense. A tensor is a transformation law not the thing transformed from or to. No, that is not accurate. Charles Matthews (talk) 17:19, 15 February 2010 (UTC)
 * How about "Many physical quantities are naturally regarded not as vectors themselves, but as correspondences between one set of vectors and another. An example is the stress tensor, that takes one vector as input and produces another vector as output and so expresses a relationship between the input and output vectors."? Because that's what "A tensor is a transformation law not the thing transformed from or to." means.   It seems to me from this and the previous example that the problem is not so much with me "making sense", but of the "transformation law" between the writer and the reader, so to speak.  So please be kind to direct your comments on content and not editors.  Kevin Baastalk 17:48, 15 February 2010 (UTC)


 * A "transformation" is not a "transformation law". Certain tensors can be read as transformations, but only those with one covariant and one contravariant index.


 * Could you stick with how to improve the article? Charles Matthews (talk) 17:53, 15 February 2010 (UTC)


 * I'm using the word "transformation" in a more liberal sense, which I thought was clear from the context. I am talking about how the article can be improved, though admittedly we got a little side-tracked.  What I've been saying is that there are places in the article that seem to encourage a popular misconception about co-variant and contra-variant that is both an abuse of language and mathematically inconsistent.  And we could improve the article by making it clear in these spots that the co- and contra- are -- as me, you, Sławomir, and the article have stated in our own words -- only meaningful with respect to a given basis/vector space/'nother tensor/what-have-you. Kevin Baastalk 18:04, 15 February 2010 (UTC)

You know what: screw relative correctness. Reading through the articles I see that there are two different usages of co-/contra-variance, one of which refers to the convention of top/bottom indices (which corresponds with the side of derivatives, on bottom or top) with respect to the original vector space (which is presumably on bottom / "with respect to"). Provided this usage is always used correctly and its meaning is not confused with the other meaning, it introduces no contradiction. Just something that one should treat with a bit of delicacy and carefulness. Kevin Baastalk 19:03, 15 February 2010 (UTC)
 * I should concede that picking a side and sticking with it is as useful as saying "5/1" is a natural number and "1/5" is a fraction even though they are simply multiplicative inverses of each other. Likewise d?/dx and dx/d? - even though quantities here are now meaningless without the introduction of a basis (which is ultimately rather arbitrary) - certainly has a clear difference in meaning and use even though one is merely the "dual" of the other. Kevin Baastalk 19:24, 15 February 2010 (UTC)
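
The point both sides seem to agree on -- that "contravariant" describes how components behave relative to a basis change, leaving the vector itself invariant -- can be sketched numerically in plain Python (the helper names matvec and reconstruct are hypothetical, and the diagonal basis change is chosen so the inverse is easy to write by hand):

```python
# Under a change of basis e_i -> A e_i, the components of a vector
# transform by the inverse (contravariantly), so the vector itself --
# components contracted with basis vectors -- is unchanged.
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

basis = [[1.0, 0.0], [0.0, 1.0]]           # rows are basis vectors e_i
comps = [3.0, 4.0]                         # components v^i

A     = [[2.0, 0.0], [0.0, 0.5]]           # basis transformation
A_inv = [[0.5, 0.0], [0.0, 2.0]]

new_basis = [matvec(A, e) for e in basis]  # basis transforms by A
new_comps = matvec(A_inv, comps)           # components by A^-1: contravariant

def reconstruct(comps, basis):
    """Sum c^i e_i, giving the ambient components of the vector."""
    dim = len(basis[0])
    return [sum(c * e[k] for c, e in zip(comps, basis)) for k in range(dim)]

assert reconstruct(comps, basis) == reconstruct(new_comps, new_basis)
```

The components scale by the inverse of the factor applied to the basis vectors, which is the sense in which they are contravariant to the basis.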

General criticisms
The names of the first two sections don't really imply their content.

"Formulation" is a motley assemblage. There's one section on one specific kind of notation, but the article would do well to have a separate notation section that involves a few example tensors and how they would be described in each notation. The other section, on valence, would be better put to use as a starting point for a coherent explanation of tensors, which isn't really attempted past the article's history section. Its main text says too little and misleads, e.g. "...a tensor T can be formed from multiplying vectors together..." without mentioning the tensor or Kronecker product.

As for "modern mathematical usage," the modernness of the treatment doesn't contribute much in the way of determining the relevance, priority, or nature of the content. Something like "tensors as abstract objects," "tensors independent of a basis," or "component-free treatment" would serve better, with a one-line aside on how, in modern times, tensors are generally studied as abstract objects independent of a choice of tensor basis/coordinate system, for historical context. The main text of the section consists of many half-thoughts, e.g. the first two sentences: "While tensors can be represented by multi-dimensional arrays of components, tensors behave in specific ways under coordinate transformations. The abstract theory of tensors is a branch of linear algebra, now called multilinear algebra, in which the transformation property is built in by axiomatic definition." Of course they behave in specific ways; everything does. How is that contrasting with tensors being representable by multi-dimensional arrays of components? For that matter, a component is not defined explicitly in the article, merely implied by the term's use in describing the generalization from scalars and vectors. Obviously it's a summary, so the focus was probably on scraping off the surface of a few main ideas. However, most of the ideas don't have enough context to make any sense, and jumping from idea to idea without providing any way to use those statements as context for the mathematics makes the section schizophrenic, and more confusing than helpful.

Its subsection, "Compatibility," needs to describe what it's describing in the first sentence. It has a title for a reason, so there should be a way to quantify that reason to the reader, so they can know immediately what they're supposed to be finding out. Additionally, it helps editors who didn't write the section to know what the topic is and what the section aspires to cover. LokiClock (talk) 15:49, 3 February 2010 (UTC)


 * Feel free to edit the article. As we know, some compromises are required to give a treatment at all that is neutral with respect to different and valid perspectives. Some of these criticisms could be aimed at just about any encyclopedia article aiming to cover the ground: try searching eom.springer.de and I think you'll find relevant material spread over around ten articles. Charles Matthews (talk) 16:51, 3 February 2010 (UTC)


 * I'll try and do what I can, but much of that I can't fix, for general lack of expertise. For example, I myself do not understand what the compatibility section is about. And of course they apply to other encyclopedias' articles, but that doesn't mean they shouldn't be helped. Hence "criticism" vs. "complaint." LokiClock (talk) 07:24, 4 February 2010 (UTC)


 * Just a suggestion: work on a section at a time. Charles Matthews (talk) 08:43, 4 February 2010 (UTC)


 * Okay, I cleaned up "modern mathematical usage," and changed the title. It seems better now. I tried editing "Compatibility" a little to get rid of some vertical bloat, but I'm not really sure it's better off. I'm not sure how to restructure "Formulation," though. There should perhaps be a section specifically for component-based information, since there's one specifically for component-free information. LokiClock (talk) 08:47, 4 February 2010 (UTC)


 * Do not use symbols like $$\forall$$. It is a mark of extremely poor mathematical writing.  Sławomir Biały  (talk) 11:34, 4 February 2010 (UTC)
 * Take a look at Euclid's Elements, then tell me you don't like symbols... M00npirate (talk) 04:36, 23 February 2010 (UTC)
 * But Euclid didn't use any logical symbols at all: instead everything was stated in words. My point is, logical symbols like $$\forall$$ = "for all" that can just as easily be expressed in plain language should be avoided in mathematical writing.  Most professional mathematical writing adheres to this, although I'm sure you could find counterexamples if you tried hard enough.  Sławomir Biały  (talk) 11:37, 23 February 2010 (UTC)
 * I am with Biały on this, because creolizing the ideas can be halting to read, and only stating the notational form of the expression is useless to those who don't already know what it means. Instead of part English, part notational explanations, they should be separated and specifically stated to be rephrasings of each other. All it does is demonstrate inadequacy in both to need to mix the languages to make the point. LokiClock (talk) 21:31, 23 February 2010 (UTC)


 * Why so? I figured that if other logical symbols, like $$\in,$$ are being used, then quantifiers would be accepted. LokiClock (talk) 14:07, 4 February 2010 (UTC)
 * Ah, I see, you removed that as well. No problem with that. LokiClock (talk) 14:14, 4 February 2010 (UTC)

It should be obvious, now, that the article doesn't really discuss concrete tensors before jumping to the coordinate-free formulation. It is mentioned, sure, but mostly in reference to abstract formulation. People can't be expected to understand the abstract form without first understanding the generalization of the objects they're already familiar with, scalars and vectors, the latter of which they are also familiar with expressing as arrays. Additionally, as I noted in a quick scan of the recently archived threads, one problem people have is the underrepresentation of real tensors and operations on them - practical examples. So why not kill two birds with one stone? Pick an important, but approachable tensor, such as the Cauchy stress tensor, and use it as a model for the properties of tensors, where it is of value. Then, explaining how the same tensor could be described abstractly, I think people would comprehend very much more of what is being said and going on. LokiClock (talk) 10:33, 24 February 2010 (UTC)
 * One problem I see with the current article is that it does not make an attempt to explain what a tensor is before proceeding to properties like valency. A good idea would be to start the (main body of the) article with some definition and use some concrete examples to explain what this definition means. This approach has worked very well for the group (mathematics) article. I actually think that the Cauchy stress tensor is already somewhat obscure for most people. Good starting examples, besides the somewhat trivial scalars and vectors, would be a linear map as an example of a type (1,1) tensor or an inner product as an example of a type (0,2) tensor. These examples can then immediately be used to explain what is meant by the components of a tensor. A large portion of readers will already be familiar with the concept of representing a linear map by a matrix and how this depends on the choice of basis. TimothyRias (talk) 13:03, 24 February 2010 (UTC)
 * Do you think a Dyadic tensor section would be a start? ᛭ LokiClock (talk) 09:03, 4 March 2010 (UTC)

So what should the title of the component-based treatment section be? Should we change the name of the Abstract section again and have them be Classical and Algebraic formulation, as in Glossary of tensor theory? ᛭ LokiClock (talk) 08:57, 4 March 2010 (UTC)
 * I think it is a mistake to try to keep the two in separate sections. Discussing what concepts mean both in terms of components and coordinate-free notation is generally beneficial in getting the reader to understand what the concept means. I would certainly stay away from terms like classical or modern because these are not very accurate. Both notations are used and have been in use for a long time. TimothyRias (talk) 09:16, 4 March 2010 (UTC)
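
TimothyRias's suggested starting example above (a linear map as a type (1,1) tensor, whose matrix depends on the basis) can be sketched in plain Python; the helper names matmul and matvec and the particular matrices are illustrative assumptions, not taken from the article:

```python
# A linear map is a type (1,1) tensor: its matrix depends on the basis,
# transforming as M' = P^-1 M P, but the map itself does not.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

M     = [[2.0, 1.0], [0.0, 3.0]]      # matrix of the map in the old basis
P     = [[1.0, 1.0], [0.0, 1.0]]      # change-of-basis matrix (new in old)
P_inv = [[1.0, -1.0], [0.0, 1.0]]

M_new = matmul(P_inv, matmul(M, P))   # the (1,1)-tensor transformation law

v_old = [1.0, 1.0]                    # components of a vector, old basis
v_new = matvec(P_inv, v_old)          # same vector's components, new basis

# Applying the map in either basis describes the same output vector:
out_old = matvec(M, v_old)
out_new = matvec(M_new, v_new)
assert out_old == matvec(P, out_new)  # convert back: identical components
```

The matrix entries change under the transformation law, but the underlying map is unchanged: applying it in either basis and converting back gives identical components, which is the basis-dependence point the example is meant to teach.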

Automated archiving?
Anybody mind if I set up automated archiving for this talkpage? TimothyRias (talk) 15:43, 17 February 2010 (UTC)


 * I don't, so long as it's with a liberal archiving interval, such as a month. LokiClock (talk) 15:32, 18 February 2010 (UTC)

EquationRefs on certain sections?
Should EquationRefs be employed in sections such as Compatibility to improve readability? If numbered, the equations could be placed in blocks (i.e. mathematical paragraphs), and referred to in neighboring English text, allowing for lighter, more fluent explanation. ᛭ LokiClock (talk) 12:17, 6 March 2010 (UTC) This can already be applied to statements of the form: English explanation, newline, equivalent mathematical statement. For example: where the right-hand side is the Kronecker delta array. For every covector $$\alpha$$ in $$V^*$$ there exists a unique array of components $$\alpha_i$$
 * $$\alpha = \alpha_i\, \varepsilon^i.$$

I personally feel that this sentence pattern is poorly formed and confusing for those not acquainted with it, anyway, so I would happily do away with instances of them in any manner. Other statements could be redesigned to allow for this type of referencing, where it's seen that they could benefit from it. ᛭ LokiClock (talk) 12:23, 6 March 2010 (UTC)


 * This style of having a large block of numbered equations to accompany the English text does not seem to be used by any other articles on Wikipedia, nor indeed in most professional mathematical typesetting. So I am going to go with a "No" here.  Also, it is not clear to me how moving the equations further away from the text will improve readability.  Personally, I find that it greatly diminishes readability when one needs to continually refer back to numbered equations in a text, although sometimes numbered equations are necessary to avoid awkwardness or repetition.  Sławomir Biały  (talk) 14:21, 6 March 2010 (UTC)


 * Not large blocks, just a few equations that follow a direct logical progression, such as ones showing how one relation follows from established ones. Also, key equations if they're referred to multiple times, such as the frame transformation rule. Hold on, I'll try and make an example, assuming I've interpreted this bit correctly... ᛭ LokiClock (talk) 15:35, 6 March 2010 (UTC)

Okay, I hope I understood and described this correctly. Even if not, I hope you can see what I'm getting at:


 * Example before:

The transformation rule for covector components is covariant. What this means is the following: let $$\mathbf{\alpha}\in V^*$$ be a given covector, and let $$\alpha_i$$ and $$\hat{\alpha}_i$$ be the corresponding component arrays. Then
 * $$\hat{\alpha}_j = A^i {}_j \alpha_i.$$

The above relation is easily established because


 * $$\alpha_i = \mathbf{\alpha}(\mathbf{e}_i),$$

and


 * $$\hat{\alpha}_j = \mathbf{\alpha}(\hat{\mathbf{e}}_j);$$


 * Example after:

The transformation rule for covector components is covariant. What this means is that the components of a covector (1) in one frame of reference are transformed to those of the same covector in a different frame of reference (4) with the same operation that transforms the first frame of reference into the other (equation for frame transformation rule). This follows from (2, 3).


 * (1) Let $$\mathbf{\alpha}\in V^*$$ be a given covector, and let $$\alpha_i$$ and $$\hat{\alpha}_i$$ be the corresponding component arrays.
 * (2) $$\alpha_i = \mathbf{\alpha}(\mathbf{e}_i),$$
 * (3) $$\hat{\alpha}_j = \mathbf{\alpha}(\hat{\mathbf{e}}_j);$$
 * therefore
 * (4) $$\hat{\alpha}_j = A^i {}_j \alpha_i.$$

-- —Preceding unsigned comment added by LokiClock (talk • contribs)
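For readers who want to check the derivation above concretely, here is a minimal numerical sketch (the change-of-basis matrix A and the covector components are invented illustrative data, not taken from the article):

```python
import numpy as np

# Invented change-of-basis matrix: the new basis vectors are
# e_hat_j = A[i, j] e_i (sum over i).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Components of a covector alpha in the old frame: alpha_i = alpha(e_i).
alpha = np.array([5.0, -1.0])

# Covariant rule (4): alpha_hat_j = A[i, j] alpha_i -- the components
# transform with A itself, not with its inverse.
alpha_hat = A.T @ alpha

# Direct check from (2) and (3): alpha(e_hat_j) = alpha_i A[i, j].
direct = np.array([sum(alpha[i] * A[i, j] for i in range(2))
                   for j in range(2)])
assert np.allclose(alpha_hat, direct)
```

By contrast, vector components would transform with the inverse of A, which is the contravariant rule.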


 * I don't see how this can possibly be considered easier to follow than just having the equations as part of the main text. Sławomir Biały  (talk) 16:12, 6 March 2010 (UTC)
 * Well, it took me two hours to figure out exactly what was meant by the previous explanation and how to express it in plain English, so that should say something about its readability. ᛭ LokiClock (talk) 16:51, 6 March 2010 (UTC)
 * I agree with Sławomir: the way it's done now is far more natural and easier to follow, and is the same as used in other articles here and all books and papers I can recall. Numbers are for referring back to results in previous sections/on previous pages, and should be used sparingly otherwise the text is very difficult to read.-- JohnBlackburne wordsdeeds 17:06, 6 March 2010 (UTC)

Diagrammatic notation
Might we mention diagrammatic notations such as Roger Penrose's Diagrammatic Tensor Notation or Predrag Cvitanovic's Birdtracks? (which are both very similar) Many proofs can be verified simply by checking that lines go to the right place, which is a graphical representation of "juggling" indices. Diagrammatic notation is not very popular (due to its difficulty in printing), but it's quite clear, and it's an easier notation to read than abstract index notation. A quick mention to it and a link to Cvitanovic's webbook could do wonders to proliferate this notation. 128.2.17.196 (talk) 12:58, 9 March 2010 (UTC)


 * I could see a place for an article, say diagrammatic tensor notation, that mentions these. However, I don't think they have a place in the main tensor article, precisely because the notation is not very popular.  Also, we cannot link to birdtracks.eu, because the purpose of that link appears to be exclusively to market Cvitanovic's book.   Sławomir Biały  (talk) 14:38, 9 March 2010 (UTC)


 * The article Penrose graphical notation already exists. Charles Matthews (talk) 16:24, 9 March 2010 (UTC)
 * I am justly rebuked. :-)  Sławomir Biały  (talk) 18:17, 9 March 2010 (UTC)

Suggestion for structure of a definition section.
IMO the article should start out with a title like "Definition and Examples", the basic structure could be something like this: (definitions based on John Lee's "Riemannian Manifolds")

 * 1) Define a (covariant) tensor of rank n on a vector space V (over R) as a multilinear map:
 * $$T: \underbrace{V \times\dots\times V}_{n\text{ copies}} \rightarrow \mathbf{R}.$$
 * By (at first) restricting to covariant tensors we delay having to talk about dual vector spaces. Components as "row vectors".
 * 2) Discuss components of covariant tensors by introducing a basis $$\{e_1,\ldots,e_k \}$$
 * $$T_{i_1\dots i_n} \equiv T(e_{i_1},\ldots,e_{i_n})$$:
 * 3) Discuss first example: inner product on a (real) vector space. Discuss that this is represented in components by a rectangular array (aka matrix).
 * 4) Discuss second example: covariant tensors of rank 1 and the relation to/definition of the dual vector space V*.
 * 5) Introduce contravariant and mixed type tensors.
 * 6) Discuss third example: tensors of type (1,1) and their relation to linear maps and matrices.
 * 7) Discuss fourth example: vectors in V as rank 1 contravariant tensors. Components as "column vectors".
 * 8) Discuss tensors whose components depend on space and time, a.k.a. tensor fields.

Every point in this list would be about a paragraph long. Maybe some subsections are required if they become any longer. Does this sound like a reasonable approach to everyone? If so I'd be happy to flesh this out so it can be included in the article. TimothyRias (talk) 09:51, 4 March 2010 (UTC)
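As a concrete companion to the outline's first point, here is a small sketch (Python, with an invented symmetric array G standing in for an inner product) of a rank-2 covariant tensor on R² and the recovery of its components T_ij = T(e_i, e_j):

```python
import numpy as np

# A rank-2 covariant tensor on V = R^2, per the definition above:
# a bilinear map T(v, w) -> R.  G is an invented symmetric array.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def T(v, w):
    return v @ G @ w

# Standard basis of V.
e = np.eye(2)

# Components T_ij = T(e_i, e_j) recover exactly the array G.
components = np.array([[T(e[i], e[j]) for j in range(2)]
                       for i in range(2)])
assert np.allclose(components, G)
```

This is the sense in which "components as a rectangular array (aka matrix)" describes the tensor once a basis is fixed.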


 * Well, sounds like an entirely different article to me. The one we have is based on trying to get a geometric meaning for tensors as the thread going right through. There is a graphic in the lead, for that reason. Simply starting in with the definition of a tensor on a vector space is one possible approach, but the emphasis is necessarily then on the algebraic structure. Charles Matthews (talk) 16:09, 4 March 2010 (UTC)
 * I would think that expressing tensors as (multi)linear maps makes the statement "a tensor is a correspondence between a set of vectors" much more explicit. I'm sort of curious how else you would explain what, say, a rank 4 covariant tensor is geometrically. The inner product is probably the most geometrical example of a tensor after a simple vector. TimothyRias (talk) 16:39, 4 March 2010 (UTC)
 * Using enough duals any tensor "is" a linear map from some space to another. But I think the use of "geometry" here is pretty much ambiguous. There is geometric in the sense of "intrinsic", which is the sense I was using before: if you start with basis-free constructions that is guaranteed. But the article cannot be written from the basis-free point of view and at the same time respect the standard NPOV criterion of including all significant points of view. NPOV is not negotiable here, which is why we are still discussing this matter years on. The article Tensor (intrinsic definition) is relatively easy to write, from Bourbaki's point of view, and there is nothing much to discuss. But as we know Bourbaki doesn't care about applications. What you propose seems too much like cloning that article, to me. Charles Matthews (talk) 15:39, 5 March 2010 (UTC)
 * Well, I see no reason for Tensor (intrinsic definition) to exist as a separate article at all. Note that I was not suggesting to only include the basis-free approach, but actually that we discuss both and how they are related. That is the point of NPOV. TimothyRias (talk) 16:24, 5 March 2010 (UTC)
 * Indeed; in fact, intrinsic definition could be a starting point for an integrated article, by taking its contents and explaining it in these various ways. ᛭ LokiClock (talk) 18:00, 5 March 2010 (UTC) That is, continuing after the above framework. ᛭ LokiClock (talk) 18:01, 5 March 2010 (UTC)
 * Well, I disagree. I carried out a partial merge of that article into this one, and stopped there for good reasons, I believed. And I think you should address the NPOV issue as fundamental, rather than just saying that an article could be written starting from the POV of modern multilinear algebra. You need to be convincing on the side of saying you have a way to write a neutral article, since the outline given shows no sign of neutrality. It is not enough to say "... and then we fix up the neutrality by adding something here". This is the crux of the whole discussion. Charles Matthews (talk) 22:29, 5 March 2010 (UTC)
 * If there are multiple points of view you have to start with one POV that is inevitable. You just need to be clear that you are doing so. For example, by say "One way of defining a tensor is...". What definitely should not happen is that there are POV fork articles. TimothyRias (talk) 23:16, 5 March 2010 (UTC)
 * The outline doesn't have a firm plan of action for neutral explanation and POV forks are not the solution. I don't think you have to start with one point of view and then define the others, though. I imagine something more like interleaved inverted pyramids that discuss one abstract property of tensors before interpreting those properties in geometric, component, and component-free contexts. ᛭ LokiClock (talk) 07:50, 6 March 2010 (UTC)
 * I really think we are at an impasse here, which I don't find surprising, given that major edits are usually not done on the assumption of a "green-field site", and based on a single source. To be constructive, I think there might be greater convergence if the tensor (intrinsic definition) article were developed to include both more on terminology, and more about the history, so that its structure could be seen as parallel to this article. Then, I think, it might be clearer how to get a good merge. I don't rule out some progress in that direction, but I think that has to be based on a clearer view of the scope of the article to be written. Charles Matthews (talk) 10:12, 7 March 2010 (UTC)
 * Charles, I think I don't quite understand your reasoning for having two separate articles. Perhaps you can try to explain this? Also, currently this article has no real structure that I can discern, so I'm sort of at a loss as to what you mean by bringing the structure of the other article in line with this one. TimothyRias (talk) 14:59, 7 March 2010 (UTC)
 * The structure of this article? Lead, get a big distinction on terminology defined very early on, history for context - that's the start. The structure is therefore more inverted pyramid than Bourbakiste mathematical text, certainly. But it gets directly to the questions (what is a tensor and why do tensors matter?) in the terms of a general reader, by saying "a tensor is a kind of geometric quantity", "tensors turn up in an algebraic setting and in the theory of physical fields", and "historically tensors arose out of 19th century geometry, and in the 20th century broke into mathematical physics with general relativity, after which there were numerous other applications". That is before we get to the detail in the rest of the article. Whether or not this is a familiar idea for structuring articles to you, it is mainstream for Wikipedia. As for the two articles, there were four, and I merged two into this one. That process has brought on this discussion, where we have two editors still arguing for throwing away this article and starting with a blank sheet of paper. Obviously we have to go through the debate to clarify what the article is doing. I didn't want to merge the info on low rank tensors into this one, for reasons I think are obvious (wouldn't fit well). It might be simplest to place those in another article and get rid of the "intrinsic" article totally. But I note that the movement to rewrite this article completely started May 2009, more than eight months ago, and so far really has got nowhere much. As I have said, usually major edits are not done that way, but more resourcefully by looking at existing material, considering issues of logical flow, patching up weaknesses, adding better references and clarifying general points of view, and only then getting into detailed wording issues (which can usually be discussed piecemeal).
Tensor (intrinsic definition) has the obvious weakness of assuming in the first sentence that "tensor" and the properties of tensors are known, the issue being mainly of how to describe them; and of existing in a historical vacuum. I.e. it is not for the general reader, but the expert. I was suggesting adding in general material as a way of clarifying what that article is doing. The approach through multilinear algebra has history and motivations, which could be given. Charles Matthews (talk) 16:43, 9 March 2010 (UTC)
 * I leave you to it then, as you apparently know much more about writing Wikipedia articles than I do. In the meantime this article is a mess, with unexplained jargon floating around everywhere, jumping into mostly arcane distinctions from the beginning and no attempt to establish terminology. TimothyRias (talk) 21:19, 9 March 2010 (UTC)
 * The rewrite attempt starting in May 2009 was diffused. Most of the problems with the article that people had complained about before, which prompted the rewrite attempt, are still relevant. The lead is not, I think, the top priority; that is the nonexistent bridge to tensors as abstract objects. If, however, having a coherent middle requires significant modifications, or an overhaul, of the lead, then there should be no hesitation in that regard. ᛭ LokiClock (talk) 00:38, 10 March 2010 (UTC)
 * The main body should be independent of the lead anyway, per WP:LEAD. Once the main body text is finished the lead can be rewritten as to reflect the content of the article as is required. TimothyRias (talk) 09:22, 10 March 2010 (UTC)
 * I don't see why carrying out examples as above precludes a geometric explanation. Such graphics and geometric explanations would go well, I think, with the component- and index-based examples by keeping them in a real setting, and showing how to realize the geometric transformations numerically. A bit harder to pull off, doing all three at once, but with a big payoff, methinks. ᛭ LokiClock (talk) 18:40, 4 March 2010 (UTC)


 * By 1.2: restricting to covariant tensors, I hope you mean restricting the explanation, not the definition. ᛭ LokiClock (talk) 18:05, 5 March 2010 (UTC)
 * I mean waiting to define contravariant and mixed tensors till point five. TimothyRias (talk) 23:17, 5 March 2010 (UTC)
 * Right. Just make sure something is said to the effect that there are three valencies, contra., co., and mixed, and that the others can be understood in terms of a co. tensor before going into the explanation of a covariant tensor. ᛭ LokiClock (talk) 07:50, 6 March 2010 (UTC)

I basically agree with Charles that the article should not systematically emphasize the tensor-on-a-vector-space approach from the outset. On the other hand, while I don't agree with the proposal to add a definition section, I do think that the article is missing something very early on. Perhaps an Introduction section that gives an example and briefly discusses the covariant transformation law might satisfy everyone concerned. This should emphasize the "geometric" side of things rather than the algebraic, and also say what a tensor is qua array of components together with a transformation law under coordinate changes. Sławomir Biały (talk) 13:55, 10 March 2010 (UTC)


 * For an example, we could perhaps take the inertia tensor. There are some reasonable intuitions about spinning an object about an axis. In this example you would want to change coordinate system, to the principal axes. So perhaps it would seem less abstract, though of course it is special rather than representative. Charles Matthews (talk) 19:01, 10 March 2010 (UTC)


 * I think Marion and Thornton's Classical Dynamics uses exactly that example to introduce their definition of a tensor as a collection of components satisfying certain transformation rules under coordinate transformations. The problem with trying to say what a tensor is by giving examples is that it doesn't naturally answer the question "what is not a tensor?". Of course, there are some obvious examples such as Christoffel symbols or the Levi-Civita symbol, but both are a bit technical. TimothyRias (talk) 09:44, 11 March 2010 (UTC)


 * I suggest that the parallel information be added without regard to the present article format (e.g., add matrix representations of the "abstract" section's content), and then rearrange the content into whatever structure arises. ᛭ LokiClock (talk) 07:58, 18 March 2010 (UTC)

On a tangent: Tensors are generalizations both of vectors and covectors, right? The term is used, but not defined, and disparate terms for them are used in the article without stating that they are synonymous. They are dealt with in formulae without any foremention, which reads as disenfranchizing jargon. ᛭ LokiClock (talk) 07:58, 18 March 2010 (UTC)

Perhaps a good early section, actually, would discuss vectors and covectors in the terms necessary to understand what it is for a tensor to be a generalization of them, and for a mixed tensor to contain aspects of both; as well as, of course, to describe them in the notation used in the article, so that the common elements in more complex tensors can be readily seen. Abstract index notation is specifically adapted to multilinear contexts, so how exactly is someone first hearing about a tensor going to be familiar with the representation of their foundational knowledge in said notation? ᛭ LokiClock (talk) 08:25, 18 March 2010 (UTC)
 * That might work. Although I have trouble picturing exactly what would be covered in such a section. Could you perhaps draft a short outline? Also I don't quite understand what you mean with your last remark. TimothyRias (talk) 10:48, 18 March 2010 (UTC)


 * Sorry, I know this is rough, but I'm not an expert exactly. First, showing a row matrix, a column matrix, saying that these are $$A_i$$ and $$A^i$$ in abstract index notation (if I even got that right!). Then simple examples of contravariant and covariant 2nd order tensors and how they can generalize vectors and covectors. Then either a mixed tensor example would be given and its geometric meaning explained, or the third possibility of such is mentioned. Those brief statements should be enough to qualify the proceeding discussion of the inertia tensor, or whatever example is used. What I mean by that statement is that someone with a multilinear algebra background will already know what a tensor is, and someone who doesn't will not be familiar with the expression of a vector and a one-form (however familiar they can be expected to be with those) in abstract index notation. So before using it, those two should be referred to alongside their various notational equivalents before evolutions of them are used. ᛭ LokiClock (talk) 19:35, 18 March 2010 (UTC)
 * Oh, and a table of examples, in the see also if you like, of covariant, contravariant, and mixed tensors of consecutive orders. The article needs some extrinsic context, and maybe something significant will be understood in common amongst them by being able to examine such examples in depth. ᛭ LokiClock (talk) 21:14, 21 March 2010 (UTC)

Here's another outline for explaining contravariance and covariance through mixed extensive and intensive definition: Start with a scalar. Explain what happens to it when you add significant covariant indices. Finish up by explaining that these are called covariant indices. Define a contravariant index. Then go back to the scalar and show what happens to it as you add significant contravariant indices. 3 indices should be enough to convey a sense of what changes throughout the progression. At each step explain what the new tensor is; what you might do or signify with such an object, its geometric properties, what it looks like in notation. ᛭ LokiClock (talk) 19:53, 7 May 2010 (UTC)
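The progression sketched above can be summarized in one hedged formula (using the notation already in this discussion, with B denoting the inverse of the frame-transformation matrix A): each covariant index added contributes one factor of A, and each contravariant index one factor of B, so a tensor with m contravariant and n covariant indices transforms as

 * $$\hat{T}^{k_1\dots k_m}{}_{j_1\dots j_n} = B^{k_1}{}_{l_1}\cdots B^{k_m}{}_{l_m}\; A^{i_1}{}_{j_1}\cdots A^{i_n}{}_{j_n}\; T^{l_1\dots l_m}{}_{i_1\dots i_n}.$$

A scalar (m = n = 0) picks up no factors at all, which is the natural starting point of the progression.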


 * Not sure what you mean by "adding an index to a scalar". TimothyRias (talk) 20:25, 7 May 2010 (UTC)


 * For the contravariant progression(?), scalar -> 2D vector -> 3D vector. ᛭ LokiClock (talk) 23:49, 7 May 2010 (UTC)

Clarification of intro please
Could someone please take a look at the intro and first figure, and clarify the following points?

1. Intro says "the stress tensor..takes one vector as input and produces another as output". OK, so in the figure, what is the input vector, and what is the output vector? Are there actually three input vectors (e1, e2, e3), and if so are there three tensors (the three Ts) or is there just one tensor sigma as claimed by the last sentence: "row and col vectors that make up THE tensor..." Whichever is the actual tensor, is that the actual output spoken of in the first sentence, or does one have to apply it to produce something else, which is the output, perhaps not shown here?

2. "acting on the X, Y and Z faces" No X, Y or Z faces or axes are shown. Does it really mean the e1, e2 and e3 faces? (I'm assuming that e1-3 are normals representing the faces?)

3. "Those forces are represented by column vectors". OK, but in that case how come the three components of, say, T(e1), sigma11, sigma12, sigma13, appear as a ROW in the sig matrix? Should this be transposed?

Thanks Gwideman (talk) 09:28, 8 April 2010 (UTC)


 * Good questions. I rephrased to clarify 1 and 2. As for 3, it involves more explanation. You can think of σ as a row vector of column vectors of forces per area, or as a column vector of row vectors of forces per area. The fact that these produce the same result (that the stress tensor is symmetric) isn't necessarily obvious and actually isn't true in some weird cases. In introductory solid mechanics, that is glossed over and we assume the tensor is symmetric. See proof and details here. —Ben FrantzDale (talk) 13:03, 8 April 2010 (UTC)
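A short numerical sketch of the row-versus-column point (the σ entries here are invented for illustration): for a symmetric stress tensor, the traction on the e1 face can be read off either as the first column or as the first row, which is why both conventions appear.

```python
import numpy as np

# An invented symmetric Cauchy stress tensor (entries in units of pressure).
sigma = np.array([[10.0,  2.0,  0.0],
                  [ 2.0,  5.0, -1.0],
                  [ 0.0, -1.0,  7.0]])

# Traction (force per area) on the face with unit normal e1.
n = np.array([1.0, 0.0, 0.0])
t = sigma @ n

# First column of sigma always; equal to the first row only
# because sigma is symmetric.
assert np.allclose(t, sigma[:, 0])
assert np.allclose(t, sigma[0, :])
```

With an asymmetric array the two readings would differ, which is exactly the distinction glossed over in introductory treatments.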


 * The lead paragraph is obviously intended to be a bit of a gloss already. Some of the added details were extraneous, and should be presented in a separate section, if at all.  I am beginning now to wonder if there is any value in the picture of the stress tensor, since apart from being a pretty picture, it does not really communicate anything relevant to the lead.   Sławomir Biały  (talk) 13:08, 8 April 2010 (UTC)


 * I sort of agree. It took me a couple of minutes to exactly figure out what it was trying to represent. That suggests that a "lay person" is just going to be confused. Having some picture there would be nice though. TimothyRias (talk) 15:33, 8 April 2010 (UTC)


 * I think I put that image there. I want a picture in the intro because a tensor is a fundamentally geometric thing, and so I think it is more direct to start with pictures than to start with matrices (which, when I learned this stuff, led me down a confusing, fruitless path trying to understand when a matrix was or wasn't a tensor). I agree, though, that the particular picture obfuscates. Perhaps a visualization of a 2D covariance tensor would be more appropriate? —Ben FrantzDale (talk) 15:54, 8 April 2010 (UTC)

Reorder proposal: Order and terminology
The lede's paragraph on tensor order could be moved to a new section under formulation. Terminology could be moved into the original section, as it concerns the varying usage of the term exclusively and is more a disambiguation notice than anything else. ᛭ LokiClock (talk) 23:45, 8 May 2010 (UTC)


 * I strongly disagree. The fact that scalars, vectors, and matrices are examples of tensors should certainly be emphasized as early as possible in the article rather than being pushed down into a deep section.  It seems most natural to organize this discussion around the notion of tensor order.   Sławomir Biały  (talk) 01:18, 9 May 2010 (UTC)


 * I concur. The concept of "extrapolation" from order 0 and 1 to larger orders is one of the few handles for someone arriving new to the topic. Charles Matthews (talk) 10:33, 9 May 2010 (UTC)


 * That makes sense. But as for terminology? It pushes the history section and everything else out of the pole position. Even if kept, it should probably be further down the article, with a disambiguation sentence in the lede linking to that section. ᛭ LokiClock (talk) 23:08, 10 May 2010 (UTC)


 * I concur that having the terminology section as it is first will serve more to confuse than to enlighten. I'm not even sure that it should be a separate section at all. It should be enough to note, when tensor fields are introduced, that the distinction between tensors and tensor fields is not always made clearly, especially in engineering and physics. (However, unlike what is currently suggested, it is also common in differential geometry to refer to the Riemann curvature tensor as a tensor rather than as a tensor field.) TimothyRias (talk) 07:50, 11 May 2010 (UTC)


 * I would rather the distinction be explicit. Keeping the section may or may not be necessary for that, but if we agree that the tensor field-tensor distinction is chief here, that is easy to state with brevity. ᛭ LokiClock (talk) 08:22, 11 May 2010 (UTC)


 * Any other opinions, or should I go ahead and move terminology? ᛭ LokiClock (talk)


 * Ultimately, I'd like to consider killing the section altogether. There seems to be some support for this idea.   Sławomir Biały  (talk) 13:13, 16 May 2010 (UTC)

OK, I've gone ahead and been bold and just removed the section. I'm pasting the contents here for reference in discussion. If we decide to add something like it later, let's make sure it is actually true. TimothyRias (talk) 08:49, 17 May 2010 (UTC)

Terminology
The term tensor is slightly ambiguous, which often leads to misunderstandings; roughly speaking, there are different default meanings in mathematics and physics. According to the Encyclopedia of Mathematics, tensor calculus is the umbrella term for two meanings, and

is the traditional name of the part of mathematics studying tensors and tensor fields [...] Tensor calculus is divided into tensor algebra (entering as an essential part in multilinear algebra) and tensor analysis, studying differential operators on the algebra of tensor fields.

In the mathematical fields of multilinear algebra and differential geometry, a tensor is first an element of a tensor product of vector spaces. In physics, the same term often means what a mathematician would call a tensor field: an association of a different mathematical tensor with each point of a geometric space, varying continuously with position. This difference of emphasis (on the two parts of "tensor calculus") conceals the agreement on the geometric nature of tensors. In applications of tensors, different types of notation may be used to represent the same underlying calculations or structure.

Comment
For nearly seven years, the need for the article to recognise explicitly that tensor sometimes means "tensor field" has been on the agenda. I don't see that that has changed. It is mandated by NPOV, amongst other things. Charles Matthews (talk) 10:35, 17 May 2010 (UTC)


 * Added hatnote, as initially intended. ᛭ LokiClock (talk) 12:12, 17 May 2010 (UTC)


 * This is not a matter of POV. In physics, people often just don't care about the distinction between a tensor and a tensor field. From their perspective these things are the same. A tensor "simply is" a multidimensional array of numbers which may or may not depend on your position in space. This is very similar to the habit of some physicists of not making the distinction between a Lie group and its Lie algebra.
 * To cover this point adequately it is enough to have a note at the point where this article first discusses tensor fields, that in many applications these are simply also referred to as tensors. TimothyRias (talk) 13:04, 17 May 2010 (UTC)


 * To quote WP:NPOV, the article is required to represent all significant views fairly, proportionately, and without bias. This has been my concern all along. It is not like the Lie group/Lie algebra distinction. Whether a tensor is an object of multilinear algebra or of differential geometry makes the difference of whether differentiating it is meaningless or meaningful, for example. I would feel much safer with a quoted definition of "tensor" from a physics text. There are clearly mathematical physics texts that take differentiation as a fundamental operation on "tensors", and the authors might take more kindly to the comment that the result might be zero, than that they have a sloppy idea of what they are talking about. Charles Matthews (talk) 20:10, 17 May 2010 (UTC)


 * I expect a significant number of people to look for the Tensor article with tensor fields in mind. Even if accounting for that doesn't qualify as an NPOV concern, it certainly qualifies as a primary ambiguity concern. Someone shouldn't have to already know the other terms by name to be disambiguated. ᛭ LokiClock (talk) 21:08, 17 May 2010 (UTC)
 * Note that diverting those people to tensor field is unhelpful since that article is pretty much inaccessible if you do not know what a tensor is. It starts, "a tensor field assigns a tensor to each point in space", sending those people right back here. This is unlikely to change. IMO the tensor field page should be viewed as a daughter of this page, with a summary style paragraph on this page linking to that article. That paragraph would be the place (IMHO) to put a remark about tensor fields sometimes being referred to as tensors. TimothyRias (talk) 06:40, 18 May 2010 (UTC)


 * Charles, for your comfort some textbook definitions of a tensor from typical physics textbooks:


 * In the context of the application, the first is actually used as a tensor (the inertia tensor of an object), while the second is used to describe things that are properly tensor fields. This is not made explicit; it is just taken as natural that once you have an object that transforms in a particular way, its components can also vary with space. TimothyRias (talk) 07:05, 18 May 2010 (UTC)

Let me try and clarify. Firstly, I think setting up a "content fork" so that this article is basically about tensors as such, not tensor fields, is a promising way forward, and much better as an idea than past divisions of the material by "exposition". But secondly, there must then be compliance with Content forking. That's a content guideline, and it makes a good point here. Namely, it is permissible to fork under Summary style, but you must do one thing. Under the "Article spinouts" subsection see "the moved material must be replaced with an NPOV summary of that material ...". I see this as applicable here, certainly. The terminology must be explained, with due weight given to the views of those (many professional users of tensors) who would not recognise the Bourbaki-type approach based on tensor products, and who would recognise the tensor calculus approach as given by many traditional mathematical physics texts. I am not going to insist that the explanation be precisely the one that stood before in the article, though that was reliably referenced to a respectable online source. There will be other ways to do it. I am going to say that the guideline applies to this issue, and should be followed in developing the article. Charles Matthews (talk) 11:07, 18 May 2010 (UTC)

History
I think a good way to move this article forward is to first develop the history section. A well-developed, well-written history section could do a lot to help set the context for readers, by providing them info on why tensors were introduced in the first place. It also helps lessen some of the confusion of terminology, by describing the development of concepts.

That being said, this might be easier said than done. The history of tensors is somewhat convoluted and I'm having trouble finding good sources. Most sources I've found are texts primarily trying to explain what tensors are, with all sorts of anachronisms slipping in when discussing their history.

Part of the problem is that the concept of tensor did not quite arise in one place, but rather as the result of several different parallel developments in the 19th century. What is completely unclear to me is how these different strands interacted historically, and at what point the concepts (and terminology) were unified. I've yet to find a source that discusses that directly. TimothyRias (talk) 09:33, 20 May 2010 (UTC)
 * 1) The best documented of these strands seems to be the development of so-called tensor analysis in the context of differential geometry, as developed by Riemann, Ricci, etc. There seem to be a good number of sources that actually describe this development. For example, the source currently quoted in the first paragraph of the lead.
 * 2) Almost simultaneously, there is the development of vector analysis, driven by the needs of developments in various fields of physics, such as electromagnetism and fluid dynamics. Here the need for "higher order" objects also became apparent, but the exact development of the ideas is not that clear to me; different sources say slightly different things. This seems to have culminated in the introduction of polyadics by Gibbs at the end of the 19th century, which basically define tensors in the "tensor product way".

Definition
This article perpetuates the infuriating Wikipedia tendency NOT to define terms rigorously in the first expository section of a mathematics article. Note the first sentence of the article. It DOESN'T define Tensor. It doesn't say what the hell the author is talking about. It just starts right in saying that tensors are "used to extend the notion of scalars, geometric vectors, and matrices." Then it follows with information about who "first conceived" the concept, blah, blah. But: "What the hell is a tensor?!" Geez, it's the first rule of definitions. PLEASE define the term. Explain what it is, then go on to explain what they are used for, and add any historically interesting information, etc. Most of the Wikipedia articles on mathematical terms and concepts suffer greatly from this fault. You need to DEFINE the term before you start talking about its significance. And, no, I can't edit the article. I'm not a mathematician. That's why I looked up this page. DUH! —Preceding unsigned comment added by 74.239.2.104 (talk) 16:17, 21 May 2010 (UTC)


 * Would it be correct to say "[...] to extend the notion of scalars, geometric vectors, and matrices to higher orders?" ᛭ LokiClock (talk) 14:33, 28 May 2010 (UTC)
 * Note that that statement is meaningful before the formal definition of tensor order. The question is whether that is an accurate description of the intent and nature of the extension. ᛭ LokiClock (talk) 10:23, 29 May 2010 (UTC)


 * I think the above complaint is that the article doesn't start with a formal definition. Well, we simply don't begin mathematics articles that are intended for a wide audience with formal definitions.  I don't think that adding a few words to the lead is going to address the concern.  That said, the proposed edit seems reasonable to me, if you think it clarifies things.   Sławomir Biały  (talk) 11:54, 29 May 2010 (UTC)


 * The merit of the comment is that the section giving a formal definition should be clear in the ToC. Charles Matthews (talk) 13:05, 29 May 2010 (UTC)


 * I do think it clarifies things a little. You can make the leap of logic easily enough from spotting the progression of tensor order, but it doesn't outright say that that is the key generalization being made about scalars, vectors, etc.. ᛭ LokiClock (talk) 21:57, 31 May 2010 (UTC)
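As a concrete illustration of the "progression of tensor order" mentioned above, here is a hedged sketch (purely illustrative; the helper `order` is a hypothetical name, not from the article) showing that the order being generalized is the number of indices needed to address one component:

```python
# Illustrative sketch only: "order" = number of indices needed to pick out
# one component, i.e. the nesting depth of the array.
scalar = 3.0                    # order 0: no indices needed
vector = [1.0, 2.0, 3.0]        # order 1: one index, vector[i]
matrix = [[1.0, 0.0],           # order 2: two indices, matrix[i][j]
          [0.0, 1.0]]
cube = [[[0.0, 0.0],            # order 3: three indices, cube[i][j][k]
         [0.0, 0.0]],
        [[0.0, 0.0],
         [0.0, 0.0]]]

def order(t):
    """Count nesting depth (hypothetical helper, for illustration only)."""
    n = 0
    while isinstance(t, list):
        n += 1
        t = t[0]
    return n

print([order(t) for t in (vector, matrix, cube)])  # [1, 2, 3]
```

This only captures the array side of the picture; the transformation behaviour under change of basis, which the definitions in the article add, is what distinguishes a tensor from a bare array.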

What is a Tensor?
I'm going to give getting a constructive discussion for improving this article started one more try.

What is a Tensor? This is probably the most important question that this article needs to answer. Many different people will give you different answers based on their background. These include (with varying degrees of generality):


 * 1) A tensor is a multidimensional array of numbers that respond in a certain way to coordinate transformations. (Usually given by engineers and physicists, e.g. Marion & Thornton in Classical Dynamics.)
 * 2) Or, as an alternative: a tensor is an object with indices.
 * 3) A tensor is a multilinear map on a vector space (and its dual). (Common in differential geometry and physics (GR) texts. E.g. John Lee in Riemannian Manifolds and Sean Carroll in Spacetime and Geometry.)
 * 4) A tensor is an element of a tensor algebra (Common in algebra)
 * 5) (Please add more if you can)

These answers can seem very different at first sight, especially to people familiar with one but not the others. But fundamentally they are talking about the same objects (although at different levels of generality). A main task for this article (as I see it) is to clearly present these different descriptions and, most importantly, how they are related to each other.
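A small numerical sketch of the claim that these descriptions agree (hedged: the matrices and the bilinear form here are made up for illustration, assuming NumPy). Definition 1's transformation rule for the components of a (0,2)-tensor, and definition 3's basis-independent multilinear map, give the same scalar:

```python
import numpy as np

# Hedged, made-up example: a (0,2)-tensor (bilinear form) b on R^3.
# Definition 1 view: its components B transform by B' = A^T B A under a
# change of basis A.  Definition 3 view: b is a multilinear map whose
# value b(v, w) does not depend on the basis at all.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))    # components of b in the old basis
A = rng.standard_normal((3, 3))    # columns = new basis vectors in old coordinates

B_new = A.T @ B @ A                # covariant transformation rule

v_old = rng.standard_normal(3)     # vector components in the old basis
w_old = rng.standard_normal(3)
v_new = np.linalg.solve(A, v_old)  # contravariant rule: v' = A^{-1} v
w_new = np.linalg.solve(A, w_old)

# Same scalar either way: the multilinear map's value is basis-independent.
print(np.allclose(v_old @ B @ w_old, v_new @ B_new @ w_new))  # True
```

The components change, but the underlying object does not; that is the sense in which the array view and the multilinear-map view describe the same thing.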

Currently the lead gives a rather vague answer to the question 'What is a tensor?':
 * A tensor is a correspondence between one set of vectors and another

This vagueness is OK for the lead, especially since it is followed by an example. But it should be clarified later in the article. (I am also still a bit confused about what this is supposed to mean. Could somebody perhaps explain to me how this statement applies to a (2,2)-tensor, for example?) Any more specific answer given later in the article should however make a connection with the vague answer given earlier.

My intent is to get an overview of the viewpoints on this subject, so that we can work on representing them all fairly. TimothyRias (talk) 08:56, 7 May 2010 (UTC)


 * I did some digging on the origin of the different definitions, with some remarkable (to me) conclusions.
 * Definition 1 seems to be the one originally used by Ricci. (With the addition that the components were assumed to be functions, i.e. he was discussing tensor fields.)
 * The concept of a polyadic as introduced by Gibbs, actually describes a tensor in the sense of definition 3. (If you follow a somewhat down to earth approach to defining a tensor product, rather than the abstract universal property route.)
 * To me this was surprising, in the sense that the most down-to-earth, hands-on approach to defining a tensor, which nowadays is most common among engineers and physicists, originated with mathematicians, while the more abstract "tensor product" approach originated with a physicist. TimothyRias (talk) 09:00, 20 May 2010 (UTC)

I've drafted a section that covers these definitions and their interrelations. I think it would be a good idea to include this after the current history section. It covers the various ways of defining tensors and introduces the necessary terminology to discuss other properties (like type and components). It is currently located in my sandbox. It is still far from perfect: it needs quite some work on sourcing, clarity of exposition, and copyediting. But I believe it is a good start towards providing NPOV coverage of the different views on tensors. It also provides good entry points for placing "main article" links to more technical and detailed articles describing a specific definition of tensor.

The drafted section absorbs most of the content of the current "Formulation" section and the "Tensor fields" subsection. TimothyRias (talk) 12:51, 25 May 2010 (UTC)


 * Looks promising.  Sławomir Biały  (talk) 02:17, 26 May 2010 (UTC)

This is a note to give props to the authors. I am not a math expert. I am just trying to get an idea of what a tensor is in relation to what I know about linear algebra / multivariable calculus. Because I found this article remarkably helpful for that purpose, I am writing this note to say thanks. Having consulted various other sources, I find that an initial intuitive explanation is often omitted (e.g., sources begin by defining a tensor as something that obeys the transformation law). For someone self-studying, the intuitive first idea is really important, more important than formalizing right away. You captured this idea in a nice way by building out of linear algebra and then into tensor fields. So basically what I am trying to say is that this article was really helpful in clarifying terminology etc. (e.g., tensors and tensor fields) in a straightforward way. THANKS! 145.18.152.249 (talk) 12:03, 21 June 2010 (UTC)


 * That's good to know. Especially as there has been plenty of debate around exactly this issue of accessibility and approach. Charles Matthews (talk) 12:43, 21 June 2010 (UTC)

Formal definition section
There is quite some overlap between the current "Formal definition ..." section and the "... as Tensor products" subsection of the description section. (The latter is basically a brief summary of the former.) I think that the "Formal definition ..." section is currently too mathematically detailed. I think the best thing to do is to integrate the most essential points of that section into the last subsection of "Description" (and that whole section might be better named "Definition(s)"). That subsection can be given a link to the tensor (intrinsic definition) article. Any details not integrated into the "... as Tensor products" subsection (which might need to be renamed) could be relegated to the tensor (intrinsic definition) article (insofar as they are not already mentioned there).

(Note that there is somewhat of a WP:NPOV issue with calling the section "formal definition", since defining a tensor as a multilinear map is just as formal.) TimothyRias (talk) 13:04, 31 May 2010 (UTC)


 * Is there some support for this? I'm somewhat reluctant to go on without some of the other editors chiming in. TimothyRias (talk) 08:28, 1 June 2010 (UTC)


 * I'm on the fence. I'm eager to hear what Charles has to say about it.  If he doesn't comment, then I probably won't have much of an opinion either way until I see the end result, unfortunately.  (Sorry for being a useless spectator!)  Sławomir Biały  (talk) 11:31, 1 June 2010 (UTC)


 * This would seem to be the crunch point for the rewrite. (I have been standing out of the way, since I have enough to worry about on other fronts.) But it seems to me that a section on "formal definition" should both give a detailed discussion of the formal definition as generally recognised, and explain why this definition is appropriate, namely that it captures the concept of "tensor" as actually used. The proposal seems to be to use summary style to shorten one or both parts of that. One solution is to go ahead with applying summary style, but on the understanding that material may have to travel back and forth until issues of balance and appropriate completeness are dealt with by consensus. Another way would be to grasp the nettle at that point, and to say that tensor (intrinsic definition) is not the ultimate title we want for that material. I think that would amount to a merge in, and then split out, to get a different configuration. The latter is probably better, given that the way things are now is mostly historical accident. But it involves more serious thinking. Charles Matthews (talk) 13:16, 1 June 2010 (UTC)


 * Upon careful examination, I don't think that the current "Abstract definition" covers any facts not covered in the "... from tensor product" subsection or elsewhere in the current article. The main difference is that the abstract definition section takes more care in explaining the individual steps leading up to those facts. In particular, it takes more care in introducing concepts like the dual vector space, etc. Paradoxically, this is exactly what makes the section somewhat inaccessible to lay readers, because it makes the section heavy on mathematical notation. There are many readers that are not comfortable reading a text that is interrupted by equations every other line.
 * I think it is viable to improve the other sections in this article enough, that this section becomes completely obsolete. TimothyRias (talk) 15:08, 1 June 2010 (UTC)
 * But there needs to be a complete, formal definition. Its existence needs to be indicated clearly for readers who want it. Its connection to the subject as used by people who have a less formal definition needs to be put into place. No amount of "salami-slicing" actually will address these points. Nor will it cover the other writing issues and requirements of the guidelines that have been raised in several discussions here. Charles Matthews (talk) 06:51, 2 June 2010 (UTC)
 * Are you suggesting that the "... using tensor products" section does not provide a complete formal definition? Also note that declaring one of the definitions given in the "Definitions" section as THE formal definition is a violation of WP:NPOV. A definition being more abstract does not make it more formal. At least the last two given definitions are just as formal. (And even the multidimensional array definition can be argued to be formal, but that is more up for discussion (and not worth discussing here).) TimothyRias (talk) 08:21, 2 June 2010 (UTC)
 * Issues of neutrality usually come down to careful expression, as in "a formal definition as found in [textbook] is", rather than saying the formal definition is something, which is misleading. The point is that "formal definition" rather than handwaving should be addressed. "Formal" here is an indication to the reader, rather than a comment on degrees of formalisation. One knows that Einstein struggled with tensors: but probably not with the formal definition, rather the geometry. I have said that the article ought to follow the geometry, but there has to be a place where the essential algebra is clarified. In situations where there is some doubt, recourse should be made to referencing. Charles Matthews (talk) 08:42, 2 June 2010 (UTC)
 * So, is it your opinion that the definitions given in the "... as multilinear maps" and "... using tensor products" subsection of definition are not formal? TimothyRias (talk) 09:23, 2 June 2010 (UTC)

Dimensionality of quantities
I suggest that it be specified whether a tensor can be used for dimensionless quantities. ᛭ LokiClock (talk) 21:51, 31 May 2010 (UTC)
 * Why? (Also, when exactly would you call a tensor dimensionless? The latter is not entirely unambiguous to me.) TimothyRias (talk) 07:32, 1 June 2010 (UTC)
 * Dimensionless in the sense of representing no physical unit. Analogues of . I suppose it isn't necessary to state this, but I'd like to know. ᛭ LokiClock (talk) 12:22, 9 June 2010 (UTC)
 * To me it is still ambiguous what you mean by a tensor being dimensionless. I guess there are two possible meanings:
 * The components of the tensor are dimensionless.
 * The value of the tensor when evaluated on a set of vectors is dimensionless.
 * Both make sense in certain situations, and they only agree when the set of base vectors with respect to which the tensor components are calculated is dimensionless. TimothyRias (talk) 12:43, 9 June 2010 (UTC)

...is the matrix
Someone keeps insisting that a second order tensor is a matrix. This is both wrong and unhelpful in gaining a geometrical understanding of tensors. What is true is that the components of a second order tensor form a matrix in a coordinate system. Further edits saying that a tensor is a matrix should be discussed here. I have already attempted to explain this matter on my own talk page, but (evidently) unsuccessfully. Sławomir Biały (talk) 22:13, 31 May 2010 (UTC)


 * Obviously "Components of stress, a second-order tensor, in a three-dimensional Cartesian coordinate system form the matrix..." doesn't imply that the tensor is the matrix. So I don't see how changing that to "...a second order tensor. In a three-dimensional... it is the matrix..." does imply that. If the statements are not equivalent, there is no interpretation I can think of for the full sentence. ᛭ LokiClock (talk) 05:45, 1 June 2010 (UTC)


 * In the version you propose, the pronoun "it" can only refer back to second-order tensor, since it is the only singular noun in the preceding sentence. Thus, it reads like "a second order tensor is a matrix in ...", which is not equivalent to the previous statement.TimothyRias (talk) 08:25, 1 June 2010 (UTC)


 * The language, which is what I sought to change, is poorly formed and, as evidenced by this mix-up, ambiguous. Change it to say what is really meant, but keep the sentence break. ᛭ LokiClock (talk) 19:13, 1 June 2010 (UTC)
 * Is this what is meant? Stress, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix... ᛭ LokiClock (talk) 19:16, 1 June 2010 (UTC)
 * Yes, with the catch that the first sentence needs to contain a verb ;) TimothyRias (talk) 21:12, 1 June 2010 (UTC)

Motivating examples
I'm contemplating adding a section with "motivating examples" before the definition section. This section would discuss the objects that a tensor is supposed to generalize, and maybe something like a linear map. It would be a good place to introduce the representation of vectors in components and how these transform. It would also allow the introduction of the concepts of dual vectors and the canonical dual basis.

It could also talk about linear maps and their representation as matrices. This could discuss the similarity transformation as the effect of a change of basis.

The current definitions section takes knowledge of these concepts somewhat for granted, or introduces them very quickly. Having a somewhat more extended section introducing these concepts could do a lot for the accessibility of the article. Do others agree that this is a good idea? And if so, what should be covered in this section? TimothyRias (talk) 09:05, 2 June 2010 (UTC)


 * I support this. It would balance with the generalizations section. Perhaps as a subsection of history, rather than on its own? ᛭ LokiClock (talk) 10:21, 2 June 2010 (UTC)

Image
I think this is wrong:

whose columns are the forces acting on the $$\mathbf{e}_1$$, $$\mathbf{e}_2$$, and $$\mathbf{e}_3$$ faces of the cube.

it should be:

whose rows are the forces acting on the $$\mathbf{e}_1$$, $$\mathbf{e}_2$$, and $$\mathbf{e}_3$$ faces of the cube.

--Thomas gölles (talk) 15:35, 17 February 2011 (UTC)


 * maybe the vector on the left side should be a column vector? maybe the picture is wrong?  a basis is usually a row vector, right?  which is the basis, the e(space) or the o(force)?  (i would think the e.) that would imply the left side should be a column vector and the wording should be changed as suggested. Kevin Baastalk 16:30, 17 February 2011 (UTC)

Tensors in machine learning and pattern recognition applications
The "tensors" referred to in the linked articles are just high-dimensional arrays erroneously referred to as tensors. Should this section be allowed to remain? Sławomir Biały (talk) 11:58, 1 March 2011 (UTC)


 * Are you sure? I've skimmed the cited Lu et al. article. I couldn't find any instance in which it used the "tensors" as multilinear maps or performed changes of basis on them. On the other hand, it does use some tensor products and contractions, and it does explicitly talk about tensor spaces. Is it possible that there really are tensorial things happening here, that are hidden? Mgnbar (talk) 14:08, 1 March 2011 (UTC)


 * A tensor product followed by a contraction is essentially a change of basis, whether it's explicitly called that or not is irrelevant. ("a rose by any other name...") Kevin Baastalk 14:35, 1 March 2011 (UTC)


 * Change of basis is a particular kind of tensor product followed by contraction. Not all examples of tensor product followed by contraction are changes of basis, right? Anyway, this is not a deal-breaker. A piece of mathematics can be tensorial without having a bunch of changes of basis in it. Change of basis was just a tell-tale sign that I was looking for as I skimmed the article. Mgnbar (talk) 16:22, 1 March 2011 (UTC)
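For what it's worth, here is a small sketch of this point (illustrative only, assuming NumPy): a change of basis on a (1,1)-tensor is one particular tensor product followed by contraction, while another contraction of the same data, the trace, is not a change of basis, though it is invariant under one:

```python
import numpy as np

# Illustrative sketch: a change of basis on a (1,1)-tensor T is one
# particular "tensor product followed by contraction",
#   T'^i_j = A^i_k T^k_l (A^{-1})^l_j,
# but other contractions of the same data, e.g. the trace T^k_k, are not
# changes of basis (the trace is merely invariant under one).
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3))
A_inv = np.linalg.inv(A)

T_new = np.einsum('ik,kl,lj->ij', A, T, A_inv)  # change of basis
trace = np.einsum('kk->', T)                    # a different contraction

print(np.allclose(np.trace(T_new), trace))      # True: invariant
```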


 * Also, the applications of tensors to machine learning are fairly obvious. If they weren't being used in machine learning, that'd certainly be something to start doing right this instant! And really, we're generally not that slow. In today's day and age, if you've thought of something, chances are somebody (if not hundreds of people) has already thought of it and done it. So it is quite unlikely that tensors are not being used in machine learning. Kevin Baastalk 14:40, 1 March 2011 (UTC)


 * I agree with your statement that "if you've thought of something, then chances are somebody's done it". I have no doubt that researchers in machine learning are smart people who are fully capable of using tensors. I don't know anything about machine learning, so it is not at all obvious to me that tensors are applicable, or that they are not applicable. I have to agree with Slawomir that it's hard to see how tensors are really being used in the linked material. Additionally, the Multilinear subspace learning article itself is in some dispute.


 * At this point I do not favor deletion of the material. My ideal outcome is for the editors who know and care about this material to shore it up and make it relevant to Tensor. Mgnbar (talk) 16:22, 1 March 2011 (UTC)


 * A lot of work has been done on Multilinear subspace learning over the past few days, but I am still ambivalent. That article says "The term tensor in MSL refers to multidimensional arrays. Examples of tensor data include images (2D/3D), video sequences (3D/4D)..." This supports Slawomir's point, that practitioners are not really using tensors. For example, I have trouble seeing how a 2D image (presumably a "2-tensor") is supposed to represent a bivector, linear transformation, or bilinear pairing (the only kinds of 2-tensors). On the other hand, if there really is an entire field using the word "tensor" in this way, then Wikipedia should at least mention that use. It's not our job to decide that some popular word usages are wrong. Mgnbar (talk) 16:50, 4 March 2011 (UTC)

New image.
[[Image:Components stress tensor.svg|right|thumb|300px|Stress, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix

$$\scriptstyle\sigma = \begin{bmatrix}\mathbf{T}^{(\mathbf{e}_1)} \mathbf{T}^{(\mathbf{e}_2)} \mathbf{T}^{(\mathbf{e}_3)} \\ \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}$$

whose rows are the forces acting on the $$\mathbf{e}_1$$, $$\mathbf{e}_2$$, and $$\mathbf{e}_3$$ faces of the cube.]]

I've done some editing on the picture in the lead. First of all, I've corrected the labeling. More importantly, the vectors $$\mathbf{T}^{(\mathbf{e}_i)}$$ now actually are the sum of their components $$\sigma_{ji}$$. Moreover, in the spirit of less is more, I've reduced the clutter by removing various elements which are not necessary on this page. Hopefully, this makes the image less confusing. (I know it had me confused for quite a while.) Any feedback before I insert it in the article? TR 10:23, 29 March 2011 (UTC)
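A quick numerical check of the row convention in the caption above (the numbers are made up; this assumes the caption's convention that sigma[i, j] is the j-th component of the traction on the e_i face):

```python
import numpy as np

# Made-up numbers, assuming the caption's convention that sigma[i, j] is
# the j-th component of the traction (force per area) on the e_i face.
sigma = np.array([[10., 2., 0.],
                  [ 2., 5., 1.],
                  [ 0., 1., 3.]])

e1 = np.array([1., 0., 0.])
traction_e1 = e1 @ sigma   # contracting the normal with sigma picks out a row

print(np.allclose(traction_e1, sigma[0]))  # True: the first row, as the caption says
```

Under the opposite (column) convention one would instead contract on the second index, i.e. use sigma @ e1; for the symmetric stress tensor the two agree, which is why both conventions appear in the literature.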

Poorly written intro
The lead sentence says it all. "Higher orders"? Higher orders of what? Higher order functions (which at least have a wiki page)? The casual reader will never know. The intro carries on in a like vein without giving the reader any clue at all as to what a tensor is. Then, having failed to give any clue as to what a tensor is, the intro provides an example. Of a stress tensor. A particular type of tensor. A particular type of thing that the reader knows nothing about, because she/he has looked tensor up on Wikipedia.

Anyone who does not already know what a tensor is will find this intro utterly useless. Truly, this is one of the worst intros to a math-related article on Wikipedia. I'd start to fix it, but other than BOLD, I honestly don't know where to begin. Ross Fraser (talk) 02:58, 28 April 2011 (UTC)
 * (Why is it that people making remarks always have to rant like teetering idiots? Is it so hard to take a deep breath and write a mature comment?)
 * I agree that the current lede is not particularly good. It leaves the reader with the nagging question "But what is a tensor?". However, I'm still at a loss as to how best to improve it.
 * The root of the problem is that there exist different views as to what a tensor is exactly. The main views can be summarized as
 * "A tensor is a multi-dimensional array of numbers with certain transformation rules."
 * and
 * "A tensor is a multilinear map."
 * With various variants existing.


 * This makes it hard to start Tensor like you would most lemmas, by saying something like "A tensor is XXX", where XXX is something rather concrete. Any choice of XXX, however, almost automatically picks one of the competing views over the other, introducing a POV. TR 10:08, 28 April 2011 (UTC)
 * Calling another WP editor a ranting, "teetering idiot" is commensurate with your standard of what constitutes a mature comment?
 * Ross Fraser (talk) 06:03, 29 April 2011 (UTC)
 * I've rearranged the lead a bit, to put the discussion of tensor order first. I think that for someone never before encountering the notion of tensor, the idea of a multidimensional array is probably the most likely to be attainable.  I don't find the new lead altogether satisfactory either, but perhaps you can improve it.   Sławomir Biały  (talk) 13:16, 28 April 2011 (UTC)

(unindent) As the second line we now have,
 * "A tensor is a multi-dimensional array of numerical values."

That will make many people cringe. More importantly, it will serve to confuse readers who have had limited exposure to tensors, who have been taught that tensors are not multi-dimensional arrays. I do, however, understand why you want to mention multi-dimensional arrays first. Would any of the following alternatives work? Not sure what's best, so I'm suggesting it here first. TR 14:16, 28 April 2011 (UTC)
 * 1) "A tensor can be represented as a multi-dimensional array of numerical values." (This is certainly true, but leaves the reader in the dark as to what a tensor is. Feature rather than bug?)
 * 2) "A tensor can be thought of as a multi-dimensional array of numerical values." (Also true, hints at the ambiguity in the definition, but might also be interpreted as WP:WEASEL.)
 * The second line makes me cringe. Originally I had something like your first suggestion, but I think the phrase "can be represented" is likely to be even more confusing to someone with limited background.  As the original poster said, the lead failed to convey adequately a sense of what tensors "are".  This at least says what they "are", even at the risk of telling a little white lie at first that we attempt to correct in the second paragraph.  Just my 2c.   Sławomir Biały  (talk) 14:30, 28 April 2011 (UTC)


 * I know that this article is hard to write, but I vote against the "multidimensional array of numbers" definition. That would exacerbate the problem that novices have in distinguishing tensors from matrices.
 * How about this amalgam of the first sentences of the first and second paragraphs: "In mathematics and physics, a tensor is a scalar, a vector, or a certain kind of relationship among, or function of, vectors." And people learn by examples, so why not put basic ones early in the article? "Elementary examples include the dot product, the cross product, the determinant, and all linear transformations." And applications are good, like the stress tensor relating surface normal vector to the force vector on the surface. The second paragraph could be all about coordinates of tensors.
 * Lastly, there is no reason to restrict to geometric vectors; tensors don't require an inner product on the underlying vector space. Mgnbar (talk) 14:26, 28 April 2011 (UTC)
 * Your suggestion seems like a reasonable one as well. But ultimately some stylistic decisions need to be made, and I think the "multidimensional array" definition is both more concrete and perhaps the most commonly encountered first definition (despite its obvious limitations).  Also, to my mind, geometric vectors don't assume an inner product, but are the naive coordinate-independent "directed line segment" definition of vectors.  (It's true that our article discusses inner products, but it also discusses the generalized notion somewhat.) I think it's important to distinguish the vectors as geometrical objects somehow.  Sławomir Biały  (talk) 14:36, 28 April 2011 (UTC)
 * A compromise must be made, and sadly (for me) I may get outvoted, but I'd like to assert that the dot product (which anyone who is trying to learn tensors already knows) is more familiar than multidimensional arrays. Examples make abstract concepts concrete. Mgnbar (talk) 14:48, 28 April 2011 (UTC)

Regarding "vector" vs. "geometric vector": To me this seems related to the whole issue of Tensor vs. Tensor (intrinsic definition), which I've always found unsatisfactory. Maybe things would be clearer if we divided them up along these lines: algebra of tensors vs. geometry of tensors? Mgnbar (talk) 14:48, 28 April 2011 (UTC)
 * I don't think the two notions are easily separated, though. Sławomir Biały  (talk) 14:52, 28 April 2011 (UTC)

Another suggestion
How about the following suggestion for the first paragraph,
 * Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples include the dot product, the cross product, and linear maps. A tensor can be represented as a multi-dimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it. A number is a 0-dimensional array, so it is sufficient to represent a scalar, a 0th-order tensor. A coordinate vector, or 1-dimensional array, can represent a vector, a 1st-order tensor. A 2-dimensional array, or square matrix, is then needed to represent a 2nd-order tensor. In general, an order-k tensor can be represented as a k-dimensional array of components. The order of a tensor is the number of indices necessary to refer unambiguously to an individual component of a tensor.

This combines various aspects of Sławomir Biały's and Mgnbar's suggestions and some of my own. The first line gives as concrete a statement of what tensors are as we can: they are "geometric objects". It also makes clear that they have to do with linear relations between other geometric objects such as scalars and vectors. Hopefully, it doesn't go too far in the direction of saying that tensors are multilinear maps. If not, maybe replace "... objects that describe ..." with "... objects used to describe ...".

I liked Mgnbar's suggestion of including some concrete examples in the second line. There is a good chance readers will have heard of things like the dot product. (Note that I didn't include determinant as an example. Although the determinant can be interpreted as a rank N tensor, most readers will only know it as a map on linear maps/matrices. In that context it is not linear, which will act as an unnecessary point of confusion (IMHO).)

We can then safely say that these objects can be represented as multi-dimensional arrays, and explain what is meant by the order of a tensor in that context. Some additional tweaking may be needed since the first line no longer tells us that vectors and scalars are themselves tensors (at least not explicitly). TR 08:12, 29 April 2011 (UTC)


 * Seems fine to me, except there is some circularity in the first line saying that tensors describe linear relations between tensors. Would it be better to say something like "multilinear relations between vectors"?   Sławomir Biały  (talk) 12:06, 29 April 2011 (UTC)


 * I agree that this is a good attempt except for the first sentence. I prefer something along the lines of "scalars, vectors, and ... among vectors" (as I wrote above).


 * Now there is one other issue. This treatment emphasizes geometry, going so far as to define "order", which makes sense only in the presence of an inner product (or more generally a nondegenerate bilinear pairing). I know that this is common in physics and engineering, but there are whole swathes of mathematics where tensors are used without inner products. There is no canonical way to raise and lower indices, so there are real differences among (2, 0)-, (1, 1)- and (0, 2)-tensors, for example. This is why I was arguing for a clear distinction between algebra and geometry of tensors above; the latter would assume an inner product, and the former would not. Mgnbar (talk) 12:51, 29 April 2011 (UTC)
 * First off, I think you are wrong about the last part. In no way does the definition of covariant and contravariant tensors require the introduction of an inner product. The inner product is only required if you want to raise or lower indices. More to the point, the suggested paragraph does not even mention tensor type, but only the total order/rank/degree.
 * As to the perceived circularity in the first sentence: that was intentional. In general, higher-order tensors do describe linear relations between lower-order tensors. The inverse metric, for example, is a tensor that describes a linear relation between pairs of covectors (rank-1 tensors) and scalars. I don't think it is a bad thing to hint at this sort of thing up front, since it gives an indication of the generality of the concept. It nips in the bud any lingering questions like "what do you call something that describes a linear relation between tensors?" (answer: a tensor).TR 14:32, 29 April 2011 (UTC)
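To spell out the inverse-metric example in standard abstract index notation (the symbols here are my own, not from the thread): the inverse metric $g^{ab}$ takes a pair of covectors linearly to a scalar,

```latex
(\alpha, \beta) \;\mapsto\; g^{ab}\,\alpha_a\,\beta_b \in \mathbb{R},
\qquad
g^{ab}\,(\lambda\alpha_a + \mu\gamma_a)\,\beta_b
  \;=\; \lambda\, g^{ab}\alpha_a\beta_b \;+\; \mu\, g^{ab}\gamma_a\beta_b,
```

so a rank-2 tensor does indeed encode a (bi)linear relation between rank-1 tensors and scalars.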
 * Well, I guess I'm satisfied. By me, I'd say go ahead and be bold.   Sławomir Biały  (talk) 14:34, 29 April 2011 (UTC)


 * We have miscommunicated about this "geometry issue". I was certainly not saying that you need an inner product to define (p, q)-tensors. In fact, I was saying the opposite: Without an inner product (or more generally a nondegenerate bilinear pairing) you must keep upper and lower indices distinct, because you have no raising/lowering mechanism. For example, there are three kinds of "2-tensors" --- (2, 0), (1, 1), and (0, 2) --- and they all behave differently and mean different things.
 * So I have never been able to recognize any significance or value to the "total order" concept p + q, in the absence of an inner product. (Please correct me if I am wrong!) This is why I said that a discussion of total order implicitly slants the article toward geometry. Please note that you could completely satisfy me by replacing p + q with the more detailed notation (p, q). Mgnbar (talk) 15:18, 29 April 2011 (UTC)
 * I understood what you meant. I just don't think the technical distinction between covariant and contravariant indices is something that should be emphasized in the lead. The current lead is pitched at an absolute novice.  Insisting on being strict about conceptually hard technical issues is going to make it far less accessible in my opinion.  Sławomir Biały  (talk) 15:39, 29 April 2011 (UTC)


 * Well, the significance of the total order is very simple. It is the dimension of the array needed to represent it/the number of indices, etc. This is the only thing that is currently mentioned in the first paragraph. Explaining the (p,q) type of a tensor is somewhat more involved, as it refers to concepts that most readers will never have heard of. Their definition is discussed in detail in the body of the article. For the above reasons, I don't think it is such a good idea to mention them in the lede.TR 15:45, 29 April 2011 (UTC)


 * Okay, I am still of the opinion that maintaining p and q separately clarifies the meaning of tensors rather than obscuring it, but we don't want to hit the reader with too much in the intro, so I concede that point.
 * TimothyRias, I know that p + q is the total dimension of the array; I just don't see what the point of computing this number is. If you were writing a computer program to do calculations with tensors, you would have to store the numbers p and q (rather than just the number p + q), and use them frequently, or else your computations would go wrong. (I have written such programs.) As further evidence, I note that Tensor (intrinsic definition) mentions the total order barely, as a tiny aside. But this is veering away from the editing of Tensor, so let's not argue about it here. Cheers. Mgnbar (talk) 17:09, 29 April 2011 (UTC)
 * This is indeed veering off-topic so this is the last comment I'll make on the matter. If you want to store the components of a (2,2) tensor in a C program, what depth array would you declare? Exactly 2+2=4. If you are treating a tensor as a multidimensional array, the dimension of that array will always be very important. In fact the numbers p and q separately are somewhat useless, since you will need to keep track of the valence of each index separately. I.e. the knowledge that the 10-dimensional array A has 4 covariant and 6 contravariant indices is not that useful in an actual computation. What you need is the knowledge that indices 1, 3, 6, and 7 are covariant.TR 20:59, 29 April 2011 (UTC)
 * Salomon Bochner is reputed to have proclaimed at the beginning of a lecture at Princeton: "A tensor is something with indices!" This initial proclamation is perhaps too crude to be useful for most serious applications, but it is a very useful starting point.   Sławomir Biały  (talk) 17:25, 29 April 2011 (UTC)


 * Yeah, I'd like to hear the part of the anecdote where he talks about the Christoffel symbol. ;-) Mgnbar (talk) 19:25, 29 April 2011 (UTC)
 * Yeah, well he probably left that as a homework exercise... ;-)  Sławomir Biały  (talk) 19:46, 29 April 2011 (UTC)