Talk:Covariant transformation

What is a good category for this article?
Oleg Alexandrov 22:20, 5 Mar 2005 (UTC)

Andrew Critch says:

This article gives a better explanation of covariance/contravariance than anything else I've seen on the net, including other Wikipedia articles. If any merging occurs, I think other articles involved should be (more or less) appended to the end of this one. This article's introduction and examples should be left intact, and for the most part, at the BEGINNING of the merged article.

Comment:

Dude, this article is very confusing. It lists rotation as an example of covariant and contravariant transformations. The underlying idea is good, in that the basis vectors rotating counterclockwise is "covariance" and the vector components rotating clockwise is "contravariance."

The only problem here is that covariant components and contravariant components of vectors act the same under rotation:

"When only rotations of the spatial coordinate system are considered, the components of contravariant and covariant vectors behave in the same way. It is only when other transformations are allowed that the difference becomes apparent."

Thus this example could be very confusing. I.e. I'm trying to learn this stuff and reading that article just confused me even more... — Preceding unsigned comment added by 98.242.74.222 (talk) 23:05, 4 March 2012 (UTC)


 * I largely agree that basis rotation is a suboptimal example, for the following reason. It is a fine example for showing the opposite rotation behavior of basis vectors and the components of vectors in the vector space. However, it is not a representative example when considering the basis and components of the dual space, because covariant and contravariant _components_ (i.e. the components of vectors and of the corresponding covectors) transform identically under pure rotations. This is easy to prove, and comes down to the fact that the inverse of a pure rotation matrix is also its transpose (a pure rotation matrix is orthogonal). Contravariant components transform by multiplying by a rotation matrix R from the left: Rx. Covariant components transform by multiplying by the inverse from the right: yR^-1. Setting y = x^T gives x^T R^-1 = x^T R^T = (Rx)^T, which is just the transpose of the original product Rx - and this will _not_ be the case for any R other than a pure rotation. The fact that contravariant and covariant components transform identically precisely when R is an orthogonal (rotation) matrix makes this arguably a confusing example.
 * Another easy way of seeing this is to think about the induced transformation of the corresponding "dual basis" for the dual space. The dual basis dx^i corresponding to the underlying vector space basis e_i is always defined such that dx^i(e_j) = delta^i_j (the Kronecker delta). It is therefore easy to see that under pure rotations of the basis e_i, the basis dx^i is rotated "in unison" so that the above identity remains true; if they rotated "in opposite directions" this would not be the case. So this section, although it is good for understanding the opposite transformation behavior of basis vectors and vector components, is not representative of what is typically referred to as "covariant components", since those components correspond to covectors in the dual space, and in this particular case would actually transform identically to contravariant components.
 * Examples that more clearly show the distinction between covariant and contravariant components involve "stretching" the basis vectors - for example, transforming the basis by the matrix [[2,0,0],[0,2,0],[0,0,2]]. There will always be a clear opposite transformation behavior for covariant and contravariant components under stretching of basis vectors, unlike with pure rotations. PhDSciencePerson (talk) 05:45, 3 June 2024 (UTC)
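The claims in this thread are easy to check numerically. Below is a minimal Python sketch (plain 2x2 matrices, no libraries). It assumes one common convention, not spelled out in the article: for a basis-change matrix A, contravariant components transform by A^-1 and covariant components by A^T.

```python
import math

def mat_vec(M, v):
    # multiply a 2x2 matrix by a 2-vector
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def inv(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

v = [1.0, 2.0]  # sample components in the old basis

# Pure rotation: the basis-change matrix R is orthogonal, so R^-1 == R^T,
# and the contravariant (R^-1 v) and covariant (R^T v) rules coincide.
t = math.pi / 6
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
contra_R = mat_vec(inv(R), v)
co_R = mat_vec(transpose(R), v)
assert all(abs(a - b) < 1e-12 for a, b in zip(contra_R, co_R))

# Uniform stretch by 2: now the two rules pull in opposite directions.
S = [[2.0, 0.0], [0.0, 2.0]]
contra_S = mat_vec(inv(S), v)    # components halve: [0.5, 1.0]
co_S = mat_vec(transpose(S), v)  # components double: [2.0, 4.0]
print(contra_S, co_S)
```

This shows exactly the point made above: the rotation example hides the distinction, while the stretching example makes it obvious.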

Merge
At first glance this article is better. —Ben FrantzDale 23:51, 30 August 2006 (UTC)


 * There are at least three closely related articles that need cleanup: this one, covariance and contravariance, and differential form. Thoughts? —Ben FrantzDale 10:12, 11 May 2007 (UTC)


 * Leave differential form out of it. It is another example of pointless abstraction, like category theory. JRSpriggs 11:06, 11 May 2007 (UTC)


 * I also support merging this one and covariance and contravariance. Both articles have their merits and demerits, and a merged article would contribute toward the completeness and clarity that are lacking at many points. Lantonov 14:32, 23 July 2007 (UTC)


 * Covariance, contravariance and the associated transformations cannot reasonably be discussed without fully discussing one another. In consequence, the content of the articles is to all intents and purposes duplicated. The topic is however more general, hence a merge of the articles Covariant transformation and Covariance and contravariance of vectors into a broader article Covariance and contravariance of tensors (currently a redirect) should be considered, in which vectors should be discussed en route to the complete concept. — Quondum☏ 12:20, 8 August 2012 (UTC)

Coordinate Representation
There needs to be a drawing of a set of arbitrary bases (duals), in the context of an orthogonal basis, representing the contravariant components alongside the covariant ones. I'll see if I can work on it.--Charlesrkiss 07:07, 28 October 2006 (UTC)


 * I plan to put such a drawing soon (in maybe another week) in the article Curvilinear coordinates, which I am working on now. Lantonov 14:52, 23 July 2007 (UTC)

Transformation versus change of coordinates
I am a bit confused about covariance and contravariance. It looks like a covariant "transformation" is a change of coordinates and does not transform vectors at all. In the example (right), the vector v is represented in red in polar coordinates and in black in Cartesian coordinates, but both represent the same vector v, which lies in the tangent space of the point (3,4) (in Cartesian coordinates; this is also the point (5, atan(4/3)) in polar coordinates). While the "transformation" from Cartesian to polar coordinates would change the numerical representation of v from (3/2,3/4) to something like (1.34, 1.067), we are still talking about the same vector.

But the text of the article says "The covariant transformation [from black to red] here is a clockwise rotation." But nothing is being rotated, right? —Ben FrantzDale 23:04, 28 April 2007 (UTC)
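The change of components described in the preceding comment can be computed directly. A small Python sketch (the point (3,4) and components (3/2, 3/4) are taken from the comment; the polar values guessed there are not asserted here):

```python
import math

x, y = 3.0, 4.0     # the point, in Cartesian coordinates
vx, vy = 1.5, 0.75  # Cartesian components of v at that point
r = math.hypot(x, y)  # 5.0

# Jacobian of (r, theta) with respect to (x, y), evaluated at the point.
J = [[x / r, y / r],
     [-y / r**2, x / r**2]]

# Contravariant rule: v'^i = (partial x'^i / partial x^j) v^j.
vr = J[0][0] * vx + J[0][1] * vy
vtheta = J[1][0] * vx + J[1][1] * vy
print(vr, vtheta)  # the same vector v, expressed in the polar coordinate basis
```

Nothing geometric is rotated here; only the numerical representation of v changes, which is the point being made in this thread.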


 * I can't see any counterclockwise rotation of the components, which are collinear with the basis vectors. (anonymous) —Preceding unsigned comment added by 90.7.103.35 (talk • contribs)


 * Exactly. —Ben FrantzDale 17:36, 17 May 2007 (UTC)


 * I think that the concept of rotations given here, in an attempt to give a geometric interpretation of covariant and contravariant, is confusing. Personally, I couldn't see what is rotated in which direction, as the description is very ambiguous. It would be better to stick to the textbook description of co- and contravariant until someone finds a clearer geometric interpretation. Otherwise, I am a very ardent adherent of geometric clarification of abstruse math topics (covariance is one of them) and applaud any attempt in this direction, including this one. Lantonov 14:47, 23 July 2007 (UTC)


 * I have changed the pictures, in line with my comments in a section further down. I hope it's now rather clearer what is being illustrated and why.  Jheald (talk) 15:26, 27 October 2008 (UTC)

There is a very good geometrical treatment of this in a little book called "Geometrical Vectors" by Gabriel Weinreich. I found it very useful and now think in terms of "arrow fields" and "stack fields" for contravariant and covariant "vectors". This approach really is very helpful (at least I found it so). Dave59 09:57, 30 July 2007 (UTC)


 * It would be very helpful if you included this treatment from Weinreich's book in the article. I am not aware of it. Lantonov 10:05, 30 July 2007 (UTC)

In the book (which is about 100 pages long) Weinreich uses geometrical arguments to formulate a version of traditional three-dimensional vector calculus that is as far as possible topologically invariant. To do this he distinguishes between different types of "vector" that transform in different ways not only under rotations but also under "stretchings" and "compressions" of space. This leads to four different types of "vector" (covariant, contravariant, contravariant density and covariant capacity). It also becomes necessary to distinguish between scalar densities and scalar capacities. If this is done, and one coordinate system is then defined metrically, the algebraic forms of all the usual vector calculus identities become identical in all coordinate systems, as long as each type of vector and scalar is expressed in terms of its natural basis of partial differentials as explained in this article, i.e. contravariant vectors are given a covariant basis and vice versa. This is complicated and requires a lot of 3-D diagrams. I am not up to expressing it succinctly. Dave59 11:26, 30 July 2007 (UTC)
 * I am familiar with "contravariant density" and "covariant capacity" but not with their geometric interpretations. I can imagine that the whole thing is complicated, especially when it involves developing new constructs. Lantonov 11:38, 30 July 2007 (UTC)

General Meaning
Covariance has a more general meaning, which is not mentioned in the article. The second chapter of J. Bjorken and S. Drell, Relativistic Quantum Mechanics, McGraw-Hill, 1964, has the title "Lorentz Covariance of the Dirac Equation". This does not concern co- or contravariant transformation, but more generally a linear representation of Lorentz transformations. In this broader sense one uses the word "covariant" when one states that Einstein's equations or the Rarita-Schwinger equation are covariant. --Norbert Dragon (talk) 14:34, 22 August 2008 (UTC)
 * You might find the articles General covariance and Lorentz covariance helpful for your inquiry into this interesting topic. By the way, it was Einstein around 1920 who coined the notion of covariance of natural laws.--LutzL (talk) 16:01, 23 August 2008 (UTC)

Picture
The picture is a good one. But for polar and cylindrical coordinates, it is customary to measure the angle φ in an anticlockwise direction, so the unit vector eφ also points in an anticlockwise direction. This then gives the conventional mapping of the frame of base vectors from {ex,ey} to {er,eφ} and vice versa, rather than the {ex,ey} to {eφ,er} currently shown. I suggest it would therefore be appropriate to flip the direction of vφ in the picture.

It would also be nice, I think, to have a corresponding picture showing the coordinate representation of the vector v in the old co-ordinates and the new co-ordinates, to show pictorially how this transforms in a contravariant way. Jheald (talk) 12:41, 4 October 2008 (UTC)


 * Fixed.


 * (Though perhaps there is now one image too many -- I'm not sure whether the image in the middle, showing the components of v (which is actually the image most like the one previously on this page), is useful or not. Perhaps the presentation might be more focused without it. But I'm going to leave that for other readers/editors to call.)


 * Jheald (talk) 15:21, 27 October 2008 (UTC)

The summation are?
As it stands, the subject is "summation" and the verb is "are": "The summation over all indices of a product with the same lower and upper indices are invariant to a transformation." If I'm reading it right, the statement is correct, and it's just a verb error. In other words, it is the summation (or to be precise, the result of the summation) that's invariant, so the correct fix is simply to change the verb to "is". It's been ages since school, though (hence my need to read the page in the first place), so I'm not quite confident of that, and I'll leave the fix to someone who is. --Dan Wylie-Sears 2 (talk) 14:22, 28 September 2010 (UTC)
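For what it's worth, the invariance of such a fully contracted sum is easy to verify numerically. A Python sketch (the 2x2 change-of-basis matrix A is an arbitrary example; it assumes the common convention, not stated on this page, that contravariant components transform by A^-1 and covariant components by A^T):

```python
def mat_vec(M, v):
    # multiply a 2x2 matrix by a 2-vector
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def inv(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [0.0, 3.0]]  # arbitrary invertible change of basis
v = [1.0, 2.0]                # contravariant components (upper index)
w = [3.0, -1.0]               # covariant components (lower index)

s_old = sum(vi * wi for vi, wi in zip(v, w))  # the contraction v^i w_i
s_new = sum(vi * wi for vi, wi in zip(mat_vec(inv(A), v),
                                      mat_vec(transpose(A), w)))
print(s_old, s_new)  # equal: the contracted sum is invariant
```

The opposite transformation rules cancel in the sum, which is exactly why the result of the summation is what is invariant.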

Mkhomo (talk) 18:42, 24 March 2012 (UTC) Comments:

Ambiguous Motivation
The prior poster has identified one of a number of confusions in this document. It was written some time ago and has been overtaken by other related entries that do not create as much confusion. Invariants are those geometrical properties that can only be recovered by summing their various contributing factors across all indices. Hence (if I number each equation in the article starting at 0), equation (0) illustrates that the vector v is the same irrespective of whether the coordinates are primed (new) or unprimed (old).

The text leading to equation (0) has another linguistic confusion: "on a chosen basis ei, related to a coordinate system xi (the basis vectors are tangent vectors to the coordinate grid)" should not say "tangent to the coordinate grid" but rather "tangents identified with/by the coordinate grid".

Leaving grammatical ambiguity aside, the introductory section lacks clarity on account of scattered motivations, and it does not help that the covariant transformation is defined by tradition ("In physics, ..."), as the notation that follows the definition gives the impression of a by-fiat convention. Further, invariance has no direct relationship to the co/contra-variance of spatial items, so equation (0) is a mild source of confusion that is amplified by the rotation illustration.

Having said all that, physicists do inform the algebraists and geometers as to which notational conventions properly correspond to physical realities of the empirical world, as the latter have no initial bias. A re-write of the article's motivation may pick up from my remaining comments, which I hope are constructive.

I suggest leaving the article as-is but prefacing it with the following: Mkhomo (talk)


 * The suggested text seems to be very obscurely written. I have removed it from the article.  I agree that the lead, as currently written, is not perfect.  But it is much better than what you had inserted.  Please try to satisfy your objections in a way that at least makes the article understandable to others.   Sławomir Biały  (talk) 13:19, 11 January 2013 (UTC)

Co/Contra-variant Transformations
The co/contra-variant nature of vector coordinates has been treated as an elementary characterisation in tensor analysis. For example, in the classic text translated from the 1966 Russian 3rd edition and published by Dover, the coordinate space is first introduced with oblique bases, with basis vectors carrying subscript indices (ei = $$\frac{\partial \;}{\partial {x}^i}$$) and coordinates carrying superscript indices (xi); the motivation for this convention is deferred until later, with the following passage as quoted:


 * "These designations of the components of a vector stem from the fact that the direct transformation of the covariant components involves the coefficients $$\alpha^{k}_{\,i'}$$ of the direct transformation, that is $$A_{i'} = \alpha^{k}_{\,i'} A_k$$, while the direct transformation of the contravariant components involves the coefficients $$\alpha^{i'}_{\,k}$$ of the inverse transformation, $$A^{i'} = \alpha^{i'}_{\,k} A^k$$."

In the first instance, suppose f is a function over the vector space; one can express the scalar derivative components of f in the new coordinates in terms of the old coordinates using the chain rule: $$\frac{\partial f\;}{\partial {x'\,}^i} = \sum_j \frac{\partial f\;}{\partial {x}^j} \; \frac{\partial {x}^j\;}{\partial {x'\,}^i}$$. The new components equal the rate of change of the old coordinates with respect to the new coordinates, times the old components. To paraphrase: they transform with the change of the OLD coordinates (transform directly).

In the second case, where the components are not coordinates but some derivative of the coordinates such that vi = dxi/dλ, a change of basis gives, for each new component (i), by the chain rule: $$v'^i = \frac {\partial x'^i}{\partial x^j} \frac {dx^j}{d\lambda} = \frac {\partial x'^i}{\partial x^j}v^j$$. That is, the new components equal the rate of change of the new coordinates with respect to the old coordinates, times the old components. To paraphrase: they transform with the change of the NEW coordinates (transform inversely).
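The two rules just paraphrased can be contrasted in a one-dimensional numeric sketch (a hypothetical linear coordinate change x' = 2x; all numbers are invented for illustration):

```python
# Under x' = 2x:  dx'/dx = 2  and  dx/dx' = 0.5.
df_dx = 3.0  # gradient component in the old coordinate
v = 4.0      # velocity component dx/dlambda in the old coordinate

df_dxp = 0.5 * df_dx  # covariant rule: multiply by dx/dx' (transform directly)
vp = 2.0 * v          # contravariant rule: multiply by dx'/dx (transform inversely)

# The scalar df/dlambda is the same in either coordinate system:
print(df_dx * v, df_dxp * vp)  # 12.0 12.0
```

The gradient components shrink exactly as fast as the velocity components grow, so their pairing is coordinate-independent.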

The co/contra-variance of a transformation is an algebraic property, and the designation cited above also applies in a generalised differential geometry setting where in a Category C, with vector spaces V, W belonging to the Category C, a covariant Functor L maps the set of homomorphisms Hom(V,W) to Hom(LV,LW), whereas a contravariant Functor L' maps Hom(V,W) to Hom(L'W,L'V). Again notice the transformation domain's push forward direction in the covariant, and pull-back direction in the contravariant case. This construct extends the covariant differential to Manifolds such as Vector Bundles and their Connections, for example by Parallel Transport extending covariant derivatives to Vector Fields over Manifolds by affinely connecting tangent vectors from one Tangent bundle on the manifold to neighbouring fibres along a 'curve'.(see for example:covariant derivative).
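The direction reversal described for functors can be illustrated with a toy Python sketch (plain functions standing in for morphisms; everything here is invented for illustration, not taken from any text):

```python
# A map L : V -> W (here just doubling a number).
def L(v):
    return 2 * v

# Points of V travel in the same direction as L (push-forward, covariant).
def pushforward(point):
    return L(point)

# Functions on W travel in the opposite direction (pull-back, contravariant):
# (L* f)(v) = f(L(v)) is a function on V.
def pullback(f):
    return lambda v: f(L(v))

f_on_W = lambda w: w + 1
f_on_V = pullback(f_on_W)
print(pushforward(3), f_on_V(3))  # 6 7
```

The composition order flips for the pull-back, which is the Hom(V,W) → Hom(L'W,L'V) reversal in the contravariant case.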

Invariance
Metric invariance is a geometric concept indicating that a physical property does not change with arbitrary coordinate frames. For example, in (early) tensor analysis, elasticity (Young's modulus) should not change simply because we measure it by tensile testing of a rectangular object (a bar) rather than a cylindrical one (a rod). Invariance refers to the fact that elasticity is an independent physical property which is only recovered through the determinant of the stress/strain tensors (summation over all indices), irrespective of whether cylindrical or rectangular coordinates are used, and the transform and its adjoint commute. Other references to invariance exist - of, say, parallelism structures on fibre bundles, where the covariant derivative can be recovered unchanged across bundle charts under parallel transport. Other examples of variance and invariance motivated from physics, as a neutral alternative, follow. Mkhomo (talk)

Confusion about duals transforming contravariantly or covariantly
There is a contradiction present in this article, notably when looking at the end of the "dual properties" and the beginning of the following section. It reads "the elements of the dual space (called dual vectors) transform covariantly and the elements of the tangent vector space transform contravariantly," however, in the section on transformation of tensor components, the article refers to "dual vectors" as "differential forms," which, earlier in the article, are stated to "transform contravariantly" (as an explicit example of contravariant transformation). So, which is it? Do dual vectors transform covariantly, or do they, as identified as differential forms, transform contravariantly? This is pretty confusing as I try to understand the distinction between these two ideas and I would appreciate some clarity there. Thanks. — Preceding unsigned comment added by 2600:6C52:7A00:18C:D0A6:BFB0:250C:947 (talk) 08:53, 28 October 2021 (UTC)


 * The dual _basis_ "transforms contravariantly" and the _components_ of dual vectors transform covariantly.
 * This is just like the case of vectors in the underlying (not dual) vector space - the _basis_ vectors "transform covariantly" (essentially by definition) while the _components_ of vectors in the vector space transform contravariantly.
 * The reason I put "transforms" in quotes above is that the "index" of a basis vector (e_i, in the case of the vector space basis, or dx^i, in the case of the 1-form basis) is _not_ really a component, but a label for the basis vector. So part of the confusion is that in the equations for transforming _bases_, those indices do not refer to individual components of a vector, but to a single basis vector. For example, in the expression x^i e_i, the x^i refers to a vector component (this transforms contravariantly) whereas the e_i refers to the i-th basis vector (and the basis vectors "transform covariantly" as explained).
 * To further elaborate, differential forms are indeed dual vectors / covectors, and their basis is the dx^i. The basis 1-forms dx^i "transform contravariantly", while the _components_ of general 1-forms in the space of covectors transform covariantly. PhDSciencePerson (talk) 20:17, 2 June 2024 (UTC)
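The "in unison" behavior of the dual basis can be checked mechanically. A Python sketch (basis vectors stored as the columns of a matrix E, dual basis covectors as the rows of D; the basis-change matrix A is an arbitrary example, not from the article):

```python
def mat_mul(P, Q):
    # multiply two 2x2 matrices
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

E = [[1.0, 1.0], [0.0, 2.0]]  # columns are the basis vectors e_i
D = inv(E)                    # rows are the dual basis dx^i, so D E = I

A = [[2.0, 1.0], [0.0, 3.0]]  # change of basis: e'_i = sum_j A[j][i] e_j
Ep = mat_mul(E, A)            # the basis "transforms covariantly"
Dp = mat_mul(inv(A), D)       # the dual basis takes the inverse matrix

pairing = mat_mul(Dp, Ep)
print(pairing)  # identity matrix: dx'^i(e'_j) = delta^i_j still holds
```

Since Dp Ep = A^-1 (D E) A = A^-1 A = I, the Kronecker-delta pairing is preserved for any invertible A, which is exactly why the dual basis must transform oppositely to the basis.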

Inverse?
" The inverse of a covariant transformation is a contravariant transformation." The inverse??? 2A01:CB0C:CD:D800:6C99:F9E9:1059:8F03 (talk) 15:57, 20 December 2021 (UTC)