Talk:Gradient

Where is the jargon?
"In mathematics, the gradient is a generalization of the usual concept of derivative to the functions of several variables."

The gradient is something like the rate at which something increases over an area. The above sentence says something like, gradient is a non-specific reference to consequentials of non specifics, which isn't very specific, but is definitely a derivative consequential to specifics. ~ R . T . G  13:33, 25 August 2014 (UTC)
 * What does this comment mean in non-jargon English? In particular, what are consequentials and specifics, and where are they defined? D.Lazard (talk) 13:56, 25 August 2014 (UTC)
 * It means that using those words like that in a sentence makes it very difficult to understand without significant prior knowledge of the subject, and your response repeats and exemplifies it beautifully in every sentiment, thank you. ~  R . T . G  15:14, 25 August 2014 (UTC)
 * Perhaps it should say "In mathematics, the gradient is a generalization of the usual concept of derivative of a function in one dimension to a function in several dimensions"? In fact, as currently stated, it is not true; the gradient with arbitrary coordinates is not the same thing as a vector formed from the partial derivatives of a function of those coordinates, which the wording appears to suggest. I'll change it accordingly.  —Quondum 06:29, 26 August 2014 (UTC)
 * Your edit has improved the readability. To be more helpful I'll have to take some time and actually figure it out. But you are to be commended for approaching a jargon request at all, kudos.  ~  R . T . G  22:45, 26 August 2014 (UTC)

Rewrite the section Gradient of a vector
I have rewritten the section Gradient of a vector. The general definition of the vector gradient is given in GTM 93, and the detailed reasoning is as follows.

By the Koszul connection, one has


 * $$\nabla_c \mathbf{f}^b = (\frac{\partial f^{\nu}}{\partial x^{\mu}}+{\Gamma^{\nu}}_{\mu\sigma}f^{\sigma})(\mathbf{e}^{\mu})_c(\mathbf{e}_{\nu})^b.$$

Given the metric g, one obtains


 * $$g^{ac}\nabla_c \mathbf{f}^b = g^{\mu\rho}(\frac{\partial f^{\nu}}{\partial x^{\mu}}+{\Gamma^{\nu}}_{\mu\sigma}f^{\sigma})(\mathbf{e}_{\rho})^a(\mathbf{e}_{\nu})^b$$

in which the Greek indices denote coordinates. Rewriting with Latin indices, one obtains the results in the article.--IkamusumeFan (talk) 23:10, 26 October 2014 (UTC)
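(For readers who want to check the index gymnastics above mechanically: the Christoffel symbols and the covariant derivative of a vector field can be computed with a computer algebra system. The following is a minimal sketch, assuming SymPy, using the polar metric diag(1, r²) as a concrete two-dimensional example; the component names f1, f2 are illustrative placeholders, not anything from the article.)

```python
import sympy as sp

# Polar coordinates as a concrete 2-dimensional example (illustrative choice).
r, th = sp.symbols('r theta', positive=True)
q = [r, th]
n = 2

g = sp.diag(1, r**2)      # metric g_{ij} in polar coordinates
g_inv = g.inv()           # inverse metric g^{ij}

# Christoffel symbols: Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sum(g_inv[k, l] * (sp.diff(g[l, j], q[i])
                              + sp.diff(g[l, i], q[j])
                              - sp.diff(g[i, j], q[l]))
               for l in range(n)) / 2
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]

# Covariant-derivative components of a vector field f^k (the Koszul formula above):
#   nabla_i f^k = d_i f^k + Gamma^k_{il} f^l
f = [sp.Function('f1')(r, th), sp.Function('f2')(r, th)]   # placeholder components
nabla = [[sp.diff(f[k], q[i]) + sum(Gamma[k][i][l] * f[l] for l in range(n))
          for k in range(n)]
         for i in range(n)]

print(Gamma[0][1][1])  # Gamma^r_{theta theta} = -r
print(Gamma[1][0][1])  # Gamma^theta_{r theta} = 1/r
```

The two printed symbols are the only nonzero independent Christoffel symbols of the polar metric, so the covariant derivative picks up exactly the correction terms one expects in curvilinear coordinates.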


 * This new version cannot be kept, for several reasons. Firstly, it uses undefined notation, such as the g that appears in several formulas. Secondly, it is too technical: it cannot be understood by most readers, even those who know the notion of gradient and use it frequently in the common case of the Euclidean metric. Thirdly, your terminology is not standard and is confusing: you use "vector" in the meaning of "vector field". In standard terminology, "vector" means an element of a vector space. If this vector space has a metric, this must be specified. If the vector depends on one or several variables, this must also be specified. Finally, when a special case (here, the gradient with respect to the Euclidean norm) is widely used, it must be explained before its generalization. Therefore, I'll revert your edits. Please, if you want to add or expand the generalizations of the simpler case, do it without destroying what is already there. D.Lazard (talk) 08:53, 27 October 2014 (UTC)


 * I disagree with the revert. The issue of vector versus vector field is already a problem with that section independently of the recent edit, and should be fixed by normal editing. The existence of a metric has also implicitly been assumed. The proposed revision at least asserts flat out that there is a metric tensor, unlike the older version of the section. (A metric tensor is always required to define the gradient, because it is needed to raise the index of the differential, even if all we care about are gradients of scalar fields.) Also, it isn't any more technical than what is there already, which includes a formulation using Christoffel symbols for calculation in curvilinear coordinates. Surely the invariant formulation of this should also be mentioned. So I have restored the edit, but moved it to the end of that section, with some tweaks. Sławomir Biały (talk) 10:42, 27 October 2014 (UTC)

I think it is important to clarify whether this represents a covariant derivative, Fréchet derivative, or Gâteaux derivative. When I asked him, Maciej Zworski told me that the question "what is the gradient of a vector field" is ill-defined on account of the meaning of "gradient of a vector field". All I know is that yes, the gradient of a vector field is a tensor of type (0,2) as it's supposed to provide the "best linear approximation" of the vector field in question. As someone who has a difficult time understanding tensor math, I think it's also imperative that its expression in coordinates also be given, particularly the Euclidean case and the case of polar, cylindrical, and spherical coordinates. --Jasper Deng (talk) 07:03, 27 June 2015 (UTC)

Natural Gradient (geometrically correct gradient)
According to tensor calculus, Tensors_in_curvilinear_coordinates, the definition of gradient which produces the same vector from a scalar field independent of the coordinate system is:

$$\nabla f(x) = \frac{\partial f}{\partial q^i} b^i = \frac{\partial f}{\partial q^i} g^{i j} b_j$$

where $$f$$ is the function, $$x$$ represents the input to the function parameterized by the coordinate system, $$q^i$$ is the ith parameter of the coordinate system, $$b^i$$ is the ith basis vector of the contravariant (dual) basis, $$b_i$$ is the ith basis vector of the covariant basis, and $$g^{i j}$$ is the inverse metric tensor.

I feel this is important to include on this wiki page, as it shows how to calculate the gradient in different coordinate systems. A few examples of the tensor (natural) gradient:

Euclidean: $$b_1 = \vec{i}, b_2 = \vec{j}, b_3 = \vec{k}$$
 * $$g_{i j} = g^{i j} = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}$$
 * So the gradient reduces to the familiar form of
 * $$ \nabla f(x) = \frac{\partial f}{\partial q^i} g^{i j} b_j = \frac{\partial f}{\partial x} \vec{i} + \frac{\partial f}{\partial y} \vec{j} + \frac{\partial f}{\partial z} \vec{k}$$

Spherical:
 * $$ x = r \sin(\theta) \cos(\phi)$$
 * $$ y = r \sin(\theta) \sin(\phi)$$
 * $$ z = r \cos(\theta)$$
 * $$ J = \begin{bmatrix} \sin(\theta)\cos(\phi) & r \cos(\theta)\cos(\phi) & -r \sin(\theta)\sin(\phi) \\ \sin(\theta)\sin(\phi) & r \cos(\theta)\sin(\phi) & r \sin(\theta)\cos(\phi) \\ \cos(\theta) & -r \sin(\theta) & 0 \end{bmatrix} $$
 * $$ g_{i j} = J^T \, J = \begin{bmatrix}1&0&0\\0&r^2&0\\0&0&r^2 \sin^2(\theta)\end{bmatrix}$$
 * $$ g^{i j} = \begin{bmatrix}1&0&0\\0&r^{-2}&0\\0&0&r^{-2} \sin^{-2}(\theta)\end{bmatrix}$$
 * $$ b_1 = [\sin(\theta)\cos(\phi), \sin(\theta)\sin(\phi), \cos(\theta)] = \vec{r} $$


 * $$ b_2 = [r\, \cos\left( \phi\right) \cos\left( \theta\right), r\, \sin\left( \phi\right) \cos\left( \theta\right), -r \, \sin\left( \theta\right) ] = \vec{\theta} \, r $$


 * $$ b_3 = [-r \sin(\theta)\sin(\phi), r \sin(\theta)\cos(\phi), 0] = \vec{\phi} \, r \, \sin(\theta) $$


 * $$ \nabla f(x) = \frac{\partial f}{\partial q^i} g^{i j} b_j = \frac{\partial f}{\partial r} \vec{r} + \frac{\partial f}{\partial \theta} \frac{\vec{\theta}}{r} + \frac{\partial f}{\partial \phi} \frac{\vec{\phi}}{r \sin(\theta)}$$

— Preceding unsigned comment added by Mouse7mouse9 (talk • contribs) 21:53, 21 January 2015 (UTC)
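(The spherical-coordinate computation above can be reproduced symbolically. A minimal sketch, assuming SymPy is available: it builds the Jacobian, the metric $$g_{ij} = J^T J$$, and the gradient components from the same definitions used in the example.)

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
f = sp.Function('f')(r, theta, phi)
q = sp.Matrix([r, theta, phi])

# Cartesian coordinates expressed in spherical coordinates, as above
X = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
               r * sp.sin(theta) * sp.sin(phi),
               r * sp.cos(theta)])

J = X.jacobian(q)           # columns of J are the covariant basis vectors b_i
g = sp.simplify(J.T * J)    # metric g_{ij}; should be diag(1, r^2, r^2 sin^2(theta))
g_inv = g.inv()             # inverse metric g^{ij}

# Contravariant components of the gradient: (grad f)^j = g^{ji} * (df/dq^i)
grad = g_inv * sp.Matrix([sp.diff(f, v) for v in q])

# grad should come out as [df/dr, (1/r^2) df/dtheta, (1/(r^2 sin^2 theta)) df/dphi],
# i.e. the components with respect to the unnormalized basis b_j; multiplying each
# by |b_j| recovers the familiar physical factors 1/r and 1/(r sin(theta)).
print(g)
print(grad)
```

The design choice here is to work with the unnormalized covariant basis (the Jacobian columns) throughout; the unit-vector form quoted in the thread is obtained at the end by rescaling.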

Product Rule
The description of the product rule was a bit misleading.


 * If f and g are real-valued functions differentiable at a point a ∈ Rn, then the product rule asserts that the product (fg)(x) = f(x)g(x) of the functions f and g is differentiable at a, and
 * $$\nabla (fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).$$

Initially, I assumed it had something to do with the derivative of $$\nabla (fg)$$. The little segment "(fg)(x) = f(x)g(x)" confuses matters further, and I think it should be removed or simplified because it is a bit obvious. My suggestion:


 * If f and g are real-valued functions differentiable at a point a ∈ Rn, then the product rule asserts that the product fg is differentiable at a, and
 * $$\nabla (fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).$$

Muntoo (talk) 01:03, 29 July 2015 (UTC)


 * The new revision is an improvement because it is more concise. I'm not sure I understand the source of confusion in the original wording, but I would support a revision like this if it clarifies things for some readers. Sławomir Biały 11:40, 29 July 2015 (UTC)
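(For what it's worth, the identity in question is easy to verify symbolically. A minimal sketch assuming SymPy, with f and g as generic differentiable functions of two variables:)

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)   # generic differentiable functions
g = sp.Function('g')(x, y)

def grad(h):
    """Gradient of a scalar expression in (x, y), as a column vector."""
    return sp.Matrix([sp.diff(h, x), sp.diff(h, y)])

lhs = grad(f * g)                    # nabla(fg)
rhs = f * grad(g) + g * grad(f)      # f nabla g + g nabla f
print(sp.simplify(lhs - rhs))        # zero vector: the identity holds
```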

Confusion with differential
The current lead describes the gradient as a "generalization of the derivative". It then proceeds to describe what appears to be the differential. This rewrite occurred. This description appears to be false (except under highly coordinate-dependent assumptions), since it apparently describes the differential, which is related to the gradient via a musical isomorphism. This discrepancy appears to be confirmed by the content of the article. However, this change has remained unchallenged for nearly four years, so I would not like to summarily revert it to the prior (to my eye, more correct) description. Can others whose subject area this is comment on this? —Quondum 17:18, 17 September 2017 (UTC)
 * For a univariate function $$f(x),$$ the derivative is $$f'(x),$$ and the differential is $$f'(x)dx$$ (this is what I have learnt in my undergraduate courses). Thus, it seems that it is the gradient, not the differential, which is a generalization of the derivative. This distinction between the gradient and the differential is clearly stated, in a similar way, in section "Differential or (exterior) derivative". The change that you have quoted was intended to give a less specialized lead: the previous lead used exclusively the terminology of vector fields, while many people use the gradient without using (and often knowing) this terminology (for example, in algebraic geometry, a singular point of an hypersurface is defined as a point where the gradient of the defining equation is zero, and nobody would use the terminology of vector fields here). D.Lazard (talk) 18:30, 17 September 2017 (UTC)


 * Okay, I should not be bringing in other terms, as I am clearly not being rigorous. I should try to isolate what is bothering me about the first paragraph. If the gradient $$\nabla f$$ of a function $$f$$ is a generalization of the derivative $$f'$$, then in one dimension, they should be the same thing. The gradient is undefined without the presence of a bilinear form (e.g. the Euclidean metric). It is therefore defined only with reference to a specific bilinear form. However, the derivative is defined without any bilinear form. That they cannot be equivalent in one dimension is made a bit more obvious if the bilinear form is defined and is changed: the derivative remains unchanged, but the gradient clearly changes. I expect that this problem also manifests under a change of variables instead of a change of bilinear form. One can also see the problem if one restricts the space in cylindrical coordinates to a circular arc (e.g. set $$r=5, \varphi = 0$$): in this one-dimensional space the gradient and the derivative differ by a non-unity factor. —Quondum 19:51, 17 September 2017 (UTC)
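(A small symbolic experiment illustrates the observation that the vector gradient depends on the chosen bilinear form while the derivative does not. This is only an illustrative sketch assuming SymPy; the sample function and the metric entries are arbitrary choices.)

```python
import sympy as sp

x = sp.symbols('x')
f = x**2                     # sample function on a 1-dimensional space
df = sp.diff(f, x)           # the derivative: defined without any metric

def gradient(fn, metric):
    """Gradient with respect to a metric: (grad fn)^i = g^{ij} d_j fn."""
    return metric.inv() * sp.Matrix([sp.diff(fn, x)])

g_euclid = sp.Matrix([[1]])  # the standard (Euclidean) bilinear form
g_scaled = sp.Matrix([[4]])  # an arbitrary different bilinear form on the same space

print(df)                          # 2*x in both cases: unchanged
print(gradient(f, g_euclid)[0])    # 2*x
print(gradient(f, g_scaled)[0])    # x/2 -- the gradient changes with the metric
```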


 * I disagree with the assertion that the gradient is defined with respect to a bilinear form. If this were true, one could not use this term in affine geometry and in algebraic geometry. Here is what I have understood from what I have read on the subject: If $$f$$ is a differentiable function defined on a vector space $$V,$$ then its gradient at a point belongs to the dual of $$V.$$ As an element of a dual, the gradient is a linear form, and it is thus not really different from the differential; this is just a difference of point of view: when considering it per se, or through its coordinates, one talks of "gradient", while, when applying it, as a linear form, to a vector, one talks of "differential". If, as is the case in Euclidean geometry, $$V$$ is equipped with a bilinear form, this defines an isomorphism between $$V$$ and its dual, and associates to the gradient an element of $$V,$$ which is also called "gradient". In other words, the gradient is a covector, which is often considered as a vector through this isomorphism. This is what is being done when saying that the gradient "points in the direction of the greatest rate of increase of the function" (note that "greatest rate" implies the existence of a metric).


 * I agree that this interpretation may be WP:OR, but I do not see any other way to make the different views on the subject compatible. As these considerations are rather technical, and most readers would be confused by them, I find it better to keep this ambiguity between the covector and the associated vector. Only very experienced readers would be confused by this. One could add, near the end of the article, a section to clarify this. D.Lazard (talk) 09:23, 18 September 2017 (UTC)


 * Okay, so we are dealing with two definitions, one as a vector and one as a covector. Regardless of exactly what level of reader we are aiming the article at, I totally disagree that unclarified (technical) ambiguity should be allowed in the article. If there are two definitions in use, this should be clearly stated. For the lay reader, it is easy enough to gloss over the distinction without confusing the reader who wants to use the article as a reference for the precise definition. So it seems a bit of research is needed. —Quondum 16:27, 18 September 2017 (UTC)


 * I've taken a shot at rewriting the lede to clarify the difference between the derivative/differential and the gradient. The most elementary way of keeping these straight is that the derivative is a row vector (covector), while the gradient is a column vector; the main substantive point is that the derivative is how much the output changes, while the gradient is a change in input. There are more technicalities in advanced differential geometry, and explaining why these are conflated in vector calculus is likewise lengthy, but for the lede this should be enough: simple, brief, and precise. —Nils von Barth (nbarth) (talk) 05:20, 31 October 2019 (UTC)


 * We should not refer to "row" or "column" vectors since conventions differ on which is a vector and which is a covector. The derivative in one dimension is still a covector, as a linear mapping taking on values in the base field. Maciej Zworski once told me that "gradient" is not meaningful in a general coordinates setting; in retrospect, I believe he means that while the derivative is always meaningful on any differentiable manifold, the "gradient" isn't.--Jasper Deng (talk) 06:23, 31 October 2019 (UTC)


 * I think the way the lead is now (that is, the D.Lazard version, just reinstated) is much more readable to a layman. If college freshmen can understand gradient by way of a simple example of the gradient of a scalar-valued function of two independent variables, then I don't see why we need a definition in complete generality in the lead, if it is going to confuse readers. Such generality (assuming it is accurate) is more appropriate later in the article.—Anita5192 (talk) 06:40, 31 October 2019 (UTC)
 * It confused the heck out of me. I think the term scalar-valued needs to be changed. 184.148.140.210 (talk) 16:30, 19 April 2024 (UTC)


 * Thanks for comments! I've tried again in . Specifically:
 * Use "vector vs. covector" for gradient vs. derivative, which is the key point, but don't say row/column in lede (do mention it when using notation); as Jasper notes, conventions differ.
 * Don't mention generality in lede (distracting, as Anita notes).
 * ... and more generally, incorporate changes into the article better, rather than tacking them on: add details only in the detailed sections.
 * The key point we're trying to make (and the confusion we're trying to prevent) is that the gradient is related to the derivative, but it is not a derivative: the value of the gradient is a vector, while the value of the (total) derivative is a covector. You can use the dot product to convert between them, but it's not the same thing.
 * Thus the previous "the gradient is a multi-variable generalization of the derivative" is at best confusing, at worst wrong; hopefully this is a bit better?
 * —Nils von Barth (nbarth) (talk) 19:43, 31 October 2019 (UTC)
 * "the value of the gradient is a vector, while the value of the (total) derivative is a covector": This opposition is nonsensical, as, by definition, a covector is a vector (the difference is that a vector belongs to some vector space, and a covector belongs to the dual vector space). When a vector is given by its coordinates (coordinate vector), one has to go further in the theory to know whether it is a vector or a covector. So the distinction is confusing in the lead. But it could be explained in a section that could be called "Mathematical nature" (of the gradient). D.Lazard (talk) 21:13, 31 October 2019 (UTC)
 * Thanks D., good point. It's more precise (and clearer) to specify tangent vector and cotangent vector. I've done so briefly in the lede in, with details in the section for the relationship with the derivative (Gradient), which seemed an existing home.
 * There I specify "vectors in $$\mathbf R^n$$" vs. covectors, to clarify the notation:
 * Using the convention that vectors in $$\mathbf R^n$$ are represented by column vectors, and that covectors (linear maps $$\mathbf R^n \to \mathbf R$$) are represented by row vectors,
 * Is this clearer?
 * —Nils von Barth (nbarth) (talk) 01:33, 1 November 2019 (UTC)
 * This is clearer, but I worry that the lede is now impenetrable to those who know nothing about tangent or cotangent bundles. I would suggest moving this to a footnote. After all, in Euclidean space, the distinction is unimportant until coordinate transformations become involved.--Jasper Deng (talk) 08:55, 1 November 2019 (UTC)
 * Yeah, it's hard to keep this elementary but correct. I've avoided saying tangent/cotangent bundle (as you note, that's too technical), but it's correct and a bit clearer to at least say vector field (that clarifies the diagrams!). I've revised in, with most details in footnotes (or detailed sections, or other pages), but still using the words tangent vector and cotangent vector to distinguish the gradient from the derivative (shortest way to distinguish), with brief explanation of what this means (vector vs. function of vector). Any ideas for a simpler way of distinguishing these?
 * —Nils von Barth (nbarth) (talk) 08:39, 3 November 2019 (UTC)

Lead clean-up
I rewrote the first two paragraphs of the lead to make them mathematically accurate, and supplied citations. The wording was improper and sloppy. There were no sources cited, which is probably why the description was incorrect. The next paragraph, about the tangent space, is sloppy and contains many intuitive but inaccurate words. For example, what is the graph of a function of several variables? I think the paragraph about the Jacobian should actually refer to the Jacobian matrix. I am not familiar enough with tangent spaces, Jacobian matrices, and Banach spaces to rewrite the rest of the lead competently, but I hope someone knowledgeable will rewrite it properly and cite reliable sources for everything.—Anita5192 (talk) 16:20, 7 April 2019 (UTC)

Typical of wikipedia math/science articles
The gradient of a line I believe is difference in y divided by difference in x.

This surely is the single most common query people are asking when they want to find out what a gradient is.

Why on Earth am I reading about three dimension gradients already when the two dimensional case isn't yet discussed?

I do believe, in fact, that the most common definition is not present in this article. I couldn't find it, at any rate.

This I have found is normal for Wikipedia math or physics articles. They can only be understood by people who don't need to read them.

The search results from Bing are no better. As far as I can tell, the entire first page of results has no page which says, simply, difference in y over difference in x. — Preceding unsigned comment added by 176.58.238.150 (talk) 15:53, 9 April 2019 (UTC)
 * The answer to your complaint is in the first line of the article (hatnote), where it is explicitly said that, for the gradient of a line, you have to go to the article Slope. Don't blame Wikipedia if you are not able to read the beginning of an article. D.Lazard (talk) 16:09, 9 April 2019 (UTC)

Recent Changes
I have added to the definition part of the article the meaning of the word gradient outside the context of math. — Preceding unsigned comment added by DJ Jerree Erickson (talk • contribs) 21:00, 4 December 2020 (UTC)

Why are the arrows moving from blue to red if concentration and pressure gradients move from high to low?
— Preceding unsigned comment added by 98.199.140.156 (talk) 21:00, 31 July 2021 (UTC)



As you can read in the lead, "The gradient vector can be interpreted as the 'direction and rate of fastest increase.'" That is, the vectors point from low to high value. They are not flow vectors.—Anita5192 (talk) 21:46, 31 July 2021 (UTC)
 * I don't understand the problem; the arrows point in the direction of the gradient. Pressure gradients point from low to high as well. It's the opposite of the direction that a particle would be pushed if you placed it there, if that's what you mean; i.e., the force density would be given by $$\mathbf{f} = -\nabla p$$. --Volteer1 (talk) 17:22, 26 August 2021 (UTC)


 * I didn't ask the question. An IP created this section with the question and no content other than the figure.—Anita5192 (talk) 17:46, 26 August 2021 (UTC)
 * Oh, my bad, thanks for clarifying. I thought it was a bit strangely worded. --Volteer1 (talk) 17:52, 26 August 2021 (UTC)

History section
I'm pretty sure that history section is wrong, or at least doesn't belong to this article. It says "The gradient was invented by Dan "the man" Lasker around 2016 where it quickly became one of the most popular trends in web development." Gifah.28 (talk) 03:25, 8 December 2022 (UTC)