Talk:Dot product

Regarding bases
In 2D and 3D Euclidean space, the scalar product a.b between vectors a and b was defined for me as |a||b|cos θ. If you use an orthonormal basis then it can be shown that a.b = (a1, a2, a3).(b1, b2, b3) = a1b1 + a2b2 + a3b3, where (x1, x2, x3) means x1e1 + x2e2 + x3e3, where {ei} is an orthonormal basis (and the xi are scalars).

However, for any other basis, the scalar product will NOT turn out to be the sum of the products of the components with respect to that basis, i.e. it will not equal a1b1 + a2b2 + a3b3. Yet this is how the scalar product is defined in the article.

What is the resolution of this apparent contradiction? --81.152.176.225 (talk) 17:46, 30 March 2011 (UTC)
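The apparent contradiction can be checked numerically. A minimal sketch (the basis and vectors below are hypothetical examples, not taken from the article):

```python
def euclid_dot(u, v):
    # True Euclidean inner product in standard coordinates, i.e. |u||v|cos(theta).
    return sum(x * y for x, y in zip(u, v))

# A non-orthonormal basis of the plane (hypothetical example).
e1 = (1.0, 0.0)
e2 = (1.0, 1.0)

# Components of two vectors relative to that basis.
a1, a2 = 1.0, 1.0   # a = a1*e1 + a2*e2 = (2, 1) in standard coordinates
b1, b2 = 0.0, 1.0   # b = b1*e1 + b2*e2 = (1, 1) in standard coordinates

a = (a1 * e1[0] + a2 * e2[0], a1 * e1[1] + a2 * e2[1])
b = (b1 * e1[0] + b2 * e2[0], b1 * e1[1] + b2 * e2[1])

true_value = euclid_dot(a, b)    # 2*1 + 1*1 = 3
naive_value = a1 * b1 + a2 * b2  # 1*0 + 1*1 = 1: the sum of products of
                                 # components disagrees in this basis
```

So the componentwise sum gives 1 while the geometric scalar product is 3, illustrating why the two formulas agree only in an orthonormal basis.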


 * Your confusion is between the notions of inner product and dot product (at least as they are used here; their meanings are not always well distinguished). A Euclidean space comes equipped with an inner product. It is related to the norms of vectors and the cosine of the angle as you indicated, although I would say this defines the angle rather than the inner product. As defined here, a dot product is just an algebraic expression in terms of entries, like the determinant of a matrix (but a different expression of course). It happens that Euclidean spaces always have (many) orthonormal bases, and that in terms of the coordinates on a given orthonormal basis the inner product is given by the dot product of the respective coordinates. In terms of coordinates on other bases this is not the case, but that is no contradiction; it is actually more of a surprise that it is given by the very same expression in terms of coordinates of any orthonormal basis (in general, changing coordinates also changes the expressions for functions of the coordinates). As Euclidean spaces are often identified with Rn (in which case the dot product of the vector entries is an inner product), the notions are sometimes confounded, but this should not really be done (and I'm not sure to which one of the two "scalar product" should refer; I think inner product would be the most appropriate). The dot product is defined here in such a manner that the phrase "a coefficient of a matrix product is given by the dot product of a row of the left matrix and a column of the right matrix" makes sense; in this case there is no Euclidean space or inner product involved at all. Hope this helps. Marc van Leeuwen (talk) 17:29, 31 March 2011 (UTC)


 * The deeper issue here is that a vector space alone does not have any concept of orthogonality or length. For a finite dimensional vector space over the reals, if we take any basis, then we can define the scalar product relative to that basis. The basis will be orthonormal relative to the scalar product defined from that basis; the lengths and angles between other vectors are again only relative to the choice of basis, and they can be found using the usual formulas and the scalar product for that basis. &mdash; Carl (CBM · talk) 23:24, 30 June 2013 (UTC)


 * To be as general as possible, maybe it makes sense to define the dot product using the metric tensor. In a general non-orthonormal basis the dot product is given as: $$u \cdot v = \sum_{i,j} u_i v_j (\vec{e}_i \cdot \vec{e}_j) = \sum_{i,j} u_i v_j g_{ij}$$. Then one may argue that for an orthonormal basis the metric tensor is the identity, so the simpler formula is retrieved. Lengths and angles are invariant under a change of basis. As currently written, the formula is correct only for orthonormal bases. Vassillen (talk) 11:18, 22 June 2020 (UTC)
 * You are talking about the inner product, of which the dot product is a specific case. D.Lazard (talk) 11:24, 22 June 2020 (UTC)
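The metric-tensor formula above can be sketched numerically. A minimal example (the non-orthonormal basis is hypothetical):

```python
def euclid_dot(u, v):
    # Euclidean inner product in standard coordinates.
    return sum(x * y for x, y in zip(u, v))

# Hypothetical non-orthonormal basis of the plane.
basis = [(1.0, 0.0), (1.0, 1.0)]

# Metric tensor g_ij = e_i . e_j.
g = [[euclid_dot(ei, ej) for ej in basis] for ei in basis]  # [[1, 1], [1, 2]]

def dot_with_metric(u, v):
    # u, v are component tuples relative to `basis`:
    # u . v = sum over i, j of u_i * v_j * g_ij
    n = len(basis)
    return sum(u[i] * v[j] * g[i][j] for i in range(n) for j in range(n))

u = (1.0, 1.0)  # the vector 1*e1 + 1*e2 = (2, 1) in standard coordinates
v = (0.0, 1.0)  # the vector 0*e1 + 1*e2 = (1, 1) in standard coordinates

# dot_with_metric(u, v) agrees with euclid_dot((2, 1), (1, 1)) == 3,
# even though the basis is not orthonormal.
```

For an orthonormal basis, g is the identity matrix and `dot_with_metric` reduces to the plain sum of products of components.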


 * The dot product is the inner product in Euclidean space, as noted in the article on the inner product. You can pick any basis for the N-dimensional Euclidean space. And in the case of a non-orthonormal basis (a Euclidean space, but not the standard Euclidean space) the current definition does not work. The main issue here is that the way lengths and angles are measured is no longer invariant under a change of basis. I believe that this should be mentioned somewhere, at the very least before one introduces the assumption that the basis is orthonormal. Vassillen (talk) 11:39, 22 June 2020 (UTC)


 * This seems to be mainly a contention about the meaning of the dot/scalar product. A definition can be found in: "Аналитическая геометрия, Том 1" ("Analytic Geometry, Vol. 1"), Б. Н. ДЕЛОНЕ и Д. А. РАЙКОВ, 1948 (Boris Delaunay, see Books). The definition can be found in 1.2.9; specifically, it is introduced through $$\vec{a}\vec{b} = \|a\|\|b\|\cos\theta$$. Later on the algebraic definition is derived from linearity, and the explicit general form using the metric tensor can be found in 1.2.11.2. Vassillen (talk) 13:21, 22 June 2020 (UTC)

Remove section Dot product, Generalization to tensors...
...since this section doesn't say much at all. All the info is in the main articles tensor and tensor contraction. It would be better to briefly include various other generalizations such as weight functions, inner products on functions, dot products of matrices and dyadics... which are not mentioned, in addition to complex vectors and tensors. Maschen (talk) 20:24, 25 August 2012 (UTC)

Recent edits
Following my major revision of the article, I thought I should summarize here the most important points of my edit, and highlight what I think still needs to be done.


 * My chief concern in the sequence of edits was the demands of WP:NPOV, that we should give equal weight to the two different definitions of the dot product: the geometric and algebraic one. The earlier revision of the article took for granted that the dot product should be defined algebraically in terms of the components of a vector.  As a result, the article had to worry about things like coordinate dependence and orthonormal bases.  This is generally a bad way to define the dot product for that reason.  That is why in physics/tensor analysis, the dot product is defined in a coordinate-independent manner at the outset, and its algebraic properties are then deduced.  Of course, in much of applied mathematics, the dot product is just defined componentwise.  That's fine, but it seems like we should present both points of view in parallel.  It simplifies much of the discussion I think.
 * I have removed the section on Rotations. This was so badly written that it completely failed to convey the relevant point: that the dot product is invariant under rotations.  That is something that should be mentioned in the "Properties" section (which also needs to be taken in hand and organized a bit better).  It is a trivial consequence of the geometric definition of the dot product (!) that doesn't require matrices to explain.
 * I have added a more intuitive proof sketch of the equivalence of the algebraic and geometric definitions using the scalar projection. In doing so, I have unfortunately removed the connection with the law of cosines.  This should probably be mentioned somewhere in the article, but it was not particularly enlightening where it was anyway, situated in the middle of a very dull proof that no one was going to read.
 * I removed the paragraph from the lead that stated: "In Euclidean space, the inner product of two vectors (expressed in terms of coordinate vectors on an orthonormal basis) is the same as their dot product. However, in more general contexts (e.g., vector spaces that use complex numbers as scalars) the inner and the dot products are typically defined in ways that do not coincide." This is misleading because many authors define the dot product geometrically in Euclidean space rather than in terms of components in an orthonormal basis.
 * I shortened the discussion of the scalar product and geometric definition. This section managed to drag on without saying anything substantive.

-- Sławomir Biały (talk) 15:52, 25 November 2012 (UTC)


 * Thanks, a massive improvement. Maschen (talk) 16:12, 25 November 2012 (UTC)


 * If it's ok I added the cosine law to the end of the properties section. Maschen (talk) 17:02, 25 November 2012 (UTC)
 * Good idea.  Sławomir Biały  (talk) 18:09, 25 November 2012 (UTC)

More recent edits
I edited the properties section without noticing Quondum's earlier edit:


 * "The cross product can also produce a vector (when the inputs are a vector and a pseudovector); is this worth getting into in this article?"

I would say no since this article is about the dot product, not the cross...

Hope my changes to the properties section are ok - is it harder or clearer to read? M&and;Ŝc2ħεИτlk 18:20, 14 December 2012 (UTC)

IMHO, a simple 2D example of the algebraic formula would make this page useful to a wider audience
IMHO, this page assumes a high level of comfort with mathematical terminology.

As a software writer working in computer graphics, I was looking for confirmation of the simple 2D case of the dot product: "(x1,y1) dot (x2,y2) = (x1*x2 + y1*y2)"

I presume the Algebraic section confirms that. But it would take me too long to think through what it is saying. (So I looked elsewhere.)

I suggest this example to anyone who might be thinking "this page could be made more accessible"... ToolmakerSteve (talk) 06:37, 5 February 2013 (UTC)
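The 2D case asked about above can be written out directly. A minimal sketch (the sample numbers are arbitrary):

```python
def dot2(v, w):
    # (x1, y1) . (x2, y2) = x1*x2 + y1*y2
    x1, y1 = v
    x2, y2 = w
    return x1 * x2 + y1 * y2

result = dot2((3, 4), (2, 1))  # 3*2 + 4*1 = 10
```

This is exactly the algebraic definition specialized to n = 2, which is what graphics code usually implements.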


 * +1. With a diagram where the dot product is identified. Jordan Brown (talk) 21:30, 2 September 2017 (UTC)

"Sequences of numbers" revert
Hoo-boy. This [revert] somewhat surprises me. It is understood that the dot product may be defined on the Cartesian product of any finite number of copies of R. Indeed, it seems that this generalizes to any field. However, "sequences of numbers" does not adequately capture the requirement that the sequences must form a vector space before it can be considered for the dot product; also usually that the two vectors must belong to the same vector space. And damningly, this definition leads to explicit basis dependence (and even if one starts with a coordinate space, one can always find a nonstandard basis and thus another sequence of numbers, to which this definition would still apply, but with a basis-dependent result). Thus, however you word it, you need to either imply a standard basis, or restrict the definition to the original Cartesian product/power if you do not want to reference any basis. Perhaps, Sławomir, you are objecting to my reference to bases? These can be removed, but not the reference to a vector space. Every reference I've seen has clearly intended the definition to refer to a vector space, even if said vector space was defined in terms of the Cartesian product of n copies of R. One disclaimer: my access to references is limited, but certainly in Schaum's Outlines the intended vectorial nature of the "sequence" (and it is not called a sequence) is clear. The article also claims equivalence of the geometric and algebraic definitions, which does not hold as defined in the lead. What makes the idea of arbitrary "sequences of numbers" so notable? — Quondum 01:57, 30 June 2013 (UTC)


 * If you're referring to vectors, instead of "sequences of numbers", why not "ordered tuple" (of components)? M&and;Ŝc2ħεИτlk 08:01, 30 June 2013 (UTC)


 * My point is that the properties of a vector space are needed, specifically scalar multiplication and addition. These are not implied in the definition of a tuple, which is the same thing as "a sequence".  Without these, half the listed properties of the dot product do not follow.  Let's get back to basic WP principles: what notable reference defines a dot product on an arbitrary tuple of numbers (even for the case of reals), without the assumption that the tuple belongs to a vector space?  And if you find such a reference, we will have to make it clear that two different dot products exist in the literature, one for which properties 2, 3, 4, 6, 7 and 8 cannot be derived.  — Quondum 12:13, 30 June 2013 (UTC)


 * I'm afraid I don't understand what specifically the issue is. Is it the wording "sequences"?  I'm not married to this wording, and would be happy with n-tuples or other equivalent expression.  My complaint with the original edit was that there is no need to mention bases (or even vector spaces) to define the dot product.  It is specifically defined on $$\mathbf R^n$$.  By definition, $$\mathbf R^n$$ is the space of n-tuples of real numbers, and the dot product of two n-tuples is $$(a_1,\dots,a_n)\cdot (b_1,\dots,b_n) = a_1b_1+\cdots+a_nb_n$$.  That's really all there is to it.  The dot product is not defined on a general vector space.  (That's a related notion, called an inner product.)  In a general vector space V, if you have a basis $$\phi:\mathbf R^n\to V$$, then you can push forward the dot product along &phi; to define an inner product on V, but this is not the same thing as the dot product since V is not in general a space of tuples (or even if it is a space of tuples, then the inner product defined in this manner might be different from the dot product on that space).  Here you can discuss basis-dependence, because the basis is actually relevant, but that's not really a topic for the article.
 * I really also don't see what the issue is with the properties section. It's certainly well-known without bringing in special bases how to add tuples and multiply them by scalars.  Perhaps these notions could be described more clearly than they currently are in the properties section, but I don't see how User:Quondum could possibly be confused by what is intended.
 * I also don't understand the final charge that our article somehow diverges from the account in most published sources. Many thousands of sources exist that define the dot product the way this article does (some of them give both definitions; some deduce one as a consequence of the other).  See, for instance, the Harvard consortium textbook "Calculus" (the "reform calculus" textbook, which includes both definitions side by side).  Volume 1 of Tom Apostol's Calculus, a classic book which is well-regarded by many mathematicians, Peter Lax's "Linear algebra", Paul Halmos's "Finite dimensional vector spaces" all include the algebraic definition and deduce the geometric definition from it.  Josiah Willard Gibbs's classic textbook "Vector analysis" starts from the geometric definition and deduces the algebraic definition from it, as does Richard Courant's "Differential and integral calculus".  There are slight differences in exposition in these sources, but there is certainly no essential logical departure from the account given in this article.   Sławomir Biały  (talk) 23:54, 30 June 2013 (UTC)
 * The article starts, In mathematics, not In linear algebra. As such, the concept of tuple covers the Cartesian product of arbitrary sets, and does not carry with it the structure of addition or scalar multiplication of tuples.  The texts that you mention, from their very titles are in the context of linear algebra, where tuples (sequences) could well be taken to refer specifically to vectors, and to form a vector space with the associated properties.  — Quondum 00:09, 1 July 2013 (UTC)
 * I'm sorry, but I am still grasping to see what the point of this discussion actually is. The structure of addition and scalar multiplication is not actually needed to define the dot product.  You just need a tuple of numbers (let's say real numbers, for the sake of conversation).  Many of the aforementioned texts do not even mention vector spaces, which would seem to obviate your point that they all implicitly are relying on this more general perspective.  But, in any case, what additional sources would you have us consider that might elucidate your point of view on the subject?  Is there some particular text in the article that you would like to change and, if so, what would you change it to?   Sławomir Biały  (talk) 00:16, 1 July 2013 (UTC)
 * I am happy to have the dot product defined on tuples of pair-wise multiplication-compatible components. However, said dot product then does not in general have all the properties attributed to it in the article; it can only acquire these if the tuples are then additionally constrained to being in a vector space.  What I am really driving at is trying to distinguish whether the dot product for the purposes of this article should be this sum-of-products-over-tuples that you evidently prefer, or else it is defined specifically on a vector space.  If we assume the former, then half the guts of the article does not belong, and the claimed equivalence with the more abstract geometric definition on Euclidean spaces does not hold (because the latter is at best a subclass).  — Quondum 00:34, 1 July 2013 (UTC)
 * Well, I wouldn't say that it's necessarily my preference to define the dot product as a sum of products over tuples. But that's certainly the prevailing definition in most sources I have seen.  I've already given you a sample.  To say that the dot product is "defined specifically on a vector space" is as immaterial as saying that it is "defined specifically on a topological space" or "defined specifically on a measure space".  It's defined on $$\mathbf R^n$$.  This space of tuples happens to support operations of addition and scalar multiplication (among other things: it's also a measure space, a topological space, a topological group, and so on).  It also happens to be the Cartesian model for Euclidean space, and that's how the two definitions are related.  (Incidentally, even the geometric definition does not rely specifically on the vector space structure of Euclidean space, even though Euclidean space does support such a structure in addition to being a topological space, a measure space, and so on.)  Saying that "half the guts of the article does not belong, and the claimed equivalence with the more abstract geometric definition on Euclidean spaces does not hold" seems either to be WP:POINTy, or at least to show a disregard for what many high quality sources have to say on the matter.   Sławomir Biały  (talk) 01:32, 1 July 2013 (UTC)
 * I can imagine addition on Rn that does not correspond to component-wise addition, so unless we specifically define a tuple addition, to me it is not defined. I can imagine it being used to parameterize a map on a manifold, which would make predefined addition and scalar multiplication undesirable. As you suggest, this process is absorbing your and my energy – too much now relative to the potential benefit to WP. I'm not happy that what I consider to be at best non-obvious properties of Rn are not stated in the article (or anywhere else?). You obviously have the superior mathematical background and experience, but this leaves me concerned by what you feel the reader can be expected to assume. The dot product should be a simple topic, but as you can see, I am having serious difficulty pinning down a rigorous understanding of the starting point.  I suggest that I should take a (long) rest from it.  — Quondum 02:22, 1 July 2013 (UTC)

Constraint on basis transformation for invariance of the dot product
§Properties currently reads:

 * Invariance under isometric changes of basis

Ignoring the dispute about the algebraic definition, the geometric definition is invariant under any change of basis, exactly as it should be. Any definition of the dot product that produces a value that could vary with a change in basis is simply insane. Why is there any restriction placed on the change of basis? — Quondum 22:32, 30 June 2013 (UTC)


 * I am new to this particular article, but it seems to me that if we divide all the components of a vector by 2 (which corresponds to a certain change of basis) then the dot product of that vector with itself will not usually remain the same, if the dot product is defined (as I assume it is) in terms of the components. Really the issue is that the dot product is only defined in that sense relative to a basis, and then "invariance" is just jargon for a theorem that two bases which are isometric with respect to each other (whatever that means...) will give the same values for all dot products. Actually, I am not certain exactly what an "isometric change of basis" is supposed to be. One possibility: if the new basis gives every vector the same length as the old basis, then by the Parallelogram law the two dot products have to be the same. &mdash; Carl (CBM · talk) 23:19, 30 June 2013 (UTC)
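The halving example above can be checked concretely. A minimal sketch (the vector is a hypothetical example):

```python
def component_dot(u, v):
    # Dot product defined directly on component tuples.
    return sum(x * y for x, y in zip(u, v))

v = (2.0, 4.0)
original = component_dot(v, v)  # 4 + 16 = 20

# A change of basis that doubles every basis vector halves every component...
halved = tuple(x / 2 for x in v)
rescaled = component_dot(halved, halved)  # 1 + 4 = 5

# ...and the component-wise dot product of the vector with itself changes
# from 20 to 5, so it is not invariant under this (non-isometric) change.
```

So the component formula is only basis independent across bases related by isometries, which is the restriction being discussed.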


 * There used to be a paragraph in the lede that clarified what was going on, but it was removed in this edit. As that paragraph used to explain, the dot product defined in terms of components in an arbitrary basis is not, in general, the same as the inner product of two vectors in $$\mathbb{R}^n$$, which is the special case of the dot product in the standard basis. &mdash; Carl (CBM · talk) 23:53, 30 June 2013 (UTC)
 * That removed paragraph maybe does not clarify much. However what you might be reading from it is that the dot product is defined on the coordinate vector representing an abstract vector in terms of a basis. In this picture, the original vector and the basis are not relevant, and we are dealing with a linear transformation of coordinate vectors.  The section would have to be updated to Invariance under isometric linear transformations, though this is a tautology: an isometry is defined as a transformation that preserves the dot product. — Quondum 00:24, 1 July 2013 (UTC)
 * Because any coordinate-free definition of the dot product is (as you said originally) preserved under every isomorphism, for that definition of an isometry to make sense, the dot product has to be defined relative to a basis in a way that makes essential use of the specific coordinates. The geometric definition is such a definition: it is just a special case of the coordinate-based dot product relative to a particular basis (the standard basis). &mdash; Carl (CBM · talk) 00:30, 1 July 2013 (UTC)
 * Okay, so we define the dot product as a sum of products of corresponding (real) coordinates of two tuples. Then you identify a map onto Euclidean space so that the original tuple happens to be equal to the coordinate vector of each Euclidean vector relative to some basis, as well as this basis being orthonormal.  Fine, you can do this.  The dot product then matches the Euclidean length (we'll ignore such inconvenient things such as units for now).  The problem comes in with the implicit pull-back (well, I think you call it that) of properties such as scalar multiplication and vector addition: you added these properties when you did the mapping.  You do not have these properties without the mapping, or defining them on the original space of tuples.  You also do not have the concept of a basis without these properties.  — Quondum 01:02, 1 July 2013 (UTC)
 * If I have $$\mathbb{R}^n$$ as an abstract vector space (i.e. I have a vector space over $$\mathbb{R}$$ of dimension n but I know nothing else about the space) then I already have scalar multiplication and vector addition, because I have a vector space. But I have no dot product or inner product, because I just have an abstract vector space. One way to define an inner product on my space is to choose any basis for the space (the definition of a basis is coordinate-free, and a basis exists for each vector space), so that I can write a coordinate tuple for each vector relative to the basis. Then I can define an inner product on the original space to be the dot product on the coordinate tuples of the two vectors. Now I can ask: for which pairs of bases B, B&prime; do I obtain the same values for all these dot products from B and from B&prime;? One answer is that if B and B&prime; give every vector the same length (i.e the dot product of every vector with itself relative to B is the same as the dot product with itself relative to B&prime;) then all the dot products will be equal under B and B&prime;. That is what the original sentence says to me. The key point is that the dot product of two vectors is only defined relative to a choice of basis, when all I know about the vectors is that they belong to an abstract copy of $$\mathbb{R}^n$$. &mdash; Carl (CBM · talk) 01:31, 1 July 2013 (UTC)

The main issue here is that what I call elementary books (e.g. for Calculus 3, or elementary linear algebra) define $$\mathbb{R}^n$$ to be a particular space, the space of n-tuples of real numbers with pointwise operations. But from a categorical viewpoint $$\mathbb{R}^n$$ is any space that is a product $$\mathbb{R}\times\cdots\times\mathbb{R}$$ where there are n factors, and this product is not unique - like any categorical product it is only unique up to isomorphism. From the viewpoint of the elementary texts $$\mathbb{R}^n$$ already comes with a standard basis using the components of the tuples; from the categorical viewpoint the choice of basis is arbitrary, because $$\mathbb{R}^n$$ is only defined up to isomorphism. The original versions of this article were written from the categorical perspective, i.e., where the dot product can be interpreted geometrically when an appropriate basis is used. At some point someone edited the article to use more of the elementary viewpoint, where the dot product is the geometrical one - but then parts of the previous article that talk about the dot product in other bases no longer fit. &mdash; Carl (CBM · talk) 01:43, 1 July 2013 (UTC)


 * I don't think this distinction between "elementary" and "categorical" is right. As a categorical product $$\mathbf R^n$$ is equipped with n projection maps into $$\mathbf R$$.   This is all we need to define the dot product.  Sławomir Biały  (talk) 02:00, 1 July 2013 (UTC)


 * Each individual copy of $$\mathbb{R}^n$$ has projections, but there is no unique space that "is" $$\mathbb{R}^n$$ in the category of vector spaces over $$\mathbb{R}$$. So there is an element of arbitrariness in choosing one particular space to call "$$\mathbb{R}^n$$", which is the same as the arbitrariness of choosing a basis. But in any case my point is that the original article was written from a viewpoint where a vector space does not come equipped with a basis - including $$\mathbb{R}^n$$ - while the present version assumes that $$\mathbb{R}^n$$ is a specific space of tuples. Either approach is fine with me, but it helps if everyone realizes which approach is being taken, because it will affect the way that the material is presented. &mdash; Carl (CBM · talk) 02:15, 1 July 2013 (UTC)


 * I still think this is a false dichotomy. Surely defining a dot product requires that we work in $$\mathbf R^n$$, not in a vector space that is abstractly isomorphic to $$\mathbf R^n$$.  The latter would more properly be called an inner product (perhaps an inner product induced by the isomorphism), but not the dot product.  See, for instance, the text "Linear algebra" by Greub.  He uses the term "standard inner product" to refer to the dot product on $$\mathbf R^n$$ and then discusses the induced inner product in a basis as a completely different case referring to it as an "inner product".   Sławomir Biały  (talk) 12:04, 1 July 2013 (UTC)
 * We both agree that the "dot" product requires working with tuples of real numbers. The distinction is whether $$\mathbb{R}^n$$ is defined to be the set of such tuples, as is common in elementary settings, or is instead a more abstract structure. In the former case, because vectors are defined as tuples, we can define the "dot product of two vectors" by referring directly to the tuples, or by identifying the vectors with geometric objects and referring to their length. In the latter case, when a "vector" does not come pre-equipped with components, it is impossible to define the "dot product of two vectors", only "the dot product of two vectors relative to a basis". The latter is exactly the induced inner product on a basis. In this more abstract setting, for example, the "angle" between two vectors is only defined after we pick (arbitrarily) an inner product, the angle is not an inherent property of the vectors themselves. (The same distinction would come if we treated $$\mathbb{R}^n$$ as an affine space: if $$\mathbb{R}^n$$ is the set of tuples then we know which vector is "really" zero, but if $$\mathbb{R}^n$$ is presented as an abstract affine space we have no way to tell.) &mdash; Carl (CBM · talk) 13:02, 1 July 2013 (UTC)
 * Then I think the point is not that there is a problem with the meaning of $$\mathbf R^n$$, but rather what "Euclidean space" signifies. Then I would agree that there is an elementary notion of Euclidean space (one that satisfies the axioms); an elementary model of Euclidean space: the "Cartesian model" $$\mathbf R^n$$ equipped with the usual metric; and a less elementary model of Euclidean space: an inner product space.   Sławomir Biały  (talk) 19:44, 1 July 2013 (UTC)
 * Back on topic: I have removed the property under discussion. It seems to me that this is already mentioned in a more appropriate context at the end of the section on the equivalence of the two definitions.   Sławomir Biały  (talk) 12:25, 1 July 2013 (UTC)

Section on "Scalar projection and the equivalence of the definitions" is jumbled and missing steps
1. It is not clear how "scalar projection" and "equivalence of the definitions" are related.
2. It is not clear where the discussion of scalar projection ends and the proof of equivalence begins.
3. It is not clear how $$\mathbf A\cdot\mathbf B = \sum_i B_i(\mathbf A\cdot\mathbf e_i)$$ follows.
Furthermore, the section does not start with the trig definition and end with the algebraic definition, or vice versa. So where is the equivalence? I'm addressing this to whomever reverted my changes from 2013-07-27, which were an improvement. You know who you are. Terse mathematics is fine for research papers and dissertations where the reader is expected to be an expert and math is a common language. But "Dot product" is not advanced mathematics. This subject can and should be readily accessible to first-year college students who have never had a course in abstract algebra and probably never will. So please step down off your high horse and join us peasants laboring in the fields. You may not be as smart as you think. — Preceding unsigned comment added by ThomasMcLeod (talk • contribs) 02:01, 2 August 2013 (UTC)
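For reference, the step questioned in point 3 can be sketched as follows (assuming the geometric definition, its distributivity, and an orthonormal basis $$\mathbf e_1, \dots, \mathbf e_n$$):

```latex
% Scalar projection: the component of B along the unit vector e_i is
%   B_i = \|\mathbf B\| \cos\theta_i = \mathbf B \cdot \mathbf e_i,
% so B = \sum_i B_i e_i. Distributivity of the geometric dot product then gives
\mathbf A \cdot \mathbf B
  = \mathbf A \cdot \Bigl( \sum_i B_i \mathbf e_i \Bigr)
  = \sum_i B_i \, (\mathbf A \cdot \mathbf e_i)
  = \sum_i B_i A_i ,
% which is the algebraic definition.
```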

Operation, or result of the operation, or both?
The opening of the lead says the dot product is an operation that returns a number, and the lead sticks with this definition throughout. Then the lengthy section "Definition" consistently refers to the dot product as the scalar result of the operation. Then the untitled beginning of the section "Properties" treats the term as referring to an operation. Then the next five sections or subsections treat it as a result of the operation. Then two sections treat as an operation, then two sections as a result.

If it is standard to call the operation as well as its result the dot product, then this should be made clear in the lead. If there is a different term that is conventionally used for the operation, then that should be substituted where appropriate. Loraof (talk) 21:29, 7 October 2014 (UTC)


 * Can you point out where exactly you are seeing this, i.e. quote an offending sentence. I've just read the definition section and can see no problems as you describe.-- JohnBlackburne wordsdeeds 23:29, 7 October 2014 (UTC)


 * You ask for an offending sentence. The offense is the absence of a mention that the term refers to two related but different things: an operation, and a scalar that results from the operation.


 * sent. 1: the dot product ... is an algebraic operation


 * section "Definition": the length of a vector is defined as the square root of the dot product of the vector by itself -- the square root of the dot product is the square root of a number, not of an operation.


 * next subsection: the dot product of vectors [1, 3, −5] and [4, −2, −1] is ... 3. -- 3 is not an operation, even though sentence 1 of the article says that the dot product is an operation.


 * For further examples, see the exact statement of their locations throughout the article in my original post above. Loraof (talk) 01:50, 8 October 2014 (UTC)


 * The dot product is an operation; the dot product of two vectors is a number, as being the operation applied to two vectors. What is wrong with this formulation? D.Lazard (talk) 08:41, 8 October 2014 (UTC)


 * As at Cross product, this presumably relates to language, where we use a form of abbreviation, e.g. "the dot product of two vectors a and b" to mean "the result of applying the dot product to two vectors a and b". Thus, the "dot product" does not mean the result, even though the language use would suggest this on first reading. Where it might be confusing, we can be a little more explicit and not use this form of abbreviation/elision. —Quondum 13:55, 8 October 2014 (UTC)
 * I don't see any need to be any more explicit, it just makes it less clear. The language is far more elementary than the maths here: one might say "the sum of three and five is eight" and it's perfectly clear. Changing it to e.g. "the result of applying the sum to two numbers three and five is eight" is more clumsy and so far less clear. And this is the same example but with "sum" for "dot product" and "three and five" for "two vectors a and b".
 * If anything the dot product is clearer and easier to interpret. To take one example from above: "the length of a vector is defined as the square root of the dot product of the vector by itself", this can only mean one thing as the dot product is a binary operation on vectors and so requires two of them. There's only one vector so the dot product is with itself, A ⋅ A as is given in the article (so anyone who finds the language unclear can look at the algebra).-- JohnBlackburne wordsdeeds 14:39, 8 October 2014 (UTC)
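As a concrete illustration of the sentence being discussed ("the length of a vector is defined as the square root of the dot product of the vector by itself"), here is a minimal Python sketch (my own illustration, not code from any cited source):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def length(v):
    # "the square root of the dot product of the vector by itself",
    # i.e. |v| = sqrt(v . v)
    return math.sqrt(dot(v, v))

print(length([3.0, 4.0]))  # 5.0, the classic 3-4-5 right triangle
```

As JohnBlackburne notes, A ⋅ A is the only possible reading: the operation is binary, and only one vector is supplied.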

This article appears to contradict another article
See the discussion here. Jarble (talk) 02:57, 6 January 2015 (UTC)

Geometric definition
As far as I can see, the geometric definition given is circular. It is, at best, a definition of the angle $\theta$ in terms of the dot product, imo. Taking the POV that Euclidean space is endowed with a norm (but not an inner product) in terms of which the dot product (here a full-fledged inner product) is defined seems rather bizarre, especially since what the angle between two vectors in $\mathbb{R}^{77}$ is isn't automatically clear to me (before it is defined in terms of the inner product). YohanN7 (talk) 14:16, 24 November 2015 (UTC)


 * I'm not clear what the objection here is. If a Euclidean space is equipped with a norm, then the dot product is given by the law of cosines identity $$A\cdot B = \frac{1}{2}(\|A+B\|^2-\|A\|^2-\|B\|^2) = \|A\|\, \|B\|\cos\theta$$. There's nothing circular about that. The definition $$A\cdot B = \|A\|\,\|B\|\,\cos\theta$$ goes back at least to Gibbs and Wilson, and remains standard in engineering and applied mathematics (see, for example, Kreyszig, "Advanced engineering mathematics"). Sławomir Biały 15:49, 24 November 2015 (UTC)
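For the record, the equalities in the identity above are easy to check numerically. A small Python sketch (my own illustration; the two vectors are arbitrary choices):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

A = [1.0, 2.0, 2.0]
B = [3.0, 0.0, 4.0]
A_plus_B = [a + b for a, b in zip(A, B)]

direct = dot(A, B)  # algebraic definition
# law-of-cosines (polarization) form: (|A+B|^2 - |A|^2 - |B|^2) / 2
via_norms = 0.5 * (norm(A_plus_B) ** 2 - norm(A) ** 2 - norm(B) ** 2)
# geometric form |A| |B| cos(theta), theta recovered via arccos
theta = math.acos(direct / (norm(A) * norm(B)))
geometric = norm(A) * norm(B) * math.cos(theta)
print(direct, via_norms, geometric)  # all equal 11 (up to rounding)
```

Of course this does not settle the circularity question (the code recovers theta from the dot product itself); it only confirms that the three expressions agree when a norm is given.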


 * I'll try to explain exactly what I mean, therefore the essay below. It is not a lecture, it is a question.


 * Of course, if norm and angle are defined, it is not circular. But this is not always the approach. Euclidean space (linked from the article in connection with the definition) begins with the dot product, defines the norm through it, and finally defines the angle using both the norm and the dot product. Doing it the other way around requires well-motivated definitions of norm and angle.


 * I think I can visualize a way of defining length using only two-dimensional concepts, such as the Pythagorean theorem in the plane and a suitable decomposition of the vector into terms having only one non-zero component, then successively applying the Pythagorean theorem to appropriate two-dimensional subspaces, finally arriving at the standard norm. But this presupposes that the coordinate axes are (pairwise) at right angles, angles as yet undefined. Then for the angles: two linearly independent vectors span a two-dimensional plane. In this plane, the angle is well defined from plane geometry (law of cosines). In order to make all this explicit, I'd appreciate the presence of the orthogonal group, at least $O(2)$. I think I'd need it, or at least its existence, to motivate the choice of appropriate bases.


 * Another approach would be to skip any "derivation" or "motivation" in terms of two-dimensional concepts, and just straight away define the norm by generalizing the formula of the Pythagorean theorem to $n$ dimensions (implicit here is that the coordinate axes are at right angles), and, similarly, use the law of cosines as is to define the angle.


 * Out of curiosity, how is this done in the literature? YohanN7 (talk) 12:31, 25 November 2015 (UTC)


 * Concepts of norm and angle don't require a dot product or special Cartesian coordinate systems to define them. The concept of magnitude (as a length) in geometry predates Descartes by at least two thousand years. Rigorous notions of length of a line segment, and the algebra of lengths, go back at least to Plato. In applications, it's often assumed that one already knows what magnitude means, because we all were supposed to have studied geometry as a child. This is the approach taken by sources like Gibbs and Wilson, Arfken and Weber, Kreyszig, and most "applied mathematics" sources: "A vector has a magnitude and direction." This definition wouldn't be much good if we didn't already know what "magnitude" means. Anyway, there are articles that do discuss inner products, and using those to assign magnitudes to vectors. Let's remember that the target audience of this article is someone who doesn't necessarily know what a dot product is, so the subject should be the dot product, not the foundations of geometry. Sławomir Biały 13:13, 25 November 2015 (UTC)

The algebraic definition seems flawed as well. Why talk about vectors? This brings in exactly the basis-dependence Q is talking about above. Only $n$-tuples should be mentioned here, taking for granted only the structure each factor ($\mathbb{R}$) comes with in the Cartesian product $\mathbb{R}^n$. Later, scalar multiplication and addition of $n$-tuples could be defined in order to make sense of the properties in the properties section for $n$-tuples as well. (With the definitions, this is the prototype of all finite-dimensional vector spaces.) YohanN7 (talk) 15:02, 24 November 2015 (UTC)


 * The word "vector" does not refer to a unique thing. It's clear in the context here that we mean coordinate vectors. It might be less ambiguous to refer to n-tuples, but most sources just say "vectors", and I think here accord with sources is more important than imagined sources of ambiguity. Many readers no doubt will encounter this in a text on basic physics or calculus, where "vector" refers to an element of $$\mathbb R^3$$. More refined concepts of "vector" are definitely possible, but presumably readers familiar with things like covariance and contravariance of vectors and vector spaces are also capable of inferring the intended meaning of the word "vector" in the algebraic definition. In any case, there already is a paragraph introducing the definitions that discusses the use of coordinates to model Euclidean space. It currently lacks a proper discussion of vectors, though. Sławomir Biały 15:51, 24 November 2015 (UTC)


 * The geometric definition makes sense if you have a clear understanding of what an angle is, which you do in elementary geometry, in two or three dimensions. You can, e.g., form a plane triangle with two of the sides being the vectors; then the angle is the well-defined angle between them. I think it is often taught that way, which helps tie it into the cross product, which has a similar definition in 3D space. It can then be turned around to define the angle in other spaces, including higher Euclidean dimensions: you can do so geometrically even in $\mathbb{R}^{77}$, as you can construct a triangle on a plane and then work in two dimensions, and in other ways.


 * As for "vector", it is a product of vectors in that it is independent of the basis chosen. This is not obvious from the algebraic definition but must be true for the geometric definition to hold, and they are shown to be equivalent later. It is also a product on n-tuples, but it is only interesting as a product on vectors and so basis independent.-- JohnBlackburne wordsdeeds 15:53, 24 November 2015 (UTC)


 * The following post was written before reading the two replies (edit conflict). Moreover, the definition of a vector that is given here (and also in Vector (mathematics and physics)) is not really a definition, but only an explanation. The correct definition of a vector appears in Equipollence (geometry). Then, a geometrical definition of the dot product may be deduced from the notion of Euclidean distance as follows: define orthogonality of vectors from the Pythagorean theorem; then either use it for defining Cartesian coordinates and the dot product, or define orthogonal projection; in the latter case the dot product is the product of the length of one vector by the length of the projection of the second vector on the first one. This is rather complicated and probably too technical for a definition, even though this must appear as a property. Thus, I would suggest something like: "In geometry, the dot product of two vectors is the dot product of their coordinate vectors over an orthonormal frame, which is an affine frame in which all basis vectors are pairwise orthogonal and of unit length." Then one could continue as in the considered section, by saying that two vectors are orthogonal if and only if their dot product is zero, and that the dot product of a vector by itself is the square of its length. (Note that orthonormal frame needs to be completely rewritten.) D.Lazard (talk) 16:03, 24 November 2015 (UTC)
 * The proposed text is very non-elementary. I would not add it to the "Definition" section of the article. Anyway, I don't think it's entirely correct either. The (algebraic) dot product is defined for vectors in $$\mathbb R^n$$. There one doesn't have a concept of orthonormal frames, without already knowing what the dot product is (see previous discussions). Certainly, the relationship between the two definitions involves a choice of Cartesian coordinates, which is already mentioned. But we don't need to make such a huge breakfast out of this small point. Sławomir Biały 16:22, 24 November 2015 (UTC)

Thank you JohnBlackburne, D.Lazard and Sławomir Biały. I wrote my first reply (above) to Sławomir before reading the subsequent posts. The subsequent posts clarify things a bit, and are mostly (but not altogether) in accord with what I am thinking. YohanN7 (talk) 12:55, 25 November 2015 (UTC)

Discussion of non-Cartesian Basis
I wrote the new section on non-Cartesian bases, which D.Lazard removed. I've moved the section to be under Dot Product for now. I disagree with the comment saying it should be moved to be under Inner Product - a much broader category. The whole idea was to show that if you want to keep using the geometric definition of the dot product when you change the basis, the algebraic definition needs to be altered. The introduction of the metric tensor and associated notation from differential geometry only make sense in this context (where they wouldn't be relevant in talking about the general inner product). StoicLaughter (talk) 21:57, 14 January 2018 (UTC)


 * But this remains unsourced and smacks of WP:OR, so I have removed it again. If you want to include this material you will need to provide reliable secondary sources that back it up. --Bill Cherowitzo (talk) 00:20, 15 January 2018 (UTC)


 * Which part do you consider to be 'original research'? The definition of a basis means that it's possible to write any vector with respect to it, and the algebraic manipulations follow the exact steps in the main section. The metric tensor is a well-known object. I don't have a textbook handy, but Wolfram Alpha provides an equivalent definition at the bottom of the page, where the lower indices imply the metric tensor implicitly. I can understand putting a flag requesting further citation, but I don't see why you think the section should be completely deleted.-- StoicLaughter (talk) 01:17, 15 January 2018 (UTC)


 * Wikipedia's definition of "original research" is much broader than the term commonly denotes and this can sometimes be confusing. It appears to me that you have taken several ideas from various places and combined them to synthesize a logical consequent. This is something mathematicians do all the time, it is the way we do business. However, Wikipedia views this as WP:SYNTH, a form of original research, unless this has already been done and vetted by the mathematical community. Editors are only allowed to report on what is already available in the existing literature. I do flag passages for further citation if I feel that such citations actually exist, otherwise I revert. Of course, I could make a wrong call and that can be fixed if a reliable secondary source can be cited, but this has only rarely happened.--Bill Cherowitzo (talk) 05:36, 15 January 2018 (UTC)
 * Here's another reference from my reply in the "regarding bases" thread. A definition can be found in "Analytic Geometry, Volume 1" (Аналитическая геометрия, Том 1), B. N. Delone and D. A. Raikov, 1948 (Boris Delaunay, see Books). The definition can be found in 1.2.9; specifically, the definition is introduced through $$\vec{a}\vec{b} = \|a\|\|b\|\cos\theta$$. Later on, the algebraic definition is derived from linearity, and the explicit general form using the metric tensor can be found in 1.2.11.2.


 * The explanation comes from this course by Prof. Pavel Grinfeld at Drexel - the relevant discussion starts here, and continues here with the introduction of the index juggling convention. I changed the notation to be consistent with that presented in the main section, as well as the more standard notation g for the metric tensor. I don't have a copy handy, but Grinfeld's textbook is here with reference .  It seems to me that the discussion connecting the geometric and algebraic dot product is necessarily incomplete without mentioning the metric tensor - though I would not have realized that without Grinfeld! I strongly believe that that idea should at least somewhere be mentioned, whether it's through my submission or another approach. StoicLaughter (talk) 10:20, 15 January 2018 (UTC)


 * YouTube videos are not considered as reliable sources. So, Grinfeld's book is the only reliable source.
 * However, your edit remains original research, as Grinfeld certainly did not change (did not "alter") the definition of the dot product. This does not mean that it would not be useful to have a section about the expression of the dot product in non-orthonormal bases, but your edit is too confusing to be accepted. Moreover, such a section seems too technical for this article, which is very elementary and supposes that a given basis has been chosen once and for all (this is explicit in the algebraic definition, and implicit in the geometric definition). IMO, it would be better placed in Inner product space (the dot product is a particular inner product), but this is a personal opinion. D.Lazard (talk) 17:42, 3 February 2019 (UTC)


 * "Grinfeld certainly did not change (did not "alter") the definition of dot product." - This could just be reworded to "the method used to calculate the dot product must be changed". "Your edit is too confusing for being accepted... such a section seems too technical for this article" - If you look at the other sections under "Generalizations", they all go far beyond an elementary (by which I mean high school) discussion of the topic. The presentation could be improved, but it's tricky to use consistent notation and properties from the above sections, get the point across with an example, and not get too non-elementary (i.e. a deeper discussion of contra- and covariant vectors). I still find it puzzling that the current solution is not to just add tags requesting further citations and/or cleanup, but to delete information that people could find useful. Since this page is supposed to cover the geometric and algebraic dot product on roughly equal footing, and the former is intimately tied up with the metric tensor, I strongly feel there should be more information on this page to guide curious readers. StoicLaughter (talk) 19:17, 4 February 2019 (UTC)
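To make the point under discussion concrete: in a non-orthonormal basis, the naive sum of products of components no longer gives the dot product, but inserting the Gram matrix (metric tensor) $g_{ij} = \mathbf e_i \cdot \mathbf e_j$ recovers it, via $\mathbf a \cdot \mathbf b = \sum_{i,j} g_{ij}\, a^i b^j$. A minimal Python sketch (my own illustration; the basis and vectors are arbitrary choices, not an example from any cited source):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# A non-orthonormal basis of the plane (given in standard coordinates).
e1 = [1.0, 0.0]
e2 = [1.0, 1.0]

# Gram matrix / metric tensor: g[i][j] = e_i . e_j
g = [[dot(e1, e1), dot(e1, e2)],
     [dot(e2, e1), dot(e2, e2)]]

# Components of two vectors WITH RESPECT TO the basis {e1, e2}.
a = [2.0, 1.0]   # i.e. 2*e1 + 1*e2 = (3, 1) in standard coordinates
b = [0.0, 3.0]   # i.e. 0*e1 + 3*e2 = (3, 3) in standard coordinates

naive = dot(a, b)  # naive sum of component products: wrong here (3.0)
with_metric = sum(g[i][j] * a[i] * b[j]
                  for i in range(2) for j in range(2))
standard = dot([3.0, 1.0], [3.0, 3.0])  # true dot product: 12.0
print(naive, with_metric, standard)
```

The metric-tensor formula reproduces the basis-independent value; the naive formula does so only when the Gram matrix is the identity, i.e. when the basis is orthonormal.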

Text in the figure
I had a question about the formula shown with the illustration. It shows theta = arccos(x DOT y / |x| |y|). Should it be theta = arccos(x DOT y / (|x| |y|)) ? — Preceding unsigned comment added by 2601:280:5B7F:F890:2948:8B9C:53B:6F2 (talk) 15:14, 23 September 2019 (UTC)
 * You are right, but it is difficult to edit figures. Moreover, as formatted, I doubt that anybody would interpret the formula wrongly. Also, in case of doubt, it is easy to look at the text of the article. D.Lazard (talk) 16:05, 23 September 2019 (UTC)
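To spell out why the parentheses matter: read strictly left to right, x ⋅ y / |x| |y| means (x ⋅ y / |x|) × |y|, which is generally a different number and can even fall outside the domain of arccos. A small Python sketch (my own illustration with arbitrarily chosen vectors):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

x = [2.0, 0.0]
y = [0.5, 0.5]
nx = math.sqrt(dot(x, x))   # |x| = 2
ny = math.sqrt(dot(y, y))   # |y| = sqrt(0.5)

correct = math.acos(dot(x, y) / (nx * ny))  # intended grouping
misread = math.acos(dot(x, y) / nx * ny)    # left-to-right: (x.y / |x|) * |y|
# correct is 45 degrees; misread is about 69.3 degrees (up to rounding)
print(math.degrees(correct), math.degrees(misread))
```

So the IP editor's point stands for any literal-minded reader, even if, as D.Lazard says, the figure's layout makes the intended grouping clear in practice.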

Curvilinear case
I agree with 's removal of 's edits. No one calls applying the metric tensor to two vectors a "dot product" in all but the Euclidean case. It also must be noted that this does not have the usual properties of a dot product when the manifold isn't Riemannian. Note too that there is not just one choice of metric on a finite-dimensional real vector space, since scaling the Euclidean metric by any positive constant factor yields another positive-definite quadratic form. We need a reliable source that uses the term "dot product" in this manner. It is also confusing to implicitly use raising and lowering indices without explaining what it is, which would leave our readers scratching their heads unless they already know differential geometry. Finally, the text as-written contained numerous grammar errors (often missing definite articles). All of these must be addressed before this can be reinserted.--Jasper Deng (talk) 07:58, 19 February 2020 (UTC)

Equivalence of definitions
I don't see a valid proof. The equivalence depends on homogeneity and distributivity of the geometric definition, which are not proved or justified. Zaslav (talk) 07:41, 13 April 2023 (UTC)
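In two dimensions, distributivity of the geometric definition can at least be checked numerically without circularity, since the angle between vectors can be obtained from atan2 rather than from the dot product. A minimal Python sketch (my own illustration, a numeric check rather than a proof):

```python
import math

def angle(v):
    # direction of a 2-D vector via atan2; no dot product involved
    return math.atan2(v[1], v[0])

def norm(v):
    return math.hypot(v[0], v[1])

def geo_dot(u, v):
    # geometric definition: |u| |v| cos(angle between u and v)
    return norm(u) * norm(v) * math.cos(angle(u) - angle(v))

a = [2.0, 1.0]
b = [1.0, 3.0]
c = [-1.0, 2.0]
b_plus_c = [b[0] + c[0], b[1] + c[1]]

lhs = geo_dot(a, b_plus_c)
rhs = geo_dot(a, b) + geo_dot(a, c)
print(lhs, rhs)  # equal up to rounding: distributivity holds here
```

A check like this is of course no substitute for the justification Zaslav is asking the article to supply.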

color usage
This page is great and informative. Though, I've seldom seen such stark and oversaturated colors in the definition of a mathematical concept. The edit in the history justifies the colors to "improve readability of the algebraic definition", but I don't think that the straightforward coordinate definition of a dot product necessitates this. Moreover, I feel the minor inconsistencies and arbitrary choices of what is colored adds confusion to the definition. I don't mind how it's done in the Law of Cosines section as a corresponding colored diagram is present, but I think I'm going to strip the colors in the coordinate definition unless anyone has strong feelings or can convince me otherwise. ^_^ EuclideanSwag (talk) 23:22, 6 May 2023 (UTC)


 * if anyone cares I did this. There's a few sentences in the article I'd also like to clean up later. :3 EuclideanSwag (talk) 22:40, 29 May 2023 (UTC)

$$\theta = \cos^{-1}\left(\frac{\vec a \cdot \vec b}{|\vec a|\,|\vec b|}\right)$$
It will come in handy if someone is trying to understand the dot product. Yuthfghds (talk) 05:18, 9 September 2023 (UTC)