Dot product

From Wikipedia, the free encyclopedia
{{Short description|Algebraic operation on coordinate vectors}}
{{redirect|Scalar product|the abstract scalar product|Inner product space|the product of a vector and a scalar|Scalar multiplication}}

In [[mathematics]], the '''dot product''' or '''scalar product'''<ref group="note">The term ''scalar product'' means literally "product with a [[Scalar (mathematics)|scalar]] as a result". It is also used for other [[symmetric bilinear form]]s, for example in a [[pseudo-Euclidean space]]. Not to be confused with [[scalar multiplication]].</ref> is an [[algebraic operation]] that takes two equal-length sequences of numbers (usually [[coordinate vector]]s), and returns a single number. In [[Euclidean geometry]], the dot product of the [[Cartesian coordinates]] of two [[Euclidean vector|vector]]s is widely used. It is often called the '''inner product''' (or rarely the '''projection product''') of [[Euclidean space]], even though it is not the only inner product that can be defined on Euclidean space (see ''[[Inner product space]]'' for more).

Algebraically, the dot product is the sum of the [[Product (mathematics)|products]] of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the [[Euclidean vector#Length|Euclidean magnitude]]s of the two vectors and the [[cosine]] of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern [[geometry]], [[Euclidean space]]s are often defined by using [[vector space]]s. In this case, the dot product is used for defining lengths (the length of a vector is the [[square root]] of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the [[quotient]] of their dot product by the product of their lengths).

The name "dot product" is derived from the [[dot operator]] "&nbsp;'''·'''&nbsp;" that is often used to designate this operation;<ref name=":1">{{cite web|title=Dot Product|url=https://www.mathsisfun.com/algebra/vectors-dot-product.html|access-date=2020-09-06|website=www.mathsisfun.com}}</ref> the alternative name "scalar product" emphasizes that the result is a [[scalar (mathematics)|scalar]], rather than a vector (as with the [[vector product]] in three-dimensional space).

== Definition ==
The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a [[Cartesian coordinate system]] for Euclidean space.

In modern presentations of [[Euclidean geometry]], the points of space are defined in terms of their [[Cartesian coordinates]], and [[Euclidean space]] itself is commonly identified with the [[real coordinate space]] <math>\mathbf{R}^n</math>. In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the [[square root]] of the dot product of the vector by itself, and the [[cosine]] of the (non-oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.

=== Coordinate definition ===
The dot product of two vectors <math>\mathbf{a} = [a_1, a_2, \cdots, a_n]</math> and {{nowrap|1=<math>\mathbf{b} = [b_1, b_2, \cdots, b_n]</math>,}} specified with respect to an [[orthonormal basis]], is defined as:<ref name="Lipschutz2009">{{cite book |author1=S. Lipschutz |author2=M. Lipson |title= Linear Algebra (Schaum's Outlines) | edition= 4th | year= 2009|publisher= McGraw Hill|isbn=978-0-07-154352-1}}</ref>

<math display="block">\mathbf a \cdot \mathbf b = \sum_{i=1}^n a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n</math>
where <math>\Sigma</math> denotes [[summation]] and <math>n</math> is the [[dimension]] of the [[vector space]]. For instance, in [[Three-dimensional space (mathematics)|three-dimensional space]], the dot product of vectors {{nowrap|<math> [1,3,-5] </math>}} and {{nowrap|<math> [4,-2,-1] </math>}} is:
<math display="block">
\begin{align}
\ [1, 3, -5] \cdot [4, -2, -1] &= (1 \times 4) + (3\times-2) + (-5\times-1) \\
&= 4 - 6 + 5 \\
&= 3
\end{align}
</math>

Likewise, the dot product of the vector {{nowrap|<math>[1,3,-5]</math>}} with itself is:
<math display="block">
\begin{align}
\ [1, 3, -5] \cdot [1, 3, -5] &= (1 \times 1) + (3\times 3) + (-5\times -5) \\
&= 1 + 9 + 25 \\
&= 35
\end{align}
</math>

If vectors are identified with [[column matrix|column vectors]], the dot product can also be written as a [[matrix multiplication|matrix product]]
<math display="block">\mathbf a \cdot \mathbf b = \mathbf a^{\mathsf T} \mathbf b,</math>
where <math>\mathbf a{^\mathsf T}</math> denotes the [[transpose]] of <math>\mathbf a</math>.

Expressing the above example in this way, a 1 × 3 matrix ([[row vector]]) is multiplied by a 3 × 1 matrix ([[column vector]]) to get a 1 × 1 matrix that is identified with its unique entry:
<math display="block">
\begin{bmatrix}
1 & 3 & -5
\end{bmatrix}
\begin{bmatrix}
4 \\ -2 \\ -1
\end{bmatrix} = 3 \, .
</math>

=== Geometric definition ===
[[File:Inner-product-angle.svg|thumb|Illustration showing how to find the angle between vectors using the dot product]]
[[File:Tetrahedral angle calculation.svg|thumb|216px|<!-- specify width as minus sign vanishes at most sizes --> Calculating bond angles of a symmetrical [[tetrahedral molecular geometry]] using a dot product]]
In [[Euclidean space]], a [[Euclidean vector]] is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The [[Magnitude (mathematics)|magnitude]] of a vector <math>\mathbf{a}</math> is denoted by <math> \left\| \mathbf{a} \right\| </math>. The dot product of two Euclidean vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> is defined by<ref name="Spiegel2009">{{cite book |author1=M.R. Spiegel |author2=S. Lipschutz |author3=D. Spellman |title= Vector Analysis (Schaum's Outlines)|edition= 2nd |year= 2009|publisher= McGraw Hill|isbn=978-0-07-161545-7}}</ref><ref>{{cite book|author1=A I Borisenko|author2=I E Taparov|title=Vector and tensor analysis with applications | publisher=Dover | translator=Richard Silverman | year=1968 | page=14}}</ref><ref name=":1" />
<math display="block">\mathbf{a}\cdot\mathbf{b}= \left\|\mathbf{a}\right\| \left\|\mathbf{b}\right\|\cos\theta ,</math>
where <math>\theta</math> is the [[angle]] between <math>\mathbf{a}</math> and <math>\mathbf{b}</math>.
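For example, the vectors <math>[1, 0]</math> and <math>[1, 1]</math> have dot product <math>1</math> and lengths <math>1</math> and <math>\sqrt{2}</math>, so the angle between them satisfies
<math display="block">\cos \theta = \frac{1}{1 \cdot \sqrt{2}} = \frac{1}{\sqrt{2}} , \qquad \theta = 45^\circ .</math>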

In particular, if the vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> are [[orthogonal]] (i.e., their angle is <math>\frac{\pi}{2}</math> or <math>90^\circ</math>), then <math>\cos \frac \pi 2 = 0</math>, which implies that
<math display="block">\mathbf a \cdot \mathbf b = 0 .</math>
At the other extreme, if they are [[codirectional]], then the angle between them is zero with <math>\cos 0 = 1</math> and
<math display="block">\mathbf a \cdot \mathbf b = \left\| \mathbf a \right\| \, \left\| \mathbf b \right\| </math>
This implies that the dot product of a vector <math>\mathbf{a}</math> with itself is
<math display="block">\mathbf a \cdot \mathbf a = \left\| \mathbf a \right\| ^2 ,</math>
which gives
<math display="block"> \left\| \mathbf a \right\| = \sqrt{\mathbf a \cdot \mathbf a} ,</math>
the formula for the [[Euclidean length]] of the vector.

=== Scalar projection and first properties ===
[[File:Dot Product.svg|thumb|right|Scalar projection]]
The [[scalar projection]] (or scalar component) of a Euclidean vector <math>\mathbf{a}</math> in the direction of a Euclidean vector <math>\mathbf{b}</math> is given by
<math display="block"> a_b = \left\| \mathbf a \right\| \cos \theta ,</math>
where <math>\theta</math> is the angle between <math>\mathbf{a}</math> and <math>\mathbf{b}</math>.

In terms of the geometric definition of the dot product, this can be rewritten as
<math display="block">a_b = \mathbf a \cdot \widehat{\mathbf b} ,</math>
where <math> \widehat{\mathbf b} = \mathbf b / \left\| \mathbf b \right\| </math> is the [[unit vector]] in the direction of <math>\mathbf{b}</math>.
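For example, the scalar projection of <math>\mathbf{a} = [2, 3]</math> in the direction of <math>\mathbf{b} = [1, 0]</math> is
<math display="block"> a_b = \mathbf a \cdot \widehat{\mathbf b} = [2, 3] \cdot [1, 0] = 2 ,</math>
the first coordinate of <math>\mathbf{a}</math>, as expected for projection onto the first coordinate axis.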

[[File:Dot product distributive law.svg|thumb|right|Distributive law for the dot product]]
The dot product is thus characterized geometrically by<ref>{{cite book | last1=Arfken | first1=G. B. | last2=Weber | first2=H. J. | title=Mathematical Methods for Physicists | publisher=[[Academic Press]] | location=Boston, MA | edition=5th | isbn=978-0-12-059825-0 | year=2000 | pages=14–15 }}</ref>
<math display="block"> \mathbf a \cdot \mathbf b = a_b \left\| \mathbf{b} \right\| = b_a \left\| \mathbf{a} \right\| .</math>
The dot product, defined in this manner, is [[Homogeneous function|homogeneous]] under scaling in each variable, meaning that for any scalar <math>\alpha</math>,
<math display="block"> ( \alpha \mathbf{a} ) \cdot \mathbf b = \alpha ( \mathbf a \cdot \mathbf b ) = \mathbf a \cdot ( \alpha \mathbf b ) .</math>
It also satisfies the [[distributive law]], meaning that
<math display="block"> \mathbf a \cdot ( \mathbf b + \mathbf c ) = \mathbf a \cdot \mathbf b + \mathbf a \cdot \mathbf c .</math>

These properties may be summarized by saying that the dot product is a [[bilinear form]]. Moreover, this bilinear form is [[positive definite bilinear form|positive definite]], which means that <math> \mathbf a \cdot \mathbf a </math> is never negative, and is zero if and only if <math> \mathbf a = \mathbf 0 </math>, the zero vector.

=== Equivalence of the definitions ===
If <math>\mathbf{e}_1,\cdots,\mathbf{e}_n</math> are the [[standard basis|standard basis vectors]] in <math>\mathbf{R}^n</math>, then we may write
<math display="block">\begin{align}
\mathbf a &= [a_1 , \dots , a_n] = \sum_i a_i \mathbf e_i \\
\mathbf b &= [b_1 , \dots , b_n] = \sum_i b_i \mathbf e_i.
\end{align}
</math>
The vectors <math>\mathbf{e}_i</math> are an [[orthonormal basis]], which means that they have unit length and are at right angles to each other. Since these vectors have unit length,
<math display="block"> \mathbf e_i \cdot \mathbf e_i = 1 </math>
and since they form right angles with each other, if <math>i\neq j</math>,
<math display="block"> \mathbf e_i \cdot \mathbf e_j = 0 .</math>
Thus in general, we can say that:
<math display="block"> \mathbf e_i \cdot \mathbf e_j = \delta_{ij} ,</math>
where <math>\delta_{ij}</math> is the [[Kronecker delta]].

[[File:Wiki dot.png|thumb|Vector components in an orthonormal basis]]

Also, by the geometric definition, for any vector <math>\mathbf{e}_i</math> and a vector <math>\mathbf{a}</math>, we note that
<math display="block"> \mathbf a \cdot \mathbf e_i = \left\| \mathbf a \right\| \left\| \mathbf e_i \right\| \cos \theta_i = \left\| \mathbf a \right\| \cos \theta_i = a_i ,</math>
where <math>a_i</math> is the component of vector <math>\mathbf{a}</math> in the direction of <math>\mathbf{e}_i</math>. The last step in the equality can be seen from the figure.

Now applying the distributivity of the geometric version of the dot product gives
<math display="block"> \mathbf a \cdot \mathbf b = \mathbf a \cdot \sum_i b_i \mathbf e_i = \sum_i b_i ( \mathbf a \cdot \mathbf e_i ) = \sum_i b_i a_i = \sum_i a_i b_i ,</math>
which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product.

== Properties ==
The dot product fulfills the following properties if <math>\mathbf{a}</math>, <math>\mathbf{b}</math>, <math>\mathbf{c}</math> and <math>\mathbf{d}</math> are real [[vector (geometry)|vectors]] and <math>\alpha</math>, <math>\beta</math>, <math>\gamma</math> and <math>\delta</math> are [[scalar (mathematics)|scalars]].<ref name="Lipschutz2009" /><ref name="Spiegel2009" />

; [[Commutative]] : <math display="block"> \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} ,</math> which follows from the definition (<math>\theta</math> is the angle between <math>\mathbf{a}</math> and <math>\mathbf{b}</math>):<ref>{{cite web|last=Nykamp|first=Duane|title=The dot product|url=https://mathinsight.org/dot_product|access-date=September 6, 2020|website=Math Insight}}</ref> <math display="block"> \mathbf{a} \cdot \mathbf{b} = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \cos \theta = \left\| \mathbf{b} \right\| \left\| \mathbf{a} \right\| \cos \theta = \mathbf{b} \cdot \mathbf{a} .</math>
The commutative property can also be easily proven with the algebraic definition, and in [[Inner product space|more general spaces]] (where the notion of angle might not be geometrically intuitive but an analogous product can be defined) the angle between two vectors can be defined as
<math display="block"> \theta = \operatorname{arccos}\left( \frac{\mathbf{a}\cdot\mathbf{b}}{\left\|\mathbf{a}\right\| \left\|\mathbf{b}\right\|} \right). </math>
; [[bilinear form|Bilinear]] (additive, distributive and scalar-multiplicative in both arguments) : <math display="block">\begin{align}
(\alpha \mathbf{a} + \beta\mathbf{b})\cdot (\gamma\mathbf{c}+\delta\mathbf{d}) &= \alpha(\mathbf{a}\cdot(\gamma\mathbf{c}+\delta\mathbf{d})) + \beta(\mathbf{b}\cdot(\gamma\mathbf{c}+\delta\mathbf{d})) \\
&= \alpha\gamma(\mathbf{a}\cdot\mathbf{c}) + \alpha\delta(\mathbf{a}\cdot\mathbf{d}) + \beta\gamma(\mathbf{b}\cdot\mathbf{c}) + \beta\delta(\mathbf{b}\cdot\mathbf{d}) .
\end{align}</math>
; Not [[associative]] : The dot product between a scalar <math>\mathbf{a}\cdot\mathbf{b}</math> and a vector <math>\mathbf{c}</math> is not defined, which means that the expressions involved in the associative property, <math>(\mathbf{a}\cdot\mathbf{b})\cdot\mathbf{c}</math> and <math>\mathbf{a}\cdot(\mathbf{b}\cdot\mathbf{c})</math>, are both ill-defined.<ref>Weisstein, Eric W. "Dot Product". From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/DotProduct.html</ref> Note however that the previously mentioned scalar multiplication property is sometimes called the "associative law for scalar and dot product"<ref name="BanchoffWermer1983">{{cite book|author1=T. Banchoff|author2=J. Wermer | title=Linear Algebra Through Geometry|year=1983|publisher=Springer Science & Business Media| isbn=978-1-4684-0161-5 | page=12| url=https://archive.org/details/linearalgebrathr00banc_0/page/12/mode/2up}}</ref> or one can say that "the dot product is associative with respect to scalar multiplication" because <math>c (\mathbf{a} \cdot \mathbf{b}) = (c\mathbf{a})\cdot\mathbf{b} = \mathbf{a}\cdot(c\mathbf{b})</math>.<ref name="BedfordFowler2008">{{cite book | author1=A. Bedford|author2=Wallace L. Fowler|title=Engineering Mechanics: Statics|year=2008|publisher=Prentice Hall | isbn=978-0-13-612915-8 | edition=5th | page=60}}</ref>
; [[Orthogonal]] : Two non-zero vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> are ''orthogonal'' if and only if <math>\mathbf{a} \cdot \mathbf{b} = 0</math>.
; No [[cancellation law|cancellation]]
: Unlike multiplication of ordinary numbers, where if <math>ab=ac</math>, then <math>b</math> always equals <math>c</math> unless <math>a</math> is zero, the dot product does not obey the [[cancellation law]]: {{pb}} If <math>\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{c}</math> and <math>\mathbf{a}\neq\mathbf{0}</math>, then we can write <math>\mathbf{a}\cdot(\mathbf{b}-\mathbf{c}) = 0</math> by the [[distributive law]]; the result above then just means that <math>\mathbf{a}</math> is perpendicular to <math>(\mathbf{b}-\mathbf{c})</math>, which still allows <math>(\mathbf{b}-\mathbf{c})\neq\mathbf{0}</math>, and therefore allows <math>\mathbf{b}\neq\mathbf{c}</math>; a worked example is given after this list.
; [[Product rule]] : If <math>\mathbf{a}</math> and <math>\mathbf{b}</math> are vector-valued [[differentiable function]]s, then the derivative ([[Notation for differentiation#Lagrange's notation|denoted by a prime]] <math>{}'</math>) of <math>\mathbf{a}\cdot\mathbf{b}</math> is given by the rule <math display="block">(\mathbf{a}\cdot\mathbf{b})' = \mathbf{a}'\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{b}'.</math>
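For example, with <math>\mathbf{a} = [1, 0]</math>, <math>\mathbf{b} = [0, 1]</math> and <math>\mathbf{c} = [0, 2]</math>,
<math display="block"> \mathbf{a}\cdot\mathbf{b} = \mathbf{a}\cdot\mathbf{c} = 0 \quad \text{but} \quad \mathbf{b} \neq \mathbf{c} ;</math>
here <math>\mathbf{b} - \mathbf{c} = [0, -1]</math> is perpendicular to <math>\mathbf{a}</math>, exactly as described under "no cancellation" above.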

=== Application to the law of cosines ===
[[File:Dot product cosine rule.svg|100px|thumb|Triangle with vector edges '''a''' and '''b''', separated by angle ''θ'']]
{{main|Law of cosines}}

Given two vectors <math>{\color{red}\mathbf{a}}</math> and <math>{\color{blue}\mathbf{b}}</math> separated by angle <math>\theta</math> (see the upper image), they form a triangle with a third side <math>{\color{orange}\mathbf{c}} = {\color{red}\mathbf{a}} - {\color{blue}\mathbf{b}}</math>. Let <math>a</math>, <math>b</math> and <math>c</math> denote the lengths of <math>{\color{red}\mathbf{a}}</math>, <math>{\color{blue}\mathbf{b}}</math>, and <math>{\color{orange}\mathbf{c}}</math>, respectively. The dot product of <math>{\color{orange}\mathbf{c}}</math> with itself is:
<math display="block">
\begin{align}
\mathbf{\color{orange}c} \cdot \mathbf{\color{orange}c} & = ( \mathbf{\color{red}a} - \mathbf{\color{blue}b}) \cdot ( \mathbf{\color{red}a} - \mathbf{\color{blue}b} ) \\
& = \mathbf{\color{red}a} \cdot \mathbf{\color{red}a} - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} - \mathbf{\color{blue}b} \cdot \mathbf{\color{red}a} + \mathbf{\color{blue}b} \cdot \mathbf{\color{blue}b} \\
& = {\color{red}a}^2 - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} + {\color{blue}b}^2 \\
& = {\color{red}a}^2 - 2 \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} + {\color{blue}b}^2 \\
{\color{orange}c}^2 & = {\color{red}a}^2 + {\color{blue}b}^2 - 2 {\color{red}a} {\color{blue}b} \cos \mathbf{\color{purple}\theta} \\
\end{align}
</math>

which is the [[law of cosines]].
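As a quick numerical check, take <math>\mathbf{a} = [3, 0]</math> and <math>\mathbf{b} = [0, 4]</math>, so that <math>\theta = 90^\circ</math> and <math>\mathbf{a}\cdot\mathbf{b} = 0</math>: the third side is <math>\mathbf{c} = [3, -4]</math>, and indeed
<math display="block"> c^2 = 25 = 3^2 + 4^2 - 2ab\cos 90^\circ = a^2 + b^2 ,</math>
recovering the [[Pythagorean theorem]] as the right-angle special case.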
{{clear}}

== Triple product ==
{{Main|Triple product}}

There are two [[ternary operation]]s involving dot product and [[cross product]].

The '''scalar triple product''' of three vectors is defined as
<math display="block"> \mathbf{a} \cdot ( \mathbf{b} \times \mathbf{c} ) = \mathbf{b} \cdot ( \mathbf{c} \times \mathbf{a} ) = \mathbf{c} \cdot ( \mathbf{a} \times \mathbf{b} ).</math>
Its value is the [[determinant]] of the matrix whose columns are the [[Cartesian coordinates]] of the three vectors. It is the signed [[volume]] of the [[parallelepiped]] defined by the three vectors, and is isomorphic to the three-dimensional special case of the [[exterior product]] of three vectors.
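For example, the scalar triple product of the standard basis vectors is
<math display="block"> \mathbf{e}_1 \cdot ( \mathbf{e}_2 \times \mathbf{e}_3 ) = \det \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = 1 ,</math>
the volume of the unit cube they span.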

The '''vector triple product''' is defined by<ref name="Lipschutz2009" /><ref name="Spiegel2009" />
<math display="block"> \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} ) = ( \mathbf{a} \cdot \mathbf{c} )\, \mathbf{b} - ( \mathbf{a} \cdot \mathbf{b} )\, \mathbf{c} .</math>
This identity, also known as ''Lagrange's formula'', [[mnemonic|may be remembered]] as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in [[physics]].

== Physics ==
In [[physics]], the dot product takes two vectors and returns a [[scalar (mathematics)|scalar]] quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus, <math display="block">\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| \, |\mathbf{b}| \cos \theta .</math> Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector.

For example:<ref name="Riley2010">{{cite book |author1=K.F. Riley |author2=M.P. Hobson | author3=S.J. Bence |title= Mathematical methods for physics and engineering|url=https://archive.org/details/mathematicalmeth00rile |url-access=registration |edition= 3rd|year= 2010|publisher= Cambridge University Press | isbn=978-0-521-86153-3}}</ref><ref>{{cite book |author1=M. Mansfield |author2=C. O'Sullivan |title= Understanding Physics | edition= 4th |year= 2011|publisher= John Wiley & Sons|isbn=978-0-47-0746370}}</ref>
* [[Mechanical work]] is the dot product of [[force]] and [[Displacement (vector)|displacement]] vectors,
* [[Power (physics)|Power]] is the dot product of [[force]] and [[velocity]].
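For example, using the first of these, a constant force <math>\mathbf{F} = [3, 4, 0]</math> (in newtons) acting through a displacement <math>\mathbf{d} = [2, 0, 0]</math> (in metres) does work
<math display="block"> W = \mathbf{F} \cdot \mathbf{d} = (3)(2) + (4)(0) + (0)(0) = 6 \text{ J} .</math>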

== Generalizations ==

=== Complex vectors ===
For vectors with [[complex number|complex]] entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector {{nowrap|<math>\mathbf{a} = [1\ i]</math>).}} This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition<ref>{{cite book | page = 287 | first= Sterling K. | last = Berberian | title = Linear Algebra | year = 2014 | orig-year = 1992 | publisher = Dover | isbn = 978-0-486-78055-9}}</ref><ref name="Lipschutz2009" />
<math display="block"> \mathbf{a} \cdot \mathbf{b} = \sum_i {{a_i}\,\overline{b_i}} ,</math>
where <math>\overline{b_i}</math> is the [[complex conjugate]] of <math>b_i</math>. When vectors are represented by [[column vector]]s, the dot product can be expressed as a [[matrix product]] involving a [[conjugate transpose]], denoted with the superscript H:
<math display="block"> \mathbf{a} \cdot \mathbf{b} = \mathbf{b}^\mathsf{H} \mathbf{a} .</math>
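For example, with <math>\mathbf{a} = [1\ i]</math> as above, the unconjugated and conjugated products give
<math display="block"> \sum_i a_i a_i = 1 + i^2 = 0 , \qquad \sum_i a_i \overline{a_i} = (1)(1) + (i)(-i) = 2 ,</math>
so only the conjugated definition returns the squared length of the vector.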

In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is [[sesquilinear]] rather than bilinear, as it is [[conjugate linear]] and not linear in <math>\mathbf{a}</math>. The dot product is not symmetric, since
<math display="block"> \mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}} .</math>
The angle between two complex vectors is then given by
<math display="block"> \cos \theta = \frac{\operatorname{Re} ( \mathbf{a} \cdot \mathbf{b} )}{ \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| } .</math>

The complex dot product leads to the notions of [[Hermitian form]]s and general [[inner product space]]s, which are widely used in mathematics and [[physics]].

{{anchor|Norm squared}}The self dot product of a complex vector <math>\mathbf{a} \cdot \mathbf{a} = \mathbf{a}^\mathsf{H} \mathbf{a} </math>, involving the conjugate transpose of a row vector, is also known as the '''norm squared''', <math display="inline">\mathbf{a} \cdot \mathbf{a} = \|\mathbf{a}\|^2</math>, after the [[Euclidean norm]]; it is a vector generalization of the ''[[absolute square]]'' of a complex scalar (see also: ''[[Squared Euclidean distance]]'').

=== Inner product ===
{{main|Inner product space}}
The inner product generalizes the dot product to [[vector space|abstract vector spaces]] over a [[field (mathematics)|field]] of [[scalar (mathematics)|scalars]], being either the field of [[real number]]s <math> \R </math> or the field of [[complex number]]s <math> \Complex </math>. It is usually denoted using [[angular brackets]] by <math> \left\langle \mathbf{a} \, , \mathbf{b} \right\rangle </math>.

The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is [[Sesquilinear form|sesquilinear]] instead of bilinear. An inner product space is a [[normed vector space]], and the inner product of a vector with itself is real and positive-definite.

=== Functions ===
The dot product is defined for vectors that have a finite number of [[coordinate vector|entries]]. Thus these vectors can be regarded as [[discrete function]]s: a length-<math>n</math> vector <math>u</math> is, then, a function with [[domain of a function|domain]] <math>\{k\in\mathbb{N}:1\leq k \leq n\}</math>, and <math>u_i</math> is a notation for the image of <math>i</math> by the function/vector <math>u</math>.

This notion can be generalized to [[Square-integrable function|square-integrable functions]]: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some [[Measure space|measure space]] <math>(X, \mathcal{A}, \mu)</math>:<ref name="Lipschutz2009" />
<math display="block"> \left\langle u , v \right\rangle = \int_X u v \, \text{d} \mu.</math>

For example, if <math> f</math> and <math>g</math> are [[Continuous function|continuous functions]] over a [[Compact space|compact subset]] <math> K</math> of <math>\mathbb{R}^n</math> with the standard [[Lebesgue measure]], the above definition becomes:
<math display="block"> \left\langle f , g \right\rangle = \int_K f(\mathbf{x}) g(\mathbf{x}) \, \operatorname{d}^n \mathbf{x} .</math>
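For example, taking <math>f(x) = x</math> and <math>g(x) = x^2</math> on <math>K = [0, 1]</math> gives
<math display="block"> \left\langle f, g \right\rangle = \int_0^1 x \cdot x^2 \, dx = \frac{1}{4} .</math>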

Generalized further to [[complex function|complex continuous functions]] <math>\psi</math> and <math>\chi</math>, by analogy with the complex inner product above, gives:
<math display="block"> \left\langle \psi, \chi \right\rangle = \int_K \psi(z) \overline{\chi(z)} \, \text{d} z.</math>

=== Weight function ===
Inner products can have a [[weight function]] (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions <math>u(x)</math> and <math>v(x)</math> with respect to the weight function <math>r(x)>0</math> is
<math display="block"> \left\langle u , v \right\rangle_r = \int_a^b r(x) u(x) v(x) \, d x.</math>

=== Dyadics and matrices ===
A double-dot product for [[Matrix (mathematics)|matrices]] is the [[Frobenius inner product]], which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices <math>\mathbf{A}</math> and <math>\mathbf{B}</math> of the same size:
<math display="block"> \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} \overline{B_{ij}} = \operatorname{tr} ( \mathbf{B}^\mathsf{H} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{H} ) .</math>
And for real matrices,
<math display="block"> \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr} ( \mathbf{B}^\mathsf{T} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{T} ) = \operatorname{tr} ( \mathbf{A}^\mathsf{T} \mathbf{B} ) = \operatorname{tr} ( \mathbf{B} \mathbf{A}^\mathsf{T} ) .</math>
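For example, pairing a real matrix with the [[identity matrix]] recovers its [[Trace (linear algebra)|trace]]:
<math display="block"> \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} : \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = (1)(1) + (2)(0) + (3)(0) + (4)(1) = 5 .</math>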

Writing a matrix as a [[dyadics|dyadic]], we can define a different double-dot product (see ''{{slink|Dyadics|Product of dyadic and dyadic}}''); however, it is not an inner product.

=== Tensors ===
The inner product between a [[tensor]] of order <math>n</math> and a tensor of order <math>m</math> is a tensor of order <math>n+m-2</math>, see ''[[Tensor contraction]]'' for details.

== Computation ==

=== Algorithms ===

The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from [[catastrophic cancellation]]. To avoid this, approaches such as the [[Kahan summation algorithm]] are used.
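The following is a minimal illustrative sketch of such a compensated dot product in Python; it applies Kahan (compensated) summation to the running sum of products, which reduces the accumulated rounding error of the summation, though it does not by itself compensate rounding in the individual products. The function name and structure are illustrative assumptions, not taken from any particular library.

<syntaxhighlight lang="python">
def compensated_dot(xs, ys):
    """Dot product of two equal-length sequences using Kahan compensated summation."""
    if len(xs) != len(ys):
        raise ValueError("vectors must have equal length")
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x, y in zip(xs, ys):
        term = x * y - comp                # fold in the previously lost error
        tentative = total + term           # low-order bits of term may be lost here
        comp = (tentative - total) - term  # algebraically zero; captures the loss
        total = tentative
    return total
</syntaxhighlight>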

=== Libraries ===

A dot product function is included in:
* [[BLAS]] level 1 real {{code|SDOT}}, {{code|DDOT}}; complex {{code|CDOTU}}, {{code|1=ZDOTU = X^T * Y}}, {{code|CDOTC}}, {{code|1=ZDOTC = X^H * Y}}
* [[Fortran]] as {{code|dot_product(A,B)}} or {{code|sum(conjg(A) * B)}}
* [[Julia (programming language)|Julia]] as &nbsp;{{code|A' * B}} or standard library LinearAlgebra as {{code|dot(A, B)}}
* [[R (programming language)|R]] as {{code|sum(A * B)}} for vectors or, more generally for matrices, as {{code|A %*% B}}
* [[Matlab]] as &nbsp;{{code|A' * B}}&nbsp; or &nbsp;{{code|conj(transpose(A)) * B}}&nbsp; or &nbsp;{{code|sum(conj(A) .* B)}}&nbsp; or &nbsp;{{code|dot(A, B)}}
* [[Python (programming language)|Python]] (package [[NumPy]]) as &nbsp;{{code|np.matmul(A, B)}}&nbsp; or &nbsp;{{code|np.dot(A, B)}}&nbsp; or &nbsp;{{code|np.inner(A, B)}}
* [[GNU Octave]] as &nbsp;{{code|sum(conj(X) .* Y, dim)}}, and similar code as Matlab
* Intel oneAPI Math Kernel Library real p?dot {{code|1=dot = sub(x)'*sub(y)}}; complex p?dotc {{code|1=dotc = conjg(sub(x)')*sub(y)}}
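As a short usage sketch of one of the libraries listed above, NumPy's {{code|np.dot}} applies the bilinear (unconjugated) formula, so for complex vectors a conjugating variant such as {{code|np.vdot}}, which conjugates its first argument, gives the sesquilinear inner product:

<syntaxhighlight lang="python">
import numpy as np

a = np.array([1.0, 3.0, -5.0])
b = np.array([4.0, -2.0, -1.0])
print(np.dot(a, b))   # 3.0, matching the worked example above

u = np.array([1.0, 1j])
print(np.dot(u, u))   # 0j     -- bilinear product, no conjugation
print(np.vdot(u, u))  # (2+0j) -- conjugates the first argument
</syntaxhighlight>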

== See also ==
{{div col}}
* [[Cauchy–Schwarz inequality]]
* [[Cross product]]
* [[Dot product representation of a graph]]
* [[Euclidean norm]], the square-root of the self dot product
* [[Matrix multiplication]]
* [[Metric tensor]]
* [[Multiplication of vectors]]
* [[Outer product]]
{{div col end}}

== Notes ==
{{reflist|group=note}}

== References ==
{{reflist}}

== External links ==
{{Commons category|Scalar product}}
* {{springer|title=Inner product|id=p/i051240}}
* {{mathworld|urlname=DotProduct|title=Dot product}}
* [http://www.mathreference.com/la,dot.html Explanation of dot product including with complex vectors]
* [http://demonstrations.wolfram.com/DotProduct/ "Dot Product"] by Bruce Torrence, [[Wolfram Demonstrations Project]], 2007.

{{linear algebra}}
{{tensors}}
{{Authority control}}

[[Category:Articles containing proofs]]
[[Category:Bilinear forms]]
[[Category:Linear algebra]]
[[Category:Operations on vectors]]
[[Category:Vectors (mathematics and physics)]]
[[Category:Analytic geometry]]
[[Category:Tensors]]
[[Category:Scalars]]
