Definition
For some natural choice of $r$ rows and $c$ columns, a matrix $\Phi$ of size $r \times c$ over a field $\mathsf{F}$ is a collection of elements $\Phi_{ij} \in \mathsf{F}$ indexed by $(i,j) \in [r] \times [c]$.[1][a] Unless specified, the elements of a matrix are assumed to be scalars, but they may also be elements of a ring, or something more general.
$$\Phi = \begin{bmatrix} \Phi_{11} & \Phi_{12} & \cdots & \Phi_{1c} \\ \Phi_{21} & \Phi_{22} & \cdots & \Phi_{2c} \\ \vdots & \vdots & \ddots & \vdots \\ \Phi_{r1} & \Phi_{r2} & \cdots & \Phi_{rc} \end{bmatrix}$$
The set of all matrices with elements from $\mathsf{F}$ and with indices from $(i,j) \in [r] \times [c]$ is denoted $\mathcal{M}(r,c:\mathsf{F})$ or $\mathsf{F}^{r,c}$.[2]
| Notation $r \times c$ | Example |
|---|---|
| $1 \times 3$ | $\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}$ |
| $3 \times 1$ | $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ |
| $2 \times 3$ | $\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$ |
| $3 \times 2$ | $\begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$ |
| $3 \times 3$ | $\begin{bmatrix} -4 & -3 & -2 \\ -1 & 0 & 1 \\ 2 & 3 & 4 \end{bmatrix}$ |
Let $\Phi$ be an $r \times c$ matrix whose elements are from $\mathsf{F}$. Any individual entry may be referenced as $\Phi_{ij}$ for the $i$-th row and the $j$-th column.
Rows
The $i$-th row vector of a matrix $\Phi$ is the subset of elements sharing the same row index $i$, ordered by the column index $j$. The column index can be omitted for brevity by simply writing $\Phi_i$ for the $i$-th row.
$$\operatorname{row}(\Phi) = [\Phi_{1*} \cdots \Phi_{r*}]$$
Columns
Let $\operatorname{col}$ be the function which maps a matrix to the ordered list of its columns: the $j$-th column collects the elements sharing the same column index $j$, ordered by the row index.
$$\operatorname{col}(\Phi) = [\Phi_{*1} \cdots \Phi_{*c}]$$
Addition of Matrices
Let $\Phi, \Psi$ be matrices from $\mathsf{F}^{r \times c}$. Then the sum of matrices is defined as entry-wise field addition.
$$\begin{bmatrix} \Phi_{11} & \cdots & \Phi_{1c} \\ \vdots & \ddots & \vdots \\ \Phi_{r1} & \cdots & \Phi_{rc} \end{bmatrix} + \begin{bmatrix} \Psi_{11} & \cdots & \Psi_{1c} \\ \vdots & \ddots & \vdots \\ \Psi_{r1} & \cdots & \Psi_{rc} \end{bmatrix} := \begin{bmatrix} (\Phi_{11} + \Psi_{11}) & \cdots & (\Phi_{1c} + \Psi_{1c}) \\ \vdots & \ddots & \vdots \\ (\Phi_{r1} + \Psi_{r1}) & \cdots & (\Phi_{rc} + \Psi_{rc}) \end{bmatrix}$$
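A minimal sketch in Python (using NumPy, an assumption of this illustration; the matrices are arbitrary examples of matching shape):

```python
import numpy as np

# Entry-wise addition of two matrices of equal shape.
Phi = np.array([[1, 2, 3],
                [4, 5, 6]])
Psi = np.array([[10, 20, 30],
                [40, 50, 60]])

print(Phi + Psi)  # [[11 22 33] [44 55 66]]
```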
Scaling of Matrices
Let $\Phi$ be a matrix from $\mathsf{F}^{r \times c}$, and let $\lambda \in \mathsf{F}$. The scalar multiplication of matrices is defined:
$$\lambda \begin{bmatrix} \Phi_{11} & \cdots & \Phi_{1c} \\ \vdots & \ddots & \vdots \\ \Phi_{r1} & \cdots & \Phi_{rc} \end{bmatrix} := \begin{bmatrix} \lambda\Phi_{11} & \cdots & \lambda\Phi_{1c} \\ \vdots & \ddots & \vdots \\ \lambda\Phi_{r1} & \cdots & \lambda\Phi_{rc} \end{bmatrix}$$
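The same operation as a quick sketch (again assuming NumPy; the scalar and matrix are arbitrary examples):

```python
import numpy as np

# Scalar multiplication scales every entry of the matrix.
Phi = np.array([[1, 2],
                [3, 4]])
lam = 5
print(lam * Phi)  # [[ 5 10] [15 20]]
```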
Transposition
As a matrix is a collection of double-indexed scalars $\Phi_{ij} \in \mathsf{F}$, transposition is a function $\mathsf{F}^{r,c} \to \mathsf{F}^{c,r}$ of the form $\Phi \mapsto \Phi^{\intercal}$, defined as the mapping which swaps the positions of the indices.
$$\Phi_{ij}^{\intercal} := \Phi_{ji}$$
$$\begin{bmatrix} \Phi_{11} & \cdots & \Phi_{1c} \\ \vdots & \ddots & \vdots \\ \Phi_{r1} & \cdots & \Phi_{rc} \end{bmatrix}^{\intercal} := \begin{bmatrix} \Phi_{11} & \cdots & \Phi_{r1} \\ \vdots & \ddots & \vdots \\ \Phi_{1c} & \cdots & \Phi_{rc} \end{bmatrix}$$
Observations
The transpose of a product $\Psi\Phi$ is equal to the product of the transposes, but in reverse order.
$$(\Psi\Phi)^{\intercal} = \Phi^{\intercal}\Psi^{\intercal}$$
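A numerical check of the reverse-order rule (a sketch assuming NumPy; `Psi` and `Phi` are arbitrary compatible examples):

```python
import numpy as np

# Verify (Psi Phi)^T == Phi^T Psi^T for one concrete pair of matrices.
Psi = np.array([[1, 2],
                [3, 4],
                [5, 6]])      # 3x2
Phi = np.array([[7, 8, 9],
                [0, 1, 2]])   # 2x3

lhs = (Psi @ Phi).T
rhs = Phi.T @ Psi.T
print(np.array_equal(lhs, rhs))  # True
```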
Matrix-Vector Product
Let $\Phi : \mathsf{F}^c \to \mathsf{F}^r$.
Let $\Phi^{r \times c}$ be a matrix, let $[\alpha_1 \cdots \alpha_c] \in \mathsf{F}^c$, and let $[\beta_1 \cdots \beta_r] \in \mathsf{F}^r$.
A matrix-vector product is a mapping $\mathsf{F}^{r \times c} \times \mathsf{F}^c \to \mathsf{F}^r$, such that:
$$\begin{bmatrix} \Phi_{11} & \cdots & \Phi_{1c} \\ \vdots & \ddots & \vdots \\ \Phi_{r1} & \cdots & \Phi_{rc} \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_c \end{bmatrix} = \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_r \end{bmatrix}$$
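A small sketch of the product (assuming NumPy; the matrix and vector are arbitrary examples):

```python
import numpy as np

# A 2x3 matrix applied to a vector in F^3 yields a vector in F^2.
Phi = np.array([[1, 2, 3],
                [4, 5, 6]])
alpha = np.array([1, 0, -1])

beta = Phi @ alpha
print(beta)  # [-2 -2]
```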
Column Perspective
Let $\Phi : \mathsf{F}^c \to \mathsf{F}^r$, and let $\vec{\alpha} = [\alpha_1 \cdots \alpha_c] \in \mathsf{F}^c$. Then the matrix-vector product can be defined as the linear combination which pairs each scalar coefficient in $[\alpha_1 \cdots \alpha_c]$ with the corresponding vector in $\operatorname{col}(\Phi)$.
$$\Phi\vec{\alpha} = \sum_{i=1}^{c} \alpha_i \operatorname{col}(\Phi)_i$$
$$\alpha_1 \begin{bmatrix} \Phi_{1,1} \\ \vdots \\ \Phi_{r,1} \end{bmatrix} + \alpha_2 \begin{bmatrix} \Phi_{1,2} \\ \vdots \\ \Phi_{r,2} \end{bmatrix} + \cdots + \alpha_c \begin{bmatrix} \Phi_{1,c} \\ \vdots \\ \Phi_{r,c} \end{bmatrix}$$
Row Perspective
Alternatively, each entry of the product is the dot product of the corresponding row of $\Phi$ with $\vec{\alpha}$.

$$\Phi\vec{\alpha} = \begin{bmatrix} \vec{\Phi}_1 \cdot \vec{\alpha} \\ \vdots \\ \vec{\Phi}_r \cdot \vec{\alpha} \end{bmatrix}$$
$$(\Phi\vec{\alpha})_i = \vec{\Phi}_i \cdot \vec{\alpha}$$
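Both perspectives compute the same vector, which the following sketch checks numerically (assuming NumPy; the inputs are arbitrary examples):

```python
import numpy as np

# The same product three ways: directly, as a linear combination of
# columns, and as row-by-row dot products.
Phi = np.array([[1, 2, 3],
                [4, 5, 6]])
alpha = np.array([1, 0, -1])

direct = Phi @ alpha
by_columns = sum(alpha[j] * Phi[:, j] for j in range(Phi.shape[1]))
by_rows = np.array([Phi[i, :] @ alpha for i in range(Phi.shape[0])])

print(direct, by_columns, by_rows)  # all equal [-2 -2]
```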
Product of Matrices
Let $\Phi : \mathsf{F}^m \to \mathsf{F}^n$ and $\Psi : \mathsf{F}^n \to \mathsf{F}^p$ be matrices. Then the product $\Psi\Phi$ is defined:
$$(\Psi\Phi)_{ij} = \sum_{k=1}^{n} \Psi_{ik}\Phi_{kj}$$
for all natural pairs $(i,j) \in [p] \times [m]$.
Column Perspective
For the product $\Psi\Phi$, the $i$-th column of the matrix is defined by the application of $\Psi$ to the $i$-th column of $\Phi$.
$$\operatorname{col}(\Psi\Phi)_i = \Psi\operatorname{col}(\Phi)_i$$
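A sketch checking both the entry-wise sum and the column perspective (assuming NumPy; the matrices are arbitrary compatible examples):

```python
import numpy as np

# Check that the (i, j) entry of a product matches the defining sum, and
# that each column of Psi @ Phi is Psi applied to the matching column of Phi.
Phi = np.array([[1, 2],
                [3, 4],
                [5, 6]])      # F^2 -> F^3
Psi = np.array([[1, 0, -1],
                [2, 1, 0]])   # F^3 -> F^2

P = Psi @ Phi
entry = sum(Psi[0, k] * Phi[k, 1] for k in range(Phi.shape[0]))
print(P[0, 1] == entry)                          # True
print(np.array_equal(P[:, 0], Psi @ Phi[:, 0]))  # True
```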
Rank and Image
The rank of a matrix $\Phi^{r \times c}$ is the number of linearly independent column vectors. The image of a matrix is the span of its columns.
An injective matrix is any matrix of full column rank.
A surjective matrix is any matrix of full row rank.
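A quick rank computation (a sketch assuming NumPy; the matrix is an arbitrary example with one dependent column):

```python
import numpy as np

# The third column is the sum of the first two, so only two columns are
# independent and the image is a plane in F^3.
Phi = np.array([[1, 0, 1],
                [0, 1, 1],
                [0, 0, 0]])
print(np.linalg.matrix_rank(Phi))  # 2
```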
Kernel and Nullity
The kernel of a matrix $\Phi : \mathsf{F}^c \to \mathsf{F}^r$ is the set of vectors which map to $\vec{0}$.
$$\begin{bmatrix} \Phi_{11} & \cdots & \Phi_{1c} \\ \vdots & \ddots & \vdots \\ \Phi_{r1} & \cdots & \Phi_{rc} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_c \end{bmatrix} = \begin{bmatrix} 0_1 \\ \vdots \\ 0_r \end{bmatrix}$$
$$\operatorname{null}\Phi = \{\vec{v} \in \mathsf{F}^c \mid \Phi\vec{v} = \vec{0}\}$$

The nullity is the dimension of the kernel.
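One way to compute a kernel basis numerically is via the SVD (a sketch assuming NumPy; the matrix and tolerance are illustrative choices):

```python
import numpy as np

# Rows of Vh whose singular values are (numerically) zero span the kernel.
Phi = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0]])  # rank 1, so the nullity is 2

_, s, Vh = np.linalg.svd(Phi)
tol = 1e-10
kernel_basis = Vh[sum(s > tol):]   # rows beyond the rank index
print(kernel_basis.shape[0])       # 2 (the nullity)
```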
Identity Matrix
For any matrix $\Phi : \mathsf{F}^c \to \mathsf{F}^r$ there also exist matrices $\mathrm{I}_c, \mathrm{I}_r$ which act as the unique right and left identity elements under the product of maps.
$$\Phi\mathrm{I}_c = \mathrm{I}_r\Phi = \Phi$$
Any matrix which fulfills this condition is known as the identity matrix, denoted $\mathrm{I}$ or with a subscript $\mathrm{I}_n$ for some dimension $n$. All identity matrices are square matrices whose values are defined for any index $(i,j) \in [n] \times [n]$:
$$(\mathrm{I}_n)_{ij} := \begin{cases} 0 & (i \neq j) \\ 1 & (i = j) \end{cases}$$
An example of an identity matrix of $n$ dimensions:
$$\begin{bmatrix} 1_{11} & 0_{12} & \cdots & 0_{1n} \\ 0_{21} & 1_{22} & \cdots & 0_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{n1} & 0_{n2} & \cdots & 1_{nn} \end{bmatrix}$$
Inverse Matrix
A matrix $\Phi : \mathsf{F}^n \to \mathsf{F}^n$ is invertible if there exists a matrix $\Phi^{-1} : \mathsf{F}^n \to \mathsf{F}^n$ such that:
$$\Phi^{-1}\Phi = \Phi\Phi^{-1} = \mathrm{I}_n$$
An invertible matrix may also be called a non-singular matrix, a linear isomorphism, or a bijection.
The set of all invertible matrices of size $n$ is known as the general linear group $\operatorname{GL}(n, \mathsf{F})$.
All invertible matrices are full-rank square matrices, and thus the kernel is trivial.
For endomorphisms of finite-dimensional vector spaces, surjection, injection, and bijection are all equivalent conditions.
The determinant of an invertible matrix is non-zero.
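A sketch of inversion (assuming NumPy; the matrix is an arbitrary full-rank example, with `np.allclose` used because floating-point inverses are only exact up to rounding):

```python
import numpy as np

# Invert a full-rank square matrix and check both identities.
Phi = np.array([[2.0, 1.0],
                [1.0, 1.0]])
Phi_inv = np.linalg.inv(Phi)

print(np.allclose(Phi_inv @ Phi, np.eye(2)))  # True
print(np.allclose(Phi @ Phi_inv, np.eye(2)))  # True
```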
Left Inverse
Although only square matrices are strictly invertible, an injective matrix has a left inverse by definition.
$$\Phi_{\mathrm{L}}^{-1} = (\Phi^{\intercal}\Phi)^{-1}\Phi^{\intercal}$$
Right Inverse
Similarly, a surjective matrix has a right inverse.

$$\Phi_{\mathrm{R}}^{-1} = \Phi^{\intercal}(\Phi\Phi^{\intercal})^{-1}$$
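Both one-sided inverses in one sketch (assuming NumPy and real matrices; `A` and `B` are arbitrary full-column-rank and full-row-rank examples):

```python
import numpy as np

# Left inverse of a tall (injective) matrix and right inverse of a wide
# (surjective) matrix, via the formulas above.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # 3x2, full column rank
L = np.linalg.inv(A.T @ A) @ A.T          # left inverse
print(np.allclose(L @ A, np.eye(2)))      # True

B = A.T                                   # 2x3, full row rank
R = B.T @ np.linalg.inv(B @ B.T)          # right inverse
print(np.allclose(B @ R, np.eye(2)))      # True
```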
Orthonormal Matrix
An orthonormal matrix $\Phi : \mathsf{F}^n \to \mathsf{F}^n$ is an invertible matrix which preserves the norms of vectors. Equivalently, it is a matrix whose transpose is its multiplicative inverse $\Phi^{-1}$.
$$\Phi\Phi^{\intercal} = \Phi^{\intercal}\Phi = \mathrm{I}_n$$
The set of all $n \times n$ orthogonal matrices over $\mathsf{F}$ forms the orthogonal group $\mathrm{O}(n, \mathsf{F})$. The subset of $\mathrm{O}(n, \mathsf{F})$ whose elements have determinant $+1$ is known as the special orthogonal group $\mathrm{SO}(n, \mathsf{F})$, and all matrices from this group are rotation matrices.
Observations
Let $\vec{v}_1, \vec{v}_2 \in \mathsf{F}^n$. If $\Phi : \mathsf{F}^n \to \mathsf{F}^n$ is orthonormal then $\langle \vec{v}_1, \vec{v}_2 \rangle = \langle \Phi\vec{v}_1, \Phi\vec{v}_2 \rangle$.
The determinant of $\Phi$ is either $+1$ or $-1$.
If $\Phi$ is orthonormal then so is its transpose.
Example
$$\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
$$\frac{1}{3} \begin{bmatrix} 2 & -2 & 1 \\ 1 & 2 & 2 \\ 2 & 1 & -2 \end{bmatrix}$$
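A sketch verifying the second example is orthonormal and norm-preserving (assuming NumPy; the test vector is arbitrary):

```python
import numpy as np

# Check Q Q^T = I and that Q preserves the norm of a vector.
Q = np.array([[2, -2, 1],
              [1, 2, 2],
              [2, 1, -2]]) / 3.0

print(np.allclose(Q @ Q.T, np.eye(3)))   # True
v = np.array([3.0, -1.0, 2.0])
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # True
```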
Gram-Schmidt Process
Given the columns of a full-rank matrix, the Gram-Schmidt process can generate an orthonormal basis with the same span. For example, the process may be applied to the columns of:

$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 0 \end{bmatrix}$$
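A sketch of classical Gram-Schmidt applied to that matrix (assuming NumPy and linearly independent columns; `gram_schmidt` is an illustrative helper, not a library function):

```python
import numpy as np

# Classical Gram-Schmidt on the columns of a matrix.
def gram_schmidt(A):
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for k in range(j):                      # subtract projections onto
            v -= (Q[:, k] @ A[:, j]) * Q[:, k]  # previously built basis vectors
        Q[:, j] = v / np.linalg.norm(v)         # normalize
    return Q

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: columns are orthonormal
```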
Trace of a Square Matrix
Let $\Phi : \mathsf{F}^n \to \mathsf{F}^n$.
$$\operatorname{trace}(\Phi) := \sum_{i=1}^{n} \Phi_{ii}$$
The trace of a sum of matrices is the sum of their individual traces, and the trace of a product is unchanged when the factors are swapped.

$$\operatorname{trace}(\Psi + \Phi) = \operatorname{trace}(\Psi) + \operatorname{trace}(\Phi) \qquad \operatorname{trace}(\Psi\Phi) = \operatorname{trace}(\Phi\Psi)$$
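A quick numerical check of both properties (a sketch assuming NumPy; the matrices are arbitrary examples):

```python
import numpy as np

# Trace is additive, and the trace of a product is invariant under
# swapping the factors.
Psi = np.array([[1, 2],
                [3, 4]])
Phi = np.array([[0, 1],
                [1, 0]])

print(np.trace(Psi + Phi) == np.trace(Psi) + np.trace(Phi))  # True
print(np.trace(Psi @ Phi) == np.trace(Phi @ Psi))            # True
```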
Matrix Decompositions
Rank Factorization (CR)
Rank factorization, or the column-row (CR) form of a matrix $\Phi : \mathsf{F}^c \to \mathsf{F}^r$, means to decompose $\Phi = \mathrm{C}\mathrm{R}$, where $\mathrm{C}$ represents independent columns from $\Phi$, and $\mathrm{R}$ represents independent rows from $\Phi$.
$$\begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 1 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}$$
$$\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}$$
$$\begin{bmatrix} 2 & 0 & 4 \\ 0 & 3 & 6 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & 3 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 2 \end{bmatrix}$$
This factorization is motivated mostly by pedagogy and demonstrates basic properties of matrix multiplication.
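A sketch verifying the third example above (assuming NumPy; `C` and `R` are copied from that factorization):

```python
import numpy as np

# C holds the two independent columns of Phi; R holds the nonzero rows of
# its reduced row echelon form. Their product reconstructs Phi.
Phi = np.array([[2, 0, 4],
                [0, 3, 6],
                [0, 0, 0]])
C = np.array([[2, 0],
              [0, 3],
              [0, 0]])
R = np.array([[1, 0, 2],
              [0, 1, 2]])

print(np.array_equal(C @ R, Phi))  # True
```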
Orthonormal (QR)
Singular Value Decomposition (SVD)
Any real or complex matrix of size $r \times c$ may be decomposed into the triple product $\mathrm{U}\Sigma\mathrm{V}^*$,[b] where $\mathrm{U}$ is $r \times r$ and orthonormal, $\Sigma$ is an $r \times c$ rectangular diagonal matrix with non-negative real entries, and $\mathrm{V}^*$ is $c \times c$ and orthonormal.
$$\Phi = \begin{bmatrix} 10 & 8 & 0 & 0 & 0 & 0 \\ 7 & 6 & 8 & 0 & 0 & 0 \\ 9 & 8 & 7 & 4 & 3 & 0 \\ 7 & 6 & 8 & 0 & 0 & 0 \\ 0 & 0 & 0 & 8 & 9 & 8 \end{bmatrix}$$
For $\mathrm{U}$ we have a relationship between rows and "concepts." For $\mathrm{V}^*$ we have a relationship between columns and concepts. For $\Sigma$ we have a diagonal matrix whose entries, the singular values, weight the strength of each concept.
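A sketch decomposing the example matrix above and confirming the reconstruction (assuming NumPy; `np.linalg.svd` returns the factors with $\Sigma$ as a vector of singular values):

```python
import numpy as np

# Compute the SVD of the 5x6 example and rebuild Phi from U, Sigma, Vh.
Phi = np.array([[10, 8, 0, 0, 0, 0],
                [7, 6, 8, 0, 0, 0],
                [9, 8, 7, 4, 3, 0],
                [7, 6, 8, 0, 0, 0],
                [0, 0, 0, 8, 9, 8]], dtype=float)

U, s, Vh = np.linalg.svd(Phi)          # U: 5x5, s: 5 singular values, Vh: 6x6
Sigma = np.zeros_like(Phi)
Sigma[:5, :5] = np.diag(s)             # embed singular values in a 5x6 matrix
print(np.allclose(U @ Sigma @ Vh, Phi))  # True
```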
Notes
^ Equivalently, $\{(i,j) \in \mathbb{N} \times \mathbb{N} \mid i \leq r,\; j \leq c\}$.
^ Or $\mathrm{U}\Sigma\mathrm{V}^{\intercal}$ for real matrices.