Gauss–Jordan elimination: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 15:46, 18 October 2012

In linear algebra, Gauss–Jordan elimination is an algorithm for putting matrices into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
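The procedure described above can be sketched as follows; this is an illustrative implementation (the name `rref` and the use of exact rational arithmetic via `fractions.Fraction` are choices made here, not part of the algorithm itself):

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix to reduced row echelon form by Gauss-Jordan
    elimination: scale each pivot to 1, then zero the entries both
    above and below it."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column; move to the next one
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        # scale the pivot row so the pivot entry becomes 1
        inv = m[pivot_row][col]
        m[pivot_row] = [x / inv for x in m[pivot_row]]
        # eliminate the entries above and below the pivot
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

print(rref([[1, 2, 3], [2, 4, 7], [3, 6, 10]]))
```

Zeroing only the entries below each pivot (and skipping the scaling) would give plain Gaussian elimination instead.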

It is named after Carl Friedrich Gauss and Wilhelm Jordan because it is a variation of Gaussian elimination that Jordan described in 1887. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.[1]

Comparison with Gaussian elimination

Gauss–Jordan elimination, like Gaussian elimination, is used for inverting matrices and solving systems of linear equations. Both Gauss–Jordan and Gaussian elimination have time complexity of order O(n³) for an n by n full rank matrix (using Big O notation), but the number of arithmetic operations (there are roughly the same number of additions and multiplications/divisions) used in solving an n by n system by Gauss–Jordan elimination is of the order of n³, whereas that for Gaussian elimination is of the order of 2n³/3. Hence, Gauss–Jordan elimination requires approximately 50% more computation steps.[2] However, the result of Gauss–Jordan elimination (reduced row echelon form) may be retrieved from the result of Gaussian elimination (row echelon form) in O(n²) arithmetic operations, by proceeding from the last pivot to the first one. Thus the needed number of operations has the same order of magnitude for both eliminations.
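The 3/2 ratio can be checked empirically. The sketch below (an illustrative script, not from any library) runs both eliminations on a random augmented system and counts the arithmetic operations performed; the exact counts depend on the bookkeeping, but the ratio tends toward 3/2 as n grows:

```python
import random

def elimination_ops(n, jordan):
    """Run an elimination on a random n x (n+1) augmented matrix and
    return the number of arithmetic operations (mults/divs and
    adds/subs) performed to reach a solution."""
    a = [[random.random() + 1.0 for _ in range(n + 1)] for _ in range(n)]
    ops = 0
    for k in range(n):
        # Gauss-Jordan clears above and below the pivot;
        # Gaussian elimination clears below only.
        targets = [i for i in range(n) if i != k] if jordan else range(k + 1, n)
        for i in targets:
            f = a[i][k] / a[k][k]
            ops += 1
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
                ops += 2
    if jordan:
        ops += n  # the left block is diagonal: n final divisions
    else:
        # back substitution recovers the solution from row echelon form
        for i in range(n - 1, -1, -1):
            for j in range(i + 1, n):
                a[i][n] -= a[i][j] * a[j][n]
                ops += 2
            a[i][n] /= a[i][i]
            ops += 1
    return ops

ratio = elimination_ops(80, jordan=True) / elimination_ops(80, jordan=False)
print(ratio)  # tends toward 3/2 as n grows
```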

Application to finding inverses

Gauss–Jordan elimination can be used to calculate the inverse of a square matrix. This is done by augmenting the square matrix with the identity matrix of the same dimensions and applying the following matrix operations, where <math>\Rightarrow</math> stands for "may be transformed by elementary row operations into":

:<math>\left[\begin{array}{c|c} A & I \end{array}\right] \Rightarrow A^{-1}\left[\begin{array}{c|c} A & I \end{array}\right] = \left[\begin{array}{c|c} I & A^{-1} \end{array}\right].</math>

If the original square matrix is
:<math>A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix},</math>
then, after augmenting it with the identity, it becomes:

:<math>\left[\begin{array}{c|c} A & I \end{array}\right] = \left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right].</math>

Gauss–Jordan elimination on <math>\left[\begin{array}{c|c} A & I \end{array}\right]</math> produces its reduced row echelon form:

:<math>\left[\begin{array}{c|c} I & A^{-1} \end{array}\right] = \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{3}{4} & \frac{1}{2} & \frac{1}{4} \\ 0 & 1 & 0 & \frac{1}{2} & 1 & \frac{1}{2} \\ 0 & 0 & 1 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4} \end{array}\right].</math>

The matrix augmentation can now be undone, giving
:<math>I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}</math>
and
:<math>A^{-1} = \begin{bmatrix} \frac{3}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{2} & 1 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{3}{4} \end{bmatrix}.</math>

A matrix is non-singular (meaning that it has an inverse matrix) if and only if the left half of the reduced row echelon form obtained in this way is the identity matrix.
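The worked example above can be reproduced in code. The sketch below (the name `invert_via_gauss_jordan` is illustrative) builds [A | I], reduces it, and reads A⁻¹ off the right half; exact rational arithmetic via `fractions.Fraction` is used so the entries 3/4, 1/2, 1/4 come out exactly:

```python
from fractions import Fraction

def invert_via_gauss_jordan(a):
    """Invert a square matrix by reducing the augmented matrix [A | I]
    to [I | A^-1] with elementary row operations (assumes A is invertible)."""
    n = len(a)
    # build the augmented matrix [A | I]
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # find a nonzero pivot in this column and swap it into place
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # scale the pivot row to make the pivot 1,
        # then clear the column above and below it
        m[col] = [x / m[col][col] for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    # the left half is now I; the right half is A^-1
    return [row[n:] for row in m]

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
print(invert_via_gauss_jordan(A))
```

For the matrix A of the example, this returns rows (3/4, 1/2, 1/4), (1/2, 1, 1/2), and (1/4, 1/2, 3/4), matching the right half of the reduced row echelon form above.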

References

  1. ^ Althoen, Steven C.; McLaughlin, Renate (1987), "Gauss–Jordan reduction: a brief history", The American Mathematical Monthly, 94 (2), Mathematical Association of America: 130–142, doi:10.2307/2322413, ISSN 0002-9890, JSTOR 2322413
  2. ^ J. B. Fraleigh and R. A. Beauregard, Linear Algebra. Addison-Wesley Publishing Company, 1995, Chapter 10
  • Lipschutz, Seymour, and Lipson, Mark. "Schaum's Outlines: Linear Algebra". Tata McGraw–Hill edition. Delhi 2001. pp. 69–80.
  • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.1", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
  • Strang, Gilbert (2003), Introduction to Linear Algebra (3rd ed.), Wellesley, Massachusetts: Wellesley-Cambridge Press, pp. 74–76, ISBN 978-0-9614088-9-3

See also