Gauss–Jordan elimination
In linear algebra, Gauss–Jordan elimination is an algorithm for putting a matrix into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
It is named after Carl Friedrich Gauss and Wilhelm Jordan because it is a variation of Gaussian elimination that Jordan described in 1888. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.[1]
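The procedure described above can be sketched in a few lines of Python. The following is an illustration added here rather than code from the article; the function name rref, the use of partial pivoting, and the 1e-12 tolerance are choices made for this sketch.

```python
def rref(matrix):
    """Return the reduced row echelon form of `matrix` (a list of lists of numbers).

    A minimal sketch of Gauss-Jordan elimination with partial pivoting;
    it is not tuned for numerical robustness or performance.
    """
    A = [list(map(float, row)) for row in matrix]   # work on a float copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Partial pivoting: choose the row with the largest entry in this column.
        best = max(range(pivot_row, rows), key=lambda r: abs(A[r][col]))
        if abs(A[best][col]) < 1e-12:                # no usable pivot in this column
            continue
        A[pivot_row], A[best] = A[best], A[pivot_row]
        # Scale the pivot row so that the pivot entry becomes 1.
        pivot = A[pivot_row][col]
        A[pivot_row] = [x / pivot for x in A[pivot_row]]
        # Place zeros above and below the pivot (the Gauss-Jordan step).
        for r in range(rows):
            if r != pivot_row and A[r][col] != 0.0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
    return A
```

For example, `rref([[2, 1, 5], [1, 3, 10]])` returns `[[1.0, 0.0, 1.0], [0.0, 1.0, 3.0]]`, the reduced row echelon form of the augmented matrix of the system 2x + y = 5, x + 3y = 10, whose solution x = 1, y = 3 can be read off the last column.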
Comparison with Gaussian elimination
Gauss–Jordan elimination, like Gaussian elimination, is used for inverting matrices and solving systems of linear equations. Both Gauss–Jordan and Gaussian elimination have time complexity of order O(n³) for an n by n full rank matrix (using big O notation), but the order of magnitude of the number of arithmetic operations (there are roughly the same number of additions and multiplications/divisions) used in solving an n by n matrix by Gauss–Jordan elimination is n³, whereas that for Gaussian elimination is 2n³/3. Hence, Gauss–Jordan elimination requires approximately 50% more computation steps.[2] However, the result of Gauss–Jordan elimination (reduced row echelon form) may be retrieved from the result of Gaussian elimination (row echelon form) in O(n²) arithmetic operations, by proceeding from the last pivot up to the first one. Thus the number of operations needed has the same order of magnitude for both eliminations.
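To make the last remark concrete, here is a Python sketch of that back pass. It is an illustration added here, not code from the article: it assumes an augmented matrix [A | b] whose left block is already upper triangular with nonzero diagonal pivots (the output of Gaussian elimination on a full-rank square system), and it only ever touches the pivot column and the last column, so the total cost is O(n²).

```python
def ref_to_rref(aug):
    """Convert an augmented matrix [A | b] in row echelon form (A upper
    triangular with nonzero diagonal pivots) into reduced row echelon form.

    Working from the last pivot up to the first, all entries of the left
    block to the right of the current pivot have already been cleared, so
    each update touches only the pivot column and the last column.
    """
    n = len(aug)                       # n rows, n + 1 columns
    for i in range(n - 1, -1, -1):
        # Normalize the pivot row: the pivot becomes 1.
        pivot = aug[i][i]
        aug[i][i] = 1.0
        aug[i][n] /= pivot
        # Clear the entries above this pivot, updating only the last column.
        for r in range(i):
            factor = aug[r][i]
            aug[r][i] = 0.0
            aug[r][n] -= factor * aug[i][n]
    return aug
```

For example, `ref_to_rref([[2.0, 1.0, 5.0], [0.0, 2.5, 7.5]])` returns `[[1.0, 0.0, 1.0], [0.0, 1.0, 3.0]]`, the same reduced row echelon form computed by the `rref` sketch above.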
Application to finding inverses
Gauss–Jordan elimination can be used to calculate the inverse of a square matrix. This is done by augmenting the square matrix A with the identity matrix I of the same dimensions and applying the following matrix operations, where "~" stands for "may be transformed by elementary row operations into":

[A \mid I] \;\sim\; [I \mid A^{-1}]
If the original square matrix, A, is given by the following expression:

A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}

then, by augmenting A with the identity, the following is obtained:

[A \mid I] = \begin{bmatrix} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{bmatrix}

Performing Gauss–Jordan elimination on [A \mid I] produces [I \mid A^{-1}], its reduced row echelon form, by simple row operations:
There are usually many ways to proceed to reduce the rows. What follows is one specific series of steps.
Note: in the steps below, the symbolic expression rx → rx + c*ry means that each element of row x is replaced by the sum of that element and c times the corresponding element (in the same column) of row y.
i) add second row, r2, to first, r1. r1 → r1 + r2
ii) add r1 to r2. r2 → r2 + r1
iii) add 2*r3 to r2. r2 → r2 + 2*r3
iv) add r2 to r3. r3 → r3 + r2
v) divide r3 by 4. r3 → r3 ÷ 4
vi) subtract 2*r3 from r2. r2 → r2 - 2*r3
vii) add r3 to r1. r1 → r1 + r3
viii) subtract r2 from r1. r1 → r1 - r2
The matrix augmentation can now be undone, which gives the inverse:

A^{-1} = \frac{1}{4} \begin{bmatrix} 3 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 3 \end{bmatrix}
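As a check, multiplying the original matrix by the computed inverse returns the identity:

A A^{-1} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} \cdot \frac{1}{4} \begin{bmatrix} 3 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 3 \end{bmatrix} = \frac{1}{4} \begin{bmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{bmatrix} = I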
In general, a square matrix is non-singular (meaning that it has an inverse matrix) if and only if the left-hand block of the reduced row echelon form obtained in this way is the identity matrix.
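The whole procedure can also be carried out programmatically. The following Python function is a minimal sketch of the augmentation method, not code prescribed by the article; the name invert and the use of exact rational arithmetic via fractions.Fraction are choices made here so that the hand-worked example can be reproduced without rounding error.

```python
from fractions import Fraction

def invert(matrix):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].

    A minimal sketch using exact rational arithmetic; raises ValueError
    if the matrix is singular.
    """
    n = len(matrix)
    # Build the augmented matrix [A | I].
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Find a row at or below `col` with a nonzero pivot and swap it up.
        pivot_row = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot_row is None:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]
        # Normalize the pivot row, then clear the column above and below it.
        pivot = aug[col][col]
        aug[col] = [x / pivot for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[col])]
    # The right half of [I | A^-1] is the inverse.
    return [row[n:] for row in aug]
```

Applied to the matrix from the worked example, `invert([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])` returns the rows [3/4, 1/2, 1/4], [1/2, 1, 1/2] and [1/4, 1/2, 3/4] (as Fraction objects), i.e. the same A⁻¹ = (1/4)·[[3, 2, 1], [2, 4, 2], [1, 2, 3]] obtained above. The code clears each pivot column in systematic left-to-right order rather than following the hand-picked sequence of steps i)–viii), but because the reduced row echelon form of a matrix is unique, any valid sequence of elementary row operations leads to the same result.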
References
- ^ Althoen, Steven C.; McLaughlin, Renate (1987), "Gauss–Jordan reduction: a brief history", The American Mathematical Monthly, 94 (2), Mathematical Association of America: 130–142, doi:10.2307/2322413, ISSN 0002-9890, JSTOR 2322413
- ^ Fraleigh, J. B.; Beauregard, R. A. (1995), Linear Algebra, Addison-Wesley Publishing Company, Chapter 10
- Lipschutz, Seymour; Lipson, Marc (2001), Schaum's Outlines: Linear Algebra, Delhi: Tata McGraw–Hill, pp. 69–80
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.1", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
- Strang, Gilbert (2003), Introduction to Linear Algebra (3rd ed.), Wellesley, Massachusetts: Wellesley-Cambridge Press, pp. 74–76, ISBN 978-0-9614088-9-3
External links
- Algorithm for Gauss–Jordan elimination in Octave
- Algorithm for Gauss–Jordan elimination in Python
- An online tool that solves n×m linear systems using Gauss–Jordan elimination (source code and mobile version included), by Felipe Santos de Andrade (in Portuguese)
- Algorithm for Gauss–Jordan elimination in Basic
- Module for Gauss–Jordan Elimination
- Example of Gauss–Jordan Elimination "Step-by-Step"
- Gauss–Jordan Elimination Calculator
See also
- Gaussian elimination
- Computational complexity of mathematical operations
- Levinson recursion
- Strassen algorithm
- Coppersmith–Winograd algorithm