Gauss–Jordan elimination

In linear algebra, Gauss–Jordan elimination is an algorithm for putting matrices into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
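
As an illustration of the procedure, the following is a minimal sketch in Python; the function name rref, the list-of-lists representation, the tolerance, and the use of partial pivoting are choices of this sketch rather than anything prescribed by the article:

    def rref(M):
        """Reduce a matrix (a list of lists of floats) to reduced row
        echelon form in place by Gauss-Jordan elimination."""
        rows, cols = len(M), len(M[0])
        r = 0  # index of the next pivot row
        for c in range(cols):
            if r == rows:
                break
            # Partial pivoting: pick the row with the largest entry in
            # column c to reduce rounding error.
            p = max(range(r, rows), key=lambda i: abs(M[i][c]))
            if abs(M[p][c]) < 1e-12:
                continue  # no usable pivot in this column
            M[r], M[p] = M[p], M[r]
            # Scale the pivot row so the pivot entry becomes 1.
            piv = M[r][c]
            M[r] = [x / piv for x in M[r]]
            # Clear the entries both above and below the pivot; clearing
            # above is what distinguishes Gauss-Jordan from Gaussian
            # elimination.
            for i in range(rows):
                if i != r:
                    f = M[i][c]
                    M[i] = [a - f * b for a, b in zip(M[i], M[r])]
            r += 1
        return M

For example, rref([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) returns [[1.0, 0.0, -1.0], [0.0, 1.0, 2.0]], the reduced row echelon form of that matrix.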

It is named after Carl Friedrich Gauss and Wilhelm Jordan because it is a variation of Gaussian elimination that Jordan described in 1887. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.[1]

Comparison with Gaussian elimination

Gauss–Jordan elimination, like Gaussian elimination, is used for inverting matrices and solving systems of linear equations. Both Gauss–Jordan and Gaussian elimination have time complexity of order O(n³) for an n × n full rank matrix (using big O notation), but the number of arithmetic operations (there are roughly the same number of additions and multiplications/divisions) used in solving an n × n matrix by Gauss–Jordan elimination is of order n³, whereas that for Gaussian elimination is of order 2n³/3. Hence, Gauss–Jordan elimination requires approximately 50% more computation steps.[2] However, it achieves a higher processing speed than Gaussian elimination as the number of processors increases, due to its better load balancing characteristics and lower synchronization cost.[3]
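
The 50% figure can be made plausible by a rough operation count (a sketch for intuition, not a derivation taken from the cited references). Counting multiplications/divisions up to lower-order terms, in LaTeX notation:

    % Gaussian elimination: clearing below the pivot in column k
    % updates (n-k) rows of about (n-k) entries each.
    \sum_{k=1}^{n-1} (n-k)^2 \approx \frac{n^3}{3}

    % Gauss-Jordan elimination: column k is cleared in the other
    % n-1 rows, each with about (n-k) remaining entries.
    \sum_{k=1}^{n} (n-1)(n-k) \approx \frac{n^3}{2}

Since each method performs about as many additions as multiplications, the totals are roughly 2n³/3 and n³ operations respectively, and n³ divided by 2n³/3 is 3/2, i.e. about 50% more.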

Application to finding inverses

If Gauss–Jordan elimination is applied to a square matrix, it can be used to calculate the matrix's inverse. This is done by augmenting the square matrix with the identity matrix of the same dimensions and then performing the following matrix operations:

If the original square matrix is denoted by A, then, after augmenting it by the identity matrix I of the same dimensions, the block matrix [A | I] is obtained.

By performing elementary row operations on this augmented matrix until its left block reaches reduced row echelon form, the block matrix [I | A⁻¹] is obtained, provided that A is invertible.

The matrix augmentation can now be undone, which gives the inverse A⁻¹.

A matrix is non-singular (meaning that it has an inverse matrix) if and only if it can be transformed into the identity matrix using only elementary row operations.
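
The procedure above can be sketched in Python by reusing the hypothetical rref function from the earlier example; the name invert and the tolerance used to detect a singular matrix are likewise assumptions of this sketch:

    def invert(A):
        """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
        n = len(A)
        # Build the augmented matrix [A | I].
        M = [list(map(float, A[i])) + [float(i == j) for j in range(n)]
             for i in range(n)]
        rref(M)
        # If the left block did not reduce to the identity, A is singular.
        for i in range(n):
            for j in range(n):
                if abs(M[i][j] - (1.0 if i == j else 0.0)) > 1e-9:
                    raise ValueError("matrix is singular")
        # Undo the augmentation: the right block is the inverse.
        return [row[n:] for row in M]

For instance, invert([[2, 1], [1, 1]]) returns [[1.0, -1.0], [-1.0, 2.0]].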

References

  1. ^ Althoen, Steven C.; McLaughlin, Renate (1987), "Gauss–Jordan reduction: a brief history", The American Mathematical Monthly, 94 (2), Mathematical Association of America: 130–142, doi:10.2307/2322413, ISSN 0002-9890, JSTOR 2322413
  2. ^ Fraleigh, J. B.; Beauregard, R. A. (1995), Linear Algebra, Addison-Wesley, Chapter 10
  3. ^ Darmohray, G. A.; Brooks, E. D. (December 1987), "Gaussian Techniques on Shared Memory Multiprocessor Computers", SIAM PPSC, Los Angeles, USA
  • Lipschutz, Seymour; Lipson, Mark (2001), Schaum's Outlines: Linear Algebra, Delhi: Tata McGraw–Hill, pp. 69–80
  • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.1", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
  • Strang, Gilbert (2003), Introduction to Linear Algebra (3rd ed.), Wellesley, Massachusetts: Wellesley-Cambridge Press, pp. 74–76, ISBN 978-0-9614088-9-3
