
Gauss–Jordan elimination

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by CitationCleanerBot (talk | contribs) at 23:33, 10 September 2011 (Various citation & identifier cleanup, plus AWB genfixes. Report errors and suggestions at User talk:CitationCleanerBot. using AWB). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In linear algebra, Gauss–Jordan elimination is an algorithm for putting matrices into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards; matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros both above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a unique reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
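The pivot-by-pivot procedure described above can be sketched in Python. This is only an illustrative implementation (the function and variable names are my own, not part of the article); it uses exact Fraction arithmetic to sidestep floating-point rounding:

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix to reduced row echelon form using elementary
    row operations (a sketch of Gauss-Jordan elimination)."""
    rows = [[Fraction(x) for x in row] for row in matrix]
    n_rows, n_cols = len(rows), len(rows[0])
    pivot_row = 0
    for col in range(n_cols):
        if pivot_row >= n_rows:
            break
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, n_rows) if rows[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column; move right
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        # Scale the pivot row so the pivot entry equals 1.
        scale = rows[pivot_row][col]
        rows[pivot_row] = [x / scale for x in rows[pivot_row]]
        # Place zeros both ABOVE and BELOW the pivot -- the step that
        # distinguishes Gauss-Jordan from plain Gaussian elimination.
        for r in range(n_rows):
            if r != pivot_row and rows[r][col] != 0:
                factor = rows[r][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows
```

For example, `rref([[1, 2], [3, 4]])` returns the 2×2 identity, while a rank-deficient input such as `[[1, 2, 3], [2, 4, 6]]` leaves a zero row below its single pivot row.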

It is named after Carl Friedrich Gauss and Wilhelm Jordan because it is a variation of Gaussian elimination that Jordan described in 1887. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.[1]

In computational complexity theory, Gauss–Jordan elimination has a time complexity of O(n³) for an n by n matrix (using big O notation). This means it is efficiently computable for most practical purposes, and as a result it is often used in computer software for a diverse set of applications. However, it is often an unnecessary step past Gaussian elimination. Gaussian elimination shares Gauss–Jordan elimination's O(n³) time complexity but is generally faster, since it performs fewer arithmetic operations by a constant factor. Therefore, in cases where reduced row echelon form is not needed and row echelon form suffices, Gaussian elimination is typically preferred.

Application to finding inverses

If Gauss–Jordan elimination is applied to a square matrix, it can be used to calculate the matrix's inverse. This is done by augmenting the square matrix with the identity matrix of the same dimensions and then reducing the augmented matrix as follows:

If the original square matrix is A, then augmenting it on the right by the identity matrix I of the same dimensions gives the block matrix

  [ A | I ]

Performing elementary row operations on this augmented matrix until its left half reaches reduced row echelon form produces

  [ I | A⁻¹ ]

The matrix augmentation can now be undone: discarding the left half leaves the inverse A⁻¹.

A matrix is non-singular (meaning that it has an inverse matrix) if and only if the identity matrix can be obtained from it using only elementary row operations.
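The augmentation procedure above can be sketched as a self-contained Python function. This is an illustrative implementation under my own naming, not code from the article; it builds [ A | I ], runs Gauss–Jordan elimination, and reads the inverse off the right half, raising an error when no pivot can be found (the singular case):

```python
from fractions import Fraction

def invert(matrix):
    """Invert a square matrix by augmenting with the identity and
    running Gauss-Jordan elimination on [A | I]."""
    n = len(matrix)
    # Build the augmented matrix [A | I] using exact arithmetic.
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Find a nonzero pivot in this column at or below the diagonal.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular; no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry equals 1.
        scale = aug[col][col]
        aug[col] = [x / scale for x in aug[col]]
        # Zero out the rest of the column, above and below the pivot.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[col])]
    # The left half is now I, so the right half of [I | A^-1] is the inverse.
    return [row[n:] for row in aug]
```

For instance, `invert([[2, 1], [1, 1]])` yields [[1, -1], [-1, 2]], while a singular input such as [[1, 1], [1, 1]] raises ValueError because some column has no usable pivot.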

References

  1. ^ Althoen, Steven C.; McLaughlin, Renate (1987), "Gauss–Jordan reduction: a brief history", The American Mathematical Monthly, 94 (2), Mathematical Association of America: 130–142, doi:10.2307/2322413, ISSN 0002-9890, JSTOR 2322413
  • Lipschutz, Seymour; Lipson, Mark (2001), Schaum's Outlines: Linear Algebra, Tata McGraw–Hill edition, Delhi, pp. 69–80.
  • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.1", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
  • Strang, Gilbert (2003), Introduction to Linear Algebra (3rd ed.), Wellesley, Massachusetts: Wellesley-Cambridge Press, pp. 74–76, ISBN 978-0-9614088-9-3