Gauss–Jordan elimination
Revision as of 00:38, 26 May 2012
In linear algebra, Gauss–Jordan elimination is an algorithm for putting matrices into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
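To illustrate the distinction (the matrix below is an illustrative example, not from the source), a matrix in row echelon form has zeros below each pivot, while its reduced row echelon form also clears the entries above each pivot — here, subtracting twice the second row from the first:

```latex
% Row echelon form                 % Reduced row echelon form
\begin{pmatrix}
 1 & 2 & -1 \\
 0 & 1 &  3 \\
 0 & 0 &  0
\end{pmatrix}
\quad\longrightarrow\quad
\begin{pmatrix}
 1 & 0 & -7 \\
 0 & 1 &  3 \\
 0 & 0 &  0
\end{pmatrix}
```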
It is named after Carl Friedrich Gauss and Wilhelm Jordan because it is a variation of Gaussian elimination that Jordan described in 1887. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.[1]
Comparison with Gaussian elimination
Gauss–Jordan elimination, like Gaussian elimination, is used for solving systems of linear equations and for matrix inversion. For an n × n full-rank matrix, Gauss–Jordan elimination has time complexity of order O(n³) (using big O notation), the same order as Gaussian elimination. However, despite sharing the same order, Gauss–Jordan elimination requires approximately 50% more computation steps than Gaussian elimination: the number of flops used in solving an n × n system is of the order of n³ for Gauss–Jordan elimination, whereas that of Gaussian elimination is 2n³/3.[2] Gauss–Jordan elimination is therefore often an unnecessary step beyond Gaussian elimination.
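The "approximately 50% more" figure follows directly from the leading flop counts, n³ for Gauss–Jordan elimination versus 2n³/3 for Gaussian elimination:

```latex
\frac{n^{3}}{\tfrac{2}{3}n^{3}} = \frac{3}{2}
```

For example, at n = 1000 this is roughly 10⁹ flops versus about 6.7 × 10⁸.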
This raises the question of why the Gauss–Jordan method should ever be used instead of Gaussian elimination. One answer lies in multiprocessor computing, where processing speed is the main criterion. It has been shown that although Gauss–Jordan elimination requires more computation steps than Gaussian elimination, in a multiprocessor environment it achieves higher processing speed than Gaussian elimination as the number of processors increases. This is explained by the better load-balancing characteristics and lower synchronization cost of the Gauss–Jordan method.[3]
Application to finding inverses
If Gauss–Jordan elimination is applied to a square matrix, it can be used to calculate the matrix's inverse. This is done by augmenting the square matrix with the identity matrix of the same dimensions and applying elementary row operations:
If the original square matrix is denoted A, then augmenting it by the identity yields the block matrix [A | I].
Performing elementary row operations on this matrix until its left block reaches reduced row echelon form gives [I | A⁻¹].
Undoing the matrix augmentation then leaves the inverse, A⁻¹.
A matrix is non-singular (meaning that it has an inverse matrix) if and only if the identity matrix can be obtained using only elementary row operations.
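As a small worked example (the 2 × 2 matrix here is illustrative, not from the source), augmenting A = (2 1; 1 1) by the identity and row-reducing until the left block is the identity leaves the inverse on the right:

```latex
\left(\begin{array}{cc|cc}
 2 & 1 & 1 & 0 \\
 1 & 1 & 0 & 1
\end{array}\right)
\;\longrightarrow\;
\left(\begin{array}{cc|cc}
 1 & 0 &  1 & -1 \\
 0 & 1 & -1 &  2
\end{array}\right)
```

so A⁻¹ = (1 −1; −1 2), which can be checked by multiplying it against A to recover the identity.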
C++ code for matrix inverse
#include <cstdlib>

// Computes the inverse of a square matrix by Gauss–Jordan elimination.
// Note: the input matrix is modified in place (it is reduced to the
// identity), and no pivoting is performed, so every pivot encountered
// is assumed to be non-zero. The caller owns the returned memory.
double** gauss(double **matrix, int dimension)
{
    double **inverse;

    // Allocate the result and initialize it to the identity matrix.
    inverse = (double**) malloc(dimension * sizeof (double *));
    for (int i = 0; i < dimension; i++)
        inverse[i] = (double*) malloc(dimension * sizeof (double));
    for (int i = 0; i < dimension; i++)
        for (int j = 0; j < dimension; j++)
            inverse[i][j] = 0;
    for (int i = 0; i < dimension; i++)
        inverse[i][i] = 1;

    // Forward elimination: for each pivot column k, scale rows k..n-1 so
    // that column k holds ones, then subtract the pivot row from every
    // row below it to zero out the column.
    for (int k = 0; k < dimension; k++)
    {
        for (int i = k; i < dimension; i++)
        {
            double valInv = 1.0 / matrix[i][k];   // assumes matrix[i][k] != 0
            for (int j = k; j < dimension; j++)
                matrix[i][j] *= valInv;
            for (int j = 0; j < dimension; j++)
                inverse[i][j] *= valInv;
        }
        for (int i = k + 1; i < dimension; i++)
        {
            for (int j = k; j < dimension; j++)
                matrix[i][j] -= matrix[k][j];
            for (int j = 0; j < dimension; j++)
                inverse[i][j] -= inverse[k][j];
        }
    }

    // Back substitution: clear the entries above each pivot, applying the
    // same row operations to the inverse.
    for (int i = dimension - 2; i >= 0; i--)
    {
        for (int j = dimension - 1; j > i; j--)
        {
            for (int k = 0; k < dimension; k++)
                inverse[i][k] -= matrix[i][j] * inverse[j][k];
            for (int k = 0; k < dimension; k++)
                matrix[i][k] -= matrix[i][j] * matrix[j][k];
        }
    }
    return inverse;
}
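A sketch of how such a routine might be exercised follows. The function name gj_invert and the 2 × 2 test matrix are illustrative choices, not from the source; the body is a condensed form of the same algorithm, kept self-contained so the example compiles on its own. Like the original, it performs no pivoting and destroys its input.

```cpp
#include <cstdlib>
#include <cmath>
#include <cassert>

// Condensed Gauss–Jordan inversion (hypothetical name gj_invert).
// No pivoting: every pivot must be non-zero. The input matrix is
// destroyed; the caller frees the returned rows and row-pointer array.
double **gj_invert(double **matrix, int n)
{
    // Start the result off as the identity matrix.
    double **inv = (double **) malloc(n * sizeof (double *));
    for (int i = 0; i < n; i++) {
        inv[i] = (double *) malloc(n * sizeof (double));
        for (int j = 0; j < n; j++)
            inv[i][j] = (i == j) ? 1.0 : 0.0;
    }
    // Forward elimination: make column k all ones in rows k..n-1,
    // then subtract the pivot row from the rows below it.
    for (int k = 0; k < n; k++) {
        for (int i = k; i < n; i++) {
            double s = 1.0 / matrix[i][k];      // assumes matrix[i][k] != 0
            for (int j = k; j < n; j++) matrix[i][j] *= s;
            for (int j = 0; j < n; j++) inv[i][j] *= s;
        }
        for (int i = k + 1; i < n; i++) {
            for (int j = k; j < n; j++) matrix[i][j] -= matrix[k][j];
            for (int j = 0; j < n; j++) inv[i][j] -= inv[k][j];
        }
    }
    // Back substitution: clear the entries above each pivot.
    for (int i = n - 2; i >= 0; i--) {
        for (int j = n - 1; j > i; j--) {
            double f = matrix[i][j];
            for (int k = 0; k < n; k++) inv[i][k] -= f * inv[j][k];
            for (int k = 0; k < n; k++) matrix[i][k] -= f * matrix[j][k];
        }
    }
    return inv;
}
```

A production version would add partial pivoting (swapping in the row with the largest entry in column k) to avoid dividing by a zero pivot and to improve numerical stability.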
References
- ^ Althoen, Steven C.; McLaughlin, Renate (1987), "Gauss–Jordan reduction: a brief history", The American Mathematical Monthly, 94 (2), Mathematical Association of America: 130–142, doi:10.2307/2322413, ISSN 0002-9890, JSTOR 2322413
- ^ J. B. Fraleigh and R. A. Beauregard, Linear Algebra. Addison-Wesley Publishing Company, 1995, Chapter 10
- ^ G. A. Darmohray and E. D. Brooks, “Gaussian Techniques on Shared Memory Multiprocessor Computers,” in SIAM PPSC, Los Angeles, USA, December 1987.
- Lipschutz, Seymour, and Lipson, Mark. "Schaum's Outlines: Linear Algebra". Tata McGraw–Hill edition. Delhi 2001. pp. 69–80.
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.1", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
- Strang, Gilbert (2003), Introduction to Linear Algebra (3rd ed.), Wellesley, Massachusetts: Wellesley-Cambridge Press, pp. 74–76, ISBN 978-0-9614088-9-3
External links
- Algorithm for Gauss–Jordan elimination in Octave
- Algorithm for Gauss–Jordan elimination in Python
- An online tool that solves n×m linear systems using Gauss–Jordan elimination (source code and mobile version included), by Felipe Santos de Andrade (in Portuguese)
- Algorithm for Gauss–Jordan elimination in Basic
- Module for Gauss–Jordan Elimination
- Example of Gauss–Jordan Elimination "Step-by-Step"
- Gauss–Jordan Elimination Calculator
- Example of detailed solutions of a system of linear equations in four unknowns using Gauss–Jordan elimination