Gauss–Jordan elimination

From Wikipedia, the free encyclopedia

In [[linear algebra]], '''Gauss–Jordan elimination''' is an [[algorithm]] for getting [[matrix (mathematics)|matrices]] in [[reduced row echelon form]] using [[elementary row operations]]. It is a variation of [[Gaussian elimination]]. Gaussian elimination places zeros below each [[pivot element|pivot]] in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
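The procedure described above can be sketched in code. The following is an illustrative Python implementation, not a standard library routine: the function name is chosen here, and exact rational arithmetic via `fractions.Fraction` is used so the reduced form is free of rounding error.

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix to reduced row echelon form by Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]  # exact arithmetic
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column; try the next column
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]  # swap the pivot row up
        p = m[pivot_row][col]
        m[pivot_row] = [x / p for x in m[pivot_row]]  # scale so the pivot equals 1
        # Eliminate the pivot column in every OTHER row, above and below;
        # clearing entries above the pivot is what distinguishes Gauss-Jordan
        # elimination from plain Gaussian elimination.
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# Example: the augmented matrix of x + 3y + z = 9, x + y - z = 1, 3x + 11y + 5z = 35
result = rref([[1, 3, 1, 9], [1, 1, -1, 1], [3, 11, 5, 35]])
# result == [[1, 0, -2, -3], [0, 1, 1, 4], [0, 0, 0, 0]] (entries as Fractions)
```

The zero row in the result signals that the third equation is a combination of the first two, so the system has a free variable.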


It is named after [[Carl Friedrich Gauss]] and [[Wilhelm Jordan (geodesist)|Wilhelm Jordan]] because it is a variation of Gaussian elimination that Jordan described in 1887. However, the method also appears in an article by Clasen published in the same year; Jordan and Clasen probably discovered Gauss–Jordan elimination independently.<ref>{{Citation | last1=Althoen | first1=Steven C. | last2=McLaughlin | first2=Renate | title=Gauss–Jordan reduction: a brief history | doi=10.2307/2322413 | year=1987 | journal=[[American Mathematical Monthly|The American Mathematical Monthly]] | issn=0002-9890 | volume=94 | issue=2 | pages=130–142 | jstor=2322413 | publisher=Mathematical Association of America}}</ref>

== Comparison with Gaussian elimination ==
Gauss–Jordan elimination has [[computational complexity theory|time complexity]] of order <math>O(n^3)</math> for an ''n'' by ''n'' [[Rank (linear algebra)|full rank]] matrix (using [[Big O notation]]), the same order as Gaussian elimination. Despite sharing this order, however, Gauss–Jordan elimination requires approximately 50% more computation steps: the leading term in the number of flops used to solve an ''n'' by ''n'' system is <math>\frac{n^3}{2}</math> for Gauss–Jordan elimination, compared with <math>\frac{n^3}{3}</math> for Gaussian elimination.<ref>J. B. Fraleigh and R. A. Beauregard, ''Linear Algebra''. Addison-Wesley Publishing Company, 1995, Chapter 10.</ref> Gauss–Jordan elimination is therefore often an unnecessary extra step beyond Gaussian elimination.

This raises the question of why Gauss–Jordan elimination would ever be preferred over Gaussian elimination. One important case is computation in a [[Parallel computing|multi-processor]] environment, where processing speed is the main criterion. It has been shown that even though Gauss–Jordan elimination requires more computation steps than Gaussian elimination, in a multi-processor environment it achieves higher processing speed as the number of processors increases. This is explained by the better load-balancing characteristics and lower synchronization cost of Gauss–Jordan elimination.<ref>G. A. Darmohray and E. D. Brooks, "Gaussian Techniques on Shared Memory Multiprocessor Computers," in SIAM PPSC, Los Angeles, USA, December 1987.</ref>

== Application to finding inverses ==

If Gauss–Jordan elimination is applied to a [[square matrix]], it can be used to calculate the matrix's [[inverse matrix|inverse]]. This can be done by [[augmented matrix|augmenting]] the square matrix with the [[identity matrix]] of the same dimensions and applying the following matrix operations:
:<math>[ A I ] \Rightarrow
A^{-1} [ A I ] \Rightarrow
[ I A^{-1} ].
</math>

Suppose the original square matrix, <math>A</math>, is given by:
:<math> A =
\begin{bmatrix}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2
\end{bmatrix}.
</math>

Then, after augmenting by the identity, the following is obtained:
:<math> [ A I ] =
\begin{bmatrix}
2 & -1 & 0 & 1 & 0 & 0\\
-1 & 2 & -1 & 0 & 1 & 0\\
0 & -1 & 2 & 0 & 0 & 1
\end{bmatrix}.
</math>

By performing [[Elementary_matrix#Operations|elementary row operations]] on the <math>[ A I ]</math> matrix until it reaches [[reduced row echelon form]], the following is the final result:

:<math> [ I A^{-1} ] =
\begin{bmatrix}
1 & 0 & 0 & \frac{3}{4} & \frac{1}{2} & \frac{1}{4}\\
0 & 1 & 0 & \frac{1}{2} & 1 & \frac{1}{2}\\
0 & 0 & 1 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4}
\end{bmatrix}.
</math>

The matrix augmentation can now be undone, which gives the following:
:<math> I =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}\qquad
A^{-1} =
\begin{bmatrix}
\frac{3}{4} & \frac{1}{2} & \frac{1}{4}\\
\frac{1}{2} & 1 & \frac{1}{2}\\
\frac{1}{4} & \frac{1}{2} & \frac{3}{4}
\end{bmatrix}.
</math>
A matrix is [[non-singular matrix|non-singular]] (meaning that it has an inverse matrix) [[if and only if]] it can be reduced to the identity matrix using only elementary row operations.
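The augmentation procedure above can be sketched in Python, using the same matrix <math>A</math> as the worked example. This is illustrative code, not a library routine: the function name is chosen here, and `fractions.Fraction` is used so the computed inverse is exact.

```python
from fractions import Fraction

def inverse_via_gauss_jordan(a):
    """Invert a non-singular square matrix by reducing [A | I] to [I | A^-1]."""
    n = len(a)
    # Build the augmented matrix [A | I] with exact rational entries.
    m = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Choose a pivot row; a non-singular matrix always provides one.
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]  # normalize the pivot to 1
        for r in range(n):  # clear the pivot column above and below
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]  # the right half is now A^-1

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
assert inverse_via_gauss_jordan(A) == [
    [Fraction(3, 4), Fraction(1, 2), Fraction(1, 4)],
    [Fraction(1, 2), Fraction(1), Fraction(1, 2)],
    [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)],
]  # matches the inverse computed in the worked example above
```

If no nonzero pivot can be found in some column, the `next` call raises `StopIteration`, which corresponds to the matrix being singular.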

==References==
{{Reflist}}
* Lipschutz, Seymour, and Lipson, Mark. "Schaum's Outlines: Linear Algebra". Tata McGraw–Hill edition. Delhi 2001. pp.&nbsp;69–80.
* {{Citation
|last1=Press
|first1=WH
|last2=Teukolsky
|first2=SA
|last3=Vetterling
|first3=WT
|last4=Flannery
|first4=BP
|year=2007
|title=Numerical Recipes: The Art of Scientific Computing
|edition=3rd
|publisher=Cambridge University Press
| publication-place=New York
|isbn=978-0-521-88068-8
|chapter=Section 2.1
|chapter-url=http://apps.nrbook.com/empanel/index.html?pg=41
}}
* {{Citation | last1=Strang | first1=Gilbert | author1-link=Gilbert Strang | title=Introduction to Linear Algebra | publisher=Wellesley-Cambridge Press | location=[[Wellesley, Massachusetts]] | edition=3rd | isbn=978-0-9614088-9-3 | year=2003 | pages = 74–76}}

== External links ==
<!-- {{wikibooks |1= Linear Algebra |2= Linear Algebra/Gauss-Jordan Reduction |3= Gauss-Jordan elimination }} -->
*[http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/linear-algebra/rref.m Algorithm for Gauss–Jordan elimination in Octave]
*[http://elonen.iki.fi/code/misc-notes/python-gaussj/index.html Algorithm for Gauss–Jordan elimination in Python]
*[http://lipe.advant.com.br/unicenp/gauss-jordan.php An online tool solve nxm linear systems using Gauss–Jordan elimination (source-code and mobile version included), by Felipe Santos de Andrade] (Portuguese)
*[http://www.cs.berkeley.edu/~wkahan/MathH110/gji.pdf Algorithm for Gauss–Jordan elimination in Basic]
*[http://math.fullerton.edu/mathews/n2003/GaussianJordanMod.html Module for Gauss–Jordan Elimination]
*[http://vivaldi.ucsd.edu:8080/~kcheng/ece155/hwsoln/Gaussian-Jordan.pdf Example of Gauss–Jordan Elimination "Step-by-Step"]
*[http://www.idomaths.com/gauss_jordan.php Gauss–Jordan Elimination Calculator]
*[http://www.mathhelpforum.com/math-help/f5/matricies-need-help-164225.html Example of detailed solutions SLE in four unknowns using the Gauss-Jordan elimination]

{{Numerical linear algebra}}

{{DEFAULTSORT:Gauss–Jordan Elimination}}
[[Category:Numerical linear algebra]]
[[Category:Exchange algorithms]]

[[ar:حذف غاوس-جوردان]]
[[de:Gauß-Jordan-Algorithmus]]
[[es:Eliminación de Gauss-Jordan]]
[[fr:Élimination de Gauss-Jordan]]
[[ko:가우스-요르단 소거법]]
[[id:Eliminasi Gauss-Jordan]]
[[is:Gauß-Jordan eyðing]]
[[it:Algoritmo di Gauss-Jordan]]
[[nl:Gauss-Jordaneliminatie]]
[[pt:Eliminação de Gauss-Jordan]]
[[ru:Метод Гаусса — Жордана]]
[[uk:Метод Гауса — Жордана]]
[[zh:高斯-若爾當消元法]]
