Iterative Methods: Convergence of Jacobi and Gauss-Seidel Methods
If the matrix $A$ is diagonally dominant, i.e., the values of the diagonal components are large enough, then this is a sufficient condition for the two methods to converge. In particular, if every diagonal component satisfies $|A_{ii}|>\sum_{j=1,j\ne i}^{n}|A_{ij}|$, then the two methods are guaranteed to converge. Generally, when these methods are used, the programmer should first apply pivoting (exchanging the rows and/or columns of the matrix $A$) to ensure the largest possible diagonal components. Use the code above and see what happens after 100 iterations for the following system when the initial guess is:
![]()
The system above can be manipulated into a diagonally dominant system. In this case, the columns are interchanged, and so the order of the variables is reversed:
![]()
This system will now converge for any choice of an initial guess!
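To make the effect concrete, here is a minimal sketch of the experiment in NumPy. The 2×2 system used below is a made-up stand-in, not the system from the text above: its coefficient matrix is not diagonally dominant, so the Jacobi iterations diverge, but interchanging the columns makes it dominant and the iterations converge.

```python
import numpy as np

def jacobi(A, b, x0, iters=100):
    """Plain Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii."""
    D = np.diag(A)           # diagonal components A_ii
    R = A - np.diagflat(D)   # off-diagonal part R (zero diagonal)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Hypothetical example system (not the one from the text above):
# not diagonally dominant, since |1| < |4| in row 1 and |1| < |3| in row 2.
A = np.array([[1.0, 4.0],
              [3.0, 1.0]])
b = np.array([9.0, 5.0])

print(jacobi(A, b, np.zeros(2)))        # entries grow without bound: divergence

# Interchanging the columns reverses the order of the unknowns and yields a
# diagonally dominant matrix; the same iteration now converges.
print(jacobi(A[:, ::-1], b, np.zeros(2)))  # -> [2. 1.], i.e. x2 = 2, x1 = 1
```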
To show how the condition on the diagonal components, $|A_{ii}|>\sum_{j=1,j\ne i}^{n}|A_{ij}|$, is a sufficient condition for the convergence of the iterative methods (solving $Ax=b$), the proof is presented here for the Jacobi method; the proof for the Gauss-Seidel method is of the same nature.
We want to prove that if, for every $i$, $|A_{ii}|>\sum_{j=1,j\ne i}^{n}|A_{ij}|$, then the Jacobi method converges.
To this end, consider the formulation of the Jacobi method, i.e.,

\[x^{(k+1)}=D^{-1}\left(b-Rx^{(k)}\right)\]

where $D$ is the diagonal matrix holding the diagonal components of $A$ and $R=A-D$ collects the off-diagonal components.
Therefore, $x_i^{(k+1)}$, being the approximate solution for $x_i$ at iteration $k+1$, is,
\[x_i^{(k+1)}=\frac{1}{A_{ii}}\left(b_i-\sum_{j=1}^{n}R_{ij}x_j^{(k)}\right)\]
Since $R_{ii}=0$ (the diagonal components of $R$ are zero) and $R_{ij}=A_{ij}$ for $j\ne i$, the above equation can be written as,
\[x_i^{(k+1)}=\frac{1}{A_{ii}}\left(b_i-\sum_{j=1, j\ne i}^{n}A_{ij}x_j^{(k)}\right)\]
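As a concrete instance, for the hypothetical diagonally dominant system used in the code sketch above (not the system from the text), the splitting and the component updates read,

\[A=\begin{pmatrix}4&1\\1&3\end{pmatrix},\quad D=\begin{pmatrix}4&0\\0&3\end{pmatrix},\quad R=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad x_1^{(k+1)}=\tfrac{1}{4}\left(9-x_2^{(k)}\right),\quad x_2^{(k+1)}=\tfrac{1}{3}\left(5-x_1^{(k)}\right)\]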
Defining the error of $x_i^{(k+1)}$ relative to the exact solution $x_i$ as,

\[e_i^{k+1}=x_i^{(k+1)}-x_i\]

and noting that the exact solution satisfies $x_i=\frac{1}{A_{ii}}\left(b_i-\sum_{j=1,j\ne i}^{n}A_{ij}x_j\right)$, we can write,
\[\begin{split}e_i^{k+1}&=\frac{1}{A_{ii}}\left(b_i-\sum_{j=1,j\ne i}^{n}A_{ij}x_j^{(k)}\right)-\frac{1}{A_{ii}}\left(b_i-\sum_{j=1,j\ne i}^{n}A_{ij}x_j\right)\\&=\frac{1}{A_{ii}}\sum_{j=1,j\ne i}^{n}A_{ij}(x_j-x_j^{(k)})\end{split}\]
which, by the triangle inequality, implies

\[|e_i^{k+1}|\le\frac{1}{|A_{ii}|}\sum_{j=1,j\ne i}^{n}|A_{ij}|\,|x_j-x_j^{(k)}|\]
Letting $|e_j^k|=|x_j-x_j^{(k)}|$, we can write,

\[|e_i^{k+1}|\le\sum_{j=1,j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}|e_j^k|\]

where $|e_j^k|$ is the absolute value of the error of $x_j^{(k)}$ at the $k$-th iteration. Because $j\ne i$ in the sum, the right-hand side does not involve $|e_i^k|$, the error of $x_i^{(k)}$.
Now let $E^k$ be the maximum of the absolute values of the errors of $x_j^{(k)}$ for $j=1,\dots,n$; in mathematical notation, $E^k$ is expressed as,

\[E^k=\max_{1\le j\le n}|e_j^k|\]
Note that $|e_i^k|$, the error of $x_i^{(k)}$, is also involved in calculating $E^k$.
Consequently,

\[|e_i^{k+1}|\le\sum_{j=1,j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}|e_j^k|\le\sum_{j=1,j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}E^k\]
or,
\[|e_i^{k+1}|\le\left(\sum_{j=1, j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}\right)E^k\]
This indicates that if the positive value $\sum_{j=1,j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}<1$ for every $i$, then,

\[|e_i^{k+1}|<E^k\quad\Rightarrow\quad E^{k+1}<E^k\]

which reads: the error at the $(k+1)$-th iteration is strictly less than the error at the $k$-th iteration. More precisely, setting $c=\max_i\sum_{j=1,j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}<1$ gives $E^{k+1}\le cE^k$, so $E^k\le c^kE^0\to 0$; the error progressively decreases through the iterations and convergence occurs.
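This contraction can be observed numerically. The following is a minimal sketch, reusing the hypothetical diagonally dominant system from the earlier example: it computes the factor $c$ from the proof and tracks $E^k$ over the Jacobi iterations; each ratio $E^{k+1}/E^k$ stays at or below $c$.

```python
import numpy as np

# Hypothetical diagonally dominant test system (the column-swapped example above).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([9.0, 5.0])
x_exact = np.linalg.solve(A, b)        # reference solution, x = (2, 1)

D = np.diag(A)
R = A - np.diagflat(D)

# Contraction factor c = max_i sum_{j != i} |A_ij| / |A_ii| from the proof.
c = np.max((np.sum(np.abs(A), axis=1) - np.abs(D)) / np.abs(D))
print("c =", c)                        # 1/3 < 1, so convergence is guaranteed

x = np.zeros(2)
E = np.max(np.abs(x - x_exact))        # E^0: maximum absolute error
for k in range(10):
    x = (b - R @ x) / D                # one Jacobi update
    E_new = np.max(np.abs(x - x_exact))
    print(f"E^{k+1} = {E_new:.2e}, ratio = {E_new / E:.3f}")  # ratio <= c
    E = E_new
```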
In summary, the “diagonal dominance” condition $|A_{ii}|>\sum_{j=1,j\ne i}^{n}|A_{ij}|$, which can also be written as

\[\sum_{j=1, j\ne i}^{n}\frac{|A_{ij}|}{|A_{ii}|}<1\]

is sufficient for the convergence of the Jacobi method. This completes the proof.
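As a practical takeaway, the condition can be tested directly before choosing an iterative solver. Here is a minimal sketch (the helper name `is_diagonally_dominant` is my own, not from the text):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |A_ii| > sum_{j != i} |A_ij| holds for every row i."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d
    return bool(np.all(d > off))

print(is_diagonally_dominant(np.array([[1.0, 4.0], [3.0, 1.0]])))  # False
print(is_diagonally_dominant(np.array([[4.0, 1.0], [1.0, 3.0]])))  # True
```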
Lecture Video
The following video covers the convergence of the Jacobi and Gauss-Seidel Methods.
