Iterative Methods: Gauss-Seidel Method
The Method
The Gauss-Seidel method is a slight modification of the Jacobi method that can converge faster. In the Gauss-Seidel method, the system is solved using forward substitution, so that each component uses the most recent values already obtained for the preceding components within the same iteration. This differs from the Jacobi method, in which all components of an iteration are calculated from the values of the previous iteration only. First, notice that a linear system $Ax = b$ of size $n$ can be written as:
![Rendered by QuickLaTeX.com \[\left(\begin{matrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\vdots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right)\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-4aee32b6a5dd0ed6c7b19ca64886e933_l3.png)
The left hand side can be decomposed as follows:
![Rendered by QuickLaTeX.com \[\left(\begin{matrix}0&a_{12}&\cdots&a_{1n}\\0&0&\cdots&a_{2n}\\\vdots&\vdots&\vdots&\vdots\\0&0&\cdots&0\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)+\left(\begin{matrix}a_{11}&0&\cdots&0\\a_{21}&a_{22}&\cdots&0\\\vdots&\vdots&\vdots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right)\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-7f002757ca163689dd5dda11f7d343fd_l3.png)
Effectively, we have separated $A$ into two additive matrices, $A = U + L$, where $U$ is an upper triangular matrix with zero diagonal components, while $L$ is a lower triangular matrix whose diagonal components are equal to those of $A$. If the diagonal components of $A$ are nonzero, then $Lx = b - Ux$ can be used to solve the system using forward substitution:
\[ Lx^{(k+1)} = b - Ux^{(k)} \]
The same stopping criterion as in the Jacobi method can be used for the Gauss-Seidel method.
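To make the splitting concrete, the decomposition and a single forward-substitution sweep can be sketched in Python with NumPy (an illustrative snippet, not part of the original text; the matrix and vector are those of the worked example in this section):

```python
import numpy as np

# The 3x3 system solved in the worked example
A = np.array([[3, -0.1, -0.2],
              [0.1, 7, -0.3],
              [0.3, -0.2, 10.]])
b = np.array([7.85, -19.3, 71.4])

L = np.tril(A)        # lower triangle of A, diagonal included
U = np.triu(A, k=1)   # strictly upper triangle of A, zero diagonal
assert np.allclose(L + U, A)

x = np.ones(3)                          # initial guess
x_new = np.linalg.solve(L, b - U @ x)   # one Gauss-Seidel sweep: L x_new = b - U x
print(x_new)
```

Because `L` is lower triangular, solving `L x_new = b - U x` amounts to forward substitution; `np.linalg.solve` is used here only for brevity.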
Example
Consider the system $Ax = b$ defined as:
![Rendered by QuickLaTeX.com \[ \left(\begin{matrix}3&-0.1&-0.2\\0.1&7&-0.3\\0.3&-0.2&10\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}7.85\\-19.3\\71.4\end{array}\right) \]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-42e93064f7cd95399075efd2524104a2_l3.png)
The Gauss-Seidel method with a stopping criterion of $\varepsilon_s = 0.001$ will be used. First, the system is rearranged into the form $Lx = b - Ux$:
![Rendered by QuickLaTeX.com \[\left(\begin{matrix}3&0&0\\0.1&7&0\\0.3&-0.2&10\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}7.85\\-19.3\\71.4\end{array}\right)-\left(\begin{matrix}0&-0.1&-0.2\\0&0&-0.3\\0&0&0\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-194e2c5828bafe3c29b048d99b874bce_l3.png)
Then, the initial guesses for the components, $x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 1$, are used to calculate the new estimates using forward substitution. Notice that in forward substitution, the value of $x_2^{(1)}$ uses the value obtained for $x_1^{(1)}$, and that $x_3^{(1)}$ uses the values of $x_1^{(1)}$ and $x_2^{(1)}$:
![Rendered by QuickLaTeX.com \[\begin{split}3x_1^{(1)}=7.85+0.1(1)+0.2(1)\Rightarrow x_1^{(1)}=2.716667\\0.1x_1^{(1)}+7x_2^{(1)}=-19.3+0.3(1)\Rightarrow x_2^{(1)}=-2.7531\\0.3x_1^{(1)}-0.2x_2^{(1)}+10x_3^{(1)}=71.4\Rightarrow x_3^{(1)}=7.00344\end{split}\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-8be73f3882371dd8bc44dc3b876121c2_l3.png)
The relative approximate error in this case is
\[ \varepsilon_r = \frac{\lVert x^{(1)} - x^{(0)} \rVert}{\lVert x^{(1)} \rVert} \approx 0.9106 \]
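As a quick check (an added snippet, not part of the original text), the relative approximate error of iteration 1, measured with the Euclidean norm as in the code later in this section, can be computed with NumPy:

```python
import numpy as np

x0 = np.array([1, 1, 1.])                    # initial guess
x1 = np.array([2.716667, -2.7531, 7.00344])  # iteration-1 values from above
err = np.linalg.norm(x1 - x0) / np.linalg.norm(x1)
print(round(err, 4))  # relative approximate error after iteration 1: 0.9106
```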
For iteration 2:
![Rendered by QuickLaTeX.com \[\begin{split}3x_1^{(2)}=7.85+0.1(-2.7531)+0.2(7.00344)\Rightarrow x_1^{(2)}=2.99179\\0.1x_1^{(2)}+7x_2^{(2)}=-19.3+0.3(7.00344)\Rightarrow x_2^{(2)}=-2.49974\\0.3x_1^{(2)}-0.2x_2^{(2)}+10x_3^{(2)}=71.4\Rightarrow x_3^{(2)}=7.00025\end{split}\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-38f628146f57ce23889373f39d7bf2f5_l3.png)
The relative approximate error in this case is
\[ \varepsilon_r = \frac{\lVert x^{(2)} - x^{(1)} \rVert}{\lVert x^{(2)} \rVert} \approx 0.0467 \]
For iteration 3:
![Rendered by QuickLaTeX.com \[\begin{split}3x_1^{(3)}=7.85+0.1(-2.49974)+0.2(7.00025)\Rightarrow x_1^{(3)}=3.00003\\0.1x_1^{(3)}+7x_2^{(3)}=-19.3+0.3(7.00025)\Rightarrow x_2^{(3)}=-2.49999\\0.3x_1^{(3)}-0.2x_2^{(3)}+10x_3^{(3)}=71.4\Rightarrow x_3^{(3)}=7.\end{split}\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-1099e5006ec340ba195ff244b767aec2_l3.png)
The relative approximate error in this case is
\[ \varepsilon_r = \frac{\lVert x^{(3)} - x^{(2)} \rVert}{\lVert x^{(3)} \rVert} \approx 0.00103 \]
which is still greater than $\varepsilon_s = 0.001$, so another iteration is required.
And for iteration 4:
![Rendered by QuickLaTeX.com \[\begin{split}3x_1^{(4)}=7.85+0.1(-2.49999)+0.2(7)\Rightarrow x_1^{(4)}=3.\\0.1x_1^{(4)}+7x_2^{(4)}=-19.3+0.3(7.)\Rightarrow x_2^{(4)}=-2.5\\0.3x_1^{(4)}-0.2x_2^{(4)}+10x_3^{(4)}=71.4\Rightarrow x_3^{(4)}=7.\end{split}\]](https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-97bbc2bb1a51a8bc627ff700825b3212_l3.png)
The relative approximate error in this case is
\[ \varepsilon_r = \frac{\lVert x^{(4)} - x^{(3)} \rVert}{\lVert x^{(4)} \rVert} \approx 3.9 \times 10^{-6} \]
Therefore, the system converges after iteration 4, similarly to the Jacobi method. It is not obvious in this example, but the Gauss-Seidel method can converge faster than the Jacobi method.
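To illustrate that claim, a small comparison (an added sketch, not part of the original text) counts the iterations each method needs to satisfy the same stopping criterion on this system; with the loose tolerance of $0.001$ both happen to take four iterations:

```python
import numpy as np

A = np.array([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10.]])
b = np.array([7.85, -19.3, 71.4])
D = np.diag(np.diag(A))   # diagonal part, used by the Jacobi update
L = np.tril(A)            # lower triangle incl. diagonal, used by Gauss-Seidel
U = np.triu(A, k=1)       # strictly upper triangle

def count_iterations(step, eps=0.001, max_iter=100):
    # iterate until the relative approximate error drops to eps or below
    x = np.ones(len(b))
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if np.linalg.norm(x_new - x) / np.linalg.norm(x_new) <= eps:
            return k
        x = x_new
    return max_iter

jacobi = count_iterations(lambda x: np.linalg.solve(D, b - (A - D) @ x))
gauss_seidel = count_iterations(lambda x: np.linalg.solve(L, b - U @ x))
print("Jacobi:", jacobi, "Gauss-Seidel:", gauss_seidel)
```

Tightening `eps` on this system shows Gauss-Seidel needing no more iterations than Jacobi, consistent with its use of the most recent component values.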
Programming The Method
Using forward substitution in the equation $Lx^{(k+1)} = b - Ux^{(k)}$, the component $x_i$ in iteration $k+1$ can be computed as follows:
(1) \[ x_i^{(k+1)} = \frac{b_i - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)}}{a_{ii}} \]
The following code utilizes the procedure “GS”, which takes a matrix $A$, a vector $b$, and a vector $x$ of guesses, and returns the new guess for the components of the vector $x$.
GS[A_, b_, x_] :=
(
n = Length[A];
xnew = Table[0, {i, 1, n}];
Do[xnew[[i]] = (b[[i]] - Sum[A[[i, j]] x[[j]], {j, i + 1, n}] - Sum[A[[i, j]] xnew[[j]], {j, 1, i - 1}])/A[[i, i]], {i, 1, n}];
xnew
)
A = {{3, -0.1, -0.2}, {0.1, 7, -0.3}, {0.3, -0.2, 10}}
b = {7.85, -19.3, 71.4}
x = {{1, 1, 1.}};
MaxIter = 100;
ErrorTable = {1};
eps = 0.001;
i = 2;
While[And[i <= MaxIter, Abs[ErrorTable[[i - 1]]] > eps],
xi = GS[A, b, x[[i - 1]]]; x = Append[x, xi];
ei = Norm[x[[i]] - x[[i - 1]]]/Norm[x[[i]]];
ErrorTable = Append[ErrorTable, ei]; i++]
x // MatrixForm
ErrorTable // MatrixForm
import numpy as np

def GS(A, b, x):
    # one Gauss-Seidel sweep: each component uses the newest available values
    A = np.array(A)
    n = len(A)
    xnew = np.zeros(n)
    for i in range(n):
        xnew[i] = (b[i] - sum(A[i, j]*x[j] for j in range(i + 1, n))
                   - sum(A[i, j]*xnew[j] for j in range(i)))/A[i, i]
    return xnew

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = [[1, 1, 1.]]
MaxIter = 100
ErrorTable = [1]
eps = 0.001
i = 1
while i <= MaxIter and abs(ErrorTable[i - 1]) > eps:
    xi = GS(A, b, x[i - 1])
    x.append(xi)
    ei = np.linalg.norm(x[i] - x[i - 1])/np.linalg.norm(x[i])
    ErrorTable.append(ei)
    i += 1
print("x:", np.array(x))
print("ErrorTable:", np.vstack(ErrorTable))
The example MATLAB code below is written to emphasize the approach and explicitly works with $L$ and $U$. However, the algorithm can be programmed more compactly, for example using the forward substitution form presented in Equation 1.
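As a sketch of that more compact matrix form (a Python illustration under the same $L$/$U$ splitting; this is not the original MATLAB listing), each iteration solves $Lx^{(k+1)} = b - Ux^{(k)}$ directly:

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=0.001, max_iter=100):
    # Iterate L x_new = b - U x, where L is the lower triangle of A
    # (diagonal included) and U the strictly upper triangle.
    # A general solve is used for brevity; since L is triangular,
    # it is equivalent to forward substitution.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    L = np.tril(A)
    U = np.triu(A, k=1)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(L, b - U @ x)
        if np.linalg.norm(x_new - x) / np.linalg.norm(x_new) <= eps:
            return x_new
        x = x_new
    return x

x = gauss_seidel([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]],
                 [7.85, -19.3, 71.4], [1, 1, 1.])
print(x)  # approximately [3, -2.5, 7]
```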
Lecture Video
The following video covers the Gauss-Seidel method.
