Open Educational Resources

Iterative Methods: Jacobi Method

Introduction

In the previous section, we introduced methods that produce an exact solution for a determined linear system Ax=b. Those direct methods rely on solving the full set of equations exactly. There are also iterative numerical techniques, similar to the iterative schemes presented in the root-finding section. When n is relatively large, and when the matrix is banded, these iterative methods can become more efficient than the direct methods described previously.

The Jacobi Method

The Jacobi method is named after Carl Gustav Jacob Jacobi. The method is akin to the fixed-point iteration method for single-variable root finding described before. First, notice that a linear system Ax=b of size n can be written as:

    \[ \left(\begin{matrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right) \]

The left hand side can be decomposed as follows:

    \[ \left(\begin{matrix}0&a_{12}&\cdots&a_{1n}\\a_{21}&0&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&0\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)+\left(\begin{matrix}a_{11}&0&\cdots&0\\0&a_{22}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a_{nn}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right) \]

Effectively, we have separated A into the sum of two matrices: A=R+D, where R has zero entries on the diagonal and D is a diagonal matrix. Provided the diagonal components of A are nonzero, D is a diagonal matrix with nonzero diagonal entries; it can easily be inverted, and its inverse is:

    \[ D^{-1}=\left(\begin{matrix}\frac{1}{a_{11}}&0&\cdots&0\\0&\frac{1}{a_{22}}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\frac{1}{a_{nn}}\end{matrix}\right) \]

Therefore:

    \[ Rx+Dx=b\Rightarrow x=D^{-1}(b-Rx) \]
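The update x=D^{-1}(b-Rx) can be sketched in a few lines of NumPy. The 2×2 system below is a small hypothetical example chosen only for illustration:

```python
import numpy as np

# Hypothetical 2x2 system for illustration
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

D_inv = np.diag(1.0 / np.diag(A))   # inverse of the diagonal part D
R = A - np.diag(np.diag(A))         # off-diagonal part R (zero diagonal)

x = np.zeros(2)                     # initial guess
x_new = D_inv @ (b - R @ x)         # one Jacobi update
```

With the zero initial guess, the first update simply returns D^{-1}b, i.e. each component b_i/a_{ii}.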

This form is similar to the fixed-point iteration method. Assuming initial guesses for the components of the vector x and substituting them into the right-hand side yields a new estimate for the components of x. In addition to requiring nonzero diagonal components of A, there are other conditions on the matrix D^{-1}R for this method to converge to the correct solution, which are beyond the scope of these notes. One useful fact is that the method is guaranteed to converge when A is strictly diagonally dominant, i.e., when the magnitude of each diagonal component of A is larger than the sum of the magnitudes of the other components in its row. The stopping criterion for this algorithm is based on the norm of the difference between successive iterates of x; here we use the Euclidean norm. So, if the components of the vector x after iteration i are x_1^{(i)}, x_2^{(i)},\cdots,x_n^{(i)}, and after iteration i+1 they are x_1^{(i+1)}, x_2^{(i+1)},\cdots,x_n^{(i+1)}, then the stopping criterion is:

    \[ \frac{\|x^{(i+1)}-x^{(i)}\|}{\|x^{(i+1)}\|}\leq \varepsilon_s \]

where

    \[\begin{split} \|x^{(i+1)}-x^{(i)}\|&=\sqrt{(x_1^{(i+1)}-x_1^{(i)})^2+(x_2^{(i+1)}-x_2^{(i)})^2+\cdots+(x_n^{(i+1)}-x_n^{(i)})^2}\\ \|x^{(i+1)}\|&=\sqrt{(x_1^{(i+1)})^2+(x_2^{(i+1)})^2+\cdots+(x_n^{(i+1)})^2}\\ \end{split} \]

Note that any other norm function can work as well.
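The relative error above is straightforward to compute with NumPy's norm function; here it is evaluated for the first two iterates of the worked example that follows:

```python
import numpy as np

# Successive iterates (taken from the worked example below)
x_old = np.array([2.717, -2.7286, 7.13])
x_new = np.array([3.00105, -2.49038, 7.00393])

# Relative approximate error using the Euclidean (2-) norm
err = np.linalg.norm(x_new - x_old) / np.linalg.norm(x_new)
```

`np.linalg.norm` defaults to the Euclidean norm; passing e.g. `ord=np.inf` would use the maximum norm instead, which works equally well as a stopping criterion.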

Example

Consider the system Ax=b defined as:

    \[ \left(\begin{matrix}3&-0.1&-0.2\\0.1&7&-0.3\\0.3&-0.2&10\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}7.85\\-19.3\\71.4\end{array}\right) \]

The Jacobi method with a stopping criterion of \varepsilon_s=0.0001 will be used. First the system is rearranged to the form:

    \[ \left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{matrix}\frac{1}{3}&0&0\\0&\frac{1}{7}&0\\0&0&\frac{1}{10}\end{matrix}\right)\left(\left(\begin{array}{c}7.85\\-19.3\\71.4\end{array}\right)-\left(\begin{matrix}0&-0.1&-0.2\\0.1&0&-0.3\\0.3&-0.2&0\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)\right) \]

Then, the initial guesses for the components x_1^{(0)}=1, x_2^{(0)}=1, x_3^{(0)}=1 are used to calculate the new estimates:

    \[ x^{(1)}=\left(\begin{array}{c}2.717\\-2.7286\\7.13\end{array}\right) \]

The relative approximate error in this case is

    \[ \varepsilon_r=\frac{\sqrt{(2.717-1)^2+(-2.7286-1)^2+(7.13-1)^2}}{\sqrt{2.717^2+(-2.7286)^2+7.13^2}}=0.91>\varepsilon_s \]

The next iteration:

    \[ x^{(2)}=\left(\begin{array}{c}3.00105\\-2.49038\\7.00393\end{array}\right) \]

The relative approximate error in this case is

    \[ \varepsilon_r=\frac{\sqrt{(3.00105-2.717)^2+(-2.49038+2.7286)^2+(7.00393-7.13)^2}}{\sqrt{3.00105^2+(-2.49038)^2+7.00393^2}}=0.0489>\varepsilon_s \]

The third iteration:

    \[ x^{(3)}=\left(\begin{array}{c}3.00058\\-2.49985\\7.00016\end{array}\right) \]

The relative approximate error in this case is

    \[ \varepsilon_r=\frac{\sqrt{(3.00058-3.00105)^2+(-2.49985+2.49038)^2+(7.00016-7.00393)^2}}{\sqrt{3.00058^2+(-2.49985)^2+7.00016^2}}=0.00127>\varepsilon_s \]

The fourth iteration:

    \[ x^{(4)}=\left(\begin{array}{c}3.00002\\-2.5\\6.99999\end{array}\right) \]

The relative approximate error in this case is

    \[ \varepsilon_r=\frac{\sqrt{(3.00002-3.00058)^2+(-2.5+2.49985)^2+(6.99999-7.00016)^2}}{\sqrt{3.00002^2+(-2.5)^2+6.99999^2}}=0.000076 < \varepsilon_s \]

Therefore convergence has been achieved. The exact solution is in fact:

    \[ x=A^{-1}b=\left(\begin{array}{c}3.\\-2.5\\7\end{array}\right) \]
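The exact solution can be double-checked by solving the system directly with NumPy, mirroring x=A^{-1}b above:

```python
import numpy as np

# Verify the worked example by solving the system directly
A = np.array([[3, -0.1, -0.2],
              [0.1, 7, -0.3],
              [0.3, -0.2, 10]])
b = np.array([7.85, -19.3, 71.4])

x_exact = np.linalg.solve(A, b)  # → [3.0, -2.5, 7.0]
```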

Programming the Method

We will use the built-in Norm function for the stopping criterion. In the following code, the procedure “J” takes the matrix A, the vector b, and the current guess x, and returns a new estimate for the vector x. The maximum number of iterations is 100; the algorithm stops when either the maximum number of iterations is reached or \varepsilon_r\leq \varepsilon_s:

Mathematica Code
J[A_, b_, x_] := Module[{n = Length[A], InvD, R},
  (* build the inverse diagonal matrix InvD and the off-diagonal matrix R *)
  InvD = Table[0, {i, 1, n}, {j, 1, n}];
  R = A;
  Do[InvD[[i, i]] = 1/A[[i, i]]; R[[i, i]] = 0, {i, 1, n}];
  (* one Jacobi update: x_new = InvD.(b - R.x) *)
  InvD.(b - R.x)
  ]

A = {{3, -0.1, -0.2}, {0.1, 7, -0.3}, {0.3, -0.2, 10}}
b = {7.85, -19.3, 71.4}
x = {{1, 1, 1.}};
MaxIter = 100;
ErrorTable = {1};
eps = 0.001;
i = 2;

While[And[i <= MaxIter, Abs[ErrorTable[[i - 1]]] > eps], 
 xi = J[A, b, x[[i - 1]]]; x = Append[x, xi]; 
 ei = Norm[x[[i]] - x[[i - 1]]]/Norm[x[[i]]]; 
 ErrorTable = Append[ErrorTable, ei]; i++]
x // MatrixForm
ErrorTable // MatrixForm
Python Code
import numpy as np
def J(A, b, x):
  # One Jacobi update: x_new = D^{-1}(b - R x)
  A = np.array(A)
  n = len(A)
  InvD = np.zeros([n, n])
  R = A.copy()  # copy so zeroing the diagonal below does not alter A
  for i in range(n):
    InvD[i, i] = 1/A[i, i]
    R[i, i] = 0
  return np.matmul(InvD, (b - np.matmul(R, x)))

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = [[1, 1, 1.]]
MaxIter = 100
ErrorTable = [1]
eps = 0.001
i = 1

while i <= MaxIter and abs(ErrorTable[i - 1]) > eps:
  xi = J(A, b, x[i - 1])
  x.append(xi)
  ei = np.linalg.norm(x[i] - x[i - 1])/np.linalg.norm(x[i]) 
  ErrorTable.append(ei)
  i+=1
print("x:",np.array(x))
print("ErrorTable:",np.vstack(ErrorTable))

Lecture Video

The following video covers the Jacobi method.


