Iterative Methods: Jacobi Method
Introduction
In the previous section, we introduced methods that produce an exact solution for the determined linear system \(Ax=b\). These methods rely on exactly solving the set of equations at hand. There are other numerical techniques that involve iterative methods, similar to the iterative methods shown in the root finding methods section. When \(n\) is relatively large, and when the matrix is banded, these iterative methods can become more efficient than the direct methods described above.
The Jacobi Method
The Jacobi method is named after Carl Gustav Jacob Jacobi. The method is akin to the fixed-point iteration method for single-variable root finding described before. First, notice that a linear system \(Ax=b\) of size \(n\) can be written as:
\[ \left(\begin{matrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\vdots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right) \]
The left hand side can be decomposed as follows:
\[ \left(\begin{matrix}0&a_{12}&\cdots&a_{1n}\\a_{21}&0&\cdots&a_{2n}\\\vdots&\vdots&\vdots&\vdots\\a_{n1}&a_{n2}&\cdots&0\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)+\left(\begin{matrix}a_{11}&0&\cdots&0\\0&a_{22}&\cdots&0\\\vdots&\vdots&\vdots&\vdots\\0&0&\cdots&a_{nn}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right) \]
Effectively, we have separated \(A\) into two additive matrices: \(A=R+D\), where \(R\) has zero entries in the diagonal components and \(D\) is a diagonal matrix. Provided the diagonal components of \(A\) are nonzero, \(D\) has nonzero diagonal entries and can easily be inverted; its inverse is:
\[ D^{-1}=\left(\begin{matrix}\frac{1}{a_{11}}&0&\cdots&0\\0&\frac{1}{a_{22}}&\cdots&0\\\vdots&\vdots&\vdots&\vdots\\0&0&\cdots&\frac{1}{a_{nn}}\end{matrix}\right) \]
Therefore:
\[ Rx+Dx=b\Rightarrow x=D^{-1}(b-Rx) \]
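Equivalently, written out component by component, the Jacobi update from iteration \(i\) to iteration \(i+1\) reads:
\[ x_k^{(i+1)}=\frac{1}{a_{kk}}\left(b_k-\sum_{j\neq k}a_{kj}x_j^{(i)}\right),\qquad k=1,2,\ldots,n \]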
This form is similar to the fixed-point iteration method. By assuming initial guesses for the components of the vector \(x\) and substituting them into the right-hand side, a new estimate for the components of \(x\) can be computed. In addition to requiring nonzero diagonal components of \(A\), there are other requirements on the matrix \(A\) for this method to converge to a proper solution, which are beyond the scope of these notes. One useful fact is that the method converges when the diagonal components of \(A\) are large compared to the remaining components in their rows, i.e., when \(A\) is strictly diagonally dominant. The criterion for stopping this algorithm is based on the size, or norm, of the difference between the vectors \(x\) obtained in successive iterations. For this, we can use the Euclidean norm. So, if the components of the vector \(x\) after iteration \(i\) are \(x_1^{(i)},x_2^{(i)},\ldots,x_n^{(i)}\), and after iteration \(i+1\) the components are \(x_1^{(i+1)},x_2^{(i+1)},\ldots,x_n^{(i+1)}\), then the stopping criterion would be:
\[ \frac{\|x^{(i+1)}-x^{(i)}\|}{\|x^{(i+1)}\|}\leq \varepsilon_s \]
where
\[\begin{split} \|x^{(i+1)}-x^{(i)}\|&=\sqrt{(x_1^{(i+1)}-x_1^{(i)})^2+(x_2^{(i+1)}-x_2^{(i)})^2+\cdots+(x_n^{(i+1)}-x_n^{(i)})^2}\\ \|x^{(i+1)}\|&=\sqrt{(x_1^{(i+1)})^2+(x_2^{(i+1)})^2+\cdots+(x_n^{(i+1)})^2}\\ \end{split} \]
Note that any other norm function can work as well.
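As a minimal sketch of this check in Python with NumPy (the names x_new, x_old, and eps_s are illustrative and not part of the code given later):
import numpy as np
def converged(x_new, x_old, eps_s):
  # Relative Euclidean-norm change between two successive Jacobi iterates
  return np.linalg.norm(x_new - x_old)/np.linalg.norm(x_new) <= eps_s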
Example
Consider the system \(Ax=b\) defined as:
\[ \left(\begin{matrix}3&-0.1&-0.2\\0.1&7&-0.3\\0.3&-0.2&10\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}7.85\\-19.3\\71.4\end{array}\right) \]
The Jacobi method with a stopping criterion of \(\varepsilon_s=0.001\) will be used. First, the system is rearranged into the form:
\[ \left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{matrix}\frac{1}{3}&0&0\\0&\frac{1}{7}&0\\0&0&\frac{1}{10}\end{matrix}\right)\left(\left(\begin{array}{c}7.85\\-19.3\\71.4\end{array}\right)-\left(\begin{matrix}0&-0.1&-0.2\\0.1&0&-0.3\\0.3&-0.2&0\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)\right) \]
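Written out per component, this rearrangement gives the iteration:
\[\begin{split} x_1^{(i+1)}&=\frac{7.85+0.1x_2^{(i)}+0.2x_3^{(i)}}{3}\\ x_2^{(i+1)}&=\frac{-19.3-0.1x_1^{(i)}+0.3x_3^{(i)}}{7}\\ x_3^{(i+1)}&=\frac{71.4-0.3x_1^{(i)}+0.2x_2^{(i)}}{10} \end{split} \]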
Then, the initial guesses \(x_1^{(0)}=x_2^{(0)}=x_3^{(0)}=1\) are used to calculate the new estimates:
\[ x^{(1)}=\left(\begin{array}{c}2.717\\-2.7286\\7.13\end{array}\right) \]
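For instance, substituting \(x^{(0)}=(1,1,1)\) into the rearranged system gives:
\[\begin{split} x_1^{(1)}&=\frac{7.85+0.1(1)+0.2(1)}{3}\approx 2.717\\ x_2^{(1)}&=\frac{-19.3-0.1(1)+0.3(1)}{7}\approx -2.7286\\ x_3^{(1)}&=\frac{71.4-0.3(1)+0.2(1)}{10}=7.13 \end{split} \]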
The relative approximate error in this case is
\[ \varepsilon_r=\frac{\sqrt{(2.717-1)^2+(-2.7286-1)^2+(7.13-1)^2}}{\sqrt{2.717^2+(-2.7286)^2+7.13^2}}=0.91>\varepsilon_s \]
The next iteration:
\[ x^{(2)}=\left(\begin{array}{c}3.00105\\-2.49038\\7.00393\end{array}\right) \]
The relative approximate error in this case is
\[ \varepsilon_r=\frac{\sqrt{(3.00105-2.717)^2+(-2.49038+2.7286)^2+(7.00393-7.13)^2}}{\sqrt{3.00105^2+(-2.49038)^2+7.00393^2}}=0.0489>\varepsilon_s \]
The third iteration:
\[ x^{(3)}=\left(\begin{array}{c}3.00058\\-2.49985\\7.00016\end{array}\right) \]
The relative approximate error in this case is
\[ \varepsilon_r=\frac{\sqrt{(3.00058-3.00105)^2+(-2.49985+2.49038)^2+(7.00016-7.00393)^2}}{\sqrt{3.00058^2+(-2.49985)^2+7.00016^2}}=0.00127>\varepsilon_s \]
The fourth iteration:
\[ x^{(4)}=\left(\begin{array}{c}3.00002\\-2.5\\6.99999\end{array}\right) \]
The relative approximate error in this case is
\[ \varepsilon_r=\frac{\sqrt{(3.00002-3.00058)^2+(-2.5+2.49985)^2+(6.99999-7.00016)^2}}{\sqrt{3.00002^2+(-2.5)^2+6.99999^2}}=0.000076 < \varepsilon_s \]
Therefore, convergence has been achieved. The exact solution is in fact:
\[ x=A^{-1}b=\left(\begin{array}{c}3\\-2.5\\7\end{array}\right) \]
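Indeed, substituting this solution back into the original equations confirms it:
\[\begin{split} 3(3)-0.1(-2.5)-0.2(7)&=9+0.25-1.4=7.85\\ 0.1(3)+7(-2.5)-0.3(7)&=0.3-17.5-2.1=-19.3\\ 0.3(3)-0.2(-2.5)+10(7)&=0.9+0.5+70=71.4 \end{split} \]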
Programming the Method
We will use the built-in Norm function for the stopping criterion. In the following code, the procedure “J” takes the matrix \(A\), the vector \(b\), and the guess \(x\), and returns a new guess for the vector \(x\). The maximum number of iterations is 100, and the iterations stop when either the maximum number of iterations is reached or the relative error satisfies \(\varepsilon_r\leq\varepsilon_s=0.001\):
J[A_, b_, x_] := (
  n = Length[A];
  (* InvD stores the inverse of the diagonal matrix D *)
  InvD = Table[0, {i, 1, n}, {j, 1, n}];
  (* R is a copy of A whose diagonal entries are set to zero below *)
  R = A;
  Do[InvD[[i, i]] = 1/A[[i, i]]; R[[i, i]] = 0, {i, 1, n}];
  (* Jacobi update: newx = D^(-1)(b - R.x) *)
  newx = InvD.(b - R.x)
  )
A = {{3, -0.1, -0.2}, {0.1, 7, -0.3}, {0.3, -0.2, 10}}
b = {7.85, -19.3, 71.4}
x = {{1, 1, 1.}};
MaxIter = 100;
ErrorTable = {1};
eps = 0.001;
i = 2;
While[And[i <= MaxIter, Abs[ErrorTable[[i - 1]]] > eps], 
 xi = J[A, b, x[[i - 1]]]; x = Append[x, xi]; 
 ei = Norm[x[[i]] - x[[i - 1]]]/Norm[x[[i]]]; 
 ErrorTable = Append[ErrorTable, ei]; i++]
x // MatrixForm
ErrorTable // MatrixForm
import numpy as np
def J(A, b, x):
  # One Jacobi update: x_new = D^(-1) (b - R x)
  A = np.array(A)
  n = len(A)
  InvD = np.zeros([n, n])
  # Copy A so that zeroing the diagonal does not alter the original matrix
  R = A.copy()
  for i in range(n):
    InvD[i, i] = 1/A[i, i]
    R[i, i] = 0
  return np.matmul(InvD, (b - np.matmul(R, x)))
A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = [np.array([1, 1, 1.])]
MaxIter = 100
ErrorTable = [1]
eps = 0.001
i = 1
while i <= MaxIter and abs(ErrorTable[i - 1]) > eps:
  xi = J(A, b, x[i - 1])
  x.append(xi)
  # Relative Euclidean-norm error between successive iterates
  ei = np.linalg.norm(x[i] - x[i - 1])/np.linalg.norm(x[i])
  ErrorTable.append(ei)
  i += 1
print("x:", np.array(x))
print("ErrorTable:", np.vstack(ErrorTable))
Lecture Video
The following video covers the Jacobi method.
