Open Educational Resources

Finding Roots of Equations: Bracketing Methods

An elementary observation from the previous method is that, in most cases, a function changes sign around its roots. The bracketing methods rely on the intermediate value theorem.

4.2.1 Intermediate Value Theorem

Statement: Let f:[a,b]\rightarrow \mathbb{R} be continuous with f(a)\leq f(b). Then, \forall y\in[f(a),f(b)]\ \exists x\in[a,b] such that y=f(x). The same applies if f(b)\leq f(a).
The proof of the intermediate value theorem relies on the continuity of the function f. Intuitively, because f is continuous, its values cannot change from f(a) to f(b) without passing through every value in between. A consequence of the theorem is that if f(a)\leq 0 \leq f(b), then there is x\in[a,b] such that f(x)=0. We can then find x iteratively using either the bisection method or the false position method.
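As a quick illustration of this consequence (a sketch in Python; f(x)=\sin{5x}+\cos{2x} is the function used in the examples later in this section), a sign change across an interval certifies that a root lies inside it:

```python
import math

# Example function used later in this section
f = lambda x: math.sin(5 * x) + math.cos(2 * x)

a, b = -0.6, -0.5
# f(a) > 0 and f(b) < 0: by the intermediate value theorem,
# a root of f(x) = 0 must lie somewhere in [a, b]
print(f(a), f(b), f(a) * f(b) < 0)
```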

4.2.2 Bisection Method

In the bisection method, if f(a)f(b)<0, an estimate for the root of the equation f(x)=0 can be found as the average of a and b:

    \[x_i=\frac{a+b}{2}\]

Upon evaluating f(x_i), the next iteration sets either a=x_i or b=x_i so that the root remains bracketed between a and b. The following describes an algorithm for the bisection method given a<b, f(x), \varepsilon_s, and a maximum number of iterations:
Step 1: Evaluate f(a) and f(b) to ensure that f(a)f(b)<0. Otherwise, exit with an error.
Step 2: Calculate the value of the root in iteration i as x_i=\frac{a_i+b_i}{2}. Check which of the following applies:

  1. If f(x_i)=0, then the root has been found and the relative error \varepsilon_r=0. Exit.
  2. If f(x_i)f(a_i)<0, then the root lies between a_i and x_i, so set a_{i+1}=a_i and b_{i+1}=x_i. The relative approximate error is \varepsilon_r=\frac{x_{i+1}-x_{i}}{x_{i+1}}.
  3. If f(x_i)f(b_i)<0, then the root lies between x_i and b_i, so set a_{i+1}=x_i and b_{i+1}=b_i. The relative approximate error is \varepsilon_r=\frac{x_{i+1}-x_{i}}{x_{i+1}}.

Step 3: Set i=i+1. If i reaches the maximum number of iterations or if |\varepsilon_r|\leq \varepsilon_s, then the iterations are stopped. Otherwise, return to step 2 with the new interval a_{i+1} and b_{i+1}.
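The three steps above can be sketched in Python as follows (a minimal illustrative implementation, not the MATLAB or Mathematica code referenced on this page; the function and variable names are our own):

```python
import math

def bisection(f, a, b, eps_s=0.0005, max_iter=100):
    """Bisection method; returns (root estimate, relative approximate error)."""
    if f(a) * f(b) >= 0:                    # Step 1: require a sign change
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_old, eps_r = None, float("inf")
    for i in range(1, max_iter + 1):
        x = (a + b) / 2                     # Step 2: midpoint estimate
        if f(x) == 0:
            return x, 0.0
        if f(x) * f(a) < 0:                 # root lies between a and x
            b = x
        else:                               # root lies between x and b
            a = x
        if x_old is not None:
            eps_r = (x - x_old) / x         # relative approximate error
            if abs(eps_r) <= eps_s:         # Step 3: stopping criterion
                return x, eps_r
        x_old = x
    return x, eps_r

# Usage with the section's example: root of sin(5x) + cos(2x) on [-0.6, -0.5]
f = lambda x: math.sin(5 * x) + math.cos(2 * x)
root, err = bisection(f, -0.6, -0.5)
```

The exact root of this example on [-0.6, -0.5] is -\pi/6, which the estimate approaches to within the interval half-length at each step.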

Example

Setting \varepsilon_s=0.0005 and applying this process to f(x)=\sin{5x}+\cos{2x} with a=-0.6 and b=-0.5 yields the estimate x_9=-0.523633 after 9 iterations with \varepsilon_r=-0.000373, as shown below. Similarly, with a=-0.3 and b=-0.2 the process yields x_{10}=-0.224316 after 10 iterations with \varepsilon_r=-0.000435, while with a=0.6 and b=0.7 it yields x_9=0.673242 after 9 iterations with \varepsilon_r=0.00029:

[Table: bisection iterations for each of the three brackets]

The following code can be used to implement the bisection method in MATLAB. For the sake of demonstration, it finds the roots of the function in the above example. The example function is defined in a separate file.

The following tool illustrates this process for a=0.1 and b=0.9. Use the slider to view the process after each iteration. In the first iteration, the interval [a,b]=[0.1,0.9]. f(a)f(b)<0, so, x_1=\frac{a+b}{2}=0.5. In the second iteration, the interval becomes [0.5,0.9] and the new estimate x_2=\frac{0.5+0.9}{2}=0.7. The relative approximate error in the estimate \varepsilon_r=\frac{0.7-0.5}{0.7}=0.2857. You can view the process to see how it converges after 12 iterations.


Error Estimation

If V_t is the true value of the root and V_a is the estimate, then, the number of iterations n needed to ensure that the absolute value of the error |E|=|V_t-V_a| is less than or equal to a certain value can be easily obtained. Let L=b-a be the length of the original interval used. The first estimate x_1=\frac{a+b}{2} and so, in the next iteration, the interval where the root is contained has a length of \frac{L}{2}. As the process evolves, the interval for the iteration number n has a length of \frac{L}{2^n}. Since the true value exists in an interval of length \frac{L}{2^n}, the absolute value of the error is such that:

    \[|E|\leq \frac{L}{2^n}\]

Therefore, to guarantee that the absolute value of the error is no greater than a desired bound E_{max}, the number of iterations required is:

    \[\frac{L}{2^n}=E_{max}\Rightarrow n=\frac{\ln{\frac{L}{E_{max}}}}{\ln{2}}\]
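In code, this is a direct transcription of the formula (rounding up, since n must be a whole number of iterations):

```python
import math

def iterations_needed(L, E_max):
    # n = ln(L / E_max) / ln(2), rounded up to a whole iteration
    return math.ceil(math.log(L / E_max) / math.log(2))

print(iterations_needed(1, 0.004))  # → 8
```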

Example

As an example, consider f(x)=x^3+x^2-10. If we wish to find the root of the equation f(x)=0 in the interval [1,2] with an absolute error less than or equal to 0.004, the number of iterations required is 8:

    \[n=\frac{\ln{\frac{1}{0.004}}}{\ln{2}}= 7.966 \approx 8\]

The actual root with 10 significant digits is V_t=1.867460025.
Using the process above, after the first iteration x_1=1.5, so the root lies between 1.5 and 2; the length of the interval is 0.5 and the error in the estimate is less than 0.5. The length of the interval after iteration 8 is 1/2^8\approx 0.0039, so the error in the estimate is less than 0.0039. It should be noted, however, that the actual error was already below this upper bound after the seventh iteration.
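This behaviour can be checked numerically (a sketch; V_t below is the 10-significant-digit root quoted above):

```python
# Compare the error bound L / 2^n with the actual error of the bisection
# estimate for f(x) = x^3 + x^2 - 10 on [1, 2]
f = lambda x: x**3 + x**2 - 10
V_t = 1.867460025                # 10-significant-digit root quoted above
a, b = 1.0, 2.0
for n in range(1, 9):
    x = (a + b) / 2              # estimate x_n
    if f(x) * f(a) < 0:
        b = x
    else:
        a = x
    bound = 1.0 / 2**n           # length of the bracketing interval
    actual = abs(V_t - x)        # actual error, always <= bound
    print(n, bound, actual)
```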

[Table: bisection iterations and error bounds for f(x)=x^3+x^2-10 on [1,2]]

The procedure for solving the above example in MATLAB is available in the following files. The polynomial function is defined in a separate file.

4.2.3 False Position Method

In the false position method, the new estimate x_i at iteration i is obtained by considering the linear function passing through the two points (a,f(a)) and (b,f(b)). The point of intersection of this line with the x axis can be obtained using one of the following formulas:

    \[x_i=a-f(a)\frac{(b-a)}{f(b)-f(a)}=b-f(b)\frac{(b-a)}{f(b)-f(a)}=\frac{af(b)-bf(a)}{f(b)-f(a)}\]
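The three expressions are algebraically identical (each reduces to the last form); a quick numerical check with an arbitrary sample function illustrates this:

```python
# All three forms of the secant x-intercept give the same value;
# f and the bracket [a, b] here are arbitrary sample choices
f = lambda x: x**3 + x**2 - 10
a, b = 1.0, 2.0
x1 = a - f(a) * (b - a) / (f(b) - f(a))
x2 = b - f(b) * (b - a) / (f(b) - f(a))
x3 = (a * f(b) - b * f(a)) / (f(b) - f(a))
print(x1, x2, x3)  # all equal (up to floating-point rounding)
```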

Upon evaluating f(x_i), the next iteration sets either a=x_i or b=x_i so that the root remains bracketed between a and b. The following describes an algorithm for the false position method given a<b, f(x), \varepsilon_s, and a maximum number of iterations:
Step 1: Evaluate f(a) and f(b) to ensure that f(a)f(b)<0. Otherwise exit with an error.
Step 2: Calculate the value of the root in iteration i as x_i=\frac{a_if(b_i)-b_if(a_i)}{f(b_i)-f(a_i)}. Check which of the following applies:

  1. If f(x_i)=0, then the root has been found and the relative error \varepsilon_r=0. Exit.
  2. If f(x_i)f(a_i)<0, then the root lies between a_i and x_i, so set a_{i+1}=a_i and b_{i+1}=x_i. The relative approximate error is \varepsilon_r=\frac{x_{i+1}-x_{i}}{x_{i+1}}.
  3. If f(x_i)f(b_i)<0, then the root lies between x_i and b_i, so set a_{i+1}=x_i and b_{i+1}=b_i. The relative approximate error is \varepsilon_r=\frac{x_{i+1}-x_{i}}{x_{i+1}}.

Step 3: Set i=i+1. If i reaches the maximum number of iterations or if |\varepsilon_r|\leq \varepsilon_s, then the iterations are stopped. Otherwise, return to step 2 with the new interval a_{i+1} and b_{i+1}.
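The algorithm can be sketched in Python in the same way as the bisection method (a minimal illustrative implementation; the function and variable names are our own):

```python
import math

def false_position(f, a, b, eps_s=0.0005, max_iter=100):
    """False position method; returns (root estimate, relative approximate error)."""
    if f(a) * f(b) >= 0:                    # Step 1: require a sign change
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_old, eps_r = None, float("inf")
    for i in range(1, max_iter + 1):
        # Step 2: x-intercept of the line through (a, f(a)) and (b, f(b))
        x = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if f(x) == 0:
            return x, 0.0
        if f(x) * f(a) < 0:                 # root lies between a and x
            b = x
        else:                               # root lies between x and b
            a = x
        if x_old is not None:
            eps_r = (x - x_old) / x         # relative approximate error
            if abs(eps_r) <= eps_s:         # Step 3: stopping criterion
                return x, eps_r
        x_old = x
    return x, eps_r

# Usage with the section's example: root of sin(5x) + cos(2x) on [-0.6, -0.5]
f = lambda x: math.sin(5 * x) + math.cos(2 * x)
root, err = false_position(f, -0.6, -0.5)
```

Note that this converges in far fewer iterations than the bisection method on the same bracket, as the examples below show.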

Example

Setting \varepsilon_s=0.0005 and applying this process to f(x)=\sin{5x}+\cos{2x} with a=-0.6 and b=-0.5 yields the estimate x_3=-0.523569 after 3 iterations with \varepsilon_r=0.000498, as shown below. Similarly, with a=-0.3 and b=-0.2 the process yields x_4=-0.2244 after 4 iterations with \varepsilon_r=-0.00015, while with a=0.6 and b=0.7 it yields x_3=0.673198 after 3 iterations with \varepsilon_r=-4.4\times 10^{-6}:

[Table: false-position iterations for each of the three brackets]

The procedure for implementing the false position root finding algorithm in MATLAB is available in the following files. The example function is defined in a separate file.

The following tool illustrates this process for a=0.1 and b=0.9. Use the slider to view the process after each iteration. In the first iteration, the interval [a,b]=[0.1,0.9]. f(a)f(b)<0, so, x_1=\frac{0.1f(0.9)-0.9f(0.1)}{f(0.9)-f(0.1)}=0.538249. In the second iteration, the interval becomes [0.538249,0.9] and the new estimate x_2=\frac{0.538249f(0.9)-0.9f(0.538249)}{f(0.9)-f(0.538249)}=0.693886. The relative approximate error in the estimate \varepsilon_r=\frac{0.693886-0.538249}{0.693886}=0.224297. You can view the process to see how it converges after very few iterations.
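The first two iterations described above can be reproduced directly (a sketch; the quoted values come from the tool):

```python
import math

# Reproduce the tool's first two false-position iterations for a = 0.1, b = 0.9
f = lambda x: math.sin(5 * x) + math.cos(2 * x)
a, b = 0.1, 0.9
x1 = (a * f(b) - b * f(a)) / (f(b) - f(a))      # ≈ 0.538249
# f(x1) > 0 while f(b) < 0, so the new bracket is [x1, b]
x2 = (x1 * f(b) - b * f(x1)) / (f(b) - f(x1))   # ≈ 0.693886
eps_r = (x2 - x1) / x2                          # ≈ 0.224297
print(x1, x2, eps_r)
```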


4.2.4 Lecture video


4.2.5 Lecture video

