Open Educational Resources

Introduction to Numerical Analysis: Numerical Integration

Introduction

Integrals arise naturally in engineering to describe quantities that accumulate from infinitesimal contributions. For example, if the velocity of a moving object is known over a particular period of time, then the change in position of the object can be estimated by summing the velocity at discrete time points multiplied by the corresponding time intervals between those points. The accuracy of the estimated change in position can be increased by increasing the number of discrete time points, i.e., by reducing the time intervals between them. The formal definition of an integral of a function f:[a,b]\rightarrow \mathbb{R} is the signed area of the region under the curve f(x) between the points a and b. The definite integral is denoted by:

    \[ \int_{a}^b\!f(x)\,\mathrm{d}x \]

Fundamental Theorem of Calculus

The fundamental theorem of calculus links the concepts of integration and differentiation; roughly speaking, they are the converse of each other. The fundamental theorem of calculus can be stated as follows:

Part I

Let f:[a,b]\rightarrow \mathbb{R} be continuous, and let F:[a,b]\rightarrow \mathbb{R} be the function defined such that \forall x\in[a,b]:

    \[ F(x)=\int_{a}^x\!f(t)\,\mathrm{d}t \]

then F is uniformly continuous on [a,b], is differentiable on (a,b), and f is the derivative of F (also stated as: F is an antiderivative of f):

    \[ \frac{\mathrm{d}F}{\mathrm{d}x}=f(x) \]

In other words, the first part asserts that if f is continuous on the interval [a,b], then an antiderivative F always exists; F can be defined as F(x)=\int_{a}^x\!f(t)\,\mathrm{d}t for x\in[a,b], and its derivative is f.

Part II

Let f:[a,b]\rightarrow \mathbb{R} and F:[a,b]\rightarrow \mathbb{R} be two continuous functions such that f(x)=\frac{\mathrm{d}F}{\mathrm{d}x} (i.e., f is the derivative of F, or F is an antiderivative of f), then:

    \[ \int_{a}^b\!f(x)\,\mathrm{d}x=F(b)-F(a) \]

The second part states that if f has an antiderivative F, then F can be used to calculate the definite integral of f on the interval [a,b].


Numerical Integration

The fundamental theorem of calculus allows the calculation of \int_{a}^b\!f(x)\,\mathrm{d}x using an antiderivative of f. However, if an antiderivative is not available, a numerical procedure can be employed to approximate the integral, i.e., the area under the curve. Historically, the term “quadrature” was used to mean calculating areas; for example, the Gauss numerical integration scheme that will be studied later is also called Gauss quadrature. The basic problem in numerical integration is to calculate an approximate solution to the definite integral of a function f:[a,b]\rightarrow \mathbb{R}.

Riemann Integral

One of the early definitions of the integral of a function is the limit:

    \[ \int_{a}^b\!f(x)\,\mathrm{d}x=\lim_{\max{\Delta x_k}\rightarrow 0 } \sum_{k=1}^n f(x_k^*)\Delta x_k \]

where a= x_1 < x_2 < x_3 < \cdots < x_n < x_{n+1}=b, \Delta x_k=x_{k+1}-x_k, and x_k^* is an arbitrary point such that x_k^*\in [x_k,x_{k+1}]. If A is the sum of the areas of vertical rectangles arranged next to each other and whose top sides touch the function f at arbitrary points, then, the Riemann integral is the limit of this sum A as the maximum width of the vertical rectangles approaches zero. A function is “Riemann integrable” if the limit above (the limit of A) exists as the width of the rectangles gets smaller and smaller. In this course, we are always dealing with continuous functions. For continuous functions, the limit always exists and thus, all continuous functions are “Riemann integrable”.
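The limit above can be explored numerically. The following Python sketch (Python is used here as an equivalent to the Mathematica code appearing later in this chapter; the function name riemann_sum is ours) approximates \int_0^1\!x^2\,\mathrm{d}x=1/3 using left-endpoint rectangles of shrinking width:

```python
def riemann_sum(f, a, b, n):
    """Riemann sum of f over [a, b] with n equal-width rectangles,
    sampling f at the left endpoint of each subinterval."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(n))

# The error shrinks as the rectangles get narrower (n grows):
for n in (10, 100, 1000):
    approx = riemann_sum(lambda x: x * x, 0.0, 1.0, n)
    print(n, approx, abs(approx - 1.0 / 3.0))
```

As n increases, the sum approaches the exact value 1/3, illustrating that the limit exists for this continuous function.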

Newton-Cotes Formulas

The Newton-Cotes formulas rely on replacing the function or tabulated data with an interpolating polynomial that is easy to integrate. In this case, the required integral is evaluated as:

    \[ I=\int_{a}^b\!f(x)\,\mathrm{d}x\approx \int_{a}^b\!f_n(x)\,\mathrm{d}x \]

where f_n(x) is a polynomial of degree n which can be constructed to pass through n+1 data points in the interval [a,b].

Example

Consider the function f:[0,1]\rightarrow \mathbb{R} defined as:

    \[ f(x)=2-x^2+0.1\cos{\left(\frac{2\pi x}{0.7}\right)} \]

Calculate the exact integral:

    \[ I=\int_{0}^1\!f(x)\,\mathrm{d}x \]

Using step sizes of h\in\{0.1,0.2,0.5,1\}, fit an interpolating polynomial to the values of the function at the generated points and calculate the integral by integrating the polynomial. Compare the exact integral with the value obtained by integrating each interpolating polynomial.

Solution

The exact integral can be calculated using Mathematica to be:

    \[ I=\int_{0}^1\!f(x)\,\mathrm{d}x=1.6715 \]

The following figures show the interpolating polynomial fitted through data points at step sizes of 0.1, 0.2, 0.5, and 1. Under each curve, the exact integral is compared with the numerical value obtained by integrating the interpolating polynomial. Naturally, the higher the degree of the polynomial, the closer the numerical value is to the exact value.
(Figure: interpolating polynomials fitted to the function at the four step sizes.)
The resulting interpolating polynomials are given by:

    \[\begin{split} y_{h=0.1}&=2.1 + 0.0161419 x - 5.48942 x^2 + 5.32445 x^3 - 6.11881 x^4 + 123.581 x^5 - 356.658 x^6\\ & + 389.096 x^7 - 159.091 x^8 - 6.69727 x^9 + 14.8471 x^{10}\\ y_{h=0.2}&=2.1 + 1.11382 x - 18.0584 x^2 + 53.5143 x^3 - 61.493 x^4 + 23.7332 x^5\\ y_{h=0.5}&=2.1 - 0.298911 x - 0.891185 x^2\\ y_{h=1}&=2.1-1.1901x \end{split} \]

Integrating the above formulas is straightforward and the resulting values are:

    \[\begin{split} I_{h=0.1}&=1.6715\\ I_{h=0.2}&=1.67294\\ I_{h=0.5}&=1.65348\\ I_{h=1}&=1.50495 \end{split} \]

The following is the Mathematica code used.

h = {0.1, 0.2, 0.5, 1};
n = Table[1/h[[i]] + 1, {i, 1, 4}];
y = 2 - x^2 + 0.1*Cos[2 Pi*x/0.7];
Data = Table[{h[[i]]*(j - 1), (y /. x -> h[[i]]*(j - 1.0))}, {i, 1, 4}, {j, 1, n[[i]]}];
Data // MatrixForm;
p = Table[InterpolatingPolynomial[Data[[i]], x], {i, 1, 4}];
a1 = Plot[{y, p[[1]]}, {x, 0, 1},   Epilog -> {PointSize[Large], Point[Data[[1]]]},   AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,   PlotRange -> All, Filling -> {2 -> Axis},   PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
a2 = Plot[{y, p[[2]]}, {x, 0, 1},    Epilog -> {PointSize[Large], Point[Data[[2]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
a3 = Plot[{y, p[[3]]}, {x, 0, 1},    Epilog -> {PointSize[Large], Point[Data[[3]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
a4 = Plot[{y, p[[4]]}, {x, 0, 1},    Epilog -> {PointSize[Large], Point[Data[[4]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
Iexact = Integrate[y, {x, 0, 1}];
Inumerical = Table[Integrate[p[[i]], {x, 0, 1}], {i, 1, 4}];
atext = Table[Grid[{{"I exact = ", Iexact, ", I numerical = ", Inumerical[[i]]}}], {i, 1, 4}];
Grid[{{a1, a2}, {atext[[1]], atext[[2]]}}]
Grid[{{a3, a4}, {atext[[3]], atext[[4]]}}]

However, as described in the interpolating polynomial section, the higher the degree of the polynomial, the more susceptible it is to oscillations. As an example, the previous problem is repeated with the Runge function defined on the interval from -1 to 1. The figure below shows that using many points leads to large oscillations in the interpolating polynomial, which renders the numerical integration largely inaccurate, while using fewer points gives an interpolating polynomial that is still highly inaccurate. Instead, the Newton-Cotes formulas can be applied by subdividing the interval into smaller subintervals, applying a low-degree formula on each subinterval, and adding the results. Such schemes are called “composite rules”. The rectangle method, trapezoidal rule, and Simpson’s rules presented in the following sections are specific examples of the composite Newton-Cotes formulas.
(Figure: interpolating polynomials fitted to the Runge function, showing large oscillations for high-degree polynomials.)

The following is the Mathematica code used.
h = {0.1, 0.2, 0.5, 1};
n = Table[2/h[[i]] + 1, {i, 1, 4}];
y = 1.0/(1 + 25 x^2);
Data = Table[{-1 +    h[[i]]*(j - 1), (y /. x -> -1 + h[[i]]*(j - 1.0))}, {i, 1,     4}, {j, 1, n[[i]]}];
Data // MatrixForm;
p = Table[InterpolatingPolynomial[Data[[i]], x], {i, 1, 4}];
a1 = Plot[{y, p[[1]]}, {x, -1, 1},    Epilog -> {PointSize[Large], Point[Data[[1]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
a2 = Plot[{y, p[[2]]}, {x, -1, 1},    Epilog -> {PointSize[Large], Point[Data[[2]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
a3 = Plot[{y, p[[3]]}, {x, -1, 1},    Epilog -> {PointSize[Large], Point[Data[[3]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
a4 = Plot[{y, p[[4]]}, {x, -1, 1},    Epilog -> {PointSize[Large], Point[Data[[4]]]},    AxesLabel -> {"x", "y"}, BaseStyle -> Bold, ImageSize -> Medium,    PlotRange -> All, Filling -> {2 -> Axis},    PlotLegends -> {"y actual", "y Interpolating Polynomial"}];
Iexact = Integrate[y, {x, -1, 1}];
Inumerical = Table[Integrate[p[[i]], {x, -1, 1}], {i, 1, 4}];

atext = Table[   Grid[{{"I exact = ", Iexact, ", I numerical = ",  Inumerical[[i]]}}], {i, 1, 4}];
Grid[{{a1, a2}, {atext[[1]], atext[[2]]}}]
Grid[{{a3, a4}, {atext[[3]], atext[[4]]}}]

Rectangle Method

Let f:[a,b]\rightarrow \mathbb{R}. The rectangle method utilizes the Riemann integral definition to calculate an approximate estimate for the area under the curve by drawing many rectangles with very small width adjacent to each other between the graph of the function f and the x axis. For simplicity, the width of the rectangles is chosen to be constant. Let n be the number of intervals with a=x_0 < x_1 < x_2 < \cdots < x_n=b and constant spacing h=x_{i+1}-x_i. The rectangle method can be implemented in one of the following three ways:

    \[\begin{split} I_1&=\int_{a}^b\!f(x)\,\mathrm{d}x\approx h\sum_{i=1}^{n}f(x_{i-1})\\ I_2&=\int_{a}^b\!f(x)\,\mathrm{d}x\approx h\sum_{i=1}^{n}f\left(\frac{x_{i-1}+x_{i}}{2}\right)\\ I_3&=\int_{a}^b\!f(x)\,\mathrm{d}x\approx h\sum_{i=1}^{n}f(x_{i}) \end{split} \]

If x_{i-1} and x_{i} are the left and right points defining the rectangle number i, then I_1 assumes that the height of the rectangle is equal to f(x_{i-1}), I_2 assumes that the height of the rectangle is equal to f\left(\frac{x_{i-1}+x_{i}}{2}\right), and I_3 assumes that the height of the rectangle is equal to f(x_{i}). I_2 is called the midpoint rule.
To illustrate the difference, consider the function f(x)=x^2 on the interval [0,1]. The exact integral can be calculated as

    \[ I_{\mbox{exact}}=\frac{x^3}{3}\bigg|_{x=0}^{x=1}=0.3333 \]

The following three tools show the implementation of the rectangle method for numerically integrating the function using I_1, I_2, and I_3, respectively. For I_1, each rectangle touches the graph of the function at the top left corner of the rectangle. For I_2, each rectangle touches the graph of the function at the mid point of the top side. For I_3, each rectangle touches the graph of the function at the top right corner of the rectangle. The areas of the rectangles are calculated underneath each curve. Use the slider to increase the number of rectangles to see how many rectangles are needed to get a good approximation for the area under the curve.



The following Mathematica code can be used to numerically integrate any function f on the interval [a,b] using the three options for the rectangle method.

(*Left-endpoint rectangles*)
I1[f_, a_, b_, n_] := (h = (b - a)/n; Sum[f[a + (i - 1)*h]*h, {i, 1, n}])
(*Midpoint rectangles (midpoint rule)*)
I2[f_, a_, b_, n_] := (h = (b - a)/n; Sum[f[a + (i - 1/2)*h]*h, {i, 1, n}])
(*Right-endpoint rectangles*)
I3[f_, a_, b_, n_] := (h = (b - a)/n; Sum[f[a + i*h]*h, {i, 1, n}])
f[x_] := x^2;
I1[f, 0, 1, 13.0]
I2[f, 0, 1, 13.0]
I3[f, 0, 1, 13.0]
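For readers working outside Mathematica, the same three options can be sketched in Python (an equivalent illustration, not the chapter's code; the names I1, I2, and I3 mirror the formulas above):

```python
def I1(f, a, b, n):
    # Left-endpoint rectangles
    h = (b - a) / n
    return h * sum(f(a + (i - 1) * h) for i in range(1, n + 1))

def I2(f, a, b, n):
    # Midpoint rectangles (the midpoint rule)
    h = (b - a) / n
    return h * sum(f(a + (i - 0.5) * h) for i in range(1, n + 1))

def I3(f, a, b, n):
    # Right-endpoint rectangles
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(1, n + 1))

# f(x) = x^2 on [0, 1] with 13 rectangles; the exact integral is 1/3
for rule in (I1, I2, I3):
    print(rule(lambda x: x * x, 0.0, 1.0, 13))
```

Because x^2 is increasing on [0,1], I_1 underestimates and I_3 overestimates the exact value, while I_2 lands much closer to it.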

Error Analysis

Taylor’s theorem can be used to find how the error changes as the step size h decreases. First, let’s consider I_1 (the same applies to I_3). The error in the calculation of rectangle number i between the points x_{i-1} and x_{i} will be estimated. Using Taylor’s theorem, \forall x\in[x_{i-1},x_{i}], \exists \xi\in[x_{i-1},x] such that:

    \[ f(x)=f(x_{i-1})+f'(\xi)(x-x_{i-1}) \]

The error in the integral using this rectangle can be calculated as follows:

    \[\begin{split} |E_i|&=\left|\int_{x_{i-1}}^{x_{i}}\!f(x)-f(x_{i-1})\,\mathrm{d}x\right|=\left|\int_{x_{i-1}}^{x_{i}}\!f'(\xi)(x-x_{i-1})\,\mathrm{d}x\right|\\ &\leq \max_{\xi\in[x_{i-1},x_i]}|f'(\xi)|\frac{(x-x_{i-1})^2}{2}\bigg|_{x_{i-1}}^{x_i}=\max_{\xi\in[x_{i-1},x_i]}|f'(\xi)|\frac{h^2}{2} \end{split} \]

If n is the number of subdivisions (number of rectangles), i.e., nh=b-a, then:

    \[ |E|=|nE_i| \leq \max_{\xi\in[a,b]}|f'(\xi)| n\frac{h^2}{2}=\max_{\xi\in[a,b]}|f'(\xi)| (b-a)\frac{h}{2} \]

In other words, the total error is bounded by a term that is directly proportional to h. When h decreases, the error bound decreases proportionally. Of course when h goes to zero, the error goes to zero as well.

I_2 (the midpoint rule) provides a faster convergence rate, i.e., a more accurate approximation, as the error is bounded by a term that is directly proportional to h^2, as will be shown here.
Using Taylor’s theorem, \forall x\in[x_{i-1},x_{i}], \exists \xi\in[x_{i-1},x_{i}] such that:

    \[ f(x)=f(x_{m})+f'(x_{m})(x-x_{m})+f''(\xi)\frac{(x-x_{m})^2}{2} \]

Where x_m=\frac{x_{i-1}+x_{i}}{2}. The error in the integral using this rectangle can be calculated as follows:

    \[\begin{split} |E_i|&=\left|\int_{x_{i-1}}^{x_{i}}\!f(x)-f(x_m)\,\mathrm{d}x\right|=\left|f'(x_m)\int_{x_{i-1}}^{x_{i}}\!(x-x_m)\,\mathrm{d}x+\int_{x_{i-1}}^{x_{i}}\!\frac{f''(\xi)}{2}(x-x_m)^2\,\mathrm{d}x\right|\\ &=\left|f'(x_m)\frac{(x_i-x_m)^2-(x_{i-1}-x_m)^2}{2}+\int_{x_{i-1}}^{x_{i}}\!\frac{f''(\xi)}{2}(x-x_m)^2\,\mathrm{d}x\right|\\ &=\left|0+\int_{x_{i-1}}^{x_{i}}\!\frac{f''(\xi)}{2}(x-x_m)^2\,\mathrm{d}x\right|\\ &\leq \max_{\xi\in[x_{i-1},x_i]}\frac{|f''(\xi)|}{2}\frac{\left(\frac{h}{2}\right)^3-\left(\frac{-h}{2}\right)^3}{3}\\ &= \max_{\xi\in[x_{i-1},x_i]}\frac{|f''(\xi)|}{24}h^3 \end{split} \]

If n is the number of subdivisions (number of rectangles), i.e., nh=b-a, then:

    \[ |E|=|nE_i| \leq \max_{\xi\in[a,b]}|f''(\xi)| n\frac{h^3}{24}=\max_{\xi\in[a,b]}|f''(\xi)| (b-a)\frac{h^2}{24} \]

In other words, the total error is bounded by a term that is directly proportional to h^2, which provides faster convergence than I_1. The tools shown above provide a good illustration of this: with only five rectangles, I_2 already gives a very good estimate for the integral compared to both I_1 and I_3.
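The difference in convergence rates can also be checked empirically. In the following Python sketch (an illustration; the test function and names are ours), halving h roughly halves the left-endpoint error but quarters the midpoint error, consistent with the O(h) and O(h^2) bounds above:

```python
import math

def left_rule(f, a, b, n):
    # Left-endpoint rectangles, error O(h)
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def midpoint_rule(f, a, b, n):
    # Midpoint rectangles, error O(h^2)
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Test problem: integral of sin(x) on [0, 1]; exact value 1 - cos(1)
exact = 1 - math.cos(1.0)
for n in (10, 20, 40):
    e1 = abs(left_rule(math.sin, 0, 1, n) - exact)
    e2 = abs(midpoint_rule(math.sin, 0, 1, n) - exact)
    print(n, e1, e2)
```

Doubling n (halving h) divides e1 by roughly 2 and e2 by roughly 4.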

Example

Using the rectangle method with n=4, calculate I_1, I_2, and I_3 and compare with the exact integral of the function f(x)=e^x on the interval [0,2]. Then, find the values of h required so that the total error obtained by I_1 and that obtained by I_2 are bounded by 0.001.

Solution

Since n=4 the spacing h can be calculated as:

    \[ h=\frac{b-a}{n}=\frac{2-0}{4}=0.5 \]

Therefore, x_0=0, x_1=0.5, x_2=1, x_3=1.5, and x_4=2.0. The values of the function at these points are given by:

    \[ f(x_0)=e^0=1.00\qquad f(x_1)=e^{0.5}=1.6487 \qquad f(x_2)=e^{1}=2.7183 \]

    \[ f(x_3)=e^{1.5}=4.4817 \qquad f(x_4)=e^{2}=7.3891 \]

According to the rectangle method we have:

    \[\begin{split} I_1&=\sum_{i=1}^{n}f(x_{i-1})h=0.5 (1 + 1.6487 + 2.7183 + 4.4817) = 4.924\\ I_3&=\sum_{i=1}^{n}f(x_{i})h=0.5 (1.6487 + 2.7183 + 4.4817+7.3891) = 8.119 \end{split} \]

For I_2, we need to calculate the values of the function at the midpoint of each rectangle:

    \[ f\left(\frac{x_0+x_1}{2}\right)=e^{0.25}=1.284 \qquad f\left(\frac{x_1+x_2}{2}\right)=e^{0.75}=2.117 \]

    \[ f\left(\frac{x_2+x_3}{2}\right)=e^{1.25}=3.49 \qquad f\left(\frac{x_3+x_4}{2}\right)=e^{1.75}=5.755 \]

Therefore:

    \[ I_2=\sum_{i=1}^{n}f\left(\frac{x_{i-1}+x_{i}}{2}\right)h=0.5(1.284+2.117+3.49+5.755)=6.323 \]

The exact integral is given by:

    \[ I=\int_{0}^2\!e^x\,\mathrm{d}x=e^2-e^0=6.38906 \]

Obviously, I_2 provides a good approximation with only 4 rectangles!

Error Bounds

The total errors obtained when h=0.5 are indeed less than the error bounds obtained by the formulas listed above. For I_1 and I_3, the error in the estimation is bounded by:

    \[ |E|\leq \max_{\xi\in[a,b]}|f'(\xi)| (b-a)\frac{h}{2}=e^{2}(2-0)\frac{0.5}{2}=3.695 \]

The errors |I-I_1|=|6.38906-4.924|=1.4651 and |I-I_3|=|6.38906-8.119|=1.73 are indeed less than that upper bound. The same formula can be used to find the value of h so that the error is bounded by 0.001:

    \[ e^{2}(2-0)\frac{h}{2}=0.001\Rightarrow h=0.000135 \]

Therefore, to guarantee an error less than 0.001 using I_1, the interval has to be divided into \frac{2}{0.000135}=14778 rectangles! In this case, I_1=6.38862 and |E|=|6.38862-6.38906|=0.000432.

Similarly, when h=0.5 the error in the estimate using I_2 is bounded by:

    \[ |E|\leq \max_{\xi\in[a,b]}|f''(\xi)| (b-a)\frac{h^2}{24}=e^2(2-0)\frac{0.5^2}{24}=0.1539 \]

The error |I-I_2|=|6.38906-6.323|=0.06606 is indeed less than that upper bound. The same formula can be used to find the value of h so that the error is bounded by 0.001:

    \[ e^2(2-0)\frac{h^2}{24}=0.001\Rightarrow h = 0.04 \]

Therefore, to guarantee an error less than 0.001 using I_2, the interval has to be divided into only \frac{2}{0.04}=50 rectangles! In this case, I_2=6.38863 and |E|=|6.38863-6.38906|=0.000426.
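As a numerical check of the two counts derived above (14778 left-endpoint rectangles versus only 50 midpoint rectangles for the same tolerance), the following Python sketch computes both errors; this is an illustration, not the chapter's Mathematica code, and the names are ours:

```python
import math

def I1(f, a, b, n):
    # Left-endpoint rectangles
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def I2(f, a, b, n):
    # Midpoint rectangles
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = math.exp(2) - 1
print(abs(I1(math.exp, 0, 2, 14778) - exact))  # about 0.00043
print(abs(I2(math.exp, 0, 2, 50) - exact))     # about 0.00043
```

Both errors are below the 0.001 target, with I_2 needing roughly 300 times fewer rectangles.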

Trapezoidal Rule

Let f:[a,b]\rightarrow \mathbb{R}. By dividing the interval [a,b] into many subintervals, the trapezoidal rule approximates the area under the curve by linearly interpolating between the values of the function at the junctions of the subintervals; thus, on each subinterval, the area to be calculated has the shape of a trapezoid. For simplicity, the width of the trapezoids is chosen to be constant. Let n be the number of intervals with a=x_0 < x_1 < x_2 < \cdots < x_n=b and constant spacing h=x_{i+1}-x_i. The trapezoidal rule can be implemented as follows:

    \[ I_T=\int_{a}^b\!f(x)\,\mathrm{d}x\approx \frac{h}{2}\sum_{i=1}^{n}\left(f(x_{i-1})+f(x_i)\right)=\frac{h}{2}(f(x_0)+2f(x_1)+2f(x_2)+\cdots+2f(x_{n-1})+f(x_n)) \]

The following tool illustrates the implementation of the trapezoidal rule for integrating the function f:[0,1.5]\rightarrow \mathbb{R} defined as:

    \[ f(x)= 2 + 2 x + x^2 + \sin{(2 \pi x)} + \cos{(2 \pi \frac{x}{0.5})} \]

Use the slider to see the effect of increasing the number of intervals on the approximation.

The following Mathematica code can be used to numerically integrate any function f on the interval [a,b] using the trapezoidal rule.

IT[f_, a_, b_, n_] := (h = (b - a)/n; Sum[(f[a + (i - 1)*h] + f[a + i*h])*h/2, {i, 1, n}])
f[x_] := 2 + 2 x + x^2 + Sin[2 Pi*x] + Cos[2 Pi*x/0.5];
IT[f, 0, 1.5, 1.0]
IT[f, 0, 1.5, 18.0]
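An equivalent Python sketch of the trapezoidal rule (an illustration; the name IT mirrors the notation above), applied here to f(x)=e^x on [0,2] with n=4:

```python
import math

def IT(f, a, b, n):
    # Composite trapezoidal rule with n equal-width trapezoids
    h = (b - a) / n
    return h / 2 * sum(f(a + (i - 1) * h) + f(a + i * h) for i in range(1, n + 1))

print(IT(math.exp, 0, 2, 4))  # about 6.522 (exact integral: 6.38906)
```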

Error Analysis

An extension of Taylor’s theorem can be used to find how the error changes as the step size h decreases. Given a function f and its interpolating polynomial p_n(x) of degree n, the error between the interpolating polynomial and the function is given by:

    \[ f(x)=p_n(x)+\frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{i=1}^{n+1}(x-x_{i}) \]

Where \xi is in the domain of the function f and is dependent on the point x. The error in the calculation of trapezoid number i between the points x_{i-1} and x_{i} will be estimated based on the above formula, assuming a linear interpolation between the points x_{i-1} and x_i. Therefore:

    \[\begin{split} |E_i|&=\left|\int_{x_{i-1}}^{x_i}\! f(x)-p_1(x)\,\mathrm{d}x\right|=\left|\int_{x_{i-1}}^{x_i}\!\frac{f''(\xi)}{2}(x-x_{i-1})(x-x_i)\,\mathrm{d}x\right|\\ &\leq \max_{\xi\in[x_{i-1},x_i]}\frac{|f''(\xi)|h^3}{12} \end{split} \]

If n is the number of subdivisions (number of trapezoids), i.e., nh=b-a, then:

    \[ |E|=|nE_i|\leq \max_{\xi\in[a,b]}\frac{|f''(\xi)|nh^3}{12}=\max_{\xi\in[a,b]}\frac{|f''(\xi)|(b-a)h^2}{12} \]

Example

Using the trapezoidal rule with n=4, calculate I_T and compare with the exact integral of the function f(x)=e^x on the interval [0,2]. Find the value of h so that the error is less than 0.001.

Solution

Since n=4 the spacing h can be calculated as:

    \[ h=\frac{b-a}{n}=\frac{2-0}{4}=0.5 \]

Therefore, x_0=0, x_1=0.5, x_2=1, x_3=1.5, and x_4=2.0. The values of the function at these points are given by:

    \[ f(x_0)=e^0=1.00\qquad f(x_1)=e^{0.5}=1.6487 \qquad f(x_2)=e^{1}=2.7183 \]

    \[ f(x_3)=e^{1.5}=4.4817 \qquad f(x_4)=e^{2}=7.3891 \]

According to the trapezoidal rule we have:

    \[\begin{split} I_T&=\frac{h}{2}(f(x_0)+2f(x_1)+2f(x_2)+\cdots+2f(x_{n-1})+f(x_n))\\ &=\frac{0.5}{2} (1 + 2\times 1.6487 + 2\times 2.7183 + 2\times 4.4817 + 7.3891) = 6.522 \end{split} \]

This is already a very good approximation to the exact value of I=6.38906, even though only 4 intervals were used.

Error Bounds

The total error obtained when h=0.5 is indeed less than the error bounds obtained by the formula listed above. For I_T, the error in the estimation is bounded by:

    \[ |E|\leq \max_{\xi\in[a,b]}|f''(\xi)| (b-a)\frac{h^2}{12}=e^{2}(2-0)\frac{0.25}{12}=0.3079 \]

The error |I-I_T|=|6.38906-6.522|=0.13294 is indeed less than that upper bound. The same formula can be used to find the value of h so that the error is bounded by 0.001:

    \[ e^{2}(2-0)\frac{h^2}{12}=0.001\Rightarrow h=0.0285 \]

Therefore, to guarantee an error less than 0.001 using I_T, the interval has to be divided into \frac{2}{0.0285}=70.175 parts; therefore, at least 71 trapezoids are needed! In this case, I_T=6.38948 and |E|=|6.38948-6.38906|=0.00042.

Simpson’s Rules

Simpson’s 1/3 Rule

Let f:[a,b]\rightarrow \mathbb{R}. By dividing the interval [a,b] into many subintervals, Simpson’s 1/3 rule approximates the area under the curve on each subinterval by a parabola interpolating the values of the function at the midpoint and ends of the subinterval. For simplicity, the width of each subinterval is chosen to be constant and equal to 2h. Let n be the number of intervals with a=x_0 < x_1 < x_2 < \cdots < x_n=b and constant spacing 2h=x_{i}-x_{i-1}. On each interval with end points x_{i-1} and x_i, Lagrange polynomials can be used to define the interpolating parabola as follows:

    \[ p_2(x)=f(x_{i-1})\frac{(x-{x_m}_i)(x-x_{i})}{2h^2}+f({x_m}_i)\frac{(x-x_{i-1})(x-x_{i})}{-h^2}+f(x_{i})\frac{(x-x_{i-1})(x-{x_m}_i)}{2h^2} \]

where {x_m}_i is the midpoint in the interval i. Integrating the above formula yields:

    \[ \int_{x_{i-1}}^{x_i}\!p_2(x)\,\mathrm{d}x=\frac{h}{3}\left(f(x_{i-1})+4f({x_m}_i)+f(x_{i})\right) \]

Simpson’s 1/3 rule can then be implemented as follows:

    \[ \begin{split} I_{S1} & =\int_{a}^b\!f(x)\,\mathrm{d}x\approx \frac{h}{3}\sum_{i=1}^{n}\left(f(x_{i-1})+4f({x_m}_i)+f(x_i)\right)\\ &=\frac{h}{3}(f(x_0)+4f({x_m}_1)+2f(x_1)+4f({x_m}_2)+2f(x_2)+\cdots+2f(x_{n-1})+4f({x_m}_n)+f(x_n)) \end{split} \]

The following tool illustrates the implementation of Simpson’s 1/3 rule for integrating the function f:[0,1.5]\rightarrow \mathbb{R} defined as:

    \[ f(x)= 2 + 2 x + x^2 + \sin{(2 \pi x)} + \cos{(2 \pi \frac{x}{0.5})} \]

Use the slider to see the effect of increasing the number of intervals on the approximation.

The following Mathematica code can be used to numerically integrate any function f on the interval [a,b] using Simpson’s 1/3 rule.

IS1[f_, a_, b_, n_] := (h = (b - a)/2/n; h/3*Sum[(f[a + (2i - 2)*h] + 4*f[a + (2i - 1)*h]+f[a + (2i)*h]), {i, 1, n}])
f[x_] := 2 + 2 x + x^2 + Sin[2 Pi*x] + Cos[2 Pi*x/0.5];
IS1[f, 0, 1.5, 1.0]
IS1[f, 0, 1.5, 18.0]
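An equivalent Python sketch of Simpson's 1/3 rule (an illustration; the name IS1 mirrors the notation above), applied here to f(x)=e^x on [0,2] with n=2 subintervals:

```python
import math

def IS1(f, a, b, n):
    # Composite Simpson's 1/3 rule; n subintervals, each of width 2h
    h = (b - a) / (2 * n)
    return h / 3 * sum(
        f(a + (2 * i - 2) * h) + 4 * f(a + (2 * i - 1) * h) + f(a + 2 * i * h)
        for i in range(1, n + 1)
    )

print(IS1(math.exp, 0, 2, 2))  # about 6.391 (exact integral: 6.38906)
```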
Error Analysis

One estimate for the upper bound of the error can be derived similarly to the derivation for the trapezoidal rule, as follows. Given a function f and its interpolating polynomial p_n(x) of degree n, the error between the interpolating polynomial and the function is given by:

    \[ f(x)=p_n(x)+\frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{i=1}^{n+1}(x-x_{i}) \]

Where \xi is in the domain of the function f and is dependent on x. The error in the calculation of the integral of parabola number i connecting the points x_{i-1}, x_{m}=\frac{x_{i-1}+x_i}{2}, and x_{i} will be estimated based on the above formula, with 2h=x_i-x_{i-1}. Therefore:

    \[\begin{split} |E_i|&=\left|\int_{x_{i-1}}^{x_i}\! f(x)-p_2(x)\,\mathrm{d}x\right|=\left|\int_{x_{i-1}}^{x_i}\!\frac{f'''(\xi)}{3\times 2}(x-x_{i-1})(x-x_m)(x-x_i)\,\mathrm{d}x\right| \end{split} \]

Therefore, the upper bound for the error can be given by:

    \[ |E_i|\leq \max_{\xi\in[x_{i-1},x_i]}\frac{|f'''(\xi)|}{6}\int_{x_{i-1}}^{x_i}\!|(x-x_{i-1})(x-x_m)(x-x_i)|\,\mathrm{d}x=\max_{\xi\in[x_{i-1},x_i]}\frac{|f'''(\xi)|h^4}{12} \]

If n is the number of subdivisions, where each subdivision has width of 2h, i.e., n(2h)=b-a, then:

    \[ |E|=|nE_i|\leq \max_{\xi\in[a,b]}\frac{|f'''(\xi)|nh^4}{12}=\max_{\xi\in[a,b]}\frac{|f'''(\xi)|(b-a)h^3}{24} \]

However, it can actually be shown that there is a better estimate for the upper bound of the error. This can be shown using Newton interpolating polynomials through the points x_{i-1},x_m,x_i,x_{i}+h. The reason we add an extra point is going to become apparent when the integration is carried out. The error term between the interpolating polynomial and the function is given by:

    \[\begin{split} f(x)=&b_1+b_2(x-x_{i-1})+b_3(x-x_{i-1})(x-x_{m})+b_4(x-x_{i-1})(x-x_{m})(x-x_{i})\\ &+\frac{f''''(\xi)}{4!}(x-x_{i-1})(x-x_{m})(x-x_{i})(x-x_{i}-h) \end{split} \]

where \xi\in[x_{i-1},x_{i}+h] and is dependent on x. The first three terms on the right-hand side are exactly the interpolating parabola passing through the points (x_{i-1},f(x_{i-1})), (x_{m},f(x_{m})), and (x_{i},f(x_{i})). Therefore, an estimate for the error can be evaluated as:

    \[\begin{split} |E_i|&=\left|\int_{x_{i-1}}^{x_i}\! f(x)-b_1-b_2(x-x_{i-1})-b_3(x-x_{i-1})(x-x_{m})\,\mathrm{d}x\right|\\ &\leq \left|\int_{x_{i-1}}^{x_i}\!b_4(x-x_{i-1})(x-x_{m})(x-x_{i})\,\mathrm{d}x\right|\\ &+\max_{\xi\in[x_{i-1},x_{i}+h]}\left|\frac{f''''(\xi)}{4!}\right|\int_{x_{i-1}}^{x_i}\!\left|(x-x_{i-1})(x-x_{m})(x-x_{i})(x-x_{i}-h)\right|\mathrm{d}x\\ &\leq 0 + \max_{\xi\in[x_{i-1},x_{i}+h]}\left|\frac{f''''(\xi)}{4!}\right|\frac{4h^5}{15}\\ &\leq \max_{\xi\in[x_{i-1},x_{i}+h]}\left|f''''(\xi)\right|\frac{h^5}{90} \end{split} \]

The first term on the right-hand side of the inequality is equal to zero. This is because the point x_m is the average of x_{i-1} and x_i, so, integrating that cubic polynomial term yields zero. This was the reason to consider a third-order polynomial instead of a second-order polynomial which allows the error term to be in terms of h^5. If n is the number of subdivisions, where each subdivision has width of 2h, i.e., n(2h)=b-a, then:

    \[ |E|=|nE_i|\leq \max_{\xi\in[a,b]}\frac{|f''''(\xi)|nh^5}{90}=\max_{\xi\in[a,b]}\frac{|f''''(\xi)|(b-a)h^4}{180} \]
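The h^4 dependence can be verified empirically. In the following Python sketch (an illustration; the names are ours), halving h reduces the error of Simpson's 1/3 rule by roughly a factor of 2^4=16:

```python
import math

def IS1(f, a, b, n):
    # Composite Simpson's 1/3 rule; n subintervals, each of width 2h
    h = (b - a) / (2 * n)
    return h / 3 * sum(
        f(a + (2 * i - 2) * h) + 4 * f(a + (2 * i - 1) * h) + f(a + 2 * i * h)
        for i in range(1, n + 1)
    )

# Integral of e^x on [0, 2]; doubling n halves h
exact = math.exp(2) - 1
e_coarse = abs(IS1(math.exp, 0, 2, 4) - exact)
e_fine = abs(IS1(math.exp, 0, 2, 8) - exact)
print(e_coarse / e_fine)  # close to 16
```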

Example

Using Simpson’s 1/3 rule with n=2, calculate I_{S1} and compare with the exact integral of the function f(x)=e^x on the interval [0,2]. Find the value of h so that the error is less than 0.001.

Solution

It is important to note that n defines the number of intervals on which a parabola is defined. Each interval has a width of 2h. Since n=2 the spacing h can be calculated as:

    \[ h=\frac{b-a}{2n}=\frac{2-0}{4}=0.5 \]

Therefore, x_0=0, x_1=0.5, x_2=1, x_3=1.5, and x_4=2.0. The values of the function at these points are given by:

    \[ f(x_0)=e^0=1.00\qquad f(x_1)=e^{0.5}=1.6487 \qquad f(x_2)=e^{1}=2.7183 \]

    \[ f(x_3)=e^{1.5}=4.4817 \qquad f(x_4)=e^{2}=7.3891 \]

According to Simpson’s 1/3 rule we have:

    \[\begin{split} I_{S1}&=\frac{h}{3}(f(x_0)+4f(x_1)+2f(x_2)+4f(x_3)+f(x_4))\\ &=\frac{0.5}{3} (1 + 4\times 1.6487 + 2\times 2.7183 + 4\times 4.4817 + 7.3891) = 6.391 \end{split} \]

This is already a very good approximation to the exact value of I=6.38906, even though only 2 intervals were used.

Error Bounds

The total error obtained when h=0.5 is indeed less than the error bounds obtained by the formula listed above. For I_{S1}, the error in the estimation is bounded by:

    \[ |E|\leq \max_{\xi\in[a,b]}|f''''(\xi)| (b-a)\frac{h^4}{180}=e^{2}(2-0)\frac{0.5^4}{180}=0.0051 \]

The error evaluated as \left|I-I_{S1}\right|=|6.38906-6.391|=0.002 is indeed less than that upper bound. The same formula can be used to find the value of h so that the error is bounded by 0.001:

    \[ e^{2}(2-0)\frac{h^4}{180}=0.001\Rightarrow h=0.33 \]

Therefore, to guarantee an error less than 0.001 using I_{S1}, the interval has to be divided into \frac{2}{2(0.33)}=3.03, i.e., at least 4 intervals, with a parabola defined on each! In this case, I_{S1}=6.38919 and |E|=|6.38906-6.38919|=0.0001377.

Simpson’s 3/8 Rule

Let f:[a,b]\rightarrow \mathbb{R}. By dividing the interval [a,b] into many subintervals, Simpson’s 3/8 rule approximates the area under the curve on each subinterval by a cubic polynomial interpolating the values of the function at the ends of the subinterval and at two intermediate points. For simplicity, the width of each subinterval is chosen to be constant and equal to 3h. Let n be the number of intervals with a=x_0 < x_1 < x_2 < \cdots < x_n=b and constant spacing 3h=x_{i}-x_{i-1}, with the intermediate points for each interval i given by {x_l}_i=x_{i-1}+h and {x_r}_i=x_{i-1}+2h. On each interval with end points x_{i-1} and x_i, Lagrange polynomials can be used to define the interpolating cubic polynomial as follows:

    \[ \begin{split} p_3(x)&=f(x_{i-1})\frac{(x-{x_l}_i)(x-{x_r}_i)(x-x_{i})}{-6h^3}+f({x_l}_i)\frac{(x-x_{i-1})(x-{x_r}_i)(x-x_{i})}{2h^3}\\ &+f({x_r}_i)\frac{(x-x_{i-1})(x-{x_l}_i)(x-x_{i})}{-2h^3}+f(x_{i})\frac{(x-x_{i-1})(x-{x_l}_i)(x-{x_r}_i)}{6h^3} \end{split} \]

Integrating the above formula yields:

    \[ \int_{x_{i-1}}^{x_i}\!p_3(x)\,\mathrm{d}x=\frac{3h}{8}\left(f(x_{i-1})+3f({x_l}_i)+3f({x_r}_i)+f(x_{i})\right) \]

Simpson’s 3/8 rule can then be implemented as follows:

    \[ \begin{split} I_{S2} & =\int_{a}^b\!f(x)\,\mathrm{d}x\approx \frac{3h}{8}\sum_{i=1}^{n}\left(f(x_{i-1})+3f({x_l}_i)+3f({x_r}_i)+f(x_{i})\right)\\ &=\frac{3h}{8}(f(x_0)+3f({x_l}_1)+3f({x_r}_1)+2f(x_1)+\cdots+3f({x_r}_n)+f(x_n)) \end{split} \]

The following tool illustrates the implementation of Simpson’s 3/8 rule for integrating the function f:[0,1.5]\rightarrow \mathbb{R} defined as:

    \[ f(x)= 2 + 2 x + x^2 + \sin{(2 \pi x)} + \cos{(2 \pi \frac{x}{0.5})} \]

Use the slider to see the effect of increasing the number of intervals on the approximation.

The following Mathematica code can be used to numerically integrate any function f on the interval [a,b] using Simpson’s 3/8 rule.

IS2[f_, a_, b_, n_] := (h = (b - a)/3/n;3 h/8*Sum[(f[a + (3 i - 3)*h] + 3*f[a + (3 i - 2)*h] + 3 f[a + (3 i - 1)*h] + f[a + (3 i)*h]), {i, 1, n}])
f[x_] := 2 + 2 x + x^2 + Sin[2 Pi*x] + Cos[2 Pi*x/0.5];
IS2[f, 0, 1.5, 5]
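An equivalent Python sketch of Simpson's 3/8 rule (an illustration; the name IS2 mirrors the notation above), applied here to f(x)=e^x on [0,2] with n=1:

```python
import math

def IS2(f, a, b, n):
    # Composite Simpson's 3/8 rule; n subintervals, each of width 3h
    h = (b - a) / (3 * n)
    return 3 * h / 8 * sum(
        f(a + (3 * i - 3) * h) + 3 * f(a + (3 * i - 2) * h)
        + 3 * f(a + (3 * i - 1) * h) + f(a + 3 * i * h)
        for i in range(1, n + 1)
    )

print(IS2(math.exp, 0, 2, 1))  # about 6.403 (exact integral: 6.38906)
```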
Error Analysis

The estimate for the upper bound of the error can be derived similarly to the derivation for the trapezoidal rule, as follows. Given a function f and its interpolating polynomial p_n(x) of degree n, the error between the interpolating polynomial and the function is given by:

    \[ f(x)=p_n(x)+\frac{f^{n+1}(\xi)}{(n+1)!}\prod_{i=1}^{n+1}(x-x_{i}) \]

Where \xi is in the domain of the function f. The error in the calculation of the integral of the cubic polynomial number i connecting the points x_{i-1}, {x_l}_i=x_{i-1}+h, {x_r}_i=x_{i-1}+2h, and x_{i} will be estimated based on the above formula assuming 3h=x_i-x_{i-1}. Therefore:

    \[\begin{split} |E_i|&=\left|\int_{x_{i-1}}^{x_i}\! f(x)-p_3(x)\,\mathrm{d}x\right|=\left|\int_{x_{i-1}}^{x_i}\!\frac{f''''(\xi)}{4\times 3\times 2}(x-x_{i-1})(x-{x_l}_i)(x-{x_r}_i)(x-x_i)\,\mathrm{d}x\right| \end{split} \]

Therefore, the upper bound for the error can be given by:

    \[ |E_i|\leq \max_{\xi\in[x_{i-1},x_i]}\frac{|f''''(\xi)|}{4\times 3 \times 2}\int_{x_{i-1}}^{x_i}\!|(x-x_{i-1})(x-{x_l}_i)(x-{x_r}_i)(x-x_i)|\,\mathrm{d}x=\max_{\xi\in[x_{i-1},x_i]}\frac{|f''''(\xi)|3h^5}{80} \]

If n is the number of subdivisions, where each subdivision has width of 3h, i.e., n(3h)=b-a, then:

    \[ |E|=|nE_i|\leq \max_{\xi\in[a,b]}\frac{|f''''(\xi)|3nh^5}{80}=\max_{\xi\in[a,b]}\frac{|f''''(\xi)|(b-a)h^4}{80} \]

Example

Using the Simpson’s 3/8 rule with n=1, calculate I_{S2} and compare with the exact integral of the function f(x)=e^x on the interval [0,2]. Find the value of h so that the error is less than 0.001.

Solution

It is important to note that n defines the number of intervals on which a cubic polynomial is defined. Each interval has a width of 3h. Since n=1 the spacing h can be calculated as:

    \[ h=\frac{b-a}{3n}=\frac{2-0}{3}=0.6667 \]

Therefore, x_0=0, {x_l}_1=0.6667, {x_r}_1=1.333, and x_1=2.0. The values of the function at these points are given by:

    \[ f(x_0)=e^0=1.00\qquad f({x_l}_1)=e^{0.6667}=1.94773 \]

    \[ f({x_r}_1)=e^{1.333}=3.79367 \qquad f(x_1)=e^{2}=7.3891 \]

According to the Simpson’s 3/8 rule we have:

    \[\begin{split} I_{S2}&=\frac{3h}{8}(f(x_0)+3f({x_l}_1)+3f({x_r}_1)+f(x_1))\\ &=\frac{3\times 0.6667}{8} (1 + 3\times 1.94773 + 3\times 3.79367 + 7.3891) = 6.403 \end{split} \]

Which is already a very good approximation to the exact value of I=6.38906 even though only 1 interval was used.

Error Bounds

The total error obtained when h=0.6667 is indeed less than the error bounds obtained by the formula listed above. For I_{S2}, the error in the estimation is bounded by:

    \[ |E|\leq \max_{\xi\in[a,b]}|f''''(\xi)| (b-a)\frac{h^4}{80}=e^{2}(2-0)\frac{0.6667^4}{80}=0.0365 \]

The error |I-I_{S2}|=|6.38906-6.403|=0.014 is indeed less than that upper bound. The same formula can be used to find the value of h so that the error is bounded by 0.001:

    \[ e^{2}(2-0)\frac{h^4}{80}=0.001\Rightarrow h=0.27125 \]

Therefore, to guarantee an error less than 0.001 using I_{S2}, the interval will have to be divided into \frac{2}{3(0.27125)}=2.46\approx 3 subintervals, with a cubic polynomial defined on each! In this case, the value of I_{S2}=6.38925 and |E|=|6.38906-6.38925|=0.000192.

Romberg’s Method

Richardson Extrapolation

Romberg’s method applies a technique called Richardson extrapolation to the trapezoidal integration rule (it can equally be applied to any of the rules above). The general Richardson extrapolation technique is a powerful method that combines two or more less accurate solutions to obtain a highly accurate one. The essential ingredient of the method is knowledge of the order of the truncation error. Assume a numerical technique approximates the value of A by choosing a value h_1 and calculating an estimate A^*(h_1) according to the equation:

    \[ A=A^*(h_1)+Ch_1^{k_1}+\mathcal{O}(h_1^{k_2}) \]

Where C is a constant whose value does not need to be known and k_2\geq k_1+1. If a smaller h_2 < h_1 is chosen with t=\frac{h_1}{h_2}, then the new estimate for A is A^*(h_2) and the equation becomes:

    \[ A=A^*(h_2)+Ch_2^{k_1}+\mathcal{O}(h_2^{k_2}) \]

Multiplying the second equation by t^{k_1} and subtracting the first equation yields:

    \[ (t^{k_1}-1)A=t^{k_1}A^*(h_2)-A^*(h_1)+\mathcal{O}(h_1^{k_2}) \]

Therefore:

    \[ A=\frac{(t^{k_1}A^*(h_2)-A^*(h_1))}{(t^{k_1}-1)}+\mathcal{O}(h_1^{k_2}) \]

In other words, if the first error term in a method is directly proportional to h^{k_1}, then, by combining two values of h, we can get an estimate whose error term is directly proportional to h^{k_2}. The above equation can also be written as:

    \[ A=\frac{(t^{k_1}A^*\left(\frac{h_1}{t}\right)-A^*(h_1))}{(t^{k_1}-1)}+\mathcal{O}(h_1^{k_2}) \]
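As a concrete illustration (not part of the original text), the extrapolation formula is a one-line computation once t and k_1 are known; a minimal Python sketch:

```python
def richardson(A_h1, A_h2, t, k1):
    """Combine A*(h1) and A*(h2), where t = h1/h2 and the leading error is O(h^k1)."""
    return (t**k1 * A_h2 - A_h1) / (t**k1 - 1)

# Trapezoidal estimates of the integral of e^x over [0,2] (exact: e^2 - 1 = 6.38906)
# with one trapezoid (h1 = 2) and two trapezoids (h2 = 1); k1 = 2 for the trapezoidal rule.
print(richardson(8.38906, 6.91281, t=2, k1=2))  # 6.42073, accurate to O(h^4)
```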

Romberg’s Method Using the Trapezoidal Rule

As shown above the truncation error in the trapezoidal rule is \mathcal{O}(h^{2}). If the trapezoidal numerical integration scheme is applied for a particular value of h_1 and then applied again for half that value (i.e., t=\frac{h_1}{h_2}=2), then, substituting in the equation above yields:

    \[ I=\frac{\left(2^{2}I_T\left(\frac{h_1}{2}\right)-I_T(h_1)\right)}{(2^{2}-1)}+\mathcal{O}(h_1^{k_2})=\frac{4I_T\left(\frac{h_1}{2}\right)-I_T(h_1)}{3}+\mathcal{O}(h_1^{k_2}) \]

It should be noted that for the trapezoidal rule, k_2 is equal to 4, i.e., the error term of the extrapolated estimate is \mathcal{O}(h_1^4). This procedure can be applied successively, halving h each time, to obtain an estimate whose error is \mathcal{O}(h_1^6). Assuming that the trapezoidal integration is available for h_1, \frac{h_1}{2}, and \frac{h_1}{4}, then the first two can be combined to find an estimate I_{T1} that is \mathcal{O}(h_1^{4}), and the last two can be combined to find an estimate I_{T2} that is \mathcal{O}\left(\left(\frac{h_1}{2}\right)^{4}\right). Applying the Richardson extrapolation equation to I_{T1} and I_{T2}, this time with t=2 and k_1=4, produces the following estimate for I:

    \[ I=\frac{\left(2^{4}I_{T2}-I_{T1}\right)}{(2^{4}-1)}+\mathcal{O}(h_1^{6})=\frac{16I_{T2}-I_{T1}}{15}+\mathcal{O}(h_1^{6}) \]

The process can be extended even further to find an estimate that is \mathcal{O}(h_1^{8}). In general, the process can be written as follows:

    \[ I_{j,k}=\frac{\left(2^{2(k-1)}I_{j+1,k-1}-I_{j,k-1}\right)}{(2^{2(k-1)}-1)}+\mathcal{O}(h_1^{2k}) \]

where j+1 indicates the more accurate integral, while j indicates the less accurate integral, k=1,2,3,\cdots correspond to the calculations with error terms that correspond to h^2, h^4, h^6,\cdots, respectively. The above equation is applied for k\geq 2. For example, setting k=2 and j=1 yields:

    \[ I_{1,2}=\frac{4I_{2,1}-I_{1,1}}{3}+\mathcal{O}(h_1^{4}) \]

Similarly, setting k=3 and j=1 yields:

    \[ I_{1,3}=\frac{\left(16I_{2,2}-I_{1,2}\right)}{15}+\mathcal{O}(h_1^{6}) \]

The following table sketches how the process is applied to obtain an estimate that is \mathcal{O}(h^{10}) using this algorithm. The first column corresponds to evaluating the integral for the values of h, \frac{h}{2}, \frac{h}{4}, \frac{h}{8}, and \frac{h}{16}. These correspond to n, 2n, 4n, 8n, and 16n trapezoids, respectively. The equation above is then used to fill the remaining values in the table.

[Table: Romberg extrapolation scheme up to \mathcal{O}(h^{10})]

A stopping criterion for this algorithm can be set as:

    \[ \varepsilon_r=\left|\frac{I_{1,k}-I_{1,k-1}}{I_{1,k}}\right|\leq \varepsilon_s \]

The following Mathematica code provides a procedural implementation of the Romberg’s method using the trapezoidal rule. The first procedure “IT[f,a,b,n]” provides the numerical estimate for the integral of f from a to b with n being the number of trapezoids. The “RI[f,a,b,k,n1]” procedure builds the Romberg’s method table shown above up to k columns. n_1 provides the number of subdivisions (number of trapezoids) in the first entry in the table I_{1,1}.
View Mathematica Code

IT[f_, a_, b_, n_] := (h = (b - a)/n; Sum[(f[a + (i - 1)*h] + f[a + i*h])*h/2, {i, 1, n}])
RI[f_, a_, b_, k_,n1_] := (RItable = Table["N/A", {i, 1, k}, {j, 1, k}];
Do[RItable[[i, 1]] = IT[f, a, b, 2^(i - 1)*n1], {i, 1, k}];
  Do[RItable[[j, ik]] = (2^(2 (ik - 1)) RItable[[j + 1, ik - 1]] - RItable[[j, ik - 1]])/(2^(2 (ik - 1)) - 1), {ik, 2, k}, {j, 1,k - ik + 1}];
  ERtable =  Table[(RItable[[1, i + 1]] - RItable[[1, i]])/RItable[[1, i + 1]], {i, 1, k - 1}];
  {RItable, ERtable})
a = 0.0;
b = 2;
k=3;
n1=1;
f[x_] := E^x
Integrate[f[x], {x, a, b}]
s = RI[f, a, b, k,n1];
s[[1]] // MatrixForm
s[[2]] // MatrixForm
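For readers following along outside Mathematica, the same table can be built with a short Python sketch (the `trap` and `romberg` helper names are mine, not part of the original code):

```python
import math

def trap(f, a, b, n):
    """Composite trapezoidal rule with n trapezoids."""
    h = (b - a) / n
    return sum((f(a + (i - 1) * h) + f(a + i * h)) * h / 2 for i in range(1, n + 1))

def romberg(f, a, b, k, n1=1):
    """Return a k-column Romberg table; row j, column c holds I_{j+1,c+1}."""
    R = [[trap(f, a, b, 2**i * n1)] for i in range(k)]  # first column: n1, 2 n1, 4 n1, ... trapezoids
    for c in range(1, k):
        for j in range(k - c):
            factor = 2**(2 * c)
            R[j].append((factor * R[j + 1][c - 1] - R[j][c - 1]) / (factor - 1))
    return R

R = romberg(math.exp, 0.0, 2.0, 3)
print(R[0][-1])  # I_{1,3}: approximately 6.38924 (exact: e^2 - 1 = 6.38906)
```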

Example 1

As shown in the example above in the trapezoidal rule, when 71 trapezoids were used, the estimate for the integral of e^x from a=0 to b=2 was I_T=6.38948 with an absolute error of |E|=0.00042. Using the Romberg’s method, find the depth k starting with h_1=b-a=2 so that the estimate for the same integral has the same or less absolute error |E|\leq 0.00042. Compare the number of computations required by the Romberg’s method to that required by the traditional trapezoidal rule to obtain an estimate with the same absolute error.

Solution

First, we will start with k=2. For that, we will need to compute the integral numerically using the trapezoidal rule for a chosen h_1 and then for \frac{h_1}{2} to fill in the entries I_{1,1} and I_{2,1} in the Romberg’s method table.

Assuming h_1=b-a=2, i.e., 1 trapezoid, the value of I_{1,1}=8.38906. Assuming a trapezoid width of \frac{h_1}{2}=1, i.e., two trapezoids on the interval, the value of I_{2,1}=6.91281. Using the Romberg table, the value of I_{1,2} can be computed as:

    \[ I_{1,2}=\frac{4I_{2,1}-I_{1,1}}{3}=6.42073 \]

with a corresponding absolute error of |E|=|6.42073-6.38906|=0.0317>0.00042.
The output Romberg table with depth k=2 has the following form:
[Romberg table, depth k=2]

Next, to fill in the table up to depth k=3, the value of I_{3,1} in the table needs to be calculated. This value corresponds to the calculation of the trapezoidal rule with a trapezoid width of \frac{h_1}{2^{k-1}}=\frac{h_1}{4}, i.e., 4 trapezoids on the whole interval. Using the trapezoidal rule, we get I_{3,1}=6.52161. The remaining values in the Romberg table can be calculated as follows:

    \[\begin{split} I_{2,2}&=\frac{4I_{3,1}-I_{2,1}}{3}=\frac{4\times 6.52161-6.91281}{3}=6.39121\\ I_{1,3}&=\frac{16I_{2,2}-I_{1,2}}{15}=\frac{16\times 6.39121-6.42073}{15}=6.38924\\ \end{split} \]

The absolute error in I_{1,3} is less than the specified error: |E|=|6.38924-6.38906|=0.000186<0.00042. The output Romberg table with depth k=3 has the following form: [Romberg table, depth k=3]

Number of Computations

For the same error, the traditional trapezoidal rule would have required 71 trapezoids. If we assume each trapezoid is one computation, the Romberg’s method requires the computation of 1 trapezoid for I_{1,1}, 2 trapezoids for I_{2,1}, and 4 trapezoids for I_{3,1}, for a total of 7 computations. That is, in this example the Romberg’s method requires almost one tenth of the computational resources to produce the same level of accuracy!

The following Mathematica code was used to produce the above calculations.

View Mathematica Code
IT[f_, a_, b_, n_] := (h = (b - a)/n;  Sum[(f[a + (i - 1)*h] + f[a + i*h])*h/2, {i, 1, n}])
RI[f_, a_, b_, k_, n1_] := (RItable = Table["N/A", {i, 1, k}, {j, 1, k}];
  Do[RItable[[i, 1]] = IT[f, a, b, 2^(i - 1)*n1], {i, 1, k}];
  Do[RItable[[j,ik]] = (2^(2 (ik - 1)) RItable[[j + 1, ik - 1]] - RItable[[j, ik - 1]])/(2^(2 (ik - 1)) - 1), {ik, 2, k}, {j, 1, k - ik + 1}];
  ERtable = Table[(RItable[[1, i + 1]] - RItable[[1, i]])/RItable[[1, i + 1]], {i, 1, k - 1}];
  {RItable, ERtable})
a = 0.0;
b = 2;
(*Depth of Romberg Table*)
k = 3;
n1 = 1;
f[x_] := E^x
Itrue = Integrate[f[x], {x, a, b}]
s = RI[f, a, b, k, n1];
s[[1]] // MatrixForm
s[[2]] // MatrixForm
ErrorTable = Table[If[RItable[[i, j]] == "N/A", "N/A", Abs[Itrue - RItable[[i, j]]]], {i, 1, Length[RItable]}, {j, 1, Length[RItable]}];
ErrorTable // MatrixForm

Example 2

Consider the integral:

    \[ I=\int_{0}^{1.5}\! 2 + 2 x + x^2 + \sin{(2 \pi x)} + \cos{(2 \pi \frac{x}{0.5})}\,\mathrm{d}x \]

Using the trapezoidal rule, draw a table with the following columns: n, h, I_T, I_{true}, and |E|, where n is the number of trapezoids, h is the width of each trapezoid, I_T is the estimate using the trapezoidal rule, I_{true} is the true value of the integral, and |E| is the absolute value of the error. Use n=1, 2, 4, 8, 16,\cdots,128. Similarly, provide the Romberg table with depth k=5. Compare the number of computations in each and the level of accuracy.

Solution

The true value of the integral can be computed using Mathematica as I_{\mbox{true}}=6.69331. For the trapezoidal rule, the following Mathematica code is used to produce the required table for the specified values of n:

View Mathematica Code
IT[f_, a_, b_, n_] := (h = (b - a)/n;   Sum[(f[a + (i - 1)*h] + f[a + i*h])*h/2, {i, 1, n}])
a = 0.0;
b = 1.5;
f[x_] := 2 + 2 x + x^2 + Sin[2 Pi*x] + Cos[2 Pi*x/0.5]
Title = {"n", "h", "I_T", "I_true", "|E|"};
Ii = Table[{2^(i - 1), (b - a)/2^(i - 1), ss = IT[f, a, b, 2^(i - 1)], Itrue, Abs[Itrue - ss]}, {i, 1, 8}];
Ii = Prepend[Ii, Title];
Ii // MatrixForm

The table shows that with 128 subdivisions, the value of the absolute error is 0.00011.
[Table: trapezoidal rule estimates and errors for n=1 to 128]

For the Romberg table, the code developed above is used to produce the following table:

View Mathematica Code
IT[f_, a_, b_, n_] := (h = (b - a)/n; Sum[(f[a + (i - 1)*h] + f[a + i*h])*h/2, {i, 1, n}])
RI[f_, a_, b_, k_, n1_] := (RItable = Table["N/A", {i, 1, k}, {j, 1, k}];
  Do[RItable[[i, 1]] = IT[f, a, b, 2^(i - 1)*n1], {i, 1, k}];
  Do[RItable[[j, ik]] = (2^(2 (ik - 1)) RItable[[j + 1, ik - 1]] - RItable[[j, ik - 1]])/(2^(2 (ik - 1)) - 1), {ik, 2, k}, {j, 1,  k - ik + 1}];
  ERtable =  Table[(RItable[[1, i + 1]] - RItable[[1, i]])/ RItable[[1, i + 1]], {i, 1, k - 1}];
  {RItable, ERtable})
a = 0.0;
b = 1.5;
k = 5;
n1 = 1;
f[x_] := 2 + 2 x + x^2 + Sin[2 Pi*x] + Cos[2 Pi*x/0.5]
Itrue = Integrate[f[x], {x, a, b}]
s = RI[f, a, b, k, n1];
s[[1]] // MatrixForm
s[[2]] // MatrixForm
ErrorTable = Table[If[RItable[[i, j]] == "N/A", "N/A", Abs[Itrue - RItable[[i, j]]]], {i, 1, Length[RItable]}, {j, 1,  Length[RItable]}];
ErrorTable // MatrixForm

[Romberg table, depth k=5]
The corresponding errors are given in the following table:
[Table: corresponding absolute errors]

Comparing the table produced by the traditional trapezoidal method with that produced by the Romberg’s method reveals how powerful the Romberg’s method is. The Romberg table utilizes only the first 5 entries (up to n=16) of the traditional trapezoidal table and then, with a few additional calculations using the Romberg’s method equation, produces a value with an absolute error of 0.0000799, which is less than that of the traditional trapezoidal rule with n=128.

Gauss Quadrature

The Gauss integration scheme is a very efficient method to perform numerical integration over intervals. In fact, if the function to be integrated is a polynomial of an appropriate degree, then the Gauss integration scheme produces exact results. The Gauss integration scheme has been implemented in almost every finite element analysis software due to its simplicity and computational efficiency. This section outlines the basic principles behind the Gauss integration scheme. For its application to the finite element analysis method, please visit this section.

Introduction

Gauss quadrature aims to find the “least” number of fixed points to approximate the integral of a function f:[-1,1]\rightarrow \mathbb{R} such that:

    \[ I=\int_{-1}^{1} \! f\,\mathrm{d}x \approx \sum_{i=1}^n w_i f(x_i) \]

where \forall 1\leq i\leq n:x_i\in[-1,1] and w_i\in\mathbb{R}. Also, \forall i:x_i is called an integration point and w_i is called the associated weight. The number of integration points and the associated weights are chosen according to the complexity of the function f to be integrated. Since a general polynomial of degree 2n-1 has 2n coefficients, it is possible to find a Gauss integration scheme with n integration points and n associated weights that exactly integrates that polynomial function on the interval [-1,1]. The following figure illustrates the concept of using the Gauss integration points to calculate the area under the curve for polynomials. In the following sections, the required number of integration points for particular polynomials is presented.

Gauss Integration for Polynomials of the 1st, 3rd and 5th Degrees


Affine Functions (First-Degree Polynomials): One Integration Point

For a first-degree polynomial, 2n-1=1, therefore, n=1, so, 1 integration point is sufficient as will be shown here. Consider f(x)=ax+b with a,b\in\mathbb{R}, then:

    \[ \int_{-1}^{1} \! ax+b\,\mathrm{d}x=\left(bx+a\frac{x^2}{2}\right)\bigg|_{-1}^1=2b=2f(0) \]

So, for functions that are very close to being affine, a numerical integration scheme with 1 integration point that is x_1=0 with an associated weight of 2 can be employed. In other words, a one-point numerical integration scheme has the form:

    \[ I=\int_{-1}^{1} \! f\,\mathrm{d}x \approx 2 f(0) \]

Third-Degree Polynomials: Two Integration Points

For a third-degree polynomial, 2n-1=3, therefore, n=2, so, 2 integration points are sufficient as will be shown here. Consider f(x)=a_0+a_1x+a_2x^2+a_3x^3, then:

    \[ \int_{-1}^{1} \! a_0+a_1x+a_2x^2+a_3x^3\,\mathrm{d}x=2a_0+2\frac{a_2}{3} \]

Assuming 2 integration points in the Gauss integration scheme yields:

    \[ \int_{-1}^{1} \! a_0+a_1x+a_2x^2+a_3x^3\,\mathrm{d}x=w_1(a_0+a_1x_1+a_2x_1^2+a_3x_1^3)+w_2(a_0+a_1x_2+a_2x_2^2+a_3x_2^3) \]

For the Gauss integration scheme to yield exact results, the right-hand sides of the two equations above need to be equal for any choice of a cubic function. Therefore, the coefficients of a_0, a_1, a_2, and a_3 should be the same on both sides. So, four equations in four unknowns can be written as follows:

    \[ \begin{split} w_1+w_2&=2\\ w_1x_1+w_2x_2&=0\\ w_1x_1^2+w_2x_2^2&=\frac{2}{3}\\ w_1x_1^3+w_2x_2^3&=0 \end{split} \]

The solution to the above four equations yields x_1=\frac{-1}{\sqrt{3}}, x_2=\frac{1}{\sqrt{3}}, w_1=w_2=1. The Mathematica code below forms the above equations and solves them:
View Mathematica Code

f = a0 + a1*x + a2*x^2 + a3*x^3;
I1 = Integrate[f, {x, -1, 1}]
I2 = w1*(f /. x -> x1) + w2*(f /. x -> x2)
Eq1 = Coefficient[I1 - I2, a0]
Eq2 = Coefficient[I1 - I2, a1]
Eq3 = Coefficient[I1 - I2, a2]
Eq4 = Coefficient[I1 - I2, a3]
Solve[{Eq1 == 0, Eq2 == 0, Eq3 == 0, Eq4 == 0}, {x1, x2, w1, w2}]

So, for functions that are very close to being cubic, the following numerical integration scheme with 2 integration points can be employed:

    \[ I=\int_{-1}^{1} \! f\,\mathrm{d}x \approx f\left(\frac{-1}{\sqrt{3}}\right)+f\left(\frac{1}{\sqrt{3}}\right) \]
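Because the two points and weights were chosen to match all four cubic coefficients, the rule is exact for every cubic. A quick illustrative check in Python (the `gauss2` helper name is mine):

```python
import math

def gauss2(f):
    """Two-point Gauss rule on [-1, 1]: unit weights at x = ±1/sqrt(3)."""
    x = 1 / math.sqrt(3)
    return f(-x) + f(x)

# Exact for any cubic: here ∫_{-1}^{1} (7 - 2x + 5x^2 + 4x^3) dx = 2(7) + 2(5)/3
poly = lambda x: 7 - 2*x + 5*x**2 + 4*x**3
print(gauss2(poly), 14 + 10/3)  # the two values agree
```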

Fifth-Degree Polynomials: Three Integration Points

For a fifth-degree polynomial, 2n-1=5, therefore, n=3, so, 3 integration points are sufficient to exactly integrate a fifth-degree polynomial. Consider f(x)=a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+a_5x^5, then:

    \[ \int_{-1}^{1} \! a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+a_5x^5\,\mathrm{d}x=2a_0+2\frac{a_2}{3}+2\frac{a_4}{5} \]

Assuming 3 integration points in the Gauss integration scheme yields:

    \[\begin{split} \int_{-1}^{1} \! a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+a_5x^5\,\mathrm{d}x&=w_1(a_0+a_1x_1+a_2x_1^2+a_3x_1^3+a_4x_1^4+a_5x_1^5)\\ &+w_2(a_0+a_1x_2+a_2x_2^2+a_3x_2^3+a_4x_2^4+a_5x_2^5)\\ &+w_3(a_0+a_1x_3+a_2x_3^2+a_3x_3^3+a_4x_3^4+a_5x_3^5) \end{split} \]

For the Gauss integration scheme to yield exact results, the right-hand sides of the two equations above need to be equal for any choice of a fifth-degree polynomial function. Therefore, the coefficients of a_0, a_1, a_2, a_3, a_4, and a_5 should be the same on both sides. So, six equations in six unknowns can be written as follows:

    \[ \begin{split} w_1+w_2+w_3&=2\\ w_1x_1+w_2x_2+w_3x_3&=0\\ w_1x_1^2+w_2x_2^2+w_3x_3^2&=\frac{2}{3}\\ w_1x_1^3+w_2x_2^3+w_3x_3^3&=0\\ w_1x_1^4+w_2x_2^4+w_3x_3^4&=\frac{2}{5}\\ w_1x_1^5+w_2x_2^5+w_3x_3^5&=0 \end{split} \]

The solution to the above six equations yields x_1=-\sqrt{\frac{3}{5}}, x_2=0, x_3=\sqrt{\frac{3}{5}}, w_1=\frac{5}{9}, w_2=\frac{8}{9}, and w_3=\frac{5}{9}. The Mathematica code below forms the above equations and solves them:
View Mathematica Code

f = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4 + a5*x^5;
I1 = Integrate[f, {x, -1, 1}]
I2 = w1*(f /. x -> x1) + w2*(f /. x -> x2) + w3*(f /. x -> x3)
Eq1 = Coefficient[I1 - I2, a0]
Eq2 = Coefficient[I1 - I2, a1]
Eq3 = Coefficient[I1 - I2, a2]
Eq4 = Coefficient[I1 - I2, a3]
Eq5 = Coefficient[I1 - I2, a4]
Eq6 = Coefficient[I1 - I2, a5]
Solve[{Eq1 == 0, Eq2 == 0, Eq3 == 0, Eq4 == 0, Eq5 == 0, Eq6 == 0}, {x1, x2, x3, w1, w2, w3}]

So, for functions that are very close to being fifth-order polynomials, the following numerical integration scheme with 3 integration points can be employed:

    \[ I=\int_{-1}^{1} \! f\,\mathrm{d}x \approx \frac{5}{9}f\left(-\sqrt{\frac{3}{5}}\right)+\frac{8}{9}f(0)+\frac{5}{9}f\left(\sqrt{\frac{3}{5}}\right) \]
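As before, the rule can be checked against an arbitrary fifth-degree polynomial; a short illustrative Python check (the `gauss3` helper name is mine):

```python
import math

def gauss3(f):
    """Three-point Gauss rule on [-1, 1]."""
    x = math.sqrt(3/5)
    return 5/9 * f(-x) + 8/9 * f(0) + 5/9 * f(x)

# Exact for any quintic: here ∫_{-1}^{1} (x^5 - 2x^4 + x^2 - 3) dx = -2(2/5) + 2/3 - 6
poly = lambda x: x**5 - 2*x**4 + x**2 - 3
print(gauss3(poly), -4/5 + 2/3 - 6)  # the two values agree
```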

Seventh-Degree Polynomial: Four Integration Points

For a seventh-degree polynomial, 2n-1=7, therefore, n=4, so, 4 integration points are sufficient to exactly integrate a seventh-degree polynomial. Following the procedure outlined above, the integration points along with the associated weights are:

    \[\begin{split} x_1=-0.861136 \qquad w_1=0.347855\\ x_2=-0.339981 \qquad w_2=0.652145\\ x_3=0.339981 \qquad w_3=0.652145\\ x_4=0.861136 \qquad w_4=0.347855 \end{split} \]

So, for functions that are very close to being seventh-order polynomials, the following numerical integration scheme with 4 integration points can be employed:

    \[\begin{split} I&=\int_{-1}^{1} \! f\,\mathrm{d}x\\ & \approx 0.347855f(-0.861136)+0.652145f(-0.339981)\\ & +0.652145f(0.339981)+0.347855f(0.861136) \end{split} \]

The following is the Mathematica code used to find the above integration points and corresponding weights.

View Mathematica Code
f = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4 + a5*x^5 + a6*x^6 + a7*x^7;
I1 = Integrate[f, {x, -1, 1}]
I2 = w1*(f /. x -> x1) + w2*(f /. x -> x2) + w3*(f /. x -> x3) +  w4*(f /. x -> x4)
Eq1 = Coefficient[I1 - I2, a0]
Eq2 = Coefficient[I1 - I2, a1]
Eq3 = Coefficient[I1 - I2, a2]
Eq4 = Coefficient[I1 - I2, a3]
Eq5 = Coefficient[I1 - I2, a4]
Eq6 = Coefficient[I1 - I2, a5]
Eq7 = Coefficient[I1 - I2, a6]
Eq8 = Coefficient[I1 - I2, a7]
a = Solve[{Eq1 == 0, Eq2 == 0, Eq3 == 0, Eq4 == 0, Eq5 == 0, Eq6 == 0,  Eq7 == 0, Eq8 == 0}, {x1, x2, x3, w1, w2, w3, x4, w4}]
Chop[N[a[[1]]]]

Ninth-Degree Polynomial: Five Integration Points

For a ninth-degree polynomial, 2n-1=9, therefore, n=5, so, 5 integration points are sufficient to exactly integrate a ninth-degree polynomial. Following the procedure outlined above, the integration points along with the associated weights are:

    \[\begin{split} x_1=-0.90618 \qquad w_1=0.236927\\ x_2=-0.538469 \qquad w_2=0.478629\\ x_3=0 \qquad w_3=0.568889\\ x_4=0.538469 \qquad w_4=0.478629\\ x_5=0.90618 \qquad w_5=0.236927 \end{split} \]

So, for functions that are very close to being ninth-order polynomials, the following numerical integration scheme with 5 integration points can be employed:

    \[\begin{split} I&=\int_{-1}^{1} \! f\,\mathrm{d}x\\ & \approx 0.236927f(-0.90618)+0.478629f(-0.538469)\\ & +0.568889f(0)+0.478629f(0.538469)\\ & + 0.236927 f(0.90618) \end{split} \]

The following is the Mathematica code used to find the above integration points and corresponding weights.

View Mathematica Code
f = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4 + a5*x^5 + a6*x^6 + a7*x^7 +  a8*x^8 + a9*x^9;
I1 = Integrate[f, {x, -1, 1}]
I2 = w1*(f /. x -> x1) + w2*(f /. x -> x2) + w3*(f /. x -> x3) +   w4*(f /. x -> x4) + w5*(f /. x -> x5)
Eq1 = Coefficient[I1 - I2, a0]
Eq2 = Coefficient[I1 - I2, a1]
Eq3 = Coefficient[I1 - I2, a2]
Eq4 = Coefficient[I1 - I2, a3]
Eq5 = Coefficient[I1 - I2, a4]
Eq6 = Coefficient[I1 - I2, a5]
Eq7 = Coefficient[I1 - I2, a6]
Eq8 = Coefficient[I1 - I2, a7]

Eq9 = Coefficient[I1 - I2, a8]
Eq10 = Coefficient[I1 - I2, a9]

a = Solve[{Eq1 == 0, Eq2 == 0, Eq3 == 0, Eq4 == 0, Eq5 == 0, Eq6 == 0,  Eq7 == 0, Eq8 == 0, Eq9 == 0, Eq10 == 0}, {x1, x2, x3, w1, w2, w3, x4, w4, x5, w5}]
Chop[N[a[[1]]]]

Eleventh-Degree Polynomial: Six Integration Points

For an eleventh-degree polynomial, 2n-1=11, therefore, n=6, so, 6 integration points are sufficient to exactly integrate an eleventh-degree polynomial. Following the procedure outlined above, the integration points along with the associated weights are:

    \[\begin{split} x_1=-0.93247 \qquad w_1=0.171324\\ x_2=-0.661209 \qquad w_2=0.360762\\ x_3=-0.238619 \qquad w_3=0.467914\\ x_4=0.238619 \qquad w_4=0.467914\\ x_5=0.661209 \qquad w_5=0.360762\\ x_6=0.93247 \qquad w_6=0.171324\\ \end{split} \]

So, for functions that are very close to being eleventh-order polynomials, the following numerical integration scheme with 6 integration points can be employed:

    \[\begin{split} I&=\int_{-1}^{1} \! f\,\mathrm{d}x\\ & \approx 0.171324f(-0.93247 )+0.360762f(-0.661209 )\\ & +0.467914f(-0.238619)+0.467914f(0.238619)\\ & + 0.360762f(0.661209 )+0.171324f(0.93247 ) \end{split} \]

The following is the Mathematica code used to find the above integration points and corresponding weights. Note that symmetry was employed to reduce the number of equations.

View Mathematica Code
f = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4 + a5*x^5 + a6*x^6 + a7*x^7 + a8*x^8 + a9*x^9 + a10*x^10 + a11*x^11;
I1 = Integrate[f, {x, -1, 1}]
I2 = w1*(f /. x -> x1) + w2*(f /. x -> x2) + w3*(f /. x -> x3) + w3*(f /. x -> -x3) + w2*(f /. x -> -x2) + w1*(f /. x -> -x1)
Eq1 = Coefficient[I1 - I2, a0]
Eq3 = Coefficient[I1 - I2, a2]
Eq5 = Coefficient[I1 - I2, a4]
Eq7 = Coefficient[I1 - I2, a6]
Eq9 = Coefficient[I1 - I2, a8]
Eq11 = Coefficient[I1 - I2, a10]
a = Solve[{Eq1 == 0, Eq3 == 0, Eq5 == 0, Eq7 == 0, Eq9 == 0, Eq11 == 0}, {x1, x2, x3, w1, w2, w3}]
Chop[N[a[[1]]]]

Implementation of Gauss Quadrature

Gauss quadrature is very easy to implement and provides very accurate results with very few computations. However, one drawback is that it is not applicable to data obtained experimentally, since the values of the function at the specific integration points would not necessarily be available. In order to implement the Gauss integration scheme in Mathematica, first, the following table is created, which contains the integration points and the corresponding weights. Row number i contains the Gauss integration scheme with i-1 integration points. Then, a procedure is created in Mathematica whose input is a function f and the requested number of integration points. The procedure then calls the appropriate row in the “GaussTable” and calculates the weighted sum of the function evaluated at the appropriate integration points.
[Table: Gauss integration points and corresponding weights]

View Mathematica Code
Clear[f, x]
GaussTable = {{{"Integration Points"}, {"Corresponding Weights"}}, {{0}, {2}}, {{-1/Sqrt[3], 1/Sqrt[3]}, {1, 1}}, {{-Sqrt[3/5], 0, Sqrt[3/5]}, {5/9, 8/9, 5/9}}, {{-0.861136, -0.339981, 0.339981, 0.861136}, {0.347855, 0.652145, 0.652145, 0.347855}}, {{-0.90618, -0.538469, 0, 0.538469, 0.90618}, {0.236927, 0.478629, 0.568889, 0.478629, 0.236927}}, {{-0.93247, -0.661209, -0.238619, 0.238619, 0.661209, 0.93247}, {0.171324, 0.360762, 0.467914, 0.467914, 0.360762,  0.171324}}};
GaussTable // MatrixForm
IG[f_, n_] := Sum[GaussTable[[n + 1, 2, i]]*f[GaussTable[[n + 1, 1, i]]], {i, 1, n}]
f[x_] := x^9 + x^8
IG[f, 5]
Example

Calculate the exact integral of \cos{x} on the interval [-1,1] and find the relative error if a Gauss 1, 2, 3, and 4 integration points scheme is used. Also, calculate the number of rectangles required by the midpoint rule of the rectangle method to produce an error similar to that produced by the 3 point integration scheme, then evaluate the integral using the midpoint rule of the rectangle method with the obtained number of rectangles.

Solution

The exact integral is given by:

    \[ I=\int_{-1}^{1}\!\cos{x}\,\mathrm{d}x=\sin{(1)}-\sin{(-1)}=1.68294 \]

Employing a Gauss integration scheme with one integration point yields:

    \[\begin{split} I_1&=w_1\cos{x_1}=2 \cos{(0)}=2\\ E_r&=\frac{1.68294-2}{1.68294}=-0.1884 \end{split} \]

Employing a Gauss integration scheme with two integration points yields:

    \[ \begin{split} I_2&=w_1\cos{x_1}+w_2\cos{x_2}=\cos{\left(\frac{-1}{\sqrt{3}}\right)}+\cos{\left(\frac{1}{\sqrt{3}}\right)}=1.67582\\ E_r&=\frac{1.68294-1.67582}{1.68294}=0.00422968 \end{split} \]

Employing a Gauss integration scheme with three integration points yields:

    \[\begin{split} I_3&=w_1\cos{x_1}+w_2\cos{x_2}+w_3\cos{x_3}=\frac{5}{9}\cos{\left(-\sqrt{\frac{3}{5}}\right)}+\frac{8}{9}\cos{(0)}+\frac{5}{9}\cos{\left(\sqrt{\frac{3}{5}}\right)}=1.683\\ E_r&=\frac{1.68294-1.683}{1.68294}=-0.0000366 \end{split} \]

Employing a Gauss integration scheme with four integration points yields:

    \[\begin{split} I_4&=w_1\cos{x_1}+w_2\cos{x_2}+w_3\cos{x_3}+w_4\cos{x_4}\\ &= 0.347855\cos(-0.861136)+0.652145\cos(-0.339981)\\ &+0.652145\cos(0.339981)+0.347855\cos(0.861136)\\ &=1.68294\\ E_r&=\frac{1.68294-1.68294}{1.68294}=0 \end{split} \]

Using a three point integration scheme yielded an error of:

    \[ |E|=|1.68294-1.683| = |-0.00006|=0.00006 \]

Equating the upper bound for the error in the midpoint rule to the error produced by a three-point Gauss integration scheme yields:

    \[ \max_{\xi\in[a,b]}|f''(\xi)| (b-a)\frac{h^2}{24}=0.00006 \]

With b-a=1-(-1)=2, and \max_{\xi\in[a,b]}|f''(\xi)|=1. Therefore:

    \[ h=\sqrt{\frac{0.00006\times 24}{2}}=0.02683 \]

Therefore, the required number of rectangles is:

    \[ n=\frac{b-a}{h}=\frac{2}{0.02683}\approx 75 \]

With 75 rectangles the rectangle method yields an integral of I_{\mbox{rectangle}}=1.68299. It is important to note how accurate the Gauss integration scheme is compared to the rectangle method. For this example, it would take 75 divisions (equivalent to 75 integration points) to reach an accuracy similar to the Gauss integration scheme with only 3 points!

View Mathematica Code
Clear[f, x, I2]
GaussTable = {{{"Integration Points"}, {"Corresponding Weights"}}, {{0}, {2}}, {{-1/Sqrt[3], 1/Sqrt[3]}, {1, 1}}, {{-Sqrt[3/5], 0,  Sqrt[3/5]}, {5/9, 8/9, 5/9}}, {{-0.861136, -0.339981, 0.339981, 0.861136}, {0.347855, 0.652145, 0.652145, 0.347855}}, {{-0.90618, -0.538469, 0, 0.538469, 0.90618}, {0.236927, 0.478629, 0.568889, 0.478629, 0.236927}}, {{-0.93247, -0.661209, -0.238619, 0.238619, 0.661209, 0.93247}, {0.171324, 0.360762, 0.467914, 0.467914, 0.360762,  0.171324}}};
GaussTable // MatrixForm
IG[f_, n_] := Sum[GaussTable[[n + 1, 2, i]]*f[GaussTable[[n + 1, 1, i]]], {i, 1,  n}];
f[x_] := Cos[x]
Iexact = Integrate[Cos[x], {x, -1, 1.0}]
Itable = Table[{i, N[IG[f, i]], (Iexact - IG[f, i])/Iexact}, {i, 1, 6}];
Title = {"Number of Integration Points",  "Numerical Integration Results", "Relative Error"};
Itable = Prepend[Itable, Title];
Itable // MatrixForm
h = Sqrt[0.00006*24/2/1]
n = Ceiling[2/h]
I2[f_, a_, b_, n_] := (h = (b - a)/n; Sum[f[a + (i - 1/2)*h]*h, {i, 1, n}])
I2[f, -1.0, 1, n]
Alternate Integration Limits

The Gauss integration scheme presented above applies to functions that are integrated over the interval [-1,1]. A simple change of variables can be used to allow integrating a function g(z) where z\in[a,b]. In this case, the linear relationship between z and x can be expressed as:

    \[ \frac{z-a}{b-a}=\frac{x-(-1)}{1-(-1)}=\frac{x+1}{2} \]

Therefore:

    \[ \mathrm{d}z=\frac{(b-a)}{2}\mathrm{d}x \]

Which means the integration can be converted from integrating over z to integrating over x as follows:

    \[ \int_{a}^b\! g(z)\,\mathrm{d}z=\int_{-1}^1\! g(z(x))\,\frac{(b-a)}{2}\mathrm{d}x \]

where

    \[ z(x)=\frac{(b-a)(x+1)}{2}+a=\frac{(b-a)(x)+(b+a)}{2} \]

Therefore:

    \[ \int_{a}^b\! g(z)\,\mathrm{d}z=\int_{-1}^1\! g\left(\frac{(b-a)(x)+(b+a)}{2}\right)\,\frac{(b-a)}{2}\mathrm{d}x \]

The integration can then proceed as per the weights and values of the integration points with f(x) given by:

    \[ f(x)=g\left(\frac{(b-a)(x)+(b+a)}{2}\right)\,\frac{(b-a)}{2} \]
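Combining the change of variables with the tabulated points and weights gives a general-interval scheme; a minimal Python sketch (the `gauss_ab` helper and `RULES` table are mine, using the points derived above):

```python
import math

# (points, weights) for the n-point rules on [-1, 1] derived earlier
RULES = {
    1: ([0.0], [2.0]),
    2: ([-1/math.sqrt(3), 1/math.sqrt(3)], [1.0, 1.0]),
    3: ([-math.sqrt(3/5), 0.0, math.sqrt(3/5)], [5/9, 8/9, 5/9]),
}

def gauss_ab(g, a, b, n):
    """n-point Gauss quadrature of g over [a, b] via z = ((b-a)x + (b+a))/2."""
    xs, ws = RULES[n]
    return sum(w * g(((b - a) * x + (b + a)) / 2) * (b - a) / 2 for x, w in zip(xs, ws))

g = lambda z: z * math.exp(z)
print(gauss_ab(g, 0, 3, 3))  # about 41.1313 (exact: 2 e^3 + 1 = 41.1711)
```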

Example

Calculate the exact integral of xe^x on the interval [0,3] and find the absolute relative error if a Gauss 1, 2, 3, and 4 integration points scheme is used.

Solution

First, to distinguish the original integration variable from the variable after the change of variables, we will assume that the function is given in terms of z as follows:

    \[ g(z)=ze^z \]

The exact integral is given by:

    \[ \int_{0}^3\!ze^z\,\mathrm{d}z=e^z(z-1)|_{z=0}^{z=3}=e^3(2)-e^0(-1)=2e^3+1\approx 41.1711 \]

Using a linear change of variables z=z(x) we have:

    \[ \int_{a}^b\! g(z)\,\mathrm{d}z=\int_{-1}^1\! g\left(\frac{(b-a)(x)+(b+a)}{2}\right)\,\frac{(b-a)}{2}\mathrm{d}x \]

Therefore:

    \[ \int_{0}^3\! ze^z\,\mathrm{d}z=\int_{-1}^1\! g\left(\frac{3x+3}{2}\right)\frac{3}{2}\,\mathrm{d}x=\int_{-1}^1\! \left(\frac{(3x+3)}{2}\right)e^{\left(\frac{(3x+3)}{2}\right)}\frac{3}{2}\,\mathrm{d}x \]

Employing a Gauss integration scheme with one integration point yields:

    \[\begin{split} I_1&=w_1f(x_1)=2 f(0)=2 \left(\frac{(3(0)+3)}{2}\right)e^{\left(\frac{(3(0)+3)}{2}\right)}\frac{3}{2}=20.1676\\ E_r&=\frac{41.1711-20.1676}{41.1711}=0.51 \end{split} \]

Employing a Gauss integration scheme with two integration points yields:

    \[ \begin{split} I_2&=w_1f(x_1)+w_2f(x_2)\\ &=\left(\frac{(3\left(-\sqrt{\frac{1}{3}}\right)+3)}{2}\right)e^{\left(\frac{(3\left(-\sqrt{\frac{1}{3}}\right)+3)}{2}\right)}\frac{3}{2}+\left(\frac{(3\left(\sqrt{\frac{1}{3}}\right)+3)}{2}\right)e^{\left(\frac{(3\left(\sqrt{\frac{1}{3}}\right)+3)}{2}\right)}\frac{3}{2}=39.607\\ E_r&=\frac{41.1711-39.607}{41.1711}=0.038 \end{split} \]

Employing a Gauss integration scheme with three integration points yields:

    \[\begin{split} I_3&=w_1f(x_1)+w_2f(x_2)+w_3f(x_3)=\frac{5}{9}f(x_1)+\frac{8}{9}f(x_2)+\frac{5}{9}f(x_3)=41.1313\\ E_r&=\frac{41.1711-41.1313}{41.1711}=0.0009657 \end{split} \]

where,

    \[\begin{split} f(x_1)&=f(-\sqrt{0.6})=\left(\frac{(3(-\sqrt{0.6})+3)}{2}\right)e^{\left(\frac{(3(-\sqrt{0.6})+3)}{2}\right)}\frac{3}{2}\\ f(x_2)&=f(0)=\left(\frac{(3(0)+3)}{2}\right)e^{\left(\frac{(3(0)+3)}{2}\right)}\frac{3}{2}\\ f(x_3)&=f(\sqrt{0.6})=\left(\frac{(3(\sqrt{0.6})+3)}{2}\right)e^{\left(\frac{(3(\sqrt{0.6})+3)}{2}\right)}\frac{3}{2} \end{split} \]
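The three hand computations above can be verified numerically. The following Python sketch (my own check, not part of the chapter's code) evaluates the transformed integrand and the one-, two-, and three-point estimates:

```python
import math

def f(x, a=0.0, b=3.0):
    """Transformed integrand f(x) = g(z(x)) (b - a)/2 with g(z) = z e^z."""
    z = ((b - a) * x + (b + a)) / 2
    return z * math.exp(z) * (b - a) / 2

exact = 2 * math.e**3 + 1          # e^z (z - 1) evaluated from 0 to 3

I1 = 2 * f(0.0)                                                      # 1 point
I2 = f(-1 / math.sqrt(3)) + f(1 / math.sqrt(3))                      # 2 points
I3 = 5/9 * f(-math.sqrt(0.6)) + 8/9 * f(0.0) + 5/9 * f(math.sqrt(0.6))  # 3 points

for I in (I1, I2, I3):
    print(round(I, 4), round((exact - I) / exact, 6))
```

The printed estimates and relative errors should match the values derived above to the quoted number of digits.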

In the following Mathematica code, the procedure IGAL applies the Gauss integration scheme with n integration points to a function f on the interval [a,b].
View Mathematica Code

GaussTable = {{{"Integration Points"}, {"Corresponding Weights"}}, {{0}, {2}}, {{-1/Sqrt[3], 1/Sqrt[3]}, {1, 1}}, {{-Sqrt[3/5], 0,  Sqrt[3/5]}, {5/9, 8/9, 5/9}}, {{-0.861136, -0.339981, 0.339981,  0.861136}, {0.347855, 0.652145, 0.652145,  0.347855}}, {{-0.90618, -0.538469, 0, 0.538469, 0.90618}, {0.236927, 0.478629, 0.568889, 0.478629,  0.236927}}, {{-0.93247, -0.661209, -0.238619, 0.238619, 0.661209,      0.93247}, {0.171324, 0.360762, 0.467914, 0.467914, 0.360762,      0.171324}}};
GaussTable // MatrixForm
IGAL[f_, n_, a_, b_] := Sum[(b - a)/2*GaussTable[[n + 1, 2, i]]*f[(b - a)/2*(GaussTable[[n + 1, 1, i]] + 1) + a], {i, 1, n}]
f[x_] := x*E^x
Iexact = Integrate[f[x], {x, 0., 3}]
Itable = Table[{i, N[IGAL[f, i, 0, 3]], (Iexact - IGAL[f, i, 0, 3])/Iexact}, {i, 1, 6}];
Title = {"Number of Integration Points",  "Numerical Integration Results", "Relative Error"};
Itable = Prepend[Itable, Title];
Itable // MatrixForm

Problems

  1. Consider the following integral

        \[ \int_{0}^4\! \left(1-e^{-x}\right)\mathrm{d}x \]

    • Evaluate the integral analytically.
    • Evaluate the integral using the trapezoidal rule with n=1, 2, and 4. For each case, calculate the relative error.
    • Evaluate the integral using Simpson’s 1/3 rule with n=1, 2, and 4. For each case, calculate the relative error.
    • Evaluate the integral using Simpson’s 3/8 rule with n=1, 2, and 4. For each case, calculate the relative error.
    • Comment on the accuracy of the methods used above.
  2. A transportation engineering study requires calculating the number of cars travelling through an intersection during the morning rush hour. The following table provides the number of cars that pass every 4 minutes at particular times during that period:
    [Table omitted: number of cars passing every 4 minutes during the morning rush hour]

    • Find the corresponding rate of cars through the intersection per minute
    • By integrating the rate of cars per minute over the interval from 7:30 a.m. to 9:15 a.m., find the total number of cars that pass through the intersection. Use an appropriate integration method and justify its use.
  3. Consider the following integral

        \[ \int_{0}^3\!xe^{2x}\,\mathrm{d}x \]

    • Evaluate the integral analytically.
    • Evaluate the integral using Romberg’s method with a depth k such that \varepsilon_s=0.005.
    • Evaluate the integral using the Gauss integration scheme with 1, 2, 3, and 4 integration points. Calculate the relative error in each case.
    • Comment on the number of integration points required to achieve an acceptable accuracy.
  4. Consider the following integral

        \[ \mbox{erf}(a)=\frac{2}{\sqrt{\pi}}\int_{0}^{a}e^{-x^2}\,\mathrm{d}x \]

    with a=1.5

    • Use the built-in numerical integration “NIntegrate” function to calculate an approximation to the true value of the integral. Consider this to be the true value.
    • Evaluate the integral using the Gauss integration scheme with 1, 2, 3, and 4 integration points. Calculate the relative error in each case.
  5. The amount of mass transported via a pipe over a period of time can be computed as:

        \[ M=\int_{t_1}^{t_2}\!Q(t)c(t)\,\mathrm{d}t \]

    where M is the mass in mg, t_1 is the initial time in minutes, t_2 is the final time in minutes, Q(t) is the flow rate in m^3/min, and c(t) is the concentration in mg/m^3. The following functional representations define the temporal variations in flow and concentration:

        \[\begin{split} Q(t)&=9+5(\cos{(0.4t)} )^2\\ c(t)&=5e^{-0.5t}+2e^{0.15t} \end{split} \]

    • Determine the mass transported between t_1=2 and t_2=8 using the Gauss integration scheme with 1, 2, 3, and 4 integration points.
    • Either using trial and error or using the upper bound of the error, find the number of intervals for the trapezoidal rule and for Simpson’s 1/3 rule such that the absolute value of the error is equivalent to that produced by the Gauss integration scheme with 4 points.
    • Find the level k in Romberg’s method that would produce an absolute value of the error similar to that produced by the Gauss integration scheme with 4 points.
  6. Use the “Maximize” built-in function in Mathematica to create procedures whose inputs are f, h, a, and b which evaluate the upper bound of the error for the rectangle method, the trapezoidal rule, the Simpson’s 1/3 rule, and the Simpson’s 3/8 rule. Using different examples for each method, show that the actual error is indeed less than the upper bound of the error.
  7. Modify the Romberg’s method code so that the inputs are: f, a, b, k_{max}, and \varepsilon_s. k_{max} is the maximum number of columns in the produced Romberg table, but the code should stop when \varepsilon_r\leq \varepsilon_s or when k_{max} is reached. The code should be efficient such that the only terms calculated are those used in the estimate of the integral (i.e., you should not calculate all the values up to k_{max}).
