Open Educational Resources

Introduction to Numerical Analysis: Curve Fitting

Given a set of data (x_i,y_i) with 1\leq i \leq n, curve fitting is the process of finding a mathematical model for the relationship y(x) whose predictions match the given data as closely as possible. A good fit offers two advantages. The first is data reduction, since the mathematical model usually has far fewer parameters than there are data points. The second is the characterization of the response y(x), which allows calculating y for any value of the variable x within the range of applicability of the model.

The first step in curve fitting is to assume a particular mathematical model that might fit the data. The second step is to find the model parameters that minimize the sum of the squares of the errors between the model predictions and the data. In these pages we will first present linear regression, i.e., fitting data to a linear mathematical model. Then, we will present nonlinear regression, i.e., fitting data to nonlinear mathematical models.

Linear Regression

Given a set of data (x_i,y_i) with 1\leq i \leq n, a linear model fit to this set of data has the form:

    \[ y(x)=ax+b \]

where a and b are the model parameters. The model parameters can be found by minimizing the sum of the squares of the difference (S) between the data points and the model predictions:

    \[ S=\sum_{i=1}^n\left(y(x_i)-y_i\right)^2 \]

Substituting for y(x_i)=ax_i+b yields:

    \[ S=\sum_{i=1}^n\left(ax_i+b-y_i\right)^2 \]

In order to minimize S, we differentiate S with respect to the unknown parameters a and b and set the two derivatives to zero:

    \[\begin{split} \frac{\partial S}{\partial a}&=2\sum_{i=1}^n\left(\left(ax_i+b-y_i\right)x_i\right)=0\\ \frac{\partial S}{\partial b}&=2\sum_{i=1}^n\left(ax_i+b-y_i\right)=0\\ \end{split} \]

Solving the above two equations yields the best linear fit to the data. The solution for a and b has the form:

    \[\begin{split} a&=\frac{n\sum_{i=1}^nx_iy_i-\sum_{i=1}^nx_i\sum_{i=1}^ny_i}{n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2}\\ b&=\frac{\sum_{i=1}^ny_i-a\sum_{i=1}^nx_i}{n} \end{split} \]
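
As a quick numerical check of these closed-form expressions, the following Python sketch (supplementary to the Mathematica code on this page; the helper name linear_fit is our own) evaluates a and b for the example data used further below:

```python
def linear_fit(data):
    """Least-squares slope a and intercept b for the model y = a*x + b."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    b = (sy - a * sx) / n
    return a, b

# Example data from this page
data = [(1, 0.5), (2, 2.5), (3, 2), (4, 4.0), (5, 3.5), (6, 6.0), (7, 5.5)]
a, b = linear_fit(data)
print(a, b)  # ≈ 0.839286 and 0.071429
```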

The built-in function “Fit” in Mathematica can be used to find the best linear fit to any data. The following code illustrates how to use “Fit” to find the equation of the line that best fits the given array of data:

Data = {{1, 0.5}, {2, 2.5}, {3, 2}, {4, 4.0}, {5, 3.5}, {6, 6.0}, {7,  5.5}};
y = Fit[Data, {1, x}, x]
Plot[y, {x, 1, 7}, Epilog -> {PointSize[Large], Point[Data]}, AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0}]

Coefficient of Determination

The coefficient of determination, denoted R^2 or r^2 and pronounced “R squared”, provides a statistical measure of how well the model fits the data. The value of R^2 can be calculated as follows:

    \[ R^2=1-\frac{\sum_{i=1}^n\left(y_i-y(x_i)\right)^2}{\sum_{i=1}^n\left(y_i-\frac{1}{n}\sum_{i=1}^ny_i\right)^2} \]

The equation above implies that 0\leq R^2 \leq 1. A value close to 1 indicates that the model is a good fit for the data, while a value close to 0 indicates that it is not.
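
The following short Python sketch (supplementary, not part of the original Mathematica material) evaluates this definition directly; the slope and intercept used here are the values obtained for the example data later on this page:

```python
def r_squared(data, model):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ys = [y for _, y in data]
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - model(x)) ** 2 for x, y in data)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Example data from this page; a and b are taken from the worked example below
data = [(1, 0.5), (2, 2.5), (3, 2), (4, 4.0), (5, 3.5), (6, 6.0), (7, 5.5)]
r2 = r_squared(data, lambda x: 0.839286 * x + 0.071429)
print(r2)  # ≈ 0.8683
```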

Using Mathematica and/or Excel

The built-in Mathematica function “LinearModelFit” provides the statistical measures associated with a linear regression model, including confidence intervals for the parameters a and b; these, however, are not covered in this course. “LinearModelFit” can be used to output the R^2 of the linear fit as follows:

Data = {{1, 0.5}, {2, 2.5}, {3, 2}, {4, 4.0}, {5, 3.5}, {6, 6.0}, {7, 5.5}};
y2 = LinearModelFit[Data, x, x]
y = Normal[y2]
y3 = y2["RSquared"]
Plot[y, {x, 1, 7}, Epilog -> {PointSize[Large], Point[Data]}, AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0}]

The following user-defined Mathematica procedure finds the linear fit and R^2 and draws the curve for a set of data:

Clear[a, b, x]
Data = {{1, 0.5}, {2, 2.5}, {3, 2}, {4, 4.0}, {5, 3.5}, {6, 6.0}, {7, 5.5}};
LinearFit[Data_] := (
  n = Length[Data];
  a = (n*Sum[Data[[i, 1]]*Data[[i, 2]], {i, 1, n}] - Sum[Data[[i, 1]], {i, 1, n}]*Sum[Data[[i, 2]], {i, 1, n}])/(n*Sum[Data[[i, 1]]^2, {i, 1, n}] - (Sum[Data[[i, 1]], {i, 1, n}])^2);
  b = (Sum[Data[[i, 2]], {i, 1, n}] - a*Sum[Data[[i, 1]], {i, 1, n}])/n;
  RSquared = 1 - Sum[(Data[[i, 2]] - a*Data[[i, 1]] - b)^2, {i, 1, n}]/Sum[(Data[[i, 2]] - 1/n*Sum[Data[[j, 2]], {j, 1, n}])^2, {i, 1, n}];
  {RSquared, a, b})
res = LinearFit[Data]
RSquared = res[[1]]
a = res[[2]]
b = res[[3]]
y = a*x + b;
Plot[y, {x, 1, 7}, Epilog -> {PointSize[Large], Point[Data]}, AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0}]

Microsoft Excel can be used to provide the linear fit (linear regression) for any set of data. First, plot the data using an “XY scatter” chart. Then, right-click the data series and select “Add Trendline” to add the linear fit. The trendline can be formatted to show the equation and the R^2 value on the chart.

Example

Find the best linear fit to the data (1,0.5), (2,2.5), (3,2), (4,4), (5,3.5), (6,6), (7,5.5).

Solution

Using the equations for a, b, and R^2:

    \[ \begin{split} a&=\frac{n\sum_{i=1}^nx_iy_i-\sum_{i=1}^nx_i\sum_{i=1}^ny_i}{n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2}=\frac{7(119.5)-(28)(24)}{7(140)-(28)^2}=0.839286\\ b&=\frac{\sum_{i=1}^ny_i-a\sum_{i=1}^nx_i}{n}=\frac{24-(0.839286)(28)}{7}=0.071429\\ R^2&=1-\frac{\sum_{i=1}^n\left(y_i-y(x_i)\right)^2}{\sum_{i=1}^n\left(y_i-\frac{1}{n}\sum_{i=1}^ny_i\right)^2}=1-\frac{2.9911}{22.7143}=0.8683 \end{split} \]

Microsoft Excel can easily be used to generate the values for a, b, and R^2, as shown in the following sheet:
[Excel sheet: computation of a, b, and R^2 for the example data]

The following is the Microsoft Excel chart produced as described above.

[Excel chart: data with the linear trendline]

Extension of Linear Regression

In the previous section, the model function y was linear in x. However, we can have a model function y that is linear in the unknown coefficients but nonlinear in x. In general, the model function y can be composed of m terms of the following form:

    \[ y(x)=a_1f_1(x)+a_2f_2(x)+a_3f_3(x)+\cdots+a_mf_m(x)=\sum_{j=1}^ma_jf_j(x) \]

Note that the linear regression model can be viewed as a special case of this general form with only two functions, f_1(x)=1 and f_2(x)=x. Another special case of this general form is polynomial regression, in which the model function is a polynomial of degree m (a linear combination of the m+1 functions 1, x, x^2, \cdots, x^m):

    \[ y(x)=a_0+a_1x+a_2x^2+a_3x^3+\cdots+a_mx^m \]

The regression procedure consists of finding the coefficients a_j that yield the least sum of squared differences between the data and the model prediction. Given a set of data (x_i,y_i) with 1\leq i \leq n, the sum S of the squared differences between a general linear regression model and the data has the form:

    \[ S=\sum_{i=1}^n\left(y(x_i)-y_i\right)^2=\sum_{i=1}^n\left(\sum_{j=1}^ma_jf_j(x_i)-y_i\right)^2 \]

To find the minimizers of S, the derivatives of S with respect to each of the coefficients a_j can be equated to zero. Taking the derivative of S with respect to an arbitrary coefficient a_k and equating to zero yields the general equation:

    \[ \frac{\partial S}{\partial a_k}=\sum_{i=1}^n\left(2\left(\sum_{j=1}^ma_jf_j(x_i)-y_i\right)f_k(x_i)\right)=0 \]

This yields a system of m equations in the m unknowns a_1,a_2,\cdots,a_m of the form:

    \[ \begin{split} \left(\begin{matrix} \sum_{i=1}^nf_1(x_i)f_1(x_i)&\sum_{i=1}^nf_1(x_i)f_2(x_i)&\cdots & \sum_{i=1}^nf_1(x_i)f_m(x_i)\\ \sum_{i=1}^nf_2(x_i)f_1(x_i)&\sum_{i=1}^nf_2(x_i)f_2(x_i)&\cdots & \sum_{i=1}^nf_2(x_i)f_m(x_i)\\ \vdots&\vdots&\ddots & \vdots\\ \sum_{i=1}^nf_m(x_i)f_1(x_i)&\sum_{i=1}^nf_m(x_i)f_2(x_i)&\cdots & \sum_{i=1}^nf_m(x_i)f_m(x_i) \end{matrix}\right)&\left(\begin{array}{c}a_1\\a_2\\\vdots\\a_m\end{array}\right)\\ &= \left(\begin{array}{c}\sum_{i=1}^ny_if_1(x_i)\\\sum_{i=1}^ny_if_2(x_i)\\\vdots\\\sum_{i=1}^ny_if_m(x_i)\end{array}\right) \end{split} \]

It can be shown that the above system has a unique solution when the functions f_j(x) are linearly independent on the data points. Solving these equations yields the best fit to the data, i.e., the coefficients a_j that minimize the sum of the squares of the differences between the model and the data. The coefficient of determination can be obtained as described above.
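
The construction of this system can be sketched programmatically. The following supplementary Python code (the helper names solve and general_linear_fit are our own) assembles the normal equations for an arbitrary list of basis functions and solves them with a small Gaussian-elimination routine; with the basis {1, x, cos(pi x)} and the data of Example 1 below, it reproduces the coefficients obtained there:

```python
import math

def solve(A, r):
    """Solve the linear system A t = r by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [r[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            for j in range(c, n + 1):
                M[k][j] -= f * M[c][j]
    t = [0.0] * n
    for i in range(n - 1, -1, -1):
        t[i] = (M[i][n] - sum(M[i][j] * t[j] for j in range(i + 1, n))) / M[i][i]
    return t

def general_linear_fit(data, fs):
    """Least-squares coefficients a_j for y = sum_j a_j f_j(x)."""
    m = len(fs)
    # Normal-equation matrix A[j][k] = sum_i f_j(x_i) f_k(x_i), right-hand side r[j] = sum_i y_i f_j(x_i)
    A = [[sum(fs[j](x) * fs[k](x) for x, _ in data) for k in range(m)] for j in range(m)]
    r = [sum(y * fs[j](x) for x, y in data) for j in range(m)]
    return solve(A, r)

# Data and basis functions of Example 1 below
data = [(1, 0.5), (2, 2.5), (3, 2), (4, 4.0), (5, 3.5), (6, 6.0), (7, 5.5)]
fs = [lambda x: 1.0, lambda x: x, lambda x: math.cos(math.pi * x)]
a1, a2, a3 = general_linear_fit(data, fs)
print(a1, a2, a3)  # ≈ 0.16369, 0.83929, 0.64583
```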

Example 1

Consider the data (1,0.5), (2,2.5), (3,2), (4,4), (5,3.5), (6,6), (7,5.5). Consider a model of the form

    \[ y=a_1+a_2x+a_3 \cos{(\pi x)} \]

Find the coefficients a_1, a_2, and a_3 that would give the best fit.

Solution

This model is composed of a linear combination of three functions: f_1(x)=1, f_2(x)=x, and f_3(x)=\cos{(\pi x)}. To find the best coefficients, the following linear system of equations needs to be solved:

    \[ \begin{split} \left(\begin{matrix} \sum_{i=1}^nf_1(x_i)f_1(x_i)&\sum_{i=1}^nf_1(x_i)f_2(x_i)& \sum_{i=1}^nf_1(x_i)f_3(x_i)\\ \sum_{i=1}^nf_2(x_i)f_1(x_i)&\sum_{i=1}^nf_2(x_i)f_2(x_i)&\sum_{i=1}^nf_2(x_i)f_3(x_i)\\ \sum_{i=1}^nf_3(x_i)f_1(x_i)&\sum_{i=1}^nf_3(x_i)f_2(x_i)&\sum_{i=1}^nf_3(x_i)f_3(x_i) \end{matrix}\right)&\left(\begin{array}{c}a_1\\a_2\\a_3\end{array}\right)\\ &= \left(\begin{array}{c}\sum_{i=1}^ny_if_1(x_i)\\\sum_{i=1}^ny_if_2(x_i)\\\sum_{i=1}^ny_if_3(x_i)\end{array}\right) \end{split} \]

The following Microsoft Excel table is used to calculate the entries in the above matrix equation:
[Excel table: sums forming the entries of the matrix equation]
Therefore, the linear system of equations can be written as:

    \[ \left(\begin{matrix} 7&28&-1\\ 28&140&-4\\ -1&-4&7 \end{matrix}\right)  \left(\begin{array}{c}a_1\\a_2\\a_3\end{array}\right)= \left(\begin{array}{c}24\\119.5\\1\end{array}\right) \]

Solving the above system yields a_1=0.16369, a_2=0.83929, a_3=0.64583. Therefore, the best-fit model has the form:

    \[ y(x)=0.16369+0.83929x+0.64583\cos{(\pi x)} \]

To find the coefficient of determination, the following Microsoft Excel table for the values of (y_i-y(x_i))^2 and \left(y_i-\frac{1}{n}\sum_{i=1}^ny_i\right)^2 is used:
[Excel table: values of (y_i - y(x_i))^2 and (y_i - mean(y))^2]
Therefore:

    \[ R^2=1-\frac{0.13095}{22.71429}=0.994234 \]

R^2 is very close to 1, indicating a very good fit!

The Mathematica function LinearModelFit[Data,{functions},x] performs the above computations and provides the required model. The equation of the model can be retrieved using the built-in function Normal, and the R^2 can also be retrieved, as shown in the code below. The final plot of the model vs. the data is also shown.

Clear[a, b, x]
Data = {{1, 0.5}, {2, 2.5}, {3, 2}, {4, 4.0}, {5, 3.5}, {6, 6.0}, {7, 5.5}};
model = LinearModelFit[Data, {1, x, Cos[Pi*x]}, x]
y = Normal[model]
R2 = model["RSquared"]
Plot[y, {x, 1, 7}, Epilog -> {PointSize[Large], Point[Data]},  PlotLegends->{"Model"},AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0} ]

[Mathematica plot: model vs. data]

Example 2

Fit a cubic polynomial to the data (1,1.93),(1.1,1.61),(1.2,2.27),(1.3,3.19),(1.4,3.19),(1.5,3.71),(1.6,4.29),(1.7,4.95),(1.8,6.07),(1.9,7.48),(2,8.72),(2.1,9.34),(2.2,11.62).

Solution

A cubic polynomial fit would have the form:

    \[ y(x)=a_0+a_1x+a_2x^2+a_3x^3 \]

This is a linear combination of the functions f_0(x)=1, f_1(x)=x, f_2(x)=x^2, and f_3(x)=x^3. The following system of equations needs to be formed to solve for the coefficients a_0, a_1, a_2, and a_3:

    \[ \begin{split} \left(\begin{matrix} \sum_{i=1}^nf_0(x_i)f_0(x_i)&\sum_{i=1}^nf_0(x_i)f_1(x_i)&\sum_{i=1}^nf_0(x_i)f_2(x_i) & \sum_{i=1}^nf_0(x_i)f_3(x_i)\\ \sum_{i=1}^nf_1(x_i)f_0(x_i)&\sum_{i=1}^nf_1(x_i)f_1(x_i)&\sum_{i=1}^nf_1(x_i)f_2(x_i) & \sum_{i=1}^nf_1(x_i)f_3(x_i)\\ \sum_{i=1}^nf_2(x_i)f_0(x_i)&\sum_{i=1}^nf_2(x_i)f_1(x_i)&\sum_{i=1}^nf_2(x_i)f_2(x_i) & \sum_{i=1}^nf_2(x_i)f_3(x_i)\\ \sum_{i=1}^nf_3(x_i)f_0(x_i)&\sum_{i=1}^nf_3(x_i)f_1(x_i)&\sum_{i=1}^nf_3(x_i)f_2(x_i) & \sum_{i=1}^nf_3(x_i)f_3(x_i)\\ \end{matrix}\right)&\left(\begin{array}{c}a_0\\a_1\\a_2\\a_3\end{array}\right)\\ &= \left(\begin{array}{c}\sum_{i=1}^ny_if_0(x_i)\\\sum_{i=1}^ny_if_1(x_i)\\\sum_{i=1}^ny_if_2(x_i)\\\sum_{i=1}^ny_if_3(x_i)\end{array}\right) \end{split} \]

The following Microsoft Excel tables are used to find the entries of the above linear system of equations:
[Excel tables: sums forming the entries of the linear system]

Therefore, the linear system of equations can be written as:

    \[ \left(\begin{matrix} 13&20.8&35.1&61.984\\ 20.8&35.1&61.984&113.607\\ 35.1&61.984&113.607&214.502\\ 61.984&113.607&214.502&414.623 \end{matrix}\right)\left(\begin{array}{c}a_0\\a_1\\a_2\\a_3\end{array}\right)= \left(\begin{array}{c}68.37\\123.638\\231.4056\\444.8628\end{array}\right) \]

Solving the above system yields a_0=-1.6247, a_1=6.6301, a_2=-5.7273, and a_3=2.46212. Therefore, the best-fit model has the form:

    \[ y=-1.6247+6.6301x-5.7273x^2+2.46212x^3 \]

The following graph shows the Microsoft Excel plot with the generated cubic trendline. The trendline equation generated by Excel is the same as the one above.
[Excel chart: data with the cubic trendline]

To find the coefficient of determination, the following Microsoft Excel table for the values of (y_i-y(x_i))^2 and \left(y_i-\frac{1}{n}\sum_{i=1}^ny_i\right)^2 is used:
[Excel table: values of (y_i - y(x_i))^2 and (y_i - mean(y))^2]

Therefore:

    \[ R^2=1-\frac{0.9506}{120.0129}=0.9921 \]

This value is similar to the one produced by Microsoft Excel. Alternatively, the following Mathematica code performs the above computations to produce the model and its R^2.

Data = {{1, 1.93}, {1.1, 1.61}, {1.2, 2.27}, {1.3, 3.19}, {1.4,3.19}, {1.5, 3.71}, {1.6, 4.29}, {1.7, 4.95}, {1.8, 6.07}, {1.9,7.48}, {2, 8.72}, {2.1, 9.34}, {2.2, 11.62}};
model = LinearModelFit[Data, {1, x, x^2, x^3}, x]
y = Normal[model]
R2 = model["RSquared"]
Plot[y, {x, 1, 2.2}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {"Model"}, AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0} ]

Linearization of Nonlinear Relationships

In the previous two sections, the model function y was formed as a linear combination of functions f_1,f_2,\cdots,f_m, and minimizing the sum of the squares of the differences between the model prediction and the data produced a linear system of equations for the coefficients; in other words, y was linear in the coefficients. In certain situations, it is possible to convert nonlinear relationships into a linear form amenable to the previous methods. For example, consider the following models y_{\mbox{exp}}, y_{\mbox{power}}, and y_{\mbox{log}}:

    \[ y_{\mbox{exp}}=b_1e^{a_1x}\qquad y_{\mbox{power}}=b_2x^{a_2} \qquad y_{\mbox{log}}=a_3\ln x + b_3 \]

y_{\mbox{exp}} is an exponential model, y_{\mbox{power}} is a power model, and y_{\mbox{log}} is a logarithmic model. The first two models are nonlinear in both x and the unknown coefficients, while the logarithmic model is nonlinear only in x. By taking the natural logarithm of the first two, they can easily be transformed into linear models as follows:

    \[ \ln y_{\mbox{exp}}=a_1 x+\ln b_1  \qquad \ln y_{\mbox{power}}=a_2 \ln x+\ln b_2 \]

In the first model, the data can be converted to (x_i,\ln y_i) and linear regression can be used to find the coefficients a_1 and \ln b_1. For the second model, the data can be converted to (\ln x_i,\ln y_i) and linear regression can be used to find the coefficients a_2 and \ln b_2. The third model is already linear in its coefficients and can be fit directly after converting the data into the form (\ln x_i, y_i).
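
The linearization procedure can be sketched as follows in Python (a supplementary illustration; the helper name fit_exponential is our own, and the power model is handled identically after replacing x by ln x). For the data of the exponential example below, it reproduces the coefficients a_1 ≈ 1.5976 and b_1 ≈ 0.3427 obtained there:

```python
import math

def fit_exponential(data):
    """Fit y = b1 * exp(a1 * x) by linear regression on the transformed data (x, ln y)."""
    t = [(x, math.log(y)) for x, y in data]
    n = len(t)
    sx = sum(x for x, _ in t)
    sy = sum(v for _, v in t)
    sxy = sum(x * v for x, v in t)
    sxx = sum(x * x for x, _ in t)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)       # slope of the linearized model
    b1 = math.exp((sy - a1 * sx) / n)                    # intercept mapped back: b1 = e^(b*)
    return a1, b1

# Data of the exponential example on this page
data = [(1, 1.93), (1.1, 1.61), (1.2, 2.27), (1.3, 3.19), (1.4, 3.19),
        (1.5, 3.71), (1.6, 4.29), (1.7, 4.95), (1.8, 6.07), (1.9, 7.48),
        (2, 8.72), (2.1, 9.34), (2.2, 11.62)]
a1, b1 = fit_exponential(data)
print(a1, b1)  # ≈ 1.5976 and 0.3427
```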

Coefficient of Determination for Nonlinear Relationships

For nonlinear relationships, the coefficient of determination defined previously is not a very good measure of how well the model fits the data. See for example this article on the subject. In fact, different software packages report different values of R^2 for nonlinear models. For nonlinear relationships, we will use the coefficient of determination defined as:

    \[ R^2=1-\frac{\sum_{i=1}^n\left(y_i-y(x_i)\right)^2}{\sum_{i=1}^n\left(y_i\right)^2} \]

which is equal to 1 minus the ratio of the residual sum of squares to the uncentered total sum of squares \sum_{i=1}^n y_i^2. This is consistent with the definition of R^2 used in Mathematica for nonlinear models.
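
As a short supplementary Python illustration of this definition, the following computes R^2 for the exponential model fitted in Example 1 below (its rounded coefficients, 0.3427 and 1.5976, are taken as given here):

```python
import math

def r_squared_nonlinear(data, model):
    """R^2 for nonlinear models as defined here: 1 - SS_res / sum(y_i^2)."""
    ss_res = sum((y - model(x)) ** 2 for x, y in data)
    ss_tot = sum(y ** 2 for _, y in data)
    return 1 - ss_res / ss_tot

# Data of Example 1 below; the model coefficients come from that example
data = [(1, 1.93), (1.1, 1.61), (1.2, 2.27), (1.3, 3.19), (1.4, 3.19),
        (1.5, 3.71), (1.6, 4.29), (1.7, 4.95), (1.8, 6.07), (1.9, 7.48),
        (2, 8.72), (2.1, 9.34), (2.2, 11.62)]
r2 = r_squared_nonlinear(data, lambda x: 0.3427 * math.exp(1.5976 * x))
print(r2)  # ≈ 0.998
```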

Example 1

Fit an exponential model to the data: (1,1.93),(1.1,1.61),(1.2,2.27),(1.3,3.19),(1.4,3.19),(1.5,3.71),(1.6,4.29),(1.7,4.95),(1.8,6.07),(1.9,7.48),(2,8.72),(2.1,9.34),(2.2,11.62).

Solution

The exponential model has the form:

    \[ y_{\mbox{exp}}=b_1e^{a_1x} \]

This form can be linearized as follows:

    \[ \ln y_{\mbox{exp}}=a_1 x+\ln b_1 \]

The data need to be converted to (x_i,\ln y_i); y^* will be used to designate \ln y. The following Microsoft Excel table shows the raw data and the data after conversion to (x_i,y^*_i):
[Excel table: raw data and the converted data (x_i, y_i^*)]

The linear regression described above will be used to find the best fit for the model:

    \[ y^*=a^*x+b^* \]

with

    \[\begin{split} a^*&=\frac{n\sum_{i=1}^nx_iy^*_i-\sum_{i=1}^nx_i\sum_{i=1}^ny^*_i}{n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2}\\ b^*&=\frac{\sum_{i=1}^ny^*_i-a^*\sum_{i=1}^nx_i}{n} \end{split} \]

The following Microsoft Excel table is used to calculate the various entries in the above equations:
[Excel table: sums used to evaluate a^* and b^*]

Therefore:

    \[\begin{split} a^*&=\frac{13\times 33.8013-20.8\times 19.3085}{13\times 35.10-\left(20.8\right)^2}=1.5976\\ b^*&=\frac{19.3085-1.5976\times 20.8}{13}=-1.0709 \end{split} \]

These can be used to calculate the coefficients in the original model:

    \[ a_1=a^*=1.5976 \qquad b_1=e^{b^*}=e^{-1.0709}=0.3427 \]

Therefore, the best exponential model based on the least squares of the linearized version has the form:

    \[ y_{\mbox{exp}}=0.3427e^{1.5976x} \]

The following Microsoft Excel chart shows the calculated trendline in Excel with the same coefficients:
[Excel chart: data with the exponential trendline]

It is possible to calculate the coefficient of determination for the linearized version of this model; however, it would only describe how well the linearized model fits the transformed data. For the nonlinear model, we will use the coefficient of determination as described above, which requires the following Microsoft Excel table:
[Excel table: values of (y_i - y(x_i))^2 and y_i^2]
In this case, the coefficient of determination can be calculated as:

    \[ R^2=1-\frac{\sum_{i=1}^n\left(y_i-y(x_i)\right)^2}{\sum_{i=1}^n\left(y_i\right)^2}=1-\frac{0.97}{479.59}=0.998 \]

The NonlinearModelFit built-in function in Mathematica can be used to generate the model and calculate its R^2 as shown in the code below.

Data = {{1, 1.93}, {1.1, 1.61}, {1.2, 2.27}, {1.3, 3.19}, {1.4, 3.19}, {1.5, 3.71}, {1.6, 4.29}, {1.7, 4.95}, {1.8, 6.07}, {1.9, 7.48}, {2, 8.72}, {2.1, 9.34}, {2.2, 11.62}};
model = NonlinearModelFit[Data, b1*E^(a1*x), {a1, b1}, x]
y = Normal[model]
R2 = model["RSquared"]
Plot[y, {x, 1, 2.2}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {"Model"}, AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0} ]

Example 2

Fit a power model to the data: (1,1.93),(1.1,1.61),(1.2,2.27),(1.3,3.19),(1.4,3.19),(1.5,3.71),(1.6,4.29),(1.7,4.95),(1.8,6.07),(1.9,7.48),(2,8.72),(2.1,9.34),(2.2,11.62).

Solution

The power model has the form:

    \[ y_{\mbox{power}}=b_2x^{a_2} \]

This form can be linearized as follows:

    \[ \ln y_{\mbox{power}}=a_2 \ln x+\ln b_2 \]

The data need to be converted to (\ln x_i,\ln y_i); y^* and x^* will be used to designate \ln y and \ln x, respectively. The following Microsoft Excel table shows the raw data and the data after conversion to (x_i^*,y^*_i):
[Excel table: raw data and the converted data (x_i^*, y_i^*)]

The linear regression described above will be used to find the best fit for the model:

    \[ y^*=a^*x^*+b^* \]

with

    \[\begin{split} a^*&=\frac{n\sum_{i=1}^nx_i^*y^*_i-\sum_{i=1}^nx_i^*\sum_{i=1}^ny^*_i}{n\sum_{i=1}^n(x_i^*)^2-\left(\sum_{i=1}^nx_i^*\right)^2}\\ b^*&=\frac{\sum_{i=1}^ny^*_i-a^*\sum_{i=1}^nx_i^*}{n} \end{split} \]

The following Microsoft Excel table is used to calculate the various entries in the above equations:
[Excel table: sums used to evaluate a^* and b^*]

Therefore:

    \[\begin{split} a^*&=\frac{13\times 10.3985-5.7357\times 19.3085}{13\times 3.3013-\left(5.7357\right)^2}=2.4387\\ b^*&=\frac{19.3085-2.4387\times 5.7357}{13}=0.4093 \end{split} \]

These can be used to calculate the coefficients in the original model:

    \[ a_2=a^*=2.4387 \qquad b_2=e^{b^*}=e^{0.4093}=1.5058 \]

Therefore, the best power model based on the least squares of the linearized version has the form:

    \[ y_{\mbox{power}}=1.5058x^{2.4387} \]

The following Microsoft Excel chart shows the calculated trendline in Excel with the same coefficients:
[Excel chart: data with the power trendline]

It is possible to calculate the coefficient of determination for the linearized version of this model; however, it would only describe how well the linearized model fits the transformed data. For the nonlinear model, we will use the coefficient of determination as described above, which requires the following Microsoft Excel table:
[Excel table: values of (y_i - y(x_i))^2 and y_i^2]

In this case, the coefficient of determination can be calculated as:

    \[ R^2=1-\frac{\sum_{i=1}^n\left(y_i-y(x_i)\right)^2}{\sum_{i=1}^n\left(y_i\right)^2}=1-\frac{3.25}{479.59}=0.9932 \]

The NonlinearModelFit built-in function in Mathematica can be used to generate a slightly better model with a higher R^2, as shown in the code below.

Data = {{1, 1.93}, {1.1, 1.61}, {1.2, 2.27}, {1.3, 3.19}, {1.4, 3.19}, {1.5, 3.71}, {1.6, 4.29}, {1.7, 4.95}, {1.8, 6.07}, {1.9, 7.48}, {2, 8.72}, {2.1, 9.34}, {2.2, 11.62}};
model = NonlinearModelFit[Data, b1*x^(a1), {a1, b1}, x]
y = Normal[model]
R2 = model["RSquared"]
Plot[y, {x, 1, 2.2}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {"Model"}, AxesLabel -> {"x", "y"}, AxesOrigin -> {0, 0} ]

The following is the corresponding Mathematica output:

[Mathematica plot: power model vs. data]

Nonlinear Regression

In nonlinear regression, the model function y is a nonlinear function of x and of the parameters a_1, a_2,\cdots,a_m. Given a set of n data points (x_i,y_i) with 1\leq i \leq n, curve fitting starts by assuming a model y(x) with m parameters. The parameters can be obtained by minimizing the sum of the squares of the errors:

    \[ S=\sum_{i=1}^n (y(x_i)-y_i)^2 \]

In order to find the parameters of the model that would minimize S, m equations of the following form are solved:

    \[ \frac{\partial S}{\partial a_k}=\sum_{i=1}^n\left(2\left(y(x_i)-y_i\right)\frac{\partial y}{\partial a_k}\bigg|_{x=x_i}\right)=0 \]

Since y(x) is nonlinear in the coefficients, the m equations formed are also nonlinear and can only be solved using a nonlinear equation solver such as the Newton-Raphson method described before. We are going to rely on the built-in NonlinearModelFit function in Mathematica, which performs the required calculations.
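
For readers interested in the mechanics behind such a solver, the following supplementary Python sketch implements the Gauss-Newton iteration (one standard method; Mathematica's internal algorithm may differ) for the two-parameter power model y = K x^n, starting from the linearized fit and demonstrated on synthetic data with a known answer. The helper name fit_power is our own; the data must have x > 0 and y > 0 for the log-log starting guess:

```python
import math

def fit_power(data, iters=20):
    """Fit y = K * x**n: log-log linear fit for a starting guess, then Gauss-Newton."""
    logs = [(math.log(x), math.log(y)) for x, y in data]
    m = len(logs)
    su = sum(u for u, _ in logs)
    sv = sum(v for _, v in logs)
    suv = sum(u * v for u, v in logs)
    suu = sum(u * u for u, _ in logs)
    n = (m * suv - su * sv) / (m * suu - su ** 2)
    K = math.exp((sv - n * su) / m)
    for _ in range(iters):
        # Residuals r_i = K*x_i**n - y_i and Jacobian columns (dr/dK, dr/dn)
        J = [(x ** n, K * x ** n * math.log(x)) for x, _ in data]
        r = [K * x ** n - y for x, y in data]
        # Normal equations (J^T J) d = -J^T r, solved by Cramer's rule for the 2x2 case
        a11 = sum(jk * jk for jk, _ in J)
        a12 = sum(jk * jn for jk, jn in J)
        a22 = sum(jn * jn for _, jn in J)
        b1 = -sum(jk * ri for (jk, _), ri in zip(J, r))
        b2 = -sum(jn * ri for (_, jn), ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        K += (b1 * a22 - a12 * b2) / det
        n += (a11 * b2 - a12 * b1) / det
    return K, n

# Synthetic check: data generated from y = 2 * x**1.5, so the exact minimizer is known
data = [(x, 2 * x ** 1.5) for x in (0.5, 1.0, 1.5, 2.0, 2.5)]
K, n = fit_power(data)
print(K, n)  # ≈ 2.0 and 1.5
```

Starting from the linearized fit keeps the iteration close to the minimum; for noisy data a damped step (as in the Levenberg-Marquardt method) is more robust than plain Gauss-Newton.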

Example 1

The following data describes the yield strength \sigma_y and the plastic strain \varepsilon_p obtained from uniaxial experiments where the first variable is the plastic strain and the second variable is the corresponding yield strength: (0.0001,204.18),(0.0003,226.4),(0.0015,270.35),(0.0025,276.86),(0.004,296.86),(0.005,299.3),(0.015,334.65),(0.025,346.56),(0.04,371.81),(0.05,377.45),(0.1,398.01),(0.15,422.45),(0.2,434.42).
Use the NonlinearModelFit function in Mathematica to find the best fit for the data assuming the relationship follows the form:

    \[ \sigma_y=K\varepsilon_p^n \]

Solution

While this model can be linearized as described in the previous section, NonlinearModelFit will be used here. The array of data is input as a list in the variable Data, the NonlinearModelFit function is applied, and the output is stored in the variable model. The equation of the best-fit function is obtained using the built-in function Normal[model], and R^2 can be evaluated as well, as shown in the code below. The values of K and n that describe the best fit are K=506.64 and n=0.0988. Therefore, the best-fit model is:

    \[ \sigma_y=506.64\varepsilon_p^{0.0988} \]

The following is the Mathematica code used along with the output

Data = {{0.0001, 204.18}, {0.0003, 226.4}, {0.0015, 270.35}, {0.0025, 276.86}, {0.004, 296.86}, {0.005, 299.3}, {0.015, 334.65}, {0.025,346.56}, {0.04, 371.81}, {0.05, 377.45}, {0.1, 398.01}, {0.15, 422.45}, {0.2, 434.42}};
model = NonlinearModelFit[Data, Kk*eps^(n), {Kk, n}, eps]
y = Normal[model]
Print["R^2=",R2 = model["RSquared"]]
Plot[y, {eps, 0, 0.21}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {"Model"}, AxesLabel -> {"Eps_p", "Sigma_y"}, AxesOrigin -> {0, 0}]
[Mathematica plot: model vs. data]

Example 2

Exponential and logarithmic models offer a wide range of model functions that can be used for various engineering applications. An example is the exponential model that describes an initial rapid increase in the dependent variable y, which then levels off to become asymptotic to an upper limit a_1:

    \[ y(x)=a_1(1-e^{-a_2x}) \]

where a_1 and a_2 are the model parameters. For plastic materials, Armstrong and Frederick proposed this model in 1966 (republished here in 2007) to describe how a quantity called the “Centre of the Yield Surface” changes with increasing plastic strain. If \alpha is the centre of the yield surface and \varepsilon_p is the plastic strain, then the model for \alpha has the form:

    \[ \alpha = C\left(1-e^{-\gamma \varepsilon_p}\right) \]

Find the values of C and \gamma for the best fit to the experimental data ({\varepsilon_p}_i,\alpha_i) given by: (0.0001,2.067),(0.005,3.96),(0.0075,6.22),(0.008,8.07),(0.009,9.32),(0.01,12.02),(0.02,13.2),(0.03,15.22),(0.04,15.51),(0.05,15.44),(0.06,15.1),(0.075,15.76),(0.1,15.11),(0.17,15.53),(0.2,15.26)

Solution

Unlike the model in the previous example, this model cannot be linearized, so the NonlinearModelFit built-in function in Mathematica will be used. The array of data is input as a list in the variable Data, the NonlinearModelFit function is applied, and the output is stored in the variable model. The equation of the best-fit function is obtained using the built-in function Normal[model], and R^2 can be evaluated as well, as shown in the code below. The values of C and \gamma that describe the best fit are C=15.5624 and \gamma=94.3836. Therefore, the best-fit model is:

    \[ \alpha = 15.5624\left(1-e^{-94.3836 \varepsilon_p}\right) \]

The following is the Mathematica code used along with the output

Data = {{0.0001, 2.067}, {0.005, 3.96}, {0.0075, 6.22}, {0.008,8.07}, {0.009, 9.32}, {0.01, 12.02}, {0.02, 13.2}, {0.03,15.22}, {0.04, 15.51}, {0.05, 15.44}, {0.06, 15.1}, {0.075,15.76}, {0.1, 15.11}, {0.17, 15.53}, {0.2, 15.26}};
model = NonlinearModelFit[Data, Cc (1 - E^(-gamma*eps)), {Cc, gamma}, eps]
y = Normal[model]
Print["R^2=", R2 = model["RSquared"]]
Plot[y, {eps, 0, 0.21}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {"Model"}, AxesLabel -> {"Eps_p", "Alpha"}, AxesOrigin -> {0, 0}]

[Mathematica plot: model vs. data]

Problems

  1. The following data provides the number of trucks with a particular weight at each hour of the day on one of the busy US highways. The data is provided as (h_i,N_i), where h_i is the hour and N_i is the number of trucks: (1,16),(2,12),(3,14),(4,8),(5,24),(6,92),(7,311),(8,243),(9,558),(10,644),(11,768),(12,838),(13,911),(14,897),(15,853),(16,860),(17,853),(18,875),(19,673),(20,378),(21,207),(22,142),(23,108),(24,62). The distribution follows a Gaussian model of the form:

        \[ N=a_1 e^{\frac{-(x-a_2)^2}{a_3}} \]

    • Find the parameters a_1, a_2, and a_3 that would produce the best fit to the data.
    • On the same plot draw the model prediction vs. the data.
    • Calculate the R^2 of the model.
    • Calculate the sum of squares of the difference between the model prediction and the data.
  2. In 2014, Maclean's published the following table showing the average tuition in Canadian provinces for the period from 2010 to 2014.
    [Table: average tuition by province, 2010 to 2014]

    • Find the best linear model for each province that would predict tuition each year and its associated R^2. Use calendar years to represent the x axis data.
    • Based on the linear fit, which province has the highest tuition increase per year and which one has the lowest?
  3. The yield strength of steel decreases with elevated temperatures. The following experimental data provides the value of the temperature in 1000 degrees Celsius and the corresponding reduction factor in the yield strength. (0.05,1), (0.1,1), (0.2,0.95), (0.3,0.92), (0.4,0.82), (0.5,0.6), (0.6,0.4), (0.7,0.15), (0.8,0.15), (0.9,0.1), (1.0,0.1). Find the parameters a_1, a_2, a_3, a_4, a_5 in the following model that would provide the best fit to the data:

        \[ R=a_1+\frac{a_2}{1+a_3 e^{\left(a_4x+a_5\right)}} \]

    Also, on the same plot draw the model prediction vs. the data. Based on the model, what is the reduction in the yield strength at temperatures of 150, 450, and 750 Degrees Celsius?

  4. In order to noninvasively assess the stability of medical implants, Westover et al. devised a technique by which the implant is struck using a hand-piece while measuring the acceleration response. In order to assess the stability, the acceleration (a) data as a function of time (t) has to be fit to a model of the form:

        \[\begin{split} y(t)&=a_1\sin{(a_2t+\phi_1)}+a_3e^{-a_4a_5t}\sin{\left(\sqrt{\left(1-a_4^2\right)}a_5t+\phi_2\right)}\\ a(t)&=\frac{\mathrm{d}^2y(t)}{\mathrm{d}t^2} \end{split} \]

    where y(t) is the position, a(t) is the acceleration, a_1, a_2, a_3, a_4, a_5, \phi_1, and \phi_2 are the model parameters. Initial guesses for the model parameters can be taken as: 1, 1500, 1, 0, 50000, 0.1, 0.1 respectively.
    The following text file OneStrike.txt contains the acceleration data collected by striking one of the implants. Download the data file and use the NonlinearModelFit built-in function to calculate the parameters for the best-fit model along with the value of R^2. Plot the data overlapping the model.

    Reference: L. Westover, W. Hodgetts, G. Faulkner, D. Raboud. 2015. Advanced System for Implant Stability Testing (ASIST). OSSEO 2015 5th International Congress on Bone Conduction Hearing and Related Technologies, Fairmont Chateau Lake Louise, Canada, May 20 – 23, 2015.

  5. Find the parabola y=a_0+a_1x+a_2x^2 that best fits the following data:
    (2.2, 6.38), (1.1, -1.21), (-0.11, -4.18), (-1.1, -3.63), (-2.2, 1.65)
  6. Find the exponential model that best fits the following data:
    data = {{10, 5.7}, {20, 18.2}, {30, 44.3}, {40, 74.5}, {50, 130.2}}
  7. Find the power model that best fits the following data:
    data = {{10, 5.7}, {20, 18.2}, {30, 44.3}, {40, 74.5}, {50, 130.2}}
  8. Linear regression can be extended when the data depends on more than one variable (called multiple linear regression). For example, if the dependent variable is z while the independent variables are x and y, then the data to be fitted have the form (x_i,y_i,z_i). In this case, the best-fit model is represented by a plane of the form z=a_0+a_1x+a_2y. Using least squares assumption, find expressions for finding the coefficients of the model: a_0, a_1, and a_2.
