
Displacement and Strain: The Deformation and the Displacement Gradients

Learning Outcomes

  • Compute the “deformation gradient” and the “displacement gradient” when given a deformation function. Identify that the “deformation gradient” and the “displacement gradient” are fundamental for calculating strain.
  • Compute the small strain matrix and identify that it is the symmetric component of the displacement gradient.

 Definitions

For a general 3D deformation of an object, local strains can be measured by comparing the “length” between two neighbouring points before and after deformation. Thus, we are interested in tracking lines or curves in the reference and deformed configurations. The restrictions on the possible position functions f (especially being differentiable) allow such comparisons and calculations, as follows:

We first consider a material curve in the reference and deformed configurations, defined by a parameter \xi\in\mathbb{R} (Figure 1). The positions along the curve in the reference and deformed configurations are given by X(\xi) and x(\xi,t) respectively, where t refers to time:

    \[X = \begin{pmatrix} X_1(\xi) \\ X_2(\xi) \\ X_3(\xi) \\ \end{pmatrix} \hspace{5mm} x =  \begin{pmatrix} x_1(\xi, t) \\ x_2(\xi, t) \\ x_3(\xi, t) \\ \end{pmatrix}\]

The tangents to the material curves at a point given by a particular value for the parameter \xi in the reference and deformed configurations are denoted N\in\mathbb{R}^3 and n\in\mathbb{R}^3 respectively (Figure 1) and given by:

    \[N = \dfrac{\partial X}{\partial \xi} =  \begin{pmatrix} \dfrac{\partial X_1}{\partial \xi} \\ \dfrac{\partial X_2}{\partial \xi} \\ \dfrac{\partial X_3}{\partial \xi} \\ \end{pmatrix} \hspace{5mm} n =  \dfrac{\partial x}{\partial \xi} = \dfrac{\partial x}{\partial X} \dfrac{\partial X}{\partial \xi} = \begin{pmatrix} \dfrac{\partial x_1}{\partial X_1} & \dfrac{\partial x_1}{\partial X_2} & \dfrac{\partial x_1}{\partial X_3} \\ \dfrac{\partial x_2}{\partial X_1} & \dfrac{\partial x_2}{\partial X_2} & \dfrac{\partial x_2}{\partial X_3} \\ \dfrac{\partial x_3}{\partial X_1} & \dfrac{\partial x_3}{\partial X_2}  & \dfrac{\partial x_3}{\partial X_3}  \\ \end{pmatrix} \begin{pmatrix} \dfrac{\partial X_1}{\partial \xi} \\ \dfrac{\partial X_2}{\partial \xi} \\ \dfrac{\partial X_3}{\partial \xi} \\ \end{pmatrix}\]

The matrix F:

    \[F =  \begin{pmatrix} \dfrac{\partial x_1}{\partial X_1} & \dfrac{\partial x_1}{\partial X_2} & \dfrac{\partial x_1}{\partial X_3} \\ \dfrac{\partial x_2}{\partial X_1} & \dfrac{\partial x_2}{\partial X_2} & \dfrac{\partial x_2}{\partial X_3} \\ \dfrac{\partial x_3}{\partial X_1} & \dfrac{\partial x_3}{\partial X_2}  & \dfrac{\partial x_3}{\partial X_3}  \\ \end{pmatrix}\]

is called the “Deformation Gradient” and contains all the required local information about the changes in lengths, volumes, and angles due to the deformation, as follows (a brief numerical sketch after Figure 1 illustrates these properties):

  • A tangent vector N in the reference configuration is deformed into the tangent vector n. The two vectors are related using the deformation gradient tensor F:

    \[n = FN\]

  • The ratio between the local volume of the deformed configuration to the local volume in the reference configuration is equal to the determinant of F.

  • If an infinitesimal area vector is written (da) n and (dA) N in the deformed and reference configurations respectively, where da and dA in \mathbb{R} are the magnitudes of the areas while n and N are the unit vectors perpendicular to the corresponding areas, then, using Nanson’s formula shown in the section on the determinant of matrices \mathbb{M}^3, the relationship between them is given by:

    \[(da) n = \det(F)(dA)F^{-T}N\]

  • An isochoric deformation is a deformation preserving local volume, i.e., \det(F)=1 at every point.

  • A deformation is called homogeneous if F is constant at every point, i.e., F is not a function of position. Otherwise, the deformation is called non-homogeneous.

  • The physical restrictions on possible deformations force \det(F) to be positive at every point. (why?)

Figure 1. Tangents to material curves in the reference and deformed configurations. Points C and D in the reference configuration correspond to points c and d in the deformed configuration.
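
To make these properties concrete, the following minimal Python/NumPy sketch (an assumed simple-shear example, x_1 = X_1 + 0.5X_2, x_2 = X_2, x_3 = X_3) computes the deformation gradient, maps a tangent vector, and evaluates the volume ratio and Nanson’s formula:

    import numpy as np

    # Assumed example: simple shear x1 = X1 + k*X2, x2 = X2, x3 = X3.
    k = 0.5
    F = np.array([[1.0,   k, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])   # deformation gradient (constant: homogeneous)

    # A tangent vector N in the reference configuration maps to n = F N.
    N = np.array([0.0, 1.0, 0.0])
    print("n =", F @ N)                      # [0.5, 1.0, 0.0]

    # The local volume ratio equals det(F); det(F) = 1 here (isochoric).
    print("det(F) =", np.linalg.det(F))

    # Nanson's formula: (da) n = det(F) (dA) F^{-T} N for area vectors.
    N_area = np.array([1.0, 0.0, 0.0])       # unit normal of a reference area, dA = 1
    print("(da) n =", np.linalg.det(F) * np.linalg.inv(F).T @ N_area)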

The Polar Decomposition of the Deformation Gradient

One of the general results of linear algebra is the Polar Decomposition of matrices, which states the following. Any matrix of real numbers F\in\mathbb{M}^n can be decomposed into the product of two matrices F=QU such that Q\in\mathbb{M}^n is an orthogonal matrix and U\in\mathbb{M}^n is a positive semi-definite symmetric matrix. In particular, if \det(F)>0, then we can find a rotation matrix R\in\mathbb{M}^n and two positive-definite symmetric matrices U\in\mathbb{M}^n and V\in\mathbb{M}^n such that

    \[F = RU = VR\]

For continuum mechanics applications, U and V are termed the right and left stretch tensors respectively. The first equality is termed the “right” polar decomposition while the second is called the “left” polar decomposition. The proof of the above statement presented here follows standard textbook treatments and is developed through several statements.

Statement 1:

Let F\in\mathbb{M}^3 be such that \det(F)>0. Then, the matrices C=F^TF and B=FF^T are positive definite symmetric matrices.

PROOF:

The symmetry of C and B is straightforward as follows:

    \[C^{T} = (F^{T}F)^{T} = F^{T}(F^{T})^{T} = F^{T}F = C \hspace{5mm} B^{T} = (FF^{T})^{T} = (F^{T})^{T}F^{T} = FF^{T} = B\]

Also, since \det(F)>0, F is invertible; therefore, \forall x\in\mathbb{R}^3\setminus\{0\} (i.e., x\neq 0) we have Fx\neq 0. Therefore, \|Fx\|^2=Fx\cdot Fx=x\cdot F^TFx=x\cdot Cx>0 and C is positive definite.

Similarly, it can be shown that B is positive definite.
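
A quick numerical illustration of Statement 1 (the matrix F below is randomly generated and adjusted so that \det(F)>0; an assumed example, not part of the proof):

    import numpy as np

    # Generate a random F and flip one column if needed so that det(F) > 0.
    rng = np.random.default_rng(0)
    F = rng.standard_normal((3, 3))
    if np.linalg.det(F) < 0:
        F[:, 0] *= -1.0

    C = F.T @ F
    B = F @ F.T
    print(np.allclose(C, C.T), np.allclose(B, B.T))   # both symmetric
    print(np.all(np.linalg.eigvalsh(C) > 0))          # C is positive definite
    print(np.all(np.linalg.eigvalsh(B) > 0))          # B is positive definite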

Statement 2:

Let M\in\mathbb{M}^3 be a positive-definite symmetric matrix. Then, there exists a unique positive-definite symmetric square root of M (denoted by M^{1/2}) such that M=M^{1/2}M^{1/2}.

PROOF:

The existence of a square root is straightforward. As per the results in the symmetric tensors section, we can choose a coordinate system in which M is diagonal with three positive real numbers M_1, M_2 and M_3 on the diagonal:

    \[M = \begin{pmatrix} M_1 & 0 & 0 \\ 0 & M_2 & 0 \\ 0 & 0 & M_3 \\ \end{pmatrix}\]

By setting \lambda_1=\sqrt{M_1}, \lambda_2=\sqrt{M_2} and \lambda_3=\sqrt{M_3} then:

    \[M^{1/2} = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \\ \end{pmatrix}\]

and it is straightforward to show that: M=M^{1/2}M^{1/2} in any coordinate system.

The uniqueness of M^{1/2} can be shown using contradiction by assuming that there are two different positive definite square roots U_1 and U_2 such that

    \[M = U_1U_1 = U_2U_2\]

M is a positive-definite symmetric matrix with positive eigenvalues M_1, M_2 and M_3. Let p\in\mathbb{R}^3 be an eigenvector associated with M_1; then, recalling that \lambda_1=\sqrt{M_1}:

    \[Mp = M_1p \Rightarrow U_1U_1p = M_1p = \lambda_1^{2}p \Rightarrow (U_1 + \lambda_1I)(U_1 - \lambda_1I)p = 0\]

Since U_1 is positive definite by assumption, \lambda_1 must be an eigenvalue of U_1 associated with the eigenvector p: if (U_1-\lambda_1I)p\neq 0, then this vector would be an eigenvector of U_1 with the eigenvalue -\lambda_1, which contradicts positive definiteness. The same argument applies to U_2; therefore, U_1 and U_2 are positive-definite symmetric matrices that share the same eigenvalues and eigenvectors and are therefore identical (why?).

Notice that a positive-definite symmetric matrix can have various “square roots”; however, there is only one square root that is also a positive-definite symmetric matrix. For example, consider the matrix:

    \[M = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}\]

The unique positive definite symmetric square root of M is:

    \[M^{1/2} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}\]

However, the following symmetric matrix A is also a square root of M; nevertheless, it is NOT positive definite:

    \[A = \sqrt{2} \begin{pmatrix} 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & \dfrac{1}{\sqrt{2}} \\ \end{pmatrix}\]

Verify that M = AA and A is NOT positive-definite.
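
This verification can be done numerically. The following sketch builds the unique positive-definite square root of the example matrix M via its eigendecomposition and confirms that A is a square root of M that is not positive definite:

    import numpy as np

    M = np.diag([4.0, 4.0, 1.0])

    # Unique positive-definite square root via the eigendecomposition of M.
    w, Q = np.linalg.eigh(M)
    M_half = Q @ np.diag(np.sqrt(w)) @ Q.T
    print(M_half)                            # diag(2, 2, 1)
    print(np.allclose(M_half @ M_half, M))   # True

    # The symmetric (but not positive-definite) square root A from the text.
    A = np.sqrt(2.0) * np.array([[1.0,  1.0, 0.0],
                                 [1.0, -1.0, 0.0],
                                 [0.0,  0.0, 1.0 / np.sqrt(2.0)]])
    print(np.allclose(A @ A, M))             # True: A is a square root of M
    print(np.linalg.eigvalsh(A))             # includes -2: A is NOT positive definite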

Statement 3:

Let F\in\mathbb{M}^3 be such that \det(F)>0. Then, F can be uniquely decomposed into:

    \[F = RU = VR\]

where U=(F^TF)^{1/2} and V=(FF^T)^{1/2} are positive-definite symmetric matrices, while R\in\mathbb{M}^3 is a rotation matrix.

PROOF:

From Statements 1 and 2 above, U and V are unique positive-definite symmetric matrices; therefore, they are invertible (why?). Let

    \[R = FU^{-1} \hspace{5mm} S = V^{-1}F\]

Both R and S are well defined since U and V are invertible (why?). In addition:

    \[RR^{T} = FU^{-1}U^{-1}F^{T} = F(F^{T}F)^{-1}F^{T} = FF^{-1}F^{-T}F^{T} = I\]

and

    \[S^{T}S = F^{T}V^{-1}V^{-1}F = F^{T}(FF^{T})^{-1}F^{T} = F^{T}F^{-T}F^{-1}F = I\]

We also have \det(U)^2=\det(F)\det(F^T) and \det(U)>0; therefore, \det(U)=\det(F), and thus \det(R)=\det(F)\det(U^{-1})=1. Similarly, \det(S)=1. Therefore, both R and S are rotation matrices. Since U and V are invertible and unique, R and S are also unique. We can show this by contradiction, by assuming that:

    \[F = RU = R'U\]

Therefore:

    \[RUU^{-1} = R'UU^{-1} \Rightarrow R = R'\]

The same argument applies for S.

Finally, it remains to show that S=R. Indeed:

    \[F = RU = VS = IVS = (SS^{T})VS = S(S^{T}VS)\]

However, S^TVS is a positive definite symmetric matrix (why?), and since the decomposition F=RU is unique, therefore,

    \[S = R \hspace{5mm} U = R^{T}VR\]
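
The construction in this proof translates directly into a short numerical sketch (the matrix F below is an assumed example with \det(F)>0):

    import numpy as np

    def spd_sqrt(M):
        # Unique positive-definite square root of an SPD matrix (Statement 2).
        w, Q = np.linalg.eigh(M)
        return Q @ np.diag(np.sqrt(w)) @ Q.T

    F = np.array([[1.2, 0.3, 0.0],
                  [0.1, 0.9, 0.2],
                  [0.0, 0.1, 1.1]])
    assert np.linalg.det(F) > 0

    U = spd_sqrt(F.T @ F)                 # right stretch tensor
    V = spd_sqrt(F @ F.T)                 # left stretch tensor
    R = F @ np.linalg.inv(U)              # rotation

    print(np.allclose(R @ R.T, np.eye(3)))       # R is orthogonal
    print(np.isclose(np.linalg.det(R), 1.0))     # det(R) = 1: a rotation
    print(np.allclose(F, R @ U))                 # F = RU
    print(np.allclose(F, V @ R))                 # F = VR
    print(np.allclose(U, R.T @ V @ R))           # U = R^T V R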

Statement 4:

V and U share the same eigenvalues, and their eigenvectors differ by the rotation R.

PROOF:

Assuming that \lambda_1 is an eigenvalue of V with the corresponding eigenvector n_1, then:

    \[Vn_1 = \lambda_1n_1 \Rightarrow (RUR^{T})n_1 = \lambda_1n_1 \Rightarrow (R^{T}RU)(R^{T}n_1) = \lambda_1(R^{T}n_1) \Rightarrow U(R^{T}n_1) = \lambda_1(R^{T}n_1)\]

Therefore, \lambda_1 is an eigenvalue of U while R^T n_1 is the associated eigenvector.

Similarly, if \lambda_2 is an eigenvalue of U with the corresponding eigenvector N_2, then:

    \[UN_2 = \lambda_2N_2 \Rightarrow (R^{T}VR)N_2 = \lambda_2N_2 \Rightarrow (RR^{T}V)(RN_2) = \lambda_2(RN_2) \Rightarrow V(RN_2) = \lambda_2(RN_2)\]

Therefore, \lambda_2 is an eigenvalue of V while R N_2 is the associated eigenvector. Assuming that \lambda_1, \lambda_2 and \lambda_3 are the eigenvalues of U and V, N_1, N_2, and N_3 are the associated eigenvectors of U, while n_1, n_2, and n_3 are the associated eigenvectors of V, then U and V admit the representations (see the section about the representation of symmetric matrices):

    \[U = \lambda_1(N_1 \otimes N_1) + \lambda_2(N_2 \otimes N_2) + \lambda_3(N_3 \otimes N_3)\]

    \[V = \lambda_1(n_1 \otimes n_1) + \lambda_2(n_2 \otimes n_2) + \lambda_3(n_3 \otimes n_3)\]

and \forall i\in\{1,2,3\}:

    \[n_i = RN_i\]

Physical Interpretation

The unique decomposition of the deformation gradient F into a rotation and a stretch indicates that any smooth deformation can be decomposed at any point inside the continuum body into a unique stretch described by U followed by a unique rotation described by R. For example, a circle representing the directions of all the vectors in \mathbb{R}^2 is deformed into an ellipse under the action of F:\mathbb{R}^2\rightarrow\mathbb{R}^2 where \det(F)>0. The decomposition F=RU is schematically realized by first stretching the circle into an ellipse whose major axes are the eigenvectors of U, followed by a rotation of the ellipse through the matrix R. The decomposition F=VR represents rotating the circle through the matrix R and then stretching it into an ellipse whose major axes are the eigenvectors of V. Notice that the eigenvectors of V and the eigenvectors of U differ by a mere rotation.

In the following tool, change the values of the four components of the matrix F\in\mathbb{M}^2. The code first ensures that \det(F)>0. Once this condition is satisfied, the tool draws the two steps of the right polar decomposition in the first row and then the steps of the left polar decomposition in the second row. In the first image of the first row, the arrows indicate the eigenvectors of U. The arrows deform but keep their directions in the second image of the first row. Then, after applying the rotation, the arrows rotate in the third image of the first row. In the second image of the second row, the arrows are rotated with the matrix R without any change in length. Then, they are deformed using the matrix V in the third image of the second row.
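
A minimal numerical sketch of this picture (assuming an example matrix F\in\mathbb{M}^2 with \det(F)>0) confirms that stretching by U then rotating by R, or rotating by R then stretching by V, lands points of the unit circle on the same ellipse as applying F directly:

    import numpy as np

    F = np.array([[1.5, 0.4],
                  [0.2, 0.8]])
    assert np.linalg.det(F) > 0

    w, Q = np.linalg.eigh(F.T @ F)
    U = Q @ np.diag(np.sqrt(w)) @ Q.T     # right stretch (2x2)
    R = F @ np.linalg.inv(U)              # rotation
    V = R @ U @ R.T                       # left stretch

    # Points on the unit circle deform to the same ellipse along either path.
    theta = np.linspace(0.0, 2.0 * np.pi, 100)
    circle = np.vstack([np.cos(theta), np.sin(theta)])
    print(np.allclose(F @ circle, R @ (U @ circle)))   # stretch, then rotate
    print(np.allclose(F @ circle, V @ (R @ circle)))   # rotate, then stretch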

The Singular-Value Decomposition of the Deformation Gradient

One of the general results of linear algebra is the Singular-Value Decomposition of real or complex matrices. When the statement is applied to a matrix F\in\mathbb{M}^3 with \det(F)>0, it states that

    \[F = PDQ^{T}\]

where P and Q are rotation matrices while the matrix D is a diagonal matrix with positive diagonal entries. The singular-value decomposition follows immediately from the previous section on the polar decomposition of the deformation gradient. By setting F=RU and realizing that U is a positive-definite symmetric matrix, then using the spectral form of symmetric tensors, U can be decomposed as U=QDQ^T where D is a diagonal matrix whose diagonal entries are positive and Q is a rotation matrix whose columns are the normalized eigenvectors of U (the rows of Q^T are the normalized eigenvectors of U). In particular, the diagonal entries of D are the square roots of the eigenvalues of the positive-definite symmetric matrix F^TF. Therefore:

    \[F = RU = RQDQ^{T}\]

By setting P = RQ we get the required result:

    \[F = PDQ^{T}\]

The following tool calculates the polar decomposition and the singular-value decomposition of a matrix F\in\mathbb{M}^3. Enter the values for the components of F and the tool calculates all the required matrices after checking that \det(F)>0.
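
A sketch of what such a tool computes, using scipy.linalg.polar and numpy.linalg.svd (the matrix F below is an assumed example):

    import numpy as np
    from scipy.linalg import polar

    F = np.array([[1.1, 0.2, 0.0],
                  [0.0, 0.9, 0.3],
                  [0.1, 0.0, 1.2]])
    assert np.linalg.det(F) > 0

    R, U = polar(F, side='right')        # F = R U
    _, V = polar(F, side='left')         # F = V R (same rotation R)

    P, d, QT = np.linalg.svd(F)          # F = P diag(d) Q^T
    Q = QT.T

    print(np.allclose(U, Q @ np.diag(d) @ Q.T))   # U = Q D Q^T
    print(np.allclose(P, R @ Q))                  # P = R Q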

The Right Cauchy-Green Deformation Tensor

The tensor C=F^TF is termed the right Cauchy-Green deformation tensor. As shown above, it is a positive-definite symmetric matrix; thus, it has three positive real eigenvalues and three perpendicular eigenvectors. It also has a unique positive-definite symmetric square root U with U^2=C (see Statement 2 above) such that U has the same eigenvectors, while the eigenvalues of U are the positive square roots of the eigenvalues of C. Denote \lambda_1^2, \lambda_2^2, and \lambda_3^2 as the eigenvalues of C with the corresponding eigenvectors N_1, N_2 and N_3; then C, U and F admit the representations (see the section about the representation of symmetric matrices):

    \[C = \lambda_1^{2}(N_1 \otimes N_1) + \lambda_2^{2}(N_2 \otimes N_2) + \lambda_3^{2}(N_3 \otimes N_3)\]

    \[U = \lambda_1(N_1 \otimes N_1) + \lambda_2(N_2 \otimes N_2) + \lambda_3(N_3 \otimes N_3)\]

    \[F = RU = R(\lambda_1(N_1 \otimes N_1) + \lambda_2(N_2 \otimes N_2) + \lambda_3(N_3 \otimes N_3))\]

It is worth noting that the last expression for F is equivalent to the singular-value decomposition of F described above. The singular-value decomposition F=RQDQ^T, where Q is a rotation matrix whose columns are the eigenvectors of U, is more convenient for component calculations, while the last expression in tensor-product form is much more useful for formula manipulation.
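
These representations can be checked numerically, with np.outer playing the role of the tensor product (the matrix F below is an assumed example with \det(F)>0):

    import numpy as np

    F = np.array([[1.2, 0.3, 0.0],
                  [0.1, 0.9, 0.2],
                  [0.0, 0.1, 1.1]])
    C = F.T @ F

    lam2, N = np.linalg.eigh(C)          # eigenvalues lambda_i^2, eigenvectors N_i
    lam = np.sqrt(lam2)

    C_rep = sum(lam2[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
    U_rep = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))

    print(np.allclose(C_rep, C))           # spectral representation of C
    print(np.allclose(U_rep @ U_rep, C))   # U_rep is the square root of C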

The Left Cauchy-Green Deformation Tensor:

The tensor B=FF^T is termed the left Cauchy-Green deformation tensor. As shown above, it is a positive-definite symmetric matrix; thus, it has three positive real eigenvalues and three perpendicular eigenvectors. It also has a unique positive-definite symmetric square root V with V^2=B (see Statement 2 above) such that V has the same eigenvectors, while the eigenvalues of V are the positive square roots of the eigenvalues of B. From Statement 2 and Statement 4 above, the eigenvalues of B=V^2 and C=U^2 are the same while the eigenvectors differ by the rotation R (why?). Denote \lambda_1^2, \lambda_2^2, and \lambda_3^2 as the eigenvalues of B with the corresponding eigenvectors n_1, n_2 and n_3; then B, V and F admit the representations (see the section about the representation of symmetric matrices):

    \[B = \lambda_1^{2}(n_1 \otimes n_1) + \lambda_2^{2}(n_2 \otimes n_2) + \lambda_3^{2}(n_3 \otimes n_3)\]

    \[V = \lambda_1(n_1 \otimes n_1) + \lambda_2(n_2 \otimes n_2) + \lambda_3(n_3 \otimes n_3)\]

    \[F = VR = (\lambda_1(n_1 \otimes n_1) + \lambda_2(n_2 \otimes n_2) + \lambda_3(n_3 \otimes n_3))R\]

By utilizing the properties of the tensor product, the following alternative representation of F can be obtained:

    \[F = \lambda_1(n_1 \otimes N_1) + \lambda_2(n_2 \otimes N_2) + \lambda_3(n_3 \otimes N_3)\]

Also, since N_1, N_2, and N_3 are orthonormal, then \forall a\in\mathbb{R}^3:a=a_1N_1+a_2N_2+a_3N_3. Therefore:

    \[Ra = R(a_1N_1 +a_2N_2 + a_3N_3) = a_1RN_1 + a_2RN_2 + a_3RN_3 = (a \cdot N_1)n_1 + (a \cdot N_2)n_2 + (a \cdot N_3)n_3\]

By utilizing the properties of the tensor product:

    \[Ra = (n_1 \otimes N_1)a + (n_2 \otimes N_2)a + (n_3 \otimes N_3)a\]

Therefore:

    \[R = (n_1 \otimes N_1) + (n_2 \otimes N_2) + (n_3 \otimes N_3)\]
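
A quick numerical check of this representation of R (using the same assumed example matrix F as in the earlier sketches):

    import numpy as np

    F = np.array([[1.2, 0.3, 0.0],
                  [0.1, 0.9, 0.2],
                  [0.0, 0.1, 1.1]])

    lam2, N = np.linalg.eigh(F.T @ F)        # N_i: eigenvectors of C = U^2
    U = N @ np.diag(np.sqrt(lam2)) @ N.T     # right stretch tensor
    R = F @ np.linalg.inv(U)                 # rotation
    n = R @ N                                # n_i = R N_i (Statement 4)

    R_rep = sum(np.outer(n[:, i], N[:, i]) for i in range(3))
    print(np.allclose(R_rep, R))             # True: R = sum_i n_i (x) N_i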

The Displacement Gradient Tensor

Another three-dimensional measure of deformation is the displacement gradient tensor. It appears naturally when we attempt to write the relationship between a tangent vector dX in the reference configuration and its image dx in the deformed configuration such that:

    \[dx = dX + du\]

where du is the “displacement” vector that describes the change in tangent vectors.

As discussed in the deformation gradient section, dx and dX are related as follows:

    \[dx = FdX\]

Therefore, the “displacement” vector du can be written as:

    \[du = dx - dX = (F-I)dX\]

The tensor \nabla u=F-I is denoted the displacement gradient tensor and can be written in component form as follows:

    \[\nabla u =  \begin{pmatrix} \dfrac{\partial u_1}{\partial X_1} & \dfrac{\partial u_1}{\partial X_2} & \dfrac{\partial u_1}{\partial X_3} \\ \dfrac{\partial u_2}{\partial X_1} & \dfrac{\partial u_2}{\partial X_2} & \dfrac{\partial u_2}{\partial X_3} \\ \dfrac{\partial u_3}{\partial X_1} & \dfrac{\partial u_3}{\partial X_2}  & \dfrac{\partial u_3}{\partial X_3}  \\ \end{pmatrix} =  \begin{pmatrix} \dfrac{\partial x_1}{\partial X_1} & \dfrac{\partial x_1}{\partial X_2} & \dfrac{\partial x_1}{\partial X_3} \\ \dfrac{\partial x_2}{\partial X_1} & \dfrac{\partial x_2}{\partial X_2} & \dfrac{\partial x_2}{\partial X_3} \\ \dfrac{\partial x_3}{\partial X_1} & \dfrac{\partial x_3}{\partial X_2}  & \dfrac{\partial x_3}{\partial X_3}  \\ \end{pmatrix} -  \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}\]

As described in the skewsymmetric tensors section, every tensor can be uniquely decomposed into two additive components: a symmetric tensor and a skewsymmetric tensor. By denoting the symmetric part \varepsilon, the “infinitesimal strain tensor”, and the skewsymmetric part W_{inf}, the “infinitesimal rotation tensor”, we can write the relationship between the vectors in the reference and deformed configurations as follows:

    \[dx = dX + \nabla u\,dX = dX + \varepsilon dX + W_{inf}dX\]

In other words, the additive decomposition of the displacement gradient tensor allows us to write the deformed vector dx as the additive combination of three vectors: the original vector dX, plus a “strain” or “stretch” component \varepsilon dX, plus a “rotation” component W_{inf} dX. The stretch component can be calculated using the symmetric tensor \varepsilon while the rotation component can be calculated using the skewsymmetric tensor W_{inf}. Both tensors are physically meaningful only when \nabla u has very small components, \frac{\partial u_i}{\partial X_j}\ll 1 (small displacements).
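
A minimal sketch of this decomposition (assuming a small simple-shear displacement so that the components of \nabla u are indeed small):

    import numpy as np

    k = 1e-3                                  # small parameter: |du_i/dX_j| << 1
    F = np.array([[1.0,   k, 0.0],            # simple shear: x1 = X1 + k X2
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    grad_u = F - np.eye(3)                    # displacement gradient
    eps    = 0.5 * (grad_u + grad_u.T)        # infinitesimal strain (symmetric part)
    W_inf  = 0.5 * (grad_u - grad_u.T)        # infinitesimal rotation (skewsymmetric part)

    dX = np.array([0.0, 1.0, 0.0])
    dx = dX + eps @ dX + W_inf @ dX           # dx = dX + eps dX + W_inf dX
    print(np.allclose(dx, F @ dX))            # True: the decomposition is exact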

Videos

Deformation and Displacement Gradient

Deformation Gradient (Advanced)

Quizzes

Quiz 13 – Deformation Gradient and Displacement Gradient

Solution Guide

Deformation Gradient (Advanced) Quiz
