Displacement and Strain: The Deformation Gradient
Definitions:
For a general 3D deformation of an object, local strains can be measured by comparing the lengths of material segments joining two neighbouring points before and after deformation. Thus, we are interested in tracking lines or curves in the reference and deformed configurations. The restrictions on the possible position functions (in particular, differentiability) allow such comparisons and calculations as follows:
We first consider a material curve inside the reference and deformed configurations, defined by a parameter $s$ (Figure 1). The positions along the curve in the reference and deformed configurations are given by $X = X(s)$ and $x = x(X(s), t)$ respectively, where $t$ refers to time.
The tangents to the material curve at a point given by a particular value of the parameter $s$ in the reference and deformed configurations are denoted $\frac{\mathrm{d}X}{\mathrm{d}s}$ and $\frac{\mathrm{d}x}{\mathrm{d}s}$ respectively (Figure 1) and are related by the chain rule:
$$\frac{\mathrm{d}x}{\mathrm{d}s} = \frac{\partial x}{\partial X}\,\frac{\mathrm{d}X}{\mathrm{d}s}$$
The matrix $F$:
$$F = \frac{\partial x}{\partial X}, \qquad F_{ij} = \frac{\partial x_i}{\partial X_j}$$
is termed the “Deformation Gradient” and contains all the required local information about the changes in lengths, volumes, and angles due to the deformation, as follows:
- A tangent vector $\mathrm{d}X$ in the reference configuration is deformed into the tangent vector $\mathrm{d}x$ in the deformed configuration. The two vectors are related using the deformation gradient tensor $F$: $\mathrm{d}x = F\,\mathrm{d}X$.
- The ratio of the local volume $\mathrm{d}v$ in the deformed configuration to the local volume $\mathrm{d}V$ in the reference configuration is equal to the determinant of $F$: $\frac{\mathrm{d}v}{\mathrm{d}V} = \det F$.
- If an infinitesimal area vector is written $n\,\mathrm{d}a$ in the deformed configuration and $N\,\mathrm{d}A$ in the reference configuration, with $\mathrm{d}a$ and $\mathrm{d}A$ being the magnitudes of the areas while $n$ and $N$ are the unit vectors perpendicular to the corresponding areas, then, using Nanson’s formula shown in the section on the determinant of matrices, the relationship between them is given by: $n\,\mathrm{d}a = \det(F)\,F^{-T}N\,\mathrm{d}A$.
- An isochoric deformation is a deformation that preserves local volume, i.e., $\det F = 1$ at every point.
- A deformation is called homogeneous if $F$ is constant at every point, i.e., $F$ is not a function of position. Otherwise, the deformation is called non-homogeneous.
- The physical restrictions on possible deformations force $\det F$ to be always positive. (why?)
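These properties can be verified numerically. The following is a minimal numpy sketch (not one of the interactive tools in this section); the homogeneous deformation $x = F_0X$ and all numerical values are arbitrary choices for illustration:

```python
import numpy as np

# A hypothetical homogeneous deformation x = F0 @ X (F0 chosen only for illustration).
F0 = np.array([[1.2, 0.3, 0.0],
               [0.1, 0.9, 0.0],
               [0.0, 0.0, 1.1]])

def x(X):
    """Deformed position of the material point X."""
    return F0 @ X

def deformation_gradient(X, h=1e-6):
    """Numerical deformation gradient F_ij = dx_i/dX_j via central differences."""
    F = np.zeros((3, 3))
    for j in range(3):
        dX = np.zeros(3)
        dX[j] = h
        F[:, j] = (x(X + dX) - x(X - dX)) / (2.0 * h)
    return F

X0 = np.array([0.5, 0.5, 0.5])
F = deformation_gradient(X0)          # equals F0 for a homogeneous deformation

# Tangent vectors map as dx = F dX.
dX = np.array([1e-3, 0.0, 0.0])
print(np.allclose(x(X0 + dX) - x(X0), F @ dX))     # True

# Local volume ratio dv/dV = det F, which must be positive.
print(np.linalg.det(F) > 0)                        # True

# Nanson's formula: n da = det(F) F^{-T} N dA for an area element.
N, dA = np.array([0.0, 0.0, 1.0]), 1.0
print(np.linalg.det(F) * np.linalg.inv(F).T @ N * dA)
```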
The Polar Decomposition of the Deformation Gradient:
One of the general results of linear algebra is the Polar Decomposition of matrices, which states the following: any real square matrix $M$ can be decomposed into the product of two matrices $M = QS$ such that $Q$ is an orthogonal matrix and $S$ is a positive semi-definite symmetric matrix. In particular, if $\det F > 0$, then we can find a rotation matrix $R$ and two positive-definite symmetric matrices $U$ and $V$ such that:
$$F = RU = VR$$
For continuum mechanics applications, $U$ and $V$ are termed the right and left stretch tensors respectively. The first equality is termed the “right” polar decomposition, while the second is called the “left” polar decomposition. The proof of the above statement presented here is based on the following two books:
- Ciarlet, P.G. (2004). Mathematical Elasticity. Volume 1: Three-Dimensional Elasticity. Elsevier Ltd.
- Chadwick, P. (1999). Continuum Mechanics. Concise Theory and Problems. Dover Publications Inc.
and it will be presented using several statements.
Statement 1:
Let $F \in \mathbb{R}^{3\times 3}$ be such that $\det F \neq 0$. Then, the matrices $F^TF$ and $FF^T$ are positive definite symmetric matrices.
Proof:
The symmetry of $F^TF$ and $FF^T$ is straightforward, as follows:
$$(F^TF)^T = F^T(F^T)^T = F^TF, \qquad (FF^T)^T = (F^T)^TF^T = FF^T$$
Also, since $\det F \neq 0$, for every $x \neq 0$ (i.e., $x \in \mathbb{R}^3$, $x \neq 0$) we have $Fx \neq 0$. Therefore, $x\cdot(F^TFx) = (Fx)\cdot(Fx) = \|Fx\|^2 > 0$. Therefore, $F^TF$ is positive definite.
Similarly, it can be shown that $FF^T$ is positive definite.
Statement 2:
Let $A \in \mathbb{R}^{3\times 3}$ be such that $A$ is a positive definite symmetric matrix. Show that there exists a unique square-root positive-definite symmetric matrix for $A$ (denoted by $\sqrt{A}$) such that $\sqrt{A}\sqrt{A} = A$.
Proof:
The existence of a square root is straightforward. As per the results in the symmetric tensors section, we can choose a coordinate system in which $A$ is diagonal with three positive real numbers $\lambda_1$, $\lambda_2$, and $\lambda_3$ on the diagonal:
$$A = \begin{pmatrix}\lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\end{pmatrix}$$
By setting
$$\sqrt{A} = \begin{pmatrix}\sqrt{\lambda_1} & 0 & 0\\ 0 & \sqrt{\lambda_2} & 0\\ 0 & 0 & \sqrt{\lambda_3}\end{pmatrix}$$
we obtain $\sqrt{A}\sqrt{A} = A$, and it is straightforward to show that this relationship holds in any coordinate system.
The uniqueness of $\sqrt{A}$ can be shown using contradiction by assuming that there are two different positive definite square roots $S_1$ and $S_2$ such that:
$$S_1S_1 = S_2S_2 = A$$
$A$ is a positive-definite symmetric matrix with a positive eigenvalue $\lambda$. Let $x$ be the eigenvector associated with $\lambda$; therefore:
$$(S_1 + \sqrt{\lambda}I)(S_1 - \sqrt{\lambda}I)x = (S_1S_1 - \lambda I)x = (A - \lambda I)x = 0$$
Since, by assumption, $S_1$ is positive definite, $\sqrt{\lambda}$ is an eigenvalue of $S_1$ associated with the eigenvector $x$; otherwise, the nonzero vector $z = (S_1 - \sqrt{\lambda}I)x$ would satisfy $S_1z = -\sqrt{\lambda}z$, i.e., $-\sqrt{\lambda}$ would be an eigenvalue of $S_1$, which contradicts that it is positive definite. This applies to $S_2$ as well; therefore, $S_1$ and $S_2$ are positive definite symmetric matrices that share the same eigenvalues and eigenvectors; therefore, they are identical (why?).
Notice that a positive definite symmetric matrix can have various “square roots”; however, there is only one unique square root that is also a positive definite symmetric matrix. For example, consider the matrix:
$$A = \begin{pmatrix}4 & 0\\ 0 & 9\end{pmatrix}$$
The unique positive definite symmetric square root of $A$ is:
$$\sqrt{A} = \begin{pmatrix}2 & 0\\ 0 & 3\end{pmatrix}$$
However, the following symmetric matrix is also a square root of $A$; nevertheless, it is NOT positive-definite:
$$B = \begin{pmatrix}-2 & 0\\ 0 & 3\end{pmatrix}$$
Verify that $BB = A$ and that $B$ is NOT positive-definite.
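The spectral construction used in the existence proof translates directly into code. The following is a small numpy sketch (the helper name `sqrtm_spd` is ours, not a library function):

```python
import numpy as np

def sqrtm_spd(A):
    """Unique symmetric positive definite square root of an SPD matrix A,
    built from the spectral decomposition A = Q diag(lam) Q^T."""
    lam, Q = np.linalg.eigh(A)                    # eigenvalues and eigenvectors
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

A = np.array([[4.0, 0.0],
              [0.0, 9.0]])
S = sqrtm_spd(A)
print(S)                                          # [[2. 0.] [0. 3.]]
print(np.allclose(S @ S, A))                      # True

# A symmetric square root of A that is NOT positive definite:
B = np.array([[-2.0, 0.0],
              [ 0.0, 3.0]])
print(np.allclose(B @ B, A))                      # True
print(np.linalg.eigvalsh(B))                      # [-2. 3.]: one negative eigenvalue
```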
Statement 3:
Let $F \in \mathbb{R}^{3\times 3}$ be such that $\det F > 0$. Show that $F$ can be uniquely decomposed into:
$$F = RU = VR$$
where $U$ and $V$ are positive-definite symmetric matrices while $R$ is a rotation matrix.
Proof:
From Statements 1 and 2 above, $U = \sqrt{F^TF}$ and $V = \sqrt{FF^T}$ are unique positive definite symmetric matrices; therefore, they are invertible (why?). Let
$$R_1 = FU^{-1}, \qquad R_2 = V^{-1}F$$
Both $R_1$ and $R_2$ are invertible (why?). In addition:
$$R_1^TR_1 = U^{-T}F^TFU^{-1} = U^{-1}UUU^{-1} = I$$
and
$$R_2R_2^T = V^{-1}FF^TV^{-T} = V^{-1}VVV^{-1} = I$$
We also have $\det F > 0$ and $\det U > 0$; therefore, $\det R_1 = \det F/\det U > 0$. Since $R_1^TR_1 = I$, then $\det R_1 = \pm 1$; therefore, $\det R_1 = 1$. Similarly, $\det R_2 = 1$. Therefore, both $R_1$ and $R_2$ are rotation matrices.
Since $U$ and $V$ are invertible and unique, $R_1$ and $R_2$ are also unique. We can show this by contradiction, by assuming that there is another rotation $\tilde{R}$ such that:
$$F = R_1U = \tilde{R}U$$
Therefore:
$$\tilde{R} = FU^{-1} = R_1$$
The same argument applies for $R_2$.
Finally, it is required to show that $R_1 = R_2$; indeed:
$$F = R_1U = R_1UR_1^TR_1 = (R_1UR_1^T)R_1$$
However, $R_1UR_1^T$ is a positive definite symmetric matrix (why?), and since the decomposition $F = VR_2$ is unique, therefore:
$$V = R_1UR_1^T, \qquad R_2 = R_1 = R$$
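The constructive proof above doubles as an algorithm: compute $U = \sqrt{F^TF}$, then $R = FU^{-1}$ and $V = RUR^T$. A minimal numpy sketch, assuming $\det F > 0$ and using an arbitrary illustrative $F$:

```python
import numpy as np

def sqrtm_spd(A):
    """Symmetric positive definite square root via the spectral decomposition."""
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

def polar_decomposition(F):
    """Right and left polar decompositions F = R U = V R (requires det F > 0)."""
    assert np.linalg.det(F) > 0, "det F must be positive"
    U = sqrtm_spd(F.T @ F)            # right stretch tensor
    R = F @ np.linalg.inv(U)          # rotation
    V = R @ U @ R.T                   # left stretch tensor
    return R, U, V

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])
R, U, V = polar_decomposition(F)
print(np.allclose(F, R @ U))                          # True: F = R U
print(np.allclose(F, V @ R))                          # True: F = V R
print(np.allclose(R.T @ R, np.eye(3)),
      np.isclose(np.linalg.det(R), 1.0))              # True True: R is a rotation
```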
Statement 4:
$U$ and $V$ share the same eigenvalues, and their eigenvectors differ by the rotation $R$.
Proof:
Assuming that $\lambda$ is an eigenvalue of $U$ with the corresponding eigenvector $u$, then:
$$V(Ru) = (RUR^T)(Ru) = RUu = \lambda(Ru)$$
Therefore, $\lambda$ is an eigenvalue of $V$ while $Ru$ is the associated eigenvector.
Similarly, if $\lambda$ is an eigenvalue of $V$ with the corresponding eigenvector $v$, then:
$$U(R^Tv) = (R^TVR)(R^Tv) = R^TVv = \lambda(R^Tv)$$
Therefore, $\lambda$ is an eigenvalue of $U$ while $R^Tv$ is the associated eigenvector.
Assuming that $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the eigenvalues of $U$ and $V$, with $u_1$, $u_2$, and $u_3$ being the associated eigenvectors of $U$, while $v_1$, $v_2$, and $v_3$ are the associated eigenvectors of $V$, then $U$ and $V$ admit the representations (see the section about the representation of symmetric matrices):
$$U = \sum_{i=1}^3 \lambda_i\,u_i\otimes u_i, \qquad V = \sum_{i=1}^3 \lambda_i\,v_i\otimes v_i$$
and:
$$v_i = Ru_i$$
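This relationship is easy to check numerically. Continuing with the same illustrative $F$ (and recomputing $R$, $U$, $V$ so the sketch is self-contained):

```python
import numpy as np

def sqrtm_spd(A):
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])
U = sqrtm_spd(F.T @ F)
R = F @ np.linalg.inv(U)
V = R @ U @ R.T

lamU, eU = np.linalg.eigh(U)          # eigenvalues and eigenvectors of U (ascending)
lamV, eV = np.linalg.eigh(V)
print(np.allclose(lamU, lamV))        # True: U and V share the same eigenvalues

# Each eigenvector of V equals R times the matching eigenvector of U,
# possibly up to a sign flip of the computed eigenvector.
for i in range(3):
    print(np.isclose(abs(eV[:, i] @ (R @ eU[:, i])), 1.0))   # True
```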
Physical Interpretation:
The unique decomposition of the deformation gradient $F$ into a rotation and a stretch indicates that any smooth deformation can be decomposed at any point inside the continuum body into a unique stretch described by $U$ followed by a unique rotation described by $R$. For example, a circle representing the directions of all the unit vectors in $\mathbb{R}^2$ is deformed into an ellipse under the action of $F$ where $\det F > 0$. The decomposition $F = RU$ is schematically shown by first stretching the circle into an ellipse whose major axes are the eigenvectors of $U$, followed by a rotation of the ellipse through the matrix $R$. The decomposition $F = VR$ represents rotating the circle through the matrix $R$ and then stretching it into an ellipse whose major axes are the eigenvectors of $V$. Notice that the eigenvectors of $V$ and the eigenvectors of $U$ differ by a mere rotation.

In the following tool, change the values of the four components of the matrix $F$. The code first ensures that $\det F > 0$. Once this condition is satisfied, the tool draws the two steps of the right polar decomposition in the first row and then the steps of the left polar decomposition in the second row. In the first image of the first row, the arrows indicate the eigenvectors of $U$. The arrows deform but keep their directions in the second image of the first row (the action of $U$). Then, after applying the rotation $R$, the arrows rotate in the third image of the first row. In the second image of the second row, the arrows are rotated with the matrix $R$ without any change in length. Then, they are deformed using the matrix $V$ in the third image of the second row.
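The circle-to-ellipse picture drawn by the tool can be reproduced with a few lines of numpy (a 2D sketch with an arbitrary $F$; the plotting itself is omitted):

```python
import numpy as np

F = np.array([[1.5, 0.4],
              [0.2, 0.8]])
assert np.linalg.det(F) > 0

# 2D right polar decomposition: U = sqrt(F^T F), R = F U^{-1}.
lam2, Q = np.linalg.eigh(F.T @ F)
U = Q @ np.diag(np.sqrt(lam2)) @ Q.T
R = F @ np.linalg.inv(U)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # unit circle, shape (2, 200)
ellipse = F @ circle                                  # its image under F

# The semi-axis lengths of the ellipse are the principal stretches (eigenvalues
# of U and V), and its principal directions are the eigenvectors of V = R U R^T,
# i.e., R times the eigenvectors of U.
print(np.sqrt(lam2))      # principal stretches
print(R @ Q)              # columns: eigenvectors of V
```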
The Singular-Value Decomposition of the Deformation Gradient:
One of the general results of linear algebra is the Singular-Value Decomposition of real or complex matrices. When the statement is applied to a matrix $F \in \mathbb{R}^{3\times 3}$ with $\det F > 0$, it states that $F$ can be decomposed as follows:
$$F = P\Sigma Q^T$$
where $P$ and $Q$ are rotation matrices while the matrix $\Sigma$ is a diagonal matrix with positive diagonal entries. The singular-value decomposition follows immediately from the previous section on the polar decomposition of the deformation gradient. By setting $U = \sqrt{F^TF}$ and realizing that $U$ is a positive definite symmetric matrix, then, using the spectral form of symmetric tensors, $U$ can be decomposed such that $U = Q\Sigma Q^T$, where $\Sigma$ is a diagonal matrix whose diagonal entries are positive and $Q$ is a rotation matrix whose columns are the normalized eigenvectors of $U$ (the rows of $Q^T$ are the normalized eigenvectors of $U$). In particular, the diagonal entries of $\Sigma$ are the square roots of the eigenvalues of the positive definite symmetric matrix $F^TF$.
Therefore:
$$F = RU = RQ\Sigma Q^T$$
By setting $P = RQ$ we get the required result:
$$F = P\Sigma Q^T$$
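Numerically, the singular-value decomposition is available directly, and the polar decomposition can be recovered from it. A sketch using numpy (note that np.linalg.svd returns orthogonal factors that are not necessarily proper rotations individually, but for $\det F > 0$ the products below are):

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])

P, sigma, Qt = np.linalg.svd(F)       # F = P @ diag(sigma) @ Qt
print(np.allclose(F, P @ np.diag(sigma) @ Qt))          # True

# Recovering the polar decomposition from the SVD:
R = P @ Qt                            # rotation (det = +1 since det F > 0)
U = Qt.T @ np.diag(sigma) @ Qt        # right stretch tensor
V = P @ np.diag(sigma) @ P.T          # left stretch tensor
print(np.allclose(F, R @ U), np.allclose(F, V @ R))     # True True
```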
The following tool calculates the polar decomposition and the singular-value decomposition of a matrix $F$. Enter the values for the components of $F$, and the tool calculates all the required matrices after checking that $\det F > 0$.
The Right Cauchy-Green Deformation Tensor:
The tensor $C = F^TF$ is termed the right Cauchy-Green deformation tensor. As shown above, it is a positive definite symmetric matrix; thus, it has three positive real eigenvalues and three perpendicular eigenvectors. It also has a unique positive definite square root $U = \sqrt{C}$ (see Statement 2 above) such that $U$ has the same eigenvectors as $C$, while the eigenvalues of $U$ are the positive square roots of the eigenvalues of $C$. Denote $\lambda_1^2$, $\lambda_2^2$, and $\lambda_3^2$ as the eigenvalues of $C$ with the corresponding eigenvectors $u_1$, $u_2$, and $u_3$; then $C$, $U$, and $F$ admit the representations (see the section about the representation of symmetric matrices):
$$C = \sum_{i=1}^3 \lambda_i^2\,u_i\otimes u_i, \qquad U = \sum_{i=1}^3 \lambda_i\,u_i\otimes u_i, \qquad F = RU = \sum_{i=1}^3 \lambda_i\,(Ru_i)\otimes u_i = \sum_{i=1}^3 \lambda_i\,v_i\otimes u_i$$
It is worth noting that the last expression for $F$ is equivalent to the singular-value decomposition of $F$ described above. The singular-value decomposition $F = P\Sigma Q^T$, where $Q$ is a rotation matrix whose columns are the eigenvectors of $U$ and $P = RQ$, is more convenient for component calculations, while the last expression with the tensor product is much more useful for formula manipulation.
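Both forms can be compared numerically. A short numpy sketch with the same illustrative $F$ used earlier:

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])

C = F.T @ F                           # right Cauchy-Green deformation tensor
lam2, u = np.linalg.eigh(C)           # eigenvalues lam_i^2 and eigenvectors u_i
lam = np.sqrt(lam2)                   # principal stretches lam_i

# U = sum_i lam_i u_i (x) u_i satisfies U U = C.
U = sum(lam[i] * np.outer(u[:, i], u[:, i]) for i in range(3))
print(np.allclose(U @ U, C))          # True

# F = sum_i lam_i v_i (x) u_i with v_i = R u_i.
R = F @ np.linalg.inv(U)
Fsum = sum(lam[i] * np.outer(R @ u[:, i], u[:, i]) for i in range(3))
print(np.allclose(Fsum, F))           # True
```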
The Left Cauchy-Green Deformation Tensor:
The tensor $B = FF^T$ is termed the left Cauchy-Green deformation tensor. As shown above, it is a positive definite symmetric matrix; thus, it has three positive real eigenvalues and three perpendicular eigenvectors. It also has a unique positive definite square root $V = \sqrt{B}$ (see Statement 2 above) such that $V$ has the same eigenvectors as $B$, while the eigenvalues of $V$ are the positive square roots of the eigenvalues of $B$. From Statement 2 and Statement 4 above, the eigenvalues of $B$ and $C$ are the same while the eigenvectors differ by the rotation $R$ (why?). Denote $\lambda_1^2$, $\lambda_2^2$, and $\lambda_3^2$ as the eigenvalues of $B$ with the corresponding eigenvectors $v_1$, $v_2$, and $v_3$; then $B$ and $V$ admit the representations (see the section about the representation of symmetric matrices):
$$B = \sum_{i=1}^3 \lambda_i^2\,v_i\otimes v_i, \qquad V = \sum_{i=1}^3 \lambda_i\,v_i\otimes v_i$$
By utilizing the properties of the tensor product, the following alternative representation of $V$ can be obtained:
$$V = RUR^T = \sum_{i=1}^3 \lambda_i\,(Ru_i)\otimes(Ru_i) = \sum_{i=1}^3 \lambda_i\,v_i\otimes v_i$$
Also, since $v_1$, $v_2$, and $v_3$ are orthonormal, then $I = \sum_{i=1}^3 v_i\otimes v_i$. Therefore:
$$R = \left(\sum_{i=1}^3 v_i\otimes v_i\right)R$$
By utilizing the properties of the tensor product:
$$(v_i\otimes v_i)R = v_i\otimes(R^Tv_i) = v_i\otimes u_i$$
Therefore:
$$R = \sum_{i=1}^3 v_i\otimes u_i$$
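Finally, the representations of $B$, $V$, and $R$ can be verified with the same illustrative $F$:

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])

B = F @ F.T                           # left Cauchy-Green deformation tensor
C = F.T @ F                           # right Cauchy-Green deformation tensor
print(np.allclose(np.linalg.eigvalsh(B), np.linalg.eigvalsh(C)))   # same eigenvalues

lam2, v = np.linalg.eigh(B)           # eigenvalues lam_i^2 and eigenvectors v_i
lam = np.sqrt(lam2)
V = sum(lam[i] * np.outer(v[:, i], v[:, i]) for i in range(3))
R = np.linalg.inv(V) @ F              # rotation from the left decomposition F = V R

# R = sum_i v_i (x) u_i with u_i = R^T v_i.
u = np.column_stack([R.T @ v[:, i] for i in range(3)])
Rsum = sum(np.outer(v[:, i], u[:, i]) for i in range(3))
print(np.allclose(Rsum, R))           # True
```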