Linear Maps Between Vector Spaces: Basic Definitions
Learning Outcomes
- Describe matrices as linear functions from one space to another. Describe the matrix as a representation of this function in a particular ONB.
- Calculate the kernel of simple matrices. Describe what the kernel represents. Define an invertible matrix as one whose kernel contains only the zero vector.
- Calculate the components of a matrix in various ONBs
- Compute matrix multiplication
- Differentiate between an invertible (bijective) matrix and a non-invertible matrix using the geometry of a linear map.
- Calculate the determinant of a 2×2 and a 3×3 matrix. Explain that the determinant is a measure of whether a matrix is invertible or not.
- Identify and apply the properties of the determinant function.
- Define eigenvalues and eigenvectors. Identify the properties of the eigenvalues and eigenvectors.
- Eigenvalues and Eigenvectors: Visualize the geometry of eigenvalues and eigenvectors (The online tool helps the visualization of the eigenvector as the vector that does not change direction when transformed by the matrix).
- Compute eigenvalues and eigenvectors.
Linear Maps
A linear map between two vector spaces $U$ and $V$ is a function $T:U\rightarrow V$ such that $\forall x,y\in U$ and $\forall\alpha\in\mathbb{R}$:

$$T(x+y)=T(x)+T(y)\qquad T(\alpha x)=\alpha T(x)$$

Notice that the addition of two linear maps and their multiplication by scalars produce linear maps as well, which implies that the set of linear maps is itself a linear vector space.

It is important not to confuse linear maps with affine maps. For example, a function $f$ defined such that $f(x)=Mx+b$ with $b\neq 0$ is not a linear map but rather an affine map, since in general $f(x+y)\neq f(x)+f(y)$. On the other hand, the function $g$ defined such that $g(x)=Mx$ is indeed a linear map.
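The distinction can be checked numerically. The sketch below uses a hypothetical matrix `M` and offset `b` (not from the original example, which is not shown above) to test additivity for an affine map and a linear map:

```python
import numpy as np

# Hypothetical example: f(x) = M x + b with b != 0 is affine, not linear,
# while g(x) = M x is linear.
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([1.0, 1.0])
f = lambda x: M @ x + b   # affine map
g = lambda x: M @ x       # linear map

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
assert not np.allclose(f(x + y), f(x) + f(y))  # additivity fails for f
assert np.allclose(g(x + y), g(x) + g(y))      # additivity holds for g
```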
Tensors
Linear maps between finite dimensional linear vector spaces are one example of functions that are referred to as tensors. Tensor analysis provides a natural and concise mathematical tool for the analysis of various engineering problems, in particular, solid mechanics. For a detailed description of tensors, refer to the Wolfram article on tensors.
According to Wikipedia, the origin of the word “tensor” dates back to the nineteenth century when it was introduced by Woldemar Voigt. It is likely that the word originated because one of the early linear operators introduced was the symmetric Cauchy stress matrix, which functions to convert area vectors to force vectors. At the time, perhaps the scientists were interested in things that “stretch” and thus the word “tensor”, from the Latin root “tendere”, came about.
Kernel of Linear Maps
Let $T$ be a linear map between two vector spaces $U$ and $V$. Then, the kernel of $T$, denoted $\ker(T)$, is the set of all vectors that are mapped to the zero vector, i.e.:

$$\ker(T)=\{x\in U\mid Tx=0\}$$

For example, consider a linear map $T:\mathbb{R}^2\rightarrow\mathbb{R}^2$. The kernel of this linear map consists of all the vectors in $\mathbb{R}^2$ that are mapped to zero, i.e., the vectors whose components $x_1$ and $x_2$ satisfy:

$$x_1+1.2x_2=0$$

There are infinitely many vectors that satisfy this condition. The set of all those vectors is given as:

$$\ker(T)=\left\{(-1.2t,t)\mid t\in\mathbb{R}\right\}$$
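Since the matrix of this example map is not shown above, the sketch below assumes a hypothetical matrix `M` whose rows are multiples of $(1,1.2)$, so that $Mx=0$ exactly when $x_1+1.2x_2=0$; it then verifies that every vector of the form $(-1.2t,t)$ lies in the kernel:

```python
import numpy as np

# Hypothetical matrix whose kernel is {(-1.2 t, t)}: each row is a
# multiple of (1, 1.2), so M x = 0 exactly when x1 + 1.2 x2 = 0.
M = np.array([[1.0, 1.2],
              [2.0, 2.4]])

for t in (-3.0, 0.5, 7.0):
    x = np.array([-1.2 * t, t])       # a vector in ker(T)
    assert np.allclose(M @ x, 0.0)

# A vector outside the kernel maps to a nonzero vector.
assert not np.allclose(M @ np.array([1.0, 0.0]), 0.0)
```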
Matrix Representation of Linear Maps
The matrix representation of linear maps is the most convenient way to represent linear maps when orthonormal basis sets are chosen for the underlying vector spaces. Consider the linear map $T:\mathbb{R}^n\rightarrow\mathbb{R}^m$. Let $B=\{e_1,e_2,\cdots,e_n\}$ and $B'=\{e'_1,e'_2,\cdots,e'_m\}$ be the orthonormal basis sets for $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Then, because of the linearity of the map, the map is indeed well defined by the components of the vectors $Te_j$. Since $x\in\mathbb{R}^n$, it has $n$ components which can be denoted as follows:

$$\forall x\in\mathbb{R}^n:\quad x=\sum_{j=1}^n x_je_j$$

Linearity then gives $Tx=\sum_{j=1}^n x_j\,Te_j$, so the image of any vector is determined by the $m$ components of each $Te_j$. Collecting the components $M_{ij}=e'_i\cdot Te_j$ in row $i$ and column $j$ produces the matrix $M$ that represents the linear map $T$.
Matrix Representation and Change of Basis
The components of the matrix representation of $T$ defined above depend on the choice of the orthonormal basis sets for each vector space. For the discussion in this section, we will restrict ourselves to square matrices, i.e., linear maps between vector spaces of the same dimension.
Let $T:\mathbb{R}^n\rightarrow\mathbb{R}^n$. Let $B=\{e_1,e_2,\cdots,e_n\}$ be the chosen orthonormal basis set for both vector spaces, let $B'=\{e'_1,e'_2,\cdots,e'_n\}$ be another orthonormal basis set, and let $Q$ be the matrix of coordinate transformation as defined in the Change of Basis section, so that components transform as $x'=Qx$. The matrix representation of $T$ when $B$ is chosen as the basis set is denoted by $M$, and by $M'$ when $B'$ is chosen. The relationship between $M$ and $M'$ can be obtained as follows:

Let $y=Tx$, and let $x'$ and $y'$ denote the representation of $x$ and $y$ when $B'$ is chosen as the coordinate system. Therefore, in each coordinate system we have:

$$y=Mx\qquad y'=M'x'$$

Substituting $x'=Qx$ and $y'=Qy$ into the second equality gives $Qy=M'Qx$, i.e., $y=Q^TM'Qx$. This is true for every $x$, therefore:

$$M=Q^TM'Q\qquad M'=QMQ^T$$
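This relationship is easy to verify numerically. The sketch below picks an arbitrary matrix $M$ and a 45° rotation for $Q$ (rows of $Q$ are the new basis vectors expressed in the old basis), and checks that $M'=QMQ^T$ reproduces $y'=M'x'$:

```python
import numpy as np

# Verify M' = Q M Q^T for a rotated orthonormal basis.
# Q's rows are the new basis vectors, so x' = Q x and y' = Q y.
th = np.pi / 4
Q = np.array([[ np.cos(th), np.sin(th)],
              [-np.sin(th), np.cos(th)]])

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Mp = Q @ M @ Q.T                          # representation in rotated basis

x = np.array([1.0, 2.0])
y = M @ x                                 # y = M x in the original basis
assert np.allclose(Q @ y, Mp @ (Q @ x))   # y' = M' x'
```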
In the following tool, you can choose the components of the matrix $M$ and the vector $x$ along with an angle $\theta$ of the counterclockwise rotation of the coordinate system. The tool then applies the transformation of coordinates from the coordinate system $B=\{e_1,e_2\}$ to $B'=\{e'_1,e'_2\}$, where $e'_1$ and $e'_2$ are vectors rotated by $\theta$ counterclockwise from $e_1$ and $e_2$, and $y=Mx$. On the left-hand side, the tool draws the vector $x$ in blue, the vector $y$ in red, the original coordinate system in black, and the vectors of the new coordinate system in dashed black. At the bottom of the left-hand-side drawing you will find the expressions for $x$ and $y$ using the basis set $B$. On the right-hand side, the tool draws the vector $x'$ in blue, $y'$ in red, and the new coordinate system in black. At the bottom of the right-hand side, you will find the expressions for $x'$ and $y'$ using the basis set $B'$. You can also check out the external tool built in MecsimCalc for changes of basis.
Similarly, the following tool is for three-dimensional Euclidean vector spaces. The new coordinate system is obtained by simultaneously applying counterclockwise rotations $\theta_1$, $\theta_2$, and $\theta_3$ around the first, second, and third coordinate system axes, respectively. The view can be zoomed, rotated, or panned using the mouse scroll wheel, holding down the left mouse button and moving the mouse, and holding down the right mouse button and moving the mouse, respectively.
Tensor Product
Let $u\in\mathbb{R}^m$ and $v\in\mathbb{R}^n$. The tensor product, denoted by $u\otimes v$, is a linear map $u\otimes v:\mathbb{R}^n\rightarrow\mathbb{R}^m$ defined such that $\forall x\in\mathbb{R}^n$:

$$(u\otimes v)x=u(v\cdot x)$$

In simple words, the tensor product defined above utilizes the linear dot product operation and a fixed vector $v$ to produce a real number using the expression $v\cdot x$, which is conveniently a linear function of $x$. The resulting number is then multiplied by the vector $u$.

Obviously, the tensor product of vectors belonging to vector spaces of dimensions higher than 1 is not invertible; in fact, the range of $u\otimes v$ is one dimensional (why?)!

The following are some of the properties of the tensor product that can be deduced directly from the definition and the properties of the dot product operation, $\forall p,q,r\in\mathbb{R}^3$ and $\forall\alpha\in\mathbb{R}$:

$$(p+q)\otimes r=p\otimes r+q\otimes r\qquad p\otimes(q+r)=p\otimes q+p\otimes r\qquad (\alpha p)\otimes q=p\otimes(\alpha q)=\alpha(p\otimes q)$$
Matrix Representation of the Tensor Product
Let $u,v\in\mathbb{R}^3$ and consider the tensor product $u\otimes v$. Consider the orthonormal basis set $B=\{e_1,e_2,e_3\}$. Then, the tensor product can be expressed in component form as follows: $u=\sum_{i=1}^3u_ie_i$ and $v=\sum_{j=1}^3v_je_j$. Now, we have:

$$(u\otimes v)x=u(v\cdot x)=\sum_{i=1}^3\sum_{j=1}^3u_iv_jx_je_i$$

which can be represented in matrix form as follows:

$$u\otimes v=\begin{pmatrix}u_1v_1&u_1v_2&u_1v_3\\u_2v_1&u_2v_2&u_2v_3\\u_3v_1&u_3v_2&u_3v_3\end{pmatrix}$$
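In NumPy this component form is exactly the outer product, which can be used to confirm both the defining identity $(u\otimes v)x=u(v\cdot x)$ and the one-dimensional range noted above:

```python
import numpy as np

# The matrix of u (x) v is the outer product: (u (x) v)_{ij} = u_i v_j.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
x = np.array([0.5, -1.0, 2.0])

T = np.outer(u, v)                    # 3x3 matrix with entries u_i v_j
assert np.allclose(T @ x, u * (v @ x))   # (u (x) v)x = u (v . x)
assert np.linalg.matrix_rank(T) == 1     # the range is one-dimensional
```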
Tensor Product Representation of Linear Maps
A linear map can be decomposed into the sum of multiple tensor products. For example, one can think of a linear map $T:\mathbb{R}^3\rightarrow\mathbb{R}^3$ as the sum of three tensor products built from vectors $\{u,v,w\}$ and $\{x,y,z\}$ in $\mathbb{R}^3$:

$$T=u\otimes x+v\otimes y+w\otimes z$$

There is a direct relationship between the tensor product representation and the matrix representation as follows: let $T:\mathbb{R}^3\rightarrow\mathbb{R}^3$ and let $B=\{e_1,e_2,e_3\}$ be an orthonormal basis set for both vector spaces; then, in terms of the components $T_{ij}$ of the matrix of $T$:

$$T=\sum_{i=1}^3\sum_{j=1}^3T_{ij}\,e_i\otimes e_j$$
Video
See “Tensor Product” Video here
The Set of Linear Maps
In these pages, the notation $\mathbb{B}(\mathbb{R}^n)$ is used to denote the set of linear maps between $\mathbb{R}^n$ and $\mathbb{R}^n$, i.e.:

$$\mathbb{B}(\mathbb{R}^n)=\{T:\mathbb{R}^n\rightarrow\mathbb{R}^n\mid T\text{ is linear}\}$$

Once an orthonormal basis is chosen, every element of $\mathbb{B}(\mathbb{R}^n)$ is represented by an $n\times n$ matrix; the set of such matrices is denoted $\mathbb{M}^n$ and is identified with $\mathbb{B}(\mathbb{R}^n)$.
The Algebraic Structure of The Set of Linear Maps
In addition to being a vector space, the set of linear maps has an algebraic structure arising naturally from the composition operation. Let $M:\mathbb{R}^n\rightarrow\mathbb{R}^m$ and $N:\mathbb{R}^m\rightarrow\mathbb{R}^l$ be linear maps; then the composition map $L=N\circ M:\mathbb{R}^n\rightarrow\mathbb{R}^l$ is also a linear map, since $\forall x,y\in\mathbb{R}^n$ and $\forall\alpha\in\mathbb{R}$:

$$L(x+y)=N(M(x+y))=N(Mx)+N(My)=Lx+Ly\qquad L(\alpha x)=N(M(\alpha x))=\alpha N(Mx)=\alpha Lx$$

In terms of the associated matrices, $\forall x\in\mathbb{R}^n$ we have $Lx=NMx$, where the component $L_{ij}$ is obtained from the components of $N$ and $M$ as the dot product of the $i^{th}$ row of $N$ with the $j^{th}$ column of $M$:

$$L_{ij}=\sum_{k=1}^mN_{ik}M_{kj}$$

This operation is matrix multiplication and the resulting matrix is denoted $NM$. Note that the product $MN$ is not even defined when $n\neq m\neq l$.

However, if $M,N\in\mathbb{B}(\mathbb{R}^n)$, i.e., their respective associated matrices are square and of the same dimension, then both composition maps are well defined. The first one is the composition map $N\circ M$ with its associated matrix $NM$, while the second is the composition map $M\circ N$ with its associated matrix $MN$. In general, these two maps are not identical.
The identity map $I\in\mathbb{B}(\mathbb{R}^n)$, defined by $Ix=x$, and its associated identity matrix constitute the identity element in the algebraic structure of $\mathbb{B}(\mathbb{R}^n)$.
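A minimal numerical illustration of this algebra: composition as matrix multiplication, its non-commutativity, and the identity element (matrices chosen arbitrarily):

```python
import numpy as np

# Composition corresponds to matrix multiplication, which is generally
# non-commutative; the identity matrix is the identity element.
N = np.array([[1.0, 2.0],
              [3.0, 4.0]])
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])

assert not np.allclose(N @ M, M @ N)      # NM != MN for these matrices

I = np.eye(2)
assert np.allclose(N @ I, N) and np.allclose(I @ N, N)
```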
Bijective (Invertible) Linear Maps
In this section, we are concerned with the linear maps represented by square matrices and whether these linear maps (linear functions) are invertible or not. Recall from the Mathematical Preliminaries section that a function $T:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is invertible if $\exists G:\mathbb{R}^n\rightarrow\mathbb{R}^n$ such that $G\circ T=T\circ G$ is the identity map. $G$ is denoted by $T^{-1}$. Let's now consider the linear map (represented by a matrix $M$) $T:\mathbb{R}^n\rightarrow\mathbb{R}^n$; what are the conditions that guarantee the existence of $M^{-1}$ such that $M^{-1}M=MM^{-1}=I$, where $I$ is the identity matrix? We will answer this question using a few statements:
Statement 1: Let $T\in\mathbb{B}(\mathbb{R}^n)$ be a linear map. Then $T$ is injective $\Leftrightarrow\ker(T)=\{0\}$.

This statement is simple to prove. First note that since $T$ is a linear map, then $T(0)=T(0+0)=T(0)+T(0)$, i.e., $T(0)=0$.

First, assume $T$ is injective. Since $T(0)=0$ and since $T$ is injective, $0$ is the only vector whose image is $0$. Therefore, $\ker(T)=\{0\}$. For the opposite statement, assume that $\ker(T)=\{0\}$. We will argue by contradiction, i.e., by assuming that $T$ is not injective. Therefore, $\exists x,y\in\mathbb{R}^n$ with $x\neq y$ but $Tx=Ty$. Since $T$ is linear we have $T(x-y)=Tx-Ty=0$. Therefore, $x-y\in\ker(T)$ with $x-y\neq 0$, which is a contradiction. Therefore $T$ is injective.

◼
Statement 2: Let $T\in\mathbb{B}(\mathbb{R}^n)$ be a linear map. Then $T$ is bijective (invertible) $\Leftrightarrow\ker(T)=\{0\}$.

First assume that $T$ is invertible; therefore, $T$ is injective. Statement 1 asserts then that $\ker(T)=\{0\}$.

Assume now that $\ker(T)=\{0\}$. Therefore, from Statement 1, $T$ is injective. We need to show that $T$ is surjective. This can be proven by picking a basis set $B=\{e_1,e_2,\cdots,e_n\}$ for $\mathbb{R}^n$ and showing that the set $\{Te_1,Te_2,\cdots,Te_n\}$ is linearly independent, which right away implies that $T$ is surjective. Since $T$ is injective and $\{e_1,e_2,\cdots,e_n\}$ is linearly independent, we have:

$$\sum_{i=1}^n\alpha_iTe_i=0\Rightarrow T\left(\sum_{i=1}^n\alpha_ie_i\right)=0\Rightarrow\sum_{i=1}^n\alpha_ie_i=0\Rightarrow\alpha_1=\alpha_2=\cdots=\alpha_n=0$$

Therefore, the $n$ vectors $Te_i$ are linearly independent and form a basis for $\mathbb{R}^n$. Hence, $\forall y\in\mathbb{R}^n:\exists y_i$ such that $y=y_1Te_1+y_2Te_2+\cdots+y_nTe_n$, and the vector $x=y_1e_1+y_2e_2+\cdots+y_ne_n$ satisfies $Tx=y$, i.e., $y$ is in the range of $T$.

◼
Statement 3: Let $T\in\mathbb{B}(\mathbb{R}^n)$ be a linear map. Then $T$ is bijective $\Leftrightarrow$ the $n$ row vectors forming the square matrix of $T$ are linearly independent.

First assume that $\{r_1,r_2,\cdots,r_n\}$ are $n$ linearly independent vectors that form the row vectors of the matrix of the linear map $T$. We will argue by contradiction. Assume that $\exists x\in\ker(T)$ with $x\neq 0$. Then, since each component of $Tx$ is the dot product of a row with $x$, we have $\forall i:r_i\cdot x=0$. However, since $\{r_1,r_2,\cdots,r_n\}$ are linearly independent, they form a basis set and $x$ can be expressed in terms of all of them: $x=\sum_{i=1}^n\alpha_ir_i$. Therefore $x\cdot x=\sum_{i=1}^n\alpha_i(r_i\cdot x)$. But $x$ is orthogonal to all of them, then $x\cdot x=0$. Therefore, $x=0$, which is a contradiction; hence $\ker(T)=\{0\}$ and the map is bijective using Statement 2.

For the opposite direction, assume that the map is bijective yet $\{r_1,r_2,\cdots,r_n\}$ are linearly dependent. Since they are linearly dependent, there is at least one vector that can be represented as a linear combination of the other vectors. Without loss of generality, assume that $r_n=\sum_{i=1}^{n-1}\alpha_ir_i$. Therefore, for every $x$, the last component of $Tx$ is determined by the first $n-1$ components, so the range of $T$ is contained in a subspace of dimension at most $n-1$, which contradicts the surjectivity of $T$.

◼
Statement 3 asserts that a square matrix is invertible if and only if the rows are linearly independent. In the following section, we will present the determinant of a matrix as a measure of whether the rows are linearly independent or not.
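Statement 3 can be illustrated numerically: a matrix with linearly dependent rows has rank less than $n$ and no inverse, while independent rows give an invertible matrix. A short check with NumPy (matrices chosen arbitrarily):

```python
import numpy as np

# Dependent rows -> singular matrix.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # second row = 2 x first row
assert np.linalg.matrix_rank(A) < 2   # rows dependent -> not invertible

# Independent rows -> invertible matrix.
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
Binv = np.linalg.inv(B)               # inverse exists
assert np.allclose(B @ Binv, np.eye(2))
```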
Determinant
The determinant of a matrix representation of a linear map is a real-valued function of the components of a square matrix. The determinant is used to indicate whether the rows of the matrix are linearly dependent or not. If they are, then the determinant is equal to zero; otherwise, the determinant is not equal to zero. In the following, we will show the definition of the determinant function for $M\in\mathbb{M}^2$ and $M\in\mathbb{M}^3$, and for a general $M\in\mathbb{M}^n$. We will also verify that the determinant of $M$ is equal to zero if and only if the row vectors of the matrix are linearly dependent for the cases $M\in\mathbb{M}^2$ and $M\in\mathbb{M}^3$.
DETERMINANT OF $M\in\mathbb{M}^2$:

Let $M\in\mathbb{M}^2$ such that

$$M=\begin{pmatrix}M_{11}&M_{12}\\M_{21}&M_{22}\end{pmatrix}$$

The determinant of $M$ is defined as:

$$\det(M)=M_{11}M_{22}-M_{12}M_{21}$$

Clearly, the row vectors $(M_{11},M_{12})$ and $(M_{21},M_{22})$ are linearly dependent if and only if $\det(M)=0$. The determinant of the matrix $M$ has a geometric meaning (See Figure 1). Consider the two unit vectors $e_1$ and $e_2$. Let $u=Me_1$ and $v=Me_2$. The (signed) area of the parallelogram formed by $u$ and $v$ is equal to the determinant of the matrix $M$.

The following is true $\forall M,N\in\mathbb{M}^2$ and $\forall\alpha\in\mathbb{R}$:

$$\det(MN)=\det(M)\det(N)\qquad\det(M^T)=\det(M)\qquad\det(\alpha M)=\alpha^2\det(M)\qquad\det(I)=1$$

where $I$ is the identity matrix.

![Figure 1. The determinant of a 2×2 matrix as the area of a parallelogram.](https://engcourses-uofa.ca/wp-content/uploads/Det2-1.png)
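The definition and the listed properties can be spot-checked numerically with arbitrarily chosen $2\times 2$ matrices:

```python
import numpy as np

# det(M) = M11*M22 - M12*M21 for a 2x2 matrix.
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
assert np.isclose(np.linalg.det(M), 3.0 * 2.0 - 1.0 * 1.0)   # = 5

# Property checks: det(MN) = det(M)det(N) and det(M^T) = det(M).
N = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.isclose(np.linalg.det(M @ N), np.linalg.det(M) * np.linalg.det(N))
assert np.isclose(np.linalg.det(M.T), np.linalg.det(M))
assert np.isclose(np.linalg.det(np.eye(2)), 1.0)
```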
DETERMINANT OF $M\in\mathbb{M}^3$:

Let $M\in\mathbb{M}^3$ such that

$$M=\begin{pmatrix}M_{11}&M_{12}&M_{13}\\M_{21}&M_{22}&M_{23}\\M_{31}&M_{32}&M_{33}\end{pmatrix}$$

If $u=(M_{11},M_{12},M_{13})$, $v=(M_{21},M_{22},M_{23})$, and $w=(M_{31},M_{32},M_{33})$ are the row vectors of $M$, then the determinant of $M$ is defined as:

$$\det(M)=u\cdot(v\times w)$$

i.e., the triple product of $u$, $v$, and $w$. From the results of the triple product, the vectors $u$, $v$, and $w$ are linearly dependent if and only if $\det(M)=0$. The determinant of the matrix $M$ has a geometric meaning (See Figure 2). Consider the three unit vectors $e_1$, $e_2$, and $e_3$. Let $a=Me_1$, $b=Me_2$, and $c=Me_3$. The determinant of $M$ is also equal to the triple product of $a$, $b$, and $c$ and gives the volume of the parallelepiped formed by $a$, $b$, and $c$.

Additionally, if $u$, $v$, and $w$ are linearly independent, it is straightforward to show the following:

$$\det(M)=\frac{V'}{V}$$

where $V'$ is the volume of the transformed parallelepiped between $Mu$, $Mv$, and $Mw$, and $V$ is the volume of the parallelepiped between $u$, $v$, and $w$.

The alternator $\varepsilon_{ijk}$ defined in the Mathematical Preliminaries can be used to write the following useful equality:

$$\det(M)=\varepsilon_{ijk}M_{1i}M_{2j}M_{3k}$$

The following is true $\forall M,N\in\mathbb{M}^3$ and $\forall\alpha\in\mathbb{R}$:

$$\det(MN)=\det(M)\det(N)\qquad\det(M^T)=\det(M)\qquad\det(\alpha M)=\alpha^3\det(M)\qquad\det(I)=1$$

where $I$ is the identity matrix.

![Figure 2. The determinant of a 3×3 matrix as the volume of a parallelepiped.](https://engcourses-uofa.ca/wp-content/uploads/Det3-1.png)
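The triple-product definition can be checked against NumPy's determinant for arbitrarily chosen row vectors:

```python
import numpy as np

# det(M) equals the triple product u . (v x w) of the row vectors.
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
w = np.array([1.0, 1.0, 1.0])
M = np.vstack([u, v, w])              # rows of M are u, v, w

assert np.isclose(np.linalg.det(M), u @ np.cross(v, w))
```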
AREA TRANSFORMATION IN $\mathbb{R}^3$:

The following is a very important formula (often referred to as “Nanson's Formula”) that relates the cross product of vectors in $\mathbb{R}^3$ to the cross product of their images under a linear transformation. This formula is used to relate area vectors before mapping to area vectors after mapping.

ASSERTION:

Let $u,v\in\mathbb{R}^3$. Let $M\in\mathbb{M}^3$ be an invertible matrix. Show the following relationship:

$$Mu\times Mv=\det(M)\,M^{-T}(u\times v)$$

PROOF:

Let $w$ be an arbitrary vector in $\mathbb{R}^3$. From the relationships above we have:

$$Mw\cdot(Mu\times Mv)=\det(M)\,w\cdot(u\times v)$$

Therefore:

$$w\cdot\left(M^T(Mu\times Mv)\right)=w\cdot\left(\det(M)(u\times v)\right)$$

Since $w$ is arbitrary, $M^T(Mu\times Mv)=\det(M)(u\times v)$, and since $M$ (and hence $M^T$) is invertible:

$$Mu\times Mv=\det(M)\,M^{-T}(u\times v)$$

Nanson's formula is sometimes written as follows:

$$n\,\mathrm{d}a=\det(F)\,F^{-T}N\,\mathrm{d}A$$

where $N\,\mathrm{d}A$ is the area vector before mapping, $n\,\mathrm{d}a$ is the area vector after mapping, and $F$ is the matrix of the linear transformation.
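The assertion $Mu\times Mv=\det(M)M^{-T}(u\times v)$ can be verified numerically for an arbitrarily chosen invertible matrix:

```python
import numpy as np

# Numerical check of Mu x Mv = det(M) * M^{-T} (u x v).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])       # det(M) = 7, so M is invertible
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

lhs = np.cross(M @ u, M @ v)
rhs = np.linalg.det(M) * np.linalg.inv(M).T @ np.cross(u, v)
assert np.allclose(lhs, rhs)
```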
DETERMINANT OF $M\in\mathbb{M}^n$:

The determinant of $M\in\mathbb{M}^n$ is defined using the recursive relationship:

$$\det(M)=\sum_{j=1}^n(-1)^{1+j}M_{1j}\det(\hat{M}_{1j})$$

where $\hat{M}_{1j}\in\mathbb{M}^{n-1}$ is formed by eliminating the 1st row and the $j^{th}$ column of the matrix $M$. It can be shown that $\det(M)=0\Leftrightarrow$ the rows of $M$ are linearly dependent.
Video
See “Area Transformation” Video here
Eigenvalues and Eigenvectors
Let $T\in\mathbb{B}(\mathbb{R}^n)$. $\lambda\in\mathbb{R}$ is called an eigenvalue of the tensor $T$ if $\exists p\in\mathbb{R}^n$, $p\neq 0$, such that $Tp=\lambda p$. In this case, $p$ is called an eigenvector of $T$ associated with the eigenvalue $\lambda$.
Similar Matrices
Let $T\in\mathbb{B}(\mathbb{R}^n)$. Let $S\in\mathbb{B}(\mathbb{R}^n)$ be an invertible tensor. The matrix representations of the tensors $T$ and $S^{-1}TS$ are termed “similar matrices”.

Similar matrices have the same eigenvalues while their eigenvectors differ by a linear transformation, as follows: If $\lambda$ is an eigenvalue of $T$ with the associated eigenvector $p$, then:

$$(S^{-1}TS)(S^{-1}p)=S^{-1}Tp=\lambda S^{-1}p$$

Therefore, $\lambda$ is an eigenvalue of $S^{-1}TS$ and $S^{-1}p$ is the associated eigenvector. Similarly, if $\lambda$ is an eigenvalue of $S^{-1}TS$ with the associated eigenvector $q$, then:

$$T(Sq)=S(S^{-1}TS)q=\lambda Sq$$

Therefore, $\lambda$ is an eigenvalue of $T$ and $Sq$ is the associated eigenvector. Therefore, similar matrices share the same eigenvalues.
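A quick numerical confirmation that $T$ and $S^{-1}TS$ share eigenvalues, with arbitrarily chosen $T$ and invertible $S$:

```python
import numpy as np

# Similar matrices T and S^{-1} T S have the same eigenvalues.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # triangular: eigenvalues 2 and 3
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # invertible
Tsim = np.linalg.inv(S) @ T @ S

ev1 = np.sort(np.linalg.eigvals(T))
ev2 = np.sort(np.linalg.eigvals(Tsim))
assert np.allclose(ev1, ev2)
```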
The Eigenvalues and Eigenvector Problem
Given a tensor $T\in\mathbb{B}(\mathbb{R}^n)$, we seek a nonzero vector $p\in\mathbb{R}^n$ and a real number $\lambda$ such that $Tp=\lambda p$. This is equivalent to $(T-\lambda I)p=0$. In other words, the eigenvalue is a real number that makes the tensor $T-\lambda I$ not invertible, while the eigenvector is a non-zero vector $p\in\ker(T-\lambda I)$. Considering the matrix representation of the tensor $T$, the eigenvalue is the solution to the following equation:

$$\det(T-\lambda I)=0$$

The above equation is called the characteristic equation of the matrix $T$. From the properties of the determinant function, the characteristic equation is an $n^{th}$ degree polynomial of the unknown $\lambda$, where $n$ is the dimension of the underlying space.

In particular, $\det(T-\lambda I)=a_n\lambda^n+a_{n-1}\lambda^{n-1}+\cdots+a_1\lambda+a_0$, where $a_0,a_1,\cdots,a_n$ are called the polynomial coefficients. Thus, the solution to the characteristic equation abides by the following facts from polynomial functions:

– Polynomial roots: A polynomial $p(\lambda)$ has a root $r$ if $(\lambda-r)$ divides $p(\lambda)$, i.e., $\exists$ a polynomial $q(\lambda)$ such that $p(\lambda)=(\lambda-r)q(\lambda)$.

– The fundamental theorem of algebra states that a polynomial of degree $n$ has $n$ complex roots that are not necessarily distinct.

– The Complex Conjugate Root Theorem states that if $z$ is a complex root of a polynomial with real coefficients, then the conjugate $\bar{z}$ is also a complex root.
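In practice the eigenvalue problem is solved numerically. The sketch below uses `numpy.linalg.eig` on a symmetric matrix (which is guaranteed to have real eigenvalues) and verifies $Tp=\lambda p$ for each pair, as well as the characteristic equation:

```python
import numpy as np

# Solve the eigenvalue problem T p = lambda p numerically.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric -> real eigenvalues
lams, P = np.linalg.eig(T)            # eigenvalues, eigenvector columns
for k in range(T.shape[0]):
    assert np.allclose(T @ P[:, k], lams[k] * P[:, k])

# det(T - lambda I) = 0 holds at each eigenvalue.
for lam in lams:
    assert np.isclose(np.linalg.det(T - lam * np.eye(2)), 0.0)

assert np.allclose(np.sort(lams), [1.0, 3.0])
```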
Therefore, the eigenvalues can either be real or complex numbers. If one eigenvalue is a real number, then there exists a vector with real valued components that is an eigenvector of the tensor. Otherwise, the only eigenvectors are complex eigenvectors which are elements of finite dimensional linear spaces over the field of complex numbers.
Graphical Representation of the Eigenvalues and Eigenvectors
The eigenvectors of a tensor $T$ are those vectors that do not change their direction upon transformation with the tensor $T$; rather, their length is magnified or reduced by a factor $\lambda$. Notice that an eigenvalue can be negative (i.e., the transformed vector can have an opposite direction). Additionally, an eigenvalue can have the value of 0. In that case, the eigenvector is an element of the kernel of the tensor.
The following example illustrates this concept. Choose four entries for the matrix $M$ and press evaluate.
The tool then draws 8 coloured vectors across the circle and their respective images across the ellipse. Use visual inspection to identify which vectors keep their original direction.
The tool also finds at most two eigenvectors (if they exist) and draws them in black along with their opposite directions. A similar tool is available on the external MecsimCalc python app builder. Use the tool to investigate the eigenvalues and eigenvectors of the following matrices:
After inspection, you should have noticed that every vector is an eigenvector of the identity matrix $I$ since $Ix=x$, i.e., $I$ possesses one eigenvalue, which is 1, but all the vectors in $\mathbb{R}^2$ are possible eigenvectors.
You should also have noticed that some matrices don’t have any real eigenvalues, i.e., none of the vectors keep their direction after transformation. This is the case for the matrix:
Additionally, the matrix:
has only one eigenvalue, while any vector which is a multiple of the corresponding eigenvector keeps its direction after transformation through the matrix. You should also notice that some matrices will have negative eigenvalues. In that case, the corresponding eigenvector will be transformed into the direction opposite to its original direction. See, for example, the matrix:
Videos:
2.2.1 Basic Definitions
2.2.1.4 Tensor Product
Refer to this section.
2.2.1.7 Area Transformation
Refer to this section.
https://mecsimcalc.com/app/5787840/graphical_representation_of_the_eigenvalues_and_eigenvectors