Linear Vector Spaces: Basic Definitions

Linear Vector Space

A set V is called a linear vector space over a field \mathbb{F} if the two operations (vector addition) +:V\times V\rightarrow V and (scalar multiplication) \cdot: \mathbb{F}\times V\rightarrow V are defined and satisfy the following 8 axioms: \forall x, y, z \in V; \forall \alpha,\beta \in\mathbb{F}:

  1. Commutativity: x + y = y + x
  2. Associativity: (x + y) + z = x + (y + z)
  3. Distributivity of scalar multiplication over vector addition: \alpha(x+y)=\alpha x+\alpha y
  4. Distributivity of scalar multiplication over scalar addition: (\alpha + \beta)x=\alpha x+\beta x
  5. Compatibility of scalar multiplication with field multiplication: \alpha(\beta x)=(\alpha \beta)x
  6. Identity element of addition/Zero element: \exists \hat{0}\in V such that \forall v \in V: v+\hat{0}=v
  7. Inverse element of addition: \forall v \in V: \exists \tilde{v}\in V such that v+\tilde{v}=\hat{0}. \tilde{v} is denoted -v
  8. Identity element of scalar multiplication: 1x=x

The elements of the linear vector space are called vectors, while the elements of \mathbb{F} are called scalars. In general, \mathbb{F} is either the field of real numbers \mathbb{R} or of complex numbers \mathbb{C}. For the remainder of this article, \mathbb{F} is assumed to be the field of real numbers \mathbb{R} unless otherwise specified.
Historically, the following statements sometimes appeared as part of the definition of a linear vector space. However, they can all be proved from the 8 axioms above.

  • The element \hat{0} is unique.
  • For each v\in V, the additive inverse \tilde{v}=-v is unique.
  • If x+y=x+z, then y=z.
  • \forall \alpha \in \mathbb{R}, \forall v\in V:0v=\hat{0} and \alpha\hat{0}=\hat{0}.
  • (-1)x=-x
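
For instance, the identity 0v=\hat{0} follows directly from the axioms. By axiom 4,

    \[ 0v = (0+0)v = 0v + 0v \]

and adding the inverse -(0v) to both sides (using axioms 7, 2, and 6 in turn) leaves 0v=\hat{0}.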

Notice that in the above definition and statements, the hat symbol was used to distinguish the zero vector \hat{0} from the zero scalar 0. However, in the future, 0 will be used for both and it is to be understood from the context whether it is the zero vector or the zero scalar.
Examples of linear vector spaces include the following sets:

  • \mathbb{R}
  • \mathbb{R}^2=\mathbb{R}\times\mathbb{R}=\{(x,y)|x,y\in\mathbb{R}\}
  • \mathbb{R}^n=\mathbb{R}\times\mathbb{R}\times\cdots\times\mathbb{R}=\{(x_1,x_2,\cdots,x_n)|\forall i:x_i\in\mathbb{R}\}
  • The set V=\{f:\mathbb{R}\rightarrow\mathbb{R}|\exists a,b\in\mathbb{R}\mbox{ such that }f(x)=ax+b\} is a linear vector space!
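
The last example deserves a closer look. The following minimal sketch (standard Python only; the helper names affine, add, and scale are illustrative, not part of the original text) represents f(x)=ax+b by the pair (a,b) and spot-checks that pointwise addition and scalar multiplication stay inside the set:

    def affine(a, b):
        """Return the affine function f(x) = a*x + b."""
        return lambda x: a * x + b

    def add(f_params, g_params):
        """(f + g)(x) = (a1 + a2)*x + (b1 + b2), again affine."""
        (a1, b1), (a2, b2) = f_params, g_params
        return (a1 + a2, b1 + b2)

    def scale(alpha, f_params):
        """(alpha f)(x) = (alpha*a)*x + (alpha*b), again affine."""
        a, b = f_params
        return (alpha * a, alpha * b)

    # Spot-check both operations at a sample point:
    f, g = (2.0, 1.0), (-1.0, 3.0)       # f(x) = 2x + 1, g(x) = -x + 3
    h = add(f, g)                        # h(x) = x + 4
    assert affine(*h)(5.0) == affine(*f)(5.0) + affine(*g)(5.0)
    assert affine(*scale(2.0, f))(5.0) == 2.0 * affine(*f)(5.0)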

Subspaces

A non-empty subset Y of a vector space V is called a subspace if \forall x,y\in Y,\forall \alpha,\beta \in \mathbb{R}:\alpha x + \beta y \in Y.
In other words, a subset Y of a vector space V is called a subspace if it is closed under both vector addition and scalar multiplication. For example, the set Y=\{(0,x)|x\in\mathbb{R}\} is a subspace of \mathbb{R}^2 (why?).
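
A minimal sketch of this closure test (standard Python only; in_Y and comb are hypothetical helper names) confirms numerically that a linear combination of two elements of Y keeps a zero first component:

    def in_Y(v):
        """Membership in Y = {(0, x)}: the first component must be zero."""
        return v[0] == 0

    def comb(alpha, x, beta, y):
        """The linear combination alpha*x + beta*y in R^2."""
        return (alpha * x[0] + beta * y[0], alpha * x[1] + beta * y[1])

    x, y = (0.0, 2.5), (0.0, -7.0)       # two elements of Y
    assert in_Y(comb(3.0, x, -1.5, y))   # closure holds for this sample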

Linear Independence

Let V be a linear vector space over \mathbb{R}. A set of vectors B=\{v_1,v_2,v_3,\cdots,v_n\}\subset V is called a linearly independent set of vectors if none of the vectors v_i can be expressed linearly in terms of the other elements in the set; i.e., the only linear combination that would produce the zero vector is the trivial combination:

    \[ \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = 0 \Leftrightarrow \forall i:\alpha_i=0 \]

Otherwise, the set is called linearly dependent.

Examples:
Consider the vector space \mathbb{R}^2. The set B=\{v,u\} where v=(1,0) and u=(0,1) is linearly independent:

    \[ \alpha_1 \left( \begin{array}{c} 1\\0 \end{array}\right)+\alpha_2 \left( \begin{array}{c} 0\\1 \end{array}\right) = \left( \begin{array}{c} \alpha_1\\\alpha_2 \end{array}\right)=\left( \begin{array}{c} 0\\0 \end{array}\right)\Leftrightarrow \alpha_1=\alpha_2=0 \]

However, the set C=\{x,y\} where x=(1,0) and y=(5,0) is linearly dependent:

    \[ \alpha_1 \left( \begin{array}{c} 1\\0 \end{array}\right)+\alpha_2 \left( \begin{array}{c} 5\\0 \end{array}\right) = \left( \begin{array}{c} \alpha_1+5\alpha_2\\0 \end{array}\right)=\left( \begin{array}{c} 0\\0 \end{array}\right)\Rightarrow \alpha_1+5\alpha_2=0 \]

This means that there are non-trivial combinations that produce the zero vector. For example, choosing \alpha_2=1 and \alpha_1=-5 yields \alpha_1x+\alpha_2y=0.
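
Both examples can be checked mechanically. In the sketch below (assuming numpy is available), the vectors are placed as the columns of a matrix; they are linearly independent exactly when the matrix has full column rank:

    import numpy as np

    B = np.array([[1.0, 0.0],
                  [0.0, 1.0]])   # columns v = (1, 0) and u = (0, 1)
    C = np.array([[1.0, 5.0],
                  [0.0, 0.0]])   # columns x = (1, 0) and y = (5, 0)

    print(np.linalg.matrix_rank(B))   # 2 -> linearly independent
    print(np.linalg.matrix_rank(C))   # 1 -> linearly dependent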

Notice that if the zero vector is an element of a set B=\{0,v_2,v_3,\cdots,v_n\}, then the set is automatically linearly dependent, since the following non-trivial combination (any \alpha_1\neq 0) produces the zero vector:

    \[ \alpha_1 0 + 0 v_2 + 0 v_3 + \cdots + 0 v_n = 0 \]

Basis and Dimensions

A subset B of a linear vector space V is called a basis set if it is a set of linearly independent vectors such that every v\in V can be expressed as a linear combination of the elements of B. Alternatively: B=\{e_1,e_2,\cdots,e_n\}\subset V is a basis of V if and only if the following two conditions are satisfied:

  • \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n = 0 \Leftrightarrow \forall i:\alpha_i=0
  • \forall v\in V:\exists v_1,v_2,\cdots,v_n\in\mathbb{R} such that v=v_1e_1+v_2e_2+\cdots+v_ne_n

The dimension of a linear vector space V is the number of elements in its basis set; this is well defined because every basis of V has the same number of elements.
For example, \mathbb{R}^2=\{(x,y)|x,y\in\mathbb{R}\} is a two-dimensional space, because its basis set B=\{e_1,e_2\} has two elements e_1=(1,0) and e_2=(0,1).
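
The same rank test (numpy assumed) illustrates that the dimension does not depend on the choice of basis: the standard basis and the set \{(1,1),(1,-1)\} are both linearly independent sets that span \mathbb{R}^2, and each has two elements:

    import numpy as np

    standard = np.array([[1.0, 0.0],
                         [0.0, 1.0]])
    other = np.array([[1.0, 1.0],
                      [1.0, -1.0]])     # columns (1, 1) and (1, -1)

    # Full rank (= 2) means the columns form a basis of R^2.
    print(np.linalg.matrix_rank(standard))   # 2
    print(np.linalg.matrix_rank(other))      # 2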

Assertion: If B=\{e_1,e_2,\cdots,e_n\}\subset V is a basis set of V, then \forall v\in V the combination v=v_1e_1+v_2e_2+\cdots+v_ne_n is unique; i.e., \forall v\in V:\exists!v_i\in\mathbb{R} such that v=v_1e_1+v_2e_2+\cdots+v_ne_n.

Proof: Suppose v admits two representations, v=v_1e_1+v_2e_2+\cdots+v_ne_n=w_1e_1+w_2e_2+\cdots+w_ne_n. Subtracting the two gives (v_1-w_1)e_1+(v_2-w_2)e_2+\cdots+(v_n-w_n)e_n=0, and the linear independence of B forces v_i=w_i for every i.

Click on the image below to interact with this example. Input the components of the vectors e_1, e_2, and v. The tool draws the vectors e_1 and e_2 in blue and red, respectively, and identifies whether they are linearly dependent or linearly independent. If they are linearly independent, the tool calculates and draws the components of v in the directions of e_1 and e_2.
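
For readers without access to the interactive tool, the following sketch (numpy assumed; the function name components is illustrative) performs the same computation: it tests whether e_1 and e_2 are linearly independent and, if so, solves for the components of v:

    import numpy as np

    def components(e1, e2, v):
        """Return (v1, v2) with v = v1*e1 + v2*e2, or None when
        e1 and e2 are linearly dependent (zero determinant)."""
        E = np.column_stack([e1, e2])
        if np.isclose(np.linalg.det(E), 0.0):
            return None                   # e1, e2 do not form a basis
        return np.linalg.solve(E, v)      # unique by the assertion above

    print(components(np.array([1.0, 0.0]),
                     np.array([1.0, 1.0]),
                     np.array([3.0, 2.0])))   # [1. 2.]: v = 1*e1 + 2*e2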
