Open Educational Resources

Linear Systems of Equations: Linear Algebra

Definitions

Linear Vector Spaces

In this course, by a linear vector space we mean the set

    \[ \mathbb{R}^n=\mathbb{R}\times\mathbb{R}\times\cdots\times\mathbb{R}=\left\{\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right)\bigg| x_1,x_2,\cdots,x_n\in\mathbb{R} \right\} \]

The dimension of this vector space is n. A vector x\in\mathbb{R}^n in this space has n components x_1, x_2,\cdots,x_n, each of which is a real number. The zero vector is the vector whose components are all equal to zero.

For example, \mathbb{R}^2 is the two-dimensional linear vector space; each vector in this space has two components, and \mathbb{R}^2 can be represented geometrically by a plane. Similarly, \mathbb{R}^3 is the three-dimensional linear vector space, and each vector in it has three components. The vector \left(\begin{array}{c}0\\0\end{array}\right) is the zero vector in \mathbb{R}^2, and the vector \left(\begin{array}{c}0\\0\\0\end{array}\right) is the zero vector in \mathbb{R}^3.
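
In Python (using NumPy), vectors in these spaces can be represented as one-dimensional arrays; a minimal sketch (the component values below are chosen arbitrarily):

```python
import numpy as np

# A vector in R^3 with three real components
x = np.array([1.0, -2.0, 0.5])

# The zero vectors of R^2 and R^3
zero2 = np.zeros(2)
zero3 = np.zeros(3)

print(x.shape)   # (3,): the vector has n = 3 components
print(zero3)     # [0. 0. 0.]
```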

Linear Functions between Vector Spaces (Matrices)

Given a linear vector space \mathbb{R}^m, and another linear vector space \mathbb{R}^n, a linear function f:\mathbb{R}^m\rightarrow\mathbb{R}^n has the following form, \forall x\in\mathbb{R}^m:

    \[ f(x)=\left(\begin{array}{c} a_{11}x_1+a_{12}x_2+\cdots +a_{1m}x_m\\ a_{21}x_1+a_{22}x_2+\cdots +a_{2m}x_m\\ \vdots \\ a_{n1}x_1+a_{n2}x_2+\cdots +a_{nm}x_m \end{array}\right) \]

The function f takes a vector x with m components and returns a vector y=f(x) which has n components. Using matrix notation, the above can be written in the following more convenient form:

    \[ f(x)=\left(\begin{matrix}a_{11}&a_{12}&\cdots&a_{1m}\\a_{21}&a_{22}&\cdots&a_{2m}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nm}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_m\end{array}\right) \]

If we denote the n\times m matrix of numbers by A, then the linear function f can be written in the following simple form:

    \[ f(x)=Ax \]

We noted that x is an element of the linear vector space \mathbb{R}^m. Similarly, we view A as an element of the set of possible linear functions from \mathbb{R}^m to \mathbb{R}^n, which is denoted \mathbb{B}(\mathbb{R}^m,\mathbb{R}^n).

If n=m, then for simplicity, we say that A\in\mathbb{B}(\mathbb{R}^n). We also use the symbol \mathbb{M}^n to denote the set of square matrices that live in the space \mathbb{B}(\mathbb{R}^n).
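
As a quick numerical illustration (the entries of A and x below are chosen arbitrarily), the action of a matrix A\in\mathbb{B}(\mathbb{R}^3,\mathbb{R}^2) on a vector is simply a matrix-vector product:

```python
import numpy as np

# A in B(R^3, R^2): a 2x3 matrix taking 3-component vectors to 2-component vectors
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([1.0, 0.0, -1.0])

y = A @ x   # f(x) = Ax
print(y)    # -> [-2. -2.], a vector in R^2
```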

For a visual representation showing the action of the matrix A\in\mathbb{M}^2 on the space \mathbb{R}^2, see the tool shown here.

Matrix Transpose

If A\in\mathbb{B}(\mathbb{R}^m,\mathbb{R}^n), then the transpose of A, namely A^T is defined as the matrix A^T\in\mathbb{B}(\mathbb{R}^n,\mathbb{R}^m) whose columns are the rows of A.

Examples

Example 1

Consider the function f:\mathbb{R}\rightarrow\mathbb{R} defined as f(x)=5x. Following the above definition, x is a one-dimensional vector and the matrix A has dimensions 1\times 1. In this case A\in \mathbb{B}(\mathbb{R}^1) and A=A^T.

Example 2

Consider the function f:\mathbb{R}^3\rightarrow\mathbb{R}^2 defined as follows: \forall x\in\mathbb{R}^3:

    \[ f(x)=\left(\begin{array}{c}5x_1+3x_2+2x_3\\2x_2+4x_3\end{array}\right) \]

In matrix form, this function can be represented as follows:

    \[ f(x)=\left(\begin{array}{ccc}5&3&2\\0&2&4\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right) \]

This function takes vectors that have three components and gives vectors that have two components. The matrix A associated with this linear function has dimensions 2\times 3, lives in the space \mathbb{B}(\mathbb{R}^3,\mathbb{R}^2), and has the form:

    \[ A=\left(\begin{array}{ccc}5&3&2\\0&2&4\end{array}\right) \]

In this case, A^T is simply:

    \[ A^T=\left(\begin{array}{cc}5&0\\3&2\\2&4\end{array}\right) \]
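These computations can be checked numerically; for instance, using NumPy with an arbitrary test vector:

```python
import numpy as np

# The matrix A of Example 2
A = np.array([[5, 3, 2],
              [0, 2, 4]])
x = np.array([1, 1, 1])   # an arbitrary test vector in R^3

print(A @ x)   # -> [10  6]: the components 5x1+3x2+2x3 and 2x2+4x3
print(A.T)     # the 3x2 transpose: the columns of A.T are the rows of A
```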

Example 3

Consider the function f:\mathbb{R}^3\rightarrow\mathbb{R}^3 defined as follows: \forall x\in\mathbb{R}^3:

    \[ f(x)=\left(\begin{array}{c}5x_1+3x_2+2x_3\\2x_2+4x_3\\2x_1-3x_3\end{array}\right) \]

In matrix form, this function can be represented as follows:

    \[ f(x)=\left(\begin{array}{ccc}5&3&2\\0&2&4\\2&0&-3\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right) \]

This function takes vectors that have three components and gives vectors that have three components. The matrix A associated with this linear function has dimensions 3\times 3, lives in the space \mathbb{B}(\mathbb{R}^3)=\mathbb{M}^3, and has the form:

    \[ A=\left(\begin{array}{ccc}5&3&2\\0&2&4\\2&0&-3\end{array}\right) \]

In this case, A^T is simply:

    \[ A^T=\left(\begin{array}{ccc}5&0&2\\3&2&0\\2&4&-3\end{array}\right) \]

Example 4

Consider the function f:\mathbb{R}^3\rightarrow\mathbb{R} defined as follows: \forall x\in\mathbb{R}^3:

    \[ f(x)=\left(\begin{array}{c}5x_1+2x_3\end{array}\right) \]

In matrix form, this function can be represented as follows:

    \[ f(x)=\left(\begin{array}{ccc}5&0&2\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right) \]

This function takes vectors that have three components and gives vectors that have one component. The matrix A associated with this linear function has dimensions 1\times 3, lives in the space \mathbb{B}(\mathbb{R}^3,\mathbb{R}), and has the form:

    \[ A=\left(\begin{array}{ccc}5&0&2\end{array}\right) \]

In this case, A^T is simply:

    \[ A^T=\left(\begin{array}{c}5\\0\\2\end{array}\right) \]

Linear System of Equations

Let A\in\mathbb{B}(\mathbb{R}^m,\mathbb{R}^n) and let b\in\mathbb{R}^n be a given vector. A linear system of equations seeks the solution x\in\mathbb{R}^m to the equation:

    \[ Ax=b \]

In matrix component form, the above equation is:

    \[ \left(\begin{matrix}a_{11}&a_{12}&\cdots&a_{1m}\\a_{21}&a_{22}&\cdots&a_{2m}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nm}\end{matrix}\right)\left(\begin{array}{c}x_1\\x_2\\\vdots\\x_m\end{array}\right)=\left(\begin{array}{c}b_1\\b_2\\\vdots\\b_n\end{array}\right) \]

Another way of viewing this set of equations is as follows:

    \[\begin{split} a_{11}x_1+a_{12}x_2+\cdots +a_{1m}x_m&=b_1\\ a_{21}x_1+a_{22}x_2+\cdots +a_{2m}x_m&=b_2\\ &\vdots \\ a_{n1}x_1+a_{n2}x_2+\cdots +a_{nm}x_m&=b_n \end{split} \]

The solution of a linear system of equations is the set of numerical values for the variables x_1, x_2,\cdots,x_m that satisfy the above equations simultaneously. There are three possible scenarios:

Scenario 1

If m>n, then we have more unknown components of the m-dimensional vector x than we have equations, and the system is termed underdetermined. This results in having many possible solutions or, in some cases, no solution. For example, consider the linear system of equations Ax=b defined as:

    \[ \left(\begin{array}{ccc}5&3&2\\0&2&4\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}1\\2\end{array}\right) \]

There are many possible solutions that would satisfy the above equation. One solution can be obtained by setting x_3=1. Then:

    \[\begin{split} 5x_1+3x_2+2&=1\\ 2x_2+4&=2 \end{split} \]

which results in x_2=-1 and x_1=\frac{2}{5}. Therefore one possible solution is:

    \[ x=\left(\begin{array}{c}\frac{2}{5}\\-1\\1\end{array}\right) \]

Another possible solution can be obtained by setting x_3=2. Then:

    \[\begin{split} 5x_1+3x_2+4&=1\\ 2x_2+8&=2 \end{split} \]

which results in x_2=-3 and x_1=\frac{6}{5}. Therefore another possible solution is:

    \[ x=\left(\begin{array}{c}\frac{6}{5}\\-3\\2\end{array}\right) \]

In fact, there are infinitely many possible solutions, depending on the value chosen for x_3. If we set x_3=\alpha, then we have:

    \[\begin{split} 5x_1+3x_2+2\alpha&=1\\ 2x_2+4\alpha&=2 \end{split} \]

Therefore:

    \[ x_2=\frac{2-4\alpha}{2}=1-2\alpha \]

and

    \[ x_1=\frac{1}{5}(1-2\alpha - 3(1-2\alpha))=\frac{1}{5}(-2+4\alpha) \]

Therefore:

    \[ x=\left(\begin{array}{c}\frac{-2+4\alpha}{5}\\1-2\alpha\\\alpha\end{array}\right) \]
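
This one-parameter family of solutions can be verified numerically: for any choice of \alpha, substituting x back into Ax should return b. A sketch using NumPy:

```python
import numpy as np

A = np.array([[5.0, 3.0, 2.0],
              [0.0, 2.0, 4.0]])
b = np.array([1.0, 2.0])

# The parametric solution derived above, checked for several arbitrary alphas
for alpha in [0.0, 1.0, 2.0, -3.5]:
    x = np.array([(-2 + 4*alpha)/5, 1 - 2*alpha, alpha])
    assert np.allclose(A @ x, b)   # every alpha yields a valid solution
print("all checks passed")
```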

Scenario 2

If m<n and the n equations are linearly independent, then we have more equations than unknown components of the m-dimensional vector x. This imposes too many restrictions, which prevents us from finding a solution, and the system is termed overdetermined. For example, consider the linear system of equations Ax=b defined as:

    \[ \left(\begin{array}{cc}1&3\\0&2\\-1&1\end{array}\right)\left(\begin{array}{c}x_1\\x_2\end{array}\right)=\left(\begin{array}{c}1\\2\\2\end{array}\right) \]

In this system, we have three independent equations and two unknowns x_1 and x_2. If the first two equations are used to find a solution, then the third equation will present a contradiction. The first two equations have the following form:

    \[\begin{split} x_1+3x_2&=1\\ 2x_2&=2 \end{split} \]

which results in the solution x_2=1 and x_1=-2. If we substitute this solution into the third equation we get:

    \[ -1\times (-2)+1\times 1=2+1=3\neq 2 \]
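
This contradiction can be confirmed numerically by substituting the candidate solution into all three equations:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [0.0, 2.0],
              [-1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.array([-2.0, 1.0])   # satisfies the first two equations
print(A @ x)                # -> [1. 2. 3.]: the third component is 3, not 2
```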

Scenario 3

If m=n and the n equations are independent, i.e., the rows of the matrix are linearly independent, then a unique solution exists and the matrix A has an inverse. In this case, there exists a matrix A^{-1} such that:

    \[ Ax=b\Rightarrow x=A^{-1}b \]

To determine whether the equations are independent, the determinant function defined below suffices. If the determinant of a matrix is equal to 0, then the rows of the matrix are linearly dependent and an inverse cannot be found. If, on the other hand, the determinant is not equal to 0, then the rows of the matrix are linearly independent and an inverse can be found.

For example, consider the system Ax=b defined as:

    \[ \left(\begin{array}{cc}1&3\\0&2\end{array}\right)\left(\begin{array}{c}x_1\\x_2\end{array}\right)=\left(\begin{array}{c}1\\2\end{array}\right) \]

The rows of the matrix A are clearly linearly independent. The inverse of A can be found (which will be discussed later in more detail). For now, we can simply solve the two equations as follows. From the second equation we get:

    \[ 2x_2=2\Rightarrow x_2=1 \]

Substituting into the first equation we get:

    \[ x_1+3(1)=1\Rightarrow x_1=-2 \]

Therefore, the system has a unique solution.
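
In practice, such systems are solved with a library routine rather than by hand; for example, using NumPy's np.linalg.solve:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [0.0, 2.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)   # solves Ax = b when A is invertible
print(x)                    # -> [-2.  1.], the unique solution found above
```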

If m=n and the n equations are dependent, i.e., the rows of the matrix are linearly dependent, then it is not possible to find a unique solution for the linear system of equations. For example, consider the system Ax=b defined as:

    \[ \left(\begin{array}{cc}1&1\\-1&-1\end{array}\right)\left(\begin{array}{c}x_1\\x_2\end{array}\right)=\left(\begin{array}{c}1\\2\end{array}\right) \]

The rows of the matrix A are clearly linearly dependent, and the inverse of A cannot be found (which will be discussed later in more detail). The system presents a contradiction and cannot be solved, because the first equation predicts that x_1+x_2=1 while the second equation predicts that x_1+x_2=-2!

As a different example, consider the system Ax=b defined as:

    \[ \left(\begin{array}{cc}1&1\\-1&-1\end{array}\right)\left(\begin{array}{c}x_1\\x_2\end{array}\right)=\left(\begin{array}{c}1\\-1\end{array}\right) \]

The rows of the matrix A are clearly linearly dependent, and the inverse of A cannot be found (which will be discussed later in more detail). The system has an infinite number of possible solutions. Both equations give the relationship x_2=1-x_1, so for any assumed value of x_1 we can find a corresponding value of x_2. For example, if x_1=1, then x_2=0; another possible solution is x_1=0 and x_2=1.
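
Both of these singular cases can be detected numerically: the determinant of A is zero, and np.linalg.solve raises an error rather than returning a solution. A sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, -1.0]])

print(np.linalg.det(A))   # 0.0: the rows are linearly dependent

try:
    np.linalg.solve(A, np.array([1.0, 2.0]))
except np.linalg.LinAlgError as e:
    print("no unique solution:", e)   # reports a singular matrix
```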

We will focus in the remaining part of this section on how to find a solution for a linear system of equations when m=n.

Special Types of Square Matrices

Symmetric Matrix

A symmetric matrix A is a matrix that satisfies A=A^T. For example:

    \[ A=\left(\begin{array}{cc}1&3\\3&2\end{array}\right) \]

is a symmetric matrix.

Identity Matrix

The identity matrix I\in\mathbb{M}^n is the matrix whose diagonal components are equal to 1 while its off-diagonal components are equal to zero. For example, I\in\mathbb{M}^3 has the form:

    \[ I=\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right) \]

This is called the identity matrix because it takes every vector x and returns the same vector. For example, consider the general vector x\in\mathbb{R}^3 with components x_1, x_2, and x_3. The same vector is obtained after the operation Ix:

    \[ Ix=\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=x \]

Diagonal Matrix

A matrix M\in\mathbb{M}^n is called diagonal if all its off-diagonal components are zero. The identity matrix is a diagonal matrix. Similarly, the following matrix is diagonal:

    \[ M=\left(\begin{array}{cccc}5&0&0&0\\0&-1&0&0\\0&0&0&0\\0&0&0&-3\end{array}\right) \]
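
NumPy provides helpers for both of these special types: np.eye constructs an identity matrix, and np.diag constructs a diagonal matrix from its diagonal entries. A sketch:

```python
import numpy as np

I = np.eye(3)                        # 3x3 identity matrix
M = np.diag([5.0, -1.0, 0.0, -3.0])  # the diagonal matrix shown above

x = np.array([1.0, 2.0, 3.0])
print(I @ x)   # -> [1. 2. 3.]: Ix returns x unchanged
print(M)       # off-diagonal entries are all zero
```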

Upper Triangular Matrix

Let M\in\mathbb{M}^n. M is an upper triangular matrix if all the components below the diagonal components are equal to zero. For example, the following matrix is an upper triangular matrix:

    \[ M=\left(\begin{matrix}1&2&3\\0&1&4\\0&0&5\end{matrix}\right) \]

Lower Triangular Matrix

Let M\in\mathbb{M}^n. M is a lower triangular matrix if all the components above the diagonal components are equal to zero. For example, the following matrix is a lower triangular matrix:

    \[ M=\left(\begin{matrix}1&0&0&0\\5&2&0&0\\2&3&1&0\\1&2&0&3\end{matrix}\right) \]

Banded Matrix

A matrix M\in\mathbb{M}^n is called banded if all its components are zero except those on the main diagonal and on one or more diagonals on either side of it. To quantify how banded a matrix is, we define the bandwidth of a matrix: the width of the band of elements outside of which all entries are zero. To define the bandwidth, we first define the upper (right) half-bandwidth and the lower (left) half-bandwidth as follows.

The upper half-bandwidth k_1 is the smallest non-negative integer such that M_{ij}=0 whenever j-i>k_1. This identifies the last diagonal above the main diagonal that contains non-zero elements.

The lower half-bandwidth k_2 is the smallest non-negative integer such that M_{ij}=0 whenever i-j>k_2. This identifies the last diagonal below the main diagonal that contains non-zero elements.

The bandwidth of M can then be defined as k_1 + k_2 + 1.

Note that some texts define the bandwidth differently, e.g., as the maximum of k_1 and k_2.

Examples:

    \[ M=\left(\begin{matrix}1&0&0&0&0\\1&2&3&0&0\\0&3&1&2&0\\0&0&1&2&3\\0&0&0&4&4\end{matrix}\right) \]

M\in\mathbb{M}^5 is banded with a bandwidth of k_1 + k_2 + 1 = 1 + 1 + 1=3.

    \[ M=\left(\begin{matrix}1&1&0&0&0\\1&1&1&0&0\\0&0&1&0&0\\0&0&0&1&1\\0&0&0&1&1\end{matrix}\right) \]

M\in\mathbb{M}^5 is banded with a bandwidth of k_1 + k_2 + 1 = 1 + 1 + 1=3.

The bandwidth of a diagonal matrix is k_1 + k_2 + 1 = 0 + 0 + 1=1.

An upper triangular matrix M\in\mathbb{M}^n is not a banded matrix, but applying the definition of the bandwidth to it gives k_1 + k_2 + 1 = (n-1) + 0 + 1=n.
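
The half-bandwidths can be computed directly from these definitions; the helper below is a sketch (the function name bandwidth is ours, not a library routine):

```python
import numpy as np

def bandwidth(M):
    """Return (k1, k2, k1 + k2 + 1) for a square matrix M.

    k1: upper half-bandwidth, the largest j - i with M[i][j] != 0 (at least 0)
    k2: lower half-bandwidth, the largest i - j with M[i][j] != 0 (at least 0)
    """
    M = np.asarray(M)
    rows, cols = np.nonzero(M)
    k1 = max(int(np.max(cols - rows)), 0)  # furthest nonzero diagonal above
    k2 = max(int(np.max(rows - cols)), 0)  # furthest nonzero diagonal below
    return k1, k2, k1 + k2 + 1

# The first banded example from above
M = [[1, 0, 0, 0, 0],
     [1, 2, 3, 0, 0],
     [0, 3, 1, 2, 0],
     [0, 0, 1, 2, 3],
     [0, 0, 0, 4, 4]]
print(bandwidth(M))   # -> (1, 1, 3)
```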

Matrix Operations

Matrix Addition

Let M_1,M_2\in\mathbb{M}^n. Consider the function f_1(x)=M_1x and the function f_2(x)=M_2x. Then, the function f_3(x)=f_1(x)+f_2(x) is given as:

    \[ f_3(x)=M_1x+M_2x=(M_1+M_2)x=M_3x \]

where M_3=M_1+M_2 is computed by adding each component in M_1 to the corresponding component in M_2. For example, consider the two matrices:

    \[ M_1=\left(\begin{matrix}5&2\\0&1\end{matrix}\right) \qquad M_2=\left(\begin{matrix}-1&0\\2&2\end{matrix}\right) \]

Then, the matrix M_3 can be computed as follows:

    \[ M_3=M_1+M_2=\left(\begin{matrix}4&2\\2&3\end{matrix}\right) \]

Two matrices can only be added if they have the same dimensions.
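
For example, the addition above can be carried out with NumPy, which adds arrays of matching dimensions componentwise:

```python
import numpy as np

M1 = np.array([[5, 2],
               [0, 1]])
M2 = np.array([[-1, 0],
               [2, 2]])

M3 = M1 + M2   # componentwise addition
print(M3)      # -> [[4 2]
               #     [2 3]]
```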

Matrix Multiplication

Consider the matrix M\in\mathbb{B}(\mathbb{R}^m,\mathbb{R}^n), which takes vectors of dimension m and returns vectors of dimension n; the matrix M has dimensions n\times m. Consider also the matrix N\in\mathbb{B}(\mathbb{R}^n,\mathbb{R}^l), which takes vectors of dimension n and returns vectors of dimension l; the matrix N has dimensions l\times n. The combined operation N(M(x)) takes vectors of dimension m and returns vectors of dimension l. The matrix L=NM corresponding to this combined operation is obtained by taking the dot products of the row vectors of N with the column vectors of M. This is only possible when the inner dimensions agree. The product L=NM has dimensions l\times m, so L\in\mathbb{B}(\mathbb{R}^m,\mathbb{R}^l):

    \[ L_{l\times m}=N_{l\times n}M_{n\times m} \]

For example, consider the two matrices:

    \[ N=\left(\begin{array}{cc}1&4\\2&2\\3&5\end{array}\right) \qquad M=\left(\begin{array}{cccc}2&3&1&5\\0&-1&2&4\end{array}\right) \]

We can only perform the operation L=NM; the opposite product MN is not defined, since the inner dimensions do not match. The matrix multiplication L=NM results in:

    \[ L=\left(\begin{array}{cccc} 1\times 2+ 4\times 0 & 1\times 3-4\times 1 & 1\times 1+4\times 2 & 1\times 5 + 4\times 4\\ 2\times 2+ 2\times 0 & 2\times 3-2\times 1 & 2\times 1+2\times 2 & 2\times 5 + 2\times 4\\ 3\times 2+ 5\times 0 & 3\times 3-5\times 1 & 3\times 1+5\times 2 & 3\times 5 + 5\times 4 \end{array}\right)=\left(\begin{array}{cccc}2&-1&9&21\\4&4&6&18\\6&4&13&35\end{array}\right) \]

You can use “.” in Mathematica to multiply matrices, or you can use “MMULT” in Excel.

View Mathematica Code
Nn = {{1, 4}, {2, 2}, {3, 5}};
M = {{2, 3, 1, 5}, {0, -1, 2, 4}};
L = Nn.M;
L // MatrixForm
View Python Code
import numpy as np
Nn = [[1, 4], [2, 2], [3, 5]]
M = [[2, 3, 1, 5], [0, -1, 2, 4]]
print(np.matmul(Nn, M))

It is important to note that, when the dimensions are appropriate, matrix multiplication is associative and distributive, i.e., given three matrices A, B, and C:

    \[ A(BC)=(AB)C \qquad A(B+C)=AB+AC \]

However, matrix multiplication is not necessarily commutative, i.e., in general AB\neq BA. For example, consider the matrices:

    \[ A=\left(\begin{array}{cc}1&2\\0&0\end{array}\right) \qquad B=\left(\begin{array}{cc}1&1\\-1&1\end{array}\right) \]

Then:

    \[ AB=\left(\begin{array}{cc}-1&3\\0&0\end{array}\right) \neq BA=\left(\begin{array}{cc}1&2\\-1&-2\end{array}\right) \]
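
This example can be confirmed numerically:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 0]])
B = np.array([[1, 1],
              [-1, 1]])

print(A @ B)   # -> [[-1  3]
               #     [ 0  0]]
print(B @ A)   # -> [[ 1  2]
               #     [-1 -2]]
print(np.array_equal(A @ B, B @ A))   # -> False: AB != BA
```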

Matrix Determinant

If M\in\mathbb{M}^2 such that:

    \[ M=\left(\begin{matrix}a_1 &a_2\\b_1 & b_2\end{matrix}\right) \]

then \det{M}=a_1b_2-a_2b_1. If M\in\mathbb{M}^3 such that:

    \[ M=\left(\begin{matrix}a_1 &a_2&a_3\\b_1 & b_2&b_3\\c_1&c_2&c_3\end{matrix}\right) \]

then \det{M}=a_1(b_2c_3-b_3c_2)+a_2(b_3c_1-b_1c_3)+a_3(b_1c_2-b_2c_1).

Let M\in\mathbb{M}^n. The matrix determinant can be defined using the recursive relationship:

    \[ \det{M}=\sum_{i=1}^n(-1)^{(i+1)}M_{1i}\det{N_i} \]

where N_i\in\mathbb{M}^{n-1} and is formed by eliminating the 1st row and i^{th} column of the matrix M. It can be shown that \det{M}=0\Leftrightarrow the rows of M are linearly dependent. In other words, \det{M}=0 if and only if the matrix cannot be inverted. The function “Det[]” in Mathematica can be used to calculate the determinant of a matrix. In Excel, you can use the function “MDETERM()” to calculate the determinant of a matrix as well.

The following function in Mathematica computes the determinant of the matrix using the above recursive relationship and then compares the solution with the built-in function “Det[]” in Mathematica.

View Mathematica Code
fdet[M_] := (n = Length[M]; If[n == 2, (M[[1, 1]]*M[[2, 2]] - M[[1, 2]]*M[[2, 1]]),Sum[(-1)^(i + 1) M[[1, i]]*fdet[Drop[M, 1, {i}]], {i, 1, n}]]);
M = {{1, 1, 3, 7}, {4, 4, 5, 0}, {1, 5, 5, 6}, {1, -4, 3, -2}};
fdet[M]
Det[M]
View Python Code
import numpy as np
def fdet(M):
  n = len(M)
  if n == 2:
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]
  else:
    # (-1)**i rather than (-1)**(i + 1): Python indices start at 0,
    # while the summation formula above starts at i = 1
    return sum([(-1)**i*M[0][i]*fdet(np.delete(M[1:],i,1)) for i in range(n)])
M = [[1, 1, 3, 7], [4, 4, 5, 0], [1, 5, 5, 6], [1, -4, 3, -2]]
print("fdet(M):",fdet(M))
print("np.linalg.det(M):",np.linalg.det(M))

You can check the section on the determinant of a matrix for additional information on some of the properties of the determinant of a matrix.

Solving Systems of Linear Equations

Solving systems of linear equations has been a subject of study for many centuries. In almost every engineering discipline and engineering application, systems of linear equations with large numbers of unknowns arise naturally. For example, the stiffness method in structural analysis, as well as the finite element analysis method, seeks the solution of the linear system of equations Ku=f, where K is a symmetric matrix of dimensions n\times n, f\in\mathbb{R}^n is a vector of external forces, and u is the vector of the displacements of the structure nodes. The software used needs to implement a method by which u can be obtained given particular values of f, and an appropriate method for solving a linear system of equations has to be computationally efficient. In the following, we present a few methods, along with their pros and cons, for solving systems in which the number of equations is equal to the number of unknowns. The methods are categorized into “Direct Methods” and “Iterative Methods”.

Lecture video