## Calculus: Vector Calculus

### Fields

Let $\Omega\subset\mathbb{R}^3$ be a set. Scalar-, vector-, or tensor-valued functions defined on $\Omega$ are denoted scalar, vector, or tensor fields, respectively. If $\phi$, $u$, and $T$ are scalar, vector, and second-order tensor fields respectively, then by choosing a coordinate system defined by the orthonormal basis set $B=\{e_1,e_2,e_3\}$, the arguments of the functions can be chosen to be the Cartesian coordinates $x_1$, $x_2$, and $x_3$ such that:

$\phi=\phi(x_1,x_2,x_3), \qquad u=u(x_1,x_2,x_3), \qquad T=T(x_1,x_2,x_3)$

### Field Derivatives

#### Change of Basis

By the results of the change of basis section, if another orthonormal basis set $B'=\{e'_1,e'_2,e'_3\}$ is chosen, where $Q$ is the transformation matrix defined as $Q_{ij}=e'_i\cdot e_j$, then the coordinates $x'_i$ in the new orthonormal basis set are related to the original coordinates $x_j$ using the relationship:

$x'_i=Q_{ij}x_j$

The matrix $Q$ is related to the derivatives of the new coordinates with respect to the original coordinates as follows:

$Q_{ij}=\dfrac{\partial x'_i}{\partial x_j}$

and since $Q^{-1}=Q^T$ we also have:

$Q_{ij}=\dfrac{\partial x_j}{\partial x'_i}$

The above relationships will be useful when calculating the derivatives of other fields.
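These relationships can be checked numerically. The following sketch (the rotation angle and the sample vector are hypothetical choices for illustration) verifies that an orthonormal transformation matrix satisfies $QQ^T=I$ and that transforming components with $Q$ preserves vector lengths, as a change of orthonormal basis must:

```python
import numpy as np

# A rotation about the e3 axis by an arbitrary angle t plays the role of
# the transformation matrix Q with Q_ij = e'_i . e_j (hypothetical choice).
t = 0.3
Q = np.array([[np.cos(t),  np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# Orthonormality of the new basis implies Q Q^T = I, i.e., Q^{-1} = Q^T.
print(np.allclose(Q @ Q.T, np.eye(3)))  # True

# Components transform as x'_i = Q_ij x_j; the length is unchanged.
x = np.array([1.0, 2.0, 3.0])
xp = Q @ x
print(np.isclose(np.linalg.norm(xp), np.linalg.norm(x)))  # True
```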

#### Gradient of a Scalar Field

Let $\phi:\Omega\rightarrow\mathbb{R}$ be a scalar field. $\phi$ is said to be **continuous at** $p\in\Omega$ if $\forall\varepsilon>0$ there exists $\delta>0$ such that:

$\|x-p\|<\delta \Rightarrow |\phi(x)-\phi(p)|<\varepsilon$

Alternatively, $\phi$ is **continuous at** $p$ if we have:

$\lim_{x\rightarrow p}\phi(x)=\phi(p)$

$\phi$ is said to be **continuous** if it is continuous at every point $p\in\Omega$.

$\phi$ is said to be **differentiable** if there exists a vector field, denoted $\nabla\phi$ or $\operatorname{grad}\phi$, such that $\forall p\in\Omega$:

$\lim_{x\rightarrow p}\dfrac{\phi(x)-\phi(p)-\nabla\phi(p)\cdot(x-p)}{\|x-p\|}=0$

The gradient vector field is unique, since if another vector field $v$ satisfies the above equation, then by subtracting the two limits we have, $\forall p\in\Omega$ and for every unit vector $h$:

$(\nabla\phi(p)-v(p))\cdot h=0$

Since $h$ is arbitrary, we have:

$v=\nabla\phi$

By replacing $h$ with the basis vectors $e_1$, $e_2$, and $e_3$, the components of the vector $\nabla\phi$ have the form:

$(\nabla\phi)_i=\dfrac{\partial\phi}{\partial x_i}$

The components of the vector $\nabla\phi$ using another orthonormal basis set $B'$ are related to the components using $B$ by the relationship:

$(\nabla\phi)'_i=\dfrac{\partial\phi}{\partial x'_i}=\dfrac{\partial\phi}{\partial x_j}\dfrac{\partial x_j}{\partial x'_i}=Q_{ij}(\nabla\phi)_j$

Therefore, the transformation of the vector $\nabla\phi$ follows the same rule as the change of basis for vectors in the Euclidean vector space $\mathbb{R}^3$.

The **directional derivative** of the scalar field $\phi$ in the direction of a fixed unit vector $n$ (i.e., $\|n\|=1$) is defined as:

$D_n\phi=\nabla\phi\cdot n$

The maximum value of the directional derivative is attained when $n$ is chosen in the direction of the vector $\nabla\phi$. On the other hand, the directional derivative in the direction of any vector that is normal (perpendicular) to $\nabla\phi$ is obviously zero. In other words, the vector field $\nabla\phi$ points in the direction of the maximum change of the scalar field $\phi$.
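As an illustration (the scalar field, evaluation point, and direction below are hypothetical choices, not taken from the text), the gradient components and a directional derivative can be computed symbolically:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Hypothetical smooth scalar field chosen for illustration.
phi = x1**2 * x2 + sp.sin(x3)

# Gradient components in the orthonormal basis: (grad phi)_i = d(phi)/dx_i.
grad_phi = sp.Matrix([sp.diff(phi, v) for v in (x1, x2, x3)])

# Evaluate the gradient at the point p = (1, 2, 0).
p = {x1: 1, x2: 2, x3: 0}
grad_at_p = grad_phi.subs(p)

# Directional derivative along the unit vector n = e1 is grad(phi) . n.
n = sp.Matrix([1, 0, 0])
D_n = (grad_at_p.T * n)[0, 0]
print(D_n)  # 4

# The maximum directional derivative equals the norm of the gradient.
print(grad_at_p.norm())  # 3*sqrt(2)
```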

#### Partial Derivatives and Differentiability

Note that the existence of the partial derivatives of $\phi$ does not guarantee that $\phi$ is differentiable at every point. Consider the following counterexample:

The function is continuous at every point, including at the origin (why?). In addition, the partial derivatives with respect to $x_1$ and $x_2$ are defined as:

However, $\phi$ is not differentiable at the origin. Consider the direction $n$; the directional derivative along $n$ at that point, if evaluated using the dot product $\nabla\phi\cdot n$, gives zero:

However, the limit along the line gives a nonzero value:

It is worth noting that the partial derivatives are not continuous at the origin: their limits as the origin is approached differ from their values at the origin, and therefore the partial derivatives are not continuous there.
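The specific counterexample function is not reproduced above; a classic function with exactly this behavior, offered here as an assumed stand-in, is $\phi(x_1,x_2)=\sqrt{|x_1x_2|}$. Its partial derivatives at the origin both vanish, so the dot-product formula predicts a zero directional derivative along $n=(1,1)/\sqrt{2}$, yet the limit along that line is $1/\sqrt{2}$:

```python
import numpy as np

# Classic counterexample (an assumption, not necessarily the text's choice):
# f(x, y) = sqrt(|x y|), with f(0, 0) = 0.
def f(x, y):
    return np.sqrt(np.abs(x * y))

# f(h, 0) = f(0, h) = 0 for every h, so both partial derivatives at the
# origin exist and equal zero.
h = 1e-8
fx0 = (f(h, 0.0) - f(0.0, 0.0)) / h   # 0.0
fy0 = (f(0.0, h) - f(0.0, 0.0)) / h   # 0.0

# The dot-product formula would give grad(f) . n = 0 for any n, but the
# limit of f(t n)/t along n = (1, 1)/sqrt(2) is 1/sqrt(2) != 0.
t = 1e-8
n = np.array([1.0, 1.0]) / np.sqrt(2.0)
slope = f(t * n[0], t * n[1]) / t
print(fx0, fy0, slope)  # 0.0 0.0 0.707...
```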

However, there is a theorem from vector calculus which states that $\phi$ is differentiable if its partial derivatives exist and are continuous; continuity of the partial derivatives is sufficient, though not necessary, for differentiability. The proof of this theorem can be found in multivariate calculus textbooks.

#### Gradient of a Vector Field

Let $u:\Omega\rightarrow\mathbb{R}^3$ be a vector field. $u$ is said to be **continuous at** $p\in\Omega$ if $\forall\varepsilon>0$ there exists $\delta>0$ such that:

$\|x-p\|<\delta \Rightarrow \|u(x)-u(p)\|<\varepsilon$

Alternatively, $u$ is **continuous at** $p$ if we have:

$\lim_{x\rightarrow p}u(x)=u(p)$

Alternatively, $u$ is **continuous at** $p$ if every component $u_i$ of the vector field is a continuous scalar function.

$u$ is said to be **continuous** if it is continuous at every point.

$u$ is said to be **differentiable** if there exists a tensor field, denoted $\nabla u$ or $\operatorname{grad}u$, such that $\forall p\in\Omega$:

$\lim_{x\rightarrow p}\dfrac{\|u(x)-u(p)-\nabla u(p)(x-p)\|}{\|x-p\|}=0$

The gradient tensor field is unique, since if another tensor field $S$ satisfies the above equation, then by subtracting the two limits we have, $\forall p\in\Omega$ and for every unit vector $h$:

$(\nabla u(p)-S(p))h=0$

Since $h$ is arbitrary, we have:

$S=\nabla u$

By replacing $h$ with the basis vectors $e_1$, $e_2$, and $e_3$, the components of the tensor $\nabla u$ have the form:

$(\nabla u)_{ij}=\dfrac{\partial u_i}{\partial x_j}$

The components of the tensor $\nabla u$ using another orthonormal basis set $B'$ are related to the components using $B$ by the relationship:

$(\nabla u)'_{ij}=\dfrac{\partial u'_i}{\partial x'_j}=Q_{ik}Q_{jl}\dfrac{\partial u_k}{\partial x_l}$

Therefore,

$(\nabla u)'=Q(\nabla u)Q^T$

I.e., the transformation of the tensor $\nabla u$ follows the same rule as the change of basis for tensors in the Euclidean vector space $\mathbb{R}^3$.

Similar to scalar fields, the existence of the partial derivatives of a vector field does not guarantee differentiability except when the partial derivatives are continuous.

#### Divergence of a Vector Field

Let $u$ be a differentiable vector field. Then, the divergence of $u$, denoted by $\operatorname{div}u$ or $\nabla\cdot u$, is a scalar field defined as:

$\operatorname{div}u=\operatorname{trace}(\nabla u)$

Using the properties of the trace function, the divergence operator can be written in terms of the components of $u$ as follows:

$\operatorname{div}u=\dfrac{\partial u_i}{\partial x_i}=\dfrac{\partial u_1}{\partial x_1}+\dfrac{\partial u_2}{\partial x_2}+\dfrac{\partial u_3}{\partial x_3}$
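As a quick symbolic check of this formula (the vector field below is a hypothetical choice), the trace of the gradient tensor reproduces the sum of the diagonal partial derivatives:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Hypothetical vector field chosen for illustration.
u = sp.Matrix([x1**2, x1 * x2, x3])

# Gradient tensor: (grad u)_ij = d(u_i)/dx_j.
grad_u = u.jacobian([x1, x2, x3])

# div u = trace(grad u) = du1/dx1 + du2/dx2 + du3/dx3.
div_u = grad_u.trace()
print(sp.simplify(div_u))  # 3*x1 + 1
```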

#### Gradient of a Tensor Field

In order to assess the continuity and differentiability of tensors, we first have to define what we mean by the size or norm of a tensor. If $T$ is a second-order tensor, then we can define the following norm function on such tensors:

$\|T\|=\sup_{\|x\|=1}\|Tx\|$

In other words, the norm of $T$ is the supremum of the norms of the vectors $Tx$ where $x$ is a unit vector. One should check that this definition satisfies the required properties of a norm function as shown in the definitions of linear vector spaces.

Let $T$ be a tensor field. $T$ is said to be **continuous at** $p\in\Omega$ if $\forall\varepsilon>0$ there exists $\delta>0$ such that:

$\|x-p\|<\delta \Rightarrow \|T(x)-T(p)\|<\varepsilon$

Alternatively, $T$ is **continuous at** $p$ if we have:

$\lim_{x\rightarrow p}T(x)=T(p)$

Alternatively, $T$ is **continuous at** $p$ if every component $T_{ij}$ of the tensor field is a continuous scalar function.

$T$ is said to be **continuous** if it is continuous at every point.

$T$ is said to be **differentiable** if there exists a third-order tensor field, denoted $\nabla T$ or $\operatorname{grad}T$, such that $\forall p\in\Omega$:

$\lim_{x\rightarrow p}\dfrac{\|T(x)-T(p)-\nabla T(p)(x-p)\|}{\|x-p\|}=0$

The gradient third-order tensor field is unique, since if another third-order tensor field $S$ satisfies the above equation, then by subtracting the two limits we have, $\forall p\in\Omega$ and for every unit vector $h$:

$(\nabla T(p)-S(p))h=0$

Since $h$ is arbitrary, we have:

$S=\nabla T$

By replacing $h$ with the basis vectors $e_1$, $e_2$, and $e_3$, the components of the tensor $\nabla T$ have the form:

$(\nabla T)_{ijk}=\dfrac{\partial T_{ij}}{\partial x_k}$

The components of the tensor $\nabla T$ using another orthonormal basis set $B'$ are related to the components using $B$ by the relationship:

$(\nabla T)'_{ijk}=Q_{il}Q_{jm}Q_{kn}(\nabla T)_{lmn}$

I.e., the transformation of the tensor $\nabla T$ extends the rule for the change of basis for second-order tensors in the Euclidean vector space $\mathbb{R}^3$ to third-order tensors.

Similar to scalar fields, the existence of the partial derivatives of a tensor field does not guarantee differentiability except when the partial derivatives are continuous.

#### Divergence of a Tensor Field

Let $T$ be a differentiable tensor field. Then, the divergence of $T$, denoted by $\operatorname{div}T$, is a vector field defined such that for every constant vector $a$:

$(\operatorname{div}T)\cdot a=\operatorname{div}(T^Ta)$

By replacing $a$ with the basis vectors $e_1$, $e_2$, and $e_3$, the components of the vector $\operatorname{div}T$ have the form:

$(\operatorname{div}T)\cdot e_i=\dfrac{\partial T_{i1}}{\partial x_1}+\dfrac{\partial T_{i2}}{\partial x_2}+\dfrac{\partial T_{i3}}{\partial x_3}$

Or in compact form:

$(\operatorname{div}T)_i=\dfrac{\partial T_{ij}}{\partial x_j}$

It should be noted that in some texts, the divergence operator is defined with the opposite index convention, $(\operatorname{div}T)_i=\dfrac{\partial T_{ji}}{\partial x_j}$. The choice between the two definitions is a matter of convention.
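The component formula can be checked symbolically. In the sketch below the tensor field is a hypothetical choice, and the first convention, $(\operatorname{div}T)_i=\partial T_{ij}/\partial x_j$, is used, so each row of $T$ is treated as a vector field and its ordinary divergence is taken:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

# Hypothetical second-order tensor field chosen for illustration.
T = sp.Matrix([[x1 * x2, x2,     0],
               [0,       x1**2,  x3],
               [x3,      0,      x1 * x3]])

# (div T)_i = sum over j of d(T_ij)/dx_j: the divergence of each row of T.
div_T = sp.Matrix([sum(sp.diff(T[i, j], X[j]) for j in range(3))
                   for i in range(3)])
print(div_T.T)  # Matrix([[x2 + 1, 1, x1]])
```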

#### Curl of a Vector Field

Let $u$ be a smooth vector field. The curl of $u$ at a point, denoted by $\operatorname{curl}u$ or $\nabla\times u$, is defined as the vector such that for every constant vector $a$:

$(\operatorname{curl}u)\cdot a=\operatorname{div}(u\times a)$

If $u$ has the components $u_1$, $u_2$, and $u_3$ in an orthonormal basis set, then, by setting $a=e_1$, $a=e_2$, and $a=e_3$ in the above formula, the components of $\operatorname{curl}u$ are shown to have the following form:

$\operatorname{curl}u=\left(\dfrac{\partial u_3}{\partial x_2}-\dfrac{\partial u_2}{\partial x_3}\right)e_1+\left(\dfrac{\partial u_1}{\partial x_3}-\dfrac{\partial u_3}{\partial x_1}\right)e_2+\left(\dfrac{\partial u_2}{\partial x_1}-\dfrac{\partial u_1}{\partial x_2}\right)e_3$

or in a compact form:

$(\operatorname{curl}u)_i=\varepsilon_{ijk}\dfrac{\partial u_k}{\partial x_j}$
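As an illustration (the vector field is a hypothetical choice), the component formula recovers the familiar result that the rigid-rotation field $u=(-x_2,\,x_1,\,0)$ has a constant curl of magnitude 2 along $e_3$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Hypothetical rigid-rotation field about the e3 axis.
u = sp.Matrix([-x2, x1, 0])

# Components from (curl u)_i = epsilon_ijk d(u_k)/dx_j.
curl_u = sp.Matrix([
    sp.diff(u[2], x2) - sp.diff(u[1], x3),
    sp.diff(u[0], x3) - sp.diff(u[2], x1),
    sp.diff(u[1], x1) - sp.diff(u[0], x2),
])
print(curl_u.T)  # Matrix([[0, 0, 2]])
```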

#### Laplacian of a Scalar Field

Let $\phi$ be a twice differentiable scalar field. The Laplacian of $\phi$, denoted by $\nabla^2\phi$ (or $\Delta\phi$), at any point is defined as the divergence of the gradient of $\phi$:

$\nabla^2\phi=\operatorname{div}(\nabla\phi)=\dfrac{\partial^2\phi}{\partial x_1^2}+\dfrac{\partial^2\phi}{\partial x_2^2}+\dfrac{\partial^2\phi}{\partial x_3^2}$
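A short symbolic check (the field is a hypothetical choice): computing the Laplacian as the divergence of the gradient shows that $\phi=x_1^2+x_2^2-2x_3^2$ is harmonic:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Hypothetical scalar field chosen for illustration.
phi = x1**2 + x2**2 - 2 * x3**2

# Laplacian = div(grad phi) = sum of the second partial derivatives.
grad_phi = [sp.diff(phi, v) for v in (x1, x2, x3)]
lap_phi = sum(sp.diff(g, v) for g, v in zip(grad_phi, (x1, x2, x3)))
print(lap_phi)  # 0, i.e., phi is harmonic
```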

### The Divergence Theorem

The divergence theorem, which is also known as Gauss' theorem or Green's theorem, is a useful tool that relates volume integrals of gradients of functions to surface integrals of those functions. In physical terms, the divergence theorem states that the change of a continuously differentiable quantity inside a closed volume is due to the flux of this quantity entering or exiting through the boundary. The continuity ensures that there are no sinks or sources inside the volume. The mathematical theorem is as follows:

#### Statement

Suppose $\Omega\subset\mathbb{R}^3$ is a compact set with a piecewise smooth boundary $\partial\Omega$ and $n$ is the vector field defining the outward unit normal to the boundary $\partial\Omega$. Let $\phi$ be a continuous and differentiable scalar field and let $x_1$, $x_2$, and $x_3$ define the coordinates of a point $x\in\Omega$. Then, the divergence theorem states:

$\forall i:\quad \int_\Omega \dfrac{\partial\phi}{\partial x_i}\,dV=\int_{\partial\Omega}\phi\,n_i\,dS$

where $dV$ and $dS$ are volume and surface elements, respectively. The proof of the divergence theorem is straightforward but technical; it can be found in many vector calculus textbooks.
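The theorem can be verified symbolically on a simple geometry. The sketch below (the vector field and the unit cube are hypothetical choices) checks the vector-field form of the theorem, $\int_\Omega \operatorname{div}u\,dV=\int_{\partial\Omega}u\cdot n\,dS$, by computing both sides exactly:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Hypothetical vector field on the unit cube [0, 1]^3.
u = sp.Matrix([x * y, y * z, z**2])

# Left-hand side: volume integral of div u over the cube.
div_u = sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z)
vol = sp.integrate(div_u, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Right-hand side: flux of u through the six faces, with outward normals
# -+e1 on x = 0, 1; -+e2 on y = 0, 1; -+e3 on z = 0, 1.
flux = (
    sp.integrate(u[0].subs(x, 1) - u[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
    + sp.integrate(u[1].subs(y, 1) - u[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
    + sp.integrate(u[2].subs(z, 1) - u[2].subs(z, 0), (x, 0, 1), (y, 0, 1))
)
print(vol, flux)  # 2 2
```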

#### Variations

The statement of the divergence theorem above can be used to show the following consequences or variations of the divergence theorem. Assuming the same variables as in the statement above, and given continuous and differentiable vector and tensor fields $u$ and $T$, respectively, then:

#### Useful Formulas

The following are useful formulas relating the various definitions above:

Note that the definitions and the results in this section can be naturally extended to $\mathbb{R}^n$ for an arbitrary $n\in\mathbb{N}$.

### Comma Notation

To simplify the expressions in vector calculus, comma notation can sometimes be used to replace the expression for partial differentiation. Consider a scalar field $\phi$. Then, the partial derivative of $\phi$ with respect to any of the coordinates $x_1$, $x_2$, and $x_3$ can be written as:

$\phi_{,i}=\dfrac{\partial\phi}{\partial x_i}$

If $u$ and $T$ are vector and tensor fields, respectively, then the following simplified comma-notation expressions can be used (where the Einstein summation convention is also used for repeated indices):

$u_{i,j}=\dfrac{\partial u_i}{\partial x_j}, \qquad u_{i,i}=\operatorname{div}u, \qquad T_{ij,k}=\dfrac{\partial T_{ij}}{\partial x_k}, \qquad T_{ij,j}=(\operatorname{div}T)_i$

### Examples and Problems

#### Example 1

Consider the set $\Omega=\{(x,y)\mid x,y\in[-4,4]\}$ and the scalar field $\phi:\Omega\rightarrow\mathbb{R}$ defined as:

$\phi=5x^2-7x(y+3)-5y^2$

The gradient of $\phi$ can be calculated as:

$\nabla\phi=\left(10x-7(y+3)\right)e_1+\left(-7x-10y\right)e_2$

The following tool shows the surface plot and the contour plot of $\phi$. The vector plot of $\nabla\phi$ is also drawn and then superimposed on top of the contour plot. Notice how the arrows are perpendicular to the contour lines! The maximum directional derivative of $\phi$ at the point $(0,0)$ is calculated to be along $n_1=\nabla\phi/\|\nabla\phi\|$, and the minimum directional derivative, which is equal to zero, is along $n_2$, perpendicular to $n_1$. The maximum directional derivative is in the direction perpendicular to the contour line, while the minimum directional derivative is in the direction along (parallel to) the contour line. The tool also lets you choose a point and the angle of the direction of the vector $n$, and it calculates the directional derivative at that point.

(interactive activity placeholder) VectorCalculus/phi1.jsp

View Mathematica Code

```mathematica
x0 = 0; y0 = 0;
Clear[x, y];
fi = 5 x^2 - 7 x (y + 3) - 5 y^2;
gradfi = {D[fi, x], D[fi, y]};
gradfic = gradfi /. {x -> x0, y -> y0};
p0 = {x0, y0};
n1 = gradfic/Norm[gradfic];
n2 = {n1[[2]], -n1[[1]]};
Ar1 = Graphics[Arrow[{p0, p0 + n1}]];
Ar2 = Graphics[Arrow[{p0, p0 + n2}]];
Artext1 = Graphics[Text["n1", p0 + n1 + 0.2 n1]];
Artext2 = Graphics[Text["n2", p0 + n2 + 0.2 n2]];
a = Plot3D[fi, {x, -4, 4}, {y, -4, 4}];
b = ContourPlot[fi, {x, -4, 4}, {y, -4, 4}, ContourLabels -> True];
c = VectorPlot[gradfi, {x, -4, 4}, {y, -4, 4}];
Grid[{{"3 D Plot", "Contour Plot", "Gradient of \[Phi] plot"},
  {Show[a], Show[b], Show[c]},
  {"Contour Plot superimposed over Gradient of \[Phi] plot", SpanFromLeft},
  {Show[b, c, ImageSize -> "Medium"], SpanFromLeft},
  {"Directional Derivative at {0, 0} in the direction of n1 and n2", SpanFromLeft},
  {Show[b, Ar1, Artext1, Ar2, Artext2, ImageSize -> "Medium"], SpanFromLeft},
  {Grid[{{"\[Del]\[Phi] at (0, 0) =", gradfic // MatrixForm}}, Spacings -> 0], SpanFromLeft},
  {Grid[{{"n1=", N[n1] // MatrixForm, ",\[Del]\[Phi]\[CenterDot]n1 =", gradfic . n1}}, Spacings -> 0], SpanFromLeft},
  {Grid[{{"n2 = ", N[n2] // MatrixForm, ", \[Del]\[Phi]\[CenterDot]n2 =", gradfic . n2}}, Spacings -> 0], SpanFromLeft}}]
```

View Python Code

```python
from mpl_toolkits.mplot3d import Axes3D
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import sympy as sp
from sympy import diff, Matrix, lambdify
%matplotlib notebook

x, y = sp.symbols("x y")
f = 5*x**2 - 7*x*(y + 3) - 5*y**2
fx = diff(f, x)
fy = diff(f, y)
v = Matrix([fx, fy])
# numpy equivalent F function
F = lambdify((x, y), f)
Fx = lambdify((x, y), fx)
Fy = lambdify((x, y), fy)
# ranges for plotting
xrange = np.arange(-4, 4, .5)
yrange = np.arange(-4, 4, .5)
X, Y = np.meshgrid(xrange, yrange)
Z = F(X, Y)
DFx = Fx(X, Y)
DFy = Fy(X, Y)
# 3D plot
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z)
plt.title('3D Plot')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
# Contour plot
fig = plt.figure()
ax = fig.add_subplot(111)
cp = ax.contourf(X, Y, Z)
fig.colorbar(cp)
plt.title('Contour Plot')
ax.set_xlabel('X')
ax.set_ylabel('Y')
# Gradient plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.quiver(X, Y, DFx, DFy)
plt.title('Gradient of \u03A6 Plot')
ax.set_xlabel('X')
ax.set_ylabel('Y')
# Contour Plot superimposed over Gradient
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contourf(X, Y, Z)
ax.quiver(X, Y, DFx, DFy)
plt.title('Contour Plot superimposed over Gradient of \u03A6 plot')
ax.set_xlabel('X')
ax.set_ylabel('Y')
# Directional Derivative at {0,0}
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contourf(X, Y, Z)
x_0 = np.where(xrange == 0)[0][0]
y_0 = np.where(yrange == 0)[0][0]
phi = np.array([[DFx.item(x_0, y_0)], [DFy.item(x_0, y_0)]])
ax.quiver(X[x_0, y_0], Y[x_0, y_0], DFx.item(x_0, y_0), DFy.item(x_0, y_0))
plt.title('Directional Derivative at {0,0}')
ax.set_xlabel('X')
ax.set_ylabel('Y')
# outputs
print("Delta \u03A6 at (0,0) =\n", phi)
```


### Problems

- Let $\phi$ be a scalar field defined over a set $\Omega$. Evaluate and indicate the order (whether it is a scalar, vector, or tensor) of the gradient of $\phi$, the Laplacian of $\phi$, the gradient of the gradient of $\phi$, and the divergence of the gradient of $\phi$. Use Mathematica to visualize $\phi$ and $\nabla\phi$. Why is the gradient of the gradient of $\phi$ always symmetric, independent of the choice of the smooth function $\phi$?
- Use the definitions in this section to show the equalities in Useful Formulas above.
- Use the divergence theorem defined above to show the last three equalities shown in the Variations of the divergence theorem shown above.