Multivariable calculus

Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one.

Multivariable calculus may be thought of as an elementary part of advanced calculus. For advanced calculus, see calculus on Euclidean space. The special case of calculus in three-dimensional space is often called vector calculus.

Introduction
In single-variable calculus, operations like differentiation and integration are applied to functions of a single variable. In multivariate calculus, these operations must be generalized to functions of multiple variables, whose domain is therefore multi-dimensional. Care is required in these generalizations, because of two key differences between 1D and higher-dimensional spaces:
 * 1) There are infinitely many ways to approach a single point in higher dimensions, as opposed to two (from the positive and the negative direction) in 1D;
 * 2) There are multiple types of extended objects associated with the dimension; for example, the graph of a function of one variable is a curve in the 2D Cartesian plane, while the graph of a function of two variables is a surface in 3D, and curves can also live in 3D space.

The consequence of the first difference is a change in the definition of the limit and of differentiation: directional limits and derivatives define the limit and the differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators.

The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined.

Limits
A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions.

A limit along a path may be defined by considering a parametrised path $$s(t): \mathbb{R} \to \mathbb{R}^n$$ in n-dimensional Euclidean space. Any function $$f(\overrightarrow{x}): \mathbb{R}^n \to \mathbb{R}^m$$ can then be projected on the path as a 1D function $$f(s(t))$$. The limit of $$f$$ to the point $$s(t_0)$$ along the path $$s(t)$$ can hence be defined as
 * $$\lim_{t \to t_0} f(s(t)).$$

Note that the value of this limit can depend on the form of $$s(t)$$, i.e. on the path chosen, not just on the point being approached. For example, consider the function


 * $$f(x,y) = \frac{x^2y}{x^4+y^2}.$$

If the point $$(0,0)$$ is approached through the line $$y=kx$$, or in parametric form:
 * $$x(t)=t,\quad y(t)=kt,$$

then the limit along the path will be:
 * $$\lim_{t \to 0} f(t, kt) = \lim_{t \to 0} \frac{kt^3}{t^4 + k^2t^2} = \lim_{t \to 0} \frac{kt}{t^2 + k^2} = 0.$$

On the other hand, if the path $$y=\pm x^2$$ (or parametrically, $$x(t)=t,\, y(t)=\pm t^2$$) is chosen, then the limit becomes:
 * $$\lim_{t \to 0} f(t, \pm t^2) = \lim_{t \to 0} \frac{\pm t^4}{t^4 + t^4} = \pm\frac{1}{2}.$$

Since taking different paths towards the same point yields different values, a general limit at the point $$(0,0)$$ cannot be defined for the function.
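The path dependence above can be checked numerically. The following Python sketch evaluates the function from the example along the lines $$y=kx$$ and the parabolas $$y=\pm x^2$$ as the parameter tends to zero; the printed labels are illustrative only.

```python
# Numerical check of path-dependent limits for f(x, y) = x^2 y / (x^4 + y^2).

def f(x, y):
    return x**2 * y / (x**4 + y**2)

# Along the lines y = kx, parametrised as (t, k*t), the values shrink to 0.
for k in (1.0, 2.0, -3.0):
    print(f"y = {k}x:", [f(t, k * t) for t in (1e-2, 1e-4, 1e-6)])

# Along the parabolas y = ±x^2, parametrised as (t, ±t^2),
# the values are identically ±1/2.
for sign in (1.0, -1.0):
    print(f"y = {sign:+}x^2:", [f(t, sign * t**2) for t in (1e-2, 1e-4, 1e-6)])
```

Since the two families of paths give different limiting values (0 versus $$\pm\frac{1}{2}$$), no single limit exists at the origin.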

A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function $$f: \mathbb{R}^n \to \mathbb{R}^m$$ that the limit of $$f$$ to some point $$x_0 \in \mathbb{R}^n$$ is L, if and only if
 * $$\lim_{t \to t_0} f(s(t)) = L$$

for all continuous functions $$s(t): \mathbb{R} \to \mathbb{R}^n$$ such that $$s(t_0)=x_0$$.
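By contrast, when the path limits do agree, sampling many paths returns the same value. A small sketch using $$g(x,y) = x^2y/(x^2+y^2)$$ as an assumed example (not from the text above); it satisfies $$|g(x,y)| \leq |y|$$, so its limit at the origin is 0 along every path:

```python
# g has a genuine (path-independent) limit of 0 at the origin,
# since |g(x, y)| <= |y|.
import random

def g(x, y):
    return x**2 * y / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

random.seed(0)
for _ in range(5):
    # A random straight-line path s(t) = (a*t, b*t) approaching the origin.
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    print([g(a * t, b * t) for t in (1e-1, 1e-3, 1e-5)])  # each shrinks to 0
```

Sampling paths can only suggest, not prove, that a general limit exists; here the bound $$|g(x,y)| \leq |y|$$ is what proves it.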

Continuity
From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function $$f: \mathbb{R}^n \to \mathbb{R}^m$$ that $$f$$ is continuous at the point $$x_0$$, if and only if
 * $$\lim_{t \to t_0} f(s(t)) = f(x_0)$$

for all continuous functions $$s(t): \mathbb{R} \to \mathbb{R}^n$$ such that $$s(t_0)=x_0$$.

As with limits, being continuous along one path $$s(t)$$ does not imply multivariate continuity.

That continuity in each argument is not sufficient for multivariate continuity can be seen from the following example. For a real-valued function $$f: \mathbb{R}^2 \to \mathbb{R}$$ with two real-valued parameters, $$f(x,y)$$, continuity of $$f$$ in $$x$$ for fixed $$y$$ and continuity of $$f$$ in $$y$$ for fixed $$x$$ do not imply continuity of $$f$$.

Consider

 * $$f(x,y)= \begin{cases} \frac{y}{x}-y & \text{if}\quad 0 \leq y < x \leq 1 \\ \frac{x}{y}-x & \text{if}\quad 0 \leq x < y \leq 1 \\ 1-x & \text{if}\quad 0 < x=y \leq 1 \\ 0 & \text{everywhere else}. \end{cases}$$

It is easy to verify that this function is zero by definition on the boundary and outside of the square $$[0,1]\times [0,1]$$. Furthermore, the functions defined for constant $$y=a$$ and constant $$x=a$$, with $$0 \le a \le 1$$, by
 * $$g_a(x) = f(x,a)\quad$$ and $$\quad h_a(y) = f(a,y)\quad$$

are continuous. Specifically,
 * $$g_0(x) = f(x,0) = 0\quad$$ and $$\quad h_0(y) = f(0,y) = 0$$ for all $$x$$ and $$y$$. Therefore, $$f(0,0)=0$$ and, moreover, along the coordinate axes, $$\lim_{x \to 0} f(x,0) = 0$$ and $$\lim_{y \to 0} f(0,y) = 0$$. The function is therefore continuous in each individual argument.

However, consider the parametric path $$x(t) = t,\, y(t) = t$$. The parametric function becomes
 * $$f(t,t) = \begin{cases} 1-t & \text{if}\quad 0 < t \leq 1 \\ 0 & \text{if}\quad t=0. \end{cases}$$

Therefore,
 * $$\lim_{t \to 0^+} f(t,t) = 1 \neq 0 = f(0,0).$$

It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates.
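The counterexample can also be reproduced numerically. In the sketch below the diagonal case is restricted to the unit square, so that the function vanishes outside it as claimed:

```python
# The piecewise function above: continuous in each argument separately
# at (0, 0), but not jointly continuous there.

def f(x, y):
    if 0 <= y < x <= 1:
        return y / x - y
    if 0 <= x < y <= 1:
        return x / y - x
    if 0 < x == y <= 1:   # diagonal case, restricted to the unit square
        return 1 - x
    return 0.0

# Along the coordinate axes the values tend to f(0, 0) = 0 ...
print([f(t, 0.0) for t in (0.1, 0.01, 0.001)])
# ... but along the diagonal x = y = t they tend to 1.
print([f(t, t) for t in (0.1, 0.01, 0.001)])
```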

Theorems regarding multivariate limits and continuity

 * All properties of linearity and superposition from single-variable calculus carry over to multivariate calculus.
 * Composition: If $$f: \mathbb{R}^n \to \mathbb{R}^m$$ and $$g: \mathbb{R}^m \to \mathbb{R}^p$$ are both multivariate continuous functions at the points $$x_0 \in \mathbb{R}^n$$ and $$f(x_0) \in \mathbb{R}^m$$ respectively, then $$g \circ f: \mathbb{R}^n \to \mathbb{R}^p$$ is also a multivariate continuous function at the point $$x_0$$.
 * Multiplication: If $$f: \mathbb{R}^n \to \mathbb{R}$$ and $$g: \mathbb{R}^n \to \mathbb{R}$$ are both continuous functions at the point $$x_0 \in \mathbb{R}^n$$, then $$fg: \mathbb{R}^n \to \mathbb{R}$$ is continuous at $$x_0$$, and $$f/g : \mathbb{R}^n \to \mathbb{R}$$ is also continuous at $$x_0$$ provided that $$g(x_0) \neq 0$$.
 * If $$f: \mathbb{R}^n \to \mathbb{R}$$ is a continuous function at point $$x_0 \in \mathbb{R}^n$$, then $$|f|$$ is also continuous at the same point.
 * If $$f: \mathbb{R}^n \to \mathbb{R}^m$$ is Lipschitz continuous (with the appropriate normed spaces as needed) in the neighbourhood of the point $$x_0 \in \mathbb{R}^n$$, then $$f$$ is multivariate continuous at $$x_0$$.

From the Lipschitz continuity condition for $$f$$ we have
 * $$|f(x)-f(y)| \leq K|x-y|,$$

where $$K$$ is the Lipschitz constant. Note also that, as $$s(t)$$ is continuous at $$t_0$$, for every $$\delta > 0$$ there exists an $$\epsilon > 0$$ such that $$|s(t)-s(t_0)| < \delta$$ for all $$t$$ with $$|t-t_0| < \epsilon$$.

Hence, for every $$\alpha > 0$$, choose $$\delta = \frac{\alpha}{K}$$; there exists an $$\epsilon > 0$$ such that for all $$t$$ satisfying $$|t-t_0| < \epsilon$$, $$|s(t)-s(t_0)| < \delta$$, and $$|f(s(t)) - f(s(t_0))| \leq K|s(t)-s(t_0)| < K\delta = \alpha$$. Hence $$\lim_{t \to t_0} f(s(t))$$ converges to $$f(s(t_0))$$ regardless of the precise form of $$s(t)$$.

Directional derivative
The derivative of a single-variable function is defined as
 * $$\frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}.$$

Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function $$f: \mathbb{R}^n \to \mathbb{R}$$ along some path $$s(t): \mathbb{R} \to \mathbb{R}^n$$:
 * $$\left.\frac{d}{dt} f(s(t)) \right|_{t=t_0} = \lim_{h \to 0} \frac{f(s(t_0+h)) - f(s(t_0))}{h}.$$

Unlike limits, for which the value depends on the exact form of the path $$s(t)$$, it can be shown that the derivative along the path depends only on the tangent vector of the path at $$s(t_0)$$, i.e. $$s'(t_0)$$, provided that $$f$$ is Lipschitz continuous at $$s(t_0)$$ and that the limit exists for at least one such path.

For $$s(t)$$ continuous up to the first derivative (this statement is well defined as $$s$$ is a function of one variable), we can write the Taylor expansion of $$s$$ around $$t_0$$ using Taylor's theorem to construct the remainder:
 * $$s(t) = s(t_0) + s'(\tau)\,(t-t_0),$$

where $$\tau \in [t_0,t]$$.

Substituting this into the derivative along the path,
 * $$\lim_{h \to 0} \frac{f(s(t_0+h)) - f(s(t_0))}{h} = \lim_{h \to 0} \frac{f\big(s(t_0) + s'(\tau(h))\,h\big) - f(s(t_0))}{h},$$

where $$\tau(h) \in [t_0,t_0+h]$$.

Lipschitz continuity gives us $$|f(x)-f(y)| \leq K|x-y|$$ for some finite $$K$$, $$\forall x,y\in \mathbb{R}^n$$. It follows that $$|f(x+O(h))-f(x)| \sim O(h)$$.

Note also that given the continuity of $$s'(t)$$, $$s'(\tau) = s'(t_0)+O(h)$$ as $$ h \to 0$$.

Substituting these two conditions into the limit above,
 * $$\lim_{h \to 0} \frac{f\big(s(t_0) + s'(t_0)\,h + O(h^2)\big) - f(s(t_0))}{h},$$

whose limit depends only on $$s'(t_0)$$ as the dominant term.

It is therefore possible to generate the definition of the directional derivative as follows: The directional derivative of a scalar-valued function $$f:\mathbb{R}^n \to \mathbb{R}$$ along the unit vector $$\hat{\bold{u}}$$ at some point $$x_0 \in \mathbb{R}^n$$ is
 * $$\nabla_{\hat{\bold{u}}} f(x_0) = \lim_{h \to 0} \frac{f(x_0 + h\hat{\bold{u}}) - f(x_0)}{h},$$

or, when expressed in terms of ordinary differentiation,
 * $$\nabla_{\hat{\bold{u}}} f(x_0) = \left.\frac{d}{dt} f(x_0 + t\hat{\bold{u}}) \right|_{t=0},$$

which is a well-defined expression because $$f(x_0+\hat{\bold{u}}t)$$ is a scalar function of the single variable $$t$$.

It is not possible to define a unique scalar derivative without a direction; it is clear for example that $$\nabla_{\hat{\bold{u}}}f(x_0) = - \nabla_{-\hat{\bold{u}}}f(x_0)$$. It is also possible for directional derivatives to exist for some directions but not for others.
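A finite-difference sketch makes the directional derivative, and its sign reversal under $$\hat{\bold{u}} \mapsto -\hat{\bold{u}}$$, concrete. The function $$f(x,y) = x^2 + 3xy$$ is an assumed example:

```python
import math

# Example function (assumed for this sketch): f(x, y) = x^2 + 3xy.
def f(p):
    x, y = p
    return x**2 + 3 * x * y

def directional_derivative(f, p, u, h=1e-6):
    """Symmetric-difference estimate of the derivative of f at p along u."""
    norm = math.hypot(u[0], u[1])
    u = (u[0] / norm, u[1] / norm)          # normalise to a unit vector
    fwd = f((p[0] + h * u[0], p[1] + h * u[1]))
    bwd = f((p[0] - h * u[0], p[1] - h * u[1]))
    return (fwd - bwd) / (2 * h)

p = (1.0, 2.0)
# Along +x the estimate approximates df/dx = 2x + 3y = 8 at (1, 2);
# along -x it is the negative of that value.
print(directional_derivative(f, p, (1.0, 0.0)))
print(directional_derivative(f, p, (-1.0, 0.0)))
```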

Partial derivative
The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.

A partial derivative may be thought of as the directional derivative of the function along a coordinate axis.

Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ($$\nabla$$) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which varies from point to point in the domain of the function.
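As a concrete sketch, the partial derivatives of a map $$F:\mathbb{R}^2 \to \mathbb{R}^2$$ can be estimated by central differences along the coordinate axes and assembled into a Jacobian matrix. The map $$F(x,y) = (xy,\, x+y^2)$$ is an assumed example:

```python
# Assemble a 2x2 Jacobian from finite-difference partial derivatives.

def F(x, y):
    # Assumed example map: F(x, y) = (x*y, x + y^2).
    return (x * y, x + y**2)

def jacobian(F, x, y, h=1e-6):
    """Central-difference estimate of the 2x2 Jacobian of F at (x, y)."""
    fxp, fxm = F(x + h, y), F(x - h, y)   # perturb x
    fyp, fym = F(x, y + h), F(x, y - h)   # perturb y
    return [[(fxp[i] - fxm[i]) / (2 * h),   # dF_i/dx
             (fyp[i] - fym[i]) / (2 * h)]   # dF_i/dy
            for i in range(2)]

# Analytically the Jacobian at (2, 3) is [[y, x], [1, 2y]] = [[3, 2], [1, 6]].
print(jacobian(F, 2.0, 3.0))
```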

Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.

Multiple integration
The multiple integral expands the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration.
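Fubini's theorem can be illustrated numerically: for a continuous integrand, the two iteration orders agree. A sketch with the assumed integrand $$f(x,y) = xy^2$$ over $$[0,1]\times[0,2]$$, whose exact integral is $$4/3$$:

```python
# Iterated integrals in both orders via the midpoint rule.

def integrate_1d(g, a, b, n=500):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda x, y: x * y**2

# Integrate over y first, then x ...
I_xy = integrate_1d(lambda x: integrate_1d(lambda y: f(x, y), 0.0, 2.0), 0.0, 1.0)
# ... and over x first, then y.
I_yx = integrate_1d(lambda y: integrate_1d(lambda x: f(x, y), 0.0, 1.0), 0.0, 2.0)

print(I_xy, I_yx)  # both close to the exact value 4/3
```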

The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves.

Fundamental theorem of calculus in multiple dimensions
In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus:
 * Gradient theorem
 * Stokes' theorem
 * Divergence theorem
 * Green's theorem.

In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds.
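One of these special cases can be checked numerically. Green's theorem equates the counterclockwise circulation $$\oint (P\,dx + Q\,dy)$$ around a region with the integral of $$\partial Q/\partial x - \partial P/\partial y$$ over it. A sketch on the unit square with the assumed field $$P=-y$$, $$Q=x$$, for which $$\partial Q/\partial x - \partial P/\partial y = 2$$ and the area is 1, so both sides equal 2:

```python
# Midpoint-rule circulation of (P, Q) counterclockwise around the unit square.

def circulation_unit_square(P, Q, n=4000):
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        total += P(t, 0.0) * h          # bottom edge: dx = +h, dy = 0
        total += Q(1.0, t) * h          # right edge:  dx = 0,  dy = +h
        total += P(1.0 - t, 1.0) * -h   # top edge:    dx = -h, dy = 0
        total += Q(0.0, 1.0 - t) * -h   # left edge:   dx = 0,  dy = -h
    return total

P = lambda x, y: -y
Q = lambda x, y: x

# Green's theorem predicts the double integral of 2 over the unit square, i.e. 2.
print(circulation_unit_square(P, Q))
```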

Applications and uses
Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular, multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics.

Multivariate calculus is used in the optimal control of continuous time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data.

Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus.

Non-deterministic, or stochastic, systems can be studied using a different kind of mathematics, such as stochastic calculus.