Function of several real variables

In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article.

The domain of a function of $n$ variables is the subset of $\mathbb{R}^n$ for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of $\mathbb{R}^n$.

General definition
A real-valued function of $n$ real variables is a function that takes as input $n$ real numbers, commonly represented by the variables $x_{1}, x_{2}, …, x_{n}$, for producing another real number, the value of the function, commonly denoted $f(x_{1}, x_{2}, …, x_{n})$. For simplicity, in this article a real-valued function of several real variables will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.

Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the values of the variables are taken in a subset $X$ of $\mathbb{R}^n$, the domain of the function, which is always supposed to contain an open subset of $\mathbb{R}^n$. In other words, a real-valued function of $n$ real variables is a function


 * $$f: X \to \R $$

such that its domain $X$ is a subset of $\mathbb{R}^n$ that contains a nonempty open set.

An element of $X$ being thus an $n$-tuple $(x_{1}, x_{2}, …, x_{n})$ (usually delimited by parentheses), the general notation for denoting functions would be $f((x_{1}, x_{2}, …, x_{n}))$. The common usage, much older than the general definition of functions between sets, is to not use double parentheses and to simply write $f(x_{1}, x_{2}, …, x_{n})$.

It is also common to abbreviate the $n$-tuple $(x_{1}, x_{2}, …, x_{n})$ by using a notation similar to that for vectors, like boldface $\boldsymbol{x}$, underline $\underline{x}$, or overarrow $\vec{x}$. This article will use bold.

A simple example of a function in two variables could be:


 * $$\begin{align}

& V : X \to \R \\ & X = \left\{ (A,h) \in \R^2 \mid A>0, h> 0 \right\} \\ & V(A,h) = \frac{1}{3}A h \end{align}$$

which is the volume $V$ of a cone with base area $A$ and height $h$ measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive.
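As a minimal Python sketch (the name `cone_volume` is illustrative, not standard), this function and its domain restriction can be written as:

```python
def cone_volume(A: float, h: float) -> float:
    """Volume V(A, h) = A*h/3 of a cone with base area A and height h.

    The domain X = {(A, h) : A > 0, h > 0} is enforced explicitly;
    points outside it are rejected rather than silently evaluated.
    """
    if not (A > 0 and h > 0):
        raise ValueError("(A, h) is outside the domain X")
    return A * h / 3.0

# A cone with base area 3 and height 4 has volume 4.
print(cone_volume(3.0, 4.0))
```

Raising an error outside the domain is one design choice; returning a sentinel such as `None` would be another.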

For another example of a function in two variables:


 * $$\begin{align}

& z : \R^2 \to \R \\ & z(x,y) = ax + by \end{align}$$

where $a$ and $b$ are real non-zero constants. Using the three-dimensional Cartesian coordinate system, where the xy plane is the domain $\mathbb{R}^2$ and the z axis is the codomain $\mathbb{R}$, one can visualize the image to be a two-dimensional plane, with a slope of $a$ in the positive x direction and a slope of $b$ in the positive y direction. The function is defined at all points $(x, y)$ in $\mathbb{R}^2$. The previous example can be extended easily to higher dimensions:


 * $$\begin{align}

& z : \R^p \to \R \\ & z(x_1,x_2,\ldots, x_p) = a_1 x_1 + a_2 x_2 + \cdots + a_p x_p \end{align}$$

for $p$ non-zero real constants $a_{1}, a_{2}, …, a_{p}$, which describes a $p$-dimensional hyperplane.

The Euclidean norm:


 * $$f(\boldsymbol{x})=\|\boldsymbol{x}\| = \sqrt{x_1^2 + \cdots + x_n^2}$$

is also a function of $n$ variables which is everywhere defined, while
 * $$g(\boldsymbol{x})=\frac{1}{f(\boldsymbol{x})}$$

is defined only for $\boldsymbol{x} ≠ (0, 0, …, 0)$.

For a non-linear example of a function in two variables:


 * $$\begin{align}

& z : X \to \R \\ & X = \left\{ (x,y) \in \R^2 \, : \, x^2 + y^2 \leq 8 \,, \, x \neq 0 \, , \, y \neq 0 \right\} \\ & z(x,y) = \frac{1}{2xy}\sqrt{x^2 + y^2} \end{align}$$

which takes in all points in $X$, a disk of radius $\sqrt{8}$ "punctured" at the origin $(x, y) = (0, 0)$ in the plane $\mathbb{R}^2$, and returns a point in $\mathbb{R}$. The function does not include the origin; if it did, $z$ would be ill-defined at that point. Using a 3d Cartesian coordinate system with the xy-plane as the domain $\mathbb{R}^2$, and the z axis the codomain $\mathbb{R}$, the image can be visualized as a curved surface.

The function can be evaluated at the point $(x, y) = (2, \sqrt{3})$ in $X$:


 * $$z\left(2,\sqrt{3}\right) = \frac{1}{2 \cdot 2 \cdot \sqrt{3}}\sqrt{\left(2\right)^2 + \left(\sqrt{3}\right)^2} = \frac{1}{4\sqrt{3}}\sqrt{7} \,, $$

However, the function cannot be evaluated at, say


 * $$(x,y) = (65,\sqrt{10}) \, \Rightarrow \, x^2 + y^2 = (65)^2 + (\sqrt{10})^2 > 8 $$

since these values of $x$ and $y$ do not satisfy the domain's rule.
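A minimal Python sketch of this function, with the domain rule checked explicitly (raising `ValueError` outside the domain is an illustrative choice):

```python
import math

def z(x: float, y: float) -> float:
    """z(x, y) = sqrt(x^2 + y^2) / (2*x*y) on the punctured disk
    X = {(x, y) : x^2 + y^2 <= 8, x != 0, y != 0}."""
    if x**2 + y**2 > 8 or x == 0 or y == 0:
        raise ValueError("(x, y) is outside the domain X")
    return math.sqrt(x**2 + y**2) / (2 * x * y)

# Inside the domain: z(2, sqrt(3)) = sqrt(7) / (4*sqrt(3)), as in the text.
print(z(2.0, math.sqrt(3)))
# z(65.0, math.sqrt(10)) would raise, since 65^2 + 10 > 8.
```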

Image
The image of a function $f(x_{1}, x_{2}, …, x_{n})$ is the set of all values of $f$ when the $n$-tuple $(x_{1}, x_{2}, …, x_{n})$ runs in the whole domain of $f$. For a continuous (see below for a definition) real-valued function which has a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.

The preimage of a given real number $c$ is called a level set. It is the set of the solutions of the equation $f(x_{1}, x_{2}, …, x_{n}) = c$.

Domain
The domain of a function of several real variables is a subset of $\mathbb{R}^n$ that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain $X$ of a function $f$ to a subset $Y \subset X$, one gets formally a different function, the restriction of $f$ to $Y$, which is denoted $$f|_Y$$. In practice, it is often (but not always) not harmful to identify $f$ and $$f|_Y$$, and to omit the restrictor $|_{Y}$.

Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation.

Moreover, many functions are defined in such a way that it is difficult to specify explicitly their domain. For example, given a function $f$, it may be difficult to specify the domain of the function $$g(\boldsymbol{x}) = 1/f(\boldsymbol{x}).$$ If $f$ is a multivariate polynomial (which has $$\R^n$$ as a domain), it is even difficult to test whether the domain of $g$ is also $$\R^n$$. This is equivalent to testing whether a polynomial is always positive, and is the object of an active research area (see Positive polynomial).

Algebraic structure
The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way:
 * For every real number $r$, the constant function $$(x_1,\ldots,x_n)\mapsto r$$ is everywhere defined.
 * For every real number $r$ and every function $f$, the function: $$rf:(x_1,\ldots,x_n)\mapsto rf(x_1,\ldots,x_n)$$ has the same domain as $f$ (or is everywhere defined if $r = 0$).
 * If $f$ and $g$ are two functions of respective domains $X$ and $Y$ such that $X ∩ Y$ contains a nonempty open subset of $\mathbb{R}^n$, then $$f\,g:(x_1,\ldots,x_n)\mapsto f(x_1,\ldots,x_n)\,g(x_1,\ldots,x_n)$$ and $$g\,f:(x_1,\ldots,x_n)\mapsto g(x_1,\ldots,x_n)\,f(x_1,\ldots,x_n)$$ are functions that have a domain containing $X ∩ Y$.

It follows that the functions of $n$ variables that are everywhere defined and the functions of $n$ variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals ($\mathbb{R}$-algebras). This is a prototypical example of a function space.

One may similarly define
 * $$1/f : (x_1,\ldots,x_n) \mapsto 1/f(x_1,\ldots,x_n),$$

which is a function only if the set of the points $(x_{1}, …, x_{n})$ in the domain of $f$ such that $f(x_{1}, …, x_{n}) ≠ 0$ contains an open subset of $\mathbb{R}^n$. This constraint implies that the above two algebras are not fields.

Univariable functions associated with a multivariable function
One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if $(a_{1}, …, a_{n})$ is a point of the interior of the domain of the function $f$, we can fix the values of $x_{2}, …, x_{n}$ to $a_{2}, …, a_{n}$ respectively, to get a univariable function
 * $$x \mapsto f(x, a_2, \ldots, a_n),$$

whose domain contains an interval centered at $a_{1}$. This function may also be viewed as the restriction of the function $f$ to the line defined by the equations $x_{i} = a_{i}$ for $i = 2, …, n$.

Other univariable functions may be defined by restricting $f$ to any line passing through $(a_{1}, …, a_{n})$. These are the functions
 * $$x \mapsto f(a_1+c_1 x, a_2+c_2 x, \ldots, a_n+c_n x),$$

where the $c_{i}$ are real numbers that are not all zero.
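These restrictions can be sketched generically in Python (the helper name `restrict_to_line` is illustrative, not standard API):

```python
def restrict_to_line(f, a, c):
    """Given f(x1, ..., xn), a point a = (a1, ..., an) and a direction
    c = (c1, ..., cn) with the ci not all zero, return the univariable
    function x -> f(a1 + c1*x, ..., an + cn*x)."""
    if not any(c):
        raise ValueError("the direction c must not be all zero")
    return lambda x: f(*(ai + ci * x for ai, ci in zip(a, c)))

# Restrict f(x1, x2) = x1^2 + x2 to the line through (1, 2) in direction (1, 0):
f = lambda x1, x2: x1**2 + x2
g = restrict_to_line(f, (1.0, 2.0), (1.0, 0.0))
print(g(0.0))   # g(0) = f(1, 2) = 3
```

Choosing the direction $(1, 0, …, 0)$ recovers the "fix all but one variable" construction above.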

In the next section, we will show that, if the multivariable function is continuous, so are all these univariable functions, but the converse is not necessarily true.

Continuity and limit
Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces.

For defining the continuity, it is useful to consider the distance function of $\mathbb{R}^n$, which is an everywhere defined function of $2n$ real variables:
 * $$d(\boldsymbol{x},\boldsymbol{y})=d(x_1, \ldots, x_n, y_1, \ldots, y_n)=\sqrt{(x_1-y_1)^2+\cdots +(x_n-y_n)^2}$$

A function $f$ is continuous at a point $\boldsymbol{a} = (a_{1}, …, a_{n})$ which is interior to its domain, if, for every positive real number $ε$, there is a positive real number $φ$ such that $|f(\boldsymbol{x}) - f(\boldsymbol{a})| < ε$ for all $\boldsymbol{x}$ such that $d(\boldsymbol{x}, \boldsymbol{a}) < φ$. In other words, $φ$ may be chosen small enough for having the image by $f$ of the ball of radius $φ$ centered at $\boldsymbol{a}$ contained in the interval of length $2ε$ centered at $f(\boldsymbol{a})$. A function is continuous if it is continuous at every point of its domain.

If a function is continuous at $\boldsymbol{a}$, then all the univariate functions that are obtained by fixing all the variables $x_{i}$ but one at the value $a_{i}$, are continuous at $\boldsymbol{a}$. The converse is false; this means that all these univariate functions may be continuous for a function that is not continuous at $\boldsymbol{a}$. For an example, consider the function $f$ such that $f(0, 0) = 0$, and is otherwise defined by
 * $$f(x,y) = \frac{x^2y}{x^4+y^2}.$$

The functions $x ↦ f(x, 0)$ and $y ↦ f(0, y)$ are both constant and equal to zero, and are therefore continuous. The function $f$ is not continuous at $(0, 0)$, because, if $ε < 1/2$ and $y = x^{2} ≠ 0$, we have $f(x, y) = 1/2$, even if $|x|$ is very small. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through $(0, 0)$ are also continuous. In fact, we have
 * $$ f(x, \lambda x) =\frac{\lambda x}{x^2+\lambda^2}$$

for $x ≠ 0$.
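This behaviour can be checked numerically; the Python sketch below (illustrative, plain floating point) evaluates $f$ along a line through the origin and along the parabola $y = x^{2}$:

```python
def f(x: float, y: float) -> float:
    """The text's example: f(0, 0) = 0, otherwise f(x, y) = x^2*y / (x^4 + y^2)."""
    if x == 0.0 and y == 0.0:
        return 0.0
    return x**2 * y / (x**4 + y**2)

# Along the line y = 5x the values shrink as x approaches 0 ...
print(f(1e-6, 5e-6))
# ... but along the parabola y = x^2 (x != 0) the value stays at 1/2,
# so f has no limit at (0, 0) and is not continuous there:
print(f(1e-6, 1e-12))
```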

The limit at a point of a real-valued function of several real variables is defined as follows. Let $\boldsymbol{a} = (a_{1}, a_{2}, …, a_{n})$ be a point in the topological closure of the domain $X$ of the function $f$. The function $f$ has a limit $L$ when $\boldsymbol{x}$ tends toward $\boldsymbol{a}$, denoted
 * $$L = \lim_{\boldsymbol{x} \to \boldsymbol{a}} f(\boldsymbol{x}), $$

if the following condition is satisfied: for every real number $ε > 0$, there is a real number $δ > 0$ such that
 * $$|f(\boldsymbol{x}) - L| < \varepsilon $$

for all $\boldsymbol{x}$ in the domain such that
 * $$d(\boldsymbol{x}, \boldsymbol{a})< \delta.$$

If the limit exists, it is unique. If $\boldsymbol{a}$ is in the interior of the domain, the limit exists if and only if the function is continuous at $\boldsymbol{a}$. In this case, we have


 * $$f(\boldsymbol{a}) = \lim_{\boldsymbol{x} \to \boldsymbol{a}} f(\boldsymbol{x}). $$

When $\boldsymbol{a}$ is in the boundary of the domain of $f$, and if $f$ has a limit at $\boldsymbol{a}$, the latter formula allows one to "extend by continuity" the domain of $f$ to $\boldsymbol{a}$.

Symmetry
A symmetric function is a function $f$ that is unchanged when two variables $x_{i}$ and $x_{j}$ are interchanged:


 * $$f(\ldots, x_i,\ldots,x_j,\ldots) = f(\ldots, x_j,\ldots,x_i,\ldots)$$

where $i$ and $j$ are each one of $1, 2, …, n$. For example:


 * $$f(x,y,z,t) = t^2 - x^2 - y^2 - z^2 $$

is symmetric in $x$, $y$, $z$ since interchanging any pair of $x$, $y$, $z$ leaves $f$ unchanged, but is not symmetric in all of $x$, $y$, $z$, $t$, since interchanging $t$ with $x$ or $y$ or $z$ gives a different function.
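A quick numerical check of this (a)symmetry, as a Python sketch (the sample point is arbitrary):

```python
from itertools import permutations

def f(x: float, y: float, z: float, t: float) -> float:
    return t**2 - x**2 - y**2 - z**2

x, y, z, t = 1.0, 2.0, 3.0, 4.0
# Symmetric in x, y, z: every permutation of (x, y, z) gives the same value ...
print(all(f(*q, t) == f(x, y, z, t) for q in permutations((x, y, z))))  # True
# ... but interchanging t with x changes the value:
print(f(t, y, z, x) == f(x, y, z, t))  # False
```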

Function composition
Suppose the functions


 * $$\xi_1 = \xi_1(x_1,x_2,\ldots,x_n), \quad \xi_2 = \xi_2(x_1,x_2,\ldots,x_n), \ldots \xi_m = \xi_m(x_1,x_2,\ldots,x_n),$$

or more compactly $\boldsymbol{ξ} = \boldsymbol{ξ}(\boldsymbol{x})$, are all defined on a domain $X$. As the $n$-tuple $\boldsymbol{x} = (x_{1}, x_{2}, …, x_{n})$ varies in $X$, a subset of $\mathbb{R}^n$, the $m$-tuple $\boldsymbol{ξ} = (ξ_{1}, ξ_{2}, …, ξ_{m})$ varies in another region $Ξ$, a subset of $\mathbb{R}^m$. To restate this:


 * $$\boldsymbol{\xi} : X \to \Xi .$$

Then, a function $ζ$ of the functions $ξ_{1}, ξ_{2}, …, ξ_{m}$ defined on $Ξ$,


 * $$\begin{align}

& \zeta : \Xi \to \R, \\ & \zeta = \zeta(\xi_1,\xi_2,\ldots,\xi_m), \end{align}$$

is a function composition defined on $X$, in other terms the mapping


 * $$\begin{align}

& \zeta : X \to \R, \\ & \zeta = \zeta(\xi_1,\xi_2,\ldots,\xi_m) = f(x_1,x_2,\ldots,x_n). \end{align}$$

Note the numbers $m$ and $n$ do not need to be equal.

For example, the function


 * $$f(x,y) = e^{xy}[\sin 3(x-y) - \cos 2(x+y)]$$

defined everywhere on $\mathbb{R}^2$ can be rewritten by introducing


 * $$(\alpha, \beta, \gamma ) = (\alpha(x,y), \beta(x,y), \gamma(x,y) ) = ( xy ,  x-y, x+y )$$

which is also everywhere defined in $\mathbb{R}^2$ to obtain


 * $$f(x,y) = \zeta(\alpha(x,y),\beta(x,y),\gamma(x,y)) = \zeta(\alpha,\beta,\gamma) = e^\alpha[\sin (3\beta) - \cos (2\gamma)] \,.$$
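The rewriting can be verified numerically; in the Python sketch below, `zeta` and the substitutions are named after the symbols in the text (the sample points are arbitrary):

```python
import math

def zeta(alpha: float, beta: float, gamma: float) -> float:
    """zeta(alpha, beta, gamma) = e^alpha * (sin(3*beta) - cos(2*gamma))."""
    return math.exp(alpha) * (math.sin(3 * beta) - math.cos(2 * gamma))

def f_composed(x: float, y: float) -> float:
    # Substitute alpha = x*y, beta = x - y, gamma = x + y.
    return zeta(x * y, x - y, x + y)

def f_direct(x: float, y: float) -> float:
    return math.exp(x * y) * (math.sin(3 * (x - y)) - math.cos(2 * (x + y)))

# The composed and direct forms agree:
print(abs(f_composed(0.7, -1.2) - f_direct(0.7, -1.2)) < 1e-12)  # True
```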

Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations.

Calculus
Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus.

Partial derivatives
Partial derivatives can be defined with respect to each variable:


 * $$\frac{\partial}{\partial x_1} f(x_1, x_2, \ldots, x_n)\,,\quad \frac{\partial}{\partial x_2} f(x_1, x_2, \ldots x_n)\,,\ldots, \frac{\partial}{\partial x_n} f(x_1, x_2, \ldots, x_n). $$

Partial derivatives themselves are functions, each of which represents the rate of change of $f$ parallel to one of the $x_{1}, x_{2}, …, x_{n}$ axes at all points in the domain (if the derivatives exist and are continuous; see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease. Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number.

For a real-valued function of a real variable, $y = f(x)$, its ordinary derivative $dy/dx$ is geometrically the gradient of the tangent line to the curve $y = f(x)$ at all points in the domain. Partial derivatives extend this idea to tangent hyperplanes to a surface.

The second order partial derivatives can be calculated for every pair of variables:


 * $$\frac{\partial^2}{\partial x^2_1} f(x_1, x_2, \ldots, x_n)\,,\quad \frac{\partial^2}{\partial x_1 \partial x_2} f(x_1, x_2, \ldots, x_n)\,,\ldots, \frac{\partial^2}{\partial x^2_n} f(x_1, x_2, \ldots, x_n) .$$

Geometrically, they are related to the local curvature of the function's image at all points in the domain. At any point where the function is well-defined, the function could be increasing along some axes, and/or decreasing along other axes, and/or not increasing or decreasing at all along other axes.

This leads to a variety of possible stationary points: global or local maxima, global or local minima, and saddle points—the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization.

In general, partial derivatives of higher order $p$ have the form:


 * $$\frac{\partial^p}{\partial x_1^{p_1}\partial x_2^{p_2}\cdots\partial x_n^{p_n}} f(x_1, x_2, \ldots, x_n) \equiv \frac{\partial^{p_1}}{\partial x_1^{p_1}} \frac{\partial^{p_2}}{\partial x_2^{p_2}} \cdots \frac{\partial^{p_n}}{\partial x_n^{p_n}} f(x_1, x_2, \ldots, x_n)$$

where $p_{1}, p_{2}, …, p_{n}$ are each integers between $0$ and $p$ such that $p_{1} + p_{2} + ⋯ + p_{n} = p$, using the definitions of zeroth partial derivatives as identity operators:


 * $$\frac{\partial^0}{\partial x_1^0}f(x_1, x_2, \ldots, x_n) = f(x_1, x_2, \ldots, x_n)\,,\quad \ldots,\, \frac{\partial^0}{\partial x_n^0}f(x_1, x_2, \ldots, x_n)=f(x_1, x_2, \ldots, x_n)\,. $$

The number of possible partial derivatives increases with $p$, although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives. This reduces the number of partial derivatives to calculate for some $p$.
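Partial derivatives can be approximated numerically by central differences; the following Python sketch (the helper name `partial` is illustrative) checks the first-order partial derivatives of a simple polynomial:

```python
def partial(f, i, x, h=1e-6):
    """Central-difference estimate of the i-th first-order partial
    derivative of f at the point x (given as a tuple)."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

# f(x1, x2) = x1^2 * x2 has df/dx1 = 2*x1*x2 and df/dx2 = x1^2.
f = lambda x1, x2: x1**2 * x2
print(partial(f, 0, (3.0, 5.0)))  # close to 2*3*5 = 30
print(partial(f, 1, (3.0, 5.0)))  # close to 3^2 = 9
```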

Multivariable differentiability
A function $f(\boldsymbol{x})$ is differentiable in a neighborhood of a point $\boldsymbol{a}$ if there is an $n$-tuple of numbers dependent on $\boldsymbol{a}$ in general, $\boldsymbol{A}(\boldsymbol{a}) = (A_{1}(\boldsymbol{a}), A_{2}(\boldsymbol{a}), …, A_{n}(\boldsymbol{a}))$, so that:


 * $$f(\boldsymbol{x}) = f(\boldsymbol{a}) + \boldsymbol{A}(\boldsymbol{a})\cdot(\boldsymbol{x}-\boldsymbol{a}) + \alpha(\boldsymbol x)|\boldsymbol{x}-\boldsymbol{a}|$$

where $$\alpha(\boldsymbol x) \to 0$$ as $$|\boldsymbol{x}-\boldsymbol{a}| \to 0$$. This means that if $f$ is differentiable at a point $\boldsymbol{a}$, then $f$ is continuous at $\boldsymbol{x} = \boldsymbol{a}$, although the converse is not true: continuity in the domain does not imply differentiability in the domain. If $f$ is differentiable at $\boldsymbol{a}$ then the first order partial derivatives exist at $\boldsymbol{a}$ and:


 * $$\left.\frac{\partial f(\boldsymbol{x})}{\partial x_i}\right|_{\boldsymbol{x} = \boldsymbol{a}} = A_i (\boldsymbol{a}) $$

for $i = 1, 2, …, n$, which can be found from the definitions of the individual partial derivatives, so the partial derivatives of $f$ exist.

Assuming an $n$-dimensional analogue of a rectangular Cartesian coordinate system, these partial derivatives can be used to form a vectorial linear differential operator, called the gradient (also known as "nabla" or "del") in this coordinate system:


 * $$\nabla f(\boldsymbol{x}) = \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n} \right) f(\boldsymbol{x}) $$

used extensively in vector calculus, because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus.

Then substituting the gradient $∇f$ (evaluated at $\boldsymbol{x} = \boldsymbol{a}$) with a slight rearrangement gives:


 * $$f(\boldsymbol{x}) - f(\boldsymbol{a})= \nabla f(\boldsymbol{a})\cdot(\boldsymbol{x}-\boldsymbol{a}) + \alpha |\boldsymbol{x}-\boldsymbol{a}|$$

where $·$ denotes the dot product. This equation represents the best linear approximation of the function $f$ at all points $\boldsymbol{x}$ within a neighborhood of $\boldsymbol{a}$. For infinitesimal changes in $f$ and $\boldsymbol{x}$ as $\boldsymbol{x} → \boldsymbol{a}$:


 * $$df = \left.\frac{\partial f(\boldsymbol{x})}{\partial x_1}\right|_{\boldsymbol{x}=\boldsymbol{a}}dx_1 +

\left.\frac{\partial f(\boldsymbol{x})}{\partial x_2}\right|_{\boldsymbol{x}=\boldsymbol{a}}dx_2 + \dots + \left.\frac{\partial f(\boldsymbol{x})}{\partial x_n}\right|_{\boldsymbol{x}=\boldsymbol{a}}dx_n = \nabla f(\boldsymbol{a}) \cdot d\boldsymbol{x}$$

which is defined as the total differential, or simply differential, of $f$, at $\boldsymbol{a}$. This expression corresponds to the total infinitesimal change of $f$, by adding all the infinitesimal changes of $\boldsymbol{x}$ in all the $n$ directions. Also, $df$ can be construed as a covector with basis vectors as the infinitesimals $dx_{i}$ in each direction and partial derivatives of $f$ as the components.
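The role of the gradient as a first-order approximation can be illustrated numerically; in the Python sketch below the gradient is estimated by central differences (helper names and the sample function are illustrative):

```python
def grad(f, a, h=1e-6):
    """Central-difference gradient of f at the point a (a tuple)."""
    g = []
    for i in range(len(a)):
        ap, am = list(a), list(a)
        ap[i] += h
        am[i] -= h
        g.append((f(*ap) - f(*am)) / (2 * h))
    return g

def f(x, y):
    return x**2 + 3 * x * y

a = (1.0, 2.0)
dx = (1e-4, -2e-4)          # a small displacement from a
exact_change = f(a[0] + dx[0], a[1] + dx[1]) - f(*a)
linear_change = sum(gi * di for gi, di in zip(grad(f, a), dx))
# The linear term grad(f)(a) . dx captures the change up to o(|dx|):
print(abs(exact_change - linear_change))  # tiny compared with |dx|
```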

Geometrically $∇f$ is perpendicular to the level sets of $f$, given by $f(\boldsymbol{x}) = c$ which for some constant $c$ describes an $(n − 1)$-dimensional hypersurface. The differential of a constant is zero:


 * $$df = (\nabla f) \cdot d \boldsymbol{x} = 0$$

in which $d\boldsymbol{x}$ is an infinitesimal change in $\boldsymbol{x}$ in the hypersurface $f(\boldsymbol{x}) = c$, and since the dot product of $∇f$ and $d\boldsymbol{x}$ is zero, this means $∇f$ is perpendicular to the hypersurface.

In arbitrary curvilinear coordinate systems in $n$ dimensions, the explicit expression for the gradient would not be so simple: there would be scale factors in terms of the metric tensor for that coordinate system. For the above case used throughout this article, the metric is just the Kronecker delta and the scale factors are all 1.

Differentiability classes
If all first order partial derivatives evaluated at a point $\boldsymbol{a}$ in the domain:


 * $$\left.\frac{\partial}{\partial x_1} f(\boldsymbol{x})\right|_{\boldsymbol{x}=\boldsymbol{a}}\,,\quad

\left.\frac{\partial}{\partial x_2} f(\boldsymbol{x})\right|_{\boldsymbol{x}=\boldsymbol{a}}\,,\ldots, \left.\frac{\partial}{\partial x_n} f(\boldsymbol{x})\right|_{\boldsymbol{x}=\boldsymbol{a}} $$

exist and are continuous for all $\boldsymbol{a}$ in the domain, $f$ has differentiability class $C^{1}$. In general, if all order $p$ partial derivatives evaluated at a point $\boldsymbol{a}$:


 * $$\left.\frac{\partial^p}{\partial x_1^{p_1}\partial x_2^{p_2}\cdots\partial x_n^{p_n}} f(\boldsymbol{x})\right|_{\boldsymbol{x}=\boldsymbol{a}}$$

exist and are continuous, where $p$, $p_{1}, p_{2}, …, p_{n}$, and $n$ are as above, for all $\boldsymbol{a}$ in the domain, then $f$ is differentiable to order $p$ throughout the domain and has differentiability class $C^{p}$.

If $f$ is of differentiability class $C^{∞}$, $f$ has continuous partial derivatives of all order and is called smooth. If $f$ is an analytic function and equals its Taylor series about any point in the domain, the notation $C^{ω}$ denotes this differentiability class.

Multiple integration
Definite integration can be extended to multiple integration over the several real variables with the notation:


 * $$\int_{R_n} \cdots \int_{R_2} \int_{R_1} f(x_1, x_2, \ldots, x_n) \, dx_1 dx_2\cdots dx_n \equiv \int_R f(\boldsymbol{x}) \, d^n\boldsymbol{x}$$

where each region $R_{i}$ is a subset of or all of the real line:


 * $$R_1 \subseteq \mathbb{R} \,, \quad R_2 \subseteq \mathbb{R} \,, \ldots, R_n \subseteq \mathbb{R}, $$

and their Cartesian product gives the region to integrate over as a single set:


 * $$R = R_1 \times R_2 \times \dots \times R_n \,,\quad R \subseteq \mathbb{R}^n \,,$$

an $n$-dimensional hypervolume. When evaluated, a definite integral is a real number if the integral converges in the region $R$ of integration (the result of a definite integral may diverge to infinity for a given region; in such cases the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration.

The integral of a real-valued function of a real variable $y = f(x)$ with respect to $x$ has a geometric interpretation as the area bounded by the curve $y = f(x)$ and the $x$-axis. Multiple integrals extend the dimensionality of this concept: assuming an $(n + 1)$-dimensional analogue of a rectangular Cartesian coordinate system, the above definite integral has the geometric interpretation as the $(n + 1)$-dimensional hypervolume bounded by $f(\boldsymbol{x})$ and the $x_{1}, x_{2}, …, x_{n}$ axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent).

While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space. This has significance in applied mathematics and physics: if $f(\boldsymbol{x})$ is some scalar density field and $\boldsymbol{x}$ are the position vector coordinates, i.e. some scalar quantity per unit $n$-dimensional hypervolume, then integrating over the region $R$ gives the total amount of quantity in $R$. The more formal notion of hypervolume is the subject of measure theory. Above we used the Lebesgue measure; see Lebesgue integration for more on this topic.
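As an illustration, a multiple integral over a rectangle can be approximated by the midpoint rule; the Python sketch below (the helper name is illustrative) recovers the exact value $1/4$ for $f(x, y) = xy$ over the unit square:

```python
def midpoint_double_integral(f, ax, bx, ay, by, n=100):
    """Midpoint-rule approximation of the double integral of f over
    the rectangle R = [ax, bx] x [ay, by], using an n x n grid."""
    hx = (bx - ax) / n
    hy = (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

# Integral of f(x, y) = x*y over [0, 1] x [0, 1] is exactly 1/4.
print(midpoint_double_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0))
```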

Theorems
With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions. Evaluating a mixture of integrals and partial derivatives can be done by using the theorem of differentiation under the integral sign.

Vector calculus
One can collect a number of functions each of several real variables, say


 * $$y_1 = f_1(x_1, x_2, \ldots, x_n)\,,\quad y_2 = f_2(x_1, x_2, \ldots, x_n)\,,\ldots, y_m = f_m(x_1, x_2, \ldots, x_n) $$

into an $m$-tuple, or sometimes as a column vector or row vector, respectively:


 * $$(y_1, y_2, \ldots, y_m) \leftrightarrow \begin{bmatrix} f_1(x_1, x_2, \ldots, x_n) \\ f_2(x_1, x_2, \ldots, x_n) \\ \vdots \\ f_m(x_1, x_2, \ldots, x_n) \end{bmatrix} \leftrightarrow \begin{bmatrix} f_1(x_1, x_2, \ldots, x_n) &  f_2(x_1, x_2, \ldots, x_n) & \cdots & f_m(x_1, x_2, \ldots, x_n) \end{bmatrix} $$

all treated on the same footing as an $m$-component vector field, and use whichever form is convenient. All the above notations have a common compact notation $\boldsymbol{y} = \boldsymbol{f}(\boldsymbol{x})$. The calculus of such vector fields is vector calculus. For more on the treatment of row vectors and column vectors of multivariable functions, see matrix calculus.

Implicit functions
A real-valued implicit function of several real variables is not written in the form "$y = f(x_{1}, x_{2}, …, x_{n})$". Instead, the mapping is from the space $\mathbb{R}^{n + 1}$ to the zero element in $\mathbb{R}$ (just the ordinary zero 0):


 * $$\begin{align}

& \phi: \R^{n+1} \to \{0\} \\ & \phi(x_1, x_2, \ldots, x_n, y) = 0 \end{align}$$

is an equation in all the variables. Implicit functions are a more general way to represent functions, since if:


 * $$y=f(x_1, x_2, \ldots, x_n) $$

then we can always define:


 * $$ \phi(x_1, x_2, \ldots, x_n, y) = y - f(x_1, x_2, \ldots, x_n) = 0 $$

but the converse is not always possible, i.e. not all implicit functions have an explicit form.

For example, using interval notation, let


 * $$\begin{align}

& \phi : X \to \{ 0 \} \\ & \phi(x,y,z) = \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 + \left(\frac{z}{c}\right)^2 - 1 = 0 \\ & X = [-a,a] \times [-b,b] \times [-c,c] = \left\{ (x,y,z) \in \R^3 \,:\, -a\leq x\leq a, -b\leq y\leq b, -c\leq z\leq c \right\}. \end{align}$$

Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin $(x, y, z) = (0, 0, 0)$ with constant semi-major axes $a$, $b$, $c$, along the positive x, y and z axes respectively. In the case $a = b = c = r$, we have a sphere of radius $r$ centered at the origin. Other quadric surfaces which can be described similarly include the hyperboloid and paraboloid; more generally so can any 2D surface in 3D Euclidean space. The above example can be solved for $x$, $y$ or $z$; however it is much tidier to write it in an implicit form.
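The implicit/explicit distinction can be illustrated in Python. The sketch below (illustrative names; semi-axes $a = 2$, $b = 3$, $c = 4$ chosen arbitrarily) solves $φ = 0$ for the upper branch $z ≥ 0$ (the lower branch carries the opposite sign) and checks that the recovered point lies on the surface:

```python
import math

def phi(x, y, z, a=2.0, b=3.0, c=4.0):
    """Implicit equation of the ellipsoid: phi = 0 exactly on its surface."""
    return (x / a)**2 + (y / b)**2 + (z / c)**2 - 1

def z_upper(x, y, a=2.0, b=3.0, c=4.0):
    """One explicit branch: phi = 0 solved for z >= 0."""
    return c * math.sqrt(1 - (x / a)**2 - (y / b)**2)

x, y = 1.0, 1.5
z = z_upper(x, y)
print(abs(phi(x, y, z)) < 1e-9)  # True: the recovered point satisfies phi = 0
```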

For a more sophisticated example:


 * $$\begin{align}

& \phi : \R^4 \to \{ 0 \} \\ & \phi(t,x,y,z) = C tz e^{tx-yz} + A \sin(3\omega t) \left(x^2z - B y^6\right) = 0 \end{align}$$

for non-zero real constants $A$, $B$, $C$, $ω$, this function is well-defined for all $(t, x, y, z)$, but it cannot be solved explicitly for these variables and written as "$t = …$", "$x = …$", etc.

The implicit function theorem of more than two real variables deals with the continuity and differentiability of the function, as follows. Let $ϕ(x_{1}, x_{2}, …, x_{n}, y)$ be a continuous function with continuous first order partial derivatives, and let $ϕ$ evaluated at a point $(\boldsymbol{a}, b) = (a_{1}, a_{2}, …, a_{n}, b)$ be zero:


 * $$\phi(\boldsymbol{a}, b) = 0;$$

and let the first partial derivative of $ϕ$ with respect to $y$ evaluated at $(\boldsymbol{a}, b)$ be non-zero:


 * $$\left.\frac{\partial \phi(\boldsymbol{x},y)}{\partial y}\right|_{(\boldsymbol{x},y) = (\boldsymbol{a},b)} \neq 0 .$$

Then, there is an interval $[y_{1}, y_{2}]$ containing $b$, and a region $R$ containing $\boldsymbol{a}$, such that for every $\boldsymbol{x}$ in $R$ there is exactly one value of $y$ in $[y_{1}, y_{2}]$ satisfying $ϕ(\boldsymbol{x}, y) = 0$, and $y$ is a continuous function of $\boldsymbol{x}$ so that $ϕ(\boldsymbol{x}, y(\boldsymbol{x})) = 0$. The total differentials of the functions are:


 * $$dy=\frac{\partial y}{\partial x_1}dx_1 + \frac{\partial y}{\partial x_2}dx_2 + \dots + \frac{\partial y}{\partial x_n}dx_n ;$$
 * $$d\phi=\frac{\partial \phi}{\partial x_1}dx_1 + \frac{\partial \phi}{\partial x_2}dx_2 + \dots + \frac{\partial \phi}{\partial x_n}dx_n + \frac{\partial \phi}{\partial y}dy .$$

Substituting $dy$ into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of $y$ with respect to $x_{i}$ in terms of the derivatives of the original function, each as a solution of the linear equation


 * $$\frac{\partial \phi}{\partial x_i} + \frac{\partial \phi}{\partial y}\frac{\partial y}{\partial x_i} = 0 $$

for $i = 1, 2, …, n$.
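For a concrete check of this formula, take the circle $ϕ(x, y) = x^{2} + y^{2} − 1 = 0$, for which the implicit derivative is $dy/dx = −x/y$. A Python sketch (illustrative names; the partial derivatives of $ϕ$ are supplied by hand):

```python
def implicit_partial(phi_x, phi_y, point):
    """dy/dx_i = -(dphi/dx_i) / (dphi/dy), solving the linear equation above."""
    return -phi_x(*point) / phi_y(*point)

# Circle: phi(x, y) = x^2 + y^2 - 1, so phi_x = 2x, phi_y = 2y.
phi_x = lambda x, y: 2 * x
phi_y = lambda x, y: 2 * y
x, y = 0.6, 0.8                       # a point on the circle (0.36 + 0.64 = 1)
print(implicit_partial(phi_x, phi_y, (x, y)))  # close to -x/y = -0.75
```

This agrees with differentiating the explicit branch $y = \sqrt{1 − x^{2}}$, whose derivative at $x = 0.6$ is $−0.6/0.8 = −0.75$.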

Complex-valued function of several real variables
A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.

If $f$ is such a complex valued function, it may be decomposed as
 * $$f(x_1,\ldots, x_n)=g(x_1,\ldots, x_n)+ih(x_1,\ldots, x_n),$$

where $g$ and $h$ are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.
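A minimal Python sketch of this decomposition (the function $e^{x + iy}$ is an arbitrary illustrative choice):

```python
import cmath

def f(x: float, y: float) -> complex:
    """A complex-valued function of two real variables: e^(x + iy)."""
    return cmath.exp(complex(x, y))

def g(x: float, y: float) -> float:
    """The real part of f."""
    return f(x, y).real

def h(x: float, y: float) -> float:
    """The imaginary part of f."""
    return f(x, y).imag

x, y = 0.3, 1.1
print(f(x, y) == complex(g(x, y), h(x, y)))  # True: f = g + i*h
```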

This reduction works for the general properties. However, for an explicitly given function, such as:


 * $$ z(x, y, \alpha, a, q) = \frac{q}{2\pi} \left[\ln\left(x+iy- ae^{i\alpha}\right) - \ln\left(x+iy + ae^{-i\alpha}\right)\right]$$

the computation of the real and the imaginary part may be difficult.

Applications
Multivariable functions of real variables arise inevitably in engineering and physics, because observable physical quantities are real numbers (with associated units and dimensions), and any one physical quantity will generally depend on a number of other quantities.

Examples of real-valued functions of several real variables
Examples in continuum mechanics include the local mass density $ρ$ of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), $\mathbf{r} = (x, y, z)$, and time $t$:


 * $$\rho = \rho(\mathbf{r},t) = \rho(x,y,z,t)$$

Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields.

Another example is the velocity field, a vector field, which has components of velocity $\mathbf{v} = (v_{x}, v_{y}, v_{z})$ that are each multivariable functions of spatial coordinates and time similarly:


 * $$\mathbf{v} (\mathbf{r},t) = \mathbf{v}(x,y,z,t) = [v_x(x,y,z,t), v_y(x,y,z,t), v_z(x,y,z,t)]$$

Similarly for other physical vector fields such as electric fields and magnetic fields, and vector potential fields.

Another important example is the equation of state in thermodynamics, an equation relating pressure $P$, temperature $T$, and volume $V$ of a fluid; in general it has an implicit form:


 * $$f(P, V, T) = 0 $$

The simplest example is the ideal gas law:


 * $$f(P, V, T) = PV - nRT = 0 $$

where $n$ is the number of moles, constant for a fixed amount of substance, and $R$ the gas constant. Much more complicated equations of state have been empirically derived, but they all have the above implicit form.
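As a sketch, the implicit form and one explicit solution of the ideal gas law can be written in Python (the numeric values are illustrative; $R ≈ 8.314$ J/(mol·K) in SI units):

```python
def ideal_gas_residual(P, V, T, n=1.0, R=8.314):
    """f(P, V, T) = P*V - n*R*T; the equation of state holds when this is 0."""
    return P * V - n * R * T

def pressure(V, T, n=1.0, R=8.314):
    """The ideal gas law solved explicitly for P."""
    return n * R * T / V

T, V = 300.0, 0.025            # kelvin and cubic metres, for 1 mol of gas
P = pressure(V, T)
print(abs(ideal_gas_residual(P, V, T)) < 1e-6)  # True: f(P, V, T) = 0 holds
```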

Real-valued functions of several real variables appear pervasively in economics. In the underpinnings of consumer theory, utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function. The result of maximizing utility is a set of demand functions, each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory, a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production.

Examples of complex-valued functions of several real variables
Some "physical quantities" may actually be complex valued, such as complex impedance, complex permittivity, complex permeability, and complex refractive index. These are also functions of real variables, such as frequency or time, as well as temperature.

In two-dimensional fluid mechanics, specifically in the theory of the potential flows used to describe fluid motion in 2d, the complex potential


 * $$F(x,y,\ldots) = \varphi(x,y,\ldots) + i\psi(x,y,\ldots) $$

is a complex valued function of the two spatial coordinates $x$ and $y$, and other real variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function.

The spherical harmonics occur in physics and engineering as the solution to Laplace's equation, as well as the eigenfunctions of the z-component angular momentum operator, which are complex-valued functions of real-valued spherical polar angles:


 * $$Y^m_\ell = Y^m_\ell(\theta,\phi) $$

In quantum mechanics, the wavefunction is necessarily complex-valued, but is a function of real spatial coordinates (or momentum components), as well as time $t$:


 * $$\Psi = \Psi(\mathbf{r},t) = \Psi(x,y,z,t)\,,\quad \Phi = \Phi(\mathbf{p},t) = \Phi(p_x,p_y,p_z,t) $$

where each is related by a Fourier transform.