Exact differential

In multivariate calculus, a differential or differential form is said to be exact or perfect (an exact differential), as contrasted with an inexact differential, if it is equal to the general differential $$dQ$$ of some differentiable function $$Q$$ in an orthogonal coordinate system (so that $$Q$$ is a multivariable function whose variables are independent, as they are always assumed to be in multivariable calculus).

An exact differential is sometimes also called a total differential or a full differential; in differential geometry, it is termed an exact form.

The integral of an exact differential depends only on the endpoints of the integration path (it is path-independent), and this fact is used to identify state functions in thermodynamics.

Definition
Although we work in three dimensions here, the definition of an exact differential in other dimensions is structurally similar. In three dimensions, a form of the type


 * $$A(x,y,z) \,dx + B(x,y,z) \,dy + C(x,y,z) \,dz$$

is called a differential form. This form is called exact on an open domain $$D \subset \mathbb{R}^3$$ in space if there exists some differentiable scalar function $$Q = Q(x,y,z)$$ defined on $$D$$ such that


 * $$ dQ \equiv \left ( \frac{\partial Q}{\partial x} \right )_{y,z} \, dx + \left ( \frac{\partial Q}{\partial y} \right )_{x,z} \, dy + \left ( \frac{\partial Q}{\partial z} \right )_{x,y} \, dz = A \, dx + B \, dy + C \, dz$$

throughout $$D$$, where $$x,y,z$$ are orthogonal coordinates (e.g., Cartesian, cylindrical, or spherical coordinates). In other words, in some open domain of a space, a differential form is an exact differential if it is equal to the general differential of a differentiable function in an orthogonal coordinate system.


 * Note: In this mathematical expression, the subscripts outside the parenthesis indicate which variables are being held constant during differentiation. Due to the definition of the partial derivative, these subscripts are not required, but they are explicitly shown here as reminders.
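As a concrete illustration (the potential $$Q = x^2 y + z^3$$ is an example of my own, not from the text above), the coefficients of an exact form are exactly the partial derivatives of its potential, which can be checked symbolically:

```python
# Hypothetical example: verify that the form
#   2*x*y dx + x**2 dy + 3*z**2 dz
# is the exact differential of Q(x, y, z) = x**2*y + z**3.
import sympy as sp

x, y, z = sp.symbols('x y z')
Q = x**2*y + z**3          # candidate potential (my own example)

# Components of dQ = (dQ/dx) dx + (dQ/dy) dy + (dQ/dz) dz
A, B, C = sp.diff(Q, x), sp.diff(Q, y), sp.diff(Q, z)
assert A == 2*x*y and B == x**2 and C == 3*z**2
```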

Integral path independence
The exact differential for a differentiable scalar function $$Q$$ defined in an open domain $$D \subset \mathbb{R}^n$$ is $$dQ = \nabla Q \cdot d \mathbf{r}$$, where $$\nabla Q$$ is the gradient of $$Q$$, $$\cdot$$ is the scalar product, and $$d \mathbf{r}$$ is the general differential displacement vector, provided that an orthogonal coordinate system is used. If $$Q$$ is of differentiability class $$C^1$$ (continuously differentiable), then $$\nabla Q$$ is, by definition, a conservative vector field with potential $$Q$$. In three dimensions, one can write $$d \mathbf{r} = (dx, dy, dz)$$ and $$\nabla Q = \left(\frac{\partial Q}{\partial x}, \frac{\partial Q}{\partial y},\frac{\partial Q}{\partial z}\right)$$.

The gradient theorem states


 * $$\int _{i}^{f} dQ = \int _{i}^{f}\nabla Q (\mathbf {r} )\cdot d \mathbf {r} = Q \left(f \right) - Q \left(i \right)$$

which does not depend on which integration path between the endpoints $$i$$ and $$f$$ is chosen. It follows that the integral of an exact differential is independent of the choice of path between given endpoints (path independence).
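This path independence can be illustrated numerically. A minimal sketch (the form $$y\,dx + x\,dy = d(xy)$$ and both integration paths are my own choices): integrating along a straight line and along a parabola from $$(0,0)$$ to $$(1,1)$$ gives the same value $$Q(1,1)-Q(0,0)=1$$:

```python
# Numerical illustration (example form and paths are my own):
# integrate dQ = y dx + x dy (so Q = x*y) along two different paths
# from (0, 0) to (1, 1); both integrals match Q(1,1) - Q(0,0) = 1.
def line_integral(path, dpath, n=100_000):
    # midpoint rule over the parameter t in [0, 1]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = path(t)
        dx, dy = dpath(t)
        total += (y * dx + x * dy) * h   # A dx + B dy with A = y, B = x
    return total

straight = line_integral(lambda t: (t, t), lambda t: (1.0, 1.0))
parabola = line_integral(lambda t: (t, t*t), lambda t: (1.0, 2*t))
print(straight, parabola)  # both ≈ 1.0
```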

For three-dimensional spaces, if $$\nabla Q$$ defined on an open domain $$D \subset \mathbb{R}^3$$ is of differentiability class $$C^1$$ (equivalently, $$Q$$ is of class $$C^2$$), then this path independence can also be proved using the vector calculus identity $$\nabla \times ( \nabla Q ) = \mathbf{0}$$ and Stokes' theorem.


 * $$\oint _{\partial \Sigma }\nabla Q \cdot d \mathbf {r} = \iint _{\Sigma }(\nabla \times \nabla Q)\cdot d \mathbf {a} = 0$$

for a simple closed loop $$\partial \Sigma$$ bounding a smooth oriented surface $$\Sigma$$. If the open domain $$D$$ is simply connected (roughly speaking, a connected open set without holes in it), then any irrotational vector field (a $$C^1$$ vector field $$\mathbf{v}$$ whose curl is zero, i.e., $$\nabla \times \mathbf{v} = \mathbf{0}$$) is path-independent by Stokes' theorem. Hence, in a simply connected open region, a $$C^1$$ vector field has the path-independence property (i.e., is a conservative vector field) if and only if it is irrotational.
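The identity $$\nabla \times (\nabla Q) = \mathbf{0}$$ used above can be verified symbolically; a minimal sketch with an example potential of my own choosing:

```python
# Symbolic check (the potential Q is my own example): the curl of any
# gradient vanishes, verified here with sympy's vector module.
from sympy.vector import CoordSys3D, gradient, curl, Vector

N = CoordSys3D('N')
Q = N.x**2*N.y + N.z**3      # example scalar potential

assert curl(gradient(Q)) == Vector.zero
```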

Thermodynamic state function
In thermodynamics, when $$dQ$$ is exact, the function $$Q$$ is a state function of the system: a mathematical function which depends solely on the current equilibrium state, not on the path taken to reach that state. Internal energy $$U$$, entropy $$S$$, enthalpy $$H$$, Helmholtz free energy $$A$$, and Gibbs free energy $$G$$ are state functions. Generally, neither work $$W$$ nor heat $$Q$$ is a state function. (Note: $$Q$$ is commonly used to represent heat in physics. It should not be confused with its use earlier in this article as the potential of an exact differential.)

One dimension
In one dimension, a differential form
 * $$A(x) \, dx$$

is exact if and only if $$A$$ has an antiderivative (though not necessarily one in terms of elementary functions). If $$A$$ has an antiderivative $$Q$$, so that $$\frac{dQ}{dx} = A$$, then $$A(x) \, dx$$ clearly satisfies the condition for exactness. If $$A$$ has no antiderivative, then we cannot write $$dQ = \frac{dQ}{dx}\,dx$$ with $$A = \frac{dQ}{dx}$$ for any differentiable function $$Q$$, so $$A(x) \, dx$$ is inexact.
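For instance (an example of my own, not from the text), $$A(x) = e^{-x^2}$$ has no elementary antiderivative, yet an antiderivative exists (the error function), so $$e^{-x^2}\,dx$$ is still exact:

```python
# Example (my own illustration): A(x) = exp(-x**2) has no elementary
# antiderivative, but an antiderivative exists (the error function),
# so the form A(x) dx is exact.
import sympy as sp

x = sp.symbols('x')
Q = sp.integrate(sp.exp(-x**2), x)   # sqrt(pi)/2 * erf(x)
assert sp.diff(Q, x) == sp.exp(-x**2)
```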

Two and three dimensions
By symmetry of second derivatives, for any "well-behaved" (non-pathological) function $$Q$$, we have


 * $$\frac{\partial ^2 Q}{\partial x \, \partial y} = \frac{\partial ^2 Q}{\partial y \, \partial x}.$$

Hence, in a simply-connected region R of the xy-plane, where $$x,y$$ are independent, a differential form


 * $$A(x, y)\,dx + B(x, y)\,dy$$

is an exact differential if and only if the equation


 * $$\left( \frac{\partial A}{\partial y} \right)_x = \left( \frac{\partial B}{\partial x} \right)_y$$

holds. If the form is exact, so that $$A=\frac{\partial Q}{\partial x}$$ and $$B=\frac{\partial Q}{\partial y}$$ for some twice continuously differentiable function $$Q$$, then $$\left( \frac{\partial A}{\partial y} \right)_x = \frac{\partial ^2 Q}{\partial y \, \partial x} = \frac{\partial ^2 Q}{\partial x \, \partial y} = \left( \frac{\partial B}{\partial x} \right)_y$$ by the symmetry of second derivatives. Conversely, if $$\left( \frac{\partial A}{\partial y} \right)_x = \left( \frac{\partial B}{\partial x} \right)_y$$ holds throughout the simply-connected region, then a potential $$Q$$ with $$A=\frac{\partial Q}{\partial x}$$ and $$B=\frac{\partial Q}{\partial y}$$ exists, so the form is exact.
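The two-dimensional test can be applied mechanically; a short sketch with example forms of my own choosing (one exact, one inexact):

```python
# Sketch (the example forms are my own): apply the exactness test
# (dA/dy)_x == (dB/dx)_y to two 2D differential forms.
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(A, B):
    # A dx + B dy is exact (on a simply connected region) iff
    # dA/dy - dB/dx vanishes identically
    return sp.simplify(sp.diff(A, y) - sp.diff(B, x)) == 0

assert is_exact(2*x*y, x**2 + 1)   # dQ for Q = x**2*y + y
assert not is_exact(y, -x)         # y dx - x dy is inexact
```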

For three dimensions, in a simply-connected region R of the xyz-coordinate system, by a similar argument, a differential


 * $$dQ = A(x, y, z) \, dx + B(x, y, z) \, dy + C(x, y, z) \, dz$$

is an exact differential if and only if the functions A, B and C satisfy the relations


 * $$\left( \frac{\partial A}{\partial y} \right)_{x,z} \!\!\!= \left( \frac{\partial B}{\partial x} \right)_{y,z}$$; $$\left( \frac{\partial A}{\partial z} \right)_{x,y} \!\!\!= \left( \frac{\partial C}{\partial x} \right)_{y,z}$$; $$\left( \frac{\partial B}{\partial z} \right)_{x,y} \!\!\!= \left( \frac{\partial C}{\partial y} \right)_{x,z}.$$

These conditions are equivalent to the following statement: if G is the graph of this vector-valued function, then $$s(X, Y) = 0$$ for all tangent vectors X, Y of the surface G, where s is the symplectic form.

These conditions, which are easy to generalize, arise from the independence of the order of differentiation in the calculation of second derivatives. So, for a differential $$dQ$$ in four variables to be an exact differential, there are six conditions (the combination $$C(4,2)=6$$) to satisfy.
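The generalization just mentioned can be sketched as follows (the four-variable potential below is a hypothetical example of my own): for an $$n$$-variable form, exactness on a simply connected domain requires one mixed-partial condition per pair of variables:

```python
# Generalized check (my own sketch): for an n-variable form
# sum_i F_i dx_i, exactness on a simply connected domain requires
# dF_i/dx_j == dF_j/dx_i for every pair (i, j) -- C(n, 2) conditions.
from itertools import combinations
import sympy as sp

def exactness_conditions(F, xs):
    # residuals dF_i/dx_j - dF_j/dx_i for every pair (i, j)
    return [sp.simplify(sp.diff(F[i], xs[j]) - sp.diff(F[j], xs[i]))
            for i, j in combinations(range(len(xs)), 2)]

w, x, y, z = sp.symbols('w x y z')
Q = w*x*y*z                              # example four-variable potential
F = [sp.diff(Q, v) for v in (w, x, y, z)]
conds = exactness_conditions(F, [w, x, y, z])
assert len(conds) == 6 and all(c == 0 for c in conds)
```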

Partial differential relations
Suppose a differentiable function $$z(x,y)$$ is one-to-one (injective) in each independent variable separately, e.g., $$z(x,y)$$ is one-to-one in $$x$$ at fixed $$y$$, though not necessarily one-to-one in $$(x,y)$$. Then each variable can be written as a differentiable function of the other two, e.g., $$x(y,z)$$, and the following total differentials exist.


 * $$d x = {\left ( \frac{\partial x}{\partial y} \right )}_z \, d y + {\left ( \frac{\partial x}{\partial z} \right )}_y \,dz$$
 * $$d z = {\left ( \frac{\partial z}{\partial x} \right )}_y \, d x + {\left ( \frac{\partial z}{\partial y} \right )}_x \,dy.$$

Substituting the first equation into the second and rearranging, we obtain


 * $$d z = {\left ( \frac{\partial z}{\partial x} \right )}_y \left [ {\left ( \frac{\partial x}{\partial y} \right )}_z d y + {\left ( \frac{\partial x}{\partial z} \right )}_y dz \right ] + {\left ( \frac{\partial z}{\partial y} \right )}_x dy,$$
 * $$d z = \left [ {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial y} \right )}_z + {\left ( \frac{\partial z}{\partial y} \right )}_x \right ] d y + {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial z} \right )}_y dz,$$
 * $$\left [ 1 - {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial z} \right )}_y \right ] dz = \left [ {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial y} \right )}_z + {\left ( \frac{\partial z}{\partial y} \right )}_x \right ] d y.$$

Since $$y$$ and $$z$$ are independent variables, $$d y$$ and $$d z$$ may be chosen without restriction. For this last equation to hold in general, the bracketed terms must be equal to zero. The left bracket set to zero leads to the reciprocity relation, while the right bracket set to zero leads to the cyclic relation, as shown below.

Reciprocity relation
Setting the first term in brackets equal to zero yields


 * $${\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial z} \right )}_y = 1.$$

A slight rearrangement gives a reciprocity relation,


 * $${\left ( \frac{\partial z}{\partial x} \right )}_y = \frac{1}{{\left ( \frac{\partial x}{\partial z} \right )}_y}.$$

There are two more permutations of the foregoing derivation that give a total of three reciprocity relations between $$x$$, $$y$$ and $$z$$.

Cyclic relation
The cyclic relation is also known as the cyclic rule or the triple product rule. Setting the second term in brackets equal to zero yields


 * $${\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial y} \right )}_z = - {\left ( \frac{\partial z}{\partial y} \right )}_x.$$

Using a reciprocity relation for $$\tfrac{\partial z}{\partial y}$$ on this equation and reordering gives a cyclic relation (the triple product rule),


 * $${\left ( \frac{\partial x}{\partial y} \right )}_z {\left ( \frac{\partial y}{\partial z} \right )}_x {\left ( \frac{\partial z}{\partial x} \right )}_y = -1.$$

If, instead, reciprocity relations for $$\tfrac{\partial x}{\partial y}$$ and $$\tfrac{\partial y}{\partial z}$$ are used with subsequent rearrangement, a standard form for implicit differentiation is obtained:


 * $${\left ( \frac{\partial y}{\partial x} \right )}_z = - \frac { {\left ( \frac{\partial z}{\partial x} \right )}_y }{ {\left ( \frac{\partial z}{\partial y} \right )}_x }.$$
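The triple product rule can be checked on a concrete equation of state; the ideal-gas example below is my own illustration, not part of this article:

```python
# Concrete check (ideal-gas example, my own illustration):
# the triple product rule for the equation of state P V = n R T.
import sympy as sp

P, V, T, n, R = sp.symbols('P V T n R', positive=True)

dPdV = sp.diff(n*R*T/V, V)       # (dP/dV)_T from P = nRT/V
dVdT = sp.diff(n*R*T/P, T)       # (dV/dT)_P from V = nRT/P
dTdP = sp.diff(P*V/(n*R), P)     # (dT/dP)_V from T = PV/(nR)

product = sp.simplify(dPdV * dVdT * dTdP)    # -n*R*T/(P*V)
# substituting T = PV/(nR) recovers the triple product rule value -1
assert sp.simplify(product.subs(T, P*V/(n*R))) == -1
```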

Some useful equations derived from exact differentials in two dimensions
(See also Bridgman's thermodynamic equations for the use of exact differentials in the theory of thermodynamic equations)

Suppose we have five state functions $$z,x,y,u$$, and $$v$$. Suppose that the state space is two-dimensional and any of the five quantities is a differentiable function of two of the others. Then by the chain rule

 * $$dz = {\left ( \frac{\partial z}{\partial x} \right )}_y dx + {\left ( \frac{\partial z}{\partial y} \right )}_x dy \qquad (1)$$

but also by the chain rule:

 * $$dx = {\left ( \frac{\partial x}{\partial u} \right )}_v du + {\left ( \frac{\partial x}{\partial v} \right )}_u dv \qquad (2)$$

and

 * $$dy = {\left ( \frac{\partial y}{\partial u} \right )}_v du + {\left ( \frac{\partial y}{\partial v} \right )}_u dv \qquad (3)$$

so that (by substituting (2) and (3) into (1)):

 * $$dz = \left [ {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial u} \right )}_v + {\left ( \frac{\partial z}{\partial y} \right )}_x {\left ( \frac{\partial y}{\partial u} \right )}_v \right ] du + \left [ {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial v} \right )}_u + {\left ( \frac{\partial z}{\partial y} \right )}_x {\left ( \frac{\partial y}{\partial v} \right )}_u \right ] dv \qquad (4)$$

which implies that (by comparing (4) with the expansion $$dz = {\left ( \frac{\partial z}{\partial u} \right )}_v du + {\left ( \frac{\partial z}{\partial v} \right )}_u dv$$):

 * $${\left ( \frac{\partial z}{\partial u} \right )}_v = {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial u} \right )}_v + {\left ( \frac{\partial z}{\partial y} \right )}_x {\left ( \frac{\partial y}{\partial u} \right )}_v \qquad (5)$$

Letting $$v=y$$ in (5) gives:

 * $${\left ( \frac{\partial z}{\partial u} \right )}_y = {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial u} \right )}_y \qquad (6)$$

Letting $$u=y$$ in (5) gives:

 * $${\left ( \frac{\partial z}{\partial y} \right )}_v = {\left ( \frac{\partial z}{\partial y} \right )}_x + {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial y} \right )}_v \qquad (7)$$

Letting $$v=z$$ in (7) gives $${\left ( \frac{\partial z}{\partial y} \right )}_z = 0$$ on the left-hand side, so:

 * $${\left ( \frac{\partial z}{\partial y} \right )}_x = - {\left ( \frac{\partial z}{\partial x} \right )}_y {\left ( \frac{\partial x}{\partial y} \right )}_z \qquad (8)$$

using $$(\partial a/\partial b)_c = 1/(\partial b/\partial a)_c$$ gives the triple product rule:

 * $${\left ( \frac{\partial x}{\partial y} \right )}_z {\left ( \frac{\partial y}{\partial z} \right )}_x {\left ( \frac{\partial z}{\partial x} \right )}_y = -1.$$