First-order partial differential equation

In mathematics, a first-order partial differential equation is a partial differential equation that involves only first derivatives of the unknown function of n variables. The equation takes the form


 * $$ F(x_1,\ldots,x_n,u,u_{x_1},\ldots,u_{x_n}) = 0. \,$$

Such equations arise in the construction of characteristic surfaces for hyperbolic partial differential equations, in the calculus of variations, in some geometrical problems, and in simple models for gas dynamics whose solution involves the method of characteristics. If a family of solutions of a single first-order partial differential equation can be found, then additional solutions may be obtained by forming envelopes of solutions in that family. In a related procedure, general solutions may be obtained by integrating families of ordinary differential equations.

General solution and complete integral
The general solution of a first-order partial differential equation is a solution that contains an arbitrary function. A solution that instead contains as many arbitrary constants as there are independent variables is called a complete integral. The following n-parameter family of solutions


 * $$\phi(x_1,x_2,\dots,x_n,u,a_1,a_2,\dots,a_n)$$

is a complete integral if $$\det|\phi_{x_i a_j}|\neq 0$$. The discussion below of the types of integrals follows the textbook A Treatise on Differential Equations (Chapter IX, 6th edition, 1928) by Andrew Forsyth.
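As a hypothetical illustration in two variables (the Clairaut-type family $$u = ax + by + ab$$, which is not taken from the text), the determinant condition can be checked symbolically:

```python
import sympy as sp

# Hypothetical two-parameter family phi = u - a*x - b*y - a*b = 0,
# a complete integral of the Clairaut-type equation u = p*x + q*y + p*q.
x, y, u, a, b = sp.symbols('x y u a b')
phi = u - a*x - b*y - a*b

# Matrix of mixed derivatives phi_{x_i a_j}
M = sp.Matrix([[sp.diff(phi, v, w) for w in (a, b)] for v in (x, y)])
print(M.det())  # 1, which is nonzero, so the family is a complete integral
```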

Complete integral
The solutions are described here in two or three dimensions, where they take a relatively simple form; the key concepts extend directly to higher dimensions. A general first-order partial differential equation in three dimensions has the form


 * $$ F(x,y,z,u,p,q,r)=0, \,$$

where $$ p=u_x,\,q=u_y,\,r=u_z.$$ Suppose $$\phi(x,y,z,u,a,b,c) = 0$$ is a complete integral containing three arbitrary constants $$(a,b,c)$$. From this we obtain three relations by differentiation:


 * $$\phi_x + p \phi_u = 0$$
 * $$\phi_y + q \phi_u = 0$$
 * $$\phi_z + r \phi_u = 0$$

Along with the complete integral $$\phi=0$$, these three relations can be used to eliminate the three constants and obtain an equation (the original partial differential equation) relating $$(x,y,z,u,p,q,r)$$. Note that the elimination of constants leading to the partial differential equation need not be unique: two different equations can share the same complete integral. For example, eliminating the constants from the relation $$u=\sqrt{(x-a)^2+(y-b)^2}+z-c$$ leads to both $$p^2+q^2=1$$ and $$r=1$$.
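This elimination can be verified symbolically. The following sketch (using sympy) differentiates the family $$u=\sqrt{(x-a)^2+(y-b)^2}+z-c$$ from the text and confirms both resulting equations:

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c', real=True)

# The three-parameter family from the text
u = sp.sqrt((x - a)**2 + (y - b)**2) + z - c

# p = u_x, q = u_y, r = u_z
p, q, r = sp.diff(u, x), sp.diff(u, y), sp.diff(u, z)

# Both PDEs obtained by eliminating (a, b, c)
print(sp.simplify(p**2 + q**2))  # 1, i.e. p^2 + q^2 = 1
print(r)                         # 1, i.e. r = 1
```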

General integral
Once a complete integral is found, a general solution can be constructed from it. The general integral is obtained by making the constants functions of the coordinates, i.e., $$a=a(x,y,z),\,b=b(x,y,z),\,c=c(x,y,z)$$. These functions are chosen such that the forms of $$(p,q,r)$$ are unaltered so that the elimination process from complete integral can be utilized. Differentiation of the complete integral now provides


 * $$\phi_x + p \phi_u = -(a_x\phi_a +b_x \phi_b + c_x\phi_c)$$
 * $$\phi_y + q \phi_u = -(a_y\phi_a +b_y \phi_b + c_y\phi_c)$$
 * $$\phi_z + r \phi_u = -(a_z\phi_a +b_z \phi_b + c_z\phi_c)$$

in which we require the right-hand sides of all three equations to vanish identically, so that the elimination of $$(a,b,c)$$ from $$\phi$$ again results in the partial differential equation. This requirement can be written more compactly (by treating the vanishing right-hand sides as a linear system for $$\phi_a,\phi_b,\phi_c$$) as


 * $$J\phi_a=0,\quad J\phi_b=0,\quad J\phi_c=0$$

where


 * $$J=\frac{\partial(a,b,c)}{\partial(x,y,z)}=\begin{vmatrix} a_x & a_y & a_z \\ b_x & b_y & b_z \\ c_x & c_y & c_z \end{vmatrix}$$

is the Jacobian determinant. The condition $$J=0$$ leads to the general solution. Whenever $$J=0$$, there exists a functional relation between $$(a,b,c)$$, because a vanishing determinant means the columns (or rows) are linearly dependent. Take this functional relation to be


 * $$c=\psi(a,b).$$

From this relation we have $$dc = \psi_a\,da + \psi_b\,db$$. Multiplying the equations $$a_x\phi_a +b_x \phi_b + c_x\phi_c=0$$, $$a_y\phi_a +b_y \phi_b + c_y\phi_c=0$$ and $$a_z\phi_a +b_z \phi_b + c_z\phi_c=0$$ by $$dx$$, $$dy$$ and $$dz$$ respectively and adding, we find $$\phi_a\,da+\phi_b\,db+\phi_c\,dc=0$$. Eliminating $$dc$$ between these two relations, we obtain


 * $$(\phi_a+\phi_c\psi_a)da+(\phi_b+\phi_c\psi_b)db=0$$

Since $$a$$ and $$b$$ are independent, we require


 * $$(\phi_a+\phi_c\psi_a)=0$$
 * $$(\phi_b+\phi_c\psi_b)=0.$$

These two equations can be solved for $$a$$ and $$b$$. Substituting $$(a,b,c)$$ into $$\phi=0$$, we obtain the general integral. A general integral thus describes a relation between $$(x,y,z,u)$$, two known independent functions $$(a,b)$$, and an arbitrary function $$\psi(a,b)$$. Note that we assumed $$c=\psi(a,b)$$ to make the determinant $$J$$ vanish, but this is not the only possibility: the relations $$c=\psi(a)$$ or $$c=\psi(b)$$ also suffice to make the determinant zero.

Singular integral
The singular integral is obtained when $$J\neq 0$$. In this case, elimination of $$(a,b,c)$$ from $$\phi=0$$ works only if


 * $$\phi_a=0, \quad \phi_b=0, \quad \phi_c=0.$$

These three equations can be solved for the three unknowns $$(a,b,c)$$. The solution obtained by eliminating $$(a,b,c)$$ in this way is called a singular integral.

Special integral
Most integrals fall into the three categories defined above, but it may happen that a solution fits none of them. Such solutions are called special integrals. A relation $$\chi(x,y,z,u)=0$$ that satisfies the partial differential equation is said to be a special integral if we are unable to determine $$(a,b,c)$$ from the following equations:


 * $$\phi_x\chi_u-\chi_x\phi_u=0$$
 * $$\phi_y\chi_u-\chi_y\phi_u=0$$
 * $$\phi_z\chi_u-\chi_z\phi_u=0.$$

If we are able to determine $$(a,b,c)$$ from the above set of equations, then $$\chi=0$$ turns out to be one of the three integrals described above.

Two dimensional case
The complete integral in two-dimensional space can be written as $$\phi(x,y,u,a,b)=0$$. The general integral is obtained by eliminating $$a$$ from the following equations


 * $$\phi(x,y,u,a,\psi(a))=0,\quad \phi_a+\psi_a\phi_b=0.$$
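As a concrete sketch of this elimination, take again the hypothetical Clairaut-type complete integral $$u = ax + by + ab$$ (not from the text) and choose the arbitrary function as $$b=\psi(a)=a$$:

```python
import sympy as sp

x, y, u, a = sp.symbols('x y u a')

# Hypothetical complete integral phi = u - a*x - b*y - a*b of u = p*x + q*y + p*q,
# with the arbitrary function chosen as b = psi(a) = a.
psi = a
phi = u - a*x - psi*y - a*psi

# Eliminate a between phi = 0 and d(phi)/da = 0
a_star = sp.solve(sp.diff(phi, a), a)[0]       # a = -(x + y)/2
general = sp.solve(phi.subs(a, a_star), u)[0]  # the general integral u = -(x + y)**2/4

# Check that it satisfies u = p*x + q*y + p*q
p, q = sp.diff(general, x), sp.diff(general, y)
print(sp.simplify(p*x + q*y + p*q - general))  # 0
```

A different choice of $$\psi$$ produces a different member of the general integral; the arbitrary function is what makes the solution "general".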

The singular integral, if it exists, can be obtained by eliminating $$(a,b)$$ from the following equations:


 * $$\phi(x,y,u,a,b)=0,\quad \phi_a=0, \quad \phi_b=0.$$
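For the same hypothetical Clairaut-type example $$u = ax + by + ab$$, the singular integral follows directly from $$\phi=0$$, $$\phi_a=0$$, $$\phi_b=0$$:

```python
import sympy as sp

x, y, u, a, b = sp.symbols('x y u a b')

# Hypothetical complete integral of u = p*x + q*y + p*q
phi = u - a*x - b*y - a*b

# Solve phi_a = 0 and phi_b = 0 for (a, b), then substitute into phi = 0
sol = sp.solve([sp.diff(phi, a), sp.diff(phi, b)], [a, b])  # a = -y, b = -x
singular = sp.solve(phi.subs(sol), u)[0]
print(singular)  # -x*y

# Verify that the singular integral satisfies the PDE
p, q = sp.diff(singular, x), sp.diff(singular, y)
print(sp.simplify(p*x + q*y + p*q - singular))  # 0
```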

If a complete integral is not available, solutions may still be obtained by solving a system of ordinary differential equations. To obtain this system, first note that the PDE determines a cone (analogous to the light cone) at each point: if the PDE is quasi-linear, i.e. linear in the derivatives of u, the cone degenerates into a line. In the general case, the pairs (p,q) that satisfy the equation determine a family of planes at a given point:
 * $$ u - u_0 = p(x-x_0) + q(y-y_0), \,$$

where
 * $$ F(x_0,y_0,u_0,p,q) =0.\,$$

The envelope of these planes is a cone, or a line if the PDE is quasi-linear. The condition for an envelope is
 * $$ F_p\, dp + F_q \,dq =0, \,$$

where F is evaluated at $$ (x_0, y_0,u_0,p,q)$$, and dp and dq are increments of p and q that satisfy F=0. Hence the generator of the cone is a line with direction
 * $$ dx:dy:du = F_p:F_q:(pF_p + qF_q). \,$$

This direction corresponds to the light rays for the wave equation. To integrate the differential equations along these directions, we require the increments of p and q along the ray. These can be obtained by differentiating the PDE with respect to x and y (using $$p_y = q_x$$):
 * $$ F_x +F_u p + F_p p_x + F_q p_y =0, \,$$
 * $$ F_y +F_u q + F_p q_x + F_q q_y =0,\,$$

Therefore the ray direction in $$(x,y,u,p,q)$$ space is


 * $$ dx:dy:du:dp:dq = F_p:F_q:(pF_p + qF_q):(-F_x-F_u p):(-F_y - F_u q). \,$$

The integration of these equations leads to a ray conoid at each point $$(x_0,y_0,u_0)$$. General solutions of the PDE can then be obtained from envelopes of such conoids.
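These characteristic equations can be integrated numerically. The sketch below (plain Python, simple Euler stepping) traces one ray for the eikonal-type example $$F=p^2+q^2-1$$, for which $$F_p=2p$$, $$F_q=2q$$ and $$F_x=F_y=F_u=0$$, so p and q stay constant and the rays are straight lines; the example equation and the function names are illustrative assumptions, not from the text.

```python
def trace_ray(Fx, Fy, Fu, Fp, Fq, state, ds=1e-3, steps=1000):
    """Euler integration of the characteristic strip equations
    dx = Fp, dy = Fq, du = p*Fp + q*Fq, dp = -(Fx + Fu*p), dq = -(Fy + Fu*q)."""
    x, y, u, p, q = state
    for _ in range(steps):
        dx, dy = Fp(x, y, u, p, q), Fq(x, y, u, p, q)
        du = p * dx + q * dy
        dp = -(Fx(x, y, u, p, q) + Fu(x, y, u, p, q) * p)
        dq = -(Fy(x, y, u, p, q) + Fu(x, y, u, p, q) * q)
        x, y, u, p, q = x + ds*dx, y + ds*dy, u + ds*du, p + ds*dp, q + ds*dq
    return x, y, u, p, q

# Eikonal-type example F = p^2 + q^2 - 1 (an assumed illustration):
zero = lambda x, y, u, p, q: 0.0
x, y, u, p, q = trace_ray(Fx=zero, Fy=zero, Fu=zero,
                          Fp=lambda x, y, u, p, q: 2*p,
                          Fq=lambda x, y, u, p, q: 2*q,
                          state=(0.0, 0.0, 0.0, 1.0, 0.0))
print(x, y, u)  # approximately x = 2, y = 0, u = 2: a straight ray along x
```

Because the derivatives are constant for this example, the Euler steps reproduce the straight ray essentially exactly; a general F would call for a smaller step or a higher-order integrator.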

Definitions of linear dependence for differential systems
This part follows $$\S 1.2.3$$ of Courant and Hilbert's book: "We assume that these $$h$$ equations are independent, i.e., that none of them can be deduced from the other by differentiation and elimination."

- Courant, R. & Hilbert, D. (1962). Methods of Mathematical Physics, Vol. II: Partial Differential Equations, pp. 15–18

An equivalent description follows. Two definitions of linear dependence are given for systems of first-order linear partial differential equations:

 * $$(*)\quad \left\{ \begin{array}{c} \sum\limits_{ij} a_{ij}^{(1)}\dfrac{\partial y_j}{\partial x_i} + f_1 = 0 \\ \vdots \\ \sum\limits_{ij} a_{ij}^{(n)}\dfrac{\partial y_j}{\partial x_i} + f_n = 0 \end{array} \right.$$

where $$x_i$$ are the independent variables, $$y_j$$ are the dependent unknowns, $$a_{ij}^{(k)}$$ are the linear coefficients, and $$f_k$$ are the non-homogeneous terms. Let $$Z_k \equiv \sum_{ij} a_{ij}^{(k)}\dfrac{\partial y_j}{\partial x_i} + f_k$$.

Definition I: Given a number field $$P$$, if there are coefficients $$c_k\in P$$, not all zero, such that $$\sum_{k} c_k Z_k = 0$$, then the equations (*) are linearly dependent.

Definition II (differential linear dependence): Given a number field $$P$$, if there are coefficients $$c_k, d_{kl}\in P$$, not all zero, such that $$\sum_k c_k Z_k + \sum_{kl} d_{kl} \dfrac{\partial Z_k}{\partial x_l} = 0$$, then the equations (*) are differentially linearly dependent. If $$d_{kl} \equiv 0$$, this definition degenerates into Definition I.

The div-curl systems, Maxwell's equations, Einstein's equations (in four harmonic coordinates) and the Yang–Mills equations (with gauge conditions) are well-determined under Definition II, whereas they are over-determined under Definition I.

Characteristic surfaces for the wave equation
Characteristic surfaces for the wave equation are level surfaces for solutions of the equation
 * $$ u_t^2 = c^2 \left(u_x^2 +u_y^2 + u_z^2 \right). \,$$

There is little loss of generality if we set $$u_t =1$$: in that case u satisfies
 * $$ u_x^2 + u_y^2 + u_z^2= \frac{1}{c^2}. \,$$

In vector notation, let
 * $$ \vec x = (x,y,z) \quad \hbox{and} \quad \vec p = (u_x, u_y, u_z).\,$$

A family of solutions with planes as level surfaces is given by
 * $$ u(\vec x) = \vec p \cdot (\vec x - \vec{x_0}), \,$$

where
 * $$ | \vec p \,| = \frac{1}{c}, \quad \text{and} \quad \vec{x_0} \quad \text{is arbitrary}.\,$$

If x and x0 are held fixed, the envelope of these solutions is obtained by finding a point on the sphere of radius 1/c where the value of u is stationary. This is true if $$ \vec p$$ is parallel to $$\vec x - \vec{x_0}$$. Hence the envelope has equation
 * $$ u(\vec x) = \pm \frac{1}{c} | \vec x -\vec{x_0} \,|.$$

These solutions correspond to spheres whose radius grows or shrinks with velocity c. These are light cones in space-time.
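The envelope formula can be checked directly. The following sympy sketch confirms that $$u = \frac{1}{c}|\vec x - \vec{x_0}|$$ satisfies $$u_x^2+u_y^2+u_z^2 = 1/c^2$$:

```python
import sympy as sp

x, y, z, x0, y0, z0, c = sp.symbols('x y z x0 y0 z0 c', positive=True)

# Envelope solution: a sphere of growing radius centered at (x0, y0, z0)
u = sp.sqrt((x - x0)**2 + (y - y0)**2 + (z - z0)**2) / c

grad_sq = sp.diff(u, x)**2 + sp.diff(u, y)**2 + sp.diff(u, z)**2
print(sp.simplify(grad_sq))  # simplifies to 1/c**2, as required
```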

The initial value problem for this equation consists in specifying a level surface S where u=0 for t=0. The solution is obtained by taking the envelope of all the spheres with centers on S, whose radii grow with velocity c. This envelope is obtained by requiring that
 * $$ \frac{1}{c} | \vec x - \vec{x_0}\, | \quad \hbox{is stationary for} \quad \vec{x_0} \in S. \,$$

This condition will be satisfied if $$ | \vec x - \vec{x_0}\, |$$ is normal to S. Thus the envelope corresponds to motion with velocity c along each normal to S. This is the Huygens' construction of wave fronts: each point on S emits a spherical wave at time t=0, and the wave front at a later time t is the envelope of these spherical waves. The normals to S are the light rays.