User:Prokaryotic Caspase Homolog/sandbox 3

Introduction to the mathematics of curved spacetime
The approach to tensors adopted here follows closely an older presentation by Lillian Lieber (1945, 2008), which was written to be accessible to anybody with a basic understanding of calculus. Lieber used the coordinate transformation approach to tensor analysis. The modern approach stresses the geometrical nature of tensors rather than the transformation properties of their components. Because of its coordinate-free nature, the abstract view is often considered more physical. However, books on general relativity written to be usable by autodidacts (textbooks as well as semi-popularizations) usually adopt the coordinate transformation approach, as it requires less mathematical sophistication on the part of the reader. Several textbooks, including that by Adler, provide side-by-side explanations in terms of both the classic view and the modern abstract view.

This non-rigorous introduction to the mathematics of general relativity stops at the vacuum field equations which are valid only in regions of space where the energy-momentum tensor is zero, which is to say, in regions devoid of mass-energy. Nevertheless, a variety of interesting results are possible with this limited approach, including derivation of the Schwarzschild metric and an exploration of some of its consequences.

Describing the shape of space and spacetime
In the section of this article on the Spacetime interval, the reader has been introduced to the concept of the interval $$s^2$$ and has been told, without detailed explanation, that the properties of this interval serve to characterize the geometric properties of the space (or spacetime) on which the interval has been defined.

For example, in a Euclidean plane, the Pythagorean theorem holds for right triangles drawn in that plane.

Conversely, if the distance between two points on a surface is given by
 * $$s^2 = x^2 + y^2$$

then that surface is necessarily a Euclidean plane.

Failure of the Pythagorean theorem to hold implies that a surface has an intrinsic curvature. The intrinsic curvature of the surface can be ascertained solely from measurements made from within that surface, without external comparisons, and without information that might be obtained by measurements obtained from any higher-dimensional space in which the surface may be embedded. Intrinsic curvature is to be distinguished from extrinsic curvature. If one takes a flat sheet and rolls it into a cylinder, the surface has extrinsic curvature, but the Pythagorean theorem continues to hold for measurements made within the surface, so the surface has no intrinsic curvature. General relativity is concerned only with the intrinsic curvature of spacetime.

In differential calculus, the student learns how to apply the Pythagorean theorem in computing lengths along a curve, as in Fig. 6–1a, where the differential form of the theorem is
 * $$ds^2 = dx^2 + dy^2 $$

In most of the forthcoming discussion we will prefer to use generalized coordinates, substituting $$x_1$$ for $$x$$ and $$x_2$$ for $$y ,$$ i.e.
 * $$ds^2 = dx_1^2 + dx_2^2 $$

The properties of a space do not depend on the coordinate system used to make measurements within that space. What would be the equivalent of ($$) for measurements made in other coordinate systems?

For polar coordinates, as shown in Fig. 6–1b, the relevant expression would be
 * $$ds^2 = dr^2 + r^2 \, d\theta^2 $$

where the equivalent expression using generalized coordinates, substituting $$x_1$$ for $$r$$ and $$x_2$$ for $$\theta ,$$ is
 * $$ds^2 = dx_1^2 + x_1^2 \, dx_2^2 $$

For oblique coordinates, as shown in Fig. 6–1c, the law of cosines allows us to write
 * $$ds^2 = dx^2 + dy^2 - 2 \, dx \, dy \cos \alpha $$

where $$\alpha$$ is the angle indicated in the figure.

and the equivalent expression using generalized coordinates would be
 * $$ds^2 = dx_1^2 + dx_2^2 - 2 \, dx_1 \, dx_2 \cos \alpha $$

What of surfaces with a bona fide intrinsic curvature? In Fig. 6–1d, we illustrate a sphere on which has been drawn the elements of the spherical coordinate system. With the understanding that $$r = R \cos \beta ,$$ we note that
 * $$ds^2 = r^2 \, d\alpha^2 + R^2 \, d\beta^2 $$

and the equivalent expression, replacing $$\alpha$$ with $$x_1$$ and $$\beta$$ with $$x_2 ,$$ would be
 * $$ds^2 = r^2 \, dx_1^2 + R^2 \, dx_2^2 $$

The expression for $$ds^2$$ depends on both the intrinsic properties of the surface and the coordinate system used to describe that surface. Therefore, a cursory examination of $$ds^2$$ will not suffice to determine the characteristics of the surface that we are dealing with. To determine the characteristics of the surface starting from $$ds^2 ,$$ we must determine the curvature tensor.
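The coordinate-independence of lengths computed from $$ds^2$$ can be checked numerically. The following sketch (an illustrative aside using Python with the NumPy library; the variable names are inventions of this example) integrates the length of a quarter unit circle in a Euclidean plane, once using the Cartesian form of $$ds^2$$ and once using the polar form:

```python
import numpy as np

# Length of a quarter unit circle, computed by summing ds along the curve,
# first with the Cartesian form ds^2 = dx^2 + dy^2, then with the polar
# form ds^2 = dr^2 + r^2 dtheta^2.  The exact answer is pi/2.
t = np.linspace(0.0, np.pi / 2, 100_000)
x, y = np.cos(t), np.sin(t)

# Cartesian form: sum sqrt(dx^2 + dy^2) over many small steps
length_cartesian = np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2))

# Polar form: on this arc r = 1 and theta = t, so ds is essentially r dtheta
r = np.hypot(x, y)
theta = np.arctan2(y, x)
length_polar = np.sum(np.sqrt(np.diff(r) ** 2 + r[:-1] ** 2 * np.diff(theta) ** 2))
```

Both sums converge to $$\pi/2 \approx 1.5708$$: the two expressions for $$ds^2$$ look different, but they describe the same underlying surface.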

What are tensors?
In precalculus, one learns about scalars and vectors. Scalars are quantities that have magnitude only, while vectors have both magnitude and direction. Measurements such as temperature and age are scalars, whereas measurements of velocity, momentum, acceleration and force are vectors.

Tensors are mathematical objects that have found great use in science and engineering. "Tensor" is an inclusive term that covers scalars and vectors as special cases: A scalar is a tensor of rank zero, while a vector is a tensor of rank one.

A familiar engineering use of tensors is in the representation of compressive, tensile, and shear stresses on an object. A pure force (a vector) acting uniformly on an entire object will not cause the object to deform; instead, the object will accelerate uniformly, and the object will not "feel" any effects of the force. It is the differential application of forces on different parts of an object that exerts stress on the object, causing mechanical strain.

In Fig 6–2, consider a small surface element which is being acted upon by the force $$AB$$. The area and orientation of this surface element is represented by the vector $$AG$$, which is perpendicular to the surface and whose magnitude represents the area of the surface element. The stress at $$A$$ depends on both vectors and is a tensor of rank two.

Tensors exist independently of any coordinate system. However, for computational purposes, it is convenient to decompose a tensor into components.

In Fig 6–3a, a force $$F$$ acts on a small surface $$dS$$ where $$G$$ is the vector that represents the area and orientation of this surface element. In Fig 6–3b, the projections of this surface element $$dS_x, dS_y,$$ and $$dS_z$$ on the $$yz, xz,$$ and $$xy$$ planes, respectively, are illustrated. The x, y, and z components of $$G$$ (not illustrated) represent the areas and orientations of these three projections.

The total effect of the force $$F$$ on $$dS$$ can be computed by considering the effect of each of its three components, $$f_x, \, f_y,$$ and $$f_z$$ on each of the three projections $$dS_x, \, dS_y, \, $$ and $$dS_z.$$

The x-component of $$F,$$ which is $$f_x,$$ acts on each of the aforementioned projections, and the "pressure" (force per unit area) from $$f_x$$ acting on each of these projections is designated as $$p_{xx}, \, p_{xy}, \, p_{xz},$$ respectively. Since force equals pressure times area, we can write:
 * $$f_x = p_{xx} dS_x \, + \, p_{xy} dS_y \, + \, p_{xz} dS_z $$

Likewise, for $$f_y$$ and $$f_z,$$ we write
 * $$f_y = p_{yx} dS_x \, + \, p_{yy} dS_y \, + \, p_{yz} dS_z $$
 * $$f_z = p_{zx} dS_x \, + \, p_{zy} dS_y \, + \, p_{zz} dS_z $$

The total stress $$F$$ on the surface $$dS$$ is $$F = f_x \, + \, f_y \, + f_z, $$ so that
 * $$F = (p_{xx} dS_x + p_{xy} dS_y + p_{xz} dS_z) \, + \, (p_{yx} dS_x + p_{yy} dS_y + p_{yz} dS_z) \, + \, (p_{zx} dS_x + p_{zy} dS_y + p_{zz} dS_z) $$

In three-dimensional space, force (a vector) has three components, but stress (a tensor of rank two) has nine components. In $$n$$-dimensional space, a tensor of rank three will have $$n^3$$ components, and so forth.

In $$n$$-dimensional space, the $$n$$ components of a vector are written in a single row, but the $$n^2$$ components of a tensor of rank two are written in a square array.

Effect of changes in the coordinate system
Relativity is concerned with finding the physical laws which hold good for all observers, regardless of their viewpoint (coordinate system). In 1905, with special relativity, Einstein considered changes in viewpoint due to differences in uniform relative velocity. In 1916, with general relativity, Einstein generalized the idea to include observers in much more complex relationships with each other. The concept of invariance that Einstein introduced is one of the most fundamental in all of physics. Tensors are objects that are intrinsically invariant under transformation of coordinate systems. In the following, we explore the effects of such transformation, beginning with a simple rotation of coordinates.

In Fig. 6–4, consider a conventional Cartesian coordinate system in the $$xy$$ plane. Suppose we transform to a new $$\bar x, \, \bar y$$ coordinate system that is obtained from the $$x, \, y$$ system by rotating the coordinate axes by angle $$ \theta $$ about the origin. If point $$A$$ has coordinates $$x, \, y$$ in the first coordinate system, its coordinates in the barred system are given by
 * $$\bar x = x \cos \theta + y \sin \theta $$
 * $$\bar y = -x \sin \theta + y \cos \theta $$

The inverse transformation, calculating $$x$$ and $$y$$ given $$\bar x$$ and $$ \bar y,$$ is readily obtained from this first transformation.
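As a quick sanity check (an illustrative Python sketch, not part of Lieber's derivation), one can apply the rotation, recover the original coordinates by rotating through $$-\theta,$$ and confirm that the distance of a point from the origin is unchanged:

```python
import numpy as np

theta = 0.3  # rotation angle in radians

def to_barred(x, y, theta):
    """Forward transformation: barred coordinates from unbarred ones."""
    x_bar = x * np.cos(theta) + y * np.sin(theta)
    y_bar = -x * np.sin(theta) + y * np.cos(theta)
    return x_bar, y_bar

def to_unbarred(x_bar, y_bar, theta):
    """Inverse transformation: rotate back through -theta."""
    return to_barred(x_bar, y_bar, -theta)

x, y = 2.0, 5.0
x_bar, y_bar = to_barred(x, y, theta)
x_back, y_back = to_unbarred(x_bar, y_bar, theta)

# The squared distance from the origin is invariant under the rotation
r2_before = x ** 2 + y ** 2
r2_after = x_bar ** 2 + y_bar ** 2
```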

Through a series of steps, we will generalize this notation to encompass other transformations in an arbitrary number of dimensions. The generalized notation will allow an elegantly condensed method of writing the equations that simplifies complex manipulations.

Our first generalization is to rewrite the transformation so that it is no longer tied to a specific form of rotation:


 * $$\bar x = a \cdot x + b \cdot y $$
 * $$\bar y = c \cdot x + d \cdot y $$

where $$a, \, b, \, c, \, d \,$$ are functions of $$\theta. \,$$ In differential form, we may write the following:
 * $$d \bar x = a \cdot dx + b \cdot dy $$
 * $$d \bar y = c \cdot dx + d \cdot dy $$

We further generalize by using $$dx^1$$ and $$dx^2$$ instead of $$dx$$ and $$dy$$, and by using the single letter $$a$$ with different subscripts instead of four different letters $$a, \, b, \, c, \, d.$$

'''We will henceforth mostly be using coordinates distinguished by superscripts rather than subscripts for reasons that will be discussed later. These superscripts are not to be confused with exponentiation:'''

 * $$d \bar x^1 = a_{11} \, dx^1 + a_{12} \, dx^2 $$
 * $$d \bar x^2 = a_{21} \, dx^1 + a_{22} \, dx^2 $$

The subscripted $$a$$'s are now understood as representing partial derivatives, with $$a_{11}$$ being the change in $$\bar x^1$$ due to a change in $$x^1$$ and so forth.

Notational simplifications
The two equations in ($$) may be rewritten in a single line:
 * $$d \bar x^\mu = \sum_{\sigma=1}^{2} \frac{\partial \bar x^\mu}{\partial x^\sigma} \, dx^\sigma \quad \quad (\mu = 1, 2) $$

The Einstein summation convention enables further abbreviation. Whenever a symbol occurs twice in a single term (e.g. the $$\sigma$$ in the right-hand member of ($$)), it is understood that a summation is to be made on that subscript (or superscript). Hence, we may rewrite ($$) as follows:
 * $$d \bar x^\mu = \frac{\partial \bar x^\mu}{\partial x^\sigma} \, dx^\sigma $$

Let $$x^\mu$$ be the coordinates of a point $$P$$ in a space of dimensionality n. Let $$P'$$ be a neighboring point having coordinates $$x^\mu + dx^\mu$$ as measured in the first frame. The coordinates of $$P'$$ in the second frame will be $$\bar x^\mu + d \bar x^\mu .$$ The n quantities $$dx^\mu$$ are understood to be the components of the displacement vector $$\vec{PP'}$$ as measured in the first frame, while $$d \bar x^\mu$$ are the components of this same displacement vector as measured in the second frame. These are related to the components measured in the first frame by the transformation equation ($$).

The appearance of equation ($$) may be simplified further as follows: Given that $$dx^1$$ and $$dx^2$$ are the components of $$ds$$ in the unbarred system, we represent them more briefly by $$V^1$$ and $$V^2.$$ Likewise, given that $$d \bar x^1$$ and $$d \bar x^2$$ are the components of $$ds$$ in the barred system, we represent them more briefly by $$\bar V^1$$ and $$\bar V^2.$$

On the right side of ($$), $$\mu,$$ which is not repeated, is known as a free index, while the repeated summation indices are known as dummy indices, since they disappear when performing the summation. Unless stated otherwise, any free index shall have the same range as the dummy indices. Hence, in ($$),
 * $$\begin{pmatrix} \mu=1,2 \\ \sigma=1,2  \end{pmatrix} $$ may be written as $$(n = 2).$$

These superscripts should not be confused with exponents. $$V^2$$ is not the square of $$V.$$ Rather, these superscripts are used for indexing purposes, the same as subscripts. Superscripts and subscripts are used for distinct purposes which will be explained shortly.

Hence, ($$) may be rewritten as follows:
 * $$\bar V^\mu = \frac{\partial \bar x^\mu}{\partial x^\sigma} V^\sigma $$

Given a vector $$V^\sigma$$, whose components are $$V^1$$ and $$V^2$$ in a given coordinate system, ($$) allows computation of its components in a new coordinate system related to the first by the transformation represented in ($$).

Actually, ($$) and ($$) are valid not merely for the transformation represented in ($$), but are valid for any transformation of coordinates (provided that the values of $$x^\sigma$$ and $$\bar x^\mu$$ are in one-to-one correspondence). In other words, in the transformation represented by
 * $$ \bar x^\mu = f^\mu(x^\sigma),$$

where $$f^\mu$$ are arbitrary functions, ($$) and ($$) allow computation of the vector components in the transformed coordinate system.

Any set of quantities that transforms according to ($$) is, by definition, a vector, or more precisely, a contravariant vector. One should also note that ($$) is extensible to vectors of any number of dimensions. In the curved spacetime of general relativity, one cannot think of vectors as being directed line segments stretching from one point to another. A set of coordinates $$x^n$$ does not form a vector. In the case discussed here, a contravariant vector is the set of coordinate differentials $$dx^n$$ along some given curve.
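The contravariant transformation law can be illustrated numerically. In this sketch (an illustrative Python aside; the choice of the polar-to-Cartesian map and all variable names are for this example only), the coefficients $$\partial \bar x^\mu / \partial x^\sigma$$ are assembled into a matrix and applied to the components of a vector:

```python
import numpy as np

# Contravariant law  V̄^mu = (∂x̄^mu/∂x^sigma) V^sigma  for the map from
# polar coordinates (r, theta) to Cartesian (x, y):
#     x = r cos(theta),  y = r sin(theta)
r, th = 2.0, 0.5
J = np.array([
    [np.cos(th), -r * np.sin(th)],   # ∂x/∂r,  ∂x/∂theta
    [np.sin(th),  r * np.cos(th)],   # ∂y/∂r,  ∂y/∂theta
])

V = np.array([0.3, 0.7])             # components (V^1, V^2) in polar coordinates
V_bar = np.einsum('ms,s->m', J, V)   # summation over the repeated index sigma

# Finite-difference check: a small displacement eps*V in polar coordinates
# moves the point by approximately eps*V_bar in Cartesian coordinates.
eps = 1e-6
p0 = np.array([r * np.cos(th), r * np.sin(th)])
p1 = np.array([(r + eps * V[0]) * np.cos(th + eps * V[1]),
               (r + eps * V[0]) * np.sin(th + eps * V[1])])
```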

Using this notation, a contravariant tensor of rank two is defined as follows:
 * $$\bar V^{\alpha \beta} = \frac{\partial \bar x^\alpha}{\partial x^\gamma} \frac{\partial \bar x^\beta}{\partial x^\delta} V^{\gamma \delta} $$

Since $$\gamma$$ and $$\delta$$ each occur twice in the term on the right, it is understood that the term represents a sum for $$\gamma$$ and $$\delta$$ over their entire ranges. On the other hand, neither $$\alpha$$ nor $$\beta$$ occur twice in any single term. In three-space, $$\alpha, \, \beta, \, \gamma, \, \delta$$ each range over $$1, \, 2, \, 3, $$ so the interpretation of ($$) is that it represents nine equations, each equation having the sum of nine terms on the right.

For example, given $$\alpha = 2, \, \beta=3, $$ ($$) expands to the following:


 * $$\bar V^{23} = \frac{\partial \bar x^2}{\partial x^1}\frac{\partial \bar x^3}{\partial x^1} V^{11} + \frac{\partial \bar x^2}{\partial x^1}\frac{\partial \bar x^3}{\partial x^2} V^{12} $$ $$ + \; \frac{\partial \bar x^2}{\partial x^1}\frac{\partial \bar x^3}{\partial x^3} V^{13} $$
 * $$ \quad \quad + \, \frac{\partial \bar x^2}{\partial x^2}\frac{\partial \bar x^3}{\partial x^1} V^{21} + \frac{\partial \bar x^2}{\partial x^2}\frac{\partial \bar x^3}{\partial x^2} V^{22} $$ $$ + \; \frac{\partial \bar x^2}{\partial x^2}\frac{\partial \bar x^3}{\partial x^3} V^{23} $$
 * $$ \quad \quad + \, \frac{\partial \bar x^2}{\partial x^3}\frac{\partial \bar x^3}{\partial x^1} V^{31} + \frac{\partial \bar x^2}{\partial x^3}\frac{\partial \bar x^3}{\partial x^2} V^{32} $$ $$ + \; \frac{\partial \bar x^2}{\partial x^3}\frac{\partial \bar x^3}{\partial x^3} V^{33} $$

In four-space, ($$) expands to sixteen equations, each having a sum of sixteen terms on the right.
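The bookkeeping power of the summation convention can be appreciated by letting a computer expand the sums. This Python sketch (illustrative only; the transformation coefficients are random stand-ins for the values of $$\partial \bar x^\alpha / \partial x^\gamma$$ at a point) checks the explicit nine-equation, nine-term expansion against a one-line summation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                 # dimensionality (three-space)
J = rng.standard_normal((n, n))       # stand-in for ∂x̄^alpha/∂x^gamma at a point
V = rng.standard_normal((n, n))       # contravariant rank-two components V^{gamma delta}

# Explicit expansion: one equation per pair of free indices (alpha, beta),
# each a double sum over the dummy indices (gamma, delta)
V_bar_loop = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        for g in range(n):
            for d in range(n):
                V_bar_loop[a, b] += J[a, g] * J[b, d] * V[g, d]

# The same nine equations of nine terms each, via the summation convention
V_bar = np.einsum('ag,bd,gd->ab', J, J, V)
```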

The notation presented here hence offers a concise representation of complex mathematical objects.

Tensor addition and multiplication
Tensor algebra includes various operations for making new tensors from old tensors. Here we begin with tensor addition, starting with tensors of rank one (vectors) in a plane.

Suppose we have two contravariant vectors in a plane, $$A^\alpha$$ with components $$A^1$$ and $$A^2$$, and a second such vector, $$B^\alpha$$ with components $$B^1$$ and $$B^2$$. Let us form another quantity, $$C^\alpha,$$ by adding the corresponding components of $$A^\alpha$$ and $$B^\alpha$$. In other words, $$C^1 = A^1 + B^1$$ and $$C^2 = A^2 + B^2$$.

We ask whether the resulting quantity $$C^\alpha$$ is a vector, i.e. does it transform according to ($$)? Since $$A^\alpha$$ and $$B^\alpha$$ are contravariant vectors, we may write:

Taking the components one at a time, we may write, for the first components:
 * $$\bar A^1 = \frac{\partial \bar x^1}{\partial x^1} A^1 + \frac{\partial \bar x^1}{\partial x^2} A^2 $$
 * $$\bar B^1 = \frac{\partial \bar x^1}{\partial x^1} B^1 + \frac{\partial \bar x^1}{\partial x^2} B^2 $$

and likewise for the second components. Summing these, we obtain for the first and second components:
 * $$\bar A^1 + \bar B^1 = \frac{\partial \bar x^1}{\partial x^1} (A^1 + B^1) $$ $$ + \; \frac{\partial \bar x^1}{\partial x^2} ( A^2 + B^2)  $$
 * $$\bar A^2 + \bar B^2 = \frac{\partial \bar x^2}{\partial x^1} (A^1 + B^1) $$ $$ + \; \frac{\partial \bar x^2}{\partial x^2} ( A^2 + B^2)  $$

The above two equations may be rewritten more compactly as
 * $$\bar A^\mu + \bar B^\mu = \frac{\partial \bar x^\mu}{\partial x^\sigma} \left( A^\sigma + B^\sigma \right) $$

or, using $$C \, \text{'s}$$ to represent each summed component,
 * $$\bar C^\mu = \frac{\partial \bar x^\mu}{\partial x^\sigma} C^\sigma $$

Since $$C^\alpha$$ transforms according to ($$), we have established that the sum of two vectors is another vector. The same holds for tensors of higher rank.

Note in particular how ($$) may be obtained by summing ($$) and ($$) as if they were each single equations with a single term on the right, when in reality, each represents multiple equations with multiple terms on the right.

The notational system used here, developed by Ricci and Levi-Civita about 1900, with later enhancements by Einstein, permits complex operations to be performed following a relatively simple algebraic process often termed "index juggling". The notation automatically keeps track of whole sets of equations having many terms in each. We illustrate here with a process of multiplying tensors called "outer multiplication".

If we wish to multiply
 * $$\bar A^\lambda = \frac{\partial \bar x^\lambda}{\partial x^\alpha} A^\alpha $$

by
 * $$\bar B^\mu = \frac{\partial \bar x^\mu}{\partial x^\beta} B^\beta $$

we can immediately write
 * $$\bar C^{\lambda \mu} = \frac{\partial \bar x^\lambda}{\partial x^\alpha} \frac{\partial \bar x^\mu}{\partial x^\beta} C^{\alpha \beta} $$
In outer multiplication, each equation of ($$) is to be multiplied by each equation of ($$), so there would be four multiplications. Written in expanded form, the first equation of ($$), with $$\lambda = 1,$$ and the first equation of ($$), with $$\mu = 1,$$ are, respectively,


 * $$\bar A^1 = \frac{\partial \bar x^1}{\partial x^1 } A^1 + \frac{\partial \bar x^1}{\partial x^2 } A^2 \quad$$ and $$\quad \bar B^1 = \frac{\partial \bar x^1}{\partial x^1 } B^1 + \frac{\partial \bar x^1}{\partial x^2 } B^2 $$

Following ordinary rules of algebra, we obtain, as the product, the following:
 * $$\bar A^1 \bar B^1 = \frac{\partial \bar x^1}{\partial x^1} \frac{\partial \bar x^1}{\partial x^1} A^1 B^1 + \frac{\partial \bar x^1}{\partial x^1} \frac{\partial \bar x^1}{\partial x^2} A^1 B^2 $$ $$ + \; \frac{\partial \bar x^1}{\partial x^2} \frac{\partial \bar x^1}{\partial x^1} A^2 B^1 + \frac{\partial \bar x^1}{\partial x^2} \frac{\partial \bar x^1}{\partial x^2} A^2 B^2 $$

In like fashion, we obtain equations for $$\bar A^1 \bar B^2, \, \bar A^2 \bar B^1, \, $$ and $$\bar A^2 \bar B^2 .$$

To reiterate, according to the Einstein summation convention, since $$\alpha$$ and $$\beta$$ each occur twice on the right side of ($$), they must each take on all possible values to form a sum. For $$\lambda = 1, \mu = 1,$$ the terms sum to yield ($$), except that in ($$) we simplify the appearance by replacing $$A^\alpha B^\beta$$ with $$C^{\alpha\beta}.$$ In a similar fashion, we handle the other possible values of $$\lambda$$ and $$\mu, $$ thus showing that ($$) completely represents the outer product of ($$) and ($$).

From ($$), it is evident that the outer product of two vectors is a tensor of rank two. In general, the product of two tensors of rank m and n is a tensor of rank m + n.
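A short numerical illustration (Python; all values invented for this example) of outer multiplication, confirming that transforming the two vectors and then multiplying gives the same result as transforming their outer product as a rank-two tensor:

```python
import numpy as np

A = np.array([1.0, 2.0])                  # contravariant vector A^alpha
B = np.array([3.0, 5.0])                  # contravariant vector B^beta
C = np.einsum('a,b->ab', A, B)            # outer product C^{alpha beta} = A^alpha B^beta

# Arbitrary (invented) transformation coefficients ∂x̄^lambda/∂x^alpha
J = np.array([[1.0, 2.0],
              [0.5, -1.0]])

# Transforming A and B first and then multiplying ...
C_bar_direct = np.outer(J @ A, J @ B)
# ... agrees with transforming C as a contravariant rank-two tensor
C_bar_tensor = np.einsum('ag,bd,gd->ab', J, J, C)
```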

Covariant tensors
In Fig. 6–5, consider an object whose density varies from point to point. The density at any particular point is a scalar, but the change in density as we go from point to point is a directed quantity, i.e. a vector. If we designate the density at any particular point by $$\psi$$, then
 * $$ \frac{\partial \psi}{\partial x^1}$$ and $$\frac{\partial \psi}{\partial x^2} $$

represent the partial variation of $$\psi$$ in the $$x^1$$ and $$x^2$$ directions. We will see that the transformation properties of this form of vector are different from those described before.

On top of the original coordinate system in Fig. 6–5, we overlay a changed coordinate system labeled with transformed coordinates. Given the unbarred coordinate components of the vector at point A, we wish to express its barred coordinate components. In other words, we wish to express


 * $$ \frac{\partial \psi}{\partial \bar x^1} \, \text{and} \, \frac{\partial \psi}{\partial \bar x^2} $$ in terms of $$ \frac{\partial \psi}{\partial x^1} \, \text{and} \, \frac{\partial \psi}{\partial x^2} $$

The $$\bar x^1$$ and $$\bar x^2$$ coordinates of any point in the transformed coordinate system depend on both $$x^1$$ and $$x^2$$ of the nontransformed system. The transformed vector coordinates may be written as
 * $$ \frac{\partial \psi}{\partial \bar x^1} = a_{11}\frac{\partial \psi}{\partial x^1} + a_{12} \frac{\partial \psi}{\partial x^2} $$


 * $$ \frac{\partial \psi}{\partial \bar x^2} = a_{21} \frac{\partial \psi}{\partial x^1} + a_{22} \frac{\partial \psi}{\partial x^2} $$

where $$a_{11}$$ is the partial change in $$x^1$$ per change in $$\bar x^1$$ and so forth. Writing the equations out fully,
 * $$ \frac{\partial \psi}{\partial \bar x^1} = \frac{\partial x^1}{\partial \bar x^1} \frac{\partial \psi}{\partial x^1} + \frac{\partial x^2}{\partial \bar x^1} \frac{\partial \psi}{\partial x^2} $$
 * $$ \frac{\partial \psi}{\partial \bar x^2} = \frac{\partial x^1}{\partial \bar x^2} \frac{\partial \psi}{\partial x^1} + \frac{\partial x^2}{\partial \bar x^2} \frac{\partial \psi}{\partial x^2} $$

As before, the above two equations may be combined using the summation convention:
 * $$ \frac{\partial \psi}{\partial \bar x^\mu} = \frac{\partial x^\sigma}{\partial \bar x^\mu} \frac{\partial \psi}{\partial x^\sigma} $$

Finally, using $$\overline W_\mu$$ to represent $$ \frac{\partial \psi}{\partial \bar x^\mu}$$ and $$W_\sigma$$ to represent $$\frac{\partial \psi}{\partial x^\sigma},$$ we write ($$) as follows:
 * $$\overline W_\mu = \frac{\partial x^\sigma}{\partial \bar x^\mu} W_\sigma $$

The transformation rule for vectors described by ($$) is different from the transformation rule described by ($$): the coefficients on the right are now derivatives of the unbarred coordinates with respect to the barred coordinates, rather than the reverse. Equation ($$) is the mathematical definition of a covariant vector, i.e. a covariant tensor of rank one. The prototypical example of a covariant vector is the gradient of a scalar.
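The covariant law can likewise be checked numerically. This sketch (an illustrative Python aside; the scalar $$\psi$$ and the linear change of coordinates are inventions of the example) transforms the gradient of a scalar with the coefficients $$\partial x^\sigma / \partial \bar x^\mu$$ and compares the result against the gradient computed directly in the barred coordinates:

```python
import numpy as np

# Covariant (gradient) law  W̄_mu = (∂x^sigma/∂x̄^mu) W_sigma, for the
# invented example psi = x**2 + 3*y with the linear change of coordinates
# x̄ = 2x, ȳ = x + y  (inverse: x = x̄/2, y = ȳ - x̄/2).
x, y = 1.0, 2.0
W = np.array([2.0 * x, 3.0])     # unbarred gradient (∂psi/∂x, ∂psi/∂y)

# Coefficients ∂x^sigma/∂x̄^mu of the inverse map (row mu, column sigma)
J_inv = np.array([
    [0.5, -0.5],                 # ∂x/∂x̄, ∂y/∂x̄
    [0.0,  1.0],                 # ∂x/∂ȳ, ∂y/∂ȳ
])
W_bar = np.einsum('ms,s->m', J_inv, W)

# Direct computation: in barred coordinates psi = (x̄/2)**2 + 3*(ȳ - x̄/2),
# so ∂psi/∂x̄ = x̄/2 - 3/2 and ∂psi/∂ȳ = 3.
x_bar = 2.0 * x
W_bar_direct = np.array([x_bar / 2.0 - 1.5, 3.0])
```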

A covariant tensor of rank two is defined as follows:
 * $$\bar V_{\alpha \beta} = \frac{\partial x^\gamma}{\partial \bar x^\alpha} \frac{\partial x^\delta}{\partial \bar x^\beta} V_{\gamma \delta} $$

Carefully compare ($$) with ($$), and ($$) with ($$).

'''Note that the indices of covariant tensors are subscripts, and the bars in the coefficients are in the denominators. In contrast, the indices of contravariant tensors are superscripts, and the bars in the coefficients are in the numerators.'''

Mixed tensors
Addition of covariant tensors can be performed in the same manner as contravariant tensors. Likewise, the outer multiplication of two covariant tensors of ranks m and n yields a covariant tensor of rank m + n. For example, the outer product of
 * $$\bar A_\lambda = \frac{\partial x^\alpha}{\partial \bar x^\lambda} A_\alpha$$

and
 * $$\bar B_{\mu \nu} = \frac{\partial x^\beta}{\partial \bar x^\mu} \frac{\partial x^\gamma}{\partial \bar x^\nu} B_{\beta\gamma} $$

is given by
 * $$\bar C_{\lambda \mu \nu} = \frac{\partial x^\alpha}{\partial \bar x^\lambda} \frac{\partial x^\beta}{\partial \bar x^\mu} \frac{\partial x^\gamma}{\partial \bar x^\nu} C_{\alpha \beta \gamma} $$

On the other hand, outer multiplication of a covariant tensor of rank m by a contravariant tensor of rank n yields a product of rank m + n which has m indices of covariance and n indices of contravariance. For example the outer product of the covariant tensor
 * $$\bar A_\lambda = \frac{\partial x^\alpha}{\partial \bar x^\lambda} A_\alpha$$

and the contravariant tensor
 * $$\bar B^\mu = \frac{\partial \bar x^\mu }{\partial x^\beta } B^\beta $$

is the mixed tensor
 * $$\bar C^{\, \mu}_{\lambda} = \frac{\partial x^\alpha}{\partial \bar x^\lambda} \frac{\partial \bar x^\mu}{\partial x^\beta} C^{\, \beta}_{\alpha} $$

Contraction
Tensor contraction is a procedure whereby, given a tensor of rank n, one may construct a tensor of rank n − 2.

The general rule to contract a tensor is to set an upper index equal to a lower index and sum, yielding a tensor of reduced rank. For example, one possible contraction of $$T^{\alpha\beta}_{\lambda\gamma}$$ is $$T^{\alpha\beta}_{\beta\gamma} = S^\alpha_\gamma$$. Given several possible contractions, the one chosen would be dictated by the requirements of the physical problem being addressed.
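Contraction is easy to express numerically: setting an upper index equal to a lower index and summing amounts to taking a trace over a pair of axes. A Python sketch (illustrative only, with random components standing in for an actual tensor):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2, 2, 2))   # components T^{alpha beta}_{lambda gamma}

# Contract the upper index beta with the lower index lambda:
# S^alpha_gamma = T^{alpha beta}_{beta gamma}
S = np.einsum('abbg->ag', T)

# The same contraction as an explicit sum over the repeated index
S_loop = np.zeros((2, 2))
for a in range(2):
    for g in range(2):
        for b in range(2):
            S_loop[a, g] += T[a, b, b, g]
```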

Consider the mixed tensor:
 * $$\bar A^{\alpha \beta}_{\gamma} = \frac{\partial \bar x^\alpha}{\partial x^\lambda} \frac{\partial \bar x^\beta}{\partial x^\mu} \frac{\partial x^\nu}{\partial \bar x^\gamma} A^{\lambda \mu}_{\nu} $$

This expression represents eight equations, each having eight terms on the right.

In the above, let us replace $$\gamma$$ by $$\alpha$$, yielding
 * $$\bar A^{\alpha \beta}_{\alpha} = \frac{\partial \bar x^\alpha}{\partial x^\lambda} \frac{\partial \bar x^\beta}{\partial x^\mu} \frac{\partial x^\nu}{\partial \bar x^\alpha} A^{\lambda \mu}_{\nu} $$

On the left side, the summation convention means that we have two equations rather than eight. Moreover, the left side now has two terms rather than one.

On the right side, since $$\alpha$$ appears twice, the summation convention requires that a sum be taken over $$\alpha$$ for each value of $$\nu$$ and $$\lambda$$. Note, however, that the $$x \, \text{'s}$$ are independent variables. Although functional relationships exist between the $$\bar x \, \text{'s}$$ and the $$x \, \text{'s}$$, no such functional relationships exist among the $$x \, \text{'s}$$ themselves. What this means is that when $$ \nu \ne \lambda ,$$ the terms drop out, since
 * $$ \frac{\partial x^\nu}{\partial \bar x^\alpha} \frac{\partial \bar x^\alpha}{\partial x^\lambda} = \frac{\partial x^\nu}{\partial x^\lambda} = 0 \quad \quad (\lambda \ne \nu) $$

On the other hand, when $$ \lambda = \nu ,$$ we observe that
 * $$ \frac{\partial x^\nu}{\partial \bar x^\alpha} \frac{\partial \bar x^\alpha}{\partial x^\lambda} = \frac{\partial x^\lambda}{\partial \bar x^\alpha} \frac{\partial \bar x^\alpha}{\partial x^\lambda} = 1 \quad \quad (\lambda = \nu) $$

Equation ($$) therefore becomes
 * $$\bar A^{\alpha \beta}_{\alpha} = \frac{\partial \bar x^\beta}{\partial x^\mu} A^{\lambda \mu}_{\lambda} $$

To clarify the meaning of ($$), we expand the individual terms, noting that $$\lambda$$ and $$\mu$$ each appear twice on the right side:
 * $$\bar A^{11}_1 + \bar A^{21}_2 = \frac{\partial \bar x^1}{\partial x^1}(A^{11}_1 + A^{21}_2) $$ $$ + \; \frac{\partial \bar x^1}{\partial x^2}(A^{12}_1 + A^{22}_2) $$
 * $$\bar A^{12}_1 + \bar A^{22}_2 = \frac{\partial \bar x^2}{\partial x^1}(A^{11}_1 + A^{21}_2) $$ $$ + \; \frac{\partial \bar x^2}{\partial x^2}(A^{12}_1 + A^{22}_2) $$

In the above expressions, perform the following substitutions and apply the summation convention:
 * $$\bar C^1 = \bar A^{11}_1 + \bar A^{21}_2$$
 * $$\bar C^2 = \bar A^{12}_1 + \bar A^{22}_2$$
 * $$C^1 = A^{11}_1 + A^{21}_2$$
 * $$C^2 = A^{12}_1 + A^{22}_2$$

Then ($$) becomes
 * $$\bar C^\beta = \frac{\partial \bar x^\beta}{\partial x^\mu} C^\mu $$

The starting rank 3 tensor ($$) has been contracted to yield a tensor of rank one.

If we multiply two tensors to form an outer product, and this product is a mixed tensor, contracting this mixed tensor results in an inner product. Hence, if the outer product of $$A_{\alpha\beta}$$ and $$B^\gamma$$ is the mixed tensor $$C^\gamma_{\alpha\beta} \,$$, replacing $$\gamma$$ by $$\beta$$ results in the contracted tensor $$D_\alpha$$, which is an inner product of $$A_{\alpha\beta}$$ and $$B^\gamma$$.

The student will have already encountered inner products in their studies of vector algebra. The square root of the inner product of vector $$A$$ with itself is the magnitude of the vector $$|A|. \,$$ If $$\theta$$ is the angle between two vectors $$A$$ and $$B \,$$ then $$|A||B| \cos \theta = A \cdot B$$.
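In Cartesian coordinates, where no distinction between upper and lower indices needs to be made, this reduction of an outer product to the familiar dot product can be demonstrated in a few lines (an illustrative Python sketch):

```python
import numpy as np

A = np.array([3.0, 4.0])
B = np.array([1.0, 2.0])

# Outer product followed by contraction of its two indices yields the
# familiar inner (dot) product  A . B = |A||B| cos(theta)
outer = np.einsum('a,b->ab', A, B)    # rank-two outer product A^a B^b
inner = np.einsum('aa->', outer)      # contract: sum the diagonal terms
```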

The importance of tensor contraction will be apparent later on when we discuss the vacuum field solution of general relativity.

The problem with "ordinary" differentiation
To be physically meaningful, the result of applying mathematical operations on tensors should be other tensors, since otherwise the operations lack coordinate independence. We have so far shown that addition, outer multiplication, and contraction of tensor variables do, in fact, yield tensors as their result. Ordinary differentiation, however, has issues.

Suppose we wish to compute the partial derivative of
 * $$\bar A^\mu = \frac{\partial \bar x^\mu}{\partial x^\sigma} A^\sigma $$

with respect to $$ \bar x^\nu .$$ Applying the product rule, we obtain:
 * $$\frac{\partial \bar A^\mu}{\partial \bar x^\nu} = \frac{\partial \bar x^\mu}{\partial x^\sigma} \frac{\partial A^\sigma}{\partial \bar x^\nu} + \frac{\partial^2 \bar x^\mu}{\partial \bar x^\nu \, \partial x^\sigma} A^\sigma $$

The result does not match up at all with any of the tensor prototypes that we have thus far identified. This situation, however, can be partially rectified by a change of variables. Note that $$\frac{\partial A^\sigma}{\partial \bar x^\nu} =   \frac{\partial A^\sigma}{\partial x^\tau} \frac{\partial x^\tau}{\partial \bar x^\nu} $$

If we apply this substitution to the left term of ($$) and rearrange slightly, we obtain
 * $$\frac{\partial \bar A^\mu}{\partial \bar x^\nu} = \frac{\partial \bar x^\mu}{\partial x^\sigma} \frac{\partial x^\tau}{\partial \bar x^\nu} \frac{\partial A^\sigma}{\partial x^\tau} + \frac{\partial^2 \bar x^\mu}{\partial \bar x^\nu \, \partial x^\sigma} A^\sigma $$

Close comparison of the left term of ($$) with other tensor prototypes presented thus far shows that the left term represents a mixed tensor of rank two. But the right term presents an issue.

For certain simple transformations, such as the rotation illustrated in Fig. 6–4, the right term vanishes, since the coefficients $$\partial \bar x^\mu / \partial x^\sigma$$ are constants. In such cases, ($$) will represent a tensor. In the general case, however, $$\partial \bar x^\mu / \partial x^\sigma$$ will not be constants, the right term will not vanish, and ($$) will not be a tensor. In general, therefore, ordinary differentiation of tensors does not represent a physically relevant operation.


 * The ordinary derivative of a tensor is a tensor if and only if coordinate changes are restricted to linear transformations.

We will shortly describe an operation called covariant differentiation which does always yield a tensor, and which is used in deriving the curvature tensor which plays an important role in general relativity.

The metric tensor
As mentioned before, the expression for $$ds^2$$ depends both on the properties of the space(time) in question and on the coordinate system used. It turns out that all of the different expressions for $$ds^2$$ have the common form
 * $$ds^2 = g_{\mu \nu} \, dx^\mu dx^\nu $$

This common form holds for all spaces and spacetimes, regardless of dimensionality.

In two dimensions, ($$) may be expanded to
 * $$ds^2 = g_{11} \, dx^1 dx^1 + g_{12} \, dx^1 dx^2 + g_{21} \, dx^2 dx^1 + g_{22} \, dx^2 dx^2 $$


 * For a Euclidean plane in Cartesian coordinates ($$), $$g_{11} = 1 ,$$ $$g_{12} = 0 ,$$ $$g_{21} = 0 ,$$ and $$g_{22} = 1. $$ This leads to $$ds^2 = (dx^1)^2 + (dx^2)^2 .$$


 * For polar coordinates ($$), $$g_{11} = 1 ,$$ $$g_{12} = 0 ,$$ $$g_{21} = 0 ,$$ and $$g_{22} = r^2 .$$


 * For oblique coordinates ($$), $$g_{11} = 1 ,$$ $$g_{12} = -\cos \alpha ,$$ $$g_{21} = -\cos \alpha ,$$ and $$g_{22} = 1 .$$


 * For spherical coordinates ($$), $$g_{11} = r^2 ,$$ $$g_{12} = 0 ,$$ $$g_{21} = 0 ,$$ and $$g_{22} = R^2 .$$

Note that for each of the above, $$g_{12}$$ and $$g_{21}$$ have the same value.


 * In general, regardless of the dimensionality, the shape of the space(time), or the coordinate system employed, $$g_{\mu \nu} = g_{\nu \mu }.$$

Any such set of $$g \, \text{'s}$$ forms a covariant tensor of rank two. Demonstrating that the set of $$g \, \text{'s}$$ in ($$) forms a tensor involves an application of the Quotient Theorem:
 * If the product (inner or outer) of a given quantity with a tensor of any specified type and arbitrary components is itself a tensor, then the given quantity is a tensor.

Given the Quotient Theorem, demonstrating that $$g_{\mu \nu}$$ is a tensor is straightforward: Since $$ds^2$$ is a scalar, it is a tensor of rank zero. The product of $$ g_{\mu \nu} dx^\mu $$ and $$dx^\nu $$ on the right-hand side of ($$) is therefore also a tensor of rank zero. But $$dx^\nu $$ is a contravariant tensor of rank one (i.e. a vector), allowing us to deduce that $$ g_{\mu \nu} dx^\mu $$ is a covariant tensor of rank one. Likewise, since $$dx^\mu $$ is also a contravariant vector, $$g_{\mu \nu}$$ must be a covariant tensor of rank two.

The metric tensor $$g_{\mu \nu}$$ is the fundamental object of study in general relativity, since it characterizes the geometric properties of spacetime.
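The common form $$ds^2 = g_{\mu \nu} dx^\mu dx^\nu$$ lends itself to direct computation. This sketch (illustrative only; Python with NumPy) evaluates $$ds^2$$ for a small displacement using the polar-coordinate metric and compares the result with the squared distance computed in Cartesian coordinates:

```python
import numpy as np

# ds^2 = g_{mu nu} dx^mu dx^nu for polar coordinates, where g = diag(1, r^2),
# compared against the same small displacement measured in Cartesian coordinates.
r, th = 2.0, 0.7
dr, dth = 1e-6, 2e-6

g = np.array([[1.0, 0.0],
              [0.0, r ** 2]])
dx = np.array([dr, dth])
ds2_polar = np.einsum('mn,m,n->', g, dx, dx)   # g_{mu nu} dx^mu dx^nu

# The same two nearby points, expressed in Cartesian coordinates
p0 = np.array([r * np.cos(th), r * np.sin(th)])
p1 = np.array([(r + dr) * np.cos(th + dth), (r + dr) * np.sin(th + dth)])
ds2_cartesian = np.sum((p1 - p0) ** 2)
```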

Covariant derivatives of tensors
The covariant derivative discussed in this section is the natural generalization of the ordinary derivative, since it is a tensor, and since, in flat Euclidean space with Cartesian coordinates, it reduces to the ordinary derivative. The expression of the covariant derivative introduces two new symbols, (1) the contravariant metric tensor $$g^{ \mu \nu}$$ (with raised indices), and (2) Christoffel's symbol of the second kind $$ \Gamma^\lambda_{\mu \nu}.$$

For simplicity, we limit ourselves to two dimensions. In this environment, $$g_{\mu \nu}$$ will have four components, which can be arranged in a matrix: $$\begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix}$$

Since $$g_{12} = g_{21},$$ this matrix is symmetric with respect to the principal diagonal and is therefore termed a symmetric matrix.

The determinant of this matrix, $$|g_{\mu \nu}|,$$ is often denoted simply by the letter $$g \,.$$

The inverse of this matrix is also symmetric, and its components transform as a contravariant tensor of rank two. The tensor represented by this matrix is $$g^{\mu \nu}.$$ The product of the two matrices is the identity matrix with ones along the diagonal and zeroes elsewhere. In tensor notation (note the summation upon $$\lambda$$)
 * $$ g^{\mu \lambda} g_{\lambda \nu} = \delta^\mu_\nu,\quad$$ where $$ \delta^\mu_\nu $$ is the Kronecker delta:

$$ \delta_{ij} \equiv \delta^i_j \equiv \delta^{ij} = \begin{cases} 0 &\text{if } i \neq j  \\ 1 &\text{if } i=j  \end{cases}$$
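
The relationship between $$g_{\mu\nu}$$ and its inverse is easy to verify mechanically. The following sketch (an illustration, not part of the classic presentation; it uses Python's sympy library for symbolic algebra) inverts the polar-coordinate metric of the Euclidean plane and confirms that the product is the Kronecker delta:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Covariant metric of the Euclidean plane in polar coordinates:
# g_11 = 1, g_22 = r^2 (taking x^1 = r, x^2 = theta).
g = sp.Matrix([[1, 0], [0, r**2]])

# The contravariant metric g^{mu nu} is the matrix inverse,
# here simply diag(1, 1/r^2).
g_inv = g.inv()

# Check g^{mu lambda} g_{lambda nu} = delta^mu_nu (the identity matrix).
assert sp.simplify(g_inv * g) == sp.eye(2)
```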

Christoffel's symbol of the second kind is given by
 * $$ \Gamma^\lambda_{\mu \nu} = \frac{1}{2} g^{\lambda \alpha} \left( \frac{\partial g_{\mu \alpha}}{\partial x^\nu} + \frac{\partial g_{\nu \alpha}}{\partial x^\mu} - \frac{\partial g_{\mu \nu}}{\partial x^\alpha} \right) $$

Derivation of the Christoffel symbols is outside the scope of this simple introduction but may be found in most textbooks, a relatively accessible presentation being that of Grøn and Næss (2011). In two-dimensional space, ($$) would represent eight equations. Remembering to sum over $$\alpha,$$ we would have:
 * $$ \Gamma^1_{1 1} = \frac{1}{2} g^{1 1} \left( \frac{\partial g_{1 1}}{\partial x^1} + \frac{\partial g_{1 1}}{\partial x^1} - \frac{\partial g_{1 1}}{\partial x^1} \right) + \frac{1}{2} g^{1 2} \left( \frac{\partial g_{1 2}}{\partial x^1} + \frac{\partial g_{1 2}}{\partial x^1} - \frac{\partial g_{1 1}}{\partial x^2} \right)$$

and similarly for the remaining seven values of $$ \Gamma^\lambda_{\mu \nu}.$$
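
The summation is easy to mechanize. As an illustrative sketch (not part of the original presentation), the following Python/sympy snippet evaluates all eight symbols for the polar-coordinate metric of the Euclidean plane, $$g_{11} = 1, \, g_{22} = (x^1)^2,$$ directly from the definition:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = [r, theta]                      # x^1 = r, x^2 = theta
g = sp.Matrix([[1, 0], [0, r**2]])  # Euclidean plane, polar coordinates
g_inv = g.inv()
n = 2

def christoffel(lam, mu, nu):
    """Gamma^lam_{mu nu} = (1/2) g^{lam a} (d_nu g_{mu a} + d_mu g_{nu a} - d_a g_{mu nu})."""
    return sp.Rational(1, 2) * sum(
        g_inv[lam, a] * (sp.diff(g[mu, a], x[nu])
                         + sp.diff(g[nu, a], x[mu])
                         - sp.diff(g[mu, nu], x[a]))
        for a in range(n))

# All eight components (0-based indices: 0 stands for 1, 1 for 2).
Gamma = {(l, m, k): sp.simplify(christoffel(l, m, k))
         for l in range(n) for m in range(n) for k in range(n)}

# Only three of the eight are non-zero:
assert Gamma[(0, 1, 1)] == -r                        # Gamma^1_22 = -r
assert Gamma[(1, 0, 1)] == Gamma[(1, 1, 0)] == 1/r   # Gamma^2_12 = Gamma^2_21 = 1/r
assert all(v == 0 for k, v in Gamma.items()
           if k not in {(0, 1, 1), (1, 0, 1), (1, 1, 0)})
```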

If $$A_\sigma$$ is a covariant tensor of rank one, its covariant derivative with respect to $$x^\tau$$ is defined as
 * $$ A_{\sigma \tau} = \frac{\partial A_\sigma}{\partial x^\tau} - \Gamma^\alpha_{\sigma \tau} A_\alpha $$

$$A_{\sigma \tau}$$ is a covariant tensor of rank two.

If $$A^\sigma$$ is a contravariant tensor of rank one, its covariant derivative with respect to $$x^\tau$$ is defined as
 * $$ A^\sigma_\tau = \frac{\partial A^\sigma}{\partial x^\tau} + \Gamma^\sigma_{\alpha \tau} A^\alpha $$

$$A^\sigma_\tau$$ is a mixed tensor of rank two.

If $$A_{\sigma\tau}$$ is a covariant tensor of rank two, its covariant derivative with respect to $$x^\rho$$ is defined as
 * $$ A_{\sigma \tau \rho} = \frac{\partial A_{\sigma \tau}}{\partial x^\rho} - \Gamma^\alpha_{\sigma \rho} A_{\alpha \tau} - \Gamma^\alpha_{\tau \rho} A_{\sigma \alpha} $$

and so forth.

In like fashion, we may obtain the covariant derivatives for tensors of higher ranks. In all cases, covariant differentiation leads to a tensor with one more rank of covariant character than the starting tensor.

In the special case where the $$g \text{'s}$$ are constants, as for instance when using Cartesian coordinates in a flat Euclidean plane, it is evident when looking at the definition of the Christoffel symbol ($$) that the symbols will all have value zero. In this case, ($$) becomes simply
 * $$ A_{\sigma \tau} = \frac{\partial A_\sigma}{\partial x^\tau} $$

In this special case, the covariant derivative is the same as the ordinary derivative.

The Riemann–Christoffel curvature tensor
Suppose that $$z$$ is a function of $$x$$ and $$y,$$ for example $$z = x^2 + 2xy.$$ The mixed second partial derivative of $$z$$ with respect to $$x$$ and $$y$$ does not depend on the order of differentiation. In other words,
 * $$\frac{\partial^2 z}{\partial x \partial y} = \frac{\partial^2 z}{\partial y \partial x} = 2 $$
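
This equality of mixed partials can be checked symbolically; a minimal sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = x**2 + 2*x*y

# The mixed second partial derivatives agree regardless of the
# order of differentiation: d2z/dxdy = d2z/dydx = 2.
assert sp.diff(z, x, y) == sp.diff(z, y, x) == 2
```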

On the other hand, order does matter in calculation of the second covariant derivative of a tensor due to the presence of Christoffel symbols.

To illustrate, we start by taking the covariant derivative of $$A_\sigma$$ with respect to $$x^\tau$$:

Follow by taking the second covariant derivative with respect to $$x^\rho$$:

Substituting ($$) into ($$) yields

Taking the derivatives in reverse order yields

The first terms of ($$) and ($$) are equal:
 * $$\frac{\partial^2 A_{\sigma}}{\partial x^\tau \, \partial x^\rho} = \frac{\partial^2 A_{\sigma}}{\partial x^\rho \, \partial x^\tau}$$

The second term of ($$) and the fourth term of ($$) are equal, since the choice of dummy symbol used for the summation makes no difference:
 * $$\Gamma^\alpha_{\sigma\tau} \frac{\partial A_\alpha}{\partial x^\rho} = \Gamma^\epsilon_{\sigma\tau} \frac{\partial A_\epsilon}{\partial x^\rho} $$

Likewise, the fourth term of ($$) and the second term of ($$) are equal:
 * $$\Gamma^\epsilon_{\sigma\rho} \frac{\partial A_\epsilon}{\partial x^\tau} = \Gamma^\alpha_{\sigma\rho} \frac{\partial A_\alpha}{\partial x^\tau} $$

The sixth and seventh terms of ($$) are equal to the sixth and seventh terms of ($$), since swapping the $$\tau$$ and $$\rho$$ leaves the value of $$\Gamma^\epsilon_{\tau\rho}$$ unchanged. This is easily seen in the definition of the Christoffel symbol ($$), remembering that $$g_{\mu\nu}$$ is symmetric. Likewise, the final terms of ($$) and ($$) are equal.

The third and fifth terms of ($$), however, are not equal to any of the terms of ($$). Subtracting ($$) from ($$) followed by rearrangement, we obtain

The difference on the left-hand side of ($$) is a covariant tensor of rank three. On the right-hand side of ($$), we had specified $$A_\alpha$$ as being an arbitrary covariant tensor of rank one. Since the inner product of $$A_\alpha$$ and the quantity in brackets is a covariant tensor of rank three, the Quotient Theorem tells us that the quantity in brackets must be a mixed tensor of rank four. This quantity is the Riemann-Christoffel curvature tensor:

Properties of the curvature tensor
If the Christoffel symbols on the right side of ($$) are expanded according to their definition in ($$), it is observed that the Riemann-Christoffel curvature tensor is an expression containing first and second derivatives of the $$g \, \text{'s},$$ which are themselves coefficients of ($$), the expression for $$ds^2.$$

In two dimensions, each of the indices of the curvature tensor has two possible values, so that $$R^\alpha_{\sigma\tau\rho}$$ has sixteen components. In three-space, the curvature tensor has $$3^4$$ or 81 components, while in the four dimensions of spacetime, the curvature tensor has $$4^4$$ or 256 components.

Various symmetries reduce the complexity of this expression. The first to note is that interchanging the $$\tau$$ and the $$\rho$$ of this expression merely changes its sign, so that of the sixteen possible combinations of $$\tau$$ and the $$\rho$$, only six are independent. This may be seen as follows:


 * 1. Suppose that we have sixteen quantities $$a_{\alpha\beta} \; (n=4)$$ arranged in a matrix:
 * $$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \\ \end{bmatrix}$$
 * 2. If we stipulate that $$a_{\alpha\beta} = -a_{\beta\alpha},$$ then the terms in the principal diagonal are necessarily zero, and the array becomes
 * $$\begin{bmatrix} 0 & a_{12} & a_{13} & a_{14} \\ -a_{12} & 0 & a_{23} & a_{24} \\ -a_{13} & -a_{23} & 0 & a_{34} \\ -a_{14} & -a_{24} & -a_{34} & 0 \\ \end{bmatrix}$$
 * 3. The above antisymmetric matrix has only six independent components rather than sixteen. If, on the other hand, we had stipulated that $$a_{\alpha\beta} = a_{\beta\alpha},$$ the resulting symmetric matrix would have ten independent components.

The six independent combinations of $$\tau$$ and $$\rho ,$$ combined with the sixteen combinations of $$\sigma$$ and $$\alpha$$ gives 96 independent components rather than 256. Further symmetries reduce the total number of independent components from $$n^4 = 256$$ to $$ \tfrac{1}{12} n^2(n^2 - 1) = 20. $$
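
The counting arguments above can be reproduced by brute force; a small illustrative Python sketch:

```python
from itertools import product

n = 4

# Independent components of an antisymmetric n x n array
# a_{ab} = -a_{ba}: only the entries above the diagonal survive.
antisym = sum(1 for a, b in product(range(n), repeat=2) if a < b)
assert antisym == n * (n - 1) // 2 == 6

# A symmetric array a_{ab} = a_{ba} keeps the diagonal as well.
sym = sum(1 for a, b in product(range(n), repeat=2) if a <= b)
assert sym == n * (n + 1) // 2 == 10

# Six (tau, rho) pairs times sixteen (sigma, alpha) combinations:
assert antisym * n**2 == 96

# Full symmetries of the curvature tensor in n dimensions:
assert n**2 * (n**2 - 1) // 12 == 20   # versus n^4 = 256 components in total
assert n**4 == 256
```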

We had earlier shown that superficial examination of $$ds^2$$ does not reveal whether a space is flat or not, since the expression is dependent both on the properties of the space(time) in question and on the coordinate system used. The curvature tensor, however, allows us to make such a determination. If we apply $$R^\alpha_{\sigma\tau\rho}$$ to ($$), ($$), and ($$), we find its components are all zero, while if we apply it to ($$), the components are non-zero.

In the case of ($$), which applies to a Euclidean plane using ordinary Cartesian coordinates, the $$g \text{'s}$$ are constants, with $$g_{11} = 1, \, g_{22} = 1$$ with the others all zero. Hence the derivatives are all zero, the Christoffel symbols are all zero, and the components of the curvature tensor are all zero.

It would be a useful exercise for the reader to compute $$R^\alpha_{\sigma\tau\rho}$$ for ($$), which applies to a Euclidean plane using polar coordinates. Here, $$g_{11} = 1, \, g_{12} = g_{21} = 0, \, g_{22} = (x^1)^2.$$
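
For readers who prefer to check the exercise by machine, the following self-contained sympy sketch computes every component of $$R^\alpha_{\sigma\tau\rho}$$ for the polar-coordinate metric under one common sign convention (any consistent convention gives zero here, since the plane is flat):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = [r, theta]
g = sp.Matrix([[1, 0], [0, r**2]])   # Euclidean plane, polar coordinates
g_inv = g.inv()
n = 2

def Gamma(l, m, k):
    """Christoffel symbol of the second kind, computed from its definition."""
    return sp.Rational(1, 2) * sum(
        g_inv[l, a] * (sp.diff(g[m, a], x[k]) + sp.diff(g[k, a], x[m])
                       - sp.diff(g[m, k], x[a]))
        for a in range(n))

def riemann(a, s, t, rho):
    """R^a_{s t rho} = d_t Gamma^a_{s rho} - d_rho Gamma^a_{s t}
                       + Gamma^e_{s rho} Gamma^a_{e t} - Gamma^e_{s t} Gamma^a_{e rho}"""
    expr = sp.diff(Gamma(a, s, rho), x[t]) - sp.diff(Gamma(a, s, t), x[rho])
    expr += sum(Gamma(e, s, rho) * Gamma(a, e, t)
                - Gamma(e, s, t) * Gamma(a, e, rho) for e in range(n))
    return sp.simplify(expr)

# Every component vanishes: the plane is flat even in polar coordinates.
assert all(riemann(a, s, t, rho) == 0
           for a in range(n) for s in range(n)
           for t in range(n) for rho in range(n))
```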

In summary,
 * $$ R^\alpha_{\sigma\tau\rho} = 0 $$

is a necessary and sufficient condition for the local space(time) to be flat. This holds regardless of dimensionality and the coordinate system used.

The vacuum field solution
In the development of general relativity, Einstein sought a means to relate spacetime curvature to mass and energy. However, the Riemann curvature tensor is of rank four, while the energy-momentum tensor is of rank two. Two tensors that are proportional to each other must be the same rank as well as have the same symmetries. Einstein, therefore, needed to derive a rank two tensor from the Riemann curvature tensor. (The alternative possibility, finding a rank four tensor expression of energy-momentum, makes no physical sense.) Of the three possible contractions of $$R^\alpha_{\sigma\tau\rho} \, ,$$ contraction with the first subscript gives zero, while contraction with the second and third subscripts gives the same result but of opposite sign. Therefore, there was only one independent contraction of the curvature tensor that presented itself to Einstein.

Contracting ($$) with the third subscript yields the Ricci tensor, where
 * $$ G_{\sigma\tau} = R^\alpha_{\sigma\tau\alpha} $$

 * $$G_{11} = R^1_{111} + R^2_{112} + R^3_{113} + R^4_{114} = 0$$
 * $$G_{12} = R^1_{121} + R^2_{122} + R^3_{123} + R^4_{124} = 0$$

and so forth for each of the sixteen possible combinations of $$\sigma$$ and $$\tau, \,$$ ultimately yielding
 * $$ G_{\sigma\tau} = 0 $$

In examining ($$) before contracting it to yield ($$), we see that

From the definition of the Christoffel symbol, ($$) is revealed to be an expression containing first and second partial derivatives of the $$g \, \text{'s}.$$ Since $$\sigma$$ and $$\tau$$ may each take on four different values, ($$) represents sixteen equations. However, symmetry considerations reduce this to ten equations, of which only six are independent.

Einstein proposed that ($$) should represent the vacuum field equations of general relativity, i.e. the equations that should be valid where the mass-energy density is zero.
 * 1) Einstein's views on the equivalence principle had evolved significantly over the years since he first conceived of the principle in 1907. His early results in applying the equivalence principle, for example his deduction of the existence of gravitational time dilation and his early arguments on the bending of light in a gravitational field, used kinematic and dynamic analysis rather than geometric arguments. Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to the realization that he needed to adopt a geometric interpretation of spacetime, which he had formerly eschewed. (See Einstein's thought experiments: Non-Euclidean geometry and the rotating disk for a discussion of this point.) In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods.
 * 2) The equivalence principle states that if we freefall in a gravitational field, gravity is locally eliminated. Since locally, we cannot distinguish a gravitational field from an inertial field resulting from uniform acceleration, gravitation should be regarded as an inertial force.
 * 3) By 1912, Einstein had fully embraced the view that the paths of freely moving objects are determined by the geometry of the spacetime through which they travel. Freely moving objects always follow a straight line in their local inertial frames, which is to say, they always follow along the path of timelike geodesics. As indicated earlier in section Basic propositions, evidence of gravitation is observed by variation in the field rather than the field itself, as manifest in the relative accelerations of two separated particles. In Fig. 5-1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The convergence or divergence of the test particles is described with the aid of the Riemann curvature tensor, which is the analog of Newtonian tidal forces.
 * 4) The $$g \, \text{'s}$$ of the spacetime metric serve to quantify the shape of spacetime. In analogy with the field formulation of Newtonian gravitational theory, which we will discuss in the next section, ($$) represents a set of second-order partial differential equations for the potentials as field equations of the theory. These equations, of course, must be tensorial.

The equations of ($$) represent the simplest expression which is analogous to the field formulation of Newtonian gravitational theory (in regions of zero mass density). Predictions of this theory match up with the predictions of Newtonian gravitational theory in the low-speed, low-gravitation regime. These equations also predict additional effects that have been fully verified by observation and experiment.

The field formulation of Newtonian gravitation
Newton's law of universal gravitation is inherently non-relativistic. The most familiar expression of the law is in its action-at-a-distance form,
 * $$ F = - G \frac{m_1 m_2}{r^2} $$

where $$G$$ in this case is the gravitational constant (not to be confused with the Ricci tensor), and the force is along a line connecting the two masses. The law requires that the forces between the gravitating bodies be transmitted instantaneously. Newton's law is incompatible with a finite speed of gravity. In 1805, Laplace concluded that the speed of gravitational interactions must be at least $$7 \times 10^6$$ times the speed of light, otherwise the resulting orbital instabilities should long ago have caused the Earth to plunge into the Sun.

Einstein wanted to construct a theory of gravitation that adhered to relativistic principles. From his own work in 1905, he knew that Maxwell's theory of electromagnetism was consistent with special relativity. He also knew that it was Faraday's development of the field concept that led the way for Maxwell's inherently relativistic theory. Therefore, Einstein was certain that the general theory that he wanted to create would be a field theory rather than an action-at-a-distance theory.

In a field theory, changes in the field are expressed by means of differential equations. The gravitational potential $$\phi$$ is a function expressing the potential energy of a particle with unit mass in the gravitational field. The potential energy of a particle at position $$P$$ is the energy required to move the particle from an arbitrary position of zero energy to $$P.$$ This position of zero energy may be chosen freely. When performing calculations near the surface of the Earth, it is frequently chosen to be sea level. For celestial mechanics calculations, it is usually chosen to be a position infinitely distant in space. The potential's value increases in the upward direction in the gravitational field.

To derive a field theory version of Newton's law, we first rearrange ($$) as follows:
 * $$ \frac{F}{m_2} = - G \frac{m_1}{r^2} = a $$

On the left side of the equation, $$F/m_2$$ represents the acceleration of $$m_2$$ due to the gravitational field surrounding $$m_1 .$$ Since $$-G m_1$$ is a constant, we may rewrite the above equation as

Fig. 6–6 shows two axes of a three-dimensional diagram, the third $$Z$$ axis pointing out of the page towards the reader. Mass $$m_1$$ is at the origin, $$m_2$$ is at $$P$$ with coordinates $$x, \, y, \, z, \,$$ and $$OP = r. \,$$ Acceleration $$\vec a$$ is a vector quantity and may be split up into three components, $$\vec a_x, \, \vec a_y, \, \vec a_z. \,$$ It is evident that
 * $$ a_x = -a \cdot \frac {x}{r}, \ a_y = -a \cdot \frac {y}{r}, \ a_z = -a \cdot \frac {z}{r} $$

Substituting in the value of $$a$$ from ($$), we get
 * $$a_x = -\frac{Cx}{r^3}, \ a_y = -\frac{Cy}{r^3}, \ a_z = -\frac{Cz}{r^3} $$

Taking the partial derivative of $$a_x$$ with respect to $$x$$, we obtain
 * $$\frac{\partial a_x}{\partial x} = \frac{ -C r^3 + 3 Cxr^2 \cdot \partial r / \partial x }{ r^6} = -Cr^{-3} + 3 Cx r^{-4} \, \frac{\partial r}{\partial x} $$

and likewise for $$a_y$$ and $$a_z. \,$$ But since $$ r^2 = x^2 + y^2 + z^2, $$
 * $$ \frac{\partial r}{\partial x} = \frac{x}{r} . $$

Substituting this into the above equation,
 * $$\frac{\partial a_x}{\partial x} = \frac{-C(r^2 - 3x^2)}{r^5} $$

and likewise
 * $$\frac{\partial a_y}{\partial y} = \frac{-C(r^2 - 3y^2)}{r^5} \quad$$ and $$\quad \frac{\partial a_z}{\partial z} = \frac{-C(r^2 - 3z^2)}{r^5} $$

Adding together the above equations, we obtain
 * $$ \frac{\partial a_x}{\partial x} + \frac{\partial a_y}{\partial y} + \frac{\partial a_z}{\partial z} = \frac{-C \left( 3r^2 - 3(x^2 + y^2 + z^2) \right)}{r^5} = 0 $$

From the definition of gravitational potential, we may write
 * $$a_x = \frac{\partial \phi}{\partial x} ,\; a_y = \frac{\partial \phi}{\partial y} ,\; a_z = \frac{\partial \phi}{\partial z} $$

Substituting into ($$), we obtain
 * $$ \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} = 0 $$

The above field formulation of Newton's law of gravitation is known as Laplace's equation, valid for regions of zero mass density. It may be written more succinctly using the $$\nabla ^2 $$ operator (pronounced "del square"):
 * $$ \nabla ^2 \phi = 0$$
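
Both the accelerations and the potential can be checked symbolically. A sympy sketch, writing the potential as $$\phi = C/r$$ (an assumption consistent with $$a_x = \partial\phi/\partial x = -Cx/r^3$$ above):

```python
import sympy as sp

x, y, z, C = sp.symbols('x y z C')
r = sp.sqrt(x**2 + y**2 + z**2)

# Potential phi = C/r reproduces the accelerations used above:
# a_x = d(phi)/dx = -C x / r^3, and likewise for y and z.
phi = C / r
assert sp.simplify(sp.diff(phi, x) + C * x / r**3) == 0

# Laplace's equation: the sum of the three second partials
# vanishes everywhere away from the origin.
laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
assert sp.simplify(laplacian) == 0
```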

We observe in ($$) that the field formulation of Newton's law of gravitation is an equation containing second partial derivatives of the gravitational potential. By way of comparison, the vacuum solution of Einstein's field equation ($$) is a set of equations containing nothing higher than the second partial derivatives of the components of the metric tensor. Einstein's field equation expresses the equivalence principle by replacing the concept of a varying gravitational potential originating from action-at-a-distance forces, with the concept of a spacetime varying in shape.

We had noted before that each component of the Ricci tensor $$G_{\sigma\tau}$$ represents the sum of four components of the Riemann curvature tensor $$R^\alpha_{\sigma\tau\rho}.$$ If the components of the Riemann tensor are all zero, then spacetime is flat and the components of $$G_{\sigma\tau}$$ will all be zero. However, the converse is not true. If the components of $$G_{\sigma\tau}$$ are all zero, that does not imply that the components of the Riemann tensor need all be zero.

Even as, in Newtonian theory, $$ \nabla ^2 \phi = 0$$ is the field equation for regions of zero mass density around gravitating bodies, so $$G_{\sigma\tau} = 0$$ is the relativistic field equation for regions of zero mass-energy density around gravitating bodies.

Solving the vacuum field equations
The vacuum field solution of general relativity,
 * $$G_{\sigma\tau} = 0$$

comprises six independent equations containing partial derivatives of the components of the metric tensor $$g.$$ To test these equations, we must use a form of the expression for $$ds^2$$ applicable to the physical situation which we are modeling and which preferably should be in a form convenient for calculation.

The classical tests for general relativity include observations of
 * The anomalous perihelion precession of Mercury.
 * The deflection of light by the Sun
 * The gravitational redshift of light

Since the gravitational field of the Sun is very nearly spherically symmetric and decreases with radial distance from the Sun, a form of the expression for $$ds^2$$ which reflects this symmetry would be convenient for computation of anomalous perihelion precession, the deflection of light by the Sun, and the gravitational redshift. We begin by adopting spherical coordinates.

In three-dimensional Euclidean space, the expression for $$ds^2$$ in terms of spherical coordinates is
 * $$ds^2 = dr^2 + r^2 d\theta^2 + r^2 \sin^2 \theta \cdot d\phi^2 $$

as may be readily derived from $$ds^2 = (dx^1)^2 + (dx^2)^2 + (dx^3)^2$$ with the aid of Fig. 6–7.
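
The derivation from Fig. 6–7 can also be carried out mechanically by pulling the Cartesian expression back through the Jacobian of the coordinate change; an illustrative sympy sketch (not part of the original presentation):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Cartesian coordinates in terms of spherical ones.
X = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
               r * sp.sin(theta) * sp.sin(phi),
               r * sp.cos(theta)])

# ds^2 = dx^2 + dy^2 + dz^2 pulls back through the Jacobian:
# g_{mu nu} = sum_i (dX_i/dq^mu)(dX_i/dq^nu).
J = X.jacobian([r, theta, phi])
g = sp.simplify(J.T * J)

# The result is the diagonal metric diag(1, r^2, r^2 sin^2 theta).
assert sp.simplify(g - sp.diag(1, r**2, r**2 * sp.sin(theta)**2)) == sp.zeros(3)
```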

The expression for flat Minkowski spacetime in four dimensions using Cartesian coordinates is
 * $$ds^2 = -dx^2 - dy^2 -dz^2 + c^2dt^2$$

which in spherical coordinates would be
 * $$ds^2 = -dr^2 - r^2d\theta^2 - r^2 \sin^2 \theta \cdot d \phi^2 + c^2dt^2$$

However, general relativity involves consideration of curved spacetime. It is reasonable to assume that the expression for curved spacetime using spherical coordinates will have the form
 * $$ ds^2 = -e^\lambda dr^2 - e^\mu r^2( d\theta^2 + \sin ^2 \theta \cdot d\phi^2 ) + e^\nu dt^2 $$
 * or, using generalized coordinates,
 * $$ds^2 = -e^\lambda (dx^1)^2 - e^\mu (x^1)^2( (dx^2)^2 + \sin ^2 x^2 \cdot (dx^3)^2 ) + e^\nu (dx^4)^2 $$

where $$x^1, \, x^2, \, x^3, \, x^4$$ represent, respectively, the spherical coordinates $$r, \, \theta, \, \phi, \, t, \;$$ while $$\lambda, \, \mu, \, \nu $$ will be functions only of $$x^1 \equiv r. \,$$ In other words, there will be no directional dependence of these functions, nor will there be any time dependence of these functions.

The requirement for spherical symmetry implies that $$ds^2$$ should not vary when $$\theta$$ and $$\phi$$ are varied, so that $$\theta$$ and $$\phi$$ only occur in the form $$ ( d\theta^2 + \sin ^2 \theta \cdot d\phi^2 ) .$$

Furthermore, there are no product terms of the form $$dx^\sigma dx^\tau$$ where $$\sigma \ne \tau. \;$$ If terms like $$dr \cdot d\theta, \, d\theta \cdot d\phi, \, $$ or $$dr \cdot d\phi$$ existed, then the expression for $$ds^2$$ would be different if we turned in different directions. In particular, the metric needs to be invariant under the reflections $$\theta \rightarrow \theta' = \pi - \theta $$ and $$\phi \rightarrow \phi' = -\phi. \,$$ Likewise, since we are considering a static solution, we exclude product terms such as $$dr \cdot dt $$ and so forth.

This eliminates all of the cross terms of the general expression for $$ds^2$$ presented in ($$). Only the squared terms $$ dr^2, \, d\theta^2, \, d\phi^2, \, dt^2 $$ are used.

Functions $$e^\lambda, \, e^\mu, \, e^\nu$$ are inserted into the coefficients of ($$) to allow for the fact that the spacetime is curved. The form of these functions allows them to be adjusted to fit the scenario which we are modeling, and the expression of these functions as exponentials in the generalized formula is a mathematical convention that
 * ensures that their values are always positive, thus guaranteeing that the signature of the metric (i.e. the excess of plus signs over minus signs) is −2, and
 * simplifies forthcoming calculations involving differentiation and the natural log.

Equation ($$) can be simplified by transforming coordinates:
 * $$e^\mu r^2 \rightarrow \bar r^2 $$
 * or, using generalized coordinates,
 * $$e^\mu(x^1)^2 \rightarrow (\bar x^1)^2$$

By taking $$\bar x^1$$ as a new coordinate, it is possible to eliminate $$e^\mu$$ entirely. We may even drop the bar notation, since any change in $$ (dx^1)^2$$ resulting from the above substitution can be compensated for by modifying function $$\lambda.$$ Equation ($$) hence becomes


 * $$ ds^2 = -e^\lambda dr^2 - r^2 ( d\theta^2 + \sin ^2 \theta \cdot d\phi^2 ) + e^\nu dt^2 $$
 * or, using generalized coordinates,
 * $$ ds^2 = -e^\lambda (dx^1)^2 - (x^1)^2( (dx^2)^2 + \sin ^2 x^2 \cdot (dx^3)^2 ) + e^\nu (dx^4)^2 $$

The task now is to express $$e^\lambda$$ and $$e^\nu$$ as functions of $$x^1.$$

The Schwarzschild metric
From ($$), we have the following:
 * $$ g_{11} = -e^\lambda, \quad g_{22} = -(x^1)^2, \quad g_{33} = -(x^1)^2 \sin^2 x^2, \quad g_{44} = e^\nu $$

and $$g_{\sigma\tau} = 0$$ when $$\sigma \ne \tau.$$

Hence the components of $$g_{\mu\nu}$$ form a diagonal matrix (i.e. have nonzero elements only along the principal diagonal). The determinant of $$g_{\mu\nu}$$ will therefore be simply equal to the product of the elements along the principal diagonal. Representing this determinant by the symbol $$g,$$ we have:
 * $$ g = g_{11} \, g_{22} \, g_{33} \, g_{44} = -e^{\lambda + \nu} \, r^4 \sin^2 \theta $$

Also in this case,
 * $$g^{\sigma\sigma} = 1/g_{\sigma\sigma} $$

(meaning that $$g^{11} = 1/g_{11}, \; g^{22} = 1/g_{22}$$ and so forth), and
 * $$g^{\sigma\tau} = 0$$ when $$\sigma \ne \tau.$$

The above relationships enable us to determine the coefficients $$e^\lambda$$ and $$e^\nu $$ of the metric tensor and to establish the form of the Ricci tensor $$G_{\sigma\tau}$$, which represents the sixteen equations expressed by Equation ($$). In the following, these sixteen equations will be reduced to ten, then to six in the general solution. The Christoffel symbols in the solution will be categorized, and then each term will be individually addressed, ultimately leading to the Schwarzschild metric.

From sixteen equations to ten
We first show that $$G_{\sigma\tau}$$ is symmetric, which reduces $$G_{\sigma\tau} = 0$$ to ten equations. Note the expression $$\Gamma^\alpha_{\sigma\alpha}$$ which is the first term on the right-hand side of ($$). From the definition of the Christoffel symbol (see ($$)),
 * $$\Gamma^\alpha_{\sigma\alpha} = \tfrac{1}{2} g^{\alpha\epsilon} \left( \frac{\partial g_{\sigma\epsilon}}{\partial x^\alpha} + \frac{\partial g_{\alpha\epsilon}}{\partial x^\sigma} - \frac{\partial g_{\sigma\alpha}}{\partial x^\epsilon} \right)  $$

When the above expression is expanded using the Einstein summation convention, it is readily seen that the first and third terms within the parentheses cancel, yielding
 * $$\Gamma^\alpha_{\sigma\alpha} = \tfrac{1}{2} g^{\alpha\epsilon} \frac{\partial g_{\alpha\epsilon}}{\partial x^\sigma} $$

From the definition of the contravariant metric tensor $$g^{\mu\nu},$$ we obtain
 * $$\tfrac{1}{2} g^{\alpha\epsilon} \frac{\partial g_{\alpha\epsilon}}{\partial x^\sigma} = \frac{1}{2g}\frac{\partial g}{\partial x^\sigma} $$

where $$g$$ is the determinant as described above. From basic calculus, we obtain
 * $$ \frac{1}{2g}\frac{\partial g}{\partial x^\sigma}= \frac{\partial}{\partial x^\sigma} \ln \sqrt{-g} \, , $$ the negative of $$g$$ being chosen so that the square root is real.

Hence,
 * $$ \Gamma^\alpha_{\sigma\alpha} = \frac{\partial}{\partial x^\sigma} \ln \sqrt{-g} $$

and by similar reasoning
 * $$ \Gamma^\alpha_{\epsilon\alpha} = \frac{\partial}{\partial x^\epsilon} \ln \sqrt{-g} $$

Substituting these into ($$), we obtain
 * $$ G_{\sigma\tau} = \Gamma^\alpha_{\sigma\epsilon}\Gamma^\epsilon_{\tau\alpha} - \frac{\partial \Gamma^\alpha_{\sigma\tau}}{\partial x^\alpha} + \frac{\partial^2}{\partial x^\sigma \, \partial x^\tau} \ln \sqrt{-g} - \Gamma^\alpha_{\sigma\tau} \frac{\partial}{\partial x^\alpha} \ln \sqrt{-g} $$

It is straightforward to demonstrate that interchange of $$\sigma$$ and $$\tau$$ in ($$) leaves the equations unchanged. To start with, from the properties of the Christoffel symbol,
 * $$\Gamma^\alpha_{\epsilon\tau} = \Gamma^\alpha_{\tau\epsilon}$$

so that the two factors of the first term trade places but are otherwise unchanged ($$\epsilon$$ and $$\alpha$$ are dummy variables that disappear upon expansion using the Einstein summation convention). The values of the second, third and fourth terms of ($$) are likewise unaffected by swapping $$\sigma$$ and $$\tau .$$ Therefore,
 * $$G_{\sigma\tau} = G_{\tau\sigma} $$

so that the number of independent equations is reduced from sixteen to ten.

From ten equations to six
We refer the reader to treatments in standard textbooks such as Grøn & Næss (2011) for information on this step. The reduction of the ten equations of $$G_{\mu\nu} = 0$$ to six is of considerable historical and physical importance, and took Einstein from 1913 to 1915 to resolve. He wished to be able to relate $$G_{\mu\nu}$$ to the energy-momentum tensor. Since energy and momentum are conserved, the four covariant derivatives of the energy-momentum tensor must be zero. Therefore the four covariant derivatives of the Einstein tensor must also be zero, but it was not obvious to Einstein how this should be the case. The mathematics demonstrating that this must be so had actually been developed many years earlier by Luigi Bianchi, but the Bianchi identities were unknown to Einstein in 1913. Furthermore, even if he could reduce the equations from ten to six, he still had the problem that the ten components of the metric tensor $$g_{\mu\nu}$$ would be underdetermined, since he would have only six equations to work with. It was not until the fall of 1915 that Einstein realized that he had a four-fold freedom in the choice of metric tensor, now called a gauge invariance, that reduced the ten $$g \, \text{'s}$$ to six, so that the number of unknowns would match the number of equations that he had available.

Categorizing the Christoffel symbols in the Ricci tensor
The Christoffel symbols in the expression for $$G_{\sigma\tau}$$ presented in ($$) are highly degenerate, and over two hundred terms will drop out in the following analysis.

To accomplish this simplification, we first need to classify the Christoffel symbols in ($$). We distinguish four classes of symbol:

 * Case A: Those where all the Greek letters are alike, i.e. $$\Gamma^\sigma_{\sigma\sigma}$$
 * Case B: Those of the form $$\Gamma^\tau_{\sigma\sigma}$$
 * Case C: Those of the form $$\Gamma^\tau_{\sigma\tau} = \Gamma^\tau_{\tau\sigma}$$
 * Case D: Those where the Greek letters are all different, i.e. $$\Gamma^\rho_{\sigma\tau}$$

According to the definition of the Christoffel symbol ($$),
 * $$\Gamma^\sigma_{\sigma\sigma} = \tfrac{1}{2} g^{\sigma\alpha} \left( \frac{\partial g_{\sigma\alpha}}{\partial x^\sigma} + \frac{\partial g_{\sigma\alpha}}{\partial x^\sigma} - \frac{\partial g_{\sigma\sigma}}{\partial x^\alpha} \right) $$

We had previously noted that $$g_{\sigma\tau} = 0$$ when the indices are not alike; the $$g \, \text{'s}$$ are non-zero only when the indices are the same. Furthermore, $$g^{\sigma\sigma} = 1/g_{\sigma\sigma}.$$ We use these facts to simplify the above equation:
 * $$\Gamma^\sigma_{\sigma\sigma} = \frac{1}{2 g_{\sigma\sigma} } \left( \frac{\partial g_{\sigma\sigma}}{\partial x^\sigma} + \frac{\partial g_{\sigma\sigma}}{\partial x^\sigma} - \frac{\partial g_{\sigma\sigma}}{\partial x^\sigma} \right) $$

Two terms cancel, so that
 * $$\Gamma^\sigma_{\sigma\sigma} = \frac{1}{2 g_{\sigma\sigma} } \frac{\partial g_{\sigma\sigma}}{\partial x^\sigma} $$

which yields, from basic calculus,
 * $$\Gamma^\sigma_{\sigma\sigma} = \frac{1}{2} \frac{\partial }{\partial x^\sigma} \ln g_{\sigma\sigma} $$

One handles the second case in similar fashion:
 * $$\Gamma^\tau_{\sigma\sigma} = \tfrac{1}{2} g^{\tau\alpha} \left( \frac{\partial g_{\sigma\alpha}}{\partial x^\sigma} + \frac{\partial g_{\sigma\alpha}}{\partial x^\sigma} - \frac{\partial g_{\sigma\sigma}}{\partial x^\alpha} \right) $$

Here, $$g^{\tau\alpha}$$ is non-zero only when $$\alpha = \tau.$$ This case is distinguished from the first case because $$\tau \ne \sigma,$$ so that the first two terms within the parentheses are zero. Hence,
 * $$\Gamma^\tau_{\sigma\sigma} = -\tfrac{1}{2} g^{\tau\tau} \frac{\partial g_{\sigma\sigma}}{\partial x^\tau} $$

which yields
 * $$\Gamma^\tau_{\sigma\sigma} = -\frac{1}{2 g_{\tau\tau}} \frac{\partial g_{\sigma\sigma}}{\partial x^\tau} $$

Likewise,
 * $$\Gamma^\tau_{\sigma\tau} = \Gamma^\tau_{\tau\sigma} = \frac{1}{2} \frac{\partial }{\partial x^\sigma} \ln g_{\tau\tau}$$
 * $$\Gamma^\rho_{\sigma\tau} = 0$$

Term-by-term analysis of Case A
For $$\sigma = 1 ,$$ and remembering the relationships in ($$),
 * $$\Gamma^1_{11} = \frac{1}{2} \frac{\partial}{\partial x^1} \ln g_{11} = $$ $$\frac{1}{2} \frac{\partial}{\partial r} \ln (- e^\lambda)$$

Then
 * $$\Gamma^1_{11} = \frac{1}{2} \frac{-e^\lambda}{-e^\lambda} \frac{\partial \lambda}{\partial r} = $$ $$\frac{1}{2} \frac{\partial \lambda}{\partial r} = \tfrac{1}{2} \lambda' \, ,$$

where $$ \lambda'$$ represents $$ \partial \lambda / \partial x^1$$ or $$\partial \lambda / \partial r \, .$$

For $$\sigma = 2$$
 * $$\Gamma^2_{22} = \frac{1}{2} \frac{\partial} {\partial x^2} \ln g_{22} = $$ $$\frac{1}{2} \frac{\partial}{\partial x^2} \ln \left( -(x^1)^2 \right) = \frac{1}{2} \frac{\partial}{\partial \theta} \ln (-r^2) = 0 \, ,$$

since $$r$$ and $$\theta$$ are independent variables.

For $$\sigma = 3$$ and $$\sigma = 4, $$ we have:
 * $$\Gamma^3_{33} = \Gamma^4_{44} = 0 \, .$$

Term-by-term analysis of Case B
Let us first look at $$\sigma = 1, \, \tau=2 \,:$$
 * $$ \Gamma^2_{11} = - \frac{1}{2 g_{22}} \frac{\partial}{\partial x^2} g_{11} = - \frac{1}{2 g_{22}} \frac{\partial}{\partial x^2} (-e^\lambda)$$

Since $$\lambda$$ was defined as being a function of $$x^1 \equiv r$$ only, the partial with respect to $$x^2 \equiv \theta$$ is equal to zero,
 * $$ \Gamma^2_{11} = 0 .$$

In like manner, we can work through all of the other symbols in this case.

Complete list of non-zero Christoffel symbols in $$G_{\sigma\tau}$$
In all, there are 4 specific examples of Case A, $$4 \cdot 3 = 12$$ combinations of $$\sigma$$ and $$\tau$$ for Case B, $$4 \cdot 3 = 12$$ combinations of $$\sigma$$ and $$\tau$$ for Case C, and $$(4 \cdot 3 \cdot 2) / 2 = 12$$ combinations of $$\sigma, \, \tau, \, \rho$$ for Case D (since the value of the Christoffel symbol is unchanged when the two lower indices are swapped).

Hence, there are 40 distinct combinations, 31 of which reduce to zero. The complete list of non-zero Christoffel symbols in $$G_{\sigma\tau}$$ is:
 * $$ \left. \begin{align} &\Gamma^1_{11} = \tfrac{1}{2} \lambda' \\ &\Gamma^2_{12} = \Gamma^2_{21} = \frac{1}{r} \\ &\Gamma^3_{13} = \Gamma^3_{31} = \frac{1}{r} \\ &\Gamma^4_{14} = \Gamma^4_{41} = \tfrac{1}{2} \nu' \\ &\Gamma^1_{22} = -r e^{-\lambda} \\ &\Gamma^3_{23} = \cot \theta \\ &\Gamma^1_{33} = -r \sin^2 \theta \cdot e^{-\lambda} \\ &\Gamma^2_{33} = -\sin \theta \cdot \cos \theta \\ &\Gamma^1_{44} = \tfrac{1}{2} e^{\nu - \lambda} \cdot \nu' \end{align} \right\} $$

where $$\nu' \equiv \frac{\partial \nu}{\partial x^1} \equiv \frac{\partial \nu}{\partial r} \, . $$ After dropping all of the (over 200) zero terms from ($$), there remain only five equations with a much reduced number of terms. Here are the remaining equations of $$G_{\sigma\tau} = 0$$ after the zero terms have been eliminated: $$ \begin{align} G_{11} = & \; 0 \\ = &\; \Gamma^1_{11}\Gamma^1_{11} + \Gamma^2_{12}\Gamma^2_{21} + \Gamma^3_{13}\Gamma^3_{31} + \Gamma^4_{14}\Gamma^4_{41} \\ &-\frac{\partial}{\partial x^1}\Gamma^1_{11} + \frac{\partial^2}{\partial (x^1)^2} \ln \sqrt{-g} \\ &-\Gamma^1_{11}\frac{\partial}{\partial x^1} \ln \sqrt{-g} \end{align} $$

$$ \begin{align} G_{22} = & \; 0 \\ = &\; 2 \, \Gamma^1_{22}\Gamma^2_{12} + \Gamma^3_{23}\Gamma^3_{23} \\ &-\frac{\partial}{\partial x^1}\Gamma^1_{22} + \frac{\partial^2}{\partial (x^2)^2} \ln \sqrt{-g} \\ &-\Gamma^1_{22}\frac{\partial}{\partial x^1} \ln \sqrt{-g} \end{align} $$

$$ \begin{align} G_{33} = & \; 0 \\ = &\; 2 \, \Gamma^1_{33}\Gamma^3_{13} + 2 \, \Gamma^2_{33}\Gamma^3_{23} \\ &-\Gamma^1_{33}\frac{\partial}{\partial x^1} \ln \sqrt{-g} \\ &-\Gamma^2_{33}\frac{\partial}{\partial x^2} \ln \sqrt{-g} \end{align} $$

$$ \begin{align} G_{44} = & \; 0 \\ = &\; 2 \, \Gamma^1_{44}\Gamma^4_{14} - \frac{\partial}{\partial x^1} \Gamma^1_{44} \\ &-\Gamma^1_{44}\frac{\partial}{\partial x^1} \ln \sqrt{-g} \end{align} $$

$$ \begin{align} G_{12} = & \; 0 \\ = &\, \Gamma^3_{13}\Gamma^3_{23} - \Gamma^2_{12} \frac{\partial}{\partial x^2} \ln \sqrt{-g} \end{align} $$

We now substitute into the above five equations the values from ($$) and the value of $$g$$ from ($$):

$$ \begin{align} G_{11} = &\; 0 \\ = &\; \tfrac{1}{4}\lambda'^2 + \frac{1}{r^2} + \frac{1}{r^2} + \tfrac{1}{4} \nu'^2 - \tfrac{1}{2} \lambda'' \\ &\; + \left( \tfrac{1}{2} \lambda'' + \tfrac{1}{2}\nu'' - \frac{2}{r^2} \right) - \tfrac{1}{2} \lambda' \left( \tfrac{1}{2}\lambda' + \tfrac{1}{2}\nu' + \frac{2}{r} \right) \\ = &\; \tfrac{1}{4}\nu'^2 + \tfrac{1}{2}\nu'' - \tfrac{1}{4}\lambda' \nu' - \frac{\lambda'}{r} \end{align} $$

$$ \begin{align} G_{22} = &\; 0 \\ = &\; e^{-\lambda} \left[ 1 + \tfrac{1}{2}r \left(\nu' - \lambda' \right) \right] -1 \end{align} $$

$$ \begin{align} G_{33} = &\; 0 \\ = &\; \sin^2 \theta \cdot e^{-\lambda} \left[ 1 + \tfrac{1}{2}r \left(\nu' - \lambda'   \right) \right] - \sin^2 \theta \end{align} $$

$$ \begin{align} G_{44} = &\; 0 \\ = &\; e^{\nu - \lambda} \left(-\tfrac{1}{2}\nu'' + \tfrac{1}{4} \lambda' \nu' - \tfrac{1}{4}\nu'^2 - \frac{\nu'}{r} \right) \end{align} $$

where $$\lambda'' \equiv \frac{\partial ^2 \lambda}{\partial r^2} \,$$ and $$\, \nu'' \equiv \frac{\partial^2 \nu}{\partial r^2} \, .$$

On the other hand,
 * $$G_{12} = \frac{1}{r} \cot \theta - \frac{1}{r} \cot \theta $$

which is identically zero and is therefore eliminated, leaving four equations.

Also note that the expression for $$G_{33}$$ contains the expression for $$G_{22}.$$ The two equations are not independent, so we are left with only three independent equations.
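The table of non-zero Christoffel symbols used above can be spot-checked mechanically. The following sketch (my own, not part of the original derivation) builds the ansatz metric $$ds^2 = -e^\lambda dr^2 - r^2 d\theta^2 - r^2 \sin^2 \theta \, d\phi^2 + e^\nu dt^2$$ in sympy and evaluates a few symbols from the standard formula; indices here are 0-based, so `christoffel(0, 1, 1)` corresponds to $$\Gamma^1_{22}$$.

```python
# Spot-check of the Christoffel symbols of the spherically symmetric ansatz
# metric (sketch; variable names and 0-based indexing are my own choices).
import sympy as sp

r, th, ph, t = sp.symbols('r theta phi t')
lam = sp.Function('lambda')(r)   # λ(r)
nu = sp.Function('nu')(r)        # ν(r)
x = [r, th, ph, t]

# Metric with signature (-, -, -, +), matching the article's conventions.
g = sp.diag(-sp.exp(lam), -r**2, -r**2 * sp.sin(th)**2, sp.exp(nu))
ginv = g.inv()

def christoffel(s, m, n):
    """Γ^s_{mn} = (1/2) g^{s,rho} (d_m g_{rho,n} + d_n g_{rho,m} - d_rho g_{mn})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[s, rho] *
        (sp.diff(g[rho, n], x[m]) + sp.diff(g[rho, m], x[n]) - sp.diff(g[m, n], x[rho]))
        for rho in range(4)))

# Compare with the article's list (the article's indices are 1-based):
assert sp.simplify(christoffel(0, 1, 1) + r * sp.exp(-lam)) == 0           # Γ^1_22
assert sp.simplify(christoffel(1, 0, 1) - 1 / r) == 0                      # Γ^2_12
assert sp.simplify(christoffel(1, 2, 2) + sp.sin(th) * sp.cos(th)) == 0    # Γ^2_33
assert sp.simplify(christoffel(3, 0, 3) - sp.diff(nu, r) / 2) == 0         # Γ^4_14
```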

Solving for eλ and eν: The Schwarzschild metric
If we divide $$G_{44}$$ by $$e^{\nu - \lambda}$$ and add the result to $$G_{11},$$ we get
 * $$\lambda' + \nu' = 0 \, .$$

Integrating ($$) yields $$\, \lambda = -\nu + C \, $$ where $$C$$ is a constant of integration. The value of the constant can be found by noting the following boundary condition on ($$): At points infinitely distant from gravitating masses, spacetime is flat so that the coefficients $$e^\lambda$$ and $$e^\nu$$ of $$dr^2$$ and $$dt^2$$ are both equal to one, i.e.

Infinitely distant from gravitating masses, therefore, $$\lambda = -\nu = 0 $$ and so $$C$$ must be zero. Hence,
 * $$\lambda = -\nu \, .$$

Substituting ($$) and ($$) into the expression for $$G_{22} $$ above yields $$ \begin{align} G_{22} &= 0 \\ &= e^\nu(1 + r\nu') - 1 \end{align} $$

which informs us that
 * $$e^\nu \left( 1 + r\nu' \right) = 1 \, .$$

Let $$\; \gamma = e^\nu \,$$ which implies $$\, \gamma' = e^\nu \nu'. \,$$ Substituting into ($$) and rearranging, we get the separable differential equation $$\, \gamma + r \gamma' = 1 \,$$ which yields
 * $$\gamma = 1 - \frac{2m}{r}$$

where $$2m$$ is a constant of integration expressed as such for reasons that will be discussed later on.

We have thus determined $$e^\lambda$$ and $$e^\nu$$:
 * $$e^\nu \;=\; 1/e^\lambda \;=\; \gamma \;=\; 1 - \frac{2m}{r} \;=\; 1 - \frac{2m}{x^1} $$
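As a quick consistency check (my own, in sympy), $$\gamma = 1 - 2m/r$$ does satisfy the separable equation $$\gamma + r\gamma' = 1$$:

```python
# Verify that γ = 1 - 2m/r solves γ + r γ' = 1 (sketch, not from the original).
import sympy as sp

r, m = sp.symbols('r m', positive=True)
gamma = 1 - 2 * m / r

residual = sp.simplify(gamma + r * sp.diff(gamma, r) - 1)
assert residual == 0  # the ODE is satisfied identically
```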

Equation ($$) therefore becomes
 * $$ ds^2 = - \gamma^{-1} dr^2 - r^2 ( d\theta^2 + \sin^2 \theta \cdot d\phi^2 ) + \gamma \, dt^2 $$
 * $$ ds^2 = - \left(1 - \frac{2m}{r}\right)^{-1} dr^2 - r^2 ( d\theta^2 + \sin^2 \theta \cdot d\phi^2 ) + \left(1 - \frac{2m}{r}\right) dt^2 $$

This is the famous Schwarzschild metric.
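A number that recurs in the applications below is the Sun's geometric mass, $$m = GM/c^2$$, which has units of length. A short numerical sketch (constant values as quoted later in this article; the variable names are mine):

```python
# Geometric mass of the Sun, m = G M / c^2 (units of length).
G = 6.672e-11   # gravitational constant, m^3 / (kg s^2)
M = 1.99e30     # mass of the Sun, kg
c = 2.998e8     # speed of light, m/s

m_geometric = G * M / c**2   # in meters
print(f"m = {m_geometric / 1000:.3f} km")   # ~1.477 km
```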

Movement along geodesics
According to Newton's laws of motion, a planet orbiting the Sun would move in a straight line except for being pulled off course by the Sun's gravity. According to general relativity, there is no such thing as gravitational force. Rather, as discussed in section Basic propositions, a planet orbiting the Sun continuously follows the local "nearest thing to a straight line", which is to say, it follows a geodesic path.

Finding the equation of a geodesic requires knowing something about the calculus of variations, which is outside the scope of the typical undergraduate math curriculum, so we will not go into details of the analysis.

Determining the straightest path between two points resembles the task of finding the maximum or minimum of a function. In ordinary calculus, given the function $$y = f(x), \,$$ an "extremum" or "stationary point" may be found wherever the derivative of the function is zero.

In the calculus of variations, we seek to minimize the value of the functional between the start and end points. In the example shown in Fig. 6–8, this is accomplished by finding the function for which
 * $$\delta \int^B_A ds = 0$$

where $$\delta$$ is the variation and the integral of $$ds$$ is taken along the world line.

Skipping the details of the derivation, the general formula for the equation of a geodesic is

valid for all dimensionalities and shapes of space(time). As a geometric expression, the derivative is with respect to the line element, whereas classical theory involves time derivatives.

Let us consider a flat, three dimensional Euclidean space using Cartesian coordinates. For such a space,
 * $$ g_{11} = g_{22} = g_{33} = 1 \,$$ and
 * $$ g_{\mu\nu} = 0 \,$$ for $$\mu \ne \nu$$

The derivatives of the $$g \, \text{'s}$$ in the Christoffel symbol ($$) are all zero, so ($$) becomes

After replacing $$ds$$ by the proper time $$dt$$ (the time along the timelike world line, i.e. the time experienced by the moving object) and expanding the summation, we get

which is to say, an object freely moving in Euclidean three-space travels with unaccelerated motion along a straight line.
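This conclusion can be confirmed symbolically. The following sketch (mine, using sympy) solves the geodesic equation for one Cartesian coordinate in flat space, where it reduces to $$d^2x/dt^2 = 0$$:

```python
# Sketch: in flat Euclidean space with Cartesian coordinates, the geodesic
# equation reduces to d²x/dt² = 0, whose general solution is a straight line.
import sympy as sp

t = sp.symbols('t')
xf = sp.Function('x')
sol = sp.dsolve(sp.Derivative(xf(t), t, 2), xf(t))
print(sol)   # x(t) = C1 + C2*t, i.e. unaccelerated straight-line motion
```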

Orbital motion: Stability of the orbital plane
Equation ($$) is a general expression for the geodesic. To apply it to the gravitational field around the Sun, the $$g \, \text{'s}$$ in the Christoffel symbols must be replaced with those specific to the Schwarzschild metric.

Equations ($$) present the values of $$\Gamma^\sigma_{\alpha\beta}$$ in terms of $$\lambda, \, \nu, \, r, \, \theta$$ while ($$) allows simplification of the expression to terms of $$\nu , \, r, \, \theta. \,$$ Since $$e^\nu = \gamma$$ and ($$) allows us to express $$\gamma$$ in terms of $$r$$, we can thus express $$\Gamma^\sigma_{\alpha\beta}$$ in terms of $$r$$ and $$\theta.$$

Remember that ($$) is actually four equations. In particular, $$x^\sigma$$ for $$\sigma = 2$$ corresponds to $$\theta$$ in Fig. 6-7. Suppose we launch an object into orbit around the Sun with $$\theta = \pi / 2$$ and an initial velocity in the $$xy$$ plane. How would the object subsequently behave? Equation ($$) for $$x^2 \equiv \theta$$ becomes

From ($$), we know that the non-zero Christoffel symbols for $$\sigma = 2$$ are
 * $$\Gamma^2_{12} = \Gamma^2_{21} = \frac{1}{r} $$

and
 * $$\Gamma^2_{33} = -\sin \theta \cdot \cos \theta$$

so that in summing ($$) over all values of $$\alpha$$ and $$\beta, $$ we get

Since we stipulated an initial $$\theta = \pi / 2$$ and an initial velocity in the $$xy$$ plane, $$\cos \theta = 0$$ and $$d\theta / ds = 0$$ so that ($$) becomes

In other words, a planet launched into orbit around the Sun remains in the plane in which it was launched, just as in Newtonian physics.

Orbital motion: Modified Keplerian ellipses
Starting with ($$), we explore the behavior of the other variables of the geodesic equation applied to the Schwarzschild metric:

For $$\sigma = 1,$$ ($$) becomes
 * $$\frac {d^2 x^1}{ds^2} + \Gamma^1_{11} \left( \frac{dx^1}{ds} \right)^2 + \Gamma^1_{22} \left( \frac{dx^2}{ds} \right)^2 + \Gamma^1_{33} \left( \frac{dx^3}{ds} \right)^2 + \Gamma^1_{44} \left( \frac{dx^4}{ds}\right)^2 = 0$$
 * or
 * $$\frac{d^2r}{ds^2} + \tfrac{1}{2} \lambda' \left( \frac{dr}{ds} \right)^2 - re^{-\lambda} \left( \frac{d\theta}{ds} \right)^2 - r \sin^2 \theta \cdot e^{-\lambda} \left( \frac{d\phi}{ds} \right)^2 + \tfrac{1}{2} e^{\nu - \lambda} \nu' \left( \frac{dt}{ds} \right)^2 = 0$$

Since we have stipulated that $$\theta = \pi/2, \;$$ $$d\theta / ds = 0 \,$$ and $$\, \sin \theta = 1, \,$$ the above equation therefore becomes

Likewise, for $$\sigma = 3 \,$$ and $$\, \sigma = 4, \,$$ we get

($$), ($$), ($$), and ($$) may be combined to get: {{NumBlk|| $$ \left. \begin{align} &\frac{d^2 u}{d\phi^2} + u = \frac{m}{h^2} + 3 m u^2 \\ &r^2 \frac{d\phi}{ds} = h \end{align} \right\} $$| $$}}

where $$m$$ and $$h$$ are constants of integration and $$u = 1/r.$$

The equations above are those of an object in orbit around a central mass. The second of the two equations is essentially a statement of the conservation of angular momentum. The first of the two equations is expressed in this form so that it may be compared with the Binet equation, devised by Jacques Binet in the 1800s while exploring the shapes of orbits under alternative force laws.

For an inverse square law, the Binet equation predicts, in agreement with Newton, that orbits are conic sections. Given a Newtonian inverse square law, the equations of motion are: {{NumBlk||$$ \left. \begin{align} &\frac{d^2 u}{d\phi^2} + u = \frac{m}{h^2} \\ &r^2 \frac{d\phi}{dt} = h \end{align} \right\} $$| $$ }}

where $$m$$ is the mass of the Sun, $$r$$ is the orbital radius, and $$d\phi / dt$$ is the angular velocity of the planet.
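The Newtonian Binet equation has the familiar conic sections as solutions, which can be verified symbolically. In this sketch (mine, using sympy), `e` denotes the orbital eccentricity of the trial solution:

```python
# Check (sketch): the conic section u(φ) = (m/h²)(1 + e cos φ) solves the
# Newtonian Binet equation d²u/dφ² + u = m/h².
import sympy as sp

phi = sp.symbols('phi')
m, h, e = sp.symbols('m h e', positive=True)
u = m / h**2 * (1 + e * sp.cos(phi))

residual = sp.simplify(sp.diff(u, phi, 2) + u - m / h**2)
assert residual == 0
```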

The relativistic equations for orbital motion ($$) are observed to be nearly identical to the Newtonian equations ($$) except for the presence of $$3 m u^2$$ in the relativistic equations and the use of $$ds$$ rather than $$dt.$$

The Binet equation provides the physical meaning of $$m,$$ which we had introduced as an arbitrary constant of integration in the derivation of the Schwarzschild metric in ($$).

Orbital motion: Anomalous precession
The presence of the term $$3mu^2$$ in ($$) means that the orbit does not form a closed loop, but rather shifts slightly with each revolution, as illustrated (in much exaggerated form) in Fig. 6–9.

Now in fact, there are a number of effects in the Solar System that cause the perihelia of planets to deviate from closed Keplerian ellipses even in the absence of relativity. Newtonian theory predicts closed ellipses only for an isolated two-body system; the planets perturb each other's orbits, so that Mercury's orbit, for instance, would precess by slightly over 532 arcsec/century due to these Newtonian effects.

In 1859, Urbain Le Verrier, after extensive analysis of historical data on timed transits of Mercury over the Sun's disk from 1697 to 1848, concluded that there was a significant excess deviation of Mercury's orbit from the precession predicted by these Newtonian effects, amounting to 38 arcseconds/century. (This estimate was later refined to 43 arcseconds/century by Simon Newcomb in 1882.) Over the next half-century, extensive observations definitively ruled out the hypothetical planet Vulcan, proposed by Le Verrier as orbiting between Mercury and the Sun, that might have accounted for this discrepancy.

Starting from ($$), the excess angular advance of Mercury's perihelion per orbit may be calculated:

The first equality is in relativistic units, while the second equality is in MKS units. In the second equality, we replace $$m,$$ the geometric mass (units of length), with $$M,$$ the mass in kilograms.
 * $$G$$ is the gravitational constant ($$6.672 \times 10^{-11} \ \mathrm{m^3 / (kg \cdot s^2)}$$)
 * $$M$$ is the mass of the Sun ($$1.99 \times 10^{30} \ \mathrm{kg}$$)
 * $$a$$ is the semi-major axis of Mercury's orbit ($$5.791 \times 10^{10} \ \mathrm{m}$$)
 * $$c$$ is the speed of light ($$2.998 \times 10^8 \ \mathrm{m/s}$$)
 * $$e$$ is Mercury's orbital eccentricity (0.20563)

We find that
 * $$\Delta \phi_{orbit} = 5.021 \times 10^{-7} \text{radian}$$

which works out to 43 arcsec/century.
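The arithmetic can be reproduced with the standard per-orbit formula $$\Delta\phi = 6\pi GM / \left( c^2 a (1 - e^2) \right)$$ and the constants quoted above. This numeric sketch is mine; Mercury's orbital period (87.969 days) is an added input not stated in the article:

```python
import math

# Constants as quoted in the article.
G = 6.672e-11   # gravitational constant, m^3 / (kg s^2)
M = 1.99e30     # mass of the Sun, kg
c = 2.998e8     # speed of light, m/s
a = 5.791e10    # semi-major axis of Mercury's orbit, m
e = 0.20563     # orbital eccentricity

# Per-orbit perihelion advance: Δφ = 6πGM / (c² a (1 - e²))
dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / 87.969   # Julian days per century / orbital period
arcsec_per_century = dphi * (180 / math.pi) * 3600 * orbits_per_century
print(f"{dphi:.3e} rad/orbit -> {arcsec_per_century:.1f} arcsec/century")
```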

Deflection of light in a gravitational field
The most famous of the early tests of general relativity was the measurement of the gravitational deflection of starlight passing near the Sun. As noted before, anything moving freely in spacetime travels along the path of a geodesic. This includes light.

Consider Fig. 6–10. Line $$AE$$ represents the straight-line path of a ray of light in the absence of any large mass along its path. If the ray passes near the Sun, however, its path is deflected so that it follows the curved line $$AF, \,$$ which we illustrate as just grazing the Sun of radius $$R. \,$$ An observer situated at $$F$$ sees the star as apparently being at position $$B$$ rather than at its true position $$A.$$ The angle $$\alpha$$ is the angle between the true position of the star and its apparent position.

We have learned above, in the Spacetime interval section of this article, that the interval between two events on the world line of a particle moving at the speed of light is zero. Equations ($$) present the geodesic equation ($$) applied to the Schwarzschild metric ($$). Substituting $$\, ds = 0 \,$$ in the second equation of ($$) gives $$\, h = \infty, \, $$ which results in the first equation of ($$) becoming
 * $$\frac{d^2 u}{d \phi^2} + u = 3 mu^2 $$

which is hence a differential equation describing the path of light passing by a massive spherical object. Solving this differential equation yields, in Cartesian coordinates:
 * $$x = R - \frac{m}{R} \frac{x^2 + 2y^2}{\sqrt{x^2 + y^2}} $$

Given that $$\alpha$$ is a very small angle, the asymptotes of this curve are:
 * $$x = R \pm 2y\frac{m}{R},$$

where $$m, \,$$ in relativistic units, is a length.

The angle $$\alpha$$ may be calculated from the slopes of the asymptotes:

which for very small $$\, \alpha \,$$ and $$\, m \ll R \,$$ becomes

Plugging in $$R = 6.955 \times 10^5 \text{km}$$ and $$m = 1.477 \, \text{km}, \,$$ we get
 * $$\alpha = 8.494 \times 10^{-6} \text{rad} = 1.75 \, \text{arcsec}$$
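This value follows from the small-angle result $$\alpha = 4m/R$$ for a ray grazing the Sun, the standard closed form of the asymptote calculation. The numeric sketch below is mine:

```python
import math

m = 1.477      # geometric mass of the Sun, km
R = 6.955e5    # solar radius, km

alpha = 4 * m / R   # deflection angle in radians for a grazing ray
print(f"alpha = {alpha:.3e} rad = {alpha * 180 / math.pi * 3600:.2f} arcsec")
```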

The earliest measurement of the gravitational deflection of light, the 1919 Eddington experiment, established the validity of this figure to within broad limits. Modern measurements have validated the accuracy of this prediction to the 0.03% level.

Gravitational redshift
The third of the classical tests of relativity is the prediction of gravitational red shift. This was initially thought to represent an important test of general relativity because the Schwarzschild solution was employed in its derivation. However, as demonstrated above in the section Curvature of time, red shift is predicted by any theory of gravitation that is consistent with the equivalence principle. This includes Newtonian gravitation.

The derivation presented in Curvature of time uses kinematic arguments and does not make use of the field equations. Nevertheless, it is instructive to compare the kinematic arguments presented earlier with the more geometric approach accorded by use of the Schwarzschild solution.

Let $$ds$$ represent the invariant proper time of the period (i.e. inverse frequency) of some well-defined spectral line of an element. We know from special relativity that although observers in different frames may measure different $$dx,\, dy, \,dz,\, dt$$ for an interval, the interval itself does not change from frame to frame. Likewise, the proper time of the period should not change with position in a gravitational field. Assume that a distant observer is at rest relative to an atom at the surface of the Sun as it emits light. In the Schwarzschild solution ($$), we may write $$dr = d \theta = d \phi = 0, $$ leaving $$dt$$ as the only non-zero term. The Schwarzschild solution reduces to
 * $$ ds = \sqrt{1 - \frac{2m}{r}} dt$$

If $$m \ll r$$,
 * $$ds \approx \left( 1 - \frac{m}{r} \right) dt$$

Plugging in the values for the Sun's geometric mass and radius, we conclude that the distant observer should observe the light emitted by the atom as being redshifted by a factor $$1 + 1.477/695500 = 1.00000212 \, .$$
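The factor can be checked numerically: the period seen by the distant observer is $$dt = ds \, (1 - 2m/r)^{-1/2} \approx (1 + m/r)\, ds.$$ A sketch (mine) with the same solar values used earlier:

```python
import math

m = 1.477      # geometric mass of the Sun, km
r = 6.955e5    # solar radius, km

# Period seen by the distant observer: dt = ds / sqrt(1 - 2m/r)
factor_exact = 1 / math.sqrt(1 - 2 * m / r)
factor_approx = 1 + m / r   # first-order expansion for m << r
print(f"{factor_exact:.8f}  {factor_approx:.8f}")
```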

This is an extremely small factor of redshift, and confirmation took many years. See Gravitational redshift and time dilation for details.