Zero to the power of zero

Zero to the power of zero, denoted by $0^{0}$, is a mathematical expression that is either defined as 1 or left undefined, depending on context. In algebra and combinatorics, one typically defines $0^{0} = 1$. In mathematical analysis, the expression is sometimes left undefined. Computer programming languages and software also have differing ways of handling this expression.

Discrete exponents
Many widely used formulas involving natural-number exponents require $0^{0}$ to be defined as $1$. For example, the following three interpretations of $b^{0}$ make just as much sense for $b = 0$ as they do for positive integers $b$:
 * The interpretation of $b^{0}$ as an empty product assigns it the value $1$.
 * The combinatorial interpretation of $b^{0}$ is the number of 0-tuples of elements from a $b$-element set; there is exactly one 0-tuple.
 * The set-theoretic interpretation of $b^{0}$ is the number of functions from the empty set to a $b$-element set; there is exactly one such function, namely, the empty function.
All three of these specialize to give $0^{0} = 1$.
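The combinatorial interpretation can be checked directly in Python, whose `itertools.product` enumerates $k$-tuples and whose built-in `**` follows the $0^{0} = 1$ convention (the helper name below is illustrative, not standard):

```python
from itertools import product

def count_tuples(b: int, k: int) -> int:
    """Count k-tuples of elements drawn from a b-element set; equals b**k."""
    return len(list(product(range(b), repeat=k)))

print(count_tuples(3, 2))  # 9, matching 3**2
print(count_tuples(3, 0))  # 1: the single 0-tuple is the empty tuple ()
print(count_tuples(0, 0))  # 1: even over the empty set, the empty tuple exists
print(0**0)                # 1 in Python, consistent with the count above
```

Note that `product(..., repeat=0)` yields exactly one result, the empty tuple, regardless of the base set, which mirrors the empty-function argument above.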

Polynomials and power series
When evaluating polynomials, it is convenient to define $0^{0}$ as $1$. A (real) polynomial is an expression of the form $a_{0}x^{0} + ⋅⋅⋅ + a_{n}x^{n}$, where $x$ is an indeterminate, and the coefficients $a_{i}$ are real numbers. Polynomials are added termwise, and multiplied by applying the distributive law and the usual rules for exponents. With these operations, polynomials form a ring $R[x]$. The multiplicative identity of $R[x]$ is the polynomial $x^{0}$; that is, $x^{0}$ times any polynomial $p(x)$ is just $p(x)$. Also, polynomials can be evaluated by specializing $x$ to a real number. More precisely, for any given real number $r$, there is a unique unital $R$-algebra homomorphism $ev_{r} : R[x] → R$ such that $ev_{r}(x) = r$. Because $ev_{r}$ is unital, $ev_{r}(x^{0}) = 1$. That is, $r^{0} = 1$ for each real number $r$, including 0. The same argument applies with $R$ replaced by any ring.

Defining $0^{0} = 1$ is necessary for many polynomial identities. For example, the binomial theorem $$(1+x)^{n}=\sum_{k=0}^{n}\binom{n}{k}x^{k}$$ holds for $x = 0$ only if $0^{0} = 1$.
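The binomial theorem at $x = 0$ can be verified numerically in Python, which evaluates `0**0` as `1` (the helper name below is illustrative):

```python
from math import comb

def binomial_sum(x: float, n: int) -> float:
    """Right-hand side of the binomial theorem: sum of C(n, k) * x**k."""
    return sum(comb(n, k) * x**k for k in range(n + 1))

# At x = 0 only the k = 0 term survives, and it equals 0**0 = 1,
# matching the left-hand side (1 + 0)**n == 1.
print(binomial_sum(0, 5))  # 1
print((1 + 0)**5)          # 1
```

If `0**0` were treated as anything other than `1`, the `k = 0` term would break this identity at `x = 0`.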

Similarly, rings of power series require $x^{0}$ to be defined as 1 for all specializations of $x$. For example, identities like $$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n$$ and $$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$$ hold for $x = 0$ only if $0^{0} = 1$.

In order for the polynomial $x^{0}$ to define a continuous function $R → R$, one must define $0^{0} = 1$.

In calculus, the power rule $$\frac{d}{dx}x^n=nx^{n-1}$$ is valid for $n = 1$ at $x = 0$ only if $0^{0} = 1$.

Continuous exponents


Limits involving algebraic operations can often be evaluated by replacing subexpressions with their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form. The expression $0^{0}$ is an indeterminate form: Given real-valued functions $f(t)$ and $g(t)$ approaching $0$ (as $t$ approaches a real number or $±∞$) with $f(t) > 0$, the limit of $f(t)^{g(t)}$ can be any non-negative real number or $+∞$, or it can diverge, depending on $f$ and $g$. For example, each limit below involves a function $f(t)^{g(t)}$ with $f(t), g(t) → 0$ as $t → 0^{+}$ (a one-sided limit), but their values are different: $$ \lim_{t \to 0^+} {t}^{t} = 1 ,$$ $$ \lim_{t \to 0^+} \left(e^{-1/t^2}\right)^t = 0, $$ $$ \lim_{t \to 0^+} \left(e^{-1/t^2}\right)^{-t} = +\infty, $$ $$ \lim_{t \to 0^+} \left(e^{-1/t}\right)^{at} = e^{-a} .$$
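These differing limits can be probed numerically in Python; the sample values of $t$ below are chosen to avoid floating-point underflow and are otherwise arbitrary:

```python
from math import exp

# t**t -> 1 as t -> 0+
for t in (0.1, 0.01, 0.001):
    print(t**t)                  # approaches 1

# (e**(-1/t**2))**t -> 0  (analytically equal to e**(-1/t))
for t in (0.5, 0.2, 0.1):
    print(exp(-1 / t**2)**t)     # approaches 0

# (e**(-1/t))**(a*t) -> e**(-a); in fact it equals e**(-a) for every t > 0
a = 3.0
for t in (0.5, 0.2, 0.1):
    print(exp(-1 / t)**(a * t))  # constant at exp(-3) ~ 0.0498
```

Each expression has base and exponent tending to $0$, yet the computed values head toward $1$, $0$, and $e^{-3}$ respectively, illustrating why $0^{0}$ is indeterminate as a limiting form.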

Thus, the two-variable function $x^{y}$, though continuous on the set $\{(x, y) : x > 0\}$, cannot be extended to a continuous function on $\{(x, y) : x > 0\} ∪ \{(0, 0)\}$, no matter how one chooses to define $0^{0}$.

On the other hand, if $f$ and $g$ are analytic functions on an open neighborhood of a number $c$, then $f(t)^{g(t)} → 1$ as $t$ approaches $c$ from any side on which $f$ is positive. This and more general results can be obtained by studying the limiting behavior of the function $$\log(f(t)^{g(t)})=g(t)\log f(t).$$

Complex exponents
In the complex domain, the function $z^{w}$ may be defined for nonzero $z$ by choosing a branch of $\log z$ and defining $z^{w}$ as $e^{w \log z}$. This does not define $0^{w}$ since there is no branch of $\log z$ defined at $z = 0$, let alone in a neighborhood of $0$.

As a value
In 1752, Euler in Introductio in analysin infinitorum wrote that $a^{0} = 1$ and explicitly mentioned that $0^{0} = 1$. An annotation attributed to Mascheroni in a 1787 edition of Euler's book Institutiones calculi differentialis offered the "justification" $$0^0 = (a-a)^{n-n} = \frac{(a-a)^n}{(a-a)^n} = 1$$ as well as another, more involved justification. In the 1830s, Libri published several further arguments attempting to justify the claim $0^{0} = 1$, though these were far from convincing, even by standards of rigor at the time.

As a limiting form
Euler, when setting $0^{0} = 1$, mentioned that consequently the values of the function $0^{x}$ take a "huge jump", from $∞$ for $x < 0$, to $1$ at $x = 0$, to $0$ for $x > 0$. In 1814, Pfaff used a squeeze theorem argument to prove that $x^{x} → 1$ as $x → 0^{+}$.

On the other hand, in 1821 Cauchy explained why the limit of $x^{y}$ as positive numbers $x$ and $y$ approach $0$ while being constrained by some fixed relation could be made to assume any value between $0$ and $∞$ by choosing the relation appropriately. He deduced that the limit of the full two-variable function $x^{y}$ without a specified constraint is "indeterminate". With this justification, he listed $0^{0}$ along with expressions like $0⁄0$ in a table of indeterminate forms.

Apparently unaware of Cauchy's work, Möbius in 1834, building on Pfaff's argument, claimed incorrectly that $f(x)^{g(x)} → 1$ whenever $f(x), g(x) → 0$ as $x$ approaches a number $c$ (presumably $f$ is assumed positive away from $c$). Möbius reduced to the case $c = 0$, but then made the mistake of assuming that each of $f$ and $g$ could be expressed in the form $Px^{n}$ for some continuous function $P$ not vanishing at $0$ and some nonnegative integer $n$, which is true for analytic functions, but not in general. An anonymous commentator pointed out the unjustified step; then another commentator who signed his name simply as "S" provided the explicit counterexamples $(e^{-1/x})^{x} → e^{-1}$ and $(e^{-1/x})^{2x} → e^{-2}$ as $x → 0^{+}$ and expressed the situation by writing that "$0^{0}$ can have many different values".

Current situation

 * Some authors define $0^{0}$ as $1$ because it simplifies many theorem statements. According to Benson (1999), "The choice whether to define $0^{0}$ is based on convenience, not on correctness. If we refrain from defining $0^{0}$, then certain assertions become unnecessarily awkward. ... The consensus is to use the definition $0^{0} = 1$, although there are textbooks that refrain from defining $0^{0}$." Knuth (1992) contends more strongly that $0^{0}$ "has to be $1$"; he draws a distinction between the value $0^{0}$, which should equal $1$, and the limiting form $0^{0}$ (an abbreviation for a limit of $f(t)^{g(t)}$ where $f(t), g(t) → 0$), which is an indeterminate form: "Both Cauchy and Libri were right, but Libri and his defenders did not understand why truth was on their side."
 * Other authors leave $0^{0}$ undefined because $0^{0}$ is an indeterminate form: $f(t), g(t) → 0$ does not imply $f(t)^{g(t)} → 1$.

There do not seem to be any authors assigning $0^{0}$ a specific value other than 1.

IEEE floating-point standard
The IEEE 754-2008 floating-point standard is used in the design of most floating-point libraries. It recommends a number of operations for computing a power:
 * pown (whose exponent is an integer) treats $0^{0}$ as $1$; see the section on discrete exponents above.
 * pow (whose intent is to return a non-NaN result when the exponent is an integer, like pown) treats $0^{0}$ as $1$.
 * powr treats $0^{0}$ as NaN (Not-a-Number) due to the indeterminate form; see the section on continuous exponents above.

The pow variant is inspired by the pow function from C99, mainly for compatibility. It is useful mostly for languages with a single power function. The pown and powr variants have been introduced due to the conflicting usage of the power functions and the different points of view (as stated above).

Programming languages
The C and C++ standards do not specify the result of $0^{0}$ (a domain error may occur). But for C, as of C99, if the normative annex F is supported, the result for real floating-point types is required to be $1$ because there are significant applications for which this value is more useful than NaN (for instance, with discrete exponents); the result on complex types is not specified, even if the informative annex G is supported. The Java standard, the .NET Framework method System.Math.Pow, Julia, and Python also treat $0^{0}$ as $1$. Some languages document that their exponentiation operation corresponds to the pow function from the C mathematical library; this is the case with Lua's ^ operator and Perl's ** operator (where it is explicitly mentioned that the result of 0**0 is platform-dependent).
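Python's behavior, mentioned above, is easy to confirm: both the built-in `**` operator (and its functional form `pow`) and `math.pow`, which follows C's `pow` semantics, return 1 for this expression:

```python
import math

# Built-in exponentiation adopts the 0**0 == 1 convention for both
# integer and floating-point operands.
print(0**0)                # 1
print(0.0**0.0)            # 1.0
print(pow(0, 0))           # 1

# math.pow mirrors C99's pow with annex F semantics: pow(0.0, 0.0) == 1.0.
print(math.pow(0.0, 0.0))  # 1.0
```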

Mathematical and scientific software
R, SageMath, and PARI/GP evaluate $1$ to $0^{0}$. Mathematica simplifies $0^{0}$ to $1$ even if no constraints are placed on $0^{0}$; however, if $1$ is entered directly, it is treated as an error or indeterminate. Mathematica and PARI/GP further distinguish between integer and floating-point values: If the exponent is a zero of integer type, they return a $x0$ of the type of the base; exponentiation with a floating-point exponent of value zero is treated as undefined, indeterminate or error.