Boolean algebras canonically defined


 * Boolean algebras are models of the equational theory of two values; this definition is equivalent to the lattice and ring definitions.

Boolean algebra is a mathematically rich branch of abstract algebra. The Stanford Encyclopedia of Philosophy defines Boolean algebra as 'the algebra of two-valued logic with only sentential connectives, or equivalently of algebras of sets under union and complementation.' Just as group theory deals with groups, and linear algebra with vector spaces, Boolean algebras are models of the equational theory of the two values 0 and 1 (whose interpretation need not be numerical). Common to Boolean algebras, groups, and vector spaces is the notion of an algebraic structure, a set closed under some operations satisfying certain equations.

Just as there are basic examples of groups, such as the group $\mathbb Z$ of integers and the symmetric group $S_{n}$ of permutations of $n$ objects, there are also basic examples of Boolean algebras such as the following.
 * The algebra of binary digits or bits 0 and 1 under the logical operations including disjunction, conjunction, and negation. Applications include the propositional calculus and the theory of digital circuits.
 * The algebra of sets under the set operations including union, intersection, and complement. Applications are far-reaching because set theory is the standard foundation of mathematics.

Boolean algebra thus permits applying the methods of abstract algebra to mathematical logic and digital logic.

Unlike groups of finite order, which exhibit complexity and diversity and whose first-order theory is decidable only in special cases, all finite Boolean algebras share the same theorems and have a decidable first-order theory. Instead, the intricacies of Boolean algebra are divided between the structure of infinite algebras and the algorithmic complexity of their syntactic structure.

Definition
Boolean algebra treats the equational theory of the maximal two-element finitary algebra, called the Boolean prototype, and the models of that theory, called Boolean algebras. These terms are defined as follows.

An algebra is a family of operations on a set, called the underlying set of the algebra. We take the underlying set of the Boolean prototype to be {0,1}.

An algebra is finitary when each of its operations takes only finitely many arguments. For the prototype each argument of an operation is either $0$ or $1$, as is the result of the operation. The maximal such algebra consists of all finitary operations on {0,1}.

The number of arguments taken by each operation is called the arity of the operation. An operation on {0,1} of arity $n$, or $n$-ary operation, can be applied to any of $2^{n}$ possible values for its $n$ arguments. For each choice of arguments, the operation may return $0$ or $1$, whence there are $2^{2^{n}}$ $n$-ary operations.

The prototype therefore has two operations taking no arguments, called zeroary or nullary operations, namely zero and one. It has four unary operations, two of which are constant operations, another is the identity, and the most commonly used one, called negation, returns the opposite of its argument: $1$ if $0$, $0$ if $1$. It has sixteen binary operations; again two of these are constant, another returns its first argument, yet another returns its second, one is called conjunction and returns 1 if both arguments are 1 and otherwise 0, another is called disjunction and returns 0 if both arguments are 0 and otherwise 1, and so on. The number of $(n+1)$-ary operations in the prototype is the square of the number of $n$-ary operations, so there are $16^{2} = 256$ ternary operations, $256^{2} = 65,536$ quaternary operations, and so on.
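The counting above can be checked by brute force. The following sketch (in Python; the helper name `n_ary_operations` is ours) enumerates every $n$-ary operation on {0,1} as a truth table:

```python
from itertools import product

def n_ary_operations(n):
    """All n-ary operations on {0,1}, each represented by its truth table:
    a dict mapping every argument tuple to the operation's result."""
    args = list(product((0, 1), repeat=n))             # 2**n argument tuples
    ops = []
    for outputs in product((0, 1), repeat=len(args)):  # 2**(2**n) output choices
        ops.append(dict(zip(args, outputs)))
    return ops

# Arities 0, 1, 2 yield 2, 4, and 16 operations respectively.
counts = [len(n_ary_operations(n)) for n in (0, 1, 2)]
print(counts)  # [2, 4, 16]
```

Extending the list comprehension to arity 3 yields the 256 ternary operations mentioned above.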

A family is indexed by an index set. In the case of a family of operations forming an algebra, the indices are called operation symbols, constituting the language of that algebra. The operation indexed by each symbol is called the denotation or interpretation of that symbol. Each operation symbol specifies the arity of its interpretation, whence all possible interpretations of a symbol have the same arity. In general it is possible for an algebra to interpret distinct symbols with the same operation, but this is not the case for the prototype, whose symbols are in one-one correspondence with its operations. The prototype therefore has $2^{2^{n}}$ $n$-ary operation symbols, called the Boolean operation symbols and forming the language of Boolean algebra. Only a few operations have conventional symbols, such as $¬$ for negation, $∧$ for conjunction, and $∨$ for disjunction. It is convenient to consider the $i$-th $n$-ary symbol to be $^{n}f_{i}$ as done below in the section on truth tables.

An equational theory in a given language consists of equations between terms built up from variables using symbols of that language. Typical equations in the language of Boolean algebra are $x∧y = y∧x$, $x∧x = x$, $x∧¬x = y∧¬y$, and $x∧y = x$.

An algebra satisfies an equation when the equation holds for all possible values of its variables in that algebra when the operation symbols are interpreted as specified by that algebra. The laws of Boolean algebra are the equations in the language of Boolean algebra satisfied by the prototype. The first three of the above examples are Boolean laws, but not the fourth since $1∧0 ≠ 1$.
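Since the prototype has only finitely many valuations, satisfaction can be decided mechanically. A small Python sketch (the helper name `is_law` is ours) checks the four example equations:

```python
from itertools import product

def is_law(equation, nvars):
    """True when the equation holds in the prototype {0,1}
    under every valuation of its variables."""
    return all(equation(*vals) for vals in product((0, 1), repeat=nvars))

AND = lambda x, y: x & y
NOT = lambda x: 1 - x

print(is_law(lambda x, y: AND(x, y) == AND(y, x), 2))            # True
print(is_law(lambda x: AND(x, x) == x, 1))                       # True
print(is_law(lambda x, y: AND(x, NOT(x)) == AND(y, NOT(y)), 2))  # True
print(is_law(lambda x, y: AND(x, y) == x, 2))                    # False: fails at x=1, y=0
```

The first three equations pass for every valuation and so are Boolean laws; the fourth fails exactly at the valuation $1∧0 ≠ 1$ cited above.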

The equational theory of an algebra is the set of all equations satisfied by the algebra. The laws of Boolean algebra therefore constitute the equational theory of the Boolean prototype.

A model of a theory is an algebra interpreting the operation symbols in the language of the theory and satisfying the equations of the theory.
 * A Boolean algebra is any model of the laws of Boolean algebra.

That is, a Boolean algebra is a set and a family of operations thereon interpreting the Boolean operation symbols and satisfying the same laws as the Boolean prototype.

If we define a homologue of an algebra to be a model of the equational theory of that algebra, then a Boolean algebra can be defined as any homologue of the prototype.

Example 1. The Boolean prototype is a Boolean algebra, since trivially it satisfies its own laws. It is thus the prototypical Boolean algebra. We did not call it that initially in order to avoid any appearance of circularity in the definition.

Basis
The operations need not all be explicitly stated. A basis is any set from which the remaining operations can be obtained by composition. A "Boolean algebra" may be defined from any of several different bases. Three bases for Boolean algebra are in common use, the lattice basis, the ring basis, and the Sheffer stroke or NAND basis. These bases impart respectively a logical, an arithmetical, and a parsimonious character to the subject.
 * The lattice basis originated in the 19th century with the work of Boole, Peirce, and others seeking an algebraic formalization of logical thought processes.
 * The ring basis emerged in the 20th century with the work of Zhegalkin and Stone and became the basis of choice for algebraists coming to the subject from a background in abstract algebra. Most treatments of Boolean algebra assume the lattice basis, a notable exception being Halmos [1963] whose linear algebra background evidently endeared the ring basis to him.
 * Since all finitary operations on {0,1} can be defined in terms of the Sheffer stroke NAND (or its dual NOR), the resulting economical basis has become the basis of choice for analyzing digital circuits, in particular gate arrays in digital electronics.

The common elements of the lattice and ring bases are the constants 0 and 1, and an associative commutative binary operation, called meet $x∧y$ in the lattice basis, and multiplication $xy$ in the ring basis. The distinction is only terminological. The lattice basis has the further operations of join, $x∨y$, and complement, $¬x$. The ring basis has instead the arithmetic operation $x⊕y$ of addition (the symbol $⊕$ is used in preference to $+$ because the latter is sometimes given the Boolean reading of join).

To be a basis is to yield all other operations by composition, whence any two bases must be intertranslatable. The lattice basis translates $x∨y$ to the ring basis as $x⊕y⊕xy$, and $¬x$ as $x⊕1$. Conversely the ring basis translates $x⊕y$ to the lattice basis as $(x∨y)∧¬(x∧y)$.
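These translations can be verified by checking all four valuations, identifying meet with multiplication and $⊕$ with exclusive-or (a quick Python sketch; `NOT` is our helper for $¬$):

```python
def NOT(x): return 1 - x

ok = all(
    (x | y) == x ^ y ^ (x & y)           # x∨y  translates to  x⊕y⊕xy
    and NOT(x) == x ^ 1                  # ¬x   translates to  x⊕1
    and (x ^ y) == (x | y) & NOT(x & y)  # x⊕y  translates to  (x∨y)∧¬(x∧y)
    for x in (0, 1) for y in (0, 1)
)
print(ok)  # True
```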

Both of these bases allow Boolean algebras to be defined via a subset of the equational properties of the Boolean operations. For the lattice basis, it suffices to define a Boolean algebra as a distributive lattice satisfying $x∧¬x = 0$ and $x∨¬x = 1$, called a complemented distributive lattice. The ring basis turns a Boolean algebra into a Boolean ring, namely a ring satisfying $x^{2} = x$.

Emil Post gave a necessary and sufficient condition for a set of operations to be a basis for the nonzeroary Boolean operations. A nontrivial property is one shared by some but not all operations making up a basis. Post listed five nontrivial properties of operations, identifiable with the five Post's classes, each preserved by composition, and showed that a set of operations formed a basis if, for each property, the set contained an operation lacking that property. (The converse of Post's theorem, extending "if" to "if and only if," is the easy observation that a property from among these five holding of every operation in a candidate basis will also hold of every operation formed by composition from that candidate, whence by nontriviality of that property the candidate will fail to be a basis.) Post's five properties are:
 * monotone, no 0-1 input transition can cause a 1-0 output transition;
 * affine, representable with Zhegalkin polynomials that lack bilinear or higher terms, e.g. $x⊕y⊕1$ but not $xy$;
 * self-dual, so that complementing all inputs complements the output, as with $x$, or the median operator $xy⊕yz⊕zx$, or their negations;
 * strict (mapping the all-zeros input to zero);
 * costrict (mapping all-ones to one).

The NAND (dually NOR) operation lacks all five of these properties, thus forming a basis by itself.
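Post's criterion for NAND can be checked directly; the sketch below (our own helper names, with affine tested by comparison against every XOR-of-a-subset-plus-constant polynomial) confirms that NAND lacks each of the five properties:

```python
from itertools import product

def table(f, n):
    return {a: f(*a) for a in product((0, 1), repeat=n)}

def monotone(t):
    return all(t[a] <= t[b] for a in t for b in t
               if all(x <= y for x, y in zip(a, b)))

def affine(t, n):
    # affine = XOR of some subset of the variables, plus a constant
    return any(all(t[a] == (sum(a[i] for i in range(n) if mask >> i & 1) + c) % 2
                   for a in t)
               for mask in range(2 ** n) for c in (0, 1))

def self_dual(t):
    return all(t[tuple(1 - x for x in a)] == 1 - t[a] for a in t)

def strict(t, n):    return t[(0,) * n] == 0
def costrict(t, n):  return t[(1,) * n] == 1

t = table(lambda x, y: 1 - (x & y), 2)   # NAND
props = [monotone(t), affine(t, 2), self_dual(t), strict(t, 2), costrict(t, 2)]
print(props)  # [False, False, False, False, False]
```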

Truth tables
The finitary operations on {0,1} may be exhibited as truth tables, thinking of 0 and 1 as the truth values false and true. They can be laid out in a uniform and application-independent way that allows us to name, or at least number, them individually. These names provide a convenient shorthand for the Boolean operations. The names of the $n$-ary operations are binary numbers of $2^{n}$ bits. There being $2^{2^{n}}$ such operations, one cannot ask for a more succinct nomenclature. Each finitary operation is also known as a switching function.

This layout and associated naming of operations is illustrated here in full for arities from 0 to 2.

These tables continue at higher arities, with $2^{n}$ rows at arity $n$, each row giving a valuation or binding of the $n$ variables $x_{0},...,x_{n−1}$ and each column headed $^{n}f_{i}$ giving the value $^{n}f_{i}(x_{0},...,x_{n−1})$ of the $i$-th $n$-ary operation at that valuation. The operations include the variables, for example $^{1}f_{2}$ is $x_{0}$ while $^{2}f_{10}$ is $x_{0}$ (as two copies of its unary counterpart) and $^{2}f_{12}$ is $x_{1}$ (with no unary counterpart). Negation or complement $¬x_{0}$ appears as $^{1}f_{1}$ and again as $^{2}f_{5}$, along with $^{2}f_{3}$ ($¬x_{1}$, which did not appear at arity 1), disjunction or union $x_{0}∨x_{1}$ as $^{2}f_{14}$, conjunction or intersection $x_{0}∧x_{1}$ as $^{2}f_{8}$, implication $x_{0}→x_{1}$ as $^{2}f_{13}$, exclusive-or or symmetric difference $x_{0}⊕x_{1}$ as $^{2}f_{6}$, set difference $x_{0}−x_{1}$ as $^{2}f_{2}$, and so on.
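These index assignments can be reproduced programmatically. The sketch below (our helper `op_index`, assuming the little-endian row and column ordering described in the next subsection) recomputes several of the indices just listed:

```python
def op_index(f, n):
    """Boole number of an n-ary operation: bit v of the index is the value
    of f at the v-th valuation, rows and columns numbered little-endian."""
    i = 0
    for v in range(2 ** n):
        args = [(v >> k) & 1 for k in range(n)]  # bit k of v is the value of x_k
        i |= f(*args) << v
    return i

assert op_index(lambda x0, x1: x0, 2) == 10              # x0 is 2f10
assert op_index(lambda x0, x1: x1, 2) == 12              # x1 is 2f12
assert op_index(lambda x0, x1: x0 | x1, 2) == 14         # disjunction is 2f14
assert op_index(lambda x0, x1: x0 & x1, 2) == 8          # conjunction is 2f8
assert op_index(lambda x0, x1: (1 - x0) | x1, 2) == 13   # implication is 2f13
assert op_index(lambda x0, x1: x0 ^ x1, 2) == 6          # exclusive-or is 2f6
print("indices match the table")
```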

As a minor detail important more for its form than its content, the operations of an algebra are traditionally organized as a list. Although we are here indexing the operations of a Boolean algebra by the finitary operations on {0,1}, the truth-table presentation above serendipitously orders the operations first by arity and second by the layout of the tables for each arity. This permits organizing the set of all Boolean operations in the traditional list format. The list order for the operations of a given arity is determined by the following two rules.


 * (i) The $i$-th row in the left half of the table is the binary representation of $i$ with its least significant or $0$-th bit on the left ("little-endian" order, originally proposed by Alan Turing, so it would not be unreasonable to call it Turing order).


 * (ii) The $j$-th column in the right half of the table is the binary representation of $j$, again in little-endian order. In effect the subscript of the operation is the truth table of that operation. By analogy with Gödel numbering of computable functions one might call this numbering of the Boolean operations the Boole numbering.

When programming in C or Java, bitwise disjunction is denoted x|y, conjunction x&y, and negation ~x. A program can therefore represent for example the operation $x∧(y∨z)$ in these languages as x&(y|z), having previously set x = 0xaa, y = 0xcc, and z = 0xf0 (the "0x" indicates that the following constant is to be read in hexadecimal or base 16), either by assignment to variables or defined as macros. These one-byte (eight-bit) constants correspond to the columns for the input variables in the extension of the above tables to three variables. This technique is almost universally used in raster graphics hardware to provide a flexible variety of ways of combining and masking images, the typical operations being ternary and acting simultaneously on source, destination, and mask bits.
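The same idiom works in any language with bitwise operators; transcribed into Python, the example reads:

```python
x, y, z = 0xAA, 0xCC, 0xF0    # truth-table columns for the three input variables
result = x & (y | z)          # the ternary operation x∧(y∨z), one byte per column
print(format(result, '08b'))  # 10101000
```

Reading the resulting byte top to bottom (little-endian) gives the column of $x∧(y∨z)$ in the three-variable truth table.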

Bit vectors
Example 2. All bit vectors of a given length form a Boolean algebra "pointwise", meaning that any $n$-ary Boolean operation can be applied to $n$ bit vectors one bit position at a time. For example, the ternary OR of three bit vectors each of length 4 is the bit vector of length 4 formed by ORing the three bits in each of the four bit positions, thus $0100∨1000∨1001 = 1101$. Another example is the truth tables above for the $n$-ary operations, whose columns are all the bit vectors of length $2^{n}$ and which therefore can be combined pointwise whence the $n$-ary operations form a Boolean algebra. This works equally well for bit vectors of finite and infinite length, the only rule being that the bit positions all be indexed by the same set in order that "corresponding position" be well defined.

The atoms of such an algebra are the bit vectors containing exactly one 1. In general the atoms of a Boolean algebra are those nonzero elements $x$ such that, for every element $y$, $x∧y$ is either $x$ or $0$.

Power set algebra
Example 3. The power set algebra, the set $2^{W}$ of all subsets of a given set $W$. This is just Example 2 in disguise, with $W$ serving to index the bit positions. Any subset $X$ of $W$ can be viewed as the bit vector having 1's in just those bit positions indexed by elements of $X$. Thus the all-zero vector is the empty subset of $W$ while the all-ones vector is $W$ itself, these being the constants 0 and 1 respectively of the power set algebra. The counterpart of disjunction $x∨y$ is union $X∪Y$, while that of conjunction $x∧y$ is intersection $X∩Y$. Negation $¬x$ becomes $~X$, complement relative to $W$. There is also set difference $X∖Y = X∩~Y$, symmetric difference $(X∖Y)∪(Y∖X)$, ternary union $X∪Y∪Z$, and so on. The atoms here are the singletons, those subsets with exactly one element.
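The disguise can be exhibited concretely. In the Python sketch below (the names `W`, `FULL`, and `to_bits` are ours), set operations on subsets of a four-element $W$ agree bit-for-bit with the corresponding bitwise operations:

```python
W = ['a', 'b', 'c', 'd']
FULL = 0b1111                      # the bit vector for W itself

def to_bits(X):
    """The bit vector of a subset X of W: bit i is set iff W[i] is in X."""
    return sum(1 << i for i, w in enumerate(W) if w in X)

X, Y = {'a', 'b'}, {'b', 'c'}
assert to_bits(X | Y) == to_bits(X) | to_bits(Y)            # union is disjunction
assert to_bits(X & Y) == to_bits(X) & to_bits(Y)            # intersection is conjunction
assert to_bits(set(W) - X) == to_bits(X) ^ FULL             # complement relative to W
assert to_bits(X - Y) == to_bits(X) & (to_bits(Y) ^ FULL)   # set difference X∖Y = X∩~Y
print("power set algebra matches the bit-vector algebra")
```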

Examples 2 and 3 are special cases of a general construct of algebra called direct product, applicable not just to Boolean algebras but all kinds of algebra including groups, rings, etc. The direct product of any family $B_{i}$ of Boolean algebras where $i$ ranges over some index set $I$ (not necessarily finite or even countable) is a Boolean algebra consisting of all $I$-tuples $(...x_{i},...)$ whose $i$-th element is taken from $B_{i}$. The operations of a direct product are the corresponding operations of the constituent algebras acting within their respective coordinates; in particular operation $^{n}f_{j}$ of the product operates on $n$ $I$-tuples by applying operation $^{n}f_{j}$ of $B_{i}$ to the $n$ elements in the $i$-th coordinate of the $n$ tuples, for all $i$ in $I$.

When all the algebras being multiplied together in this way are the same algebra $A$ we call the direct product a direct power of $A$. The Boolean algebra of all 32-bit bit vectors is the two-element Boolean algebra raised to the 32nd power, or the power set algebra of a 32-element set, denoted $2^{32}$. The Boolean algebra of all sets of integers is $2^{\mathbb Z}$. All Boolean algebras we have exhibited thus far have been direct powers of the two-element Boolean algebra, justifying the name "power set algebra".

Representation theorems
It can be shown that every finite Boolean algebra is isomorphic to some power set algebra. Hence the cardinality (number of elements) of a finite Boolean algebra is a power of $2$, namely one of $1,2,4,8,...,2^{n},...$ This is called a representation theorem as it gives insight into the nature of finite Boolean algebras by giving a representation of them as power set algebras.

This representation theorem does not extend to infinite Boolean algebras: although every power set algebra is a Boolean algebra, not every Boolean algebra need be isomorphic to a power set algebra. In particular, whereas there can be no countably infinite power set algebras (the smallest infinite power set algebra is the power set algebra $2^{N}$ of sets of natural numbers, shown by Cantor to be uncountable), there exist various countably infinite Boolean algebras.

To go beyond power set algebras we need another construct. A subalgebra of an algebra $A$ is any subset of $A$ closed under the operations of $A$. Every subalgebra of a Boolean algebra $A$ must still satisfy the equations holding of $A$, since any violation would constitute a violation for $A$ itself. Hence every subalgebra of a Boolean algebra is a Boolean algebra.

A subalgebra of a power set algebra is called a field of sets; equivalently a field of sets is a set of subsets of some set $W$ including the empty set and $W$ and closed under finite union and complement with respect to $W$ (and hence also under finite intersection). Birkhoff's [1935] representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets. Now Birkhoff's HSP theorem for varieties can be stated as: every model of the equational theory of a class $C$ of algebras is the Homomorphic image of a Subalgebra of a direct Product of algebras of $C$. Normally all three of H, S, and P are needed; what the first of these two Birkhoff theorems shows is that for the special case of the variety of Boolean algebras Homomorphism can be replaced by Isomorphism. Birkhoff's HSP theorem for varieties in general therefore becomes Birkhoff's ISP theorem for the variety of Boolean algebras.

Other examples
It is convenient when talking about a set X of natural numbers to view it as a sequence $x_{0},x_{1},x_{2},...$ of bits, with $x_{i} = 1$ if and only if $i ∈ X$. This viewpoint makes it easier to talk about subalgebras of the power set algebra $2^{N}$, viewed as the Boolean algebra of all sequences of bits. It also fits well with the columns of a truth table: when a column is read from top to bottom it constitutes a sequence of bits, but at the same time it can be viewed as the set of those valuations (assignments to variables in the left half of the table) at which the function represented by that column evaluates to 1.

Example 4. Ultimately constant sequences. Any Boolean combination of ultimately constant sequences is ultimately constant; hence these form a Boolean algebra. We can identify these with the integers by viewing the ultimately-zero sequences as nonnegative binary numerals (bit $0$ of the sequence being the low-order bit) and the ultimately-one sequences as negative binary numerals (think two's complement arithmetic with the all-ones sequence being $−1$). This makes the integers a Boolean algebra, with union being bit-wise OR and complement being $−x−1$. There are only countably many integers, so this infinite Boolean algebra is countable. The atoms are the powers of two, namely 1,2,4,.... Another way of describing this algebra is as the set of all finite and cofinite sets of natural numbers, with the ultimately all-ones sequences corresponding to the cofinite sets, those sets omitting only finitely many natural numbers.
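Two's complement makes this identification executable as ordinary integer arithmetic; a short sketch (the helper names `union` and `complement` are ours):

```python
# Integers as ultimately constant bit sequences: nonnegative integers are the
# ultimately-zero sequences, negative integers the ultimately-one sequences.
def union(x, y):    return x | y       # bitwise OR
def complement(x):  return -x - 1      # same as ~x in two's complement

assert complement(0) == -1             # complement of the empty set is all of N
assert complement(5) == -6             # -x-1 flips every bit of x
assert union(1, 4) == 5                # atoms 2**0 and 2**2 union to the set {0, 2}
assert complement(union(1, 2)) == -4   # a cofinite set: everything but 0 and 1
print("integers form a Boolean algebra under |, &, and -x-1")
```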

Example 5. Periodic sequences. A sequence is called periodic when there exists some number $n > 0$, called a witness to periodicity, such that $x_{i} = x_{i+n}$ for all $i ≥ 0$. The period of a periodic sequence is its least witness. Negation leaves period unchanged, while the disjunction of two periodic sequences is periodic, with period at most the least common multiple of the periods of the two arguments (the period can be as small as $1$, as happens with the union of any sequence and its complement). Hence the periodic sequences form a Boolean algebra.
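The closure argument can be illustrated with finite representations of periodic sequences (the helper `value` is ours):

```python
from math import lcm

def value(period_bits, i):
    """Bit i of the periodic sequence that repeats period_bits forever."""
    return period_bits[i % len(period_bits)]

a, b = [0, 1], [0, 1, 1]               # sequences of period 2 and 3
n = lcm(len(a), len(b))                # 6 is a witness to periodicity of a∨b
union = [value(a, i) | value(b, i) for i in range(n)]
# the disjunction repeats with period dividing lcm(2, 3) = 6
assert all(value(union, i) == value(a, i) | value(b, i) for i in range(100))
print(union)  # [0, 1, 1, 1, 1, 1]
```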

Example 5 resembles Example 4 in being countable, but differs in being atomless. The latter is because the conjunction of any nonzero periodic sequence $x$ with a sequence of coprime period (greater than 1) is neither $0$ nor $x$. It can be shown that all countably infinite atomless Boolean algebras are isomorphic, that is, up to isomorphism there is only one such algebra.

Example 6. Periodic sequences with period a power of two. This is a proper subalgebra of Example 5 (proper meaning that it is not the whole of Example 5). These can be understood as the finitary operations, with the first period of such a sequence giving the truth table of the operation it represents. For example, the truth table of $x_{0}$ in the table of binary operations, namely $^{2}f_{10}$, has period $2$ (and so can be recognized as using only the first variable) even though 12 of the binary operations have period $4$. When the period is $2^{n}$ the operation only depends on the first $n$ variables, the sense in which the operation is finitary. This example is also a countably infinite atomless Boolean algebra. Hence Example 5 is isomorphic to a proper subalgebra of itself! Example 6, and hence Example 5, constitutes the free Boolean algebra on countably many generators, meaning the Boolean algebra of all finitary operations on a countably infinite set of generators or variables.

Example 7. Ultimately periodic sequences, sequences that become periodic after an initial finite bout of lawlessness. They constitute a proper extension of Example 5 (meaning that Example 5 is a proper subalgebra of Example 7) and also of Example 4, since constant sequences are periodic with period one. Sequences may vary as to when they settle down, but any finite set of sequences will all eventually settle down no later than their slowest-to-settle member, whence ultimately periodic sequences are closed under all Boolean operations and so form a Boolean algebra. This example has the same atoms and coatoms as Example 4, whence it is not atomless and therefore not isomorphic to Example 5/6. However it contains an infinite atomless subalgebra, namely Example 5, and so is not isomorphic to Example 4, every subalgebra of which must be a Boolean algebra of finite sets and their complements and therefore atomic. This example is isomorphic to the direct product of Examples 4 and 5, furnishing another description of it.

Example 8. The direct product of the algebra of periodic sequences (Example 5) with any finite but nontrivial Boolean algebra. (The trivial one-element Boolean algebra is the unique finite atomless Boolean algebra.) This resembles Example 7 in having both atoms and an atomless subalgebra, but differs in having only finitely many atoms. Example 8 is in fact an infinite family of examples, one for each possible finite number of atoms.

These examples by no means exhaust the possible Boolean algebras, even the countable ones. Indeed, there are uncountably many nonisomorphic countable Boolean algebras, which Jussi Ketonen [1978] classified completely in terms of invariants representable by certain hereditarily countable sets.

Boolean algebras of Boolean operations
The $n$-ary Boolean operations themselves constitute a power set algebra $2^{W}$, namely when $W$ is taken to be the set of $2^{n}$ valuations of the $n$ inputs. In terms of the naming system of operations $^{n}f_{i}$ where $i$ in binary is a column of a truth table, the columns can be combined with Boolean operations of any arity to produce other columns present in the table. That is, we can apply any Boolean operation of arity $m$ to $m$ Boolean operations of arity $n$ to yield a Boolean operation of arity $n$, for any $m$ and $n$.

The practical significance of this convention for both software and hardware is that $n$-ary Boolean operations can be represented as words of the appropriate length. For example, each of the 256 ternary Boolean operations can be represented as an unsigned byte. The available logical operations such as AND and OR can then be used to form new operations. If we take $x$, $y$, and $z$ (dispensing with subscripted variables for now) to be $10101010$, $11001100$, and $11110000$ respectively (170, 204, and 240 in decimal, $0xaa$, $0xcc$, and $0xf0$ in hexadecimal), their pairwise conjunctions are $x∧y = 10001000$, $y∧z = 11000000$, and $z∧x = 10100000$, while their pairwise disjunctions are $x∨y = 11101110$, $y∨z = 11111100$, and $z∨x = 11111010$. The disjunction of the three conjunctions is $11101000$, which also happens to be the conjunction of three disjunctions. We have thus calculated, with a dozen or so logical operations on bytes, that the two ternary operations
 * $$(x \land y)\lor (y\land z)\lor (z\land x)$$

and
 * $$(x\lor y)\land (y\lor z)\land (z\lor x)$$

are actually the same operation. That is, we have proved the equational identity
 * $$(x\land y)\lor (y\land z)\lor (z\land x) = (x\lor y)\land (y\lor z)\land (z\lor x)$$,

for the two-element Boolean algebra. By the definition of "Boolean algebra" this identity must therefore hold in every Boolean algebra.
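The byte calculation above is easily rechecked (Python here, but the C or Java bitwise operators behave identically):

```python
x, y, z = 0xAA, 0xCC, 0xF0                 # the three input columns

dis_of_con = (x & y) | (y & z) | (z & x)   # (x∧y)∨(y∧z)∨(z∧x)
con_of_dis = (x | y) & (y | z) & (z | x)   # (x∨y)∧(y∨z)∧(z∨x)

assert dis_of_con == con_of_dis == 0b11101000
print(format(dis_of_con, '08b'))  # 11101000
```

Since the eight bits of each byte run over all eight valuations of the three variables, the single equality of bytes is an exhaustive proof of the identity for the two-element algebra.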

This ternary operation incidentally formed the basis for Grau's [1947] ternary Boolean algebras, which he axiomatized in terms of this operation and negation. The operation is symmetric, meaning that its value is independent of any of the $3! = 6$ permutations of its arguments. The two halves of its truth table $11101000$ are the truth tables for $∨$, $1110$, and $∧$, $1000$, so the operation can be phrased as if $z$ then $x∨y$ else $x∧y$. Since it is symmetric it can equally well be phrased as either of if $x$ then $y∨z$ else $y∧z$, or if $y$ then $z∨x$ else $z∧x$. Viewed as a labeling of the 8-vertex 3-cube, the upper half is labeled 1 and the lower half 0; for this reason it has been called the median operator, with the evident generalization to any odd number of variables (odd in order to avoid the tie when exactly half the variables are 0).

Axiomatizing Boolean algebras
The technique we just used to prove an identity of Boolean algebra can be generalized to all identities in a systematic way that can be taken as a sound and complete axiomatization of, or axiomatic system for, the equational laws of Boolean logic. The customary formulation of an axiom system consists of a set of axioms that "prime the pump" with some initial identities, along with a set of inference rules for inferring the remaining identities from the axioms and previously proved identities. In principle it is desirable to have finitely many axioms; however, as a practical matter this is not necessary, since it is just as effective to have a finite axiom schema having infinitely many instances, each of which, when used in a proof, can readily be verified to be a legal instance; this is the approach we follow here.

Boolean identities are assertions of the form $s = t$ where $s$ and $t$ are $n$-ary terms, by which we shall mean here terms whose variables are limited to $x_{0}$ through $x_{n-1}$. An $n$-ary term is either an atom or an application. An application $^{m}f_{i}(t_{0},...,t_{m-1})$ is a pair consisting of an $m$-ary operation $^{m}f_{i}$ and a list or $m$-tuple $(t_{0},...,t_{m-1})$ of $m$ $n$-ary terms called operands.

Associated with every term is a natural number called its height. Atoms are of zero height, while applications are of height one plus the height of their highest operand.

Now what is an atom? Conventionally an atom is either a constant (0 or 1) or a variable $x_{i}$ where $0 ≤ i < n$. For the proof technique here it is convenient to define atoms instead to be $n$-ary operations $^{n}f_{i}$, which although treated here as atoms nevertheless mean the same as ordinary terms of the exact form $^{n}f_{i}(x_{0},...,x_{n-1})$ (exact in that the variables must be listed in the order shown without repetition or omission). This is not a restriction because atoms of this form include all the ordinary atoms, namely the constants 0 and 1, which arise here as the $n$-ary operations $^{n}f_{0}$ and $^{n}f_{−1}$ for each $n$ (abbreviating $2^{2^{n}}−1$ to $−1$), and the variables $x_{0},...,x_{n-1}$ as can be seen from the truth tables where $x_{0}$ appears as both the unary operation $^{1}f_{2}$ and the binary operation $^{2}f_{10}$ while $x_{1}$ appears as $^{2}f_{12}$.

The following axiom schema and three inference rules axiomatize the Boolean algebra of n-ary terms.
 * A1. $^{m}f_{i}(^{n}f_{j_{0}},...,^{n}f_{j_{m-1}}) = {}^{n}f_{i∘ĵ}$ where $(i∘ĵ)_{v} = i_{ĵ_{v}}$, with $ĵ$ being the transpose of $j$, defined by $(ĵ_{v})_{u} = (j_{u})_{v}$.
 * R1. With no premises infer $t = t$.
 * R2. From $s = u$ and $t = u$ infer $s = t$ where $s$, $t$, and $u$ are $n$-ary terms.
 * R3. From $s_{0} = t_{0}, ... , s_{m-1} = t_{m-1}$ infer $^{m}f_{i}(s_{0},...,s_{m-1}) = ^{m}f_{i}(t_{0},...,t_{m-1})$, where all terms $s_{i}, t_{i}$ are $n$-ary.

The meaning of the side condition on A1 is that $i∘ĵ$ is that $2^{n}$-bit number whose $v$-th bit is the $ĵ_{v}$-th bit of $i$, where the ranges of each quantity are $u: m$, $v: 2^{n}$, $j_{u}: 2^{2^{n}}$, and $ĵ_{v}: 2^{m}$. (So $j$ is an $m$-tuple of $2^{n}$-bit numbers while $ĵ$ as the transpose of $j$ is a $2^{n}$-tuple of $m$-bit numbers. Both $j$ and $ĵ$ therefore contain $m2^{n}$ bits.)

A1 is an axiom schema rather than an axiom by virtue of containing metavariables, namely $m$, $i$, $n$, and $j_{0}$ through $j_{m-1}$. The actual axioms of the axiomatization are obtained by setting the metavariables to specific values. For example, if we take $m = n = i = j_{0} = 1$, we can compute the two bits of $i∘ĵ$ from $i_{1} = 0$ and $i_{0} = 1$, so $i∘ĵ = 2$ (or $10$ when written as a two-bit number). The resulting instance, namely $^{1}f_{1}(^{1}f_{1}) = {}^{1}f_{2}$, expresses the familiar axiom $¬¬x = x$ of double negation. Rule R3 then allows us to infer $¬¬¬x = ¬x$ by taking $s_{0}$ to be $^{1}f_{1}(^{1}f_{1})$ or $¬¬x_{0}$, $t_{0}$ to be $^{1}f_{2}$ or $x_{0}$, and $^{m}f_{i}$ to be $^{1}f_{1}$ or $¬$.
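Schema A1 is itself directly computable. The sketch below (our function `compose`) evaluates $i∘ĵ$ from truth-table indices and recovers the double-negation instance just derived:

```python
def compose(m, n, i, js):
    """Axiom schema A1 as a computation: the index of the n-ary operation
    obtained by applying the m-ary operation i to the n-ary operations js."""
    r = 0
    for v in range(2 ** n):            # each valuation of x0,...,x(n-1)
        jv = 0                         # the v-th entry of the transpose ĵ
        for u in range(m):
            jv |= ((js[u] >> v) & 1) << u
        r |= ((i >> jv) & 1) << v      # bit ĵ_v of i becomes bit v of the result
    return r

assert compose(1, 1, 1, [1]) == 2       # ¬¬x = x: 1f1(1f1) = 1f2
assert compose(2, 2, 8, [10, 12]) == 8  # ∧ applied to x0, x1 recovers 2f8
assert compose(2, 2, 8, [12, 10]) == 8  # and commutativity of ∧ falls out
print("A1 instances computed")
```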

For each $m$ and $n$ there are only finitely many axioms instantiating A1, namely $2^{2^{m}} × (2^{2^{n}})^{m}$. Each instance is specified by $2^{m}+m2^{n}$ bits.

We treat R1 as an inference rule, even though it is like an axiom in having no premises, because it is a domain-independent rule along with R2 and R3 common to all equational axiomatizations, whether of groups, rings, or any other variety. The only entity specific to Boolean algebras is axiom schema A1. In this way when talking about different equational theories we can push the rules to one side as being independent of the particular theories, and confine attention to the axioms as the only part of the axiom system characterizing the particular equational theory at hand.

This axiomatization is complete, meaning that every Boolean law $s = t$ is provable in this system. One first shows by induction on the height of $s$ that every Boolean law for which $t$ is atomic is provable, using R1 for the base case (since distinct atoms are never equal) and A1 and R3 for the induction step ($s$ an application). This proof strategy amounts to a recursive procedure for evaluating $s$ to yield an atom. Then to prove $s = t$ in the general case when $t$ may be an application, use the fact that if $s = t$ is an identity then $s$ and $t$ must evaluate to the same atom, call it $u$. So first prove $s = u$ and $t = u$ as above, that is, evaluate $s$ and $t$ using A1, R1, and R3, and then invoke R2 to infer $s = t$.

In A1, if we view the number $n^{m}$ as the function type $m→n$, and $m_{n}$ as the application $m(n)$, we can reinterpret the numbers $i$, $j$, $ĵ$, and $i∘ĵ$ as functions of type $i: (m→2)→2$, $j: m→((n→2)→2)$, $ĵ: (n→2)→(m→2)$, and $i∘ĵ: (n→2)→2$. The definition $(i∘ĵ)_{v} = i_{ĵ_{v}}$ in A1 then translates to $(i∘ĵ)(v) = i(ĵ(v))$, that is, $i∘ĵ$ is defined to be the composition of $i$ and $ĵ$ understood as functions. So the content of A1 amounts to defining term application to be essentially composition, modulo the need to transpose the $m$-tuple $j$ to make the types match up suitably for composition. This composition is the one in Lawvere's previously mentioned category of power sets and their functions. In this way we have translated the commuting diagrams of that category, as the equational theory of Boolean algebras, into the equational consequences of A1 as the logical representation of that particular composition law.

Underlying lattice structure
Underlying every Boolean algebra $B$ is a partially ordered set or poset $(B,≤)$. The partial order relation is defined by $x ≤ y$ just when $x = x∧y$, or equivalently when $y = x∨y$. Given a set $X$ of elements of a Boolean algebra, an upper bound on $X$ is an element $y$ such that for every element $x$ of $X$, $x ≤ y$, while a lower bound on $X$ is an element $y$ such that for every element $x$ of $X$, $y ≤ x$.

A sup of $X$ is a least upper bound on $X$, namely an upper bound on $X$ that is less than or equal to every upper bound on $X$. Dually an inf of $X$ is a greatest lower bound on $X$. The sup of $x$ and $y$ always exists in the underlying poset of a Boolean algebra, being $x∨y$, and likewise their inf exists, namely $x∧y$. The empty sup is 0 (the bottom element) and the empty inf is 1 (the top element). It follows that every finite set has both a sup and an inf. Infinite subsets of a Boolean algebra may or may not have a sup and/or an inf; in a power set algebra they always do.
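In a power set algebra these bounds are concrete: the order is inclusion, sup is union, and inf is intersection. A minimal sketch (names ours):

```python
# In the power set algebra of X, x ≤ y iff x = x∧y, i.e. x = x∩y,
# which is just inclusion; the sup of a family is its union and the
# inf its intersection, existing even for the empty family.

X = frozenset({1, 2, 3})

def leq(x: frozenset, y: frozenset) -> bool:
    return x == x & y                    # x ≤ y  iff  x = x∧y

def sup(family) -> frozenset:
    s = frozenset()                      # empty sup is 0 (bottom)
    for x in family:
        s = s | x
    return s

def inf(family) -> frozenset:
    s = X                                # empty inf is 1 (top)
    for x in family:
        s = s & x
    return s

assert sup([]) == frozenset() and inf([]) == X
assert sup([frozenset({1}), frozenset({2})]) == frozenset({1, 2})
```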

Any poset $(B,≤)$ such that every pair $x,y$ of elements has both a sup and an inf is called a lattice. We write $x∨y$ for the sup and $x∧y$ for the inf. The underlying poset of a Boolean algebra always forms a lattice. The lattice is said to be distributive when $x∧(y∨z) = (x∧y)∨(x∧z)$, or equivalently when $x∨(y∧z) = (x∨y)∧(x∨z)$, since either law implies the other in a lattice. These are laws of Boolean algebra whence the underlying poset of a Boolean algebra forms a distributive lattice.

Given a lattice with a bottom element 0 and a top element 1, a pair $x,y$ of elements is called complementary when $x∧y = 0$ and $x∨y = 1$, and we then say that $y$ is a complement of $x$ and vice versa. Any element $x$ of a distributive lattice with top and bottom can have at most one complement. When every element of a lattice has a complement the lattice is called complemented. It follows that in a complemented distributive lattice, the complement of an element always exists and is unique, making complement a unary operation. Furthermore, every complemented distributive lattice forms a Boolean algebra, and conversely every Boolean algebra forms a complemented distributive lattice. This provides an alternative definition of a Boolean algebra, namely as any complemented distributive lattice. Each of the three properties of being a lattice, being distributive, and being complemented can be axiomatized with finitely many equations, whence these equations taken together constitute a finite axiomatization of the equational theory of Boolean algebras.
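A complemented distributive lattice need not be presented as an algebra of sets; a standard example is the divisors of a squarefree number under gcd and lcm. A brute-force check of complement uniqueness (this example is ours, not from the text):

```python
from math import gcd

# The divisors of a squarefree number such as 30 form a lattice under
# gcd (meet) and lcm (join): it is distributive and complemented, and
# hence a Boolean algebra, isomorphic to the power set of {2, 3, 5}.

divisors = [d for d in range(1, 31) if 30 % d == 0]

def meet(x: int, y: int) -> int:
    return gcd(x, y)

def join(x: int, y: int) -> int:
    return x * y // gcd(x, y)

def complements(x: int) -> list[int]:
    # y complements x when x∧y is bottom (1) and x∨y is top (30)
    return [y for y in divisors if meet(x, y) == 1 and join(x, y) == 30]

# In a distributive lattice with top and bottom each element has at
# most one complement; here every element has exactly one, making the
# lattice complemented.
assert all(len(complements(x)) == 1 for x in divisors)
```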

In a class of algebras defined as all the models of a set of equations, it is usually the case that some algebras of the class satisfy more equations than just those needed to qualify them for the class. The class of Boolean algebras is unusual in that, with a single exception, every Boolean algebra satisfies exactly the Boolean identities and no more. The exception is the one-element Boolean algebra, which necessarily satisfies every equation, even $x = y$, and is therefore sometimes referred to as the inconsistent Boolean algebra.

Boolean homomorphisms
A Boolean homomorphism is a function $h: A→B$ between Boolean algebras $A,B$ such that for every Boolean operation $^{m}f_{i}$:
 * $$h({}^m\!f_i(x_0,...,x_{m-1})) = {}^m\!f_i(h(x_0),...,h(x_{m-1}))$$

The category Bool of Boolean algebras has as objects all Boolean algebras and as morphisms the Boolean homomorphisms between them.

There exists a unique homomorphism from the two-element Boolean algebra 2 to every Boolean algebra, since homomorphisms must preserve the two constants and those are the only elements of 2. A Boolean algebra with this property is called an initial Boolean algebra. It can be shown that any two initial Boolean algebras are isomorphic, so up to isomorphism 2 is the initial Boolean algebra.

In the other direction, there may exist many homomorphisms from a Boolean algebra $B$ to 2. Any such homomorphism partitions $B$ into those elements mapped to 1 and those mapped to 0. The subset of $B$ consisting of the former is called an ultrafilter of $B$. When $B$ is finite its ultrafilters pair up with its atoms; one atom is mapped to 1 and the rest to 0. Each ultrafilter of $B$ thus consists of an atom of $B$ and all the elements above it; hence exactly half the elements of $B$ are in the ultrafilter, and there are as many ultrafilters as atoms.
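For a small power set algebra the homomorphisms to 2 can be found by brute force, confirming the pairing with atoms (a sketch; names ours):

```python
from itertools import combinations, product

# Enumerate all homomorphisms h: 2^S → 2 for S = {0, 1, 2}.  It
# suffices to check that h preserves 0, 1, meet and join; in a
# Boolean algebra this forces preservation of complement as well.

S = (0, 1, 2)
elements = [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

def is_hom(h) -> bool:
    if h[frozenset()] != 0 or h[frozenset(S)] != 1:
        return False
    return all(h[x & y] == h[x] & h[y] and h[x | y] == h[x] | h[y]
               for x in elements for y in elements)

homs = [dict(zip(elements, bits)) for bits in
        product((0, 1), repeat=len(elements))]
homs = [h for h in homs if is_hom(h)]

# One homomorphism per atom, and each ultrafilter (the preimage of 1)
# contains exactly half of the algebra's elements.
assert len(homs) == len(S)
assert all(sum(h.values()) == len(elements) // 2 for h in homs)
```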

For infinite Boolean algebras the notion of ultrafilter becomes considerably more delicate. The elements greater than or equal to an atom always form an ultrafilter, but so do many other sets; for example, in the Boolean algebra of finite and cofinite sets of integers, the cofinite sets form an ultrafilter even though none of them are atoms. Likewise, the powerset of the integers has among its ultrafilters the set of all subsets containing a given integer; there are countably many of these "standard" ultrafilters, which may be identified with the integers themselves, but there are uncountably many more "nonstandard" ultrafilters. These form the basis for nonstandard analysis, providing representations for such classically inconsistent objects as infinitesimals and delta functions.

Infinitary extensions
Recall the definition of sup and inf from the section above on the underlying partial order of a Boolean algebra. A complete Boolean algebra is one every subset of which has both a sup and an inf, even the infinite subsets. Gaifman [1964] and Hales [1964] independently showed that infinite free complete Boolean algebras do not exist. This suggests that a logic with set-sized-infinitary operations may have class-many terms, just as a logic with finitary operations may have infinitely many terms.

There is however another approach to introducing infinitary Boolean operations: simply drop "finitary" from the definition of Boolean algebra. A model of the equational theory of the algebra of all operations on {0,1} of arity up to the cardinality of the model is called a complete atomic Boolean algebra, or CABA. (In place of this awkward restriction on arity we could allow any arity, leading to a different awkwardness, that the signature would then be larger than any set, that is, a proper class. One benefit of the latter approach is that it simplifies the definition of homomorphism between CABAs of different cardinality.) Such an algebra can be defined equivalently as a complete Boolean algebra that is atomic, meaning that every element is a sup of some set of atoms. Free CABAs exist for all cardinalities of a set $V$ of generators, namely the power set algebra $2^{2 ^{V} }$, this being the obvious generalization of the finite free Boolean algebras. This neatly rescues infinitary Boolean logic from the fate the Gaifman–Hales result seemed to consign it to.

The nonexistence of free complete Boolean algebras can be traced to failure to extend the equations of Boolean logic suitably to all laws that should hold for infinitary conjunction and disjunction, in particular the neglect of distributivity in the definition of complete Boolean algebra. A complete Boolean algebra is called completely distributive when arbitrary conjunctions distribute over arbitrary disjunctions and vice versa. A Boolean algebra is a CABA if and only if it is complete and completely distributive, giving a third definition of CABA. A fourth definition is as any Boolean algebra isomorphic to a power set algebra.

A complete homomorphism is one that preserves all sups that exist, not just the finite sups, and likewise for infs. The category CABA of all CABAs and their complete homomorphisms is dual to the category of sets and their functions, meaning that it is equivalent to the opposite of that category (the category resulting from reversing all morphisms). Things are not so simple for the category Bool of Boolean algebras and their homomorphisms, which Marshall Stone showed in effect (though he lacked both the language and the conceptual framework to make the duality explicit) to be dual to the category of totally disconnected compact Hausdorff spaces, subsequently called Stone spaces.

Another infinitary class intermediate between Boolean algebras and complete Boolean algebras is the notion of a sigma-algebra. This is defined analogously to complete Boolean algebras, but with sups and infs limited to countable arity. That is, a sigma-algebra is a Boolean algebra with all countable sups and infs. Because the sups and infs are of bounded cardinality, unlike the situation with complete Boolean algebras, the Gaifman–Hales result does not apply and free sigma-algebras do exist. Unlike the situation with CABAs, however, the free countably generated sigma-algebra is not a power set algebra.

Other definitions of Boolean algebra
We have already encountered several definitions of Boolean algebra: as a model of the equational theory of the two-element algebra, as a complemented distributive lattice, as a Boolean ring, and as a product-preserving functor from a certain category (Lawvere). Two more definitions worth mentioning are:


 * Stone (1936): A Boolean algebra is the set of all clopen sets of a topological space. It is no limitation to require the space to be a totally disconnected compact Hausdorff space, or Stone space, that is, every Boolean algebra arises in this way, up to isomorphism. Moreover, if the two Boolean algebras formed as the clopen sets of two Stone spaces are isomorphic, so are the Stone spaces themselves, which is not the case for arbitrary topological spaces. This is just the reverse direction of the duality mentioned earlier from Boolean algebras to Stone spaces. This definition is fleshed out by the next definition.


 * Johnstone (1982): A Boolean algebra is a filtered colimit of finite Boolean algebras.

(The circularity in this definition can be removed by replacing "finite Boolean algebra" by "finite power set" equipped with the Boolean operations standardly interpreted for power sets.)

To put this in perspective, infinite sets arise as filtered colimits of finite sets, infinite CABAs as filtered limits of finite power set algebras, and infinite Stone spaces as filtered limits of finite sets. Thus if one starts with the finite sets and asks how these generalize to infinite objects, there are two ways: "adding" them gives ordinary or inductive sets while "multiplying" them gives Stone spaces or profinite sets. The same choice exists for finite power set algebras as the duals of finite sets: addition yields Boolean algebras as inductive objects while multiplication yields CABAs or power set algebras as profinite objects.

A characteristic distinguishing feature is that the underlying topology of objects so constructed, when defined so as to be Hausdorff, is discrete for inductive objects and compact for profinite objects. The topology of finite Hausdorff spaces is always both discrete and compact, whereas for infinite spaces "discrete" and "compact" are mutually exclusive. Thus when generalizing finite algebras (of any kind, not just Boolean) to infinite ones, "discrete" and "compact" part company, and one must choose which to retain. The general rule, for both finite and infinite algebras, is that finitary algebras are discrete, whereas their duals are compact and feature infinitary operations. Between these two extremes are many intermediate infinite Boolean algebras whose topology is neither discrete nor compact.