Fold (higher-order function)

In functional programming, fold (also termed reduce, accumulate, aggregate, compress, or inject) refers to a family of higher-order functions that analyze a recursive data structure and through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure's hierarchy, using the function in a systematic way.

Folds are in a sense dual to unfolds, which take a seed value and apply a function corecursively to decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on its terminal values and the recursive results (catamorphism, versus anamorphism of unfolds).

As structural transformations
Folds can be regarded as consistently replacing the structural components of a data structure with functions and values. Lists, for example, are built up in many functional languages from two primitives: any list is either an empty list, commonly called nil, or is constructed by prefixing an element in front of another list, creating what is called a cons node, resulting from application of a cons function (written down as a colon (:) in Haskell). One can view a fold on lists as replacing the nil at the end of the list with a specific value, and replacing each cons with a specific function. These replacements can be viewed as a diagram:



There's another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function:



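Written out in Haskell, the two replacement schemes can be sketched directly (using the standard Prelude folds; subtraction is used here only because it makes the grouping visible):

```haskell
-- right fold: each (:) becomes f, the final [] becomes z:
--   1 : (2 : (3 : []))   ~>   f 1 (f 2 (f 3 z))
-- left fold: the links are combined from the other end, flipped:
--   1 : (2 : (3 : []))   ~>   f (f (f z 1) 2) 3
rightGrouping :: Int
rightGrouping = foldr (-) 0 [1, 2, 3]   -- 1 - (2 - (3 - 0)) = 2

leftGrouping :: Int
leftGrouping = foldl (-) 0 [1, 2, 3]    -- ((0 - 1) - 2) - 3 = -6
```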
These pictures illustrate right and left fold of a list visually. They also highlight the fact that foldr (:) [] is the identity function on lists (a shallow copy in Lisp parlance), as replacing cons with cons and nil with nil will not change the result. The left fold diagram suggests an easy way to reverse a list, foldl (flip (:)) []. Note that the parameters to cons must be flipped, because the element to add is now the right hand parameter of the combining function. Another easy result to see from this vantage-point is to write the higher-order map function in terms of foldr, by composing the function to act on the elements with cons, as:

map f = foldr ((:) . f) []
where the period (.) is an operator denoting function composition.
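A minimal sketch of both observations, shadowing the Prelude names for clarity:

```haskell
import Prelude hiding (map, reverse)

-- map as a right fold: rebuild the list, transforming each element
map :: (a -> b) -> [a] -> [b]
map f = foldr ((:) . f) []

-- reverse as a left fold: cons with its arguments flipped
reverse :: [a] -> [a]
reverse = foldl (flip (:)) []
```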

This way of looking at things provides a simple route to designing fold-like functions on other algebraic data types and structures, like various sorts of trees. One writes a function which recursively replaces the constructors of the datatype with provided functions, and any constant values of the type with provided values. Such a function is generally referred to as a catamorphism.
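For instance, a catamorphism for a simple binary tree type replaces its two constructors with a supplied function and value (Tree, foldTree, and sumTree are illustrative names introduced here, not taken from the text):

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- replace every Node with f and every Leaf with z, recursively
foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
foldTree _ z Leaf         = z
foldTree f z (Node l x r) = f (foldTree f z l) x (foldTree f z r)

-- e.g., summing the elements of a tree:
sumTree :: Num a => Tree a -> a
sumTree = foldTree (\l x r -> l + x + r) 0
```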

On lists
The folding of the list [1,2,3,4,5] with the addition operator would result in 15, the sum of the elements of the list. To a rough approximation, one can think of this fold as replacing the commas in the list with the + operation, giving 1 + 2 + 3 + 4 + 5.

In the example above, + is an associative operation, so the final result will be the same regardless of parenthesization, although the specific way in which it is calculated will be different. In the general case of non-associative binary functions, the order in which the elements are combined may influence the final result's value. On lists, there are two obvious ways to carry this out: either by combining the first element with the result of recursively combining the rest (called a right fold), or by combining the result of recursively combining all elements but the last one, with the last element (called a left fold). This corresponds to a binary operator being either right-associative or left-associative, in Haskell's or Prolog's terminology. With a right fold, the sum would be parenthesized as 1 + (2 + (3 + (4 + 5))), whereas with a left fold it would be parenthesized as (((1 + 2) + 3) + 4) + 5.

In practice, it is convenient and natural to have an initial value which in the case of a right fold is used when one reaches the end of the list, and in the case of a left fold is what is initially combined with the first element of the list. In the example above, the value 0 (the additive identity) would be chosen as an initial value, giving 1 + (2 + (3 + (4 + (5 + 0)))) for the right fold, and ((((0 + 1) + 2) + 3) + 4) + 5 for the left fold. For multiplication, an initial choice of 0 wouldn't work: 0 * 1 * 2 * 3 * 4 * 5 = 0. The identity element for multiplication is 1. This would give us the outcome 1 * 1 * 2 * 3 * 4 * 5 = 120 = 5!.
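With the Prelude folds, these choices read:

```haskell
-- sum with the additive identity as initial value, right and left
sumR, sumL :: [Int] -> Int
sumR = foldr (+) 0   -- 1 + (2 + (3 + (4 + (5 + 0))))
sumL = foldl (+) 0   -- ((((0 + 1) + 2) + 3) + 4) + 5

-- product needs the multiplicative identity 1 instead
productR :: [Int] -> Int
productR = foldr (*) 1
```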

Linear vs. tree-like folds
The use of an initial value is necessary when the combining function f is asymmetrical in its types (e.g., of type a → b → b, or b → a → b), i.e. when the type of its result is different from the type of the list's elements. Then an initial value must be used, with the same type as that of f's result, for a linear chain of applications to be possible. Whether it will be left- or right-oriented will be determined by the types expected of its arguments by the combining function. If it is the second argument that must be of the same type as the result, then f could be seen as a binary operation that associates on the right, and vice versa.

When the function is a magma, i.e. symmetrical in its types (a → a → a), and the result type is the same as the list elements' type, the parentheses may be placed in arbitrary fashion, thus creating a binary tree of nested sub-expressions, e.g., ((1 + 2) + (3 + 4)) + 5. If the binary operation f is associative this value will be well-defined, i.e., the same for any parenthesization, although the operational details of how it is calculated will be different. This can have significant impact on efficiency if f is non-strict.

Whereas linear folds are node-oriented and operate in a consistent manner for each node of a list, tree-like folds are whole-list oriented and operate in a consistent manner across groups of nodes.

Special folds for non-empty lists
One often wants to choose the identity element of the operation f as the initial value z. When no initial value seems appropriate, for example, when one wants to fold the function which computes the maximum of its two parameters over a non-empty list to get the maximum element of the list, there are variants of foldr and foldl which use the last and first element of the list respectively as the initial value. In Haskell and several other languages, these are called foldr1 and foldl1, the 1 making reference to the automatic provision of an initial element, and the fact that the lists they are applied to must have at least one element.
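For example, the maximum of a non-empty list falls out directly from the standard Prelude foldr1 (maxElem is a name introduced here):

```haskell
-- maximum of a non-empty list; no initial value is needed because
-- foldr1 seeds the fold with the last element
maxElem :: [Int] -> Int
maxElem = foldr1 max
```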

These folds use a type-symmetrical binary operation: the types of both its arguments, and its result, must be the same. Richard Bird in his 2010 book proposes "a general fold function on non-empty lists" which transforms its last element, by applying an additional argument function to it, into a value of the result type before starting the folding itself, and is thus able to use a type-asymmetrical binary operation like the regular foldr to produce a result of a type different from the list's elements' type.
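A sketch of such a function, here given the hypothetical name foldrn (the signature and clause structure are reconstructed from the description above, not quoted from Bird):

```haskell
-- g transforms the last element into the result type; the
-- type-asymmetrical f then folds the remaining elements onto it
foldrn :: (a -> b -> b) -> (a -> b) -> [a] -> b
foldrn _ g [x]    = g x
foldrn f g (x:xs) = f x (foldrn f g xs)
foldrn _ _ []     = error "foldrn: empty list"
```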

Linear folds
Using Haskell as an example, foldl and foldr can be formulated in a few equations.

If the list is empty, the result is the initial value. If not, fold the tail of the list using as new initial value the result of applying f to the old initial value and the first element.
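In standard Haskell notation (shadowing the Prelude name), this reads:

```haskell
import Prelude hiding (foldl)

-- left fold: the accumulator is updated before recursing
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl _ z []     = z
foldl f z (x:xs) = foldl f (f z x) xs
```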

If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest.
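Again in standard Haskell notation (shadowing the Prelude name):

```haskell
import Prelude hiding (foldr)

-- right fold: f combines the head with the folded tail
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z []     = z
foldr f z (x:xs) = f x (foldr f z xs)
```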

Tree-like folds
Lists can be folded over in a tree-like fashion, both for finite and for indefinitely defined lists:
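One common formulation pairs a whole-list tree fold for finite lists with a progressively deepening one for infinite lists (the names foldt and foldi follow Haskell folklore; a sketch):

```haskell
-- tree-like fold for finite lists: repeatedly combine pairwise
foldt :: (a -> a -> a) -> a -> [a] -> a
foldt _ z []  = z
foldt f z [x] = f x z
foldt f z xs  = foldt f z (pairs f xs)

-- tree-like fold usable on infinite lists: peel off the head, then
-- recurse on the pairwise-combined tail
foldi :: (a -> a -> a) -> a -> [a] -> a
foldi _ z []     = z
foldi f z (x:xs) = f x (foldi f z (pairs f xs))

-- combine elements pairwise, halving the list each pass
pairs :: (a -> a -> a) -> [a] -> [a]
pairs f (x:y:t) = f x y : pairs f t
pairs _ t       = t
```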

In the case of the foldi function, to avoid its runaway evaluation on indefinitely defined lists, the function f must not always demand its second argument's value, at least not all of it, or not immediately (see example below).

Evaluation order considerations
In the presence of lazy, or non-strict evaluation, foldr will immediately return the application of f to the head of the list and the recursive case of folding over the rest of the list. Thus, if f is able to produce some part of its result without reference to the recursive case on its "right", i.e., in its second argument, and the rest of the result is never demanded, then the recursion will stop (e.g., head == foldr (\a b->a) (error "empty list")). This allows right folds to operate on infinite lists. By contrast, foldl will immediately call itself with new parameters until it reaches the end of the list. This tail recursion can be efficiently compiled as a loop, but can't deal with infinite lists at all; it will recurse forever in an infinite loop.
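For instance, a right fold can short-circuit on an infinite list when the combining function eventually ignores its second argument (firstNeg is a name introduced here):

```haskell
-- find the first negative element; the fold stops as soon as the
-- combining function drops its second (recursive) argument
firstNeg :: [Int] -> Maybe Int
firstNeg = foldr (\x r -> if x < 0 then Just x else r) Nothing
```

firstNeg ([1, 2, -3] ++ [0 ..]) terminates even though the list is infinite, because the elements past -3 are never demanded.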

Having reached the end of the list, an expression is in effect built by foldl of nested left-deepening f-applications, which is then presented to the caller to be evaluated. Were the function f to refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on its left, i.e., in its first argument), then the recursion would stop. This means that while foldr recurses on the right, it allows for a lazy combining function to inspect the list's elements from the left; and conversely, while foldl recurses on the left, it allows for a lazy combining function to inspect the list's elements from the right, if it so chooses (e.g., last == foldl (\a b->b) (error "empty list")).

Reversing a list is also tail-recursive (it can be implemented using rev = foldl (\ys x -> x : ys) []). On finite lists, that means that left-fold and reverse can be composed to perform a right fold in a tail-recursive way (cf. 1+>(2+>(3+>0)) == ((0<+3)<+2)<+1), with a modification to the function f so it reverses the order of its arguments (i.e., foldr f z == foldl (flip f) z . foldl (flip (:)) []), tail-recursively building a representation of the expression that a right fold would build. The extraneous intermediate list structure can be eliminated with the continuation-passing style technique, foldr f z xs == foldl (\k x-> k . f x) id xs z; similarly, foldl f z xs == foldr (\x k-> k . flip f x) id xs z (flip is only needed in languages like Haskell, with its flipped order of arguments to the combining function of foldl, unlike e.g., in Scheme, where the same order of arguments is used for the combining functions of both foldl and foldr).
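The continuation-passing identity can be checked on a small example (rightViaLeft is a name introduced here; subtraction makes the grouping observable):

```haskell
-- foldr f z xs == foldl (\k x -> k . f x) id xs z, with f = (-), z = 0
rightViaLeft :: [Int] -> Int
rightViaLeft xs = foldl (\k x -> k . (-) x) id xs 0
```

rightViaLeft [1, 2, 3] equals foldr (-) 0 [1, 2, 3], i.e., 1 - (2 - (3 - 0)) = 2.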

Another technical point is that, in the case of left folds using lazy evaluation, the new initial parameter is not being evaluated before the recursive call is made. This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is the foldl' (note the apostrophe, pronounced 'prime') function in the Data.List library (one needs to be aware of the fact though that forcing a value built with a lazy data constructor won't force its constituents automatically by itself). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant space operation, when lazy evaluation of the final result is impossible or undesirable.
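A minimal sketch of the strict variant in use (bigSum is a name introduced here):

```haskell
import Data.List (foldl')

-- strict left fold: the accumulator is forced at each step, so the
-- sum of a long list runs in constant space
bigSum :: Int
bigSum = foldl' (+) 0 [1 .. 1000000]
```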

Examples
Using a Haskell interpreter, the structural transformations which fold functions perform can be illustrated by constructing a string:
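For instance (a sample GHCi session; λ> is the interpreter prompt):

```haskell
λ> foldr (\x y -> concat ["(", x, "+", y, ")"]) "0" (map show [1..5])
"(1+(2+(3+(4+(5+0)))))"
λ> foldl (\x y -> concat ["(", x, "+", y, ")"]) "0" (map show [1..5])
"(((((0+1)+2)+3)+4)+5)"
```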

Infinite tree-like folding is demonstrated e.g., in recursive primes production by an unbounded sieve of Eratosthenes in Haskell, where the function union operates on ordered lists in a local manner to efficiently produce their set union, and minus their set difference.
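A hedged sketch of such a sieve, with the tree-like foldi and the ordered-list union and minus defined locally so the block is self-contained (the non-sharing fixpoint _Y is an auxiliary name introduced here):

```haskell
-- unbounded sieve: odd candidates minus the tree-folded union of the
-- squares-onward multiples of the odd primes themselves
primes :: [Int]
primes = 2 : _Y ((3 :) . minus [5,7..]
                       . foldi (\(x:xs) ys -> x : union xs ys) []
                       . map (\p -> [p*p, p*p + 2*p ..]))
  where _Y g = g (_Y g)

-- tree-like fold, lazy enough for an infinite list of lists
foldi :: (a -> a -> a) -> a -> [a] -> a
foldi _ z []     = z
foldi f z (x:xs) = f x (foldi f z (pairs xs))
  where pairs (a:b:t) = f a b : pairs t
        pairs t       = t

-- union and difference of ordered lists, produced locally
union :: Ord a => [a] -> [a] -> [a]
union (x:xs) (y:ys) = case compare x y of
  LT -> x : union xs (y:ys)
  EQ -> x : union xs ys
  GT -> y : union (x:xs) ys
union xs ys = xs ++ ys

minus :: Ord a => [a] -> [a] -> [a]
minus (x:xs) (y:ys) = case compare x y of
  LT -> x : minus xs (y:ys)
  EQ ->     minus xs ys
  GT ->     minus (x:xs) ys
minus xs _ = xs
```

The combining function passed to foldi produces its first element before demanding its second argument, which is what keeps the fold productive on the infinite list of multiples lists.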

A finite prefix of primes is concisely defined as a folding of the set difference operation over the lists of enumerated multiples of integers. For finite lists, e.g., merge sort (and its duplicates-removing variety) could be easily defined using tree-like folding, with the combining function a duplicates-preserving variant of the ordered-list union.
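Hedged sketches of both ideas; primesTo, mergesort, and the helpers are names introduced here for illustration:

```haskell
-- a finite prefix of primes as a left fold of ordered-list difference
-- over the lists of multiples of the integers up to the square root
primesTo :: Int -> [Int]
primesTo n = foldl minus [2..n] [[p*p, p*p + p .. n] | p <- [2..isqrt n]]
  where isqrt = floor . sqrt . (fromIntegral :: Int -> Double)

minus :: Ord a => [a] -> [a] -> [a]
minus xs [] = xs
minus [] _  = []
minus (x:xs) (y:ys) = case compare x y of
  LT -> x : minus xs (y:ys)
  EQ ->     minus xs ys
  GT ->     minus (x:xs) ys

-- merge sort as a tree-like fold of merging over singleton lists
mergesort :: Ord a => [a] -> [a]
mergesort xs = foldt merge [] [[x] | x <- xs]

merge :: Ord a => [a] -> [a] -> [a]   -- duplicates-preserving union
merge xs [] = xs
merge [] ys = ys
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

foldt :: (a -> a -> a) -> a -> [a] -> a
foldt _ z []  = z
foldt f z [x] = f x z
foldt f z xs  = foldt f z (pairs f xs)
  where pairs g (a:b:t) = g a b : pairs g t
        pairs _ t       = t
```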

Functions head and last could have been defined through folding as
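Sketches of these definitions (shadowing the Prelude names):

```haskell
import Prelude hiding (head, last)

-- head: the combining function keeps the element, ignoring the rest,
-- so the fold stops after the first element (even on infinite lists)
head :: [a] -> a
head = foldr (\x _ -> x) (error "head: empty list")

-- last: the combining function keeps the new element, dropping the
-- accumulator, so the last element survives
last :: [a] -> a
last = foldl (\_ x -> x) (error "last: empty list")
```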

Universality
Fold is a polymorphic function. For any g having a definition

g [] = v
g (x:xs) = f x (g xs)

g can be expressed as

g = foldr f v
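As a concrete instance of this universal property (g here is a hypothetical product function matching the recursion scheme with v = 1 and f = (*)):

```haskell
-- g matches the scheme: g [] = v, g (x:xs) = f x (g xs)
g :: [Int] -> Int
g []     = 1
g (x:xs) = x * g xs

-- so by the universal property, g coincides with:
g' :: [Int] -> Int
g' = foldr (*) 1
```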

Also, in a lazy language with infinite lists, a fixed point combinator can be implemented via fold, proving that iterations can be reduced to folds:
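A sketch (fix and factorial are names introduced here; the construction relies on foldr never reaching the end of repeat undefined):

```haskell
-- foldr (\_ -> f) z (x:xs) = f (foldr (\_ -> f) z xs), so on an
-- infinite list the fold unwinds to f (f (f ...)), a fixed point of f
fix :: (a -> a) -> a
fix f = foldr (\_ -> f) undefined (repeat undefined)

-- e.g., recursion via the fixed point combinator:
factorial :: Int -> Int
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
```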