Fitness function

A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in software architecture and in evolutionary algorithms (EAs), such as genetic programming and genetic algorithms, to guide simulations towards optimal design solutions.

Software developers know that they should not release bad code, but that priority competes with many others for busy developers. In software architecture, fitness functions are therefore used to keep the quality of the software in check.

In the field of EAs, each design solution is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing, or simulation, the idea is to delete the n worst design solutions, and to breed n new ones from the best design solutions. Each design solution, therefore, needs to be awarded a figure of merit, to indicate how close it came to meeting the overall specification, and this is generated by applying the fitness function to the test, or simulation, results obtained from that solution.
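The generational loop described above can be sketched in Python. The chromosome encoding, the fitness function (here a hypothetical figure of merit rewarding a mean close to a target value), and the breeding operator are placeholders for illustration:

```python
import random

def fitness(chromosome):
    # Hypothetical figure of merit: how close the chromosome's mean
    # is to a target value (higher is better, 0 is the best possible).
    target = 5.0
    return -abs(sum(chromosome) / len(chromosome) - target)

def breed(parent_a, parent_b):
    # One-point crossover followed by a small mutation.
    point = random.randrange(1, len(parent_a))
    child = parent_a[:point] + parent_b[point:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-1.0, 1.0)
    return child

def evolve(population, n, generations):
    """Each generation: rank by fitness, delete the n worst,
    breed n new solutions from the surviving best."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:-n]          # delete the n worst
        offspring = [breed(*random.sample(survivors[:10], 2))
                     for _ in range(n)]      # breed n from the best
        population = survivors + offspring
    return max(population, key=fitness)

random.seed(1)
pop = [[random.uniform(0, 10) for _ in range(8)] for _ in range(30)]
best = evolve(pop, n=10, generations=50)
```

Note that the fitness function is the only problem-specific part of this loop; the selection and breeding machinery stays the same across tasks.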

Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases. Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run.

A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases, such as tournament selection or Pareto optimization.
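Tournament selection illustrates this: it needs only a pairwise comparison, never an absolute value. A minimal sketch, in which the `better` predicate is a hypothetical stand-in for any relative comparison (for example, a head-to-head game between two candidates):

```python
import random

def better(a, b):
    # Hypothetical relative comparison: we can only say which
    # candidate is preferable, without assigning an absolute score.
    return abs(a - 42) < abs(b - 42)

def tournament_select(population, k=3):
    """Pick k random candidates and return the winner of
    successive pairwise comparisons; no absolute fitness
    value is ever computed."""
    contestants = random.sample(population, k)
    winner = contestants[0]
    for challenger in contestants[1:]:
        if better(challenger, winner):
            winner = challenger
    return winner

random.seed(0)
pop = [random.randint(0, 100) for _ in range(20)]
parent = tournament_select(pop, k=5)
```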

Requirements of evaluation and fitness function
The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is poorly designed, the algorithm will either converge on an inappropriate solution or will have difficulty converging at all.

Defining the fitness function is not straightforward in many cases and is often performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, normally humans.

Computational efficiency
The fitness function should not only correlate closely with the designer's goal, but it also should be computationally efficient. Speed of execution is very important, as a typical genetic algorithm must be iterated many times in order to produce a usable result for a non-trivial problem.

Fitness approximation may be appropriate, especially in the following cases:
 * The fitness computation time of a single solution is extremely high
 * A precise model for fitness computation is missing
 * The fitness function is uncertain or noisy.

Alternatively, or in addition to fitness approximation, the fitness calculations can be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.
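A sketch of distributing the fitness calculations of one generation, using Python's standard thread pool as a stand-in for a parallel computer; the expensive `fitness` function here is a hypothetical placeholder for, say, a simulation run:

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(chromosome):
    # Hypothetical expensive evaluation, e.g. running a simulation.
    return sum(x * x for x in chromosome)

def evaluate_generation(offspring, workers=4):
    """Evaluate all offspring of one generation in parallel and
    return (fitness, chromosome) pairs."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(fitness, offspring))
    return list(zip(scores, offspring))

offspring = [[1, 2], [3, 4], [0, 0]]
scored = evaluate_generation(offspring)
```

For truly CPU-bound evaluations a process pool or a cluster would be the realistic choice; the structure of the code stays the same.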

Multi-objective optimization
Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose, Pareto optimization and optimization based on fitness calculated using the weighted sum.

Weighted sum and penalty functions
When optimizing with the weighted sum, the single values of the $$O$$ objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each objective $$o_i$$ is assigned a weight $$w_i$$ in the form of a percentage value so that the overall raw fitness $$f_{raw}$$ can be calculated as a weighted sum:

$$f_{raw} = \sum_{i=1}^O{o_i \cdot w_i} \quad \mathsf{with} \quad \sum_{i=1}^O{w_i} = 1$$

A violation of $$R$$ restrictions $$r_j$$ can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function $$pf_j(r_j)$$ can be defined for each restriction which returns a value between $$0$$ and $$1$$ depending on the degree of violation, with the result being $$1$$ if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s) and the result is then the final fitness $$f_{final}$$:

$$f_{final} = f_{raw} \cdot \prod_{j=1}^R{pf_j(r_j)} = \sum_{i=1}^O{(o_i \cdot w_i)} \cdot \prod_{j=1}^R{pf_j(r_j)}$$

This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate for each other and that the weights have to be defined before the optimization. In addition, certain solutions may not be obtainable; see the section on the comparison of both types of optimization.
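The two formulas translate directly into code. The objective values, weights, and penalty factor below are hypothetical examples:

```python
def raw_fitness(objectives, weights):
    # f_raw = sum of o_i * w_i, with the weights summing to 1
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(o * w for o, w in zip(objectives, weights))

def final_fitness(objectives, weights, penalties):
    # f_final = f_raw multiplied by all penalty factors pf_j(r_j),
    # each in [0, 1] and equal to 1 when there is no violation.
    f = raw_fitness(objectives, weights)
    for pf in penalties:
        f *= pf
    return f

# Two normalized objectives (degrees of fulfillment in [0, 1]) ...
objectives = [0.8, 0.5]
weights = [0.7, 0.3]
# ... and one restriction, violated to a degree that halves the fitness.
penalties = [0.5]
f = final_fitness(objectives, weights, penalties)   # 0.71 * 0.5 = 0.355
```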

Pareto optimization
A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called Pareto set, represents the set of all optimal compromises between the objectives. The figure below on the right shows an example of the Pareto set of two objectives $$f_1$$ and $$f_2$$ to be maximized. The elements of the set form the Pareto front (green line). From this set, a human decision maker must subsequently select the desired compromise solution. Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides.
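The dominance relation and the constraint-handling rule described above can be sketched for two objectives to be maximized; the solution data are hypothetical:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b. Each solution is a pair
    (objectives, violation): a smaller constraint violation always wins;
    among equally feasible solutions, a must be at least as good in all
    objectives and strictly better in one (objectives are maximized)."""
    (fa, va), (fb, vb) = a, b
    if va != vb:
        return va < vb
    return (all(x >= y for x, y in zip(fa, fb))
            and any(x > y for x, y in zip(fa, fb)))

def pareto_front(solutions):
    """All solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

solutions = [
    ((3, 1), 0.0),   # optimal trade-off
    ((1, 3), 0.0),   # optimal trade-off
    ((2, 1), 0.0),   # dominated by (3, 1)
    ((9, 9), 2.0),   # good objectives, but violates a constraint
]
front = pareto_front(solutions)   # only the two feasible trade-offs remain
```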

It was recognized early on that EAs, with their simultaneously considered solution set, are well suited to finding solutions in one run that cover the Pareto front sufficiently well. Besides SPEA2, NSGA-II and NSGA-III have established themselves as standard methods.

The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible from four objectives onwards. Furthermore, the effort increases exponentially with the number of objectives. If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods.

Comparison of both types of assessment
With the help of the weighted sum, the total Pareto front can be obtained by a suitable choice of weights, provided that it is convex. This is illustrated by the adjacent picture on the left. The point $$\mathsf{P}$$ on the green Pareto front is reached by the weights $$w_1$$ and $$w_2$$, provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution set $$Z$$ is shown by the drawn arrows.

In case of a non-convex front, however, non-convex front sections are not reachable by the weighted sum. In the adjacent image on the right, this is the section between points $$\mathsf{A}$$ and $$\mathsf{B}$$. This can be remedied to a limited extent by using an extension of the weighted sum, the cascaded weighted sum.

Comparing both assessment approaches, the use of Pareto optimization is certainly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort to determine the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes.

Auxiliary objectives
In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. An example of a scheduling task is used for illustration. The optimization goals include not only a generally fast processing of all orders but also compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. A following mutation does not change this, but schedules the work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach, together with another example, can be found in the literature.
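The effect can be illustrated numerically: with only the latest completion time as objective, a mutation that merely schedules an intermediate step earlier changes nothing, while an auxiliary objective on step delays makes the improvement visible. The schedules, deadline, and earliest start times below are hypothetical:

```python
def lateness(schedule, deadline):
    # Primary objective: violation of the latest completion time.
    return max(0, max(end for _, end in schedule.values()) - deadline)

def total_delay(schedule, earliest):
    # Auxiliary objective: how far each work step starts after its
    # earliest possible start time.
    return sum(start - earliest[step]
               for step, (start, _) in schedule.items())

deadline = 10
earliest = {"d": 2, "e": 6}
# Step e still ends at time 12 in both schedules, so lateness alone
# cannot distinguish them ...
before = {"d": (5, 7), "e": (9, 12)}
after = {"d": (2, 4), "e": (9, 12)}   # mutation: step d scheduled earlier
# ... but the auxiliary objective rewards the earlier start of step d,
# a necessary intermediate step towards an earlier start of step e.
```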