Learning augmented algorithm

A learning augmented algorithm is an algorithm that can make use of a prediction to improve its performance. Whereas standard algorithms take only the problem instance as input, learning augmented algorithms accept an additional parameter, which is often a prediction of some property of the solution. The algorithm uses this prediction to improve its running time or the quality of its output.

Description
A learning augmented algorithm typically takes an input $$(\mathcal{I}, \mathcal{A})$$. Here $$\mathcal{I}$$ is a problem instance and $$\mathcal{A}$$ is the advice: a prediction about a certain property of the optimal solution. The type of the problem instance and the prediction depend on the algorithm. Learning augmented algorithms usually satisfy the following two properties:


 * Consistency. A learning augmented algorithm is said to be consistent if it can be proven to perform well when it is provided with an accurate prediction. Usually, this is quantified by giving a bound on the performance that depends on the error in the prediction.
 * Robustness. An algorithm is called robust if its worst-case performance can be bounded even if the given prediction is inaccurate.

Learning augmented algorithms generally do not prescribe how the prediction should be done. For this purpose machine learning can be used.

Binary search
The binary search algorithm is an algorithm for finding elements of a sorted list $$x_1,\ldots,x_n$$. It needs $$O(\log(n))$$ steps to find an element with some known value $$y$$ in a list of length $$n$$. With a prediction $$i$$ for the position of $$y$$, the following learning augmented algorithm can be used.


 * First, look at position $$i$$ in the list. If $$x_i=y$$, the element has been found.
 * If $$x_i<y$$, probe the positions $$i+1,i+2,i+4,\ldots$$ until a position with an element of value at least $$y$$ is reached, then perform a standard binary search between the last two probed positions.
 * If $$x_i>y$$, do the same as in the previous case, but instead consider $$i-1,i-2,i-4,\ldots$$.

The error is defined to be $$\eta=|i-i^*|$$, where $$i^*$$ is the true index of $$y$$. In the learning augmented algorithm, probing the positions $$i+1,i+2,i+4,\ldots$$ takes $$\log_2(\eta)$$ steps. Then a binary search is performed on a list of size at most $$2\eta$$, which takes another $$\log_2(\eta)$$ steps. This makes the total running time of the algorithm $$2\log_2(\eta)$$. So, when the error is small, the algorithm is faster than a normal binary search, which shows that the algorithm is consistent. Since the error is at most $$n$$, the algorithm takes at most $$O(\log(n))$$ steps even in the worst case, so the algorithm is also robust.
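The procedure above can be sketched in Python. The function name `predicted_search` and its exact interface are illustrative choices, not a standard API; the exponential probing and the final binary search follow the steps described above.

```python
def predicted_search(xs, y, i):
    """Find the index of y in the sorted list xs, starting from a
    predicted index i. Uses O(log eta) probes, where eta = |i - i*|
    is the prediction error. Returns -1 if y is not present."""
    n = len(xs)
    i = max(0, min(i, n - 1))  # clamp the prediction into range
    if xs[i] == y:
        return i
    step = 1
    if xs[i] < y:
        # Probe i+1, i+2, i+4, ... until an element >= y is reached.
        lo, hi = i, min(i + step, n - 1)
        while xs[hi] < y and hi < n - 1:
            step *= 2
            lo, hi = hi, min(i + step, n - 1)
    else:
        # Symmetric case: probe i-1, i-2, i-4, ...
        hi, lo = i, max(i - step, 0)
        while xs[lo] > y and lo > 0:
            step *= 2
            hi, lo = lo, max(i - step, 0)
    # Standard binary search on the bracketed range of size O(eta).
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == y:
            return mid
        if xs[mid] < y:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

With an accurate prediction the loop brackets the target after a few probes, while a prediction that is off by up to $$n$$ still only costs $$O(\log(n))$$ probes in total.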

More examples
Learning augmented algorithms are known for:
 * The ski rental problem
 * The maximum weight matching problem
 * The weighted paging problem