Anticipation (artificial intelligence)

In artificial intelligence (AI), anticipation occurs when an agent makes decisions based on its explicit beliefs about the future. More broadly, "anticipation" can also refer to the ability to act in ways that appropriately take future events into account, without necessarily possessing an explicit model of those events.

The concept stands in contrast to the reactive paradigm, in which an agent responds only to the current state and does not predict future system states.

In AI
An agent employing anticipation tries to predict the future state of its environment and uses those predictions in its decision making. Consider a weather example:

If the sky is cloudy and the air pressure is low, it will probably rain soon, so take the umbrella with you. Otherwise, leave the umbrella at home.

These rules explicitly take into account possible future events.
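The umbrella rule above can be sketched as a simple predicate; the sensor inputs and the 1000 hPa pressure threshold are illustrative assumptions, not part of the original rule.

```python
def take_umbrella(sky_cloudy: bool, pressure_hpa: float) -> bool:
    """Anticipatory rule: decide now based on a predicted future state.

    The agent does not react to rain that is already falling; it forecasts
    rain from current observations and acts before the event occurs.
    """
    rain_expected = sky_cloudy and pressure_hpa < 1000.0  # crude forecast
    return rain_expected

print(take_umbrella(True, 995.0))    # cloudy, low pressure -> True
print(take_umbrella(False, 1020.0))  # clear, high pressure -> False
```

A purely reactive agent would instead condition only on whether it is raining now, with no forecast step.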

In 1985, Robert Rosen defined an anticipatory system as follows:

A system containing a predictive model of itself and/or its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant.
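Rosen's definition can be illustrated with a minimal sketch, assuming a hypothetical system whose internal model maps a present observation to a prediction about a later instant; all class and state names here are invented for illustration.

```python
class AnticipatorySystem:
    """Toy version of Rosen's anticipatory system: the system consults a
    predictive model and changes its *present* state in accord with the
    model's prediction about a *later* instant."""

    def __init__(self, model):
        self.model = model   # predictive model of the environment
        self.state = "idle"

    def step(self, observation):
        predicted = self.model(observation)  # prediction for a later instant
        # Change state now, before the predicted event happens.
        self.state = "prepare" if predicted == "rain" else "idle"
        return self.state

# Toy predictive model: forecasts rain when pressure is low (assumed threshold).
system = AnticipatorySystem(
    lambda obs: "rain" if obs["pressure_hpa"] < 1000 else "clear"
)
print(system.step({"pressure_hpa": 990}))   # -> prepare
print(system.step({"pressure_hpa": 1020}))  # -> idle
```

The essential feature is that the state transition depends on the model's prediction, not on the environment's current or past states alone.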

To some extent, Rosen's definition of anticipation applies to any system incorporating machine learning. At issue is how much of a system's behaviour should, or indeed can, be determined by reasoning over dedicated representations, how much by on-line planning, and how much must be provided by the system's designers.

In animals
Humans can make decisions based on explicit beliefs about the future. More broadly, animals can act in appropriate ways that take future events into account, although they may not necessarily have an explicit cognitive model of the future; evolution may have shaped simpler systemic features that result in adaptive anticipatory behavior in a narrow domain. For example, hibernation is anticipatory behavior, but does not appear to be driven by a cognitive model of the future.