Model-based reasoning

In artificial intelligence, model-based reasoning refers to an inference method used in expert systems based on a model of the physical world. With this approach, the main focus of application development is developing the model. Then at run time, an "engine" combines this model knowledge with observed data to derive conclusions such as a diagnosis or a prediction.

Reasoning with declarative models
Robots, and dynamical systems in general, are controlled by software. The software is implemented as an ordinary computer program consisting of if-then statements, for-loops and subroutines. The task of the programmer is to find an algorithm that can control the robot so that it accomplishes a task. In the history of robotics and optimal control, many paradigms were developed. One of them is expert systems, which focus on restricted domains. Expert systems are the precursor to model-based systems.

The main reason model-based reasoning has been researched since the 1990s is to create separate layers for modeling and for control of a system. This makes it possible to solve more complex tasks and to reuse existing programs for different problems. The model layer is used to monitor a system and to evaluate whether its actions are correct, while the control layer determines the actions and brings the system into a goal state.
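The two-layer idea can be illustrated with a minimal sketch. Here the model layer is a prediction function and the control layer selects actions by querying it; the one-dimensional dynamics and all names are illustrative, not taken from any particular system.

```python
def model_predict(state, action):
    """Model layer: predict the next state for a candidate action (toy 1-D dynamics)."""
    return state + action

def control(state, goal):
    """Control layer: pick the action whose predicted outcome is closest to the goal."""
    candidates = [-1, 0, 1]
    return min(candidates, key=lambda a: abs(goal - model_predict(state, a)))

# Drive the system from state 0 into the goal state 3.
state, goal = 0, 3
trajectory = [state]
while state != goal:
    state = model_predict(state, control(state, goal))
    trajectory.append(state)

print(trajectory)
```

Because the model is separated from the controller, the same control loop can be reused with a different `model_predict` for a different system.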

Typical techniques for implementing a model are declarative programming languages like Prolog and Golog. From a mathematical point of view, a declarative model has much in common with the situation calculus as a logical formalization for describing a system. From a more practical perspective, a declarative model means that the system is simulated with a game engine. A game engine takes a state and an input signal and determines the output. Sometimes, a game engine is described as a prediction engine for simulating the world.
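A prediction engine of this kind can be sketched with the model written as declarative data (a transition table) that a generic engine interprets; the door domain and all names here are illustrative assumptions.

```python
# Declarative world model: (state, action) -> next state.
MODEL = {
    ("door_closed", "open"):  "door_open",
    ("door_open",   "close"): "door_closed",
    ("door_open",   "open"):  "door_open",
    ("door_closed", "close"): "door_closed",
}

def predict(state, actions):
    """Generic engine: simulate an action sequence against the model,
    without touching the real world."""
    for action in actions:
        state = MODEL[(state, action)]
    return state

print(predict("door_closed", ["open", "close", "open"]))
```

Note that the engine contains no domain knowledge at all; changing the behavior of the system only requires changing the `MODEL` table, which is the point of keeping the model declarative.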

In 1990, criticism of model-based reasoning was formulated. Pioneers of Nouvelle AI argued that symbolic models are separated from the underlying physical systems and fail to control robots. According to representatives of behavior-based robotics, a reactive architecture can overcome this issue. Such a system does not need a symbolic model; instead, actions are connected directly to sensor signals, which are grounded in reality.

Knowledge representation
In a model-based reasoning system, knowledge can be represented using causal rules. For example, in a medical diagnosis system the knowledge base may contain the following rule:
 * $$\forall \text{patient} : \text{Stroke}(\text{patient}) \rightarrow \text{Confused}(\text{patient}) \land \text{Unequal}(\text{Pupils}(\text{patient}))$$

In contrast, in a diagnostic reasoning system knowledge would be represented through diagnostic rules such as:
 * $$\forall \text{patient} : \text{Confused}(\text{patient}) \rightarrow \text{Stroke}(\text{patient})$$
 * $$\forall \text{patient} : \text{Unequal}(\text{Pupils}(\text{patient})) \rightarrow \text{Stroke}(\text{patient})$$
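The difference between the two rule directions can be sketched as simple rule lookup; the predicates are the ones from the example above, encoded as strings, and the rule tables are illustrative.

```python
# Causal rules (model-based): cause -> expected effects.
causal_rules = {"Stroke": ["Confused", "UnequalPupils"]}

# Diagnostic rules: observed symptom -> candidate cause.
diagnostic_rules = {"Confused": "Stroke", "UnequalPupils": "Stroke"}

def predict_symptoms(disease):
    """Model-based direction: from a hypothesised cause to the observations it predicts."""
    return set(causal_rules.get(disease, []))

def suggest_causes(symptoms):
    """Diagnostic direction: map observations directly to candidate causes."""
    return {diagnostic_rules[s] for s in symptoms if s in diagnostic_rules}

print(predict_symptoms("Stroke"))
print(suggest_causes({"Confused"}))
```

A model-based diagnoser would use the causal rules the other way around: hypothesize a disease, predict its symptoms with `predict_symptoms`, and check the prediction against what is actually observed.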

There are many other forms of models that may be used. Models might be quantitative (for instance, based on mathematical equations) or qualitative (for instance, based on cause/effect models). They may include representations of uncertainty. They might represent behavior over time. They might represent "normal" behavior, or might only represent abnormal behavior, as in the examples above. Model types and their usage for model-based reasoning are discussed in the literature.
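The quantitative/qualitative distinction can be made concrete with two toy models of the same heating system; the coefficients and names are illustrative assumptions, not taken from any particular source.

```python
def quantitative_model(temp, heater_power, dt=1.0):
    """Quantitative: a concrete equation predicting the next temperature
    from heating input and heat loss."""
    return temp + 0.5 * heater_power * dt - 0.1 * temp * dt

def qualitative_model(heater_on):
    """Qualitative: only the direction of change, a pure cause/effect relation."""
    return "rising" if heater_on else "falling"

print(quantitative_model(20.0, 2.0))
print(qualitative_model(True))
```

The quantitative model yields precise predictions but needs calibrated parameters; the qualitative one needs no numbers at all, yet still supports reasoning such as "heater on, therefore temperature rising."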