
= Dynamic Epistemic Logic =

Dynamic Epistemic Logic (DEL) is a logic for reasoning about information change and the exchange of information. These changes can be due to events that change factual properties of the actual world: for example, a coin is publicly (or privately) flipped over. But what is mostly studied in dynamic epistemic logic are events that do not change factual properties of the world (so-called epistemic events) but that nevertheless bring about changes of (higher-order) beliefs: for example, a card is revealed publicly (or privately) to be red.

Dynamic epistemic logic is a young field of research. It really started with Plaza's logic of public announcement. Independently, Gerbrandy and Groeneveld proposed a system that moreover deals with private announcements and that was inspired by the work of Veltman. Another system was proposed by van Ditmarsch, whose main inspiration was the Cluedo game. But the most influential and original system was the one proposed by Baltag, Moss and Solecki. This system can deal with all the types of situations studied in the works above and provides a general approach to the topic.

DEL can be used in the context of multi-agent systems. It is built on top of epistemic logic to which it adds dynamics and events. Epistemic logic will first be recalled. Then, actions and events will enter into the picture and we will introduce the logical framework of DEL.

== Epistemic Logic ==
Epistemic logic is a modal logic concerned with the logical study of the notions of knowledge and belief. It is thereby concerned with understanding the process of reasoning about knowledge and belief: which principles relating the notions of knowledge and belief are intuitively plausible? Like epistemology, its name stems from the Greek word $$\epsilon\pi\iota\sigma\tau\eta\mu\eta$$, or 'episteme', meaning knowledge. But epistemology is more concerned with analyzing the very nature of knowledge (addressing questions such as "What is the definition of knowledge?" or "How is knowledge acquired?"). In fact, epistemic logic grew out of epistemology in the Middle Ages thanks to the efforts of Burley and Ockham, but the formal work, based on modal logic, that inaugurated contemporary research into epistemic logic dates back only to 1962 and is due to Hintikka. It sparked in the 1960s discussions about the principles of knowledge and belief, and many axioms for these notions were proposed and discussed. For example, the interaction axioms $$K\phi\rightarrow B\phi$$ and $$B\phi\rightarrow KB\phi$$ are often considered to be intuitive principles: if agent $$a$$ Knows $$\phi$$ then (s)he also Believes $$\phi$$, and if agent $$a$$ Believes $$\phi$$, then (s)he Knows that (s)he Believes $$\phi$$. More recently, these kinds of philosophical theories were taken up by researchers in economics, artificial intelligence and theoretical computer science, where reasoning about knowledge is a central topic. Due to the new settings in which epistemic logic was used, new perspectives and new features such as computability issues were added to its research agenda.

=== Syntax ===
$$AGTS=\{1,\ldots,n\}$$ is a finite set whose elements are called agents and $$PROP$$ is a set of propositional letters.

The epistemic language is an extension of the basic multi-modal language of modal logic with a common knowledge operator $$C_{A}$$ and a distributed knowledge operator $$D_{A}$$. The epistemic language $$\mathcal{L}_{\textsf{EL}}$$ is defined inductively by the following grammar in BNF:

$$\mathcal{L}_{\textsf{EL}}:\phi::= p~\mid~\neg\phi~\mid~(\phi\land\phi)~\mid~ K_j\phi~\mid~ C_{A}\phi ~\mid~ D_{A}\phi$$

where $$p\in PROP$$, $$j\in {AGTS}$$ and $$A\subseteq {AGTS}$$. The formula $$\langle K_{j}\rangle\phi$$ is an abbreviation for $$\neg K_j\neg\phi$$, $$E_{A}\phi$$ is an abbreviation for $$\bigwedge\limits_{j\in A} K_j\phi$$ and $$C\phi$$ an abbreviation for $$C_{AGTS}\phi$$.
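For instance, with two agents ($$AGTS=\{1,2\}$$), these abbreviations unfold as follows:

$$\langle K_1\rangle\phi=\neg K_1\neg\phi,\qquad E_{\{1,2\}}\phi=K_1\phi\land K_2\phi,\qquad C\phi=C_{\{1,2\}}\phi.$$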

Group notions: general, common and distributed knowledge.

In a multi-agent setting there are three important epistemic concepts: general belief (or knowledge), distributed belief (or knowledge) and common belief (or knowledge). The notion of common belief (or knowledge) was first studied by Lewis in the context of conventions. It was then applied to distributed systems and to game theory, where it allows one to express that the rationality of the players, the rules of the game and the set of players are commonly known.

General knowledge. General knowledge of $$\phi$$ means that everybody in the group of agents $${AGTS}$$ knows that $$\phi$$. Formally, this corresponds to the following formula: $$E\phi:=\underset{j\in {AGTS}}\bigwedge K_j\phi.$$

Common knowledge. Common knowledge of $$\phi$$ means that everybody knows $$\phi$$, but also that everybody knows that everybody knows $$\phi$$, that everybody knows that everybody knows that everybody knows $$\phi$$, and so on ad infinitum. Formally, this corresponds to the following formula: $$C\phi:=E\phi\land E E\phi\land E E E\phi\land\ldots$$ As we do not allow infinite conjunctions, the notion of common knowledge has to be introduced as a primitive in our language.

Before defining the language with this new operator, we give an example introduced by Lewis that illustrates the difference between the notions of general knowledge and common knowledge. Lewis wanted to know what kind of knowledge is needed so that the statement $$p$$: "every driver must drive on the right" is a convention among a group of agents, that is, so that everybody feels safe to drive on the right. Suppose there are only two agents $$i$$ and $$j$$. Everybody knowing $$p$$ (formally $$E p$$) is not enough: it might still be possible that agent $$i$$ considers it possible that agent $$j$$ does not know $$p$$ (formally $$\neg K_i K_j p$$). In that case agent $$i$$ will not feel safe to drive on the right, because agent $$j$$, not knowing $$p$$, might drive on the left. To avoid this problem, we could further assume that everybody knows that everybody knows $$p$$ (formally $$E E p$$). This is again not enough: it might still be possible that agent $$i$$ considers it possible that agent $$j$$ considers it possible that agent $$i$$ does not know $$p$$ (formally $$\neg K_i K_j K_i p$$). In that case, from $$i$$'s point of view, $$j$$ considers it possible that $$i$$, not knowing $$p$$, will drive on the left. So from $$i$$'s point of view, $$j$$ might drive on the left as well (by the same argument as above), and $$i$$ will not feel safe to drive on the right. Reasoning by induction, Lewis showed that for any $$k\in \mathbb{N}$$, $$E^1 p\land \ldots \land E^k p$$ is not enough for the drivers to feel safe to drive on the right. In fact what we need is an infinite conjunction. In other words, we need common knowledge of $$p$$: $$C p$$.

Distributed knowledge. Distributed knowledge of $$\phi$$ means that if the agents pooled their knowledge together, they would know that $$\phi$$ holds. In other words, the knowledge of $$\phi$$ is distributed among the agents. The formula $$D_{A}\phi$$ reads as 'it is distributed knowledge among the set of agents $$A$$ that $$\phi$$ holds'.
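Under the truth conditions given in the next subsection, the gap exhibited by Lewis's example can be stated compactly as follows (writing $$E^k$$ for $$k$$ iterations of $$E$$):

$$\models C\phi\rightarrow E^k\phi \text{ for every } k\geq 1,\qquad \not\models (E^1\phi\land\ldots\land E^k\phi)\rightarrow C\phi \text{ for any } k\geq 1.$$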

=== Semantics ===
Epistemic logic is a modal logic, so what we call an epistemic model $$\mathcal{M}=(W, R_1,\ldots, R_n,I)$$ is just a Kripke model as used in modal logic. The possible worlds $$W$$ are the relevant worlds needed to define such a representation and the valuation $$I:W\rightarrow 2^{PROP}$$ specifies which propositional facts (such as 'Ann has the red card') are true in these worlds. Finally, the accessibility relations $$R_j\subseteq W\times W$$ for each $$j\in AGTS$$ can model either the notion of knowledge or the notion of belief. We set $$w'\in R_j(w)$$ in case the world $$w'$$ is compatible with agent $$j$$'s belief (respectively knowledge) in world $$w$$. Intuitively, a pointed epistemic model $$(\mathcal{M},w_a)$$, where $$w_a\in\mathcal{M}$$, represents from an external point of view how the actual world $$w_a$$ is perceived by the agents in $${AGTS}$$.

For every epistemic model $$\mathcal{M}$$, $$w\in \mathcal{M}$$ and $$\phi\in\mathcal{L}_{\textsf{EL}}$$, we define the satisfaction relation $$\mathcal{M},w\models\phi$$ by the following truth conditions:

$$\mathcal{M},w\models p$$ iff $$p\in I(w)$$

$$\mathcal{M},w\models \neg\phi$$ iff it is not the case that $$\mathcal{M},w\models\phi$$

$$\mathcal{M},w\models \phi\land\psi$$ iff $$\mathcal{M},w\models\phi$$ and $$\mathcal{M},w\models\psi$$

$$\mathcal{M},w\models K_j\phi$$ iff for all $$v\in R_j(w)$$, $$\mathcal{M},v\models\phi$$

$$\mathcal{M},w\models C_A\phi$$ iff for all $$v\in\left(\underset{j\in A}{\bigcup}R_j\right)^+(w)$$, $$\mathcal{M},v\models\phi$$

$$\mathcal{M},w\models D_A\phi$$ iff for all $$v\in\underset{j\in A}{\bigcap}R_j(w)$$, $$\mathcal{M},v\models\phi$$

where $$\left(\underset{j\in A}{\bigcup}R_j\right)^+$$ is the transitive closure of $$\underset{j\in A}{\bigcup}R_j$$: we have that $$v\in\left(\underset{j\in A}{\bigcup}R_j\right)^+(w)$$ if, and only if, there are $$w_0,\ldots,w_m\in\mathcal{M}$$ and $$j_1,\ldots,j_m\in A$$ such that $$w_0=w, w_m=v$$ and for all $$i\in\{1,\ldots,m\}$$, $$w_{i-1} R_{j_i} w_i$$.

Note that, despite the fact that the notion of common belief has to be introduced as a primitive in the language, the definition of epistemic models does not need to be modified in order to interpret the common knowledge and distributed knowledge operators.

Card Example:

Players $$A$$, $$B$$ and $$C$$ (standing for Ann, Bob and Claire) play a card game with three cards: a red one, a green one and a blue one. Each of them has a single card but they do not know the cards of the other players. This example is depicted in the pointed epistemic model $$(\mathcal{M},w)$$ represented below. There is an arrow indexed by agent $$j\in\{A,B,C\}$$ from a possible world $$u$$ to a possible world $$v$$ when $$(u,v)\in R_j$$. Moreover, reflexive arrows are omitted, which means that for all $$j\in \{A,B,C\}$$ and all $$v\in \mathcal{M}$$, we have that $$(v,v)\in R_j$$. The propositional letters are read as follows:

$${\color{red}{A}}$$ stands for "$$A$$ has the red card",

$${\color{green}{B}}$$ stands for "$$B$$ has the green card",

$${\color{blue}{C}}$$ stands for "$$C$$ has the blue card",

and so on.

Then, the following statements hold:

$$\mathcal{M},w\models({\color{red}{A}}\land K_A{\color{red}{A}})\land({\color{blue}{C}}\land K_C{\color{blue}{C}})\land ({\color{green}{B}}\land K_B{\color{green}{B}})$$

'All the agents know the color of their card'.

$$\mathcal{M},w\models K_A({\color{blue}{B}}\vee{\color{green}{B}})\land K_A({\color{blue}{C}}\vee{\color{green}{C}})$$

'$$A$$ knows that $$B$$ has either the blue or the green card and that $$C$$ has either the blue or the green card'.

$$\mathcal{M},w\models E({\color{red}{A}}\vee{\color{blue}{A}}\vee{\color{green}{A}})\land C({\color{red}{A}}\vee{\color{blue}{A}}\vee{\color{green}{A}})$$

'Everybody knows that $$A$$ has the red, green or blue card and this is even common knowledge among all agents'.
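These statements can be verified mechanically. Below is a minimal Python sketch of a model checker for the card example implementing the truth conditions above; the encoding of formulas as nested tuples, the world names and all identifiers are our own illustrative choices, not part of DEL.

<syntaxhighlight lang="python">
# A minimal model checker for the epistemic language, applied to the card
# example.  Formulas are encoded as nested tuples and worlds as strings:
# "RGB" means "A holds red, B holds green, C holds blue".
from itertools import permutations

AGENTS = "ABC"
WORLDS = ["".join(deal) for deal in permutations("RGB")]

def successors(world, agents):
    """Union of the R_j for j in agents: worlds agreeing with `world`
    on the card held by some agent j (an equivalence relation per agent)."""
    return {v for j in agents for v in WORLDS
            if v[AGENTS.index(j)] == world[AGENTS.index(j)]}

def sat(w, phi):
    """Truth of phi at world w in the card model."""
    op = phi[0]
    if op == "prop":                      # ("prop", agent, card)
        return w[AGENTS.index(phi[1])] == phi[2]
    if op == "not":
        return not sat(w, phi[1])
    if op == "and":
        return sat(w, phi[1]) and sat(w, phi[2])
    if op in ("K", "E"):                  # ("K", "A", phi) / ("E", "ABC", phi)
        return all(sat(v, phi[2]) for v in successors(w, phi[1]))
    if op == "C":                         # transitive closure of the union
        seen, todo = set(), {w}
        while todo:
            u = todo.pop()
            for v in successors(u, phi[1]) - seen:
                seen.add(v)
                todo.add(v)
        return all(sat(v, phi[2]) for v in seen)
    if op == "D":                         # intersection of the relations
        common = set.intersection(*(successors(w, j) for j in phi[1]))
        return all(sat(v, phi[2]) for v in common)
    raise ValueError(phi)

def lor(a, b):                            # disjunction, via De Morgan
    return ("not", ("and", ("not", a), ("not", b)))

w = "RGB"                                 # actual deal: A red, B green, C blue
redA = ("prop", "A", "R")
someA = lor(redA, lor(("prop", "A", "G"), ("prop", "A", "B")))

print(sat(w, ("K", "A", redA)))                                   # True
print(sat(w, ("K", "A", lor(("prop", "B", "B"),
                            ("prop", "B", "G")))))                # True
print(sat(w, ("E", "ABC", someA)), sat(w, ("C", "ABC", someA)))   # True True
</syntaxhighlight>

The clause for $$C$$ computes the transitive closure of the union of the accessibility relations, exactly as in the truth condition given above.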

The notion of knowledge may comply with some constraints (or axioms) such as $$K_j\phi\rightarrow K_j K_j\phi$$: if agent $$j$$ knows something, she knows that she knows it. These constraints may affect the nature of the accessibility relations $$R_j$$, which may then have to comply with some extra properties. So, we are now going to define some particular classes of epistemic models, each adding some extra constraints on the accessibility relations $$R_j$$. These constraints are matched by particular axioms for the knowledge operator $$K_j$$.

Below is a list of the main properties of the accessibility relations. We also give, for each property, the axiom which defines the class of epistemic frames that fulfill this property.

 * Seriality ($$R_j(w)\neq\emptyset$$ for all $$w$$), defined by axiom D: $$K_j\phi\rightarrow\langle K_j\rangle\phi$$;
 * Reflexivity ($$w\in R_j(w)$$ for all $$w$$), defined by axiom T: $$K_j\phi\rightarrow\phi$$;
 * Transitivity (if $$v\in R_j(w)$$ and $$u\in R_j(v)$$, then $$u\in R_j(w)$$), defined by axiom 4: $$K_j\phi\rightarrow K_j K_j\phi$$;
 * Euclideanity (if $$v\in R_j(w)$$ and $$u\in R_j(w)$$, then $$u\in R_j(v)$$), defined by axiom 5: $$\neg K_j\phi\rightarrow K_j\neg K_j\phi$$.

=== Knowledge versus Belief ===
We use the same notation $$K$$ for both knowledge and belief. Hence, depending on the context, $$K\phi$$ will read either 'the agent Knows that $$\phi$$ holds' or 'the agent Believes that $$\phi$$ holds'. A crucial difference is that, unlike knowledge, beliefs can be wrong: the Truth axiom $$K\phi\rightarrow \phi$$ holds only for knowledge, not necessarily for belief. We are going to examine other axioms; some of them pertain more to the notion of knowledge whereas others pertain more to the notion of belief.

=== Axiomatization ===
The Hilbert proof system for the basic modal logic K is defined by the following axioms and inference rules: for all $$j\in AGTS$$,

 * Prop: all propositional tautologies;
 * K: $$K_j(\phi\rightarrow\psi)\rightarrow(K_j\phi\rightarrow K_j\psi)$$;
 * MP: from $$\phi$$ and $$\phi\rightarrow\psi$$, infer $$\psi$$;
 * Nec: from $$\phi$$, infer $$K_j\phi$$.

The axioms of an epistemic logic display the way the agents reason. For example, the axiom K together with the inference rule Nec entails that if I know $$\phi$$ ($$K\phi$$) and I know that $$\phi$$ implies $$\psi$$ ($$K(\phi\rightarrow \psi)$$), then I know $$\psi$$ ($$K\psi$$). Stronger constraints can be added. The following proof systems for $$\mathcal{L}_{\textsf{EL}}$$ are often used in the literature.

We denote by $$\mathbb{L}_{\textsf{EL}}$$ the set of proof systems $$\mathbb{L}_{\textsf{EL}}:=\{\textsf{K}, \textsf{KD45},\textsf{S4},\textsf{S4.2},\textsf{S4.3},\textsf{S4.3.2},\textsf{S4.4},\textsf{S5}\}$$.

Moreover, for all $$\mathcal{H}\in\mathbb{L}_{\textsf{EL}}$$, we define the proof system $$\mathcal{H}^{\textsf{C}}$$ by adding the following axiom scheme and rule of inference to those of $$\mathcal{H}$$. For all $$A\subseteq AGTS$$,

 * Fixed point axiom: $$C_A\phi\leftrightarrow E_A(\phi\land C_A\phi)$$;
 * Induction rule: from $$\phi\rightarrow E_A(\psi\land\phi)$$, infer $$\phi\rightarrow C_A\psi$$.

The relative strength of the proof systems for knowledge is as follows:

$$\textsf{S4}\subset \textsf{S4.2}\subset \textsf{S4.3}\subset\textsf{S4.3.2}\subset\textsf{S4.4}\subset \textsf{S5}.$$

So, all the theorems of $$\textsf{S4.2}$$ are also theorems of $$\textsf{S4.3}, \textsf{S4.3.2}, \textsf{S4.4}$$ and $$\textsf{S5}$$. Many philosophers claim that in the most general cases, the logic of knowledge is $$\textsf{S4.2}$$ or $$\textsf{S4.3}$$. Typically, in computer science, the logic of belief (doxastic logic) is taken to be $$\textsf{KD45}$$ and the logic of knowledge (epistemic logic) is taken to be $$\textsf{S5}$$, even if the logic $$\textsf{S5}$$ is only suitable for situations where the agents do not have mistaken beliefs.

We now discuss the most important axioms introduced above. Axioms T and 4 state that if the agent knows a proposition, then this proposition is true (axiom T, for Truth), and that if the agent knows a proposition, then she knows that she knows it (axiom 4, also known as the "KK-principle" or "KK-thesis"). Axiom T is often considered to be the hallmark of knowledge and has not been subjected to any serious attack. In epistemology, axiom 4 tends to be accepted by internalists, but not by externalists. It is nevertheless widely accepted by computer scientists (but also by many philosophers, including Plato, Aristotle, Saint Augustine, Spinoza and Schopenhauer, as Hintikka recalls).

A more controversial axiom for the logic of knowledge is axiom 5: $$\neg K\phi\rightarrow K\neg K\phi$$. This axiom states that if the agent does not know a proposition, then she knows that she does not know it. The addition of 5 to S4 yields the logic S5. Most philosophers (including Hintikka) have attacked this axiom, since numerous examples from everyday life seem to invalidate it. In general, axiom 5 is invalidated when the agent has mistaken beliefs, which can be due for example to misperceptions, lies or other forms of deception.

Axiom D states that the agent's beliefs are consistent. In combination with axiom K (where the knowledge operator is replaced by a belief operator), axiom D is in fact equivalent to a simpler axiom D' which conveys, maybe more explicitly, the fact that the agent's beliefs cannot be inconsistent ($$B \bot$$): $$\neg B \bot$$.

In all the theories of rational agency developed in artificial intelligence, the logic of belief is KD45. Note that all these agent theories follow the perfect external approach, which is at odds with their intention to implement their theories in machines. In that respect, an internal approach seems more appropriate since, in this context, the agent needs to reason from its own internal point of view. For the internal approach, the logic of belief is S5.
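To see why, here is a short derivation sketch of D' from D, assuming axiom D is stated in the form $$B\phi\rightarrow\neg B\neg\phi$$ (the converse direction is similar):

$$\begin{array}{lll}
1. & B\top\rightarrow\neg B\neg\top & \text{instance of D}\\
2. & B\top & \text{Nec applied to the tautology } \top\\
3. & \neg B\neg\top & \text{MP on 1 and 2}\\
4. & \neg B\bot & \text{from 3, since } \bot\leftrightarrow\neg\top
\end{array}$$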

For all $$\mathcal{H}\in\mathbb{L}_{\textsf{EL}}$$, the class of $$\mathcal{H}$$–models or $$\mathcal{H}^{\textsf{C}}$$–models is the class of epistemic models whose accessibility relations satisfy the properties listed above that correspond to the axioms of $$\mathcal{H}$$ or $$\mathcal{H}^{\textsf{C}}$$. Then, for all $$\mathcal{H}\in\mathbb{L}_{\textsf{EL}}$$, $$\mathcal{H}$$ is sound and strongly complete for $$\mathcal{L}_{\textsf{EL}}$$ w.r.t. the class of $$\mathcal{H}$$–models, and $$\mathcal{H}^{\textsf{C}}$$ is sound and strongly complete for $$\mathcal{L}_{\textsf{EL}}^{\textsf{C}}$$ w.r.t. the class of $$\mathcal{H}^{\textsf{C}}$$–models.

=== Decidability ===
All the logics introduced are decidable. The complexity of their satisfiability problem is as follows: for a single agent, it is PSPACE-complete for K, T and S4, and NP-complete for KD45 and S5; for at least two agents, it is PSPACE-complete for K, T, S4, KD45 and S5; for $$n\geq 2$$, adding the common knowledge operator makes it EXPTIME-complete. For $$n\geq 2$$, if we restrict to finite nestings of the epistemic operators, then the satisfiability problem is NP-complete for all the modal logics considered. If we then further restrict the language to having only finitely many propositional letters, the complexity goes down to linear time in all cases.

The computational complexity of the model checking problem is in P in all cases.

== Adding Dynamics ==
Dynamic Epistemic Logic (DEL) is a formalism for modeling epistemic situations involving several agents, and the changes that can occur to these situations after incoming information or, more generally, incoming action. The methodology of DEL splits the task of representing the agents' beliefs and knowledge into three parts:


 * 1) One represents their beliefs about an initial situation thanks to an epistemic model;
 * 2) One represents their beliefs about an event taking place in this situation thanks to an event model;
 * 3) One represents the way the agents update their beliefs about the situation after (or during) the occurrence of the event thanks to a product update.

Typically, an informative event can be a public announcement to all the agents of a formula $$\psi$$: this public announcement and the corresponding update constitute the dynamic part. Note that epistemic events can be much more complex than simple public announcements, and may include hiding information from some of the agents, cheating, lying, etc. This complexity is dealt with when we introduce the notion of event model. We will first focus on public announcements to get an intuition of the main underlying ideas of DEL.

=== Public Announcement Logic ===
We start by giving a concrete example where DEL can be used, to better understand what is going on. This example is called the muddy children puzzle. Then, we will sketch the formalization of such announcements provided by Public Announcement Logic (PAL).

Muddy Children Example:

We have two children, A and B, both dirty. A can see B but not himself, and B can see A but not herself. Let $$p$$ be the proposition stating that A is dirty, and $$q$$ be the proposition stating that B is dirty.

The reasoning in this puzzle can be decomposed into three steps, represented by the epistemic models below.

 * 1) We represent the initial situation by the pointed epistemic model represented below, where the relations are equivalence relations. States $$s,t,u,v$$ intuitively represent possible worlds; a proposition (for example $$p$$) satisfiable at one of these states intuitively means that in the possible world corresponding to this state, the intuitive interpretation of $$p$$ (A is dirty) is true. The links between states labelled by agents (A or B) intuitively express a notion of indistinguishability for the agent at stake between two possible worlds. For example, the link between $$s$$ and $$t$$ labelled by A intuitively means that A cannot distinguish the possible world $$s$$ from $$t$$ and vice versa. Indeed, A cannot see himself, so he cannot distinguish between a world where he is dirty and one where he is not dirty. However, he can distinguish between worlds where B is dirty or not, because he can see B. With this intuitive interpretation we are led to assume that our relations between states are equivalence relations. [[File:WikiDEL2b.png]]
 * 2) Now, suppose that their father comes and announces that at least one of them is dirty (formally, $$p\vee q$$). Then we update the model by suppressing the worlds where the content of the announcement is not fulfilled; in our case this is the world where $$\neg p$$ and $$\neg q$$ are true. This suppression is what we call the update, and we obtain the pointed epistemic model represented below. As a result of the announcement, both A and B know that at least one of them is dirty, and we can read this off the model. [[File:WikiDEL3b.png]]
 * 3) Now suppose there is a second (and final) announcement that says that neither child knows whether they are dirty (an announcement can express facts about the situation as well as epistemic facts about the knowledge held by the agents). We update the model similarly, by suppressing the worlds which do not satisfy the content of the announcement, or equivalently by keeping the worlds which do satisfy it. This update process yields the pointed epistemic model represented below. Interpreting this model, we get that A and B both know that they are dirty, which seems to contradict the content of the announcement. Here we see at work one of the main features of the update process: a proposition is not necessarily true after being announced; such announcements fail what is technically called "self-persistence", and this can happen for epistemic formulas (unlike propositional formulas). One must not confuse the announcement with the update induced by this announcement, which may cancel some of the information encoded in the announcement (as in our example).

Now we roughly present the main ingredients of Public Announcement Logic (PAL), which formalizes these ideas and combines epistemic and dynamic logic. As we have seen, a public announcement of a proposition $$\psi$$ changes the current epistemic model as in the figures above. We define the language $${\mathcal{L}_{PAL}}$$ inductively by the following grammar in BNF:

$${\mathcal{L}_{PAL}}:\phi::= p~\mid~\neg\phi~\mid~(\phi\land\phi)~\mid~K_j\phi~\mid~[\phi!]\phi$$

where $$j\in AGTS$$.

The epistemic language is interpreted as in epistemic logic. The truth condition for the new dynamic action modality $$[\psi!]\phi$$ is defined as follows:

$$\mathcal{M},w\models [\psi!]\phi \mbox{ iff } \mbox{if } \mathcal{M},w\models\psi\mbox{ then } \mathcal{M}^\psi,w\models\phi$$

where $$\mathcal{M}^\psi:=(W^\psi,R_1^\psi,\ldots, R_n^\psi,V^\psi)$$ with

$$W^\psi:=\{w\in W\mid \mathcal{M},w\models\psi\}$$,

$$R_j^\psi:=R_j\cap (W^\psi\times W^\psi)$$ for all $$j\in\{1,\ldots,n\}$$ and

$$V^\psi(p):=V(p)\cap W^\psi$$.

The formula $$[\psi!]\phi$$ intuitively means that after a truthful announcement of $$\psi$$, $$\phi$$ holds.
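To make the update concrete, here is a small Python sketch of the PAL update applied to the muddy children example; the encoding of worlds as pairs of booleans and all function names are our own illustrative choices, not part of PAL itself.

<syntaxhighlight lang="python">
# A sketch of the PAL update on the muddy children example.  Worlds are
# pairs (p, q) of booleans: p = "A is dirty", q = "B is dirty".

WORLDS = [(True, True), (True, False), (False, True), (False, False)]

def indist(agent, u, v):
    """A cannot see himself, so he confuses worlds agreeing on q;
    symmetrically, B confuses worlds agreeing on p."""
    return u[1] == v[1] if agent == "A" else u[0] == v[0]

def K(agent, phi, worlds, w):
    """Agent knows phi at w, relative to the current set of worlds."""
    return all(phi(v) for v in worlds if indist(agent, w, v))

def announce(psi, worlds):
    """Public announcement of psi: keep only the worlds satisfying psi."""
    return [v for v in worlds if psi(v)]

w = (True, True)  # the actual world: both children are dirty

# Father: "at least one of you is dirty" (p or q).
worlds1 = announce(lambda v: v[0] or v[1], WORLDS)

def knows_own(agent, worlds):
    """Does the agent know that they are dirty, at the actual world?"""
    own = (lambda v: v[0]) if agent == "A" else (lambda v: v[1])
    return K(agent, own, worlds, w)

print(knows_own("A", worlds1), knows_own("B", worlds1))  # False False

# Second announcement: "neither of you knows that you are dirty",
# evaluated in the model obtained after the first announcement.
worlds2 = announce(
    lambda v: not K("A", lambda u: u[0], worlds1, v)
              and not K("B", lambda u: u[1], worlds1, v),
    worlds1)

print(knows_own("A", worlds2), knows_own("B", worlds2))  # True True
</syntaxhighlight>

Note that the second announcement is evaluated in the model resulting from the first one, which is exactly the truth condition for $$[\psi!]\phi$$ given above.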

The proof system $$\mathcal{H}_{PAL}$$ defined below is sound and strongly complete for $${\mathcal{L}_{PAL}}$$ w.r.t. $$\mathcal{C}_{\textsf{ML}}$$. It is obtained by adding to a proof system for epistemic logic the necessitation rule for the announcement operator (from $$\phi$$, infer $$[\psi!]\phi$$) and the following reduction axioms:

 * $$[\psi!]p\leftrightarrow(\psi\rightarrow p)$$;
 * $$[\psi!]\neg\phi\leftrightarrow(\psi\rightarrow\neg[\psi!]\phi)$$;
 * $$[\psi!](\phi\land\phi')\leftrightarrow([\psi!]\phi\land[\psi!]\phi')$$;
 * $$[\psi!]K_j\phi\leftrightarrow(\psi\rightarrow K_j[\psi!]\phi)$$.

Here is a theorem of PAL: $$[q!]K q$$. It states that after a truthful public announcement of $$q$$, the agent knows that $$q$$ holds.
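As an illustration, this theorem can be derived by successively applying the reduction axioms above (a sketch, where the last step uses the fact that $$K\top$$ is a theorem by necessitation):

$$[q!]Kq\ \leftrightarrow\ (q\rightarrow K[q!]q)\ \leftrightarrow\ (q\rightarrow K(q\rightarrow q))\ \leftrightarrow\ (q\rightarrow K\top)\ \leftrightarrow\ \top.$$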

PAL is decidable, its model checking problem is solvable in polynomial time and its satisfiability problem is PSPACE-complete.

=== The General Case ===
In this section, we focus on items 2 and 3 above, namely on how to represent events and on how to update an epistemic model with such a representation of events by means of a product update.

==== Event Model ====
Epistemic logic is used to model how the agents perceive the actual world in terms of beliefs about the world and about the other agents' beliefs. The insight of the DEL approach is that one can describe how an event is perceived by the agents in a very similar way: the agents' perception of an event can also be described in terms of beliefs and knowledge. For example, the private announcement by $$A$$ to $$B$$ that her card is red can be described in these terms: while $$A$$ tells $$B$$ that her card is red (event $$a$$), $$C$$ believes that nothing happens (event $$b$$). This leads us to define the notion of event model, whose definition is very similar to that of an epistemic model.

A pointed event model $$(\mathcal{E},e)$$ represents how the actual event represented by $$e$$ is perceived by the agents. Intuitively, $$f\in R_j(e)$$ means that while the possible event represented by $$e$$ is occurring, agent $$j$$ considers possible that the possible event represented by $$f$$ is actually occurring.

An event model is a tuple $$\mathcal{E}=(W_\alpha,R_1,\ldots,R_n,I)$$ where:


 * $$W_\alpha$$ is a non-empty set of possible events,
 * $$R_j\subseteq W_\alpha\times W_\alpha$$ is an accessibility relation on $$W_\alpha$$, for each $$j\in AGTS$$,
 * $$I:W_\alpha\rightarrow \mathcal{L}_{\textsf{EL}}$$ is a function assigning to each possible event a formula of $$\mathcal{L}_{\textsf{EL}}$$. The function $$I$$ is called the precondition function.

We write $$e\in \mathcal{E}$$ for $$e\in W_\alpha$$, and $$(\mathcal{E},e)$$ is called a pointed event model ($$e$$ often represents the actual event). $$R_j(e)$$ denotes the set $$\{f\in W_\alpha\mid (e,f)\in R_j \}$$.

Card Example:

Let us resume the card example and assume that players $$A$$ and $$B$$ show their card to each other. As it turns out, $$C$$ noticed that $$A$$ showed her card to $$B$$ but did not notice that $$B$$ did so to $$A$$. Players $$A$$ and $$B$$ know this. This event is represented below in the event model $$(\mathcal{E},e)$$.

The boxed possible event $$e$$ corresponds to the actual event 'players $$A$$ and $$B$$ show their red and green cards respectively to each other' (with precondition $${\color{red}{A}}\land {\color{green}{B}}$$), $$f$$ stands for the atomic event 'player $$A$$ shows her green card' (with precondition $${\color{green}{A}}$$) and $$g$$ stands for the atomic event 'player $$A$$ shows her red card' (with precondition $${\color{red}{A}}$$). Players $$A$$ and $$B$$ show their cards to each other and 'know' this, while player $$C$$ considers it possible that player $$A$$ shows her red card and also that player $$A$$ shows her green card, since he does not know her card. In fact, these are the only events that player $$C$$ considers possible.



Another example of an event model is given below. This second example corresponds to the event whereby player $$A$$ shows her red card publicly to everybody. Player $$A$$ shows her red card and players $$A$$, $$B$$ and $$C$$ 'know' it, players $$A$$, $$B$$ and $$C$$ 'know' that each of them 'knows' it, etc. In other words, there is common knowledge among players $$A$$, $$B$$ and $$C$$ that player $$A$$ shows her red card.



==== Product Update ====
The DEL product update is defined below. It yields a new epistemic model $$(\mathcal{M},w)\otimes (\mathcal{E},e)$$ representing how the situation previously represented by $$(\mathcal{M},w)$$ is perceived by the agents after the occurrence of the event represented by $$(\mathcal{E},e)$$.

Let $$\mathcal{M}=(W,R_1,\ldots,R_n,I)$$ be an epistemic model and let $$\mathcal{E}=(W_\alpha,R_1,\ldots,R_n,I)$$ be an event model. The product update of $$\mathcal{M}$$ and $$\mathcal{E}$$ is the epistemic model $$\mathcal{M}\otimes\mathcal{E}=(W^\otimes,R^\otimes_1,\ldots,R^{\otimes}_n,I^\otimes)$$ defined as follows: for all $$v\in W$$ and all $$f\in W_\alpha$$,

 * $$W^\otimes=\{(v,f)\in W\times W_\alpha\mid \mathcal{M},v\models I(f)\}$$,
 * $$R_j^\otimes(v,f)=\{(u,g)\in W^\otimes\mid u\in R_j(v)$$ and $$g\in R_j(f)\}$$,
 * $$I^\otimes(v,f)= I(v)$$.

If $$w\in W$$ and $$e\in W_{\alpha}$$ are such that $$\mathcal{M},w\models I(e)$$, then $$(\mathcal{M},w)\otimes(\mathcal{E},e)$$ denotes the pointed epistemic model $$(\mathcal{M}\otimes\mathcal{E},(w,e))$$.
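As an illustration, here is a Python sketch of the product update applied to the first card-example event above (players $$A$$ and $$B$$ show each other their cards while $$C$$ only sees $$A$$ showing hers); the world and event encodings and all identifiers are our own illustrative choices.

<syntaxhighlight lang="python">
# A sketch of the product update on the card example.
from itertools import permutations

AGENTS = "ABC"
WORLDS = ["".join(d) for d in permutations("RGB")]  # "RGB": A red, B green, C blue

def R(agent):
    """Epistemic accessibility: each player only knows their own card."""
    i = AGENTS.index(agent)
    return {(u, v) for u in WORLDS for v in WORLDS if u[i] == v[i]}

# Possible events with their preconditions as predicates on worlds:
# e = "A and B show red and green to each other", f = "A shows green",
# g = "A shows red".
PRE = {
    "e": lambda w: w[0] == "R" and w[1] == "G",
    "f": lambda w: w[0] == "G",
    "g": lambda w: w[0] == "R",
}
ER = {  # accessibility over events: C confuses f and g
    "A": {("e", "e"), ("f", "f"), ("g", "g")},
    "B": {("e", "e"), ("f", "f"), ("g", "g")},
    "C": {(x, y) for x in "efg" for y in "fg"},
}

# Product update: keep the pairs (world, event) whose precondition holds,
# and relate them componentwise through the two accessibility relations.
new_worlds = [(v, x) for v in WORLDS for x in PRE if PRE[x](v)]
new_R = {}
for j in AGENTS:
    Rj, Ej = R(j), ER[j]
    new_R[j] = {(s, t) for s in new_worlds for t in new_worlds
                if (s[0], t[0]) in Rj and (s[1], t[1]) in Ej}

actual = ("RGB", "e")

# A now knows B's card: every A-successor of the actual world gives B green.
succ_A = {t for (s, t) in new_R["A"] if s == actual}
print(all(t[0][1] == "G" for t in succ_A))  # True

def A_knows_B_green(s):
    return all(t[0][1] == "G" for (u, t) in new_R["A"] if u == s)

# C believes that A does not know B's card.
succ_C = {t for (s, t) in new_R["C"] if s == actual}
print(all(not A_knows_B_green(t) for t in succ_C))  # True
</syntaxhighlight>

The second check corresponds to the statement $$K_{C}\neg K_{A}{\color{green}{B}}$$ verified in the card example below.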

Card Example:

As a result of the first event described above (players $$A$$ and $$B$$ show their cards to each other in front of player $$C$$), the agents update their beliefs. We get the situation represented in the pointed epistemic model $$(\mathcal{M},w)\otimes(\mathcal{E},e)$$ below. In this $$\mathcal{L}_{\textsf{EL}}$$–model, we have for example the following statement: $$(\mathcal{M},w)\otimes(\mathcal{E},e)\models ({\color{green}{B}}\land K_{A} {\color{green}{B}}) \land K_{C}\neg K_{A} {\color{green}{B}}.$$ It states that player $$A$$ 'knows' that player $$B$$ has the green card, but player $$C$$ believes that player $$A$$ does not know it.

The result of the second event described above (the public event whereby player $$A$$ shows her red card to everybody, represented by the pointed event model $$(\mathcal{F},e)$$) is described below. In this pointed epistemic model, the following statement holds: $$(\mathcal{M},w)\otimes(\mathcal{F},e)\models C_{B,C}({\color{red}{A}}\land{\color{green}{B}}\land{\color{blue}{C}})\land \neg K_A({\color{green}{B}}\land{\color{blue}{C}})$$. It states that there is common knowledge among $$B$$ and $$C$$ that they know the true state of the world (namely $$A$$ has the red card, $$B$$ has the green card and $$C$$ has the blue card), but $$A$$ does not know it.