User:Jochen Burghardt/sandbox

Distributive elements
In a (distributive or non-distributive) lattice, an element x is called a distributive element if ∀y,z: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z). An element x is called a dual distributive element if ∀y,z: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z). In a distributive lattice, every element is both distributive and dual distributive. In a non-distributive lattice, there may be elements that are distributive, but not dual distributive, and vice versa. For example, in the depicted pentagon lattice N5, one element is distributive but not dual distributive, while another is dual distributive but not distributive.
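The claim about N5 can be checked by brute force. The sketch below assumes the naming 0 < a < 1 and 0 < b < c < 1, with a incomparable to b and c; the picture referred to above may label the elements differently.

```python
# Brute-force check of which elements of the pentagon lattice N5 are
# distributive and/or dual distributive. Labels 0 < a < 1, 0 < b < c < 1
# (a incomparable to b and c) are an assumed naming for the picture.
from itertools import product

E = ["0", "a", "b", "c", "1"]
covers = [("0", "a"), ("a", "1"), ("0", "b"), ("b", "c"), ("c", "1")]

# reflexive-transitive closure of the covering relation gives the order <=
leq = set(covers) | {(x, x) for x in E}
while True:
    new = {(x, z) for (x, y1) in leq for (y2, z) in leq if y1 == y2}
    if new <= leq:
        break
    leq |= new

def join(x, y):  # least upper bound (exists, since N5 is a lattice)
    ubs = [u for u in E if (x, u) in leq and (y, u) in leq]
    return next(u for u in ubs if all((u, v) in leq for v in ubs))

def meet(x, y):  # greatest lower bound
    lbs = [l for l in E if (l, x) in leq and (l, y) in leq]
    return next(l for l in lbs if all((v, l) in leq for v in lbs))

distributive = [x for x in E
                if all(join(x, meet(y, z)) == meet(join(x, y), join(x, z))
                       for y, z in product(E, E))]
dual_distributive = [x for x in E
                     if all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
                            for y, z in product(E, E))]
print(distributive)       # ['0', 'a', 'c', '1']
print(dual_distributive)  # ['0', 'a', 'b', '1']
```

Under this naming, c is distributive but not dual distributive, and b is dual distributive but not distributive; the top and bottom elements (and a) are both.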

Rational consequence relation
In classical logic, a statement $$\theta \vdash^* \phi$$ implies the statement $$\theta \wedge \psi \vdash^* \phi$$ under all circumstances. For example, the statement


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \vdash^* \text{delicious}(x)$$ ("If a cake contains sugar then it tastes good"),

implies under a monotone consequence relation the statement


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \land \text{contains}(x,\text{soap}) \vdash^* \text{delicious}(x)$$ ("If a cake contains sugar and soap then it tastes good").

Clearly this does not match our own understanding of cakes. Non-monotonic logic, in particular the rational consequence relation, is designed to avoid this undesired inference. By asserting


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \vdash \text{delicious}(x)$$ ("If a cake contains sugar then it usually tastes good"),

a rational consequence relation allows for a more realistic model of the real world, and it does not automatically follow that


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \land \text{contains}(x,\text{soap}) \vdash \text{delicious}(x)$$ ("If a cake contains sugar and soap then it usually tastes good").

However, in some cases, we do want to conclude $$\theta \wedge \psi \vdash \phi$$ from $$\theta \vdash \phi$$. The rules CMO (cautious monotonicity) and RMO (rational monotonicity) serve that purpose; we give an example of how each of them can be applied. If we also have the information
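For reference, the two rules can be stated as inference rules; these are the standard formulations of cautious monotonicity and rational monotonicity over the defeasible consequence relation ⊢:

```latex
% cautious monotonicity (CMO) and rational monotonicity (RMO)
\[
\text{(CMO)}\quad
\frac{\theta \vdash \phi \qquad \theta \vdash \psi}{\theta \wedge \psi \vdash \phi}
\qquad\qquad
\text{(RMO)}\quad
\frac{\theta \vdash \phi \qquad \theta \not\vdash \lnot\psi}{\theta \wedge \psi \vdash \phi}
\]
```

Note that RMO is applicable only when $$\theta \not\vdash \lnot\psi$$, i.e. when the premise does not defeasibly entail the negation of the added conjunct.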


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \vdash \text{contains}(x,\text{butter})$$ ("If a cake contains sugar then it usually contains butter"),

then we may legally conclude, under CMO, that


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \land \text{contains}(x,\text{butter})\vdash \text{delicious}(x)$$ ("If a cake contains sugar and butter then it usually tastes good").

Rule RMO, in contrast, exploits the absence of information. If we do not have the information


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \vdash \lnot \text{contains}(x,\text{soap})$$ ("If a cake contains sugar then usually it contains no soap"),

then we may legally conclude, under RMO, that


 * $$\text{isCake}(x) \land \text{contains}(x,\text{sugar}) \land \text{contains}(x,\text{soap})\vdash \text{delicious}(x)$$ ("If a cake contains sugar and soap then it usually tastes good").

If this latter conclusion seems ridiculous to you, then it is likely that you are subconsciously applying your own preconceived knowledge about cakes when evaluating the validity of the statement. That is, from your experience you know that cakes containing soap are likely to taste bad, so you add to the system your own knowledge, such as "Cakes that contain sugar do not usually contain soap", even though this knowledge is absent from it. If the conclusion still seems silly to you, then you might consider replacing the word "soap" with the word "eggs" to see if it changes your feelings.

Relation (mathematics)


Properties of, and operations on, mathematical relations can be illustrated by family relationships.

The relation "is a child of" is
 * irreflexive (nobody is a child of her/himself),
 * asymmetric (if x is a child of y, then y cannot be a child of x),
 * left-total (everybody is the child of someone),
 * but not right-total (not everybody has a child).

The relation "is a descendant of" may, depending on the convention adopted, be reflexive ("everybody is considered a descendant of her/himself"), irreflexive ("nobody is considered a descendant of her/himself"), or even neither ("some people are considered descendants of themselves, while others are not"). In any case, it is
 * antisymmetric (if x is a descendant of y, and y is a descendant of x, then x and y are the same person),
 * transitive (if x is a descendant of y, and y is a descendant of z, then x is also a descendant of z),
 * left-total (everybody is a descendant of someone).
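Transitivity makes "is a descendant of" computable as the transitive closure of "is a child of". A minimal sketch, using the irreflexive convention and a made-up three-person family tree:

```python
# "is a descendant of" as the transitive closure of "is a child of"
# (irreflexive convention); the family tree here is hypothetical.
child_of = {("Carol", "Bob"), ("Bob", "Alice")}

def transitive_closure(r):
    closure = set(r)
    while True:
        # compose the closure with itself and keep any new pairs
        new = {(x, z) for (x, y1) in closure for (y2, z) in closure if y1 == y2}
        if new <= closure:
            return closure
        closure |= new

descendant_of = transitive_closure(child_of)
print(sorted(descendant_of))
# [('Bob', 'Alice'), ('Carol', 'Alice'), ('Carol', 'Bob')]
```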

The relation "is a parent of" is the converse of "is a child of": x is a parent of y if, and only if, y is a child of x. Therefore, "is a parent of" is
 * irreflexive (nobody is a parent of her/himself, since (see above) nobody is a child of her/himself),
 * asymmetric (if x is a parent of y, then y is a child of x, hence (see above) x cannot be a child of y, that is, y cannot be a parent of x),
 * right-total (everybody has a parent, since (see above) everybody is the child of someone),
 * but not left-total (not everybody is the parent of somebody, since (see above) not everybody has a child).

The relation "is a child of" is the union of "is a daughter of" and "is a son of": x is a child of y if, and only if, x is a daughter of y, or x is a son of y.

The relation "is an aunt of" is the composition of "is a parent of" ∘ "is a sister of": x is an aunt of z if, and only if, x is a sister of some y such that y is a parent of z. For example, Bronisława Dłuska is an aunt of Ève Curie, since Bronisława Dłuska is a sister of Marie Curie, who, in turn, is a parent of Ève Curie, cf. picture.

Composing in the reverse order yields a different relation R: we have x R z if, and only if, x is a parent of some y such that y is a sister of z. While R is contained in the relation "is a parent of", the two relations do not coincide: for example, Anne Joliot is a parent of Marc Joliot, but (Anne Joliot) R (Marc Joliot) does not hold as long as Marc Joliot does not have a sister.
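Converse, union, and composition are easy to illustrate with relations modeled as sets of pairs. A sketch using the family facts mentioned above; the composition convention matches the text, i.e. P ∘ S relates x to z if x S y and y P z for some y:

```python
# Converse and composition of relations as sets of ordered pairs,
# using the Curie-family facts from the text.
parent_of = {("Marie Curie", "Ève Curie")}
sister_of = {("Bronisława Dłuska", "Marie Curie")}

def converse(r):
    return {(y, x) for (x, y) in r}

def compose(p, s):
    # x (p ∘ s) z  iff  there is some y with  x s y  and  y p z
    return {(x, z) for (x, y1) in s for (y2, z) in p if y1 == y2}

child_of = converse(parent_of)           # Ève Curie is a child of Marie Curie
aunt_of = compose(parent_of, sister_of)  # "is a parent of" ∘ "is a sister of"
print(aunt_of)  # {('Bronisława Dłuska', 'Ève Curie')}
```

Swapping the arguments of `compose` yields the reverse-order relation R discussed above, which here is empty, since no parent-of pair feeds into a sister-of pair.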

 * TO DO: illustrate intersection, complement, restriction
 * TO DO: illustrate all combinations of properties
 * TO DO: illustrate the property "connected"

Recent

 * User:Jochen_Burghardt/sandbox1 - Stanislas I. Dockx
 * User:Jochen_Burghardt/sandbox2 - Equation
 * User:Jochen_Burghardt/sandbox3 - Karl-Heinz Boseck
 * User:Jochen_Burghardt/sandbox4 - Path ordering (term rewriting)
 * User:Jochen_Burghardt/sandbox5 - Many-sorted logic
 * User:Jochen_Burghardt/sandbox6 - Chomsky hierarchy
 * User:Jochen_Burghardt/sandbox7 - New riddle of induction
 * User:Jochen_Burghardt/sandbox8 - Computable function

In particular, a definition of an intended function f may establish a relation that is not right-unique or not serial, and hence in fact not a function. In that case, f is called ill-defined; if the relation is not right-unique, f is also called ambiguous.
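Both conditions are easy to test mechanically. A minimal sketch, with a made-up relation f that violates both:

```python
# Checking whether a relation establishes an actual function on a given
# domain: it must be right-unique (at most one value per argument) and
# serial (at least one value per argument).
def is_right_unique(r):
    return all(y1 == y2 for (x1, y1) in r for (x2, y2) in r if x1 == x2)

def is_serial(r, domain):
    return all(any(a == x for (a, _) in r) for x in domain)

f = {(1, "a"), (2, "b"), (2, "c")}  # made-up example relation
print(is_right_unique(f))           # False: argument 2 has two values (ambiguous)
print(is_serial(f, {1, 2, 3}))      # False: argument 3 has no value
```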

commons:User:Jochen Burghardt

Relation (mathematics)
In mathematics, a relation on a set $$X$$ associates certain elements of $$X$$ with each other. Formally, a relation $$R$$ on $$X$$ is a set of ordered pairs of elements of $$X,$$ that is, a subset of the Cartesian product $$X \times X$$. An element $$x$$ of $$X$$ is related to an element $$y$$ if the pair $$(x,y)$$ is an element of $$R$$.

For example, "is adjacent to" is a relation on the set of faces of the depicted example die; it relates, e.g., the faces 5 and 6, and likewise 6 and 3, but neither 3 and 4, nor 1 and 6. The left table shows for each cell whether its row's face is adjacent to its column's face. The example adjacency relation is represented as the set
 * $$\begin{array}{lcccc} R_1 = \{ & (1,2), & (1,3), & (1,4), & (1,5), \\ & (2,1), & (2,3), & (2,4), & (2,6), \\ & (3,1), & (3,2), & (3,5), & (3,6), \\ & (4,1), & (4,2), & (4,5), & (4,6), \\ & (5,1), & (5,3), & (5,4), & (5,6), \\ & (6,2), & (6,3), & (6,4), & (6,5) & \}; \end{array}$$

this set has one element for each marked entry in the left table.

Another example of a relation on the same set is "divides"; it relates, e.g., 3 to 6, but not 6 to 3. The right table illustrates this relation. It is represented as the set


 * $$\begin{array}{lccccc} R_2 = \{ & (1,1), & (1,2), & (1,3), & (1,4), & (1,5), \\ & (1,6), & (2,2), & (2,4), & (2,6), & (3,3), \\ & (3,6), & (4,4), & (5,5), & (6,6) & \}; \end{array}$$

again, this set has one element for each marked entry in the right table.
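Both example relations can be generated rather than listed. A sketch, assuming a standard die whose opposite faces sum to 7 (so 1-6, 2-5, and 3-4 are opposite, and two distinct faces are adjacent exactly when they are not opposite):

```python
# The two example relations on the die faces {1,...,6} as sets of pairs:
# "is adjacent to" (distinct and not opposite, opposites summing to 7)
# and "divides".
faces = range(1, 7)

adjacent = {(x, y) for x in faces for y in faces if x != y and x + y != 7}
divides = {(x, y) for x in faces for y in faces if y % x == 0}

print((5, 6) in adjacent, (3, 4) in adjacent)  # True False
print((3, 6) in divides, (6, 3) in divides)    # True False
print(len(adjacent), len(divides))             # 24 14
```

The sizes match the tables: 24 adjacent pairs, and 14 divisibility pairs (note that 1 divides every face value, so all pairs (1, y) belong to "divides").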

Equation

 * Well, Variety (universal algebra) links to identity (mathematics), which gives a fairly usable definition in its first sentence, viz. "An identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain range of validity". I'd formalize this as: "An identity is a formula of the form ∀x1,...,xn. s = t, where s and t are terms with no other free variables than x1,...,xn". An example identity set is { ∀x,y,z. x*(y*z)=(x*y)*z, ∀x. x*1=x } for the theory of monoids. The quantifier prefix is often omitted, as in { x*(y*z)=(x*y)*z, x*1=x } in the example.
 * In English Wikipedia, at least the articles "identity (mathematics)", "equality (mathematics)", and "equation" deal with this issue; they mainly differ by the purpose a formula is used for. "Identity" and "equality" (I believe there is no difference between these concepts) take the formula as an assertion from which other formulas may be inferred, while "equation" takes the "s=t" part as something to be solved (like $$x^2-2x+1=0$$); more formally, a constructive proof is sought for ∃x1,...,xn. s = t. However, none of the three articles mentions quantification at all.

Main subsection
Main text blah blah.

Next subsection
Text in next subsection.

Set constraint
Different approaches admit different operators on sets, cf. table.

U: Universe, X: Variable, Ei: sub-expression

Markup / template examples

 * A link to the lead of this page


 * Reference within note: Chaplin believed he was born in South London.


 * Reference within reference: linear indexed grammars, By requiring ...