Interference freedom

In computer science, interference freedom is a technique for proving partial correctness of concurrent programs with shared variables. Hoare logic had been introduced earlier to prove correctness of sequential programs. In her PhD thesis (and papers arising from it) under advisor David Gries, Susan Owicki extended this work to apply to concurrent programs.

Concurrent programming had been in use since the mid 1960s for coding operating systems as sets of concurrent processes (see, in particular, Dijkstra), but there was no formal mechanism for proving correctness. Reasoning about interleaved execution sequences of the individual processes was difficult and error prone, and it did not scale up. Interference freedom applies to proofs instead of execution sequences: one shows that execution of one process cannot interfere with the correctness proof of another process.

A range of intricate concurrent programs have been proved correct using interference freedom, and interference freedom provides the basis for much of the ensuing work on developing concurrent programs with shared variables and proving them correct. The Owicki-Gries paper An axiomatic proof technique for parallel programs I received the 1977 ACM Award for best paper in programming languages and systems.

Note. Lamport presents a similar idea. He writes, "After writing the initial version of this paper, we learned of the recent work of Owicki." His paper has not received as much attention as Owicki-Gries, perhaps because it used flow charts instead of the text of programming constructs like the if statement and while loop. Lamport was generalizing Floyd's method while Owicki-Gries was generalizing Hoare's method. Essentially all later work in this area uses text and not flow charts. Another difference is mentioned below in the section on Auxiliary variables.

Dijkstra's Principle of non-interference
Edsger W. Dijkstra introduced the principle of non-interference in EWD 117, "Programming Considered as a Human Activity", written about 1965. This principle states that the correctness of the whole can be established by taking into account only the exterior specifications (abbreviated specs throughout) of the parts, and not their interior construction. Dijkstra outlined the general steps in using this principle:


 * 1) Give a complete spec of each individual part.
 * 2) Check that the total problem is solved when program parts meeting their specs are available.
 * 3) Construct the individual parts to satisfy their specs, but independent of one another and the context in which they will be used.

He gave several examples of this principle outside of programming. But its use in programming is a main concern. For example, a programmer using a method (subroutine, function, etc.) should rely only on its spec to determine what it does and how to call it, and never on its implementation.

Program specs are written in Hoare logic, introduced by Sir Tony Hoare, as exemplified in the specs of processes $S1$ and $S2$:

${pre-S1}$ $S1$ ${post-S1}$        ${pre-S2}$ $S2$ ${post-S2}$

Meaning: If execution of $Si$ in a state in which precondition $pre-Si$ is true terminates, then upon termination, postcondition $post-Si$ is true.

Now consider concurrent programming with shared variables. The specs of two (or more) processes $S1$ and $S2$ are given in terms of their pre- and post-conditions, and we assume that implementations of $S1$ and $S2$ are given that satisfy their specs. But when executing their implementations in parallel, since they share variables, a race condition can occur; one process changes a shared variable to a value that is not anticipated in the proof of the other process, so the other process does not work as intended.

Thus, Dijkstra's Principle of non-interference is violated.
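The lost-update race described above can be made concrete with a small Python sketch (not from Owicki-Gries; the `run` function and its schedule encoding are illustrative assumptions). Each process executes $x := x + 1$ as a read into a private register followed by a write, and the interleaving of those steps determines the result:

```python
# A deterministic simulation of the race: each process runs "x := x + 1",
# compiled as a read of shared x into a local register, then a write.
def run(schedule):
    x = 0
    regs = {}
    for proc, op in schedule:
        if op == "read":
            regs[proc] = x          # load shared x into this process's register
        else:
            x = regs[proc] + 1      # store register + 1 back into shared x
    return x

# Sequential interleaving: both increments take effect.
seq = run([("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")])
# Racy interleaving: both processes read 0, so one increment is lost.
racy = run([("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")])
```

The racy schedule yields a final value the proof of either process alone would rule out, which is exactly the interference the Owicki-Gries method checks for.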

In her PhD thesis of 1975 in Computer Science, Cornell University, written under advisor David Gries, Susan Owicki developed the notion of interference freedom. If processes $S1$ and $S2$ satisfy interference freedom, then their parallel execution will work as planned. Dijkstra called this work the first significant step toward applying Hoare logic to concurrent processes. To simplify discussions, we restrict attention to only two concurrent processes, although Owicki-Gries allows more.

Interference freedom in terms of proof outlines
Owicki-Gries introduced the proof outline for a Hoare triple ${P}S{Q}$. It contains all details needed for a proof of correctness of ${P}S{Q}$ using the axioms and inference rules of Hoare logic. (This work uses the assignment statement $x := e$, $if$-$then$ and $if$-$then$-$else$ statements, and the $while$ loop.) Hoare alluded to proof outlines in his early work; for interference freedom, the notion had to be formalized.

A proof outline for ${P}S{Q}$ begins with precondition $P$ and ends with postcondition $Q$. Two assertions within braces { and } appearing next to each other indicates that the first must imply the second.

Example: A proof outline for ${P} S {Q}$ where $S$ is:

$x := a; if e then S1 else S2$

${P}$

${P1[x/a]}$

$x := a;$

${P1}$

$if e then {P1 ∧ e}$

$S1$

${Q1}$

$else {P1 ∧ ¬e}$

$S2$

${Q1}$

${Q1}$

${Q}$

$P ⇒ P1[x/a]$ must hold, where $P1[x/a]$ stands for $P1$ with every occurrence of $x$ replaced by $a$. (In this example, $S1$ and $S2$ are basic statements, like an assignment statement, skip, or an await statement.)

Each statement $T$ in the proof outline is preceded by a precondition $pre-T$ and followed by a postcondition $post-T$, and ${pre-T}T{post-T}$ must be provable using some axiom or inference rule of Hoare logic. Thus, the proof outline contains all the information necessary to prove that ${P}S{Q}$ is correct.

Now consider two processes $S1$ and $S2$ executing in parallel, and their specs:

${pre-S1}$ $S1$ ${post-S1}$        ${pre-S2}$ $S2$ ${post-S2}$

Proving that they work suitably in parallel will require restricting them as follows. Each expression $E$ in $S1$ or $S2$ may refer to at most one variable $y$ that can be changed by the other process while $E$ is being evaluated, and $E$ may refer to $y$ at most once. A similar restriction holds for assignment statements $x := E$.

With this convention, the only indivisible action need be the memory reference. For example, suppose process $S1$ references variable $y$ while $S2$ changes $y$. The value $S1$ receives for $y$ must be the value before or after $S2$ changes $y$, and not some spurious in-between value.
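The need for the at-most-once restriction can be seen by enumerating where a concurrent write can land relative to the two reads of $y$ in $x := y + y$. The sketch below (an illustration, not the paper's notation; `possible_sums` is a name invented here) shows that one interleaving yields a value that $2y$ could never equal for any single value of $y$:

```python
# Enumerate where a concurrent write "y := 5" can land relative to the two
# reads of y in "x := y + y", starting from y = 1.
def possible_sums(initial=1, new=5):
    results = set()
    for write_point in range(3):    # write before read 1, between reads, or after read 2
        y = initial
        reads = []
        for r in range(2):
            if write_point == r:    # the other process's write lands here
                y = new
            reads.append(y)
        results.add(reads[0] + reads[1])
    return results
```

The outcome 6 = 1 + 5 is a spurious in-between value: it equals neither 2·1 nor 2·5. With at most one reference to $y$ per expression, only the before and after values can be observed.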

Definition of Interference-free

The important innovation of Owicki-Gries was to define what it means for a statement $T$ not to interfere with the proof of ${P}S{Q}$. If execution of $T$ cannot falsify any assertion given in the proof outline of ${P}S{Q}$, then that proof still holds even in the face of concurrent execution of $S$ and $T$.

Definition. Statement $T$ with precondition $pre-T$ does not interfere with the proof of ${P}S{Q}$ if two conditions hold:

(1) ${Q ∧ pre-T} T {Q}$

(2) Let $S'$ be any statement within $S$ but not within an $await$ statement (see later section). Then ${pre-S' ∧ pre-T} T {pre-S'}$.

Read the last Hoare triple like this: If the state is such that both $T$ and $S'$ can be executed, then execution of $T$ is not going to falsify $pre-S'$.

Definition. Proof outlines for ${P1}S1{Q1}$ and ${P2}S2{Q2}$ are interference-free if the following holds. Let $T$ be an $await$ or assignment statement (that does not appear in an $await$) of process $S1$. Then $T$ does not interfere with the proof of ${P2}S2{Q2}$. Similarly for $T$ of process $S2$ and ${P1}S1{Q1}$.

Statements cobegin and await
Two statements were introduced to deal with concurrency. Execution of the statement $cobegin S1 // S2 coend$ executes $S1$ and $S2$ in parallel. It terminates when both $S1$ and $S2$ have terminated.

Execution of the $await$ statement $await B then S$ is delayed until condition $B$ is true. Then, statement $S$ is executed as an indivisible action—evaluation of $B$ is part of that indivisible action. If two processes are waiting for the same condition $B$, when it becomes true, one of them continues waiting while the other proceeds.

The $await$ statement cannot be implemented efficiently and is not proposed as an addition to the programming language. Rather, it provides a means of representing several standard primitives such as semaphores: first express the semaphore operations as $await$s, then apply the techniques described here.
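One common way to model $await\ B\ then\ S$ on real hardware is a condition variable; the following Python sketch is one such encoding (the class `AwaitRegion` and method `await_then` are names invented here, not part of Owicki-Gries):

```python
import threading

class AwaitRegion:
    """Sketch of 'await B then S' using a condition variable."""
    def __init__(self):
        self._cond = threading.Condition()

    def await_then(self, B, S):
        with self._cond:
            while not B():              # execution is delayed until B holds
                self._cond.wait()
            result = S()                # S runs as one indivisible action
            self._cond.notify_all()     # S may have made another process's B true
            return result

# Usage: one thread awaits a flag that another thread sets.
region = AwaitRegion()
state = {"ready": False, "x": 0}

def waiter():
    region.await_then(lambda: state["ready"],
                      lambda: state.__setitem__("x", 1))

t = threading.Thread(target=waiter)
t.start()
region.await_then(lambda: True, lambda: state.__setitem__("ready", True))
t.join()
```

Because both the test of $B$ and the body $S$ run while holding the lock, no other process can interleave between them, matching the indivisibility requirement above.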

Inference rules for $await$ and $cobegin$ are:

$await$

${P ∧ B} S {Q}⁄{P} await B then S {Q}$

$cobegin$

${P1} S1 {Q1}, {P2} S2 {Q2} interference-free⁄{P1 ∧ P2} cobegin S1//S2 coend {Q1 ∧ Q2}$

Auxiliary variables
An auxiliary variable does not occur in the program but is introduced in the proof of correctness to make reasoning simpler, or even possible. Auxiliary variables are used only in assignments to auxiliary variables, so their introduction neither alters the program for any input nor affects the values of program variables. Typically, they are used either as program counters or to record histories of a computation.

Definition. Let $AV$ be a set of variables that appear in $S'$ only in assignments $x := E$, where $x$ is in $AV$. Then $AV$ is an auxiliary variable set for $S'$.

Since a set $AV$ of auxiliary variables is used only in assignments to variables in $AV$, deleting all assignments to them does not change the program's correctness, and we have the inference rule $AV$ elimination:

${P} S' {Q}⁄{P} S {Q}$

$AV$ is an auxiliary variable set for $S'$. The variables in $AV$ do not occur in $P$ or $Q$. $S$ is obtained from $S'$ by deleting all assignments to the variables in $AV$.

Instead of using auxiliary variables, one can introduce a program counter into the proof system, but that adds complexity to the proof system.

Note: Apt discusses the Owicki-Gries logic in the context of recursive assertions, that is, effectively computable assertions. He proves that all the assertions in proof outlines can be recursive, but that this is no longer the case if auxiliary variables are used only as program counters and not to record histories of computation. Lamport, in his similar work, uses assertions about token positions instead of auxiliary variables, where a token on an edge of a flow chart is akin to a program counter. There is no notion of a history variable. This indicates that the Owicki-Gries and Lamport approaches are not equivalent when restricted to recursive assertions.

Deadlock and termination
Owicki-Gries deals mainly with partial correctness: ${P} S {Q}$ means: If $S$ executed in a state in which $P$ is true terminates, then $Q$ is true of the state upon termination. However, Owicki-Gries also gives some practical techniques that use information obtained from a partial correctness proof to derive other correctness properties, including freedom from deadlock, program termination, and mutual exclusion.

A program is in deadlock if all processes that have not terminated are executing $await$ statements and none can proceed because their $await$ conditions are false. Owicki-Gries provides conditions under which deadlock cannot occur.

Owicki-Gries presents an inference rule for total correctness of the while loop. It uses a bound function that decreases with each iteration and remains positive as long as the loop condition is true. Apt et al. show that this new inference rule does not satisfy interference freedom: the fact that the bound function is positive as long as the loop condition is true was not included in the interference test. They show two ways to rectify this mistake.

A simple example
Consider the statement: ${x=0}$   $cobegin await true then x := x+1$    // $await true then x := x+2$    $coend$    ${x=3}$

The proof outline for it:

${x=0}$

$S: cobegin$

${x=0}$

${x=0 ∨ x=2}$

$S1: await true then x := x+1$

${Q1: x=1 ∨ x=3}$

//

${x=0}$

${x=0 ∨ x=1}$

$S2: await true then x := x+2$

${Q2: x=2 ∨ x=3}$

$coend$

${(x=1 ∨ x=3) ∧ (x=2 ∨ x=3)}$

${x=3}$

Proving that $S1$ does not interfere with the proof of $S2$ requires proving two Hoare triples:

(1) ${(x=0 ∨ x=2) ∧ (x=0 ∨ x=1)} S1 {x=0 ∨ x=1}$   (2) ${(x=0 ∨ x=2) ∧ (x=2 ∨ x=3)} S1 {x=2 ∨ x=3}$

The precondition of (1) reduces to $x=0$ and the precondition of (2) reduces to $x=2$. From this, it is easy to see that these Hoare triples hold. Two similar Hoare triples are required to show that $S2$ does not interfere with the proof of $S1$.
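Since the assertions here involve only the single integer $x$, the two triples can be checked mechanically by brute force over a small range of states (an illustrative sketch; `triple_holds` is a name invented here, and $S1$ is treated as the atomic assignment $x := x + 1$):

```python
# Brute-force check of Hoare triples (1) and (2) over a small state space;
# S1 is the (atomic) assignment x := x + 1.
def triple_holds(pre, post):
    # {pre} x := x + 1 {post}: post must hold in every state reached from pre
    return all(post(x + 1) for x in range(-5, 6) if pre(x))

# (1) {(x=0 ∨ x=2) ∧ (x=0 ∨ x=1)} S1 {x=0 ∨ x=1}
ok1 = triple_holds(lambda x: x in (0, 2) and x in (0, 1),
                   lambda x: x in (0, 1))
# (2) {(x=0 ∨ x=2) ∧ (x=2 ∨ x=3)} S1 {x=2 ∨ x=3}
ok2 = triple_holds(lambda x: x in (0, 2) and x in (2, 3),
                   lambda x: x in (2, 3))
```

In (1) the precondition holds only at $x=0$, and $post(1)$ holds; in (2) it holds only at $x=2$, and $post(3)$ holds, so both triples check out.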

Suppose $S1$ is changed from the $await$ statement to the assignment $x := x+1$. Then the proof outline does not satisfy the requirements, because the assignment contains two occurrences of shared variable $x$. Indeed, the value of $x$ after execution of the $cobegin$ statement could be 2 or 3.

Suppose $S1$ is changed to the $await$ statement $await true then x := x+2$, so it is the same as $S2$. After execution of $S$, $x$ should be 4. To prove this, because the two assignments are the same, two auxiliary variables are needed: one to indicate whether $S1$ has been executed; the other, whether $S2$ has been executed. We leave the change in the proof outline to the reader.
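The guaranteed outcome $x=3$ of the original $cobegin$ can be observed with an executable sketch, using a lock in place of the $await$ statement to make each increment indivisible (an illustration; `run` and the use of `threading.Lock` are assumptions, not the paper's notation):

```python
import threading

def run():
    state = {"x": 0}
    lock = threading.Lock()            # plays the role of 'await true then ...'
    def add(k):
        with lock:                     # the whole read-modify-write is indivisible
            state["x"] = state["x"] + k
    threads = [threading.Thread(target=add, args=(k,)) for k in (1, 2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["x"]
```

Whichever thread runs first, both increments take effect, so the result is always 3, matching the postcondition of the proof outline.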

Examples of formally proved concurrent programs
A. Findpos. Write a program that finds the first positive element of an array (if there is one). One process checks all array elements at even positions of the array and terminates when it finds a positive value or when none is found. Similarly, the other process checks array elements at odd positions of the array. Thus, this example deals with while loops. It also has no $await$ statements.

This example comes from Barry K. Rosen. The solution in Owicki-Gries, complete with program, proof outline, and discussion of interference freedom, takes less than two pages. Interference freedom is quite easy to check, since there is only one shared variable. In contrast, Rosen's article uses $Findpos$ as the single running example in his 24-page paper.
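A runnable sketch of the two-process structure of $Findpos$ follows (an illustration under stated assumptions: the shared dictionary `found`, the sentinel value `n`, and the stopping condition are simplifications of the program in the paper):

```python
import threading

def findpos(a):
    """Sketch of Findpos: two threads scan even and odd positions for a
    positive element; each stops once past the best index found so far."""
    n = len(a)
    found = {"even": n, "odd": n}      # n acts as 'nothing found yet'
    def scan(start, key):
        i = start
        # stop once i is beyond every candidate answer found so far
        while i < min(found["even"], found["odd"]):
            if a[i] > 0:
                found[key] = i
                break
            i += 2
    threads = [threading.Thread(target=scan, args=(0, "even")),
               threading.Thread(target=scan, args=(1, "odd"))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    k = min(found["even"], found["odd"])
    return k if k < n else None
```

The only shared data are the two "best index so far" variables, which is why the interference-freedom check for this program is short.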

B. Bounded buffer consumer/producer problem. A producer process generates values and puts them into bounded buffer $b$ of size $N$; a consumer process removes them. They proceed at variable rates. The producer must wait if buffer $b$ is full; the consumer must wait if buffer $b$ is empty. In Owicki-Gries, a solution in a general environment is shown; it is then embedded in a program that copies an array $c[1..M]$ into an array $d[1..M]$. An outline of both processes in a general environment:

$cobegin producer: ...$

$await in - out < N then skip;$

$add: b[in mod N] := next value;$

$markin: in := in + 1;$

$...$

//

$consumer: ...$

$await in - out > 0 then skip;$

$remove: this value := b[out mod N];$

$markout: out := out + 1;$

$...$

$coend$

This example exhibits a principle for reducing interference checks to a minimum: place as much as possible in an assertion that is invariantly true everywhere in both processes. In this case the assertion is the definition of the bounded buffer and bounds on variables that indicate how many values have been added to and removed from the buffer. Besides buffer $b$ itself, two shared variables record the number $in$ of values added to the buffer and the number $out$ removed from the buffer.
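The outline above translates directly into a runnable Python sketch, with a condition variable standing in for the two $await$ statements (the function `copy_through_buffer` and the dictionary `state` are names invented here; the `in`/`out` counters follow the text):

```python
import threading

def copy_through_buffer(c, N=3):
    """Sketch of the producer/consumer outline: copy list c into d through a
    bounded buffer b of size N, using in/out counters as in the text."""
    b = [None] * N
    d = []
    state = {"in": 0, "out": 0}
    cond = threading.Condition()

    def producer():
        for v in c:
            with cond:
                while not (state["in"] - state["out"] < N):   # await in - out < N
                    cond.wait()
                b[state["in"] % N] = v                        # add
                state["in"] += 1                              # markin
                cond.notify_all()

    def consumer():
        for _ in range(len(c)):
            with cond:
                while not (state["in"] - state["out"] > 0):   # await in - out > 0
                    cond.wait()
                d.append(b[state["out"] % N])                 # remove
                state["out"] += 1                             # markout
                cond.notify_all()

    ts = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return d
```

The buffer invariant $0 ≤ in - out ≤ N$ holds at every step, which is exactly the kind of everywhere-true assertion the text recommends placing outside the interference checks.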

C. Implementing semaphores. In his article on the THE multiprogramming system, Dijkstra introduces the semaphore $sem$ as a synchronization primitive: $sem$ is an integer variable that can be referenced in only two ways, shown below; each is an indivisible operation:

1. $P(sem)$: Decrease $sem$ by 1. If now $sem < 0$, suspend the process and put it on a list of suspended processes associated with $sem$.

2. $V(sem)$: Increase $sem$ by 1. If now $sem ≤ 0$, remove one of the processes from the list of suspended processes associated with $sem$, so its dynamic progress is again permissible.

The implementation of $P$ and $V$ using $await$ statements is:

$P(sem):$

$await true then$

$begin sem := sem - 1;$

$if sem < 0 then$

$w[this process] := true$

$end;$

$await ¬w[this process] then skip$

$V(sem):$

$await true then$

$begin sem := sem + 1;$

$if sem ≤ 0 then$

$begin choose p such that w[p];$

$w[p] := false$

$end$

$end$

Here, $w$ is an array of booleans, one per process, indicating which processes are waiting because they have been suspended; initially, $w[p] = false$ for every process $p$. One could change the implementation to always waken the longest-suspended process.
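A condition variable can play the role of the $await$ statements in an executable sketch of $P$ and $V$ (the class `DijkstraSemaphore` and its wake-up counter are an illustrative encoding, not the paper's; the counter replaces the per-process array $w$):

```python
import threading

class DijkstraSemaphore:
    """Sketch of Dijkstra's P and V, with a condition variable in place of 'await'."""
    def __init__(self, value):
        self._value = value
        self._wakeups = 0               # pending wake-ups issued by V
        self._cond = threading.Condition()

    def P(self):
        with self._cond:
            self._value -= 1
            if self._value < 0:         # suspend until some V releases a process
                while self._wakeups == 0:
                    self._cond.wait()
                self._wakeups -= 1

    def V(self):
        with self._cond:
            self._value += 1
            if self._value <= 0:        # some process is suspended: release one
                self._wakeups += 1
                self._cond.notify()

# Usage: the semaphore as a mutex protecting a shared counter.
mutex = DijkstraSemaphore(1)
state = {"n": 0}
def work():
    for _ in range(1000):
        mutex.P()
        state["n"] += 1
        mutex.V()
ts = [threading.Thread(target=work) for _ in range(4)]
for t in ts:
    t.start()
for t in ts:
    t.join()
```

The explicit wake-up count ensures that each $V$ releases exactly one suspended process, mirroring the "remove one of the processes from the list" clause in the definition.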

D. On-the-fly garbage collection. At the 1975 Summer School Marktoberdorf, Dijkstra discussed an on-the-fly garbage collector as an exercise in understanding parallelism. The data structure used in a conventional implementation of LISP is a directed graph in which each node has at most two outgoing edges, either of which may be missing: an outgoing left edge and an outgoing right edge. All nodes of the graph must be reachable from a known root. Changing a node may result in unreachable nodes, which can no longer be used and are called garbage. An on-the-fly garbage collector has two processes: the program itself and a garbage collector, whose task is to identify garbage nodes and put them on a free list so that they can be used again.

Gries felt that interference freedom could be used to prove the on-the-fly garbage collector correct. With help from Dijkstra and Hoare, he was able to give a presentation at the end of the Summer School, which resulted in an article in CACM.

E. Verification of readers/writers solution with semaphores. Courtois et al. use semaphores to give two versions of the readers/writers problem, without proof. Write operations block both reads and writes, but read operations can occur in parallel. Owicki provides a proof.

F. Peterson's algorithm, a solution to the 2-process mutual exclusion problem, was published by Peterson in a 2-page article. Schneider and Andrews provide a correctness proof.

Dependencies on interference freedom
The image below, by Ilya Sergey, depicts the flow of ideas that have been implemented in logics that deal with concurrency. At the root is interference freedom. The file contains references. Below, we summarize the major advances.




 * Rely-Guarantee. 1981. Interference freedom is not compositional. Cliff Jones recovers compositionality by abstracting interference into two new predicates in a spec: a rely-condition records what interference a thread must be able to tolerate, and a guarantee-condition sets an upper bound on the interference that the thread can inflict on its sibling threads. Xu et al. observe that Rely-Guarantee is a reformulation of interference freedom; revealing the connection between these two methods, they say, offers a deep understanding of the verification of shared-variable programs.
 * CSL. 2004. Separation logic supports local reasoning, whereby specifications and proofs of a program component mention only the portion of memory used by the component. Concurrent separation logic (CSL) was originally proposed by Peter O'Hearn. We quote from his paper: "the Owicki-Gries method involves explicit checking of non-interference between program components, while our system rules out interference in an implicit way, by the nature of the way that proofs are constructed."
 * Deriving concurrent programs. 2005-2007. Feijen and van Gasteren show how to use Owicki-Gries to design concurrent programs, but the lack of a theory of progress means that designs are driven only by safety requirements. Dongol, Goldson, Mooij, and Hayes have extended this work to include a "logic of progress" based on Chandy and Misra's language Unity, molded to fit a sequential programming model. Dongol and Goldson describe their logic of progress. Goldson and Dongol show how this logic is used to improve the process of designing programs, using Dekker's algorithm for two processes as an example. Dongol and Mooij present more techniques for deriving programs, using Peterson's mutual exclusion algorithm as one example. Dongol and Mooij show how to reduce the calculational overhead in formal proofs and derivations and derive Dekker's algorithm again, leading to some new and simpler variants of the algorithm. Mooij studies calculational rules for Unity's leads-to relation. Finally, Dongol and Hayes provide a theoretical basis for and prove soundness of the process logic.
 * OGRA. 2015. Lahav and Vafeiadis strengthen the interference freedom check to produce (we quote from the abstract) "OGRA, a program logic that is sound for reasoning about programs in the release-acquire fragment of the C11 memory model." They provide several examples of its use, including an implementation of the RCU synchronization primitives.
 * Quantum programming. 2018. Ying et al. extend interference freedom to quantum programming. Difficulties they face include intertwined nondeterminism: nondeterminism involving quantum measurements and nondeterminism introduced by parallelism occurring at the same time. The authors formally verify Bravyi-Gosset-König's parallel quantum algorithm solving a linear algebra problem, giving, they say, for the first time an unconditional proof of a computational quantum advantage.
 * POG. 2020. Raad et al. present POG (Persistent Owicki-Gries), the first program logic for reasoning about non-volatile memory technologies, specifically the Intel-x86.

Texts that discuss interference freedom

 * On a Method of Multiprogramming, 1999. Van Gasteren and Feijen base the formal development of concurrent programs entirely on the idea of interference freedom.
 * On Concurrent Programming, 1997. Schneider uses interference freedom as the main tool in developing and proving concurrent programs. A connection to temporal logic is given, so arbitrary safety and liveness properties can be proved. Control predicates obviate the need for auxiliary variables for reasoning about program counters.
 * Verification of Sequential and Concurrent Programs, 1991, 2009. This first text to cover verification of structured concurrent programs, by Apt et al., has gone through several editions over several decades.
 * Concurrency Verification: Introduction to Compositional and Non-Compositional Methods, 2001. De Roever et al. provide a systematic and comprehensive introduction to compositional and non-compositional proof methods for the state-based verification of concurrent programs.

Implementations of interference freedom

 * 1999: Nipkow and Nieto present the first formalization of interference freedom and its compositional version, the rely-guarantee method, in a theorem prover: Isabelle/HOL.
 * 2005: Ábrahám's PhD thesis provides a way to prove multithreaded Java programs correct in three steps: (1) Annotate the program to produce a proof outline, (2) Use their tool Verger to automatically create verification conditions, and (3) Use the theorem prover PVS to prove the verification conditions interactively.
 * 2017: Denissen reports on an implementation of Owicki-Gries in the "verification ready" programming language Dafny. Denissen remarks on the ease of use of Dafny and his extension to it, making it extremely suitable for teaching students about interference freedom. Its simplicity and intuitiveness outweigh the drawback of being non-compositional. He lists some twenty institutions that teach interference freedom.
 * 2017: Amani et al combine the approaches of Hoare-Parallel, a formalisation of Owicki-Gries in Isabelle/HOL for a simple while-language, and SIMPL, a generic language embedded in Isabelle/HOL, to allow formal reasoning on C programs.
 * 2022: Dalvandi et al introduce the first deductive verification environment in Isabelle/HOL for C11-like weak memory programs, building on Nipkow and Nieto's encoding of Owicki–Gries in the Isabelle theorem prover.
 * 2022: This webpage describes the Civl verifier for concurrent programs and gives instructions for installing it on your computer. It is built on top of Boogie, a verifier for sequential programs. Kragl et al describe how interference freedom is achieved in Civl using their new specification idiom, yield invariants. One can also use specs in the rely-guarantee style. Civl offers a combination of linear typing and logic that allows economical and local reasoning about disjointness (like separation logic). Civl is the first system that offers refinement reasoning on structured concurrent programs.
 * 2022. Esen and Rümmer developed TRICERA, an automated open-source verification tool for C programs. It is based on the concept of constrained Horn clauses, and it handles programs operating on the heap using a theory of heaps. A web interface to try it online is available. To handle concurrency, TRICERA uses a variant of the Owicki-Gries proof rules, with explicit variables added to represent time and clocks.