Bornat [2009], “Separation logic and concurrency”
Programming is a combination of mechanical computation and formal logic. As a rule, programmers use each new formalism to widen the range of expressible programs rather than to improve correctness (on the balance between expressiveness and consistency see also, e.g., this tweet). Multithreaded programming is the most difficult kind of programming, and here, as always, computer science theory both lags behind practice and runs ahead of it.
The fundamental problem of concurrency is that synchronizing processes that exchange information is itself an exchange of information, i.e. we are somehow trapped inside this game. Dijkstra proposed using synchronization primitives to structure concurrent programs, and the correctness problem was split in two: the correctness of what happens under successful synchronization (usually formulated as safety) and the correctness of the synchronization itself (often expressed as liveness properties).
Concurrent separation logic (CSL) is classically concerned with safety, i.e. the partial correctness of a multithreaded program with respect to its specification and consistency model. Its main predecessor is the Owicki-Gries method of the 1970s - an extension of Hoare’s axiomatic logic to programs with parallel composition and synchronization primitives, based on non-interference proofs. The OG method popularized the ideas of resource invariants and auxiliary (ghost) variables. In the 1980s Cliff Jones refined the OG method into rely-guarantee by adding two conditions of those names to the pre- and postconditions. One weakness of both methods is dealing with aliasing, i.e. the fact that the same memory cell can be referred to by different symbolic expressions. Separation logic - a parallel branch in the development of Hoare logic - was invented to solve this problem: its separating conjunction splits the heap into disjoint parts, so the cells described on either side are guaranteed to have distinct addresses. This makes possible the so-called frame rule (the terminology, by the way, is borrowed from AI), which lets one discard “irrelevant” assertions, i.e. work only with the state that is directly affected by the part of the program under consideration.
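For reference, the frame rule can be stated roughly as follows; the standard side condition is that the command C modifies no variable occurring free in the frame R:

```latex
\[
\frac{\{P\}\ C\ \{Q\}}
     {\{P * R\}\ C\ \{Q * R\}}
\qquad \text{where } \mathrm{mod}(C) \cap \mathrm{fv}(R) = \emptyset
\]
```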
Like the OG method, CSL adds parallel composition and critical sections of the form with r when B do C (for which a lower-level implementation is implied). Here r is a so-called resource, and for each resource a separate invariant is introduced into the logic (also an OG idea). The paper shows an example proof for programs with semaphores, then moves on to the permissions mechanism needed to keep track of read-only auxiliary variables (there are alternatives to this mechanism - subjective auxiliary state in FCSL and ghost objects/higher-order ghost code in Iris). Next, a proof of a readers-writer lock on semaphores (prohibiting races between reads and writes) is demonstrated, using the ghost-ticket trick.
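The key proof rule for such critical sections lets the body borrow the resource invariant for the duration of the section; roughly (writing RI_r for the invariant attached to r, and eliding the side conditions on variables):

```latex
\[
\frac{\{(P * RI_r) \land B\}\ C\ \{Q * RI_r\}}
     {\{P\}\ \mathbf{with}\ r\ \mathbf{when}\ B\ \mathbf{do}\ C\ \{Q\}}
\]
```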
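For concreteness, a readers-writer lock on semaphores is, up to notation, the classic two-semaphore construction sketched below (a minimal sketch with my own names, not necessarily the paper’s exact program): the first reader to arrive locks writers out, the last one to leave lets them back in. In the proof, roughly, the invariant of the write semaphore holds either the full write permission or a ghost “ticket” recording how many read permissions have been handed out.

```c
#include <semaphore.h>

/* A minimal sketch of a readers-writer lock on two binary semaphores.
 * `m` protects the reader count; `w` is held by the writer or, collectively,
 * by the readers. Both are assumed to be initialised to 1. */

static sem_t m, w;
static int readers = 0;   /* number of readers currently inside */

void reader_acquire(void) {
    sem_wait(&m);
    if (++readers == 1)   /* first reader locks writers out */
        sem_wait(&w);
    sem_post(&m);
}

void reader_release(void) {
    sem_wait(&m);
    if (--readers == 0)   /* last reader lets writers back in */
        sem_post(&w);
    sem_post(&m);
}

void writer_acquire(void) { sem_wait(&w); }

void writer_release(void) { sem_post(&w); }
```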
The last section is devoted to a proof of Hugo Simpson’s non-blocking four-slot algorithm, which implements a wait-free single-reader single-writer atomic register (sketched below). The proof is done “à la model checking”, with an explicit enumeration of states in the invariant. It would be nice to have an algorithmic theory of resource that we could use in compilers, the article says, and it ends on an optimistic note about the coming incorporation of rely-guarantee and temporal-logic ideas into CSL.
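To make the object of that proof concrete, here is a minimal sketch of the four-slot algorithm (names and the payload type are my own; the real algorithm also assumes that reads and writes of the single-bit control variables are atomic, which `volatile` below merely gestures at and does not actually guarantee on modern hardware):

```c
/* Sketch of Simpson's four-slot algorithm: one writer, one reader,
 * neither ever blocks, and the reader always obtains a coherent value. */

typedef int T;                     /* assumed payload type */

static T            data[2][2];   /* four slots: two pairs of two */
static volatile int slot[2];      /* last slot written in each pair */
static volatile int latest;       /* pair most recently written */
static volatile int reading;      /* pair the reader announced it is using */

/* called only by the single writer */
void fourslot_write(T item) {
    int pair  = !reading;          /* avoid the pair the reader announced */
    int index = !slot[pair];       /* avoid the slot a read of that pair would pick */
    data[pair][index] = item;
    slot[pair] = index;            /* publish the slot ...  */
    latest = pair;                 /* ... and then the pair */
}

/* called only by the single reader */
T fourslot_read(void) {
    int pair = latest;             /* pick the freshest pair */
    reading = pair;                /* tell the writer where we are */
    int index = slot[pair];
    return data[pair][index];
}
```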