Weißenbacher (2010), “Program Analysis with Interpolants”
This high-quality dissertation surveys the state of the art in model checking as of 2010.
In short, the idea of model checking is this: we want to verify programs automatically, so we approximate them with models (automata or transition systems with finitely many states), write down a specification (usually in some flavor of propositional temporal logic), and then, using search algorithms and heuristics, exhaustively explore the model's states, checking that the specification holds in every reachable state.
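To make this concrete, here is a minimal sketch of explicit-state model checking in Python: a breadth-first search over a finite transition system that checks a safety property (an invariant) in every reachable state and returns a counterexample trace on failure. The toy system (a counter modulo 4) and the property are illustrative assumptions, not drawn from the dissertation; real checkers handle richer temporal logics and vastly larger state spaces.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Return None if `invariant` holds in every reachable state,
    otherwise a counterexample trace from an initial state."""
    visited = set(initial)
    queue = deque((s, [s]) for s in initial)
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace  # BFS yields a shortest path to a violating state
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None

# Toy model: a counter modulo 4; claimed invariant: the counter never hits 3.
print(check_invariant(
    initial=[0],
    successors=lambda s: [(s + 1) % 4],
    invariant=lambda s: s != 3,
))  # -> [0, 1, 2, 3], a counterexample trace
```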
Conceptually, model checking belongs to model theory (one of the main branches of logic, alongside proof theory, on which type theory and proof assistants are built). Interestingly, the dominant paradigm in model checking seems to change about once a decade. Its timeline looks roughly like this:
- 1980s - The birth of model checking itself, in the work of Edmund Clarke and E. Allen Emerson on computing fixed points of predicate transformers; binary decision diagrams (BDDs) for compact state representation
- 1990s - Further state-space reduction through partial-order reduction; the emergence of predicate abstraction and CEGAR (counterexample-guided abstraction refinement), methods for automatically constructing models from a set of predicates over program states
- 2000s - The SAT/SMT revolution (https://www.cs.rice.edu/~vardi/papers/highlights15.pdf) and the move away from BDDs; fast over-approximation via Craig interpolation (a worked example follows this list)
- 2010s - Aaron Bradley introduces IC3/PDR (property-directed reachability), a family of algorithms in which the construction of an invariant alternates and interacts with the construction of a counterexample, each pruning the other's search space
- 2020s - Excitement around machine learning techniques
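Since interpolants are the dissertation's central tool, a tiny worked example may help. Given an unsatisfiable conjunction A ∧ B, a Craig interpolant is a formula I over the variables shared by A and B such that A implies I and I ∧ B is unsatisfiable; in model checking, I serves as a cheap over-approximation of the states reachable under A that still excludes the error states described by B. The sketch below checks, by brute-force enumeration, that I = q is an interpolant for the hypothetical pair A = p ∧ q, B = ¬q ∧ r (these formulas are illustrative, not from the text).

```python
from itertools import product

A = lambda p, q, r: p and q          # vars(A) = {p, q}
B = lambda p, q, r: (not q) and r    # vars(B) = {q, r}
I = lambda p, q, r: q                # mentions only q, the shared variable

assignments = list(product([False, True], repeat=3))
assert not any(A(*v) and B(*v) for v in assignments)  # A ∧ B is unsatisfiable
assert all(I(*v) for v in assignments if A(*v))       # A implies I
assert not any(I(*v) and B(*v) for v in assignments)  # I ∧ B is unsatisfiable
print("I = q is a Craig interpolant for (A, B)")
```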
The first three decades and their main ideas are covered in the first two and a half chapters of the dissertation (the second half of the third chapter and the fourth chapter are more technical).