Chen (2022), “A Hoare logic style refinement types formalisation”
This time I’ve used Idris as the porting target; the result is reasonably close to the Agda original, except that the eta-laws for tuples refuse to work in Idris.
Refinement types, understood as subsets, i.e., pairs (sigmas) of a type and a predicate on it, {t ∶ U ∣ P t}, are popular in automated proof systems, but they require telescopic function signatures and a mechanism for handling subtyping (weakening, strengthening, or otherwise transforming a refinement to fit the use context). The paper presents an Agda-formalized correspondence between refinement types and Floyd–Hoare logic, mapping pre- and postconditions onto input and output types of functions. Only a simplified lambda calculus with primitive booleans, numbers, and first-order functions is considered. The context in such a system decomposes into a list of typed variables and a list of predicates indexed by them (corresponding to a precondition); type checking amounts to inferring the weakest precondition.
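As a reminder of the subset reading, here is a minimal Lean sketch (my own illustration, not the paper’s Agda development):

```lean
-- A refinement type as a subset: a Σ-pair of a value and a proof of the predicate.
def Refine (U : Type) (P : U → Prop) : Type := { t : U // P t }

-- Example: positive naturals as a refinement of Nat.
def Pos : Type := Refine Nat (fun n => 0 < n)

-- An inhabitant carries both the value and the evidence.
def two : Pos := ⟨2, by decide⟩
```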
Using the Hoare style does away with telescopic rules and simplifies the formalization of the system. Predicates in the paper are handled manually, by interpreting base types into Agda types, i.e., we’re faced with a kind of semi-shallow encoding. Separating predicates from the context also helps in working with additional equations that arise as the program is analyzed and do not depend on its variables.
The formalization consists of four calculi: 𝜆B (lambda calculus with first-order functions), 𝜆R (its refinement counterpart), 𝜆A, and 𝜆C. The key point in 𝜆R is the subtyping rule (strengthening the postcondition), which requires exhibiting the corresponding implication (a rough analog of the consequence rule in Hoare logic). The symmetric rule for weakening the precondition is derivable within the system. The chosen encoding is also the reason higher-order functions are absent from the paper: to encode predicates on them, one would have to model a full-fledged set theory in the spirit of the semantic subtyping approach (I wonder if one could do away with homotopy sets?). It’s also worth noting that 𝜆R is encoded in an inductive-recursive fashion, via a combination of rules and an erasure function to 𝜆B that removes predicates and subtyping nodes.
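The erasure can be pictured on a hypothetical mini-AST (a Python sketch of the idea, not the paper’s Agda code): subtyping nodes collapse to their subject term, refinement annotations are dropped, and everything else is traversed congruently.

```python
# Terms are tagged tuples, e.g. ("sub", proof, t) for a subtyping node
# (carrying the implication proof) and ("ann", predicate, t) for a
# refinement annotation; both constructors are made up for illustration.

def erase(term):
    """Erase a λR-style term down to its λB-style skeleton."""
    tag = term[0]
    if tag in ("sub", "ann"):       # drop the proof/predicate component,
        return erase(term[2])       # keep only the underlying term
    if tag in ("lit", "var"):       # leaves are unchanged
        return term
    return (tag, *map(erase, term[1:]))  # congruence: erase all subterms

# A subtyping node wrapping an annotated variable erases to the bare variable.
print(erase(("sub", "proof", ("ann", "P", ("var", "x")))))  # → ('var', 'x')
```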
The soundness and completeness of 𝜆R are proved semantically. Its type checker (essentially predicate derivation) is formulated via computation of the weakest precondition, obtained by substituting the term’s interpretation into the postcondition. The problem with such a type checker is that it does not work for functions. To fix this, a new pair of calculi is introduced: 𝜆A (differing from 𝜆B only in requiring refinement signatures for functions) and 𝜆C (the corresponding refinements), in which the rules for LET and APP are changed and reduction of predicates on functions is blocked. In such a calculus, type checking is no longer possible by mere substitution: the computation of the weakest precondition decomposes into computing preconditions for everything but functions and validating function annotations. Monotonicity conditions on predicates are required for the soundness and completeness proofs here.
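The “substitution” reading of the type checker can be sketched in Python (my own toy model of a first-order language, not the paper’s code): the weakest precondition of a term against a postcondition is just the postcondition composed with the term’s interpretation.

```python
def interp(term, env):
    """Evaluate a term of a tiny first-order language in an environment."""
    tag = term[0]
    if tag == "lit":
        return term[1]
    if tag == "var":
        return env[term[1]]
    if tag == "add":
        return interp(term[1], env) + interp(term[2], env)
    if tag == "let":                       # ("let", x, e1, e2)
        _, x, e1, e2 = term
        return interp(e2, {**env, x: interp(e1, env)})
    raise ValueError(f"unknown tag: {tag}")

def wp(term, post):
    """Weakest precondition: substitute the term's interpretation into the
    postcondition, yielding a predicate on environments."""
    return lambda env: post(interp(term, env))

# {?} x + 1 {result > 1}  — the computed precondition is exactly x > 0.
pre = wp(("add", ("var", "x"), ("lit", 1)), lambda r: r > 1)
print(pre({"x": 1}), pre({"x": 0}))  # → True False
```

This is precisely what breaks for functions: a function-typed term has no first-order value to plug into the postcondition, which is what motivates the annotated calculi 𝜆A/𝜆C.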
The author names the addition of full higher-order functions and general recursion as future work, along with exploring connections to incorrectness logic.