**Or: Generating experimental evidence for propositions-as-types**

S. C. Steenkamp, October 2020

This is a Part II or Part III/MPhil project suggestion from S. C. Steenkamp, a PhD student in the theory group. This project idea may be suitable for a student with an interest in functional programming, type theory, category theory, or related topics. Please get in touch with me at s.c.{lastname}@cl.cam.ac.uk if you are interested or have any questions!

Dependently-typed programming languages and proof assistants, such as Idris and Agda, can express equality propositions through the type system. For example, the proposition $∀ xs ∈ \mathsf{List}\ .\ \mathsf{reverse}(\mathsf{reverse}(xs)) = xs$ is a type, $(xs : \mathsf{List}) \to \mathsf{reverse}\;(\mathsf{reverse}\;xs) ≡ xs$, and its terms are functions taking lists to proofs that reversing a list twice yields the original list. Most programming languages, however, do not have dependent types and thus cannot express such properties in the type system. QuickCheck is an automatic property-based testing framework originally written for Haskell (though re-implemented in many other languages) that provides a more abstract, and potentially more thorough, method of testing than unit tests, which are based on hand-chosen examples. (In practice the effectiveness of QuickCheck depends on the quality of the properties chosen, just as the effectiveness of unit tests depends on the quality of the examples chosen.) **The idea for this project comes from the observation that the majority of QuickCheck properties are equalities**, and while Haskell cannot express equalities through the type system, **languages like Idris and Agda tend to have propositional equalities all over the place. Since these dependently-typed languages already have many examples of equality propositions, why not run QuickCheck on them?** This does not, of course, carry the certainty of a proof; however, when programming one might not want to go to the effort of a proof and yet still gain the extra assurance, provided by QuickCheck, that the propositional equality enforced through the type system holds for many examples.
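To make the idea concrete, here is a minimal sketch of property-based testing of the double-reverse property, written in plain Python with stdlib random sampling standing in for QuickCheck's generators (the helper names `check_property` and `random_list` are invented for illustration, not part of any real library):

```python
import random

def reverse(xs):
    return xs[::-1]

def check_property(prop, gen, trials=1000):
    """Run `prop` on `trials` randomly generated inputs.
    Return a counterexample if one is found, else None."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return x  # counterexample
    return None  # no counterexample found in `trials` samples

def random_list():
    """Generate a random list of small integers (a stand-in for Arbitrary)."""
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]

# ∀ xs . reverse (reverse xs) ≡ xs
assert check_property(lambda xs: reverse(reverse(xs)) == xs, random_list) is None
```

A real QuickCheck implementation additionally *shrinks* a failing input to a minimal counterexample, which this sketch omits.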

Unlike most programming languages, dependently typed languages and proof assistants are able to express any proposition through the propositions-as-types paradigm (the Curry-Howard isomorphism). However, some propositions might be infeasible to prove, yet still worth testing and using in a program. For example, the Collatz conjecture is a famously unsolved problem in number theory (and there are Collatz-like problems which are undecidable [Michel (2004) Small Turing machines and generalized busy beaver competition]); nevertheless, it can easily be tested using QuickCheck. That way we can at least get some “experimental evidence” that our propositions-as-types are true!
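For instance, the Collatz conjecture (repeatedly halving even numbers and mapping odd $n$ to $3n + 1$ always reaches $1$) can be given experimental evidence in a few lines. This is a plain-Python stand-in for what a QuickCheck property would express; the step limit is an arbitrary safety bound, not part of the conjecture:

```python
import random

def collatz_steps(n, limit=10_000):
    """Return the number of Collatz steps to reach 1, or None if `limit` is exceeded."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > limit:
            return None
    return steps

# "Quick-check" the conjecture on randomly sampled positive integers.
for _ in range(1000):
    n = random.randint(1, 10**6)
    assert collatz_steps(n) is not None
```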

I would like to thank James Wood for suggesting this idea during discussions we had at the Midlands Graduate School 2019, during which John Hughes taught a course on property-based testing.

More details follow in this section, which could give some inspiration for a project proposal and/or an introduction to a dissertation.

Much of the development of programming languages and language design in the past couple of decades has, in my view, followed a trend of increasingly expressive and powerful type systems, which in turn provide much greater *type safety* guarantees about a program, *memory safety* being a particularly notable focus. This can be seen in examples such as C#, Swift, Go, and Rust, all of which have been designed to avoid memory errors common in the code of their predecessors.

We can view the development of QuickCheck as following a somewhat similar progression, albeit for software testing practices rather than type systems: QuickCheck moves beyond hand-picked *examples* in unit tests, towards hand-picked *properties* in automated property tests, where each property will be tested against orders of magnitude more examples.

Here are some good introductory tutorials on QuickCheck; if you are unfamiliar with it, have a read of one of them:

- The Design and Use of QuickCheck
- An introduction to QuickCheck by example: Number theory and Okasaki's red-black trees
- An introduction to QuickCheck testing
- QuickCheck and Magic of Testing

A major reason why QuickCheck was first implemented in Haskell rather than another programming language is that Haskell is a pure language – functions don't have side effects by default, and if they do, the effects must be wrapped in `IO`, `State`, or some other monad. In languages with side effects many properties will be badly behaved. Consider the following addition function:

```
def strange(a, b):
    result = a + b + strange.counter
    strange.counter += 1
    return result
strange.counter = 0
```

The property $∀ a, b\ .\ \mathtt{strange}(a, b) = \mathtt{strange}(b, a)$ would not be true, since the second time `strange` is called its result will be greater by 1. In this example we could attempt to fix it by subtracting 1 from the RHS; however, that already assumes a left-to-right evaluation order, and in general it can get much more complicated.

It is not absolutely mandatory that the language itself be pure, so long as the function is written in a pure style. If the function is impure between repeated calls then this will likely cause one of the quick-checks to fail, and the function can then be rewritten to fix the impurity. Consequently QuickCheck has been re-implemented in many other languages. That said, there may be side effects resulting from the interleaving of different functions, and these would not show up in QuickCheck, so a pure language is preferred. As this example illustrates, QuickCheck is much more amenable to testing pure code.
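Running a property check against `strange` shows how quickly the impurity surfaces: because the hidden counter increments on every call, commutativity fails on the very first pair of calls (the `commutes` helper below is just an illustrative Python rendering of the property):

```python
import random

def strange(a, b):
    result = a + b + strange.counter
    strange.counter += 1
    return result
strange.counter = 0

def commutes(a, b):
    """The property ∀ a, b . strange(a, b) = strange(b, a)."""
    return strange(a, b) == strange(b, a)

# A single random trial suffices to find a counterexample:
a, b = random.randint(0, 100), random.randint(0, 100)
assert not commutes(a, b)  # the second call always returns 1 more than the first
```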

Equality properties are not the only properties that QuickCheck can test, but they are the most frequently used form of property, and the ones which we will be focusing on.

When programming in Haskell, or other functional languages, we often want to represent abstract data structures, or algebraic theories. For example, one might want to manipulate matrices using addition and multiplication just as one can with numbers. Both numbers (naturals, integers, reals) and matrices form a ring with $+$ and $×$ as operators.

Let us consider a sub-example: both addition on numbers and addition on matrices form a monoid, with $0$ as the identity, or neutral, element, $0 + x = x = x + 0$, and associativity, $(x + y) + z = x + (y + z)$. In Haskell the type class for monoids looks something like:

```
class Monoid m where
  -- mempty: monoid empty, or neutral, or identity element
  e :: m
  -- mappend: e.g. addition, list append, string concatenation
  (•) :: m -> m -> m
```

And we can write down an instance for the integers and addition monoid:

```
instance Monoid Int where
  e = 0
  x • y = x + y
```

The wiki page points out that the following laws must hold:

- Left identity: $∀ x\ .\ e • x = x$
- Right identity: $∀ x\ .\ x • e = x$
- Associativity: $∀ x, y, z\ .\ (x • y) • z = x • (y • z)$

And for natural numbers, integers, reals, and matrices these laws do hold, and they can be proven by pen-and-paper, or using a proof assistant like Lean, Agda, Coq, Isabelle, etc. or even within Haskell using LiquidHaskell.

However, none of these laws are enforced in the type system, so it is also possible to write:

```
instance Monoid Int where
  e = 0
  x • y = x - y
```

And Haskell will accept this even though it is not a monoid – right identity holds, but left identity and associativity clearly don't work for subtraction.

In my view the irony of the situation is that it is possible to “quick-check” that a property like $∀ x, y, z\ .\ (x + y) + z = x + (y + z)$ holds (whereas it does not for `-`), but it is not possible to actually utilize that property in the language: given a monoid, you have no guarantee that someone hasn't made “integers with subtraction” an instance.
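The contrast can be sketched in plain Python: randomly testing the monoid laws readily accepts `+` and rejects `-`. (The `check` harness below is an invented stand-in for QuickCheck, and the rejections are probabilistic, though a counterexample is found essentially immediately.)

```python
import random

def left_identity(op, e, x):
    return op(e, x) == x

def associative(op, x, y, z):
    return op(op(x, y), z) == op(x, op(y, z))

def check(prop, arity, trials=500):
    """Test `prop` on random integer arguments; return a counterexample or None."""
    for _ in range(trials):
        args = [random.randint(-50, 50) for _ in range(arity)]
        if not prop(*args):
            return args  # counterexample
    return None

add = lambda x, y: x + y
sub = lambda x, y: x - y

# Addition satisfies the monoid laws on every sample...
assert check(lambda x: left_identity(add, 0, x), 1) is None
assert check(lambda x, y, z: associative(add, x, y, z), 3) is None
# ...whereas subtraction is caught out almost immediately:
assert check(lambda x: left_identity(sub, 0, x), 1) is not None   # 0 - x ≠ x
assert check(lambda x, y, z: associative(sub, x, y, z), 3) is not None
```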

So, many would suggest using a more powerful type system which can express stronger properties. Indeed, LiquidHaskell is a more powerful type system which can express some properties, and Haskell has become increasingly dependently-typed over the course of its development. That said, every type in Haskell is inhabited (trivially, by non-terminating terms), so it will never be able to express propositions-as-types.

Since there is nothing one can do with properties in Haskell (because the type system cannot represent them), one has to “come up with” properties to be quick-checked, whereas in dependent type theory one is working with identity types all the time: they are just lying around waiting to be proven, or else quick-checked!

Mathematics is typically done in the foundation of ZF(C) set theory, which is a collection of nine axioms (plus Choice) about the relation $∈$, forming a one-sorted theory in first-order logic. This is a less proof-relevant theory, since the “terms” of first-order logic do not capture any proof information. In particular, an existential statement, $∃$, can be proved in first-order logic (and hence in ZF(C)) without providing a witness.

Conversely, (intuitionistic) dependent type theory (or Martin-Löf type theory [MLTT]) can be seen as a multi-sorted and proof-relevant theory of mathematics. It is also more intensional. In particular, the sigma type, $∑$, cannot be proved without providing a witness. Working with a dependent type theory gives some nice properties:

- proofs can be machine-checked/verified;
- proof verification is decidable;
- proofs are first-class objects, which gives a way to compare multiple different ways of proving the same thing; and
- executable programs can be automatically extracted from proofs (though they may be very slow or inefficient).

It is especially good for reasoning about category theory or homotopy theory, since types give an internal notion of certain categories and a synthetic notion of various spaces. Both can be easier to work with than categories or spaces directly, and can be used to prove new results, or to gain new insight into existing ones.

A dependent type theory can also be used as the basis for implementing a type system for a programming language. For example, over the years Haskell has gradually included more and more features in its type system which come from dependent type theory. Idris is a newer programming language with a dependent type system.

In our case the particular feature that we benefit from is the identity type. The elements of the identity type are proofs that the equality holds. So, for example, we could write a matrix multiplication function:

`matrix_multiply : ∀ {m n p q} → Matrix m n → Matrix p q → n ≡ p → Matrix m q`

And if you have two matrices, `A` and `B`, and you want to multiply them using `matrix_multiply A B`, then you need to provide an argument that proves that the width of `A` is equal to the height of `B` before you can call the function.

(Of course, normally we would write this function simply as `matrix_multiply : ∀ {k l m} → Matrix k l → Matrix l m → Matrix k m`.)

A programming language with dependent types has much greater expressive power, for example, it is possible to have a monoid data structure which requires the monoid laws be true (compare the first two fields with the Haskell class above):

```
record Monoid (A : Set) : Set where
  field
    ε : A
    _•_ : A → A → A
    lid   : (x : A) → ε • x ≡ x
    rid   : (x : A) → x • ε ≡ x
    assoc : (x y z : A) → (x • y) • z ≡ x • (y • z)
```

Now when implementing this data structure one must prove that the laws hold. Programmers, however, are not often expected to actually prove properties about their programs (except perhaps in a first-year algorithms course). Is it possible to gain the extra assurance of the identity types without putting in the extra work of proving them? What if we could just “test the laws” à la QuickCheck?

Looking at this from the other perspective, that of proving mathematics, leads to another nice practice: it is common in mathematics to notice a pattern, and then form it into a statement and try out some more examples. For instance, one might notice that, when tried with *small* examples, even numbers can always be written as the sum of two primes, as Goldbach did, leading to the conjecture that “*every* even number greater than two is the sum of two primes”. Now, no-one has a proof of this, but people have run computer checks up to very large numbers, which gives some assurance that it may be true (or, at the very least, that it is not *obviously* false). Similarly, we may come up with a conjecture when working with a proof assistant, and want to work under the assumption that it is true after having checked that it holds for a somewhat more substantial number of examples.
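This kind of conjecture-checking is easy to mechanise. Here is a sketch that exhaustively verifies Goldbach's conjecture on a small range of even numbers, in plain Python with a naive trial-division primality test, purely for illustration:

```python
def is_prime(n):
    """Naive trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Find primes p, q with p + q = n, or return None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Check the conjecture for every even number in a small range.
for n in range(4, 2000, 2):
    assert goldbach_witness(n) is not None
```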

This serves as a middle ground between an entirely unevidenced postulate and a full proof. For instance, one can postulate absurd statements such as $∀x.\ x ≡ \mathsf{suc}(x)$ and very quickly use this to derive false; however, QuickCheck would tell you this is an unsound assumption. Thus it can potentially reduce the work wasted on attempting to prove things from false premises, by checking that the postulates are not “obviously false”. (Of course some falsehoods will slip through, since the check is a stochastic sample of a [typically] infinite parameter space.)
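Indeed, this particular absurd postulate is refuted by the very first sample: reading $∀x.\ x ≡ \mathsf{suc}(x)$ over the natural numbers as `x == x + 1` (a Python rendering, purely for illustration), any generated value is a counterexample:

```python
import random

def postulate(x):
    """The absurd postulate ∀x. x ≡ suc(x), read over the naturals."""
    return x == x + 1

# Any sample at all refutes it:
x = random.randint(0, 1000)
assert not postulate(x)
```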

Now we would like to distinguish between proofs which contain only sound (proven) lemmas, and those which contain unproven (albeit quick-checked) conjectures. Just as in Haskell, where if you want to call an impure function your own function must become impure (through the `IO` monad), we want our proofs in dependent type theory that rely on QuickCheck to be “impure”, meaning in this case roughly “we are only sure it is true for the examples that we have actually tried with QuickCheck”. All other proofs will be unaffected, and thus we can be sure that they are “real” proofs. If one is doing mathematics one would presumably like to eventually remove all the quick-checked conjectures, and this gives a stepping stone from experimentally-evidenced conjectures to full proofs.

Conversely, when programming, QuickCheck may cover the vast majority of cases that are likely to arise in real-world usage of the program, so writing a full proof may not be necessary. In this instance we have the added benefit, above any other language with a QuickCheck implementation, that these quick-checked properties are backed up by the more expressive type system: code that relies on a monoid, say, actually has some assurance that the monoid laws hold, unlike in Haskell where there is no guarantee that an instance of the `Monoid` type class obeys identity or associativity.

Even if a full proof is necessary (for example in aeronautics, aerospace, or healthcare), the quick-checked properties again serve as a nice middle ground for porting an existing program to a more powerful type system (almost like how TypeScript provides gradual typing for JavaScript). Code without properties can be ported to code with properties in the type system, and until those properties are proven they can be quick-checked.