Bitesize Functional Programming: Universals As Existentials
In type theory, the existential quantifier ($$\exists$$) is the dual of the more familiar universal quantifier ($$\forall$$, which expresses parametric polymorphism). Believe it or not, these existentially quantified types can be useful in day-to-day programming in languages like Haskell where you wish to perform some kind of “information hiding”. As a simple example, consider this program:
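A minimal sketch of such a program, reconstructed from the description that follows (the names Showable and MkShowable are as described; the particular list elements are my own illustration):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Inject any value with a Show instance into Showable
data Showable = forall a. Show a => MkShowable a

-- Showable is itself an instance of Show: unpack and show "recursively"
instance Show Showable where
  show (MkShowable x) = show x

-- A heterogeneous list: all we know about each element is that it has Show
showables :: [Showable]
showables = [MkShowable (42 :: Int), MkShowable "hello", MkShowable True]

-- Output everything in the list
main :: IO ()
main = mapM_ print showables
```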
We start off by declaring a datatype, Showable, with a single constructor, MkShowable, that has type forall a. Show a => a -> Showable. This is to say, it injects any value with a Show instance into the Showable type.
This is really an existentially quantified datatype. However, many newbies are confused by this pronouncement, as it quite clearly uses the forall keyword, which introduces universal quantification - I’ll come back to that at the end of this article.
This next chunk of code declares that any value of type Showable is itself a member of the Show type class. We implement it by unpacking the Showable value and applying show “recursively” to that value. We are able to do this precisely because of the constraint in the MkShowable constructor that specified that whatever type gets injected into Showable must have a Show instance somewhere.
An aside: this construction has something of the flavour of object orientation, as it implements a uniform interface using dynamic dispatch. I’ll leave it up to the inimitable Oleg to expand on this point…
Here we make use of the marvellous information-hiding properties of the existential to create a list holding values of varying types. All we can know about the elements of the resulting list is that they certainly have a Show instance.
This last declaration is just a bit of top-level gunk that outputs all the things in that list to the screen using the Show typeclass.
So, that’s a pretty interesting technique. But in precisely what sense is it existential, given that we got all this going by using the universal quantifier (the forall keyword)? I think the easiest way to look at this problem is using the marvellously useful Curry-Howard correspondence between types and propositions in a logic system (and between programs and proofs).
First, remember that the function arrow at the type level ($$\rightarrow$$) just corresponds to implication in proofs, and the quantifiers carry through unchanged. Secondly, remember that in the proof system corresponding to simple ML-style types (second-order intuitionistic logic) $$\alpha \rightarrow \beta = \lnot \alpha \lor \beta$$. Thirdly, recall that if $$\alpha$$ is not free in $$\phi$$ then $$\forall \alpha.\, (\psi \lor \phi) = (\forall \alpha.\, \psi) \lor \phi$$. Finally, you need to know that pushing a negation through an existential or universal quantifier flips the kind of quantifier we have: $$\lnot \exists \alpha.\, \phi = \forall \alpha.\, \lnot \phi$$ (the statement that an $$\alpha$$ does not exist satisfying $$\phi$$ is equivalent to saying that all $$\alpha$$s do not satisfy $$\phi$$). Now we can gradually manipulate the type of a simple data constructor function into a form that makes it obvious where the existential comes from. Let’s try this example (which doesn’t have any typeclasses, since they don’t really fit into the classical correspondence):
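The example in question (a reconstruction matching the MkK constructor discussed below: an existential datatype with no class constraint at all, so nothing whatsoever can be done with the hidden value):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- K hides the type of its contents completely: with no constraint
-- on a, a consumer can learn nothing about the value inside.
data K = forall a. MkK a

main :: IO ()
main = case MkK 'x' of MkK _ -> putStrLn "constructed a K"
```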
So MkK has type $$\forall \alpha.\, \alpha \rightarrow K$$. Let’s push that into the realm of logic and proof and see what we can do:
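Working step by step with the rules above (a reconstruction; note that the very first step relies on the classical reading of the arrow, which the comments below take issue with):

$$\begin{array}{cll} & \forall \alpha.\, \alpha \rightarrow K & \\ = & \forall \alpha.\, \lnot \alpha \lor K & \text{rewriting the arrow}\\ = & (\forall \alpha.\, \lnot \alpha) \lor K & \alpha \text{ not free in } K\\ = & \lnot (\exists \alpha.\, \alpha) \lor K & \text{flipping the quantifier}\\ = & (\exists \alpha.\, \alpha) \rightarrow K & \text{rewriting the arrow back}\end{array}$$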
And we’re done! We can extract this last proof term back out as a type in our source language without any fuss, and hence from only a few logical rules (which, if you didn’t know them, are fairly easy to convince yourself of) we have derived why these constructs are known as “existential”. In fact, some Haskell compilers once supported an exists keyword to let you declare them directly, but due to precisely this correspondence between the two quantifiers GHC dropped the extra keyword in favour of just using forall to do both jobs.
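Incidentally, GHC's GADT syntax makes the quantifier situation easy to see, because the constructor's (universally quantified) type is written out explicitly, just as in the derivation; a sketch of the same K type:

```haskell
{-# LANGUAGE GADTs #-}

-- The same existential K, in GADT syntax: MkK's type is the
-- universally quantified forall a. a -> K from the derivation.
data K where
  MkK :: a -> K

main :: IO ()
main = case MkK True of MkK _ -> putStrLn "constructed a K"
```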
Ahem. In intuitionistic logic
$$\alpha \rightarrow \beta = \lnot \alpha \lor \beta$$
is wrong (is there a way to get LaTeX in the comments?)
Ed: I've typeset Alexey's comments for him. You can use LaTeX by enclosing your code in a couple of dollar signs at either end - like \$\$this\$\$, without the slashes.
Sorry Alexey, you are quite right - that will teach me to post a "proof" before checking I understand the axioms :-). It is easy to see why $$\alpha \rightarrow \beta = \lnot \alpha \lor \beta$$ is wrong since $$\alpha \rightarrow \alpha$$ must be true, which leads directly to the law of the excluded middle, $$\lnot \alpha \lor \alpha$$.
But I suppose we can still argue that the proof is OK (if you ignore my witterings about intuitionism), since the Curry-Howard isomorphism extends to classical logic as soon as you introduce an operator like call/cc (which is indeed definable in Haskell - see http://www.haskell.org/all_about_monads/html/contmonad.html).
Except $$callCC$$ in Haskell has type $$((a \rightarrow \text{Cont } b) \rightarrow \text{Cont } a) \rightarrow \text{Cont } a$$. $$call/cc$$ in Scheme has type $$((a \rightarrow b) \rightarrow a) \rightarrow a$$. I think (I am going by the description here, but the examples of Scheme's call/cc make my head hurt. It's a lot easier to work with the Haskell examples). And you need $$((a \rightarrow b) \rightarrow a) \rightarrow a$$ (Peirce's law) to extend CH to classical logic.
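For concreteness, the callCC Alexey mentions can be sketched with a hand-rolled continuation monad (the real definition lives in Control.Monad.Cont; this stand-alone version is just to exhibit the type):

```haskell
-- A minimal continuation monad, defined from scratch so nothing
-- beyond base is needed.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))
instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont f <*> Cont c = Cont (\k -> f (\g -> c (k . g)))
instance Monad (Cont r) where
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))

-- Note the Cont wrappers in the type: this is weaker than Peirce's
-- law ((a -> b) -> a) -> a, which is not realisable in Haskell.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

main :: IO ()
main = print (runCont (callCC (\exit -> exit 5 >>= \_ -> pure 1)) id)
-- prints 5: invoking exit aborts the rest of the computation
```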
Alexey, thanks for coming along and correcting my error once again - I just assumed callCC was realisable in Haskell, but we do appear to need that embedding into the Cont monad to make it work. So Haskell is intuitionistic after all, and we can use a tool like Lennart Augustsson's wonderful Djinn to check that the type we were after is a no-go:
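A Djinn session for this check would look something like the following (the exact prompt and response wording is a sketch from memory of Djinn's interface):

```
Djinn> f ? ((a -> b) -> a) -> a
-- f cannot be realized.
```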
So this means I do indeed need to rethink the proof in terms of the axioms of intuitionism. I've had a crack using the Wikipedia article on the subject, and I get: $$\begin{array}{ccll}\forall \alpha.\, \alpha \rightarrow \text{K} &\rightarrow& \beta \rightarrow \text{K} & \text{for fresh } \beta \text{, using rule PRED-1}\\ &\rightarrow& (\exists \beta.\, \beta) \rightarrow \text{K} & \text{using rule } \exists\text{-GEN}\end{array}$$
Which appears to work, albeit not quite as satisfyingly (I'm clearly less comfortable with intuitionistic logic!).
Yes. The proof in the other direction is also pretty easy.