The Curry–Howard correspondence is a map for moving between logic and type theory, relating propositions with types and proofs with programs. It describes a two-way street, and we can move freely between the two worlds, or perhaps merely two perspectives, so long as we follow the map.
Sometimes the road takes us to unexpected places. Here’s a trip I’ve been on recently.
Double negation
I’ve been working on a language named sequoia, which embeds polarized classical logic in Haskell. Of course, Haskell corresponds to an intuitionistic logic, so we can’t just bring arbitrary classical proofs over directly. We have to use a double-negation translation, importing classical propositions A as intuitionistic propositions ¬¬A.
Curry–Howard tells us a translation for negations, but we can work this one out ourselves with a little logical knowledge: a negation ¬A is an implication to falsehood, A → ⊥. Implications translate to functions, but what about ⊥? Haskell models it as the Void type from the Data.Void module:
data Void
Having no constructors, Void also has no inhabitants—no proof terms—just like ⊥; Void indeed corresponds to ⊥. What, then, of a -> Void? A function returning Void is a function that cannot return; it can only pass control along to another function returning Void. In other words, a -> Void is just what Curry–Howard tells us: a continuation.
Thus, the double negation ¬¬A translates to:

type DoubleNegation a = (a -> Void) -> Void

which is a shape also known as continuation-passing style. Classical languages embed into intuitionistic ones via CPS.
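Restating those definitions as a runnable sketch (embed and lem are names invented here for illustration, not sequoia’s API), even the classical law of the excluded middle, unprovable directly in Haskell, becomes provable under the translation:

```haskell
import Data.Void (Void)

-- Negation and double negation, following the translation above.
type Not a = a -> Void
type DoubleNegation a = Not a -> Void

-- Any value embeds into its double negation: hand it to the continuation.
embed :: a -> DoubleNegation a
embed a k = k a

-- The law of the excluded middle has no direct proof term in Haskell,
-- but its double-negation translation does.
lem :: DoubleNegation (Either a (Not a))
lem k = k (Right (\a -> k (Left a)))
```

Note how lem works: offered a continuation demanding a decision, it answers “Right” (¬A), and if that bluff is ever called with an actual A, it jumps back and answers “Left” instead—a trick only available because control never has to return.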
As discussed, modelling ⊥ as Void extends to modelling negations ¬A as functions a -> Void. Data.Void additionally provides absurd, defined as:

absurd :: Void -> a
absurd v = case v of {}

to supply proofs using the principle of explosion, or ex falso quodlibet. And Void is an appropriately abortive substitute for ⊥.
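For instance, combining absurd with a negation gives the familiar shape of proof by contradiction (contradiction is a small illustrative helper, not part of sequoia):

```haskell
import Data.Void (Void, absurd)

-- From a proof of ¬A and a proof of A we obtain a Void, and from Void,
-- anything at all: ex falso quodlibet.
contradiction :: (a -> Void) -> a -> b
contradiction notA a = absurd (notA a)
```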
However good a fit Void might be initially, it’s quite inconvenient when embedding one language within another, whether by encoding or interpretation. You typically want control to return: to run code afterwards, to collect results, to continue with the next iteration of a REPL, to print, save, or transmit computed values, to clean up acquired resources, and so on. Absent tricks like throwing values up the stack with exception handlers, you simply can’t: nothing returns from the Void.
We don’t necessarily have to use Void itself, however, but merely something which gives us the same properties with respect to control. In particular, we can substitute Void out for another type, often a parameter, in our definition of DoubleNegation. I mentioned earlier that DoubleNegation is continuation-passing style, so I’m going to go ahead and spoil the punchline by renaming the type to Cont at the same time:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }
Control-wise, the main feature of Void is abortive control—it can never return, so it can only pass control to something else with the same property. Cont offers us the same behaviour, in that normal control flow is represented as applications of the continuation, passing control along to the next computation, while abortive control is possible by returning an r directly.
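To see both modes concretely, here’s a self-contained sketch around that newtype (the Monad instance is the standard one; abort and evalCont are illustrative helpers, not necessarily sequoia’s API):

```haskell
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont (\k -> m (k . f))

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont mf <*> Cont ma = Cont (\k -> mf (\f -> ma (k . f)))

instance Monad (Cont r) where
  Cont m >>= f = Cont (\k -> m (\a -> runCont (f a) k))

-- Normal control flow applies the continuation k; abortive control
-- ignores k entirely and returns an r directly.
abort :: r -> Cont r a
abort r = Cont (\_ -> r)

-- Run a computation whose final answer is its result type.
evalCont :: Cont r r -> r
evalCont (Cont m) = m id
```

For example, evalCont (pure 1 >>= \_ -> abort 42) is 42: the abort discards the rest of the computation, continuation and all.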
By replacing Void with a type parameter r (the traditional name, perhaps for “result” or “return”), particularly one visible at the type level (vs. universally quantified over, as with Codensity), we’ve also opened the door to more interesting control tricks like multi-prompt delimited control, and therefore to arbitrary effects. Handy tricks to have up your sleeve when implementing a language, to say the least.
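For a taste of (single-prompt) delimited control, the transformers package’s Control.Monad.Trans.Cont provides shift and reset over this very shape; here the captured continuation k is invoked twice:

```haskell
import Control.Monad.Trans.Cont (evalCont, reset, shift)

-- reset delimits the continuation; shift captures “the rest of the reset
-- block” as a first-class function k, free to call it any number of times.
demo :: Int
demo = evalCont . reset $ do
  x <- shift (\k -> pure (k 1 + k 2))  -- within reset, k is \x -> x * 10
  pure (x * 10)
```

demo evaluates to 1 * 10 + 2 * 10 = 30: control returned to the shift twice, something plain a -> Void could never offer.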
Beginnings and endings
I recently wrote a post about what I termed environment-passing style. A series of observations arrives at (in some sense) dual translations of a -> b as (b -> r) -> (a -> r) and (e -> a) -> (e -> b) (shapes that you start seeing everywhere once you learn to think of them this way, e.g. in folds and unfolds, in Mendler-style algebras and coalgebras, in lambda encodings, etc.). Here, just as above, r substitutes for—indeed, it abstracts—Void. But what does e abstract?
The common theme running throughout the sequent calculus is duality. r abstracts Void, Void corresponds to ⊥, ⊥ is dual to ⊤, ⊤ corresponds to (), and () is abstracted to e. (Back and forth ’cross the Curry–Howard bridge!)
Earlier, we judged r a fitting substitute for Void because it behaved compatibly with respect to control. In contrast, () doesn’t behave in any way at all.
r’s abstraction of Void is determined entirely by Curry–Howard: r abstracts Void insofar as Void corresponds to ⊥. ⊥’s left rule gives us ex falso quodlibet; Void provides this via absurd. For r, imagine a type representing sequents implemented using Cont. If your arguments contain an r, there’s no need to consider any other arguments or what values you could compute and return; in fact, you don’t need the continuation at all. Just return the argument of type r and you’re done.
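Concretely (finish is an illustrative standalone function, not sequoia’s representation), a sequent carrying an r among its inputs can simply hand it back:

```haskell
-- Given an r, the result is already in hand: ignore the other argument
-- and the continuation alike, and return it.
finish :: a -> r -> (b -> r) -> r
finish _ r _ = r
```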
⊥’s right rule instead says that you can construct it alongside any sequent which was provable anyway; or, working bottom-up, that a ⊥ on the right can simply be discarded. We can’t construct a Void, period; but if we could, it certainly wouldn’t add any information. On the other hand, r is inhabited, so we can certainly follow the analogy: we can construct an r if we can return without it anyway; in fact, we do, by applying the return continuation.
For e we instead need to trace its relationships with ⊤ and (). Left rules can be read by recalling that a sequent is sort of like a function: we can build a function to eliminate ⊤, but since ⊤ carries no information, the function gains nothing by it. () works much the same way: adding an argument of type () won’t help you construct anything, since you could have just introduced it as a literal. e therefore must work the same way, but we’re going to weaken our informal restatement of this down to: you are free to ignore an argument of type e. Of course, we aren’t in an e.g. linear setting, so we’re free to ignore every argument. But even if we were in a linear setting, e should still be discardable.
On the right: you can always make a (); it’s what makes it discardable. e, then, must also be freely introduced, ambiently, ubiquitously.
Considering that we started with the humble (), supply of e turns out to be ubiquitous. Dually, demand for ⊥ turned into r: a result we can supply from anywhere by aborting.
Assertive negativity
R represents the ubiquitously available ability to jump to the end of the program and abort. E, on the other hand, represents the ubiquitously available ability to summon (already globally-available) information out of thin air: the environment (hence the name E). Neither of these gives us anything really new—after all, we could always pass information inwards, threading it through all the intervening calls, or abort and return outwards by means of Maybe or the like. But doing either in a single step, without changing the rest of the code base, is pretty handy.
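The R side of that can be sketched with the Cont type from the transformers package (divAll and its -1 sentinel are invented here for illustration): the traversal bails out in one step, with no Maybe threaded through the recursion.

```haskell
import Control.Monad.Trans.Cont (Cont, cont, runCont)

-- Jump to the end: produce the final r directly, skipping the continuation.
abort :: r -> Cont r a
abort r = cont (const r)

-- Divide each element by d, aborting the whole traversal at once on a
-- zero divisor.
divAll :: Int -> [Int] -> Cont Int [Int]
divAll _ [] = pure []
divAll d (x : xs)
  | d == 0    = abort (-1)
  | otherwise = (x `div` d :) <$> divAll d xs
```

runCont (divAll 2 [4, 6]) sum yields 5, while runCont (divAll 0 [4, 6]) sum yields -1: the continuation (sum) never runs at all.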
Further, Cont r a gives us some tools that Void couldn’t: reset, for example, lets us delimit how far a jump escapes, and even resume control afterwards at the point at which we jumped. This in turn allows us to encode arbitrary effects and handlers.
In much the same way, the dual structure—probably a comonad—gives us local environments, sandboxing, and coeffects.
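A sketch of what that might look like, assuming something along the lines of the Env comonad (all names here are illustrative, not sequoia’s):

```haskell
-- A value packaged with the environment it was produced in.
data Env e a = Env e a

-- Comonadic basics: extract the value; extend a context-consuming
-- function over the whole structure.
extract :: Env e a -> a
extract (Env _ a) = a

extend :: (Env e a -> b) -> Env e a -> Env e b
extend f w@(Env e _) = Env e (f w)

-- Coeffect-flavoured operations: read the ambient environment, or run
-- under a locally modified (“sandboxed”) one.
ask :: Env e a -> e
ask (Env e _) = e

local :: (e -> e) -> Env e a -> Env e a
local f (Env e a) = Env (f e) a
```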
If Cont r a is ¬¬A, then what is this dual structure? Following the thread backwards, Cont r a is the abstraction, via r, of (a -> Void) -> Void, the shape built from ⊥; the dual structure should be the corresponding abstraction, via e, of the shape built from ⊤. Unlike Cont, however, I haven’t yet pinned down its definition.
Polarization
I mentioned before that sequoia embeds polarized classical logic. Thus, the above tells only half the story, because we have two different negations: the negative negation ¬ (“not”), and the positive negation ~ (“negate”). They each take propositions of the opposite polarity (¬ takes a positive proposition, ~ a negative one), and they are involutive, cancelling each other out: ¬~A⁻ ≈ A⁻, and ~¬A⁺ ≈ A⁺.
Likewise, there are actually two different assertions.

The encoding of ¬A as A → R is mirrored, on the positive side, by an encoding of ~A in terms of a less familiar connective: subtraction, written A − B.
While the logical rules for its introduction and elimination offer some insight into what its representation must hold, it’s perhaps clearest under a different set of encodings: this time, encoding → and − in terms of disjunction/conjunction and negations. Classically, A → B can be encoded as ¬A ∨ B, while A − B can be encoded as A ∧ ¬B (i.e. “A and not B,” hence the pronunciation of − as “without”). If A − B can be encoded as the conjunction of A and the negation of B, then what does Curry–Howard have to say about that? Conjunctions are product types; negations are still continuations; A − B is isomorphic to a pair of an A and a continuation from B.
We can see now that A → B and A − B are dual: A − B holds both the argument to and the continuation from A → B. I sometimes imagine A − B as a pipeline with a section missing; A → B is precisely the missing section that fits and completes it.
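Rendering that in Haskell (Sub and complete are names invented here), the pipeline metaphor becomes literal:

```haskell
-- A − B: a pair of an A and a continuation from B.
data Sub r a b = Sub a (b -> r)

-- A → B is the missing section of the pipeline: plug it in between the
-- stored argument and the waiting continuation to complete the whole.
complete :: (a -> b) -> Sub r a b -> r
complete f (Sub a k) = k (f a)
```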
Thus far our encodings of the two negations and our single assertion are:

¬A ≈ A → R
~A ≈ E − A
¬̷A = E → A
We can further organize these by polarity and purpose:

            negative        positive
negation    ¬A ≈ A → R      ~A ≈ E − A
assertion   ¬̷A = E → A     …?
The negations both invert polarity, whereas ¬̷ maintains it. Further, the negative connectives both employ →, whereas ~ uses −. Putting it together, we can expect the positive assertion to encode as −, and to maintain polarity, and that gives us:
            negative        positive
negation    ¬A ≈ A → R      ~A ≈ E − A
assertion   ¬̷A = E → A     ✓A ≈ A − R
✓, pronounced “true,” is truly dual to ¬, and ¬̷ is dual to ~. ✓’s encoding gives us introduction and elimination rules of its own. So far, so… disappointing: these are precisely the same rules as we found for ¬̷; only the symbols and the polarities have been swapped. And what’s worse, the same was already true of the rules for the negations.
Speaking of which, the encoding for ~ seems circular: the positive continuation ~A can be encoded as E − A, but − is itself encoded using a negation.
It was in precisely such a mood that I happened to open Paul Downen & Zena Ariola’s recent paper Compiling with Classical Connectives to where I’d last left off, on a page whose first paragraph spoke directly to this problem.
In short, you don’t use the same representation of continuations under two different names; you use two different representations, each with their own strengths. I was delighted to read this, because it reflects something about ~ that I’d only just noticed a day or two prior: the reason representing ¬A with continuations a -> r works is that we’re consistent about it. Shouldn’t we be consistent about our treatment of ~, too?
            negative        positive
negation    ¬A ≈ A → R      ~A ≈ E − A
assertion   ¬̷A = E → A     ✓A ≈ A − R
In other words, ~A is defined to hold the environment in which it was constructed. It’s a lot like a closure, and precisely the kind of thing Downen & Ariola describe.

This also sheds some light on the contrast between the two assertions. ✓A holds an A, along with a continuation from R. (I’m not exactly certain what that means, just yet.) ¬̷A, on the other hand, may close over an A, or it may construct one from E. In other words, ¬̷A models dynamically-scoped variables, and ✓A models lexically-scoped ones. And while ¬̷A may look partial at first blush, it’s important to remember that while the logical rules don’t (currently) encode this fact, E needn’t be the same for every sequent any more than R need be. (reset, for example, allows the conclusion’s R to vary from the premise’s.)
Endings and beginnings
I set out to understand continuation-passing style better, and whatever its opposite might be, and ended up coming up with two new connectives modelling lexical & dynamic scoping, a way to integrate coeffects with the same deep level of access as effects and CPS already enjoy, and a deeper appreciation of the positive negation ~ and its role in contrast to ¬. Given all that, it’s hard to be too sad that I still don’t know much about what I was calling environment-passing style. I intend to carry the even-handed treatment of ~ further still, and to try to understand the encodings of → and −. I also intend to write about the modular, composable representation of n-input, n-output computations I’ve come up with based on the above findings. That will have to wait.
For now, I mostly just wanted to share what wonderful things we can discover as we follow the road back and forth across the Curry–Howard correspondence.