Duality


The rules for a variety of polarized classical connectives, given in a focused sequent calculus presentation to reflect their dualities, and interpreted computationally via Curry-Howard.

Additive

Γ, [A] ⊢ Δ
──────────────── &⊢₁
Γ, [A & B] ⊢ Δ

Γ, [B] ⊢ Δ
──────────────── &⊢₂
Γ, [A & B] ⊢ Δ

Γ ⊢ A, Δ    Γ ⊢ B, Δ
───────────────────── ⊢&
Γ ⊢ A & B, Δ
& (“with”): negative conjunction ≈ lazy pair
Γ, A ⊢ Δ    Γ, B ⊢ Δ
───────────────────── ⊕⊢
Γ, A ⊕ B ⊢ Δ

Γ ⊢ [A], Δ
──────────────── ⊢⊕₁
Γ ⊢ [A ⊕ B], Δ

Γ ⊢ [B], Δ
──────────────── ⊢⊕₂
Γ ⊢ [A ⊕ B], Δ
⊕ (“sum”): positive disjunction ≈ either
(A & B)⊥ = A⊥ ⊕ B⊥
(A ⊕ B)⊥ = A⊥ & B⊥

The additive connectives, & (pronounced “with”) and ⊕ (“sum”), are unfamiliar symbols for familiar connectives: & is conjunction (∧, “and”), while ⊕ is disjunction (∨, “or”).

& is negative, and thus focused on the left (indicated by the square brackets). In a polarized calculus, negative connectives are defined by their left rules, which are rules for how to use them. (Γ contains inputs, so left rules consume things.) & has two left rules, which use it by projecting out the left and right parts, respectively. Its right rule builds it from its components: think pair/tuple literals.
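As a rough Haskell sketch (not part of the original presentation; the names With, withL1, etc. are illustrative), & behaves like a lazy pair, characterized by its two projections:

type With a b = (a, b)

withL1 :: With a b -> a      -- &⊢₁: project the left component
withL1 = fst

withL2 :: With a b -> b      -- &⊢₂: project the right component
withL2 = snd

withR :: a -> b -> With a b  -- ⊢&: build one from both components
withR = (,)

-- Laziness means only the projected component is ever demanded:
-- withL1 (withR 1 undefined) evaluates to 1.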

⊕ is positive, and thus focused on the right. Positive connectives are defined by their right rules, which are rules for how to make them. (Δ contains outputs, so right rules produce things.) Its left rule consumes the sum by handling each alternative separately; think case expressions over an Either.
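In Haskell terms (again a sketch with illustrative names), ⊕ is Either: the right rules are its constructors, and the left rule is case analysis over the two alternatives:

sumR1 :: a -> Either a b   -- ⊢⊕₁
sumR1 = Left

sumR2 :: b -> Either a b   -- ⊢⊕₂
sumR2 = Right

sumL :: (a -> c) -> (b -> c) -> Either a b -> c   -- ⊕⊢: handle each alternative
sumL f _ (Left  a) = f a
sumL _ g (Right b) = g b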

The additive units, ⊤ (“top”) and 0, are a bit of a puzzle. Note that neither has a focusing rule: you can’t use a ⊤ once you have it, and you can’t get a 0 at all. On the other hand, both their invertible rules are axioms (i.e. have no premises).

A sequent Γ ⊢ Δ can be read as “all of Γ prove any of Δ;” under that interpretation we can see that ⊤, which corresponds to logical truth, indeed satisfies that claim: once at least one thing on the right is proven, we’re done.

Likewise, the left rule for 0, falsity, is recognizable as the principle of ex falso quodlibet: you can prove anything at all from a contradiction.
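A Haskell gloss on the units (a sketch; the names are illustrative): ⊤ is like (), freely made but telling us nothing once we have it, while 0 is like Void, never made but proving anything once we have it.

import Data.Void (Void, absurd)

top :: ()              -- ⊢⊤: an axiom, no premises required
top = ()

exFalso :: Void -> a   -- 0⊢: ex falso quodlibet
exFalso = absurd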

(no left rule for ⊤)

──────────────── ⊢⊤
Γ ⊢ ⊤, Δ
⊤ (“top”): negative truth ≈ unit
──────────────── 0⊢
Γ, 0 ⊢ Δ

(no right rule for 0)

0: positive falsity ≈ void

Multiplicative

Γ, [A] ⊢ Δ    Γ, [B] ⊢ Δ
───────────────────────── ⅋⊢
Γ, [A ⅋ B] ⊢ Δ

Γ ⊢ A, B, Δ
──────────────── ⊢⅋
Γ ⊢ A ⅋ B, Δ
⅋ (“par”): negative disjunction ≈ parallel/nondeterministic choice
Γ, A, B ⊢ Δ
──────────────── ⊗⊢
Γ, A ⊗ B ⊢ Δ

Γ ⊢ [A], Δ    Γ ⊢ [B], Δ
───────────────────────── ⊢⊗
Γ ⊢ [A ⊗ B], Δ
⊗ (“tensor”): positive conjunction ≈ strict pair
(A ⅋ B)⊥ = A⊥ ⊗ B⊥
(A ⊗ B)⊥ = A⊥ ⅋ B⊥

In addition to & and ⊕, the additive conjunction and disjunction, we have ⅋ (“par”) and ⊗ (“tensor”), the multiplicative disjunction and conjunction. The difference is more obvious in a linear logic, but for us the relevant distinction between & and ⊗ is that we identify negative types like & with call-by-name and positive types like ⊗ with call-by-value.

⅋, however, is stranger. It’s a disjunction, eliminated by using both halves (just like ⊕), but introduced using two things. Linear logic again offers one perspective, but another reading of Γ ⊢ Δ (via Curry-Howard) as “consume Γ to produce Δ” is salient: parallelism (hence “par”), or nondeterminism. The latter explains why ⅋ is disjunctive despite (at least notionally) containing two things; think of <|>. For Maybe, it selects the leftmost Just, acting as a choice; for [], it instead selects all alternatives, acting as a union.
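For instance (a small sketch, not part of the original):

import Control.Applicative ((<|>))

choice :: Maybe Int
choice = Nothing <|> Just 1 <|> Just 2   -- Just 1: the leftmost Just wins

union :: [Int]
union = [1, 2] <|> [3]                   -- [1,2,3]: every alternative is kept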

Alongside multiplicative disjunction and conjunction are ⊥ (“bottom”), a negative falsity, and 1, a positive truth.

⊥, like positive falsity (0), proves anything if you’ve already got one (⊥⊢). Surprisingly, we can introduce one when we can prove the rest of the sequent (⊢⊥). Logically, falsity can’t change Δ’s provability, so this rule is justified.

Dually, we can’t learn anything from vacuous truth, so 1 can be eliminated freely (1⊢). Introducing it is likewise trivial (⊢1), just as for ⊤.

──────────────── ⊥⊢
Γ, [⊥] ⊢ Δ

Γ ⊢ Δ
──────────────── ⊢⊥
Γ ⊢ ⊥, Δ
⊥ (“bottom”): negative falsity ≈ void
Γ ⊢ Δ
──────────────── 1⊢
Γ, 1 ⊢ Δ

──────────────── ⊢1
Γ ⊢ [1], Δ
1: positive truth ≈ unit

Implicative

Γ ⊢ [A], Δ    Γ, [B] ⊢ Δ
───────────────────────── →⊢
Γ, [A → B] ⊢ Δ

Γ, A ⊢ B, Δ
──────────────── ⊢→
Γ ⊢ A → B, Δ
→: negative implication ≈ function
Γ, A ⊢ B, Δ
──────────────── –⊢
Γ, B – A ⊢ Δ

Γ ⊢ [A], Δ    Γ, [B] ⊢ Δ
───────────────────────── ⊢–
Γ ⊢ [B – A], Δ
– (“subtraction”, sometimes ⤙, “co-implication”): positive implication ≈ calling context
(A → B)⊥ = B – A
(B – A)⊥ = A → B

Just as implication resembles the turnstile (as is evident in the premise of ⊢→, which simply moves A and B across it), our computational reading of Γ ⊢ Δ as “consume Γ to produce Δ” indicates that functions resemble it as well.

A – B can be read as “A without B” (both arithmetically and logically), but insight into its computational interpretation can be gained by its relationship to functions.

The premises for →⊢ and ⊢– are the same, as are those for ⊢→ and –⊢: duality means “each eliminates the other.” → eliminations are function calls, so – holds an argument: the A in the first premise of ⊢–. B, meanwhile, occurs as a hypothesis, meaning the derivation must be able to consume one; the subtraction therefore also holds a continuation accepting a B. Together that is everything you need to call a function: both the input and the output destination.
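As a rough Haskell sketch of that reading (the names Sub and apply are hypothetical, and an explicit answer type r stands in for the rest of the program):

-- B – A packages an argument of type a with a continuation expecting a b:
-- everything needed to call a function of type a -> b.
data Sub r b a = Sub a (b -> r)

-- Calling a function is cutting it against a matching subtraction.
apply :: (a -> b) -> Sub r b a -> r
apply f (Sub a k) = k (f a)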

This gives a vivid picture of the nature of duality: a subtraction is the negative space around an implication, and an implication is therefore also the negative space around a subtraction. A complete view of either requires both.

Negating

Like → and –, negations have the interesting behaviour of moving propositions across the turnstile: from antecedents to succedents. They are constructed (⊢¬ and ⊢∼) from a premise with a hypothesis: to make a ¬A or ∼A, we must have a premise which uses an A—rather like the premise of ⊢→. Dually, their elimination (¬⊢ and ∼⊢) requires an A to use as input. Thus, negations have long been given computational meaning as continuations, a representation of “the rest of the program” from some program point.

Intuitionistic logics often take negation to be defined rather than primitive, encoding it as implication of falsity, i.e. ¬A ≈ A → ⊥; intuitively, “not A” and “A implies falsity” align. This has precisely the shape of a continuation: a function consuming an A whose result we never examine. Double negation, ¬¬A, becomes (A → ⊥) → ⊥, which has the shape of continuation-passing style.
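A small sketch of that shape in Haskell (illustrative names; Void stands in for ⊥):

import Data.Void (Void)

type Not a = a -> Void     -- ¬A ≈ A → ⊥

-- Double negation, (A → ⊥) → ⊥, generalized over the answer type, is the
-- usual continuation-passing embedding of a value.
toCPS :: a -> (a -> r) -> r
toCPS a k = k a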

Our negations are involutive, i.e. mutually inverse: ¬∼A and ∼¬A are interchangeable with A. This is due both to polarization (each negation flips the polarity of its argument) and to their focusing behaviour (each remains in the current phase).

Γ ⊢ [A], Δ
──────────────── ¬⊢
Γ, [¬A] ⊢ Δ

Γ, A ⊢ Δ
──────────────── ⊢¬
Γ ⊢ ¬A, Δ
¬ (“not”): positive-within-negative negation ≈ lazy continuation
Γ ⊢ A, Δ
──────────────── ∼⊢
Γ, ∼A ⊢ Δ

Γ, [A] ⊢ Δ
──────────────── ⊢∼
Γ ⊢ [∼A], Δ
∼ (“negate”): negative-within-positive negation ≈ strict continuation
(¬A)⊥ = ∼(A⊥)
(∼A)⊥ = ¬(A⊥)

Shifts

Γ, A ⊢ Δ
──────────────── ↑⊢
Γ, [↑A] ⊢ Δ

Γ ⊢ A, Δ
──────────────── ⊢↑
Γ ⊢ ↑A, Δ
↑ (“up shift,” call-by-push-value’s F): positive-within-negative shift ≈ return
Γ, A ⊢ Δ
──────────────── ↓⊢
Γ, ↓A ⊢ Δ

Γ ⊢ A, Δ
──────────────── ⊢↓
Γ ⊢ [↓A], Δ
↓ (“down shift”, call-by-push-value’s U): negative-within-positive shift ≈ thunk
(A → B)ᴺ = ↓(Aᴺ) → Bᴺ
(A → B)ⱽ = ↓(Aⱽ → ↑(Bⱽ))
translations of CBN and CBV function types into polarized calculi

Shifts serve two pragmatic purposes: like negations, they have the opposite polarity of whatever they shift; and they end the focusing phase of proof search and begin the inversion phase.

Up shifts may be unfamiliar in terms of types, but at the term level are something like a return, particularly in monadic code: they embed a value (positive) within a computation (negative). In CBPV (call-by-push-value), this is the left adjoint, F.

Down shifts, meanwhile, may be more familiar as thunks: they wrap a computation (negative) up into a value (positive). In CBPV, this is the right adjoint, U.

The translations of function types from CBN and CBV into polarized calculi (above) model evaluation orders using shifts. The CBN translation treats arguments as thunks around lazy computations, while the CBV translation thunks the function itself and explicitly wraps its result in a computation.
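Read as rough Haskell types (Up, Down, CBNFun, and CBVFun are illustrative names; Haskell’s pervasive laziness means these wrappers are only suggestive), the shifts and translations look like:

newtype Up   a = Up   a   -- ↑ / CBPV's F: return a value as a computation
newtype Down a = Down a   -- ↓ / CBPV's U: thunk a computation as a value

type CBNFun a b = Down a -> b        -- (A → B)ᴺ = ↓(Aᴺ) → Bᴺ
type CBVFun a b = Down (a -> Up b)   -- (A → B)ⱽ = ↓(Aⱽ → ↑(Bⱽ))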

Quantifiers

The rules for quantification are slightly unusual in this presentation in their use of both side conditions (checking that the variable does not occur free elsewhere within the sequent) and (capture-avoiding) substitution (A{B/X} substitutes the proposition B for the variable X within A, e.g. (X ⊕ Y){Z/X} = Z ⊕ Y).

Quantified propositions—and thus, types—offer yet more dualities: both can be used to hide information, but from different perspectives. ∀ constrains its construction, but is used freely; ∃ constrains its uses, but is constructed freely. Universally-quantified types show up when folding polymorphically recursive data, and in a variety of encodings using functions. Existentials model abstraction in abstract data types, ML modules, open universes of exceptions, and GADTs.
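A Haskell sketch of the two perspectives (illustrative code, not from the original):

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- ∀: the argument must be built uniformly, but we may use it at any type.
applyBoth :: (forall x. [x] -> Int) -> ([Int], [Bool]) -> (Int, Int)
applyBoth f (xs, ys) = (f xs, f ys)

-- ∃: built from any concrete type, but users see only the abstract interface.
data Showable = forall a. Show a => Showable a

display :: Showable -> String
display (Showable a) = show a

For example, applyBoth length ([1, 2], [True]) instantiates the same argument at two different types, while display can do nothing with the hidden value beyond what Show permits.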

Here, the identification of negative = computation, positive = value is stretched. If computation in the CBPV sense is “something resident in code pages” then ∀ seems misplaced—why should ∀ be represented at runtime (absent dependent types)? On the other hand, the identification of positive = data, negative = codata fits: ∀ is defined by how it is used, and ∃ by how it is made.

Γ, [A{B/X}] ⊢ Δ
──────────────── ∀⊢
Γ, [∀X.A] ⊢ Δ

Γ ⊢ A, Δ
──────────────── ⊢∀   (X ∉ fv(Γ, Δ))
Γ ⊢ ∀X.A, Δ
∀ (“for all”): universal quantification ≈ polymorphism
Γ, A ⊢ Δ
──────────────── ∃⊢   (X ∉ fv(Γ, Δ))
Γ, ∃X.A ⊢ Δ

Γ ⊢ [A{B/X}], Δ
──────────────── ⊢∃
Γ ⊢ [∃X.A], Δ
∃ (“there exists”): existential quantification ≈ abstract data type
(∀X.A)⊥ = ∃X.A⊥
(∃X.A)⊥ = ∀X.A⊥

Core

──────── init
A ⊢ A

Γ ⊢ A, Δ    Γ, A ⊢ Δ
───────────────────── cut
Γ ⊢ Δ

The init and cut rules are quite common to sequent (and other) calculi. This presentation clarifies a couple of useful properties.

init allows us to begin a proof (or end one, working bottom-up as we often do, from goal to premise) with a proposition given to us. Under Curry-Howard’s interpretation of proofs as programs, we can understand this as the polymorphic identity function, id. (Sequent calculus proofs in particular correspond to higher-order programs, e.g. functions taking functions to functions.)

cut instead allows us to prove a sequent by introducing and immediately eliminating a proposition in its premises. Again taking sequents as analogues of function types, we can think of it as the composition of proofs with compatible consequents and hypotheses.
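Under that reading (a sketch with illustrative names):

initProof :: a -> a                          -- init ≈ the identity function
initProof x = x

cutProof :: (b -> c) -> (a -> b) -> a -> c   -- cut ≈ composition of proofs
cutProof g f = g . f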

Furthermore, cut and init form a category: the objects are contexts, the morphisms are (proofs of) sequents between them, cut is composition, and init is identity. The Curry-Howard correspondence holds up here, too: id and function composition give a category of functions between types.

Structural

Structural rules govern our use of the context. Without them, our logic would be classical linear logic (and we would have to adjust the multiplicative and cut rules to split the context between premises). The rules are given their traditional names here, but since we mostly operate bottom-up, starting with the goal given by the syntax of a program, they actually have the exact opposite sense for us.

Top-down, weaken introduces an arbitrary hypothesis or consequent. An extra hypothesis weakens the proof because more information must be provided to prove the same conclusion. An extra conclusion instead weakens what we can learn from it. Contraction, meanwhile, eliminates redundant information from the context.

Bottom-up, weakening instead strengthens a premise, tightening its requirements and guarantees. In reality, however, this mostly amounts to bookkeeping, removing propositions that are in the way of some goal. Contraction, meanwhile, allows us to duplicate information we already know, again primarily as a way of getting one's ducks in a row for the benefit of proof search. For example, init proves only the minimalistic A ⊢ A; weakening is often required to whittle sequents down far enough for it to apply.

Note that the customary exchange rule is not required as this presentation’s contexts are unordered.

Γ ⊢ Δ
──────────── weaken
Γ, A ⊢ Δ

Γ ⊢ Δ
──────────── weaken
Γ ⊢ A, Δ

Γ, A, A ⊢ Δ
──────────── contract
Γ, A ⊢ Δ

Γ ⊢ A, A, Δ
──────────── contract
Γ ⊢ A, Δ