That Logic Blog

February 13, 2006

Linear Logic - Naturally!

Linear logic has enjoyed enormous popularity over the last couple of decades or so. For those without training in structural proof theory, the system can be quite intimidating, especially because of the funny notation and weird jargon. In this post, I am going to show you that, in fact, you could have invented linear logic!

To start with, we need to understand classical logic. Our basic ingredients are conjunction, disjunction and negation. I'll denote these by ⊗, ⊕ and ¬ respectively. Since we are doing proof theory, we build our logic by saying how to "introduce" these connectives and describing how they play with each other. This is easy for negation. We want it to act as a unit, so we just say that we have A ⊗ ¬A → I and I → A ⊕ ¬A. Here, I have overloaded the symbol "I" to mean "the strongest false proposition" when it appears on the right of an arrow and "the strongest true proposition" when it appears on the left of an arrow. Such overloading of symbols is quite common in proof theory, so it is best to get used to it. Now, at least for classical logic, I can be taken to mean, respectively, the bottom and top element of the associated boolean algebra, viewed as a lattice. Alternatively, it can be viewed as either the initial or terminal object of your categorical semantics. Anyway, we complete our description of negation by throwing in the De Morgan laws (but these are derivable from the rest of the things we'll throw in later).
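
If you like Curry-Howard readings, here is a quick sanity check in Haskell. Reading ¬A as the type A -> Void, the first of these implications is an honest program. This is only a suggestive sketch (the names are mine, and Haskell's pair type is cartesian rather than a true tensor):

    import Data.Void (Void)

    -- Negation read as "implies falsity".
    type Not a = a -> Void

    -- A ⊗ ¬A → I, with (,) standing in for ⊗ and Void for the false I.
    -- This is just evaluation: apply the refutation to the witness.
    contradiction :: (a, Not a) -> Void
    contradiction (x, k) = k x

    -- The other direction, I → A ⊕ ¬A, has no program when ⊕ is read
    -- as Either; that failure is one reason the multiplicative
    -- disjunction is not Either.
    -- excludedMiddle :: Either a (Not a)   -- not definable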

Anyway, let's move on. We need to say how to introduce conjunction and disjunction. As it turns out, there are two ways in which to do this - this will be important later. Essentially, the distinction is whether we treat conjunction as a cartesian product or as a tensor product, and dually for disjunction. We'll see in just a bit that this distinction does not matter at the moment - classical logic cannot tell the difference! With a view to the future, I'll reserve ⊗ and ⊕ for the tensor interpretation and use x and + for the cartesian/cocartesian interpretation of conjunction and disjunction, respectively.

If we choose the tensor option, then we say that the connectives respect implication componentwise: if A → C and B → D, then A ⊗ B → C ⊗ D and, dually, A ⊕ B → C ⊕ D. For the cartesian interpretation, we would throw in implications A x B → A, A x B → B, A → A + B and B → A + B. We also say that if A → B and A → C, then A → B x C. Dually, if A → C and B → C, then A + B → C. The crucial difference is that the tensor rule combines two separate premises, one for each component, whereas the cartesian pairing rule shares a single premise. The cartesian/cocartesian structure should be clear from these implications.
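
In Haskell, with (,) and Either as stand-ins, the two packages of rules look like this. A sketch only: the function names are mine, and notice that (,) is forced to play both the x role and the ⊗ role, which foreshadows the collapse below:

    -- Cartesian rules: projections, pairing, injections and copairing.
    projL :: (a, b) -> a
    projL = fst

    projR :: (a, b) -> b
    projR = snd

    pairing :: (c -> a) -> (c -> b) -> c -> (a, b)
    pairing f g c = (f c, g c)

    inL :: a -> Either a b
    inL = Left

    inR :: b -> Either a b
    inR = Right

    copairing :: (a -> c) -> (b -> c) -> Either a b -> c
    copairing = either

    -- Tensor rule: combine two *separate* derivations, one per side.
    tensorIntro :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
    tensorIntro f g (x, y) = (f x, g y)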

Now, classical logic also includes certain "structural rules", which describe the structure of sequents. If you don't know what sequents are, don't worry - we won't need them today. There are three basic flavours of structural rules, each of which has a version for conjunction and for disjunction, respectively. The first is "exchange", which says that things commute. That is, we throw in implications A ⊗ B → B ⊗ A and A ⊕ B → B ⊕ A. We're going to keep exchange in all of our systems today. The second type of structural rule is called "weakening". For this, we throw in implications A ⊗ B → A and A ⊗ B → B and, dually, A → A ⊕ B as well as B → A ⊕ B. Notice the similarity with the cartesian introduction rules! We'll exploit this in just a sec. Before that, let's take a peek at the last type of structural rule. It is called "contraction" and says that we have to throw in all implications of the form A → A ⊗ A and A ⊕ A → A.
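
As Haskell functions (again with (,) and Either as stand-ins, names mine), all three structural rules are freely available, because ordinary Haskell puts no linearity discipline on its arrows:

    -- Conjunction versions:
    exchange :: (a, b) -> (b, a)
    exchange (x, y) = (y, x)

    weaken :: (a, b) -> a          -- silently discard b
    weaken (x, _) = x

    contract :: a -> (a, a)        -- silently duplicate a
    contract x = (x, x)

    -- Disjunction duals:
    exchangeD :: Either a b -> Either b a
    exchangeD = either Right Left

    weakenD :: a -> Either a b
    weakenD = Left

    contractD :: Either a a -> a
    contractD = either id id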

With all of these structural rules, the cartesian and tensor interpretations collapse! Let's show that A ⊗ B → A x B. First, we certainly have A → A and B → B. We can then use weakening to obtain A ⊗ B → A and A ⊗ B → B. Then, we can use the cartesian introduction rule to obtain A ⊗ B → A x B. I'll leave the derivation of A x B → A ⊗ B to you (hint: use a contraction). The derivations for disjunction are completely dual.
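
Here is the same derivation, and the hinted converse, written out as composites of the primitive rules in Haskell. Since (,) models both products here, each composite is an identity in disguise, which is the collapse in a nutshell (a sketch; names mine):

    pairing :: (c -> a) -> (c -> b) -> c -> (a, b)
    pairing f g c = (f c, g c)

    contract :: c -> (c, c)
    contract x = (x, x)

    tensorIntro :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
    tensorIntro f g (x, y) = (f x, g y)

    -- A ⊗ B → A x B: weaken twice (fst, snd), then pair.
    tensorToCart :: (a, b) -> (a, b)
    tensorToCart = pairing fst snd

    -- A x B → A ⊗ B: contract, then apply the two projections
    -- componentwise using the tensor rule.
    cartToTensor :: (a, b) -> (a, b)
    cartToTensor = tensorIntro fst snd . contract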

Now, you may argue that you would like your logic to be able to tell the difference between a tensor product and a cartesian product. In order to achieve this, all you need to do is remove weakening and contraction. If you do this, then the tensor and cartesian introduction laws no longer lead to the same thing, so they create different connectives. The ⊗ and ⊕ connectives are the main part of the multiplicative fragment of linear logic, whereas the x and + connectives come from the additive fragment of linear logic. (In Girard's standard notation, the multiplicative disjunction that I am writing ⊕ is written ⅋ and called "par"; the symbol ⊕ is usually reserved for the additive disjunction that I am writing +.) To complete the scenario, all you need to do is add in some units for the additive fragment.
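
GHC's linear arrows give a cheap way to see what removing the structural rules does. A sketch, assuming GHC 9.0 or later with the LinearTypes extension: exchange still typechecks, because each component is used exactly once, while weakening and contraction are rejected outright:

    {-# LANGUAGE LinearTypes #-}

    -- Exchange survives the linearity discipline:
    exchange :: (a, b) %1 -> (b, a)
    exchange (x, y) = (y, x)

    -- Weakening and contraction do not; the typechecker refuses both,
    -- which is precisely "removing the structural rules":
    --
    -- weaken :: (a, b) %1 -> a
    -- weaken (x, _) = x          -- rejected: b is discarded
    --
    -- contract :: a %1 -> (a, a)
    -- contract x = (x, x)        -- rejected: x is used twice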

Well, we have almost finished creating linear logic and it is not even dinner time yet! We have one more thing left to do, however. Recall that Glivenko's Theorem says that you can embed classical propositional logic in intuitionistic logic via the double-negation translation. That is, A is classically provable if and only if ¬¬A is intuitionistically provable. Can we get a similar embedding of classical logic into this logic we have just cooked up?
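
The smallest instance of the Glivenko phenomenon makes a nice Haskell exercise. Excluded middle has no constructive proof, but its double negation does (Curry-Howard again, with ¬A as A -> Void):

    import Data.Void (Void)

    type Not a = a -> Void

    -- A proof term for ¬¬(A ∨ ¬A): given a refutation k of excluded
    -- middle, feed it the right injection, whose argument lets us
    -- build the left injection and apply k once more.
    dnExcludedMiddle :: Not (Not (Either a (Not a)))
    dnExcludedMiddle k = k (Right (\x -> k (Left x)))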

As it stands, there is no obvious way in which to embed classical logic. What we need to do is create a mechanism whereby we can access weakening and contraction without actually adding full weakening and contraction to our logic. Let's worry about weakening and contraction for ⊕ only, for now. Remember, what we want to do is take a classical formula A and translate it into something provable in our logic. But to do so, we need to somehow give it access to weakening and contraction. So, let's have the translation mark A in a way that gives it access to these rules. Let's be crazy and denote this mark by ?. We'll also throw in a rule that says that any formula can be given access to weakening/contraction: A → ?A. There's no point in marking a formula twice, so let's also say that ??A → ?A. Finally, postulate the ⊕ version of weakening and contraction only for formulae that have a ? out front. That is, contraction looks something like ?A ⊕ ?A → ?A. Everything for ⊗ pushes through in a completely dual manner: we mark formulae with a ! to give them access to the ⊗ weakening/contraction rules. These marks are the "exponentials" of linear logic and actually comprise an S4 modality - more on these critters here.
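
To make the bookkeeping concrete, here is a toy deep embedding of the ?-rules just described. Everything in it (Formula, Rule, conclusion) is a name I made up for illustration, not any standard library:

    data Formula
      = Var String
      | Formula :+: Formula   -- the ⊕ of this post
      | WhyNot Formula        -- ?A
      deriving Show

    infixr 5 :+:

    data Rule
      = Dereliction Formula        -- A → ?A
      | Digging Formula            -- ??A → ?A
      | Contraction Formula        -- ?A ⊕ ?A → ?A
      | Weakening Formula Formula  -- B → B ⊕ ?A (one way to phrase it)

    -- Read off the implication that each rule instance proves:
    conclusion :: Rule -> (Formula, Formula)
    conclusion (Dereliction a) = (a, WhyNot a)
    conclusion (Digging a)     = (WhyNot (WhyNot a), WhyNot a)
    conclusion (Contraction a) = (WhyNot a :+: WhyNot a, WhyNot a)
    conclusion (Weakening b a) = (b, b :+: WhyNot a)

For instance, conclusion (Contraction (Var "A")) evaluates to the pair (WhyNot (Var "A") :+: WhyNot (Var "A"), WhyNot (Var "A")), i.e. ?A ⊕ ?A → ?A.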

Apart from some added formalism and some theorems to say that everything works out as it should, you have now created linear logic!

6 Comments:

Anonymous Anonymous said...

Peter, have you seen computability logic? It claims to be the logic of resources just as linear logic does. And the two appear to disagree on many principles. I wonder which one is "right".

7:53 AM  
Blogger Unknown said...

Nice introduction. Reminds me somehow of Boolos' "Gödel's Second Incompleteness Theorem Explained in Words of One Syllable".

9:10 AM  
Anonymous Anonymous said...

Sorry, by Peter I meant Jon.

9:43 AM  
Anonymous Anonymous said...

I have seen a little bit of computability logic and wrote a bit about it here and here. I do plan on continuing that series of posts, but have been sidetracked by other things recently...

11:18 AM  
Blogger John Baez said...

By some coincidence Phil Scott has just been teaching me linear logic and related stuff... and I think I've finally gotten the point.

By some further coincidence, Vladimir Voevodsky is giving lectures at Stanford on the "homotopy lambda calculus".

Can anyone report on what he said?

I have a guess, because Phil Scott noted that in getting a cartesian closed category from intuitionistic logic, one takes sequents

Gamma |- Delta

as objects and equivalence classes of proofs as morphisms. One needs to take equivalence classes to get composition of morphisms to be associative, etc. From an n-categorical viewpoint, taking equivalence classes is evil. Instead, one should use higher morphisms. It would be nicer to throw in 2-morphisms between morphisms, like associators, and so on, thus getting a "weak cartesian closed omega-category" - a concept which, alas, has probably not been defined yet.

For someone like Voevodsky it would be natural to use ideas from homotopy theory instead and define something like a "cartesian closed category up to coherent homotopy". Such a thing should be lurking in the ordinary typed lambda calculus.

Is this what Voevodsky is talking about? Or something else?

9:06 AM  
Anonymous Anonymous said...

The closest to adding higher dimensional structure that I am aware of are the classical categories of Fuhrmann and Pym. They enrich over a partial order. On the one hand, this may seem like too little, since you might want to consider different proof reduction paths. On the other hand, the partial order is induced by the cut elimination reductions. Categorical proof theorists seem to like having their cut elimination procedure be confluent, so I suppose that one could take this as a coherence theorem in the strong "all diagrams commute" sense, thereby justifying using only a partial order.

I suppose that the question then becomes whether proof reduction steps that are not related to cut elimination are interesting. Sure you could, say, make associativity of tensor introduction hold only up to coherent natural isomorphism, but does this tell you anything extra about the meaning of the connective? Personally, I think that it does, though I am not quite sure what.

5:22 PM  
