That Logic Blog

August 26, 2005

Coherence

Take a category. Now add a tensor product. Slap in a unit for the tensor product. Add a dose of associativity for the tensor product. While you're at it, stir in some commutativity also. Fold in some diagrams saying that the associativity, commutativity and unit behave properly. What you have now is a symmetric monoidal category. This is the basic building block for making categorical semantics of substructural logics such as, say, linear logic.
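
As a concrete example (a sketch of mine, assuming nothing beyond the definition above), the category of Haskell types and functions is symmetric monoidal, with (,) as the tensor and () as the unit. The canonical maps look like this:

    -- Haskell types with (,) as tensor and () as unit form a
    -- symmetric monoidal category. The canonical maps:
    assoc :: ((a, b), c) -> (a, (b, c))   -- associativity
    assoc ((x, y), z) = (x, (y, z))

    swap :: (a, b) -> (b, a)              -- commutativity
    swap (x, y) = (y, x)

    unitl :: ((), a) -> a                 -- left unit
    unitl ((), x) = x

    unitr :: (a, ()) -> a                 -- right unit
    unitr (x, ()) = x

Coherence, coming up next, says for instance that any two ways of rearranging ((a, b), c) into (c, (b, a)) built out of these maps are equal as functions.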

If you're smart as a whip, you'll be able to prove a coherence theorem. In grandiose wording, this says that "every diagram commutes". More down to earth, any diagram built only out of the associativity, commutativity and unit maps commutes. This was first formulated and proven by Kelly and Mac Lane in response to a question of Steenrod (when is there a canonical map between modules of certain "shapes" over a fixed ring?). First, those Cat maniacs formulated the problem in terms of monoidal categories (heck, they cooked up the categories on the spot!). Then they ran into a bit of a problem. Sure, there are some shapes that have a canonical map between them. The problem, however, is constructing the map. Generally you need to compose together a few maps to get it. Only, sometimes an intermediate shape appears in the composition that lies in neither the domain nor the codomain. And the size of this object is unbounded. Oh, woe is canonicity!

Notice something there? This is precisely the same as the situation with the cut rule. The problem with it is that it leads to intermediate formulae of unbounded size in the proof. Kelly and Mac Lane noticed this analogy, but were not able to make it into anything more than just an analogy. Instead, they proved coherence by defining a notion of a constructible map and proving that every canonical map is constructible. Sounds familiar, eh? This is precisely the point of cut elimination.

Now, via Kreisel, Mac Lane started up a correspondence with Grigori Mints, who was still in the Soviet Union at the time. Mints was able to turn the analogy with cut elimination into more than just an analogy. He showed Mac Lane how the terms arising in a monoidal category are, for all intents and purposes, the same as the terms arising in a certain relevant logic. By proving cut elimination for this logic, Mints was able to conclude coherence immediately! For a short summary of the correspondence between Mac Lane and Mints, see:

"Why Commutative diagrams coincide with equivalent proofs", Saunders Mac Lane, Contemporary Mathematics vol 13, 1982; 387--401

In short, if you have any interest in proof theory and have not read this article yet, then what are you doing still sitting at your computer? Go get it!

What blew me away about this article is that it covers pretty much all of the category theory wizardry used in (exponential-free) linear logic, yet predates Girard's paper by five years. Moreover, the events it describes happened well before the article itself was published! Crazy!

August 23, 2005

Computer Mathematics

I am rather swamped with things to do at the moment, so my computability logic escapades have been put on hold. This recent upswing in the number of things to do has led me to wish that my computer could do some of them for me. Perhaps the hard bits such as, oh I don't know, proving my theorems. Yeah. That would be grand.

There are two possible routes to follow when getting computers to attack theorems. On the one hand, we can ask the computer to do everything on its own. We just give it the statement of the theorem and away it churns, creating a proof. Most successful theorem provers of this kind use, more or less, souped-up versions of resolution. The most prominent example is OTTER, which handles first order logic with equality.

What's resolution? Say you have a clause C that contains some literal x and another clause D that contains the negation ~x. Then the resolution rule says to pass to the union of C\{x} and D\{~x}. A resolution refutation is a derivation that consists of a sequence of applications of the resolution rule and ends in the empty clause. As such, it is a negative procedure: feed in the negation of your statement. If your statement is a theorem, then its negation is not satisfiable and resolution will be able to pick up on this.
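
Here is a tiny Haskell sketch of the rule, with clauses represented as sets of literals (the names, like resolve, are mine, purely for illustration):

    import qualified Data.Set as Set

    -- A literal is a propositional variable with a polarity.
    data Lit = Pos String | Neg String deriving (Eq, Ord, Show)

    -- Negation of a literal.
    neg :: Lit -> Lit
    neg (Pos p) = Neg p
    neg (Neg p) = Pos p

    -- A clause is a set of literals, read disjunctively.
    type Clause = Set.Set Lit

    -- The resolution rule: if x is in c and ~x is in d, pass to
    -- (c \ {x}) ∪ (d \ {~x}); otherwise the rule does not apply.
    resolve :: Lit -> Clause -> Clause -> Maybe Clause
    resolve x c d
      | Set.member x c && Set.member (neg x) d =
          Just (Set.delete x c `Set.union` Set.delete (neg x) d)
      | otherwise = Nothing

    -- A refutation has succeeded once we derive the empty clause.
    refuted :: Clause -> Bool
    refuted = Set.null

For instance, resolving {x, y} against {~x, z} on x yields {y, z}; a refutation keeps resolving until the empty clause appears.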

As you might suspect, just using resolution is not a particularly good way of deriving theorems that mathematicians are actually interested in. Moreover, working only in first order logic vastly limits what you can even express to begin with. So not much real analysis can be formalised, for example.

The other approach is to have some amount of user interaction. The most prominent approach to this style of theorem proving is dubbed the "LCF philosophy". LCF stands for the Logic of Computable Functions, which was introduced by Dana Scott around 1970. LCF is also the name of a theorem prover, originally developed in Lisp at Edinburgh. Its philosophy is to have a small kernel of procedures that are trusted to be correct. Typically, these would be some set of rules characterising higher order logic or some sort of typed lambda calculus. Everything else in the system is built on top of this kernel and compiled down to functions that only use the kernel rules, much like how a modern operating system is built. While LCF itself is no longer around, many famous modern interactive theorem provers are its direct descendants, for example Isabelle and HOL4. Incidentally, the language ML was designed and built specifically for the task of interactive theorem proving, which led inter alia to the first incarnation of Isabelle. LCF-style theorem proving has recently shown up on mathematical radars with Georges Gonthier's verification of the proof of the Four Colour Theorem:

Georges Gonthier, "A Computer Checked Proof of the Four Colour Theorem".
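
The kernel idea is easy to sketch in a modern typed functional language. The following toy module is my own illustration (not the actual LCF or HOL kernel): the theorem type is abstract, so the only way to manufacture a value of type Thm is via the exported rules.

    module Kernel (Formula (..), Thm, concl, assume, impIntro, impElim) where

    -- Formulae of a minimal implicational logic.
    data Formula = Var String | Imp Formula Formula deriving (Eq, Show)

    -- The abstract theorem type: a sequent "hypotheses |- conclusion".
    -- The constructor is not exported, so Thm values can only be
    -- produced by the trusted rules below.
    newtype Thm = Thm ([Formula], Formula)

    concl :: Thm -> ([Formula], Formula)
    concl (Thm sq) = sq

    -- Axiom: A |- A
    assume :: Formula -> Thm
    assume a = Thm ([a], a)

    -- Discharge: from G |- B infer G \ {A} |- A -> B
    impIntro :: Formula -> Thm -> Thm
    impIntro a (Thm (g, b)) = Thm (filter (/= a) g, Imp a b)

    -- Modus ponens: from G |- A -> B and D |- A infer G ∪ D |- B
    impElim :: Thm -> Thm -> Maybe Thm
    impElim (Thm (g, Imp a b)) (Thm (d, a'))
      | a == a' = Just (Thm (g ++ filter (`notElem` g) d, b))
    impElim _ _ = Nothing

Tactics and decision procedures built on top can be as clever, and as buggy, as they like: if one hands back a Thm, that value was necessarily assembled from assume, impIntro and impElim.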

Alas, interactive theorem provers do not free me from doing any work. Quite the opposite, in fact. This is because, in order to give a fully verified proof of a theorem in an interactive prover, one must grapple with the "gap problem". What's that? In almost any nontrivial proof, there are statements to the effect of "it is trivial to see that...", "the reader may verify that...", "the other 1,987,456 cases go through without essential changes" and so on. Now, you can't just say that to a computer. You have to fill in all of the gaps! It would be nice if these systems were able to fill in the gaps automatically but, alas, they can't. Oh dear. At any rate, for a light overview of the sorts of problems involved and the work that has been done, look no further than:

Henk Barendregt and Freek Wiedijk, "The Challenge of Computer Mathematics".

August 09, 2005

Computability Logic 1

In between much excitement, I have started teaching myself computability logic. As I mentioned before, this logic is semantically motivated and, as yet, does not have a completely satisfactory proof theory.

The idea behind computability logic is to model interactive computation. In order to do this, we set up the situation as a game between the user and the computer. In this scenario, we want the computer to win, since this means that the problem is computable and there is an algorithm for solving it. Play proceeds via a sequence of labelled moves, or labmoves. These are strings representing moves, labelled with the player who made them. A run is a finite or infinite sequence of labmoves and a position is a finite run.
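
Transcribed into Haskell types (my own rendering, taking moves to be strings), the basic data read:

    -- The two players: the machine (computer) and its environment (user).
    data Player = Machine | User deriving (Eq, Show)

    -- A move is a string; a labmove is a move labelled by its author.
    type Move = String
    type LabMove = (Player, Move)

    -- A run is a finite or infinite sequence of labmoves; a position
    -- is a finite run. (Haskell lists conveniently allow both.)
    type Run = [LabMove]
    type Position = [LabMove]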

It is quite easy to define one of the simplest notions of game involved, that of a constant game. This consists of a set of runs not containing a special symbol, say ♠. This set satisfies the condition that a run is contained within it iff all of its nonempty finite initial segments are there too. This set codes up the legal runs of the game. We force each player to play according to the rules by stipulating that the first player to make an illegal move immediately loses. That is, say the game is in position Γ and I make the move α. If Γ is a legal position but (Γ, α) is not, then I lose. Boo!

Together with this collection of legal runs, a constant game also contains a function which sends each run to one of the players, indicating who has won. There are various finiteness conditions that one can stipulate on the set of runs, but I will introduce these as and when they are needed.
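
Continuing the sketch above, a constant game pairs the legal runs with the winner function. Representing the set of legal positions by a predicate is a simplification of mine:

    -- A constant game: a prefix-closed collection of legal positions
    -- together with a function assigning a winner to each run. The
    -- convention that the first player to move illegally loses is
    -- imposed on top of this data.
    data ConstantGame = ConstantGame
      { isLegal :: Position -> Bool
      , winner  :: Run -> Player
      }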

One important point to note is that we have not stipulated that, at any given stage of the game, only one player can move. If this happens to be the case, we say that the game is strict. In general, we will not restrict ourselves to strict games as these do not model all the sorts of interactive computation we may find ourselves performing.

One way to think of what is going on is to imagine interacting with a server. We keep sending it queries and it keeps replying. Of course, nothing is stopping, say, Bob from being annoying and sending lots of requests before receiving any replies from the server. Moreover, at any given stage the server may reply to any of Bob's previous queries or Bob could send yet another request. No sweat, we can model that, since we do not require the game to be strict.

Let's look at how to recover classical logic and Church-Turing style computability. If every legal run in a constant game has length at most k, then we say that the game has depth k. Classical propositional logic corresponds to depth 0 games. There are precisely two of these. One of them returns the computer as the winner immediately and the other returns me as the winner immediately. If we model one of these as truth and the other as falsity, then we have classical logic. That is, all that matters is the input - we immediately get an answer. This is not particularly interesting in itself, but it will eventually lead to classical logic being an elementary fragment of computability logic.
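
In the toy encoding above, the two depth 0 games come out as:

    -- Only the empty position is legal and the winner is fixed up
    -- front. These play the roles of truth and falsity.
    gTrue, gFalse :: ConstantGame
    gTrue  = ConstantGame { isLegal = null, winner = const Machine }
    gFalse = ConstantGame { isLegal = null, winner = const User }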

Modelling traditional computation is slightly trickier, though not too hard. What we do is model it as a depth 2 game. If we do nothing, then the computer has won (since it has nothing to answer for!). The next level consists of us handing the computer an input. At this stage, we are at depth 1 and we have won the game! The next level consists of all of the computer's possible responses. Now, what does it mean for the computer to have a winning strategy for this game? Precisely that, no matter what our input is, one of the computer's possible replies is the correct one. So, traditional computation is a depth 2 game in this setup. This is not to be confused with how we traditionally model nondeterministic computation by building a tree or some such. In that situation, the tree represents the actual computation steps that the computer is performing. In computability logic, we are only interested in the interaction going on, not in the actual computation steps.
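
In the same toy encoding, computing a function f becomes the following depth 2 game; the machine has a winning strategy exactly when it can always reply to an input x with f x:

    -- Church-Turing computation of f as a depth-2 game: the user may
    -- move an input, after which the machine may move an output.
    computation :: (String -> String) -> ConstantGame
    computation f = ConstantGame { isLegal = legalPos, winner = decide }
      where
        legalPos pos = case pos of
          []                        -> True   -- nothing played yet
          [(User, _)]               -> True   -- an input has been given
          [(User, _), (Machine, _)] -> True   -- ...and answered
          _                         -> False
        decide run = case run of
          []                        -> Machine -- nothing to answer for
          [(User, _)]               -> User    -- unanswered input
          [(User, x), (Machine, y)]
            | y == f x              -> Machine -- correct answer
          _                         -> User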

Next time: Non-constant games and operations on games.

August 02, 2005

Proofs as Games

Proof theory has undergone many transformations over the hundred or so years of its existence. It seems reasonable to divide its development roughly into periods, though note that my partition of history is rather arbitrary! In particular, important open problems from each period are still being actively researched today, and I haven't mentioned stuff like proof complexity. Nevertheless, here it is:
  1. 1900-1930: The birth of proof theory, in the guise of Hilbert's program for formalising mathematics. The end of this period is marked, of course, by the incompleteness theorems. However, this is not to say that the program was completely without use. On the contrary, much current work in computer science can be seen as an extension of work arising from this period. For instance, Russell and Whitehead's Principia did not give us a foundation for mathematics, but it did so for programming language theory by giving us type theory. Moreover, Hilbert's epsilon calculus is still going strong and is in active use in the proof assistant community.
  2. 1931-1986: This period predominantly consisted of analysing the ordinal strength of increasingly complex systems. A representative problem of this period was Takeuti's Conjecture, which is roughly the assertion that second order logic has cut elimination. Or, if you want to be fancy, you can say analysis instead of second order logic. This conjecture implies, for instance, the consistency of full second order arithmetic (PA2). Of course, Takeuti's conjecture cannot be proven from within PA2. As such, it was widely believed to be false for quite a while. This belief turned out to be wrong. The first proof was given by Tait in 1965, building on previous work of Schütte. This proof was semantic, a style of proving cut elimination that is often unsatisfactory since it does not provide enough information. In 1972, Jean-Yves Girard provided a syntactic proof of the theorem and subsequently gave birth to System F, which has continued to hold importance for programming language theory.
  3. 1987-2000: I choose to mark the beginning of this period with Girard's paper on linear logic. This view may be somewhat contentious. The reason is that, if one focuses on the exponential-free fragment of linear logic, then these logics had been around for many years under various names (relevant, paraconsistent, substructural, ...). However, the main feature of Girard's approach is more philosophical: we should study proofs as the objects of interest, not the set of provable assertions. Consequently, our semantics should be a semantics of proofs. That is, formulae should correspond to objects in a space and proofs to maps between them. In this way, we can begin to ask what distinguishes proofs of the same assertion. Developing a good semantics of proofs is an ongoing problem and much exciting work remains to be done. Already at this stage of history, there were indications that classical proof systems such as sequent calculi would buckle under the pressure of applications. They managed to hold up, but only just...
  4. 2001-Present: Welcome to the new millennium! With the majority of logicians finding themselves in computer science departments, proof theory has become a powerful weapon for attacking computational problems. It turns out that the world of computation presents many more challenges for proof theory than the world of truth. For instance, in a concurrent or distributed system, it matters in what order processes are executed. This leads to a whole host of concerns, since nice properties evaporate: conjunction, for example, no longer commutes. Moreover, computers are now networked, with computations regularly being distributed across many machines. What does interactive computation mean? What, pray tell, is a computational problem anyway, and how can we capture this in logic?
Typically, for a computational logic, nobody knows how to give a cut-free sequent calculus or similar system. This has led to a host of different formalisms being invented or brought in from the cold, with the interrelationships amongst them not completely understood: display logic, hypersequents, the calculus of structures, proof nets, cirquent calculus, ...

The problem is fundamental. If we are going to take seriously the claim that proofs correspond to programs, then we need a system that facilitates the study of the dynamics of proofs. Arm-wavy claims such as "linear logic is resource sensitive" are not sufficient. Logicians and computer scientists increasingly desire a reasonable model of interactive computation. Two related systems present themselves at the moment: Computability Logic, introduced by Giorgi Japaridze, and Ludics, introduced by Girard. The striking feature of each is that they are based on the idea of modelling computations/proofs by certain games. Computability Logic presents a major problem to proof theory, since it is semantically motivated and developed, with no clear way of capturing it in a syntactic system. As for Ludics, it is not clear precisely how it relates to computation.

As I am currently teaching myself both computability logic and ludics (I spent most of the morning printing and binding imposingly heavy papers!), I have decided to make a series of posts keeping a log of my progress. If anyone else is interested in learning about these systems, let me know and we can maybe do an internet-based reading group thingy!