That Logic Blog

November 19, 2005

Rock On

I don't know about you, but when I think proof theory, I think high impact seminars. I think seminar announcements that say "Proof Theory is a highly explosive force waiting to erupt at every show". What am I talking about? Why, the Proof Theory Band of course! Cutting right to the truth, they say "The ability to create such a diverse style is why Proof Theory captures and holds the interest of fans...". Yes indeed! Be sure to check out the nifty wallpaper while you are there.

November 16, 2005

Modal Space and Computation

It is an old result of Goedel that we can embed propositional intuitionistic logic (Int) in the classical propositional modal logic S4. This latter logic extends classical propositional logic (Prop) via a modality, [], satisfying the following axioms (together with the rule of necessitation: from p, infer []p):

(1) [](p -> q) -> []p -> []q
(2) []p -> p
(3) []p -> [][]p

But in what sense is this result obvious? One route to an answer is provided by knowing that Int has as algebraic models complete Heyting algebras. These are complete lattices in which finite meets distribute over arbitrary joins. Now, a complete Boolean algebra is also a complete Heyting algebra. Since complete Boolean algebras form models of Prop, we may expect classical logic to embed into Int. This is indeed the case; it does so via the double negation translation (Glivenko's Theorem): p is provable in Prop if and only if ~~p is provable in Int. But we want to go the other way! That is, we want to embed Int inside some classically based logic. How to do this?

Let's look at a specific Heyting algebra. Pick your favourite topological space X and let O(X) be the collection of all open sets of X. Then, O(X) is a complete Heyting algebra, with join defined by union and meet almost defined by set intersection. Since O(X) is closed under arbitrary union, we are happy on the join front, but for the meet of arbitrarily many members of O(X), we have to take the interior of the intersection.

Now, O(X) directly forms a model of Int by interpreting conjunction as intersection, disjunction as union, implication p -> q as the interior of the union of q with the complement of p, and negation as the interior of the relative complement. It is a theorem (Stone's representation theorem) that every Boolean algebra is isomorphic to a Boolean algebra of sets. That is, a collection of sets closed under intersection, union and relative complement. So, what we need to do is to add a way of talking about topology to Prop. Have another look at the axioms for S4. What they seem to be saying is that [] forms a (topological) interior operator! This is precisely what we need, since an interior operator uniquely determines a topology of open sets (this is usually stated in terms of the dual notion of a closure operator and holds for a large variety of lattices, not just topological spaces).
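
To make this concrete, here is a minimal Haskell sketch of the Heyting operations on a hypothetical finite topology on the points {1,2,3}; the names (opens, interior, impO and so on) are illustrative choices of mine, not from any particular library.

import Data.List (intersect, union, (\\), sort)

-- Points of a small example space and a hand-picked topology on it.
points :: [Int]
points = [1, 2, 3]

opens :: [[Int]]
opens = [[], [1], [1,2], [1,2,3]]

-- Interior of a set: the union of all open sets contained in it.
interior :: [Int] -> [Int]
interior s = sort (foldr union [] [u | u <- opens, all (`elem` s) u])

-- The Heyting operations on O(X).
meetO, joinO, impO :: [Int] -> [Int] -> [Int]
meetO a b = sort (a `intersect` b)                -- intersection of two opens is open
joinO a b = sort (a `union` b)
impO  a b = interior ((points \\ a) `union` b)    -- a -> b is int((X \ a) u b)

negO :: [Int] -> [Int]
negO a = impO a []                                -- ~a is int(X \ a)

For instance, negO (negO [1]) evaluates to [1,2,3] rather than [1], so double negation elimination fails in this model, exactly as we would expect of an intuitionistic interpretation.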

So, with our S4 modality, we can build a topology on a Boolean algebra. The Goedel translation then becomes obvious. The only thing we really need to worry about is how to translate a negation. If t is our translation function, then we do this by simply taking t(~p) to be []~t(p). That is, we take it to be the interior of the relative complement!
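
Here is a minimal sketch of one common presentation of the Goedel translation, written as a recursive function on formulas; the datatype and constructor names are my own, and the exact clauses (in particular, which subformulas get boxed) vary a little between presentations.

-- Source formulas of Int and target formulas of S4.
data Fml = Atom String | Fml :/\: Fml | Fml :\/: Fml | Fml :->: Fml | Neg Fml
data S4  = At String   | S4 :&: S4    | S4 :|: S4    | S4 :=>: S4   | Not S4 | Box S4

-- The translation t described above.
trans :: Fml -> S4
trans (Atom p)   = Box (At p)                  -- atoms are boxed
trans (a :/\: b) = trans a :&: trans b
trans (a :\/: b) = trans a :|: trans b
trans (a :->: b) = Box (trans a :=>: trans b)  -- implications are boxed
trans (Neg a)    = Box (Not (trans a))         -- the interior of the relative complement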

This topological interpretation of things is very neat and leads to lots of nice results. But we can be a bit brash and crazy and ask what happens if we add an S4 modality to Int directly? Things start getting very interesting indeed! It turns out to be best to switch to category theory at this stage. Instead of working with interior operators, let's work with their dual: closure operators (the diamond of S4). Now, the categorical generalisation of a closure operator is what is called a monad. Since Int has as categorical models Cartesian Closed Categories (CCC), what we are effectively trying to do is to add a monad to a CCC. Moreover, since CCCs are equivalent to simply typed lambda calculi, we seem to be doing something sinister to type theory! What we have, in effect, done is to create what is known as the computational lambda calculus. This is a formal framework for reasoning about the equality of programs and was introduced in the following paper:

Computational Lambda Calculus and Monads; Eugenio Moggi
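
As a small illustration of the monadic reading, here is a Haskell sketch using the Maybe type as the "possibility" modality; the choice of Maybe and the function names are purely illustrative.

-- unit: p -> <>p, lifting a value into a (possibly failing) computation.
unit :: a -> Maybe a
unit = Just

-- bind: <>p -> (p -> <>q) -> <>q, the monadic "let" of the computational
-- lambda calculus, sequencing one computation into the next.
bind :: Maybe a -> (a -> Maybe b) -> Maybe b
bind Nothing  _ = Nothing
bind (Just x) f = f x

-- A tiny effectful program: division that may fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

example :: Maybe Int
example = safeDiv 10 2 `bind` \z -> unit (z + 1)   -- evaluates to Just 6

Here unit plays the role of p -> <>p, and flattening a Maybe (Maybe a) into a Maybe a plays the role of <><>p -> <>p, which is one way of seeing why a monad behaves like a closure operator on types.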

While the computational lambda calculus is, indeed, a rather nifty thing, we seem to have strayed from geometry. Can we get back? We can! One way is to go back to thinking about the box, or interior operator. Categorically, we can see this as a comonad which preserves some monoidal structure (that of conjunction). We can then interpret this topologically as an "augmented simplicial set". Since simplicial sets are a mainstay of topology, we can start thinking geometrically again. This is rather heady, abstract stuff, but you can read all about it here:

On the Geometry of Intuitionistic S4 Proofs; Jean Goubault-Larrecq, Éric Goubault
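
To see the comonad reading of the box in code, here is a minimal Haskell sketch; the class and instance below are hand-rolled stand-ins rather than any particular library's API, with the environment comonad as one concrete example.

-- extract and duplicate mirror the S4 axioms (2) and (3) for the box.
class Comonad w where
  extract   :: w a -> a          -- []p -> p
  duplicate :: w a -> w (w a)    -- []p -> [][]p

-- The environment (product) comonad: a value packaged with a fixed context.
data Env e a = Env e a

instance Comonad (Env e) where
  extract   (Env _ x) = x
  duplicate (Env e x) = Env e (Env e x)

Functoriality of w then lifts any map from p to q to a map from []p to []q.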

So, once again, we have derived a huge benefit by asking the question "Why is this result obvious?". Always a good thing to ask!

November 11, 2005

Formal Philosophy

I have become increasingly interested in philosophical work that incorporates various bits and pieces of logic. Thus, it came as a very pleasant surprise when Vincent Hendricks passed on details of the following fascinating-looking collection of interviews with a group of eminent philosophical logicians, semanticists and others. I'll reproduce the announcement here for those of you who like to think about formal approaches to philosophy, or philosophical methodology in general.

http://www.formalphilosophy.com/ :
Formal Philosophy
by Vincent F. Hendricks and John Symons

Interviews with:
Johan van Benthem, Brian F. Chellas, Anne Fagot-Largeault, Melvin Fitting, Dagfinn Føllesdal, Haim Gaifman, Clark Nøren Glymour, Adolf Grünbaum, Susan Haack, Sven Ove Hansson, Jaakko Hintikka, H. Jerome Keisler, Isaac Levi, Ruth Barcan Marcus, Rohit Parikh, Jeff Paris, Gabriel Sandu, Krister Segerberg, Wolfgang Spohn, Patrick Suppes, Timothy Williamson.

Formal Philosophy is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in formal philosophy. We hear their views on formal philosophy, its aim, scope and how their work fits in these respects.

"This is a fabulous collection. Hendricks and Symons have performed an important service to the entire philosophical community. The interviews are not only rewarding in and of themselves but they will help the reader understand what has been going on and has been achieved in the past fifty years. "
--Ernie Lepore, Rutgers, NJ, USA

"Why do you do philosophy that way? Do you believe all philosophy could be done that way? Do you think it should be done that way? These are questions one seldom asks, except perhaps at dinner. Yet there is a lot one could learn from the answers, especially when they come from philosophers who do have a distinguished way of doing their job. Formal Philosophy identifies one such way and collects the answers of its eminent practitioners—not the quick answers one might give over an entrecôte, but the answers one gives when seriously prompted to reflect upon their daily profession. An enticing, provocative, completely novel way of surveying the landscape of contemporary philosophy."
--Achille Varzi, Columbia University, NY, USA

Read extracts of the interviews at www.formalphilosophy.com
Buy the book from Amazon
Formal Philosophy
by Vincent F. Hendricks & John Symons
Automatic Press / VIP
ISBN 8799101300 (paperback)
ISBN 8799101300 (hardback)
264 pages

November 01, 2005

Doing it Deeply

One of the salient features of logical formalisms such as sequent calculi, tableaux and natural deduction is that the logical inference rules attack the main connective of a formula, either introducing it or eliminating it. This is to be contrasted with the situation in, say, elementary algebra where one may apply properties such as associativity and commutativity anywhere within a formula.

One may wish to give a logical formalism the ability for its rules to act arbitrarily deeply within a formula. But how can one do this in a sensible way? For many, the most obvious reply would be to utilise term rewriting. This consists of a bunch of "rewrite rules" which allow one to rewrite one term to another anywhere deep within a structure. For example, in a transformational grammar, one may have a rule such as VP --> V + NP, which rewrites a verb phrase into a verb followed by a noun phrase (that is, into a transitive verb together with its object).
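
Here is a toy Haskell sketch of this kind of rewriting, with a single hypothetical rule applied at the first position in a tree where it fires; the datatype and the VP rule encoding are purely illustrative.

-- Terms as labelled trees.
data Tree = Leaf String | Node String [Tree] deriving Show

-- The rule VP --> V + NP, viewed as a partial function on subterms.
rule :: Tree -> Maybe Tree
rule (Leaf "VP") = Just (Node "VP" [Leaf "V", Leaf "NP"])
rule _           = Nothing

-- Apply the rule once, anywhere in the tree (outermost, leftmost first).
rewriteOnce :: Tree -> Maybe Tree
rewriteOnce t =
  case rule t of
    Just t' -> Just t'
    Nothing ->
      case t of
        Leaf _      -> Nothing
        Node f kids -> Node f <$> firstRewrite kids

-- Rewrite the first child (left to right) that admits a rewrite.
firstRewrite :: [Tree] -> Maybe [Tree]
firstRewrite []     = Nothing
firstRewrite (k:ks) =
  case rewriteOnce k of
    Just k' -> Just (k' : ks)
    Nothing -> (k :) <$> firstRewrite ks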

The problem with just using term rewriting is that there does not, initially, seem to be a good notion of analytic proof, which is provided in the sequent calculus by cut elimination and in natural deduction by normalisation. This is a big concern, since it is analyticity that leads to important metalogical results such as consistency, interpolation and so on. One solution that has been pioneered recently goes under the name of Calculus of Structures. A good introduction to this formalism is provided by the following two papers:

A System of Interaction and Structure, Alessio Guglielmi

Deep Inference and Symmetry in Classical Proofs, Kai Brünnler

By massaging a one-sided sequent calculus so that its rules can act arbitrarily deeply (effectively turning them into rewrite rules), one gains deep inference while clinging to some notion of analyticity. This is evidenced by the fact that one can still prove all of the same metalogical results.

While this seems promising, to date the Calculus of Structures has only been convincingly adapted to logics which have an involutive negation. In other words, anything with an intuitionistic flavour has been more or less left out in the cold (though there are current efforts to rectify this situation). Moreover, many of the broad concepts are subsumed by categorical logic, while missing out on the important categorical notion of coherence, as is pointed out in the following paper:

Deep inference proof theory equals categorical proof theory minus coherence, Dominic Hughes

Does the above paper sound the death knell for the Calculus of Structures? Probably not, though it does suggest that one needs to understand categorical proof theory when attempting to build a system with deep inference. If one does not, one faces the problem of not knowing whether one is just reinventing the wheel. If one does, then fitting one's formalism within the context of existing work on things such as categorical logic leads to a clearer understanding of the contribution of the work and also makes the job of selling it to people in adjoining research areas that much easier.