That Logic Blog

March 18, 2005

To cut or not to cut...

In a Gentzen-like formulation of a logic, we take as axioms only those consecutions of the form A |- A. For a lot of sensible logics, we can even take A to be atomic. We then postulate a few rules for introducing connectives on both sides of |-, together with some structural rules. Most of these rules satisfy the subformula property, which is to say that any formula or structure appearing in the numerator (the premises) of the rule is a subformula of something appearing in the denominator (the conclusion). A case in point is the contraction rule, one of whose variants is:

X, A, A |- Y
------------
X, A |- Y

In the above, X and Y are structures (multisets of formulae) and A is a formula. The rule says that if we can conclude Y from X together with two copies of A, then we can conclude Y from X and a single copy of A (this rule is present in classical logic but not in, say, linear logic).

One rule is a bit conniving and does not obey the subformula property: it is called cut, one version of which says that if you have X |- A and also A |- Y, then you can "cut out" A and conclude X |- Y (this is nothing but the transitivity of deduction).
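Written in the same style as the contraction rule above, this version of cut is:

X |- A    A |- Y
----------------
X |- Y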

But this is ok in sensible logics, for in these logics cut is admissible, which is to say that adding it as a rule does not give us anything new. This is quite an impressive result, since the cut rule is basically a rule which says that we can use lemmas. That it is admissible means that we do not need to use them. Actually, it says more than that. Good proofs of its admissibility show how to explicitly transform a proof with cuts into one without. Crikey! To see cut elimination in action on a theorem from real analysis, look no further than the following (which also includes an introduction to sequent calculus and proof theory in general):

Making Proofs Without Modus Ponens, A. Carbone and S. Semmes

One very important consequence of cut elimination for a logic is consistency (although sometimes you need to use a bit of transfinite induction...). Eliminating the cut rule also ensures that (formal) proofs satisfy the subformula property. So, a proof of a formula (or structure, or sequent, or whatever) gives a method for constructing it. This leads, in one direction, to the field of logic programming, where one performs a backtrack search for a cut-free proof. See:

A Tutorial on Proof Theoretic Foundations of Logic Programming, Paola Bruscoli and Alessio Guglielmi
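To get a feel for what searching backwards for a cut-free proof looks like, here is a minimal toy sketch in Python (my own illustration, not the calculus used in the tutorial above). The rules applied here are all invertible, so this particular search never needs to backtrack; genuine logic programming works with calculi where the choices really matter.

# Toy backward (root-first) search for a cut-free proof of a classical
# propositional sequent gamma |- delta.  Formulas are nested tuples, e.g.
# ('imp', ('atom', 'p'), ('atom', 'q')).  Every rule used is invertible,
# so applying any applicable rule is safe and the search decides validity.

def provable(gamma, delta):
    # Axiom: some atom occurs on both sides of |-.
    left_atoms = {f for f in gamma if f[0] == 'atom'}
    if any(f in left_atoms for f in delta if f[0] == 'atom'):
        return True
    # Decompose the first non-atomic formula on the left of |-.
    for i, f in enumerate(gamma):
        rest = gamma[:i] + gamma[i + 1:]
        if f[0] == 'not':
            return provable(rest, delta + [f[1]])
        if f[0] == 'and':
            return provable(rest + [f[1], f[2]], delta)
        if f[0] == 'or':
            return provable(rest + [f[1]], delta) and provable(rest + [f[2]], delta)
        if f[0] == 'imp':
            return provable(rest, delta + [f[1]]) and provable(rest + [f[2]], delta)
    # Decompose the first non-atomic formula on the right of |-.
    for i, f in enumerate(delta):
        rest = delta[:i] + delta[i + 1:]
        if f[0] == 'not':
            return provable(gamma + [f[1]], rest)
        if f[0] == 'and':
            return provable(gamma, rest + [f[1]]) and provable(gamma, rest + [f[2]])
        if f[0] == 'or':
            return provable(gamma, rest + [f[1], f[2]])
        if f[0] == 'imp':
            return provable(gamma + [f[1]], rest + [f[2]])
    # Only atoms remain and no axiom applies: this branch is unprovable.
    return False

# Peirce's law ((p -> q) -> p) -> p is classically valid, so |- peirce
# has a cut-free proof:
p, q = ('atom', 'p'), ('atom', 'q')
peirce = ('imp', ('imp', ('imp', p, q), p), p)
print(provable([], [peirce]))  # True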

The other side of the coin is that transforming a proof with cuts into one without cuts can lead to an exponential blowup in the length of the proof: obviously a bad thing.

So, what is a logician to do? One option is to only use a modified form of the cut rule, called an "analytic cut". In this version, we are only allowed to cut on a formula if it is a subformula of something in the conclusion. OK, so this obviously lets us keep cut and still have the subformula property. So all is well, right? Wrong! Surprisingly enough, there is trouble with this even in the case of propositional logic: a system with analytic cut is no better than truth tables. In technical terms, analytic tableaux cannot p-simulate truth tables.

Are Tableaux an Improvement on Truth-Tables? Marcello D'Agostino. Journal of Logic, Language and Information 1: 235 -- 252, 1992.

Then again, this is not too surprising, since if a system with analytic cut had polynomial-size proofs of every propositional tautology, then we would have coNP = NP. More on this another time...

March 15, 2005

Semantics is (coNP) hard, syntax is (GI) easy

It seems that posting regularly to a blog is harder than I thought. My excuse is that I have been quite busy learning a lot of background for what is essentially a new research area for me, as well as more mundane things such as furnishing a house. I am also organising a regular informal gathering of academics from across campus (maths, cs, philosophy, linguistics) to discuss the nature of proof. It should, hopefully, facilitate some interesting debates.

Anyway, enough of that. Let's get down to business. A little while ago I cleared up my understanding of symmetries of propositional logic. As it turns out, there are actually two notions of symmetry in the literature, and the authors of papers are not always at pains to clearly elucidate which one they mean. This can be quite confusing! Let me try and settle things here. There are essentially two notions: a semantic one and a syntactic one. Throughout, p is a propositional wff over n variables.

Semantics: We consider the boolean function f: {0,1}^n -> {0,1} induced by p. A semantic automorphism is a permutation of the xi's that results in the same boolean function. That is, if f' results from f by merely twiddling around the arguments of f, then the "twiddle" is an automorphism if f' has the same truth table as f. The group of all automorphisms of f is denoted by Aut(f). Buried in a paper of Emerson and Sistla (Symmetry and Model Checking) is the result that the problem of determining whether Aut(f) is the full symmetric group, Sn, is coNP-Complete (recall that Sn is the group of *all* permutations of x1, x2, ..., xn).
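To make the semantic notion concrete, here is a small Python sketch (the names and representation are my own) that checks whether a permutation of the argument positions is a semantic automorphism, by brute-force comparison of truth tables. The exponential loop over all assignments is exactly the sort of work the coNP-hardness says we cannot expect to avoid in general.

from itertools import product

# Check whether the permutation pi of the argument positions is a semantic
# automorphism of the boolean function f of n arguments, by comparing the
# truth table of f with that of f applied to the permuted arguments.
# pi is a tuple: position i of the permuted input is taken from position pi[i].

def is_semantic_automorphism(f, n, pi):
    for bits in product((0, 1), repeat=n):
        permuted = tuple(bits[pi[i]] for i in range(n))
        if f(*bits) != f(*permuted):
            return False
    return True

# Example: f(x, y, z) = (x and y) or z is symmetric in x and y, but not in y and z.
f = lambda x, y, z: (x and y) or z
print(is_semantic_automorphism(f, 3, (1, 0, 2)))  # swap x and y: True
print(is_semantic_automorphism(f, 3, (0, 2, 1)))  # swap y and z: False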

While their proof is not particularly difficult, there really is a much cleaner method. I'll just sketch the argument. First, determining whether a given permutation is in Aut(f) is coNP-hard. Why? Well, deciding whether a permutation is an automorphism amounts to checking that two truth tables are equivalent, and an efficient method for doing that would let us solve validity too. The problem is easily seen to be in coNP, so it is coNP-Complete. Now, to see whether all permutations are in Aut(f) is not too hard. It is an old piece of group theoretic folklore that an n-cycle and a transposition moving two consecutive points of that n-cycle generate Sn (an n-cycle is a permutation which sends x1 to x2 to x3 to ... to xn to x1, and a transposition is a permutation which just swaps two points). Now, pick an arbitrary n-cycle and test whether it is an automorphism. If it is, then we need to check at most n transpositions (all adjacent pairs in the n-cycle) to see if we have Sn. So this problem is coNP-Complete also.
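The group theoretic folklore is easy to sanity-check computationally for small n (again, this is just my own illustration): the n-cycle together with a transposition of two consecutive points of it really does generate all n! permutations.

from math import factorial

# Check for small n that the n-cycle (0 1 2 ... n-1) and the adjacent
# transposition (0 1) generate the whole symmetric group Sn.  A permutation
# is a tuple pi with pi[i] the image of i; composition below is "g after h".

def generated_group(gens, n):
    identity = tuple(range(n))
    group = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = tuple(g[h[i]] for i in range(n))
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

for n in range(2, 7):
    n_cycle = tuple(list(range(1, n)) + [0])         # 0 -> 1 -> ... -> n-1 -> 0
    adjacent_swap = tuple([1, 0] + list(range(2, n)))  # swap the points 0 and 1
    assert len(generated_group([n_cycle, adjacent_swap], n)) == factorial(n)
print("the n-cycle and an adjacent transposition generate Sn for n = 2..6")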

A natural question now is whether or not every finite group arises as the automorphism group of some boolean function. It turns out that the answer is no. I'll sketch the proof if anyone is interested, but will omit it for now.

Syntax: We assume that our formula p is given in conjunctive normal form. A syntactic automorphism is a permutation of the literals appearing in p which preserves the syntactic structure, i.e. maps the set of clauses to itself (so a permutation that merely swaps two clauses counts as an automorphism). This is different from our previous notion of automorphism. For example, consider the CNF formula whose only clause is [x, -y]. Then the permutation that swaps x with -y (and -x with y) maps the clause to itself, so it is a syntactic automorphism, but it is not a permutation of the variables and so is not a semantic automorphism in the earlier sense. I mentioned in the first post that it is possible to construct a coloured graph whose automorphism group is the same as the syntactic automorphism group of p. This can be done in polynomial time. Then, assume that we can compute generators for the automorphism group of this graph. This problem is equivalent in complexity to the graph isomorphism (GI) problem. This is interesting, because GI falls in NP but it is not known whether it is NP-Complete or in P. Anyway, given these generators, we can use some group theoretic algorithms to solve the problems from before in polynomial time. That is, we can test whether an arbitrary permutation is a syntactic automorphism and whether the syntactic automorphism group consists of all possible permutations.
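Here is a sketch in Python of one standard recipe for the coloured graph (roughly the construction used by CNF symmetry-detection tools; the exact encoding varies from paper to paper, so take the details as my own choices):

# Sketch of a coloured graph whose colour-preserving automorphisms correspond
# to syntactic automorphisms of a CNF formula.  Clauses are frozensets of
# nonzero integers, with -v denoting the negation of variable v.  One vertex
# per literal (colour 0) and one per clause (colour 1); each literal is joined
# to its complement, and each clause vertex is joined to the literals it contains.

def cnf_to_coloured_graph(clauses, num_vars):
    colour = {}                   # vertex -> colour class
    edges = set()                 # set of frozenset({u, v}) pairs
    for v in range(1, num_vars + 1):
        colour[('lit', v)] = 0
        colour[('lit', -v)] = 0
        edges.add(frozenset({('lit', v), ('lit', -v)}))
    for i, clause in enumerate(clauses):
        colour[('cls', i)] = 1
        for lit in clause:
            edges.add(frozenset({('cls', i), ('lit', lit)}))
    return colour, edges

# Example: the one-clause formula (x1 OR NOT x2) from above.
colour, edges = cnf_to_coloured_graph([frozenset({1, -2})], 2)
print(len(colour), "vertices and", len(edges), "edges")

Feeding the resulting graph to a graph automorphism tool (nauty, saucy and friends) and reading the generators back off the literal vertices is the polynomial-time step mentioned above; the genuinely hard work is hidden inside the graph automorphism computation, which is where GI enters.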

The moral of the story is that the semantic aspects of logical symmetry are inherently coNP-Complete, whereas the syntactic aspects are at most as hard as graph isomorphism. So, barring something unforeseen like coNP = NP or P = NP, semantics is harder than syntax. Computationally speaking, at least.

March 03, 2005

Sign me up

For those who are interested in logic, but are not affiliated with a university, it can be quite expensive to keep up with things. After all, journal subscription prices are often out of the reach of individuals. The purpose of this post is to point out some options for individuals. Many are free, but you'll go a lot further by spending a little bit.

Of course, on the sidebar of this blog are links to electronic resources which make their papers/articles available for free. This is a good thing. It promotes wider and faster dissemination of results. I'll briefly describe each of these below.

Australasian Journal of Logic: This is managed by the philosophy department at The University of Melbourne and is edited by Greg Restall. It is fully refereed and freely available and seeks to cover the breadth of modern pure and applied logic. Currently, the majority of the articles deal with philosophical logic.

ACM Transactions on Computational Logic: This is obviously put out by the Association for Computing Machinery. As such, its focus is predominantly computational. However, given the nature of the subject, many articles should be of interest to philosophical logicians as well. For instance, both proof theory and modal logic have philosophical and computational branches. While the concerns of each area are often quite distinct, one gets a fuller picture of the area by "seeing how the other half lives". The articles here are usually of a rather technical nature.

Bulletin of Symbolic Logic: This is put out quarterly by the Association for Symbolic Logic. It contains many helpful book and article reviews as well as many good expository papers. It is well worth browsing through the archives. While you are at it, why not join the association? As well as your very own print version of the bulletin, you receive the Journal of Symbolic Logic and significantly reduced subscription rates for the Journal of Philosophical Logic. If you are a student, they also provide some funding support for attending ASL affiliated conferences.

arXiv.org: This is a major preprint server. By sending an email to math@arxiv.org with the body 'add LO' (without the quotes), you can receive daily email updates on mathematical logic preprints. By sending a message with the same body to cs@arxiv.org, you can sign up for computational logic as well.

The European Association for Theoretical Computer Science publishes a quarterly bulletin. It contains many interesting columns. For computational logicians, the logic in computer science, formal specification, computational complexity and formal language theory columns often contain interesting nuggets. You do need to pay an annual membership fee, but the bulletin is more than worth it (often running to a few hundred pages!). The sister publication, published by the ACM, is the Special Interest Group on Algorithms and Computation Theory News. It has recently rekindled its logic in computer science column. The two organisations currently have a reciprocal arrangement, whereby you receive a discount if you join both. Oh and if, like me, you live outside of Europe and/or the US, it is well worth spending the little bit extra for airmail.

An interesting quirk with the above publications is something that Richard Zach has mentioned on his blog: Europeans seem to be a lot keener on logic than Americans. American theoretical computer science tends to be very combinatorial, whereas its European counterpart is often a lot more logical. One certainly gets this impression when reading both publications.

Finally, there is a very good free online magazine devoted to philosophical logic and its applications: PhiNews.