That Logic Blog

May 24, 2005

Dichotomy

Way back at the dawn of complexity theory, it was proven that the problem of deciding whether an arbitrary 3-CNF formula is satisfiable is NP-Complete. Here, 3-CNF means that the formula is in conjunctive normal form and each clause has three literals. Say that your 3-CNF formula is built from k variables. One way to model the satisfiability problem is to attach to each clause the set of all k-tuples that satisfy that clause. The satisfiability problem then boils down to deciding whether these sets have a nonempty intersection.
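To make that concrete, here is a minimal brute-force sketch in Python (exponential in k, so purely illustrative; the encoding of clauses as (variable, sign) pairs is just my own convention):

from itertools import product

# A clause is a list of (variable index, sign) pairs; for example, the
# clause (x0 OR NOT x1 OR x2) is [(0, True), (1, False), (2, True)].
def satisfying_tuples(clause, k):
    """The set of all k-tuples over {0,1} that satisfy the clause."""
    return {t for t in product((0, 1), repeat=k)
            if any(t[i] == (1 if sign else 0) for i, sign in clause)}

def satisfiable(clauses, k):
    """Satisfiability as set intersection: the formula is satisfiable
    iff the per-clause sets of satisfying k-tuples share an element."""
    sets = [satisfying_tuples(c, k) for c in clauses]
    return bool(set.intersection(*sets)) if sets else True

# (x0 OR x1 OR x2) AND (NOT x0 OR NOT x1 OR NOT x2)
print(satisfiable([[(0, True), (1, True), (2, True)],
                   [(0, False), (1, False), (2, False)]], 3))  # True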

We can abstract things a little here. The tuples we considered above are, of course, k-ary relations on the alphabet {0,1}. Start by taking a collection of relations on {0,1} of arbitrary, but finite, arity. Now, build a formula by using existential quantification of variables as well as conjunctions of atomic formulas over those relations. In other words, conjunctions of things that say stuff like "(x1,x2,x3) is in relation R". What we have now is something called a boolean constraint satisfaction problem. We may very well be interested in the complexity of such problems.
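The same brute-force idea works for an arbitrary boolean constraint language. In this sketch (again, the names and encodings are just illustrative), a relation is a set of tuples and a constraint pairs a relation with the variables it talks about:

from itertools import product

def csp_satisfiable(constraints, k):
    """Brute-force check of a boolean CSP instance over k variables.
    `constraints` is a list of (R, scope) pairs, where R is a set of
    0/1 tuples and scope is a tuple of variable indices; the pair is
    read as the conjunct "(x_scope[0], ..., x_scope[n-1]) is in R"."""
    for assignment in product((0, 1), repeat=k):
        if all(tuple(assignment[i] for i in scope) in R
               for R, scope in constraints):
            return True
    return False

# The ternary not-all-equal relation, used as a constraint twice:
NAE = {t for t in product((0, 1), repeat=3) if len(set(t)) > 1}
print(csp_satisfiable([(NAE, (0, 1, 2)), (NAE, (1, 2, 3))], 4))  # True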

Let X be a set of these sorts of relations. Then, the "Schaefer Dichotomy Theorem" says that deciding whether the constraint problem for X is satisfiable is in P if and only if one of the following conditions holds:
(1) Every relation in X contains a tuple in which all entries are 0
(2) Every relation in X contains a tuple in which all entries are 1
(3) Every relation in X is definable by a conjunction of Horn clauses, that is, by a CNF formula in which each clause contains at most one unnegated variable
(4) Every relation in X is definable by a CNF formula in which each clause contains at most one negated variable (dual-Horn)
(5) Every relation in X is definable by a 2-CNF formula
(6) Every relation in X is the set of solutions of a system of linear equations over {0,1}, considered as the two-element field (see the sketch below)
Otherwise, the problem is NP-Complete.
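As an illustration of why case (6) is tractable, here is a sketch of Gauss-Jordan elimination over the two-element field; the encoding of equations is my own, and it only decides solvability:

def gf2_solvable(rows):
    """Decide whether a system of linear equations over the two-element
    field has a solution.  Each row is (coeffs, b), encoding the
    equation sum_i coeffs[i] * x_i = b (mod 2).  Elimination runs in
    polynomial time, which is what puts case (6) in P."""
    rows = [(list(c), b) for c, b in rows]
    n = len(rows[0][0]) if rows else 0
    pivot = 0
    for col in range(n):
        # find a row with a 1 in this column and swap it up
        for r in range(pivot, len(rows)):
            if rows[r][0][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue
        pc, pb = rows[pivot]
        # clear this column from every other row
        for r in range(len(rows)):
            if r != pivot and rows[r][0][col]:
                c, b = rows[r]
                rows[r] = ([x ^ y for x, y in zip(c, pc)], b ^ pb)
        pivot += 1
    # the system is inconsistent iff some row now reads 0 = 1
    return all(any(c) or b == 0 for c, b in rows)

# x0 + x1 = 1, x1 + x2 = 1, x0 + x2 = 0 has the solution (1, 0, 1)
print(gf2_solvable([([1, 1, 0], 1), ([0, 1, 1], 1), ([1, 0, 1], 0)]))  # True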

This result is contained in:

The complexity of satisfiability problems, Thomas J. Schaefer

We might be interested in extending this "dichotomy" to arbitrary constraint satisfaction problems (CSPs). This is considered in the following paper, where it is also shown that there is a strong connection to the algebraic notion of "clones":

Classifying the Complexity of Constraints using Finite Algebras, Bulatov, Jeavons and Krokhin

They restate Schaefer's dichotomy theorem in algebraic terms, classifying a set of boolean relations by the set of operations (polymorphisms) that preserve every relation in it. It turns out that such a set of operations corresponds to a tractable problem (i.e., one in P) if it contains an essentially non-unary operation; otherwise the problem is NP-Complete. What is an essentially non-unary operation? Well, it is an n-ary function whose value depends on more than one of its coordinates, unlike, say, the projection functions.
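Since there are only finitely many n-ary boolean operations for each n, being essentially unary can be checked directly. Here is a small sketch (using one common convention, under which constant operations also count as essentially unary):

from itertools import product

def essentially_unary(f, n):
    """True iff the n-ary boolean operation f agrees everywhere with
    g(x_i) for some coordinate i and some unary g - as the projections
    do.  (On this convention, constant operations count too.)"""
    for i in range(n):
        table = {}  # attempted reading of f as a function of t[i] alone
        if all(table.setdefault(t[i], f(*t)) == f(*t)
               for t in product((0, 1), repeat=n)):
            return True
    return False

# Majority depends on all of its coordinates; a projection does not.
maj = lambda x, y, z: int(x + y + z >= 2)
print(essentially_unary(maj, 3))                # False
print(essentially_unary(lambda x, y, z: y, 3))  # True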

This is all quite neat. However, times have changed and P is no longer considered all that tractable. Can we get a more refined analysis of the cases in the Dichotomy Theorem? The answer is yes; this is done in the following paper, which also uses clones:

The Complexity of Satisfiability Problems: Refining Schaefer's Theorem, Allender, Bauland, Immerman, Schnoor and Vollmer

For those who prefer categorical language, a clone is what is also known as an "algebraic theory" in the sense of Lawvere. That is, more or less, a category, C, with finite products in which every object is a finite power of a single object. A model of the theory is then a functor from C to the category of sets that preserves finite products.
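More precisely (under one standard set of conventions - there are variations), a clone C on a set A yields a category T_C whose objects are the natural numbers and whose morphisms are, in LaTeX notation,

\mathrm{Hom}_{T_C}(m, n) = \{ (f_1, \ldots, f_n) : \text{each } f_j \text{ is an } m\text{-ary operation in } C \},

with composition given by substitution. The object n is the n-fold product of the object 1, so T_C has finite products, and an algebra for the clone is then the same thing as a finite-product preserving functor from T_C to the category of sets.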

May 19, 2005

Logicomp

Anthony Widjaja To has started Logicomp, a blog devoted to the links between logic and computation and, in particular, finite model theory and descriptive complexity. In his first couple of posts, Anthony looks at different subsystems of first order logic that are decidable and yet surprisingly strong.

It's nice to see the blogging community growing and incorporating a nice cross section of modern logic. I am also quite pleased to grab my first blog scoop :)

May 14, 2005

Quantifying Symmetry

This week, my first published paper came out online. In it, I consider the problem of making precise the vague notion that one object may have more qualitatively different symmetries than another. It's expository in nature and (I hope) easy to read. There's nothing new in it, but I am still quite pleased that my first publication was an invited paper!

Quantifying Symmetry, Jonathan A. Cohen

Its contents are more or less a different spin on a piece of my undergraduate dissertation. For the adventurous:

Small Base Groups, Large Base Groups and the Case of Giants, Jonathan Cohen

May 11, 2005

Linguistic Tricks

So, I have been doing a little linguistics. A shocking revelation, I know, but I may as well jabber on about it a bit.

Way back in the days before colour television, Jim Lambek turned the linguistic notion of categorial grammar into logic. We've all seen something of the sort in school - nouns, adjectives, subjects and silly old intransitive verbs - to name but a few of the categories you may very well have been subjected to. The question then is how members of these categories can be glued together to form a valid sentence. Or, to turn the problem on its head, how we can get a computer to decide whether or not a sentence is grammatical. Lambek addresses these questions in his paper:

The Mathematics of Sentence Structure, J. Lambek

Ever since, the system presented in that paper has been known as the Lambek Calculus. If you are familiar with sequent calculus, the Lambek Calculus fits in like this. Start with Ye Olde Calculus for the negation free fragment of intuitionistic logic. First off, get rid of contraction and weakening. What happens is that conjunction shatters into an "additive" version and a "multiplicative" version. Keep only the multiplicative version. Then, do something dastardly and devious - drop exchange as well. If you do this, then implication breaks up into two versions, both of which are residuated with respect to your conjunction. What this means is that "x and y" entails z if and only if x entails "z over y", and likewise if and only if y entails "x under z" - let's be silly and call the second one "coimplication". Lambek denotes these little beauties by / and \ respectively. If you like category theory, this just means that Hom(x and y, z) is isomorphic to Hom(x, z / y) and also to Hom(y, x \ z). So, we have some adjunctions floating around.
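In symbols, writing \bullet for the multiplicative conjunction, the residuation laws and their Hom-set versions read, in LaTeX notation,

x \bullet y \vdash z \iff x \vdash z / y \iff y \vdash x \backslash z,

\mathrm{Hom}(x \bullet y, z) \cong \mathrm{Hom}(x, z / y) \cong \mathrm{Hom}(y, x \backslash z).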

Linguistically, however, getting rid of all of the structural rules is a bit on the dodgy side. For instance, Pentus has shown that the Lambek Calculus only generates context free grammars. Most unsatisfactory! What we can do, however, is to let the structural rules back in, in a controlled and regimented way (much like linear logic...). So, for instance, we may have a unary operator that says that everything in its scope has access to weakening. Here is an example where this all makes sense:

(1) Take the book to John

Let us now postulate that the noun phrase "John" has access to weakening. In which case, we derive that the following is grammatically valid:

(1') Take the book to John, Mary, Jane and Susan

This makes a lot of sense linguistically. Logically, however, these structural "modalities" open up a can of worms. For instance, if we make the conjunction commutative, by adding back exchange, then the two sorts of implication collapse into a single implication. If we now add modalities for contraction and weakening, then we get linear logic! Although...exchange is not really warranted on linguistic grounds, because we do really want the two sorts of implication to be different.
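To give the flavour, here is what a controlled weakening rule for a unary modality \nabla might look like in sequent style (a sketch only, modelled on the weakening rule for ! in linear logic; the rules in the literature are more refined):

\frac{\Gamma, \Delta \vdash C}{\Gamma, \nabla A, \Delta \vdash C} \quad (\nabla\text{-weakening})

Only formulas marked by the modality may be weakened in, so the rest of the system keeps its resource sensitivity.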

I suppose the area where this sort of stuff is best understood is the proof theoretic side of things, where Belnap's "Display Logic" provides a nice formalism for systematically adding in these sorts of modalities and obtaining cut elimination for free. Much of what is known is contained in this thesis:

Reasoning with polarity in categorial type logic, Raffaella Bernardi

I'll speak about the semantic side of things another time, when I have grokked a bit more...

May 06, 2005

Logician?

(Definition via Uncyclopedia)

A logician is an entertainer who tries to make people laugh by playing tricks and acting, not unlike clowns do, but clowns take their jobs much more seriously than most logicians. Most traditional logicians try to reduce as much as they can to absurdity, using reductio ad absurdum, thus making falsum the sound basis of all knowledge.

In most cases, logicians have a very high EQ, and their social skills are more developed than those of average people. At parties, they are often in the center of the attention, dancing and telling wild stories and a lot of rubbish to entertain people. Their favorite trick is to ask people to state something, which they then reduce to absurdity within seconds.

May 02, 2005

Doing Proofs

Casually reading through Greg Restall's book on substructural logics, I came across a curious line:

"However, proofs are not so easy to construct in tree form - often it is easier to construct proofs writing the consecutions in a list."

Personally, I have always found the "list" way of constructing proofs to be rather arbitrary and unintuitive. If I have access to a sequent(esque) system, then I invariably construct the proof by doing a backwards "proof search". But maybe that is because I cut my (logical) teeth working on theorem proving software.

I'm interested in seeing what method other people prefer, so leave a comment if you have a preference!