That Logic Blog

February 22, 2005

Goodbye Syntax?

I am now settled in, so I can resume posting. My goal is to average one post a week from now on, so there should be a bit more activity on this little piece of cyberspace.

Onto more interesting things. There are many different methods for performing proofs in logic: sequent calculi, natural deduction and related systems, for instance. However, they all have one thing in common: they manipulate syntax. This is fair enough, since a proof is a syntactic object, right? Well, yes. However, when we work in, say, propositional logic, we have access to a particularly perspicuous semantics and we know that the syntax matches up with the semantics very nicely. So why not perform proofs in the semantics? Of course, the problem is that in order to find a tautology, we pretty much have to try out all possible valuations (barring something obscene like P=NP). Well, now there is a third way: combinatorics and, in particular, graph theory.
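Just to make the pain of the naive semantic route concrete before moving on, here is a minimal Python sketch (the encoding of formulae as Python functions is my own convenience, nothing standard): it decides tautology-hood by grinding through all 2^n valuations.

    from itertools import product

    def is_tautology(formula, variables):
        # A formula is represented as a function from a valuation
        # (a dict mapping variable names to booleans) to a boolean.
        # We simply try all 2^n valuations - exponential, as promised.
        for values in product([False, True], repeat=len(variables)):
            if not formula(dict(zip(variables, values))):
                return False
        return True

    def implies(a, b):
        return (not a) or b

    # Peirce's law ((p -> q) -> p) -> p is a classical tautology:
    peirce = lambda v: implies(implies(implies(v['p'], v['q']), v['p']), v['p'])
    print(is_tautology(peirce, ['p', 'q']))  # True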

Proofs Without Syntax - Dominic Hughes

The above paper does a good job of explaining the necessary background, so I will only dwell on it briefly. The crux of the paper is in defining a "combinatorial" proof to be a special sort of graph homomorphism (a map between graphs that preserves edges) between very special graphs. The main result is a soundness and completeness type of affair: a proposition is a tautology if and only if it has a combinatorial proof. In other words, we have a bit more structure for performing "semantic" proofs. It remains to be seen whether this can be done reasonably efficiently.
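Hughes's graphs carry far more structure than plain graphs, so the following is only the barest bones of the idea; still, the notion of an edge-preserving map is easy to state in code (a Python sketch, with graphs given as sets of directed edge pairs):

    def is_homomorphism(h, edges_G, edges_H):
        # h maps vertices of G to vertices of H; every edge (u, v)
        # of G must land on an edge (h[u], h[v]) of H.
        return all((h[u], h[v]) in edges_H for (u, v) in edges_G)

    # A path a-b-c folds onto the single edge 0-1 (undirected edges
    # are stored as pairs in both directions):
    edges_P = {('a','b'), ('b','a'), ('b','c'), ('c','b')}
    edges_K2 = {(0, 1), (1, 0)}
    print(is_homomorphism({'a': 0, 'b': 1, 'c': 0}, edges_P, edges_K2))  # True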

February 09, 2005

Oligomorphic Magic

I hadn't planned on making a new post until after moving but here we go. It seems that there are a few people with a philosophical bent reading this, so I will try and make a philosophical post soon. Today, however, is mathematical.

A couple of days ago, I was chatting with some group theorists and they mentioned the strange notion of an oligomorphic group. From a purely permutation group perspective, it does not seem like a particularly interesting object, so why has there been such a hubbub of late? The answer lies in logic and, in particular, model theory.

To make this post a manageable length, I assume familiarity with the definition of a group. Given a set X, the symmetric group on X, denoted Sym(X), is the group of all bijections mapping X to itself with function composition as the group operation. A permutation group, G, is a subgroup of Sym(X) for some X. Given a point x in X, the orbit of x is the set of all images of x under maps in G. It is a standard result that X is partitioned by the set of all orbits of G on X. The flavour of this area of group theory is markedly different depending upon whether X is finite or infinite. We shall deal with the infinite case here.
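In the finite case, at least, orbits are easy to compute, which makes for a nice sanity check of the definitions. A little Python sketch (permutations as dicts, nothing more):

    def orbit(x, generators):
        # Closure of {x} under the generating permutations; since the
        # generators have finite order, this is the full group orbit.
        seen, frontier = {x}, [x]
        while frontier:
            y = frontier.pop()
            for g in generators:
                if g[y] not in seen:
                    seen.add(g[y])
                    frontier.append(g[y])
        return seen

    # The group generated by (0 1 2)(3 4) partitions {0,...,4} into orbits:
    g = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}
    print(orbit(0, [g]))  # {0, 1, 2}
    print(orbit(3, [g]))  # {3, 4}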

Now, a model, M, is a tuple consisting of a set m called the domain, together with a collection of (possibly no) functions from m^n to m, another collection of (possibly no) n-ary relations on m, as well as some (possibly no) elements of m that are distinguished as constants. An automorphism of M is a permutation of m that preserves all functions and relations of M. For example, if (N,+,=) denotes the natural numbers together with addition and equality, then an automorphism of this structure is just a semigroup automorphism. The collection of all automorphisms of M forms a group under composition, the automorphism group of M, denoted by Aut(M). Given two models M and N, we say that they are isomorphic if there is a bijection mapping the domain of one to the domain of the other which preserves functions and relations.
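Again, for a finite structure one can check the definition directly. Here is a hypothetical sketch for a model with a single binary relation (a graph, in effect):

    def is_automorphism(pi, domain, relation):
        # pi is a permutation of the domain (as a dict); it is an
        # automorphism iff it preserves the relation in both directions.
        return all(((pi[a], pi[b]) in relation) == ((a, b) in relation)
                   for a in domain for b in domain)

    # Rotating the 4-cycle graph on {0,1,2,3} by one step:
    cycle = {(0,1),(1,0),(1,2),(2,1),(2,3),(3,2),(3,0),(0,3)}
    rot = {0: 1, 1: 2, 2: 3, 3: 0}
    print(is_automorphism(rot, range(4), cycle))  # True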

Much like in the first post of this blog, the automorphism group is going to tell us something logically interesting. Except, instead of looking at automorphisms of formulae, we are going to look at automorphisms of models. A set of sentences (formulae with no free variables), S, is omega-categorical if any two models of S with countably infinite (omega) domains are isomorphic. Given a model M, the theory of M, denoted Th(M), is the set of all sentences true in M. Can we find omega-categorical theories?

It turns out that we can use the information about Aut(M) in order to find such theories. It is a result of Engeler, Ryll-Nardzewski and Svenonius that the following are equivalent for a countably infinite model M with domain m over a countable language:

1. Th(M) is omega-categorical
2. For each natural number k, there are only finitely many orbits of Aut(M) on the set of k-tuples of elements of m.

This second condition is what is meant by an "oligomorphic" group: it has only finitely many orbits on the k-tuples of its permutation domain. We can also go the other way. Given an oligomorphic group, we can construct the so-called "canonical model" and this is going to have an omega-categorical theory.
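Condition 2 can be played with directly in the finite case (where, of course, every group is trivially oligomorphic): act on k-tuples coordinatewise and count orbits. A Python sketch along the lines of the orbit computation above:

    from itertools import product

    def orbits_on_tuples(domain, generators, k):
        # Partition the k-tuples into orbits under the coordinatewise
        # action of the group generated by the given permutations.
        remaining = set(product(domain, repeat=k))
        orbits = []
        while remaining:
            orb, frontier = set(), [remaining.pop()]
            while frontier:
                t = frontier.pop()
                if t in orb:
                    continue
                orb.add(t)
                frontier.extend(tuple(g[x] for x in t) for g in generators)
            remaining -= orb
            orbits.append(orb)
        return orbits

    # Sym(3) on pairs: one orbit of equal pairs, one of distinct pairs.
    gens = [{0: 1, 1: 0, 2: 2}, {0: 1, 1: 2, 2: 0}]
    print(len(orbits_on_tuples(range(3), gens, 2)))  # 2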

Interesting results start to flow between model theory and permutation group theory at this stage. For instance, from some elementary model theory, it follows that if a group G is the automorphism group of a model of an omega-categorical theory and H is any subgroup of G, then there are only finitely many groups between H and G (that is, groups K such that H < K < G).

All of this is the starting point of the following wonderful book, which I, sadly, do not have the time to browse through properly before jetting off on Saturday:
Automorphisms of First Order Structures

February 04, 2005

Learning for Logic

The long break in blogging is because I am preparing to move to another university next week, which happens to be a few thousand kilometres away. I am not sure how many people actually read this thing, though - leave a comment to let me know you are here (if you wish to remain anonymous, then say something like "Hi from anonymous person number n+1", where n is the number of the last anonymous person).

I'll hold off on the symmetry stuff for a while, because I am still absorbing a few references from some kindly local mathematicians, which may answer one of my questions from last time (at least partially). So, I'll talk about something else.

While browsing through the library the other day, the following caught my eye:

Donald N. Cohen: Knowledge Based Theorem Proving and Learning.

(no relation, honest). This turned out to be a copy of Cohen's PhD thesis, wherein he describes his admirable attempt to make an automated theorem prover that is less stupid than usual. Sadly, since the thesis dates from 1981 (making it older than me!), a large chunk of the work concerns efforts to make the program more memory efficient, despite his "large" 256kb memory space :) The basic premise is that humans hardly ever prove theorems starting from the axioms (the exception being when one is cleaning up the trivial consequences of a new definition, such as that of a group). However, this idea has not been adopted much in the automated reasoning community, for some good reasons, mind you.

Cohen first sets about creating a database that can store new theorems, as well as new *techniques*. The latter is particularly interesting: anyone who has done tedious high school maths homework knows that once you learn the trick to solving a problem, you can blindly follow the recipe whenever you notice a new instance of that problem, instead of solving it from scratch every time.

Cohen focuses his attention on something that often occurs in a backtracking search - you end up solving the same subproblem over and over again. The computer science community has developed many heuristics for avoiding this situation, including backjumping (don't backtrack to the previous node, backtrack to the previous significant point) and look-ahead (what would happen if I picked this option?). Cohen's approach is to have the computer "learn" about the subproblem and how to solve it efficiently. A more recent approach is the following:

Andrew Slater: Investigations into Satisfiability Search.

The basic approach there is to use relevant logic, which essentially says that when proving an implication, the assumptions should all be "relevant" to the conclusion. For example, while the statement "It is raining and I am wearing my leather shoes, so it is wet outside" is technically correct, what shoes I am wearing is not really relevant. Using this philosophy, Slater creates an algorithm for reordering a search tree, so that assumptions are relevant to "conclusions", thus sidestepping the problem of the computer re-solving subproblems.
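At bottom, Cohen's idea of remembering solved subproblems is a form of memoisation, and it is easy to illustrate on a toy search problem. A hypothetical Python sketch (subset sum, which has the same "same subproblem over and over" character as the searches Cohen worries about):

    from functools import lru_cache

    def subset_sum(weights, target):
        # Backtracking search that 'learns': once a subproblem
        # (position, remaining target) is solved, the answer is cached
        # and the search never redoes that work.
        weights = tuple(weights)

        @lru_cache(maxsize=None)
        def solve(i, remaining):
            if remaining == 0:
                return True
            if i == len(weights) or remaining < 0:
                return False
            # Branch: take weights[i], or skip it.
            return solve(i + 1, remaining - weights[i]) or solve(i + 1, remaining)

        return solve(0, target)

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
    print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False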

The "learning" in Cohen's thesis is hard to spot. What he calls learning seems to be the ability for the program to remember succesful proof attempts and the pattern of the formulae that it worked on (he works with the typed lambda calculus, but this is not really important). The trick here is to then avoid a "combinatorial explosion", where the prover ahs too many options at any given stage, so spends far too much time trying different alternatives. Cohen makes some steps towards giving the prover "semantic" knowledge, so that it can reason with models of the formulae (for example, when trying to prove a statement in, say, topology, one usually has a stock of different spaces to test the statement out on, to see what makes it tick). Cohen does not make much progress on this area (and who can blame him, given the number of tasks he sets himself to complete!). A more recent piece of software along the last lines, is SCOTT (Semantically Constrained OTTer).

So, what more can be said about making automated theorem provers more intelligent, using all that we have discovered in the 24 years since Cohen's thesis? Certainly, machine learning has come along in leaps and bounds, with, for example, support vector machines proving to be very good at classification tasks. So too have knowledge representation and database theory. As far as building models of theories goes, there are now very powerful systems for doing modern algebra (groups, rings, fields, ...), which can be used to generate as many models as you please. Thus, it seems that the time is ripe to develop software that reasons and learns a lot more like humans...