
Does Gödel's Incompleteness Theorem Show That Minds Are Not Machines?

Gödel's Incompleteness Theorem is considered by many to be the most important discovery ever made in the philosophy of mathematics, principally because it completed the search for a unified, complete, provable system of mathematics - by showing that such a search must always ultimately fail. Perhaps most surprising of all is the relative simplicity of the theorem - and yet this simplicity, achieved without resort to especially complex calculations or formulations, is precisely the theorem's power: it is incredibly broad, and applies to any system capable of expressing standard arithmetic. Put simply, Gödel's Incompleteness Theorem states that for any consistent system of this kind - any Turing machine capable of algorithmically calculating whether a sentence is provable according to the rules governing the machine - there will be true sentences that cannot be proved within the system of the machine.

The argument goes roughly as follows (based upon Roger Penrose's exposition):

1) Let C0(n), C1(n), C2(n), C3(n)... be families of computations that a Turing machine can perform, each of which stops when it has reached its target. For example, C87(n) could be the computation 'Find the lowest number greater than n that is the product of two square numbers'. (A sketch of the whole construction in code follows the argument below.)

2) Let C0(0), C0(1), C0(2), C0(3), etc., therefore, be particular computations belonging to the family C0(n).

3) Let A(x, n) be a computation (similar to a Cx(n) computation) whose target is a (correct) proof that the computation Cx(n) will never stop.

4) From (1), (2) and (3), we know that:

stops(A(x, n)) -> ¬stops(Cx(n)).

5) Using Cantor's diagonal slash, we examine the computation where x = n. Namely:

stops(A(n, n)) -> ¬stops(Cn(n)).

6) Since A(n, n) depends on just a single variable, n, A(n, n) must itself be one of the families of computations, like Cx(n). Let us call it Ck(n). So A(n, n) = Ck(n).

7) Again using Cantor's diagonal slash, examine the case where n = k:

A(k, k) = Ck(k)

8) So now we have shown that:

stops(A(k, k)) -> ¬stops(Ck(k)).

9) Or, substituting A(k, k) = Ck(k):

stops(Ck(k)) -> ¬stops(Ck(k)).

Since (and this might be slightly worrying) if Ck(k) were to stop that would mean that it doesn't stop, we can see that Ck(k) will never stop. The important conclusion to be drawn from this is that whilst we can see that Ck(k) does not stop, there is no algorithm within the system - not even A itself, since A(k, k) just is Ck(k) - that can prove this fact.
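To make the shape of steps (1) to (9) concrete, here is a minimal sketch in Python. Everything named in it - C_87, A_stub, make_diagonal - is my own illustration, and the soundness of A is an assumption the sketch cannot actually supply (which is, of course, the point):

    import math

    def C_87(n):
        """Step (1)'s example: find the lowest number greater than n that
        is the product of two square numbers.  (Any such product is a
        perfect square, so this computation always reaches its target.)"""
        m = n + 1
        while True:
            r = math.isqrt(m)
            if r * r == m:
                return m
            m += 1

    def A_stub(x, n):
        """Stand-in for step (3)'s A: a sound search that stops only upon
        finding a correct proof that Cx(n) never stops.  No real program
        has this property in general; the stub simply searches forever,
        which is what a sound A does whenever no proof exists."""
        while True:
            pass

    def make_diagonal(A):
        """Step (6): A(n, n) depends on a single variable, so it is itself
        one of the one-argument computations -- call it Ck."""
        def C_k(n):
            return A(n, n)
        return C_k

    # Steps (7)-(9): run C_k on its own index k.  If C_k(k) -- that is,
    # A(k, k) -- were to stop, A would have proved that C_k(k) never
    # stops: a contradiction.  So C_k(k) never stops; and since A(k, k)
    # is that very computation, A can never prove this fact, even though
    # we can see that it holds.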

Interestingly, these sentences, referred to as 'Gödel sentences', are not limited to arbitrarily complex mathematics, but include some very simply stated sentences. The most famous of these is probably 'This sentence cannot be proved in the system' - if it could be proved in the system then it would be false, so we can tell that it cannot be proved in the system: which proves to us that it is true.
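This self-referential reasoning can even be checked mechanically. In the following sketch (my own illustration, not part of Gödel's machinery), G stands for the Gödel sentence and Prov for a hypothetical provability predicate, and the fixed-point and soundness constraints are the sketch's assumptions:

    from itertools import product

    # Assumptions of the sketch:
    #   fixed point: G is true exactly when G is not provable (how G is built)
    #   soundness:   whatever is provable is true
    for G, Prov in product((False, True), repeat=2):
        fixed_point = (G == (not Prov))
        soundness = (not Prov) or G      # Prov(G) -> G
        if fixed_point and soundness:
            print(f"G = {G}, Prov(G) = {Prov}")

    # Prints only: G = True, Prov(G) = False -- true but unprovable.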

The argument is compelling, and yet, say some philosophers and mathematicians, it leads to an even more startling conclusion. Namely:

10) Since we can see that Ck(k) will not stop, yet this cannot be proved by any machine within any algorithmic system, our minds cannot simply be algorithmic machines.

So if Gödel's Theorem holds - and it seems very likely that it does - and thus any Turing machine is necessarily incomplete, do we really have to come to such a startling conclusion about the philosophy of mind? There are certainly a number of responses that I feel need to be discussed before we commit to this conclusion.

The first response might be that we simply don't know that Ck(k) will not stop - that we do not have any insight that a machine lacks. This objection can be construed in two ways - the first being intuitionistically, which I shall come back to in just a minute. The second construal, however, is simply to argue that, since (9) seems to be a contradiction, we don't know either way whether or not Ck(k) will stop. This seems simply to miss the point of the argument. We can see, and it should be fairly clear, that either Ck(k) doesn't stop, in which case it doesn't stop, or it does stop, in which case (by (9)) it doesn't stop. Either way it doesn't stop, and this is proven. This is even clearer with the 'This sentence cannot be proved within the system' example: either it can't be proved within the system, or it can; but if it can be proved in the system then it is false, and this proves it can't be - but not in a way that could be formulated within the system.
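Indeed, this two-case reasoning is simple enough to spell out mechanically. A trivial sketch (again my own illustration) checks that an implication of the form 'stops -> does not stop' leaves only one possibility:

    def implies(p, q):
        return (not p) or q

    # Step (9) says: stops(Ck(k)) -> ¬stops(Ck(k)).  Check which truth
    # value of 'stops' is compatible with that implication.
    for stops in (False, True):
        if implies(stops, not stops):
            print(f"stops(Ck(k)) = {stops}")

    # Prints only: stops(Ck(k)) = False -- so Ck(k) does not stop.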

The intuitionistic response, however, is much more interesting. Intuitionists argue that truth is inextricably tied to knowledge, and that it is unacceptable simply to assert that something whose truth value we happily admit ignorance of must still have one. For an intuitionist it is not that we just don't know whether something is true or false before we have examined all the necessary evidence, but that until we do so there isn't really a fact about it at all. If the intuitionist is correct, then it isn't fair to complain that 'Ck(k) will not stop' cannot be proven within the system: if we will never reach a position where we have examined enough of the evidence to say that this is so, then there simply isn't a fact of the matter about whether it is true or not. And if that is the case then clearly we can't complain that we can prove something the system cannot, because in some sense within the system there is no fact to be left unproven.

Unfortunately, as tempting as a retreat to intuitionism might seem, there are a number of serious problems with doing so. Firstly (though this is not an enormously good philosophical argument), intuitionism is quite a serious and substantial position to take, with many further metaphysical and ontological commitments, and we'd surely much prefer to escape from (10) without going to such extreme lengths - lengths which would seem as bad as accepting (10) in the first place. Worse, though, is that intuitionism might well spell the end of mathematical systems anyway. Without the law of the excluded middle - and thus without reductio ad absurdum, which intuitionism preaches we must give up - forming a mathematical system is at least going to be a more complex task, and our minds will from the outset most probably be very different from how we conceive of machines - after all, our minds will intimately govern what facts about the world do or do not exist, which machines presumably do not. I would suggest, therefore, that becoming an intuitionist should not be considered the safest of escapes from (10), and may lead to far worse problems in and of itself.
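To see concretely what the intuitionist gives up, one standard illustration is the linearly ordered three-valued Heyting algebra on the values 0, 1/2 and 1, in which both excluded middle and reductio-style double-negation elimination fail at the intermediate value. A minimal sketch:

    from fractions import Fraction

    ZERO, HALF, ONE = Fraction(0), Fraction(1, 2), Fraction(1)

    def implies(x, y):
        # Heyting implication on the linearly ordered values {0, 1/2, 1}
        return ONE if x <= y else y

    def neg(x):
        return implies(x, ZERO)

    P = HALF  # a proposition whose truth is not (yet) settled
    print("P or not P =", max(P, neg(P)))  # 1/2, not 1: excluded middle fails
    print("not not P  =", neg(neg(P)))     # 1, yet P is 1/2: double negation
                                           # cannot be eliminated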

Another response to such a strong conclusion might be simply to accept it, and to ask why anyone would think that such a result as the one Gödel produced should be the least bit surprising. After all, most philosophers of mind already have huge difficulty analysing just what it means to say that a mind 'knows' anything. It isn't clear that it makes sense to claim that a Turing machine knows the truths it is able to prove any more than a photocopier does when copying such a proof, or than the resulting copy on paper does. It might well be that there is something additional to being able to prove something, namely knowing that you have a proof, or knowing that it shows the thing to be true. It might be argued that computers, which simply play syntactic games, reading in input and shifting data about according to arbitrary-seeming rules, are never going to understand semantically what they are doing - and are never going to associate actual truth with things, even if their output clearly states that something is true, and accompanies this with a sound proof of why this is so. What are we doing, it might be asked, asserting that there are some sentences that we can know and prove to be true which a machine could not, when surely every sentence fits this pattern?

Obviously the minds-as-not-machines proponent is not going to worry about such a response, which plainly agrees with his conclusion, but anyone interested in the intricacies of Gödel's Theorem may want to object. A counter might be to point out that whilst claims about knowledge and consciousness may creep into the discussion, the argument does not rest on these claims, and in fact works just as well when we regard proof as simply a systematic process, whereby a machine will clearly be able to participate in a proof-exercise, since to go through the motions of a proof, and to produce a correct one, is just what 'proving' is.

Before I respond to this counter I think I should bring in Gaifman's contribution to the discussion, since I think that he is correct in his analysis of the problem facing Lucas and the other minds-as-not-machines proponents. Gaifman asserts that what Gödel really has succeeded in proving is that, just like machines, whilst we can have discussions, proofs, arguments and conclusions, we do so only within a system and according to rules which, themselves, cannot be established independently. It would be possible for an observer standing outside our system to diagonalise and produce a Gödel sentence for us, just as we can produce Gödel sentences for any algorithmic system, and just as a machine can be taught to do for other systems, but not for itself. Spinoza, he points out, argued that the human illusion of free will derives simply from our inability to comprehensively understand the rules under which we operate.

The point he is making, and one that I would support, is that (it is at least possible that) we are within a frame ourselves, and that it may be possible to formulate Gödel sentences for a particular frame from outside it; but that doesn't prove that we are not machines, since anyone (God?) standing outside our frame could formulate one for us (and could be doing so now, boasting that they can't simply be a mind or a machine because they can produce Gödel sentences for any mind or machine!).
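Gaifman's point can be given a schematic form. Suppose an outside observer were handed an enumeration of all the total one-argument computations of some frame (the enumeration itself is a toy assumption here, as is everything named below); the observer can always diagonalise out of that frame - and an observer one level further out can do the same to them:

    def diagonalise(frame):
        """Given an (assumed) enumeration of a frame's total one-argument
        computations, build one the frame cannot contain: it differs
        from the x-th computation at input x."""
        def d(n):
            return frame(n)(n) + 1
        return d

    # Toy frame: its x-th computation multiplies its input by x.
    def toy_frame(x):
        return lambda n: x * n

    d = diagonalise(toy_frame)
    print(d(3))  # 3*3 + 1 = 10: d disagrees with computation 3 at input 3
    # d cannot be toy_frame(k) for any k, since it differs from each
    # enumerated computation on the diagonal.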

To return to the knowing / going-through-the-motions-of-proving distinction above, we find that minds-as-not-machines proponents are now unable to plump for the latter in an attempt to assert that our current doubts about machines having semantics as well as syntax are irrelevant to the discussion. In order to deny that we are simply in an equivalent frame they must assert that, rather than simply following arbitrary rules when we prove things, we are applying actual truths in order to perform our version of reasoning (and presumably it is in this that we differ from machines).

Penrose blithely asserts that were one mathematician to present another with a proof of a basic logical or mathematical rule, the other mathematician would be able to assess it and would agree if the basic rule were proved; and he argues that it is this consistency and communicability that shows us that our rules are connected to something external and correct, and not simply assigned. I am not convinced by this suggestion at all (nor am I certain that Penrose himself is). He gives no examples, and I cannot see how he could, because I cannot see on what basis anyone could debate the rules of their own system. It is always possible to prove a rule that we take to be basic - we need simply assert the rule (and presumably, if we are really realists, assert that the rule is true) - so whilst mathematicians might quite happily pat themselves on the back and claim that their rules are correct, I would certainly hope that philosophers would not be so quick.

Overall, therefore, this is the best response I can see being made to this interesting conclusion from Gödel's theorem, and yet it seems at least reasonable for Lucas et al's opponents to continue to insist that such a conclusion must be incorrect. After all - and, again, this isn't the most philosophical of lines of argument, but it is reasonable nonetheless - Lucas et al will not be convincing until they, or someone else (which is more likely, given Lucas et al's chosen field of discussion), give some explanation of how minds come to differ from machines. If minds are instantiated upon brains it seems more than reasonable to demand an explanation of how they come to be more than deterministic machines. Do all animals have non-algorithmic minds, for example? Or is there some point in our evolutionary path at which non-algorithmic thought suddenly appeared? There is already, as I have said, a great deal of ground a (deterministic?) physicalist will have to cover before they have proved that minds are just complex Turing machines; but unless Gödel's theorem is simply going to be tossed in with Descartes, the Chinese Room Argument, etc. as reasons in the 'against' column, some analysis of where such a distinction between minds and machines arises will be needed. Until then it seems quite right that minds-as-machiners should hold out for a conclusive response, one way or the other, and suggest that if we are to draw any such extraordinary conclusion from Gödel's theorem, it should be that he has proved that we must be working within a framework of rules which themselves cannot be proved.

3/11/2001