Matters of Thought - Part 1
"[E]very copy is unique, irreplaceable, but (since the Library is total) there are always several hundred thousand imperfect facsimiles..." --Jorge Luis Borges, The Library of Babel
I don't know how this happens. Every couple of years, I fall into some book or conversation that purports to open windows onto the workings of human consciousness. I'm not sure that I like the subject exactly, and I may even hate it. For me, it's like an itch that's been bugging you for attention for a long time: when you finally give in to the urge and gouge up your skin in hopeful relief, you don't really feel any better. The deep fundamentals of mathematical and cognitive philosophy have a habit of annoying the fuck out of me. Partly it's the language, partly the goofy notation, but mostly it's because my brain aches when it's knotted around trying to contemplate itself. As a rule, I just try to avoid them, but Oxford mathematician Roger Penrose has been taunting me from my bookshelf for seven years now with Shadows of the Mind. The basis of the book is an attempt to use Gödel's incompleteness theorem to show that consciousness can't be a product of a consistent formal mathematical/logical system, and from there to move on to some bizarre quantum theory of consciousness. I'm going to concentrate mostly on the first part here.
Wikipedia has a pretty decent discussion of incompleteness, stating it (not in the mathematician's own words) as,
"For any consistent formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed. That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete."Basically, any set of theories which manages to conclude that "this set of theories can not be proven" contradicts itself, and is therefore inconsistent. (Also, everything I tell you is a lie.) Sounds reasonable, but the philosophical consequences of it get a little funky. In its least mind-bending application, Gödel's statement says that there are any number of mathematical truths that can't be proven by a system of rules. It also suggests that there will always be more axioms (unprovable starting points) to add, and any algorithm that operates by a mathematical set of rules--a computer program, say--will not be able to deduce any of those new axioms, including (and especially) the one that says it can't reach them all. Penrose takes this to mean that no Turing machine (i.e., a computer that operates according to a set of rules) will ever achieve the status of a human thought (or for that matter, human language). Why? Because humans can infer truths such as Gödel's theorem, that's why. Cognition can't, therefore, be a mathematically consistent process, and we'll never be able to fake that sort of thing using a computer.
This bothers me. It's not the idea that formal systems must be incomplete that bugs me; it's that inconsistency must be treated as so rare and intolerable. In those increasingly rare times when I act like an engineer, I'm constantly working in the realm of consistent enough. Engineering math works that way: it's good only over some range of underlying values, and only to the extent that the assumptions are valid. You can never quite capture every aspect of a physical phenomenon in a theoretical model and still have the model compute. The natural world quickly leads you past solutions that are analytic (that is, solvable by symbolic math), and even when you turn to numerical methods of problem-solving (that is, chunking the model up into approximate pieces and keeping track of them all with a computer), you still have to limit your interpretation of the world to mathematical logic you can actually manipulate, and that logic always proves smaller than the world itself. Even revolutionary models like quantum mechanics rely on constructing equations in such a way that they can be partially solved, with just enough accuracy to get their points across. That's practical math for you, and I don't know whether it's a correct extension of Gödel's theorem to say that as you specify a system more accurately, your logic must necessarily become less consistent, but it's in the same vein. Actually, it's starting to sound more like quantum uncertainty than anything else, and that may well be somewhere close to where Penrose is heading*. I'll see over the next several days. (Even if he's not going there, some amazing conceptual beasties still lurk just beneath the Planck length.)
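To make "consistent enough" concrete, here's a minimal sketch of my own (a toy, nothing out of Penrose's book): forward-Euler integration of a simple differential equation, where the answer is only as good as the step size and the assumptions behind it.

```python
# A toy illustration of numerical problem-solving: chunk the model into
# approximate pieces and let the computer keep track of them. We integrate
# dy/dt = -y^2 with y(0) = 1, whose exact solution is y(t) = 1/(1 + t),
# and watch the error shrink as the chunks get smaller -- close enough,
# never exact.

def euler(f, y0, t_end, n_steps):
    """Integrate dy/dt = f(t, y) from t = 0 to t_end in n_steps equal steps."""
    h = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

exact = 1.0 / (1.0 + 2.0)  # true value of y(2)
for n in (4, 40, 400):
    approx = euler(lambda t, y: -y * y, 1.0, 2.0, n)
    print(f"{n:4d} steps: y(2) ~ {approx:.5f}, error {abs(approx - exact):.5f}")
```

Refine the steps and the error drops, but the model never quite coincides with the thing it approximates.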
Computers are algorithmic by definition, but are they in practice? What is the relationship between the universe and math anyway? You can see how basic ideas such as natural numbers (one rock, two trees, three eggs, hrair bunnies) and basic arithmetic (addition, multiplication, etc.) correlate readily to observable properties of the Universe, but I'm going to take a step beyond that correlation and assert that computations and ideas don't merely represent physical objects, they are in fact physical objects. Your conception of the number 3, say (or of Gödel's theorem), is a pattern of electrical impulses, probably a dynamic one, stored in or circulated around the knot of nerve cells in your head, which, when the power's turned on, can interact with many of the other nervous sparks, some old and some freshly arrived from the senses, and churn around in fantastically complicated ways. Likewise, the concept of a 3 can be stored in that pile of blocks over there, in so many beads on an abacus, so many divisions of a slide rule, or so much charge distributed over so many transistors. Algorithms are things too: symbols on a tape, electrical charges on capacitors, whatever. The ideas behind them are contained in all those networks of neurons, which are also things, and in the coded instructions for neurons that we leave everywhere for ourselves (such as these electromagnetic doohickeys I'm directing at you right now, not to mention the modulated acoustics and the blobs of ink produced by generations of philosophers and fools). There is no Platonic spirit realm of pure ideas--we approximate all of them over and over again, never quite the same but usually close enough, following a common outline of a wiring scheme and all the ubiquitous clues we leave for ourselves. Algorithms are arrangements of somethings too--Turing machines are worthwhile conceptual aids, but their instructions clutter up the shadowy cave as much as anything else. Actual computers, just like brains, are really only algorithmic enough.
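Just to keep myself honest about algorithms being arrangements of somethings, here's a hedged toy of my own (not anything from Penrose): a tiny Turing machine whose program is nothing more than another chunk of data sitting next to the tape it rewrites. It bumps a unary 3 up to a 4.

```python
# A toy Turing machine. The "program" is just a dict -- as much a physical
# arrangement as the tape it acts on. This one scans right over a unary
# number and appends one more mark, i.e., it computes 3 + 1 = 4.

def run_turing_machine(program, tape, state="scan", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# (state, symbol) -> (write, move, next_state)
increment = {
    ("scan", "1"): ("1", "R", "scan"),   # walk right over the existing marks
    ("scan", "_"): ("1", "R", "halt"),   # write one more mark, then stop
}

print(run_turing_machine(increment, "111"))  # "111" (three) becomes "1111"
```

The point isn't the arithmetic; it's that the transition table is every bit as much stuff as the tape, the beads, or the neurons.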
I'm not really going too far past Penrose here. His point is that thinking cannot be based on sound algorithms. I'm saying that I don't think computers are algorithmic either, nor is the universe--not quite.
[Part II will discuss some ways to make computers that are more like brains.]
*Or not. I'm told that, for all his profundity regarding the incompleteness theorem, his physics of consciousness is not widely accepted.