
The Mind's I

philosophising with a claw hammer

By David Gardiner

This article may be reproduced in whole or in part for any non-commercial purpose provided that authorship is acknowledged and credited. The copyright remains the property of the author.



This is a bit of philosophising of the kind that an academic philosopher might do in the pub after the seminar. Philosophising with a claw hammer, so to speak. Ever since seeing "2001: A Space Odyssey" in the late 1960s I have been fascinated by the idea of artificial intelligence (or "machine intelligence" or "electronic intelligence" or "machine consciousness" or any of the other names by which it goes). I wrote a number of short stories about it, eventually a novel called "SIRAT", and more recently was invited to deliver a lecture on it (a very basic introduction to the subject) at an American university. I can't claim to be a genuine worker in the field but I am a very enthusiastic amateur.

The notion of creating some kind of a machine that can think, a conscious computer presumably, collides head-on with a genuine and deep philosophical problem. The oldest one in Western philosophy, perhaps: the relationship between the inner world of the mind in which we all live, and the outer world of things and other people which science investigates.

There is such an obvious and glaring discontinuity between the two worlds that it is probably only children and philosophers who notice that it exists. Think about it for a moment. There is a vast Universe out there (I don't doubt it although some philosophers have) which consists of matter in all its states and arrangements: stars, galaxies, sub-atomic particles, traffic wardens, gas bills, ocean liners, you know the kind of thing. They can be seen or at least detected or inferred by my bodily senses and by scientific instruments. They can be weighed and timed and measured and categorised and theories constructed about how they are related to one another. They are, collectively, the external world. But they are not the only world. There is also me, this being called David, looking out from behind a pair of dirty-grey somewhat myopic eyes, feeling the draft from the open window on its skin, smelling the bacon cooking upstairs, hearing the clicking of these computer keys. Can David be seen, weighed, measured, investigated by scientific instruments? Well, certainly not directly, not by any technique known at present. David can be contacted by physical means, affected by sounds, light and dark, electric shock, spoken to and argued with, but the entity that is David is not directly accessible in any way from the external world. The only way that anybody out there can know about what is going on within David is if David tells them.

Am I imagining the face of my first lover right now? Am I performing a piece of mental arithmetic? Am I daydreaming about the next holiday I want to go on? There is, quite literally, no way for you to know. Even if I tell you I could be lying; it is I and I alone who have access to my internal mental states. The entire scientific endeavour comes to a halt at my skull. And at everybody else's, of course. The inner and the outer realities seem to belong to two different orders of existence. René Descartes drew our attention to this when he wrote the most famous statement in all of Western philosophy: "I think, therefore I am". A better contextual translation would be: "I doubt, therefore I am". It is possible to doubt almost everything, to doubt the very existence of an external world, as we said earlier, but it is not possible to doubt that you are doubting. Hence it is not possible to doubt your OWN existence. That was the one solid and unquestionable bedrock from which Descartes set himself the task of rebuilding the whole of logic and philosophy.

Throughout the eighteenth, nineteenth and twentieth centuries science, and particularly psychology and neurology, wrestled with the problem that Descartes had posed. Scientists don't like to be told that certain territory is out of bounds. For practically all of intellectual history the realm of consciousness had been relegated to the soul, an entity different in kind to the body but temporarily inhabiting it until it finds its eternal home in either Heaven or Hell. As science veered away from religious ideas attempts were made to integrate the mental world into the physical world. It was found possible, for example, to correlate certain mental activities (e.g. seeing the colour red, or multiplying two numbers together) with electrical activity in particular regions of the brain. But even if correlations could be found, there was no way to take the next step and say that the electrical activity WAS the sensation of red or the process of multiplication. It clearly wasn't. Colour perception and mental operations still took place in another space into which the scientist could not see. The most extreme attempt to get around the problem came from the behaviourist movement in psychology, which said in effect that as minds could not be observed or experimented upon their existence should be ignored for scientific purposes and investigations confined to what could be observed and measured about a person's behaviour. This gave birth to very little psychological progress but inspired an excellent psychological joke: The behaviourist psychologist smiles at his girlfriend just after they have had sex and asks: "How was it for me?".

The relationship between mind and matter becomes a practical problem when you start considering how consciousness might be created (should we say "simulated"?) inside a man-made machine. We need to know what it is that we are trying to create.

We have reached the point in the science of neurology where we know quite a lot about brain cells and how they work, and our much deeper knowledge of digital computers and how they work tempts us to the conclusion that if we could simply set up a computer programme to simulate the workings of brain cells we would have a conscious computer. In fact I would not want to dismiss this approach out of hand, but the results it has yielded thus far seem a little disappointing. Crudely stated, there seems to be a missing "something" about the process of consciousness that we do not understand and cannot therefore incorporate into AI (artificial intelligence) algorithms. We know how to make machines perform calculations, and learn, and even draw inferences and modify their own programmes as a result of "experience", but we do not know how to make them into conscious self-aware beings. We do not yet know how to simulate "mental space" or what we think of as "inner mental life" in a machine.

But the fact that we can't do it at this point in time doesn't mean that we should simply say it's impossible or it's magic or it's unknowable. A hundred years ago we didn't know how living things passed on their genetic code to their descendants or how the sun generated its heat. Now we do. The mysteries of one generation become the commonplace of another.

Unless we are content to give up the quest at the outset we have to accept as our starting point that whatever is required to generate consciousness and "mental space" is contained somewhere in the operation of the brain. If it involves supernatural processes or miracles then of course it must remain forever beyond the reach of science, but I wouldn't want to abdicate my scientific responsibility without putting up a respectable fight.

What do we actually know about the brain? I will try to offer a very general account. Don't worry if you can't follow all of it, it inevitably assumes some slight scientific background.

We know that the central nervous system is made up of an incredibly large number of specialised, elongated signal-carrying cells which are capable of firing in response to various kinds of stimuli and of causing other cells down the line to fire in a cascading fashion. This mechanism sends waves of electrical activity surging through large regions of the brain. Cells fire in response to stimuli, which might come from nearby cells or might be caused by sounds entering the ear or light falling on the retina of the eye or any other input from a sensory organ. Each brain cell or neuron has receptor areas (the dendrites) where the command to fire is picked up, a long transmission line (the axon) which may branch, and synapses at the end which are capable of telling other nearby cells to fire or NOT to fire, depending on whether those synapses are excitatory or inhibitory. The firing of a cell is not an instantaneous event but a short burst of pulses, like a burst of machine-gun fire, and the strength of the stimulus causing the firing is represented by the frequency of the pulses. Strong stimulus equals rapid fire, weak stimulus equals less rapid fire. The firing of a cell connected to muscle tissue is capable of causing that muscle to contract. That is how the brain makes the body move. So far so good. We also know of two mechanisms by which learning takes place in the brain. Pathways that have been stimulated frequently in the past undergo chemical changes which make them more easily activated in the future. Thus the brain takes on an imprint of past experiences which is thought to form the basis of its ability to recognise and remember past stimuli and form expectations about future stimuli. This mechanism of forming a record is coded in the strengths of the synapses, and is thought to underlie short and medium term memory.
In addition to this the brain is capable of changing its internal architecture in response to stimuli, and individual cells can literally grow new axons to link previously unconnected areas of the brain. This happens more frequently in young brains but is now known to continue to some extent throughout an individual's life. This mechanism is thought to be responsible for the formation of long-term memories and deep patterns of behaviour.
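The rate coding and synapse-strengthening described above can be sketched as a toy model in Python. This is only an illustrative caricature, not a biologically faithful simulation; the weights, threshold, firing-rate cap and learning rate below are all invented for the purpose:

```python
# Toy rate-coded neuron with Hebbian learning: a stronger net
# stimulus gives a higher firing rate, and pathways that are
# active while the cell fires are chemically strengthened,
# making them easier to activate in future.

def firing_rate(weights, inputs, threshold=1.0):
    """Net stimulus above threshold maps to a firing rate, capped at 100 Hz."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return min(100.0, max(0.0, net - threshold) * 100.0)

def hebbian_update(weights, inputs, rate, lr=0.001):
    """Strengthen the synapses that carried the stimulus while the cell fired."""
    return [w + lr * x * rate for w, x in zip(weights, inputs)]

weights = [0.6, 0.6, 0.6]          # initial synapse strengths
pattern = [1.0, 1.0, 0.0]          # a frequently repeated stimulus

before = firing_rate(weights, pattern)
for _ in range(100):               # repeated exposure imprints the pathway
    rate = firing_rate(weights, pattern)
    weights = hebbian_update(weights, pattern, rate)
after = firing_rate(weights, pattern)

print(before, after)               # the imprinted pathway now fires far more readily
```

Repeated exposure to the same stimulus strengthens exactly the pathways that carried it, which is the "imprint of past experiences" mechanism described above; the synapse that carried no signal is left untouched.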

I've given a rather sketchy account here but on the surface of it there is nothing in this description that could not be simulated in a computer. Computer models of this kind are called "neural networks" and have indeed been designed and built and thoroughly investigated. They behave in an eerily un-computer-like way, but it would be a bit premature to say that they produce anything approaching conscious thought.

What do I mean when I say that they behave in an un-computer-like way? Well, computers and brains, because of their natures, have vastly different strengths and weaknesses. A computer performs one task after another in accordance with a set of instructions stored in a memory of some kind (its programme). Every time it performs an operation (one line of its programme) on a piece of data it returns the result to memory and then looks at the next line of the programme and performs that operation, and returns that result to memory, and so on until it is switched off or an end point is reached. It is a very fast sequential device lifting tiny segments of data and of instructions out of memories or buffers, performing specified operations on them and then returning them to memory. An internal clock is used to control the speed and sequence of these operations, and large CPUs (central processing units) may be capable of running several separate programmes or sub-routines simultaneously. The programme may be very sophisticated and it may be capable of modifying itself in response to inputs of one kind or another but fundamentally that is what is going on.
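The fetch-execute cycle just described can be sketched as a toy stored-program machine. The three-instruction set and the programme below are invented purely for illustration:

```python
# Toy stored-program machine: one instruction at a time is fetched,
# executed, and its result returned to memory before the next line
# of the programme is looked at.

def run(program, memory):
    pc = 0                               # program counter steps through the instructions
    while pc < len(program):
        op, a, b, dest = program[pc]     # fetch the next instruction
        if op == "ADD":
            memory[dest] = memory[a] + memory[b]   # result returned to memory
        elif op == "MUL":
            memory[dest] = memory[a] * memory[b]
        elif op == "HALT":
            break                        # an end point is reached
        pc += 1                          # move on to the next line of the programme
    return memory

# programme: compute (2 + 3) * 4, one small operation per step
memory = {"x": 2, "y": 3, "z": 4, "t": 0, "result": 0}
program = [
    ("ADD", "x", "y", "t"),
    ("MUL", "t", "z", "result"),
    ("HALT", None, None, None),
]
print(run(program, memory)["result"])  # 20
```

Everything happens one small step at a time, each result written back to memory before the next instruction is fetched, which is the essential contrast with the brain's massively parallel operation.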

By contrast, a brain is asynchronous, it has no central clock, and the CPU and the memory are the same thing. There is no memory distinct from the processing system, the memory is contained in the architecture of the brain itself, and is capable of continuous dynamic change in response to the processing that is taking place. Instead of a few programmes running simultaneously the brain is potentially capable of running all of its programmes at the same time. It is, in computer terms, massively parallel. But although all the programmes may be thought of as running at the same time they are running at vastly different speeds depending on the specific neurons they involve (neurons differ widely, in fact by orders of magnitude, in their response speeds) and some programmes may not be running at a given moment in the sense that they may have no data on which to work. In addition to the individual speed variations the average speed at which messages can be transmitted within the brain is millions of times slower than transmission speeds within electronic computers.

Some of these differences in approach and the strengths and weaknesses to which they give rise are listed below:

COMPUTER

Stores most of its information in devices external to the CPU
Performs tasks sequentially, and transfers results back to memory
Performs relatively few tasks simultaneously
Relies on a central clock to synchronize and sequence its operations
Performs precise instructions which have been given in advance
Performs individual operations in a time scale of nanoseconds

BRAIN

The CPU and the memory are the same thing: there is no memory storage external to the CPU
Performs enormous numbers of tasks simultaneously
Asynchronous: no central clock
No separate instructions (program): the instructions are written-in to the architecture of the system and are subject to continuous reappraisal
Individual operations take place many millions of times more slowly

Brains are good at pattern recognition, extraction of signal from noise, and taking best advantage of faulty or incomplete information or unreliable inputs. They are good at ordering information into categories (called "concepts") and seeing connections between them. But they are very slow and imprecise, and badly adapted for linear processing tasks with a unique (right or wrong) answer.

The computer, by comparison, is very fast at performing linear processing, very accurate, and very good at carrying out precise instructions. Its memory is better managed, hence more capacity is freed up for processing. However, it is poor at pattern recognition, categorisation and coping with noisy, faulty or imprecise inputs. It is also vulnerable to "crashes" caused by minor errors.

Now the interesting thing is that when artificial neural networks are set up their strengths and weaknesses begin to invert with respect to these categories. They are found to be good at pattern recognition, less vulnerable to crashes, able to cope with imprecise and noisy data, but slower and less good at linear processing and very extravagant with regard to the time and processing power required as compared with linear computer programmes. They are used in search engines to get a "rough fit" between the question asked and the contents of the database, and for recognising spam in spam filters, and similar functions involving what has come to be called "fuzzy logic".
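As a minimal illustration of this noise tolerance, here is a toy perceptron, one of the simplest artificial neural networks (the six-pixel patterns and training parameters are invented for this sketch). After training on two clean patterns it still classifies a corrupted version correctly, a "rough fit" that a rigid bit-for-bit comparison could not produce:

```python
# A minimal perceptron illustrating the "rough fit" behaviour of
# neural networks: after training on clean patterns it still
# classifies noisy versions of them correctly.

def predict(weights, bias, x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, n_inputs, lr=0.1, epochs=20):
    """Adjust weights towards each target whenever a prediction is wrong."""
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# two clean six-pixel "images": a left-heavy and a right-heavy pattern
left  = [1, 1, 1, 0, 0, 0]   # class 1
right = [0, 0, 0, 1, 1, 1]   # class 0
weights, bias = train([(left, 1), (right, 0)], 6)

noisy_left = [1, 0, 1, 0, 0, 1]            # two pixels corrupted
print(predict(weights, bias, noisy_left))  # 1: still recognised as the left pattern
```

The corrupted input matches neither stored pattern exactly, yet the network's answer degrades gracefully rather than failing outright, which is precisely the strength that inverts the conventional computer's weaknesses.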

So does this kind of research provide us with a clue as to how the inner "me" relates to the physical world? Is the phenomenon of consciousness the outcome of a large number of processors of the "neural" kind acting in unison? Is this mysterious creature called "David" the view from the inside of all this "fuzzy logic" acting on inputs from biological sensory organs and previously stored information? If it is an inside view who or what is doing the "seeing"? Is that seeing agent part of the same mass of processing or do we have to postulate some other entity, overlooking it all?

These are by no means trivial questions, or uninteresting ones to human beings. They represent a form of the most fundamental question that can be asked by science or philosophy: Who am I?

There seems to be something missing from the physical account above, something for which the account does not allow, although that impression could be the result of our incomplete understanding of everything the physical account implies. Stand by with the headache pills. The missing element (or maybe we should say one of the most important of the missing elements) is the location of AGENCY. Of free will. I perceive myself as a free being able to choose between alternatives, both moral and practical. I can lift my hand or I can let it remain resting on the table, I can tell the truth or I can lie, I can go out shopping or I can stay at home and watch TV. There is not, or there does not appear to be, anything causing or forcing me to take one course of action rather than the other. The causative agent is me, David. Or is this some kind of persistent illusion brought about by my incomplete understanding of the kind of creature that David really is? But even that seems to issue in circularity. If I am CAUSED in all my beliefs then there seems to be no way that I can stand outside the chain of causality and evaluate different possibilities. Thus I can never know whether free will is genuine or not, because the conclusion that I will come to on the matter will be the outcome of causes which force me to one conclusion or the other. More fundamentally, who is this "me" struggling with the problem? Who is it that is troubled by the free will issue? Is that just another bundle of fully determined causal chains? The more we struggle with the logic the more strongly we feel the need for something transcendent, the more convinced we become that reflective consciousness is not the mere outcome of mechanical computational processes of any kind.

Re-enter the soul? Let us hope and trust not. The point that we have now reached is pretty much the frontier of present-day thinking on the mind/body problem. Leaving aside mysticism and religious thought, possible approaches to the question can be summarised under these three headings:

1. The Computational Model.

This approach is rooted in the belief that information processing of one kind or another is all that we need to explain the phenomenon of consciousness and the perception of a transcendent self with free will and all the other attributes of personhood. Its most influential proponent is Douglas Hofstadter of Indiana University. He holds that:

‘… an “I” comes about… via a kind of vortex whereby patterns in a brain mirror the brain’s mirroring of the world, and eventually mirror themselves, whereupon the vortex of “I” becomes a real, causal entity.’

He believes that we underestimate what computation is capable of, misunderstand its nature in effect, and that consciousness of an emergent kind is present in all digital processes that have a link to the external world. Some people have seen this as a comical view and pushed it to ridiculous extremes (the "inner life" of a thermostat) but it deserves serious consideration, and if it turns out to be correct the "problem" of consciousness will be seen to be a mistake: in fact a particular kind of mistake that philosophers call a "category error". The answer to our opening question will be that consciousness resides in the processing itself, its moment of "now" being related to the state of its short term memories and the speed at which this state changes. On this model we "live" in the brain's short term memory, and overlook the whole process of consciousness from there. This is the model assumed in my novel "SIRAT", namely that the hardware for electronic consciousness already exists and all we need is the right seed algorithm to set the process in motion, after which it will become self-correcting and evolve and grow in sophistication and power at an exponential rate and without any further need for human intervention.

2. The Quantum Model

This model is the original creation of the Oxford mathematician Roger Penrose, modified in response to the criticism of Stephen Hawking and others. Penrose speculates that the human brain is the kind of system in which quantum phenomena might well play a part, and he singles out structures called "microtubules" (which are very tiny tubes providing conduits for fluids inside neurons) as the most suitable hosts to be inhabited by large-scale uncollapsed quantum fields. The details of the argument are subtle and require some familiarity with quantum theory, but essentially he invokes the agency of a quantum field extending over large regions of the brain, or even the whole of the brain, as the integrating medium from which all the processing is overseen and influenced by way of the collapse of selected parts of the field. These local collapses in the field bring about "decision" events where a choice is made between alternative possibilities which co-existed at the quantum field level. Within an uncollapsed quantum field different "realities" or "futures" or "outcomes" can coexist without any particular one becoming realised until the field "collapses" in a particular way. This is very reminiscent of how we perceive ourselves as conscious beings, conceiving of different possibilities and choosing freely between them. Perhaps "mind stuff" has a quantum nature: perhaps it is a very complex quantum field always balanced on the cusp of realising different choices or possibilities. On this model David is a quantum wave phenomenon, almost a ghost riding on the flux of computational events which are taking place in David's brain, and controlling them like the conductor of a mighty orchestra. It's an attractive model in some ways: it says that consciousness is indeed different in kind to mechanism and computation, but belongs to a category with which scientists are already familiar, namely quantum reality.
Penrose's critics have accused him of using a mystery to explain a mystery, or of letting the soul in again by the back door. But mysterious or difficult as it might be, his theory could turn out to be correct, which would mean that we would have to wait for the quantum computer (which like fusion reactors seems to hover a permanent twenty years into the future) before we could have true artificial intelligence or consciousness. Personally I hope and believe that he is wrong.

3. The Electromagnetic Model

This is a model which I personally believe we can dismiss fairly quickly. It was put forward very recently by Johnjoe McFadden, and although obviously somewhat derivative of Penrose's work it is not the same and needs to be included for the sake of completeness. McFadden takes as his starting point the fact that the electrical processes in the brain give rise to an electromagnetic field, just as the electrical processes in a computer do, and he places this electromagnetic field in the position occupied by Penrose's quantum wave field.

But even if we could accept that an electromagnetic field could do so much (that is, integrate all the individual events taking place in the neurons producing it and somehow act back on those same neurons to modify their behaviour in an organised and meaningful way), we would still have to look for some mechanism capable of producing the field in the first place. We can imagine some kind of very weak coupling between adjacent electrical pathways in the brain; indeed it is something that we have to guard against in designing (especially) analogue electronic systems. But the notion that the tiny electromagnetic fields produced by slow-moving groups of ions in neurons might somehow join up and then turn into this amazing new entity isn't really convincing. The electromagnetic field produced by the brain is known to be utterly feeble and incapable of inducing even the minute current needed to fire nearby neurons. An electromagnetic field doesn't seem to be the right candidate for the role that McFadden wants to give it.

So after due deliberation we arrive at three possible accounts of what "I" might be. Perhaps "I" am the momentary pattern of activity in a myriad of synapses holding the instantaneous pattern of short term memories in my brain. Or perhaps "I" exist in the domain of quantum fields, overseeing in a holistic way everything that is going on at the sensory level and everything recorded in the different memory traces in my brain and choosing between different possible outcomes moment by moment. Or perhaps "I" am an electromagnetic being, like a complex television transmission, integrating and controlling all the activity within my brain by means of electromagnetic induction. Finding which is the true account will have a critical significance when it comes to creating artificial systems which can be inhabited by other "Is", which is perhaps not something that everybody necessarily sees as a good thing or wants to do, but believe me, it is something that is going to be done, and probably before very long.

But even if I had no interest in such an engineering programme and gave no credence to the notion of creating artificial minds I think I would still want to know, to satisfy my own curiosity, who "I" am. Wouldn't you?


FURTHER READING

Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter (Basic Books, 1979)

The Mind's I edited by Douglas R. Hofstadter & Daniel C. Dennett (Bantam Books, 1981)

The Emperor's New Mind by Roger Penrose (Oxford University Press, 1989)

Shadows of the Mind by Roger Penrose (Oxford University Press, 1994)

The Large, the Small and the Human Mind by Roger Penrose, Abner Shimony, Nancy Cartwright & Stephen Hawking (Cambridge University Press, 1997)

Quantum Evolution: The New Science of the Life Force by Johnjoe McFadden (Flamingo, 2000)

Machine Consciousness edited by Owen Holland (Imprint Academic, 2003)

The Ghost in the Atom edited by P.C.W. Davies & J.R. Brown (Cambridge University Press, 1986; revised 2002) (an excellent basic introduction to quantum mechanics)

SIRAT by David Gardiner (iUniverse, 2000; 2nd edition 2003); complete text available on-line

Götterdämmerung by Bill Hibbard (latest revision 2001); complete text available on-line




