Knowledge & Reality 134.101

Study Guide


Week Ten: Functionalism

  1. Materials Assigned for the Week
  2. The Central Point of This Week's Material
  3. Other Concepts and Arguments You Are Expected to Master This Week
  4. Miscellaneous Comments and Clarifications
    1. "Function" versus "Structure"
    2. Functions, Causes and "Causal Roles"
    3. "Species Chauvinism"
    4. How to Remember Functionalism: Four Easy Mnemonics
    5. The Turing Reading

I. Materials Assigned for the Week

Reading:

Lecture 22: Functionalism
Reading: Turing, "Computing Machinery and Intelligence", pp. 352-374

Exercises:

Review Questions: pg. 305 esp. #2, 3, 5
Problems for Further Thought: pp. 305-306 esp. #3


II. The Central Point of This Week's Material

Functionalism can sound like an "X is identical to Y" theory with a few extra bells and whistles. But it introduces a new concept which changes the whole discussion quite drastically. Any "X is identical to Y" story is basically a "thing X is identical to thing Y" story. As befits its name, Functionalism sharply distinguishes:

  • a thing (or piece of structure), from
  • a function (or job) which that thing performs.

Functionalism insists that we shouldn't categorise the mind as any piece of structure at all. Instead, we must redefine the mind as a certain job or task or function. Which function is rightly to be called "the mind" is the hard work yet to be done. But we already know that the brain isn't itself that function, or any other function. The brain is meat between the ears, purely an item of structure. Almost certainly it is the structure which in human beings performs the defined function. But we can expect that in other beings the same function will be performed by an entirely different piece of structure (this is already so with silicon-based computers).
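For readers who know a little programming, the structure/function distinction has a familiar ring: it is roughly the distinction between an interface (a job description) and the classes that implement it. The following is only a loose analogy of my own, not anything in the assigned reading, and the class names are invented for illustration:

```python
from abc import ABC, abstractmethod

# The "function" (job description): anything that performs this job
# counts as a mind, whatever it happens to be made of.
class Mind(ABC):
    @abstractmethod
    def respond(self, stimulus: str) -> str:
        """Take in an input and produce appropriate behaviour."""

# Two entirely different pieces of "structure" performing the same function.
class HumanBrain(Mind):
    def respond(self, stimulus: str) -> str:
        return f"neurons firing in response to {stimulus}"

class SiliconComputer(Mind):
    def respond(self, stimulus: str) -> str:
        return f"circuits switching in response to {stimulus}"

# Both count as minds, because "mind" names the function, not the structure.
for thinker in (HumanBrain(), SiliconComputer()):
    print(isinstance(thinker, Mind))  # True for both
```

The point of the analogy: nothing in the interface says what material realises it, just as (on the functionalist's view) nothing in the concept of mind says the realiser must be a brain.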


III. Other Concepts and Arguments You Are Expected to Master This Week

  • The difference between a type and a token - in particular, be prepared to explain the difference between a token of a mental state and a type of mental state, with examples [Ignore the very confusing box at the top of pg. 302 however]
  • The difference between the Mind-Brain Identity Theory and the Theory of Functionalism
  • The difference between a physical structure and a physical function
  • What multiple realisability means and why it distinguishes those two theories
  • What species chauvinism is - and its connection with the concept of multiple realisability
  • What a causal role is - specifically what it is to redefine a mental event or state purely in terms of causal roles
  • Whether all mental expressions can be so redefined (contra Descartes of course) - note pg. 303
  • What the "Imitation Game" is
  • Turing's bold thesis that the entire force of the question "Can X think?" is exhausted by the question "Can X win at the Imitation Game?" - that is, his claim about the only meaning the question "Can machines think?" really has. Be prepared to assess it (pp. 299-300, 352-374)
  • What is the objection from consciousness to Turing's proposal? Is it any good?
  • What is "Lady Lovelace's objection"? - is it any better?

IV. Miscellaneous Comments and Clarifications

Sober's Lecture 22 is a very rich chapter because Functionalism is a very rich theory of the mind. The danger is that it may all be too rich - that is, that the lecture and the "ism" may contain too many new concepts for you to master. The idea that Sober wants you to understand about functionalism is that it aims to explain how a mental state can be multiply realisable (pp. 299-301). That is, how it can be possible not only for you and me to be in pain - where in us pain is realised by a brain state - but for a computer or a being from Alpha Centauri to be in pain as well, even though they have no brains or anything having to do with neurons at all. Sober introduces two completely unrelated concepts in his rather confusing account of multiple realisability.

The first is the so-called "Type-Token" distinction (pp. 298-299, introduced as functionalism's "negative thesis"). The "Type-Token" distinction is historically the province of mind-brain identity theory [see Week 9 IV.E]. In most accounts, it is the singular failure of the type/token distinction to handle multiple realisability very plausibly that led functionalists to scrap the distinction entirely and to base their theory on a different distinction which handles multiple realisability in a much more natural way.

The other distinction Sober introduces is the so-called "Function-Structure" distinction (pp. 302-303, introduced as functionalism's "positive thesis"). The "Function-Structure" distinction is rather more opaque, for everyone, at every level of philosophy. In the rest of Section IV, I am going to take only the second distinction and try to flesh it out. [Skip to IV.D for a quick summary.]

Warning!

  • When writing about identity theory, do not employ the "Function-Structure" distinction. It was never available to identity theorists, having been invented to patch up the problem all identity theories fall down on ("multiple realisability" or the "neural chauvinism objection").
  • Likewise, when writing about functionalism, do not employ the "Type-Token" distinction. It plays no interesting role in that theory and only makes for confusion all round.

IV.A "Function" Versus "Structure"

The really basic difference between Identity Theory and Functionalism can be found in the very name of the new "ism" - Functionalism. Functionalism is insisting that we need to add the concept of the function which something performs to our intellectual toolkit; we can no longer do justice to what we want to explain using only the concept of structure which we have been using up to now. Go backwards and then forwards.

Mind-Body Dualism

Dualism is the view that there are two entirely different types of things in the universe. There are minds or mental things, and there are bodies or physical things. Put in terms of the concepts of "structure" and "function", dualism is the view that the universe has two sorts of structures making it up - mental structures (and properties of such mental structures) plus physical structures (and properties of such physical structures).

Mind-Brain Identity Theory

Identity Theory is the view that there is really only one sort of thing in the universe, that the universe is made up of really only one type of entity. All there are are bodies. There are no minds. (Parsimony recommends this, amongst other reasons.) Put in terms of "structure" and "function", Identity Theory is the view that the universe fundamentally contains not two sorts of structures in it, but only one sort of structure - physical structures are all there are (and of course properties of such physical structures). Science studies only physical structures, and there are no other sorts of structures around anywhere. So far, both views make use of no other piece of conceptual apparatus than the notion of structures (entities, things, objects). The only difference is that they disagree about the number of such types of structures to be found in the universe.

Functionalism

This is where Functionalism says that both views fall down. Both are intellectually impoverished by trying to straight-jacket everything into the category of structure. In addition to that notion, we also need another very different notion in order to tell anything like the right story; we need functions as well as structures. That makes it sound mysterious. It isn't.

Example: Think of any McDonalds store.
What set of concepts do you need in order to understand a McDonalds? Well, let's list the structures only and see how far we get. The structures or structural items will include, for instance, the stoves and the freezers, and countertops and cash registers and each of the people who work there. Those are all quite obviously pieces of structure: simple objects or things or entities making up the structural composition of a McDonalds. They could all be listed in a standard inventory of things, the objects you need to buy or hire in order to set up a new McDonalds.

Now if you tried to tell the whole story of a McDonalds store solely in terms of the structural items involved in it - solely in terms of all the objects on such a checklist or inventory - it is perfectly obvious that you would completely miss it. In order to even start to explain a McDonalds store you simply have to understand one other sort of matter as well - namely, what function is served by each of these objects or bits of structure: not just that Mary Smith is one of the bits of structure making up the local McDonalds, for instance, but also what job Mary performs in that company, what function she has: is she the person who cooks the hamburgers? Or unwraps the fries? Or takes orders? Or manages the whole? And so on.

Knowing about the structure - the entities that make up some structural whole - is all very well. But you also need to know what job each of those entities performs. And in the reverse direction too: you need to know what job or function a McDonalds itself performs - that it has the function of serving up fast food rather than installing drains or mowing lawns or teaching you how to construct an argument. And you also need to know what sub-functions need to get performed in order for any McDonalds as a whole to perform that function of serving up fast food - for instance, that the sub-function of ordering pre-cut hamburger patties needs to get done, and the function of paying the bills, and every instant the job of taking an order followed by the job of taking the customer's money and giving back change followed by the job of scooping up the already cooked food followed by the job of putting it into a bag or on a tray etc. (This is the province of the organisational chart and the workflow diagrams everyone has to learn the first week.)

And in order to really understand the McDonalds you have just been employed at, you have to pretty quickly put the right structures together with the right functions: you need to catch on that James is the piece of structure who performs the function of cooking the Big Macs; and that Mary is the person who does the job of hiring and firing; and that Smith is the person who writes the checks to pay the food and utility bills; and that the shift boss has the function of reporting to the owner; and that this big stainless steel box is the freezer not the fryer (i.e. this box performs the function of keeping the patties frozen while another box performs the function of frying the fries); and that this odd-shaped hand-held piece of equipment has the function of putting on the right amount of salt while this other piece of equipment has the very different function of putting on the right amount of mustard; and so on ad nauseam. (This is what the first week's training on the floor does - help you find your way around the bewildering number of structures and their functions so that you can do your own job efficiently.)

Looked at like this, it starts to seem quite incredible that philosophers could have thought for so long that just getting down the structures involved in some thing would more or less finish the business. It is hardly even the beginnings of an explanation of any real thing! Every explanation we have ever given of anything is shot through and through with the notion of function. Ask "How does it work?" and of course you have to get clear what structures something is made up of; but just as important is getting clear what each of those structures does, what its function or its job is. Without it we have a phonebook or an inventory. With it we finally approach an explanation of something. Well, precisely that is the insight the Functionalist has which the Dualist and the Identity Theorist lacked.


IV.B Functions, Causes and "Causal Roles"

In addition to the word "function", Sober and most others also use the phrases "causal role" and "causal role concept". All three are interchangeable. But since mind-brain identity theory also speaks of having found the structures which "cause" our behaviour in certain circumstances (this neuron fires causing that neuron to fire), it can get a bit confusing to keep a handle on the enormous difference between the two claims:

Mind-Brain Identity Theory:

"Grief", say, is the name of the bit of our neurology (i.e. structure) which causes such and such physical behaviour.

Functionalism:

"Grief" is the name of the causal role (or function) defined as the role of taking in certain internal and external inputs and having certain internal and behavioural outputs - which role or function is performed by so and so bits of our neurology.

The operative word in "causal roles" is therefore "roles". The best way I know of to think about "functions" (which are hopefully pretty clear by now) and "causal roles" (which are probably still a bit uncomfortable) is this. The moment you actually try to spell out what some function or job description involves, you will almost always tend to do it in terms of causal inputs and causal connections to other states and causal outputs - that is, in terms of a quite specific causal role. Consider some ordinary examples where this happens.

Example: Consider the concept of a mousetrap. What is a mousetrap? "Mousetrap" is an ordinary common noun, so it may look like it stands for an object or thing, for a "medium-sized piece of dry goods" as one philosopher happily put it. But when you come to think about it, "mousetrap" isn't the name of a certain piece of structure at all. It is the name of a function which the piece of structure in my corner performs. That is, "mousetrap" is basically a job description: A mousetrap is essentially a trapper of mice. Some structures perform this job or function well (a piece of wood with a spring and some cheese, a cardboard box wielded by fast hands, a metal box with a one-way door and strychnine, a hungry cat). Other structures perform this job very poorly (a bucket of water, an elephant, a piece of sticky tape, a sheet of newspaper).

Put aside which structures can perform the job. What precisely is that job? If we were giving a job description, what would we write down? We would start by writing down all the smaller jobs or functions which need to be performed in order for the bigger job of catching mice to be performed - most functions split into sub-functions. For the function of trapping mice, there are, say, the functions of attracting a mouse, immobilising a mouse, and killing a mouse. How do we spell out each of these smaller functions? Inevitably we must think in terms of inputs and outputs, causal inputs and causal outputs. For example, how is the function of attracting a mouse to be spelled out - in particular, in terms of what set of causal inputs and causal outputs? (It's hard because it's so close to home!)

Causal inputs will include:

  • purchase of a chemical whose molecules are readily dispersed in air;
  • placement of the chemical inside the right part of the trap;
  • placing the trap where the molecules will be dispersed to the right places.

Causal outputs will include:

  • release of those molecules;
  • the molecules getting sniffed up the nose of a mouse;
  • a mouse coming inside the trap.

The function is not the thing which has these causal inputs and outputs, notice. The function is just all these inputs and outputs "bundled up", as it were, into a system or set of specific inputs and outputs - in a word, into a "causal role" (which any of a large number of things could play, just as any of a large number of people could play the role we call "Hamlet").
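If it helps, the point that "mousetrap" names a bundle of sub-jobs rather than a thing can be put in programming terms. In the sketch below - my own illustration, with invented class names, not anything from Sober - the role is just a list of jobs, and anything that can perform all of them plays the role, whatever structure it is made of:

```python
# "Mousetrap" as a causal role: a bundle of sub-functions, not a thing.
# Anything that can perform all three sub-jobs plays the role.
MOUSETRAP_ROLE = ("attract_mouse", "immobilise_mouse", "kill_mouse")

def plays_mousetrap_role(thing) -> bool:
    # A structure plays the role iff it can perform every sub-job.
    return all(callable(getattr(thing, job, None)) for job in MOUSETRAP_ROLE)

class SpringTrap:                        # wood, spring and cheese
    def attract_mouse(self): return "cheese scent disperses"
    def immobilise_mouse(self): return "spring bar snaps shut"
    def kill_mouse(self): return "blunt force"

class HungryCat:                         # an entirely different structure
    def attract_mouse(self): return "sits very still"
    def immobilise_mouse(self): return "pounces"
    def kill_mouse(self): return "bites"

class BucketOfWater:                     # fails the job description
    def attract_mouse(self): return "nothing much"

print(plays_mousetrap_role(SpringTrap()))     # True
print(plays_mousetrap_role(HungryCat()))      # True
print(plays_mousetrap_role(BucketOfWater()))  # False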

Example: Consider the concept of a poison. We might tend at first to think that a poison is a substance, a thing, a medium-sized piece of dry goods. But really we are not thinking of a particular substance when we do, but of a particular job which some substance might perform. Something is a poison if it brings about illness or death when ingested by a carbon-based life form. Bringing about illness or death specifies a causal role which some substance may or may not be capable of playing. Any substance which plays that causal role is a poison. Strychnine, for instance, plays such a causal role. That is why - and the only reason why - we call strychnine a poison: it does the job that a poison does, it satisfies the job description which we call "poison". That job description is spelled out entirely in terms of causes, causal inputs and causal outputs.

Causal inputs:

  • ingestion or injection into a carbon-based life form;
  • chemical reaction with various organic molecules.

Causal outputs:

  • alteration of further organic molecules;
  • death of the organism.

This is a (pretty awful) causal definition of the causal role of a poison. Strychnine is a chemical (a piece of structure) which meets that causal definition. That is why strychnine is a poison.

Example: We can see how such a functional definition gets spelled out more thoroughly in terms of a causal role if we complicate the case a bit more. Consider the functional concept - the function - of being a manufactured poison. How do we spell out that concept? In order to be a manufactured poison, something must not only bring about illness, but must itself be brought about in a certain way. "Manufactured poison" is again not the name for a substance, but the name for a causal niche which some substance (some piece of structure) may occupy. This time the causal niche is patently spelled out in terms of a certain set of causal inputs as well as a certain set of causal outputs.

Causal inputs:

  • manufacture in a factory rather than growth in a plant (say).

Causal outputs:

  • killing of certain carbon-based life forms.

The concept of manufactured poison is exhausted by a specification of that causal role. There is nothing more to a manufactured poison than what causes it and what it causes in turn. That's the first of the key ideas here. The second key idea is that you cannot ascertain which actual substances in fact play that causal role simply by examining the concept of a manufactured poison. In order to find out which substances play the relevant causal role, and which do not, one must undertake empirical research: one must find out which substances in fact cause death or illness when ingested, which don't, which occur naturally and which only by being manufactured.

Go Slow for the Next Few Pages!

So far things will have seemed fairly clear-cut (I hope). It is when we try to do the same for "mental" words that things get more difficult. What could it possibly be like to treat "pain" and the like as true-blue functions or job descriptions? Be prepared to read the following examples several times. It is the heart of the Functionalist's "Positive Thesis" (pp. 302-303), so there is really no avoiding it.

As we know, Functionalists make the bold claim that all mental concepts are more or less like the concepts of shift boss and mousetrap and poison and manufactured poison. Being functional concepts, they are essentially causal role concepts too. That is, because all mental words are basically the names of functions rather than structures, they will unpack into complex sets of causal inputs, causal ancillaries, and causal outputs, rather than into the bits of our brain (or whatever else) which are merely the physical structures which (we will discover) perform those functions in beings with brains. Here are some more examples, then, where, when we try to specify which particular function it is that some mental word names, we tend to spell it out in causal terms.

Example: Take a simple case, arrogance. Arrogance, according to the functionalist, is simply a certain set of often very different behavioural traits. But why do we group these traits together rather than some other bunch, if not because arrogance is thought to be somehow a unified state of a person? But how can it be a unified state of a person on the purely behaviourist account - what does the behaviourist have available to do the unifying? The identity theorist adds into the pot the notion that arrogance is an inner structural state, specifically a state of the brain, which causes those different behaviours. Thus it can be a particular structural state of Jim's brain which, when he is asked about the achievements of others, triggers his reaction of depreciating the efforts of others, and comparing them unfavourably to his own, etc. And it can be that very same structural state of Jim's brain which causes the other behavioural traits that we associate with the mental state of arrogance - his sneering when asked a simple question, his boasting when asked a hard question and so on. In this way the identity theorist identifies Jim's arrogance ("mental") with some specific structural state, or type of structural state, of Jim's brain. [See Week 9 IV.B]

This is not the account which a functionalist wants to give, however, and it is crucial to realise this. For a start, the functionalist wants to add functions into the story and not keep things merely at the level of structures alone. And the functionalist wants to do this for the very good reason that he does not want to be forced to limit being arrogant exclusively to entities in the universe which happen to have the specific piece of structure we call a brain (see the "species chauvinism" argument below, IV.C). For the functionalist, then, arrogance is not the name of a state of a piece of structure (Jim's brain). It is strictly and solely the name of a certain function or causal role, which in Jim we find is performed by his brain being in this structural state, and in me by my brain being in this other closely related structural state, and in you by...

Enough reminders. What then is this function? How are we supposed to think about arrogance in the same sort of way we already think about mousetrap? Again, the answer is that when push comes to shove we think about it in terms of a list of causal inputs and causal outputs; we think of it in terms of a causal role or a causal niche which we define in causal terms - that list of causal inputs and causal outputs will just be arrogance.

On the list of causal inputs will be the likes of:

  • belief that one is better or smarter than others;
  • belief that this somehow matters;
  • evidence that someone else has done something praiseworthy;
  • report that one might find the other a congenial companion accordingly.

On the list of causal outputs will be the likes of:

  • demeaning the reported deed;
  • praising it with faint praise;
  • changing the topic;
  • pointing out that one could have done better oneself.

The function of putting out those outputs when those inputs are taken in - that causal role or causal niche just is arrogance. (Nothing about brains or other structural items is even on the horizon yet.)

Example: Take grief.


For the functionalist, every mental word is the name of a function or the short title of a job description; none of our mental words are ever the names of structures or pieces of structure. It is true that huge numbers of words - mental and physical both - appear to be structure words at first when really they actually name functions and only functions. (Indeed, this is the single Big Mistake responsible for both dualism and identity theories.) "Mousetrap" was one such. "Grief" is another. "Grief" also is an ordinary common noun. For the dualist it is the name of a certain experience or event in a mental structure, the mind. For the identity theorist it is the name of a certain event or property of a physical structure, the brain. For a functionalist, however, "grief" is basically a "mousetrap" type of word, for all that it may not look like it at first. That is, it is a word standing for a certain job or function or kind of work. And the functionalist really means this. "Grief" - and the rest of our mental vocabulary - is not the name of some thing (the part of the brain) which does some job. "Grief" - and the rest - is the name of that job itself. Well, then, what is the job description for "grief"? It will be written up in terms of causal inputs and causal outputs just as for every other functional or causal role concept.

Amongst the causal inputs will be:

  • death of a loved one;
  • closure (seeing the body buried);
  • lack of closure (disappearance without trace).

Amongst the causal outputs will be:

  • loneliness;
  • bouts of sudden crying;
  • moving house endlessly;
  • depression.

The causal role defined by "output those outputs when those inputs are inputted" is grief. Again, grief is not an event or even a state, but a causal role.

Example: Take finally the hardest case of all, pain.


What is pain? On a non-causal concept of pain, such as the concept Descartes had, pain is an experience or event. It may have causal effects, but it certainly isn't to be defined by those effects. Everything mental for Descartes is to be defined exclusively in terms of Second Meditation tests: the mind is what is immune to every trick of the Evil Genius, the mind is that which is indubitable and incorrigible, the mind is that whose contents we each introspect but which others can only infer by analogy, the mind is that to which we have first-person privileged access, the mind is what we are conscious of. So too pain, then. Pain is what is available to us introspectively, incorrigibly, consciously:

pain is defined as an experience which hurts.

Now we can fully appreciate the revolutionary move which the Functionalist is making. It is not innocuous to define pain as a function or causal role concept. It is a new definition of pain entirely. Indeed, it is a new kind of definition of pain. Pain is no longer to be defined as a special type of occurrence in the world - an occurrence to which we have a certain sort of access (privileged introspectible access) and which has certain kinds of properties in and by itself alone (it hurts rather than itches). Instead, pain is to be defined as a special type of causal position in the world - a "causal role" consisting of a certain set of causal outputs (what bodily behaviour it causes) and a certain set of causal inputs (what other states cause it). Its causal inputs and outputs entirely exhaust what the "it" is; there are no Meditation Two properties left over. Treating pain as a causal concept rather than an experiential concept gives us the likes of:

pain is defined as the function of receiving information about bodily damage, and warning the rest of the organism that its body is damaged, and calling in whatever internal repair mechanisms are needed, and instructing the body to take appropriate evasive action to prevent further damage.

Philosophers - and I expect all of you too - are strongly tempted to call this function an "inner state" of a person, and then to think that the job of cashing out the function pain is the job of spelling out the causes and effects of being in that inner state:

pain is defined as the inner state of a person which is brought about by bodily damage and which brings about pain-behaviour.

Putting it this way will still be a bit dangerous, however. For it will make it sound, once again, as if pain is a thing, specifically that it is the state of a structure which has so and so causes and effects. This loses the fact, once again, that "pain" isn't a thing or structure word at all. It is the name of a function - the function of doing such and such. The function of doing jobs 1, 2, 3, 4 above is a function, the function we call "pain" - exactly as the function of doing jobs A, B, C, D is a function, the function we call "mousetrap". True enough, in both cases it will be some piece of structure (a medium-sized piece of dry goods) which performs that function, but the function mousetrap, or pain, isn't the same thing as the structure which performs those functions (hungry cat, or nerves so and so). Indeed, no function is ever a thing; it is a function. Best overall to stick to the first, exclusively functional definition.

Keeping such warnings in mind, when we see somebody whose body is damaged, and who is writhing and groaning, we say "Tom is in pain". But we do not simply mean that Tom is exhibiting pain behaviour (this is the advance over behaviourism). Rather, we are conjecturing that there is a certain complex but unified set of functions - the set of functions we call "pain" - which various structural items in Tom's body are performing. In particular, we are conjecturing that the present behaviour we see in Tom is Tom's body performing, say, the function of taking evasive action - groaning and writhing is just what a meat machine does in order to perform the function of calling in the repair crew and avoiding more damage.

The point, again, is that when we spell out the whole complex function - the function of being in pain - we spell it out entirely in causal terms. There are causal inputs:

  • receiving information about bodily damage.

There are causal outputs to further internal functionaries:

  • warning the rest of the organism of damage,
  • calling in repair mechanisms.

There are causal outputs to external behaviour:

  • instructing the body to take evasive action.

The pain, again, is not some state of the structural item, the body or brain, which receives such inputs and sends such outputs - that's thing talk (piece of wood with spring and cheese). The pain, again, is just the system or collection of such functions to be performed (mousetrap).
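The causal inputs and outputs just listed can themselves be put in the interface-style terms used earlier. The sketch below is my own illustration (the function names are invented); it treats "pain" as a mapping from causal inputs to causal outputs, and asks of any given structure whether it realises that mapping:

```python
# "Pain" as a causal role: a mapping from causal inputs to causal
# outputs. The role is the mapping itself; which structure realises
# it (neurons, circuits, ...) is a separate empirical question.
PAIN_ROLE = {
    "bodily damage detected": [
        "warn rest of organism",
        "call in repair mechanisms",
        "take evasive action (writhing, groaning)",
    ],
}

def realises_pain(structure) -> bool:
    # A structure realises pain iff, for every input the role lists,
    # it produces every output the role lists.
    return all(
        set(outputs) <= set(structure(inp))
        for inp, outputs in PAIN_ROLE.items()
    )

def human_nervous_system(inp):
    if inp == "bodily damage detected":
        return ["warn rest of organism",
                "call in repair mechanisms",
                "take evasive action (writhing, groaning)"]
    return []

def pocket_calculator(inp):
    return []  # produces none of the required outputs

print(realises_pain(human_nervous_system))  # True
print(realises_pain(pocket_calculator))     # False
```

Note that `PAIN_ROLE` contains no mention of neurons: an Alpha Centaurian system that produced the same outputs from the same inputs would pass the test just as well.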

Take time to digest this!

You may like to try your own hand at it. Take some mental concept - regret, annoyance, depression, elation, excitement - and treat it seriously as a job description or function. List the causal inputs and causal outputs which spell it out as a function or causal role.

IV.C "Species-Chauvinism"

So the big change from identity theory to functionalism is the introduction of the notion of the function which some structure performs, over and above the idea that the universe is made up solely of physical structures. By now (I hope), that won't seem such a weird idea, but just common-sense. The actual advance from identity theory to functionalism was not accomplished by a few philosophers regaining a bit of their common-sense, however. The deed was done by a few philosophers mounting a specific argument against the earlier theory, an argument which seemed to leave taking on board the new concept of function as the only alternative. The objection which these philosophers mounted against the Week Nine theory of mind-brain identity was completely devastating, and the Week Ten theory of functionalism has completely taken its place. The basic idea of the objection is this:

  • Premise 1: We human beings on the third rock have brains. (That is, we have structures located in our heads composed of carbon-based neurons.)
  • Premise 2: It is probable that other beings in the universe don't have such brains (or any other structures made of neurons).
  • Premise 3: If mental events are just brain events, then any entity which doesn't have a brain can't, by definition, have mental events.
therefore
  • Conclusion 4: Mind-brain identity theory covertly restricts the ability to have mental life only to inhabitants of the third rock from the sun.

This is not just wildly implausible. Worse, it is wildly politically incorrect. Worse still, it is really bad science.

Functionalists detect a funny parochialism running through mind-brain identity theory. For all the parsimony business, it still doesn't quite behave as a properly scientific theory of something should behave. It closes off options way in advance of empirical research into those options. In this case, identity theory allows us to decide, after studying only human beings and their brains, that only human beings - or at least only beings with brains made of neurons - can possibly have feelings and emotions, be embarrassed or angry, have regrets and desires, suffer pain or anxiety or depression, think ahead or do things intelligently. In short, we already know, from just a hundred years or so of science, that mentality is limited to beings who have evolved on the third rock from the sun. Science has given us sufficient grounds to know that much right this minute, decades before we have bothered to tinker much with computers and centuries before we have put a ship very far into space. Mentality is so limited by mind-brain identity theory because that theory identifies minds with brains.

Look back at Week Nine. A key example of the sort of identity discovered by earlier science which the identity theory points to with pride is:

"Lightning is one and the same thing as a kind of electrical discharge." (Sober, pg. 293)

Modern neuroscience, the identity theory predicts, will discover identities such as this:

"Feeling pain is one and the same event as having the c-fibres in one's brain fire." (pg. 294)

(Sober immediately notes that this particular identity turned out to be untrue, but that identity theorists were quite happy to leave it to neurology - as to meteorology - to work out the details of which specific identities they hypothesise.) The general requirement is just that "feeling pain is identical with being in some physical state or other; it is for science to tell us precisely what this physical state is." (pg. 294) The "some physical state or other" here is to be decided upon by the science of neurology, of course. So, in the final analysis, the hypothesis which the identity theory proposes neuroscience will confirm is:

Feeling pain is one and the same event (or type of event) as being in a certain brain state.

This is the key. But where do we get off thinking that every being in the universe which is to be of interest to the theory of mind must have a brain? After all, brains are brains - carbon-based meshes of neurons perched on top of a spinal column, curiously evolved by some higher apes on the third planet of a sun in the outer rim of one galaxy, which present neuroscientists study by slicing and probing through our bony skulls. You may hesitate: surely every intelligent life form is going to have a brain, for crying out loud. Okay. Then simply shift the charge to what is called "neural chauvinism". Do you have any serious inclination to think of whatever it is you want to call the "brain" of an Alpha Centaurian as something made of neurons? Neurons really are an evolutionary product of life on earth alone.

If we seriously identify being in pain (the mental state) with having a brain (or neural structure) which is doing so and so (the physical state), then we declare once and for all, in advance, that no other entity in the universe can ever be in pain! The point is perfectly general, alas. Take any candidate identification of some mental state with some brain state. If that identity statement is true, then that mental state simply cannot be part of the life of any other entity in the universe. For only entities which have a brain could possibly have brain states. And only entities which have the brain state could possibly have the mental state.

Looked at in the cold light of scientific advance and its continuing erosion of a human-centred universe, how presumptuous to hang on to this last bit of human-centredness! Why shouldn't all manner of other higher life forms be in pain and have emotions and wishes, expect their neighbours and hate their kids - for all that they might well have a very different neurophysiology, one in which there might not be anything at all like a brain?

And it's not just hangovers of ethnocentrism which embarrass. Neural or species chauvinism violates every canon of good science. If it is all that hot on trotting alongside the proud march of science, then identity theory should be trying as hard as possible not to cut off further empirical investigation into the nature of mentality. But this is exactly what it does do. It is lousy science to construe minds as structural items in the first place, much less then to go on and identify those mental structures with neurophysiological structures. It is better science to define mentality exclusively in terms of a specific set of functions. Then no options whatever are closed off in advance. Maybe structures composed of neurons can perform some of those functions, but that doesn't mean the very same functions might not also be performed by silicon, or by any of an infinite range of other structures awaiting scientific investigation.


IV.D How to Remember Functionalism: Four Easy Mnemonics

That spells out the central components of the new theory of mind called functionalism (I hope). Now, how on earth are you going to remember it all? To remember what you get, remember what you always require and get with a good theory:

  • Concepts: Function versus Structure
  • Example: Mousetrap
  • Argument: Species Chauvinism
  • Advantages: Division of Labour

Let me spell these out so you know what you are memorising when you memorise them.

  • Remember what new concepts are being introduced by the new theory: Here my best advice is never to use the word "function" alone. Nor to use the word "structure" alone. Always use the longer phrase: "The function or job which some entity or structure performs". Structures are the medium-sized pieces of dry goods which make up the population of the universe. Functions or jobs are not themselves pieces of dry goods. They are the sorts of work to which such dry goods may be put, what we would stick in an organisational chart of a business, the duties we would teach a new person on the job, what gets specified in a job description rather than a telephone book.
  • Remember an example which encapsulates the distinction nicely: A mousetrap seems at first blush to be a perfectly good thing or object, a piece of structure or a medium-sized piece of dry goods, something which will show up on God's inventory of all the population of the universe. But a second look shows that it isn't any such thing. A mousetrap is really a kind of job or function which certain structures are good at performing and other structures are not good at performing. "Mousetrap" is not a thing word, it is a job description word ("trapper of mice" is the more revealing synonym). It wouldn't take much to give a pretty exact specification of that function: it is the function of attracting and immobilising and killing mice or some such. It also wouldn't take much to list a large number of genuine structures which can do that job: a piece of wood with a spring and cheese, a vigilant and hungry cat, a box with poison and a one-way door. Nothing is mysterious about functionalism provided you keep a specific example in mind. The structures are the objects with shapes and sizes and made up of chemicals. The functions are the jobs which those objects perform.
  • Remember the argument which forces one to this new "ism", the "species chauvinism" argument: As with any other "ism", we don't just wander into adopting functionalism as a theory of mind. We adopt it for a reason. And in philosophy the reason is always given in the form of an argument, with specific premises and conclusion. In the case of functionalism, the argument is that the best previous view - mind-brain identity theory - has a fundamental flaw as a scientific theory. It forecloses prematurely on empirical research by chauvinistically limiting the very possibility of minds exclusively to beings which possess brains. But brains are specific bits of structure composed of neurons, and neurons are further bits of structure which are carbon-based and have evolved on this planet and almost certainly only on this planet. To limit mentality just to beings with our kinds of structure is grossly chauvinistic. To limit any phenomenon in advance of empirical discovery is just as grossly bad science. This argument is what forces us to realise that we must develop a richer set of tools, quickly; the notion of structure alone can't handle mentality; we need the notion of function as well.
  • Remember the real advantage of having a distinction between functions and structures: Because "grief" is the name only of the job (the function), not the name of any of the things which might perform that job (the structures), functionalism becomes the first theory of mind to deliver us a tidy division of intellectual labour. This is not to be sneered at. We don't have to do the whole lot at once - neurology or computer architecture or exobiology - in order to find out what structures in different beings perform the job of being in grief or being grief-stricken. We can divide up the labour. We can leave it safely up to psychologists and philosophers to specify what the job is that we are referring to. We can then pass over this job description to the physical scientist and ask: "What entity in the human brain or in a computer or in a being on Alpha Centauri is the piece of structure which performs the function we have just specified or which satisfies the job description we have just given?" One part of the intellectual community spells out the functions which our mental or psychological predicates actually refer to - the words are short for job descriptions, remember, they do not refer to any structures. A quite different part of the intellectual community digs into the cranium or engineers silicon or flies off to the stars to discover which structures in which beings get those jobs done. Hand in hand science advances. Fade music and pan back.
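
The mousetrap example above can be put in programming terms, where the function/structure distinction is entirely familiar: an interface names a job, and any number of differently built classes can hold that job. The sketch below is purely illustrative; all the class and method names are my own inventions, not anything from the readings.

```python
# A sketch of the guide's mousetrap example: "Mousetrap" names a job
# description (an interface), not a thing; each class below is a different
# structure that performs that one job. All names here are illustrative.

from abc import ABC, abstractmethod

class Mousetrap(ABC):
    """The function: attract, immobilise and kill mice."""
    @abstractmethod
    def trap_mouse(self) -> str: ...

class SpringTrap(Mousetrap):
    """Structure 1: a piece of wood with a spring and cheese."""
    def trap_mouse(self) -> str:
        return "mouse caught by spring"

class HungryCat(Mousetrap):
    """Structure 2: a vigilant and hungry cat."""
    def trap_mouse(self) -> str:
        return "mouse caught by cat"

class PoisonBox(Mousetrap):
    """Structure 3: a box with poison and a one-way door."""
    def trap_mouse(self) -> str:
        return "mouse caught in box"

# Anything that performs the function counts as a mousetrap, whatever it
# is made of - the functionalist's "multiple realisability".
for trap in [SpringTrap(), HungryCat(), PoisonBox()]:
    print(trap.trap_mouse())
```

Note how the division of labour falls out of the sketch: one person can write (and argue over) the `Mousetrap` job description, while quite different people build the structures that satisfy it.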

IV.E The Turing Reading

The Turing reading (pp. 352-374) should be read as asking exactly the right functionalist question: If a computer can pass the "Imitation Game", what right have we to deny that it is thinking - for all that it has no spinal column or pus and custard? No right at all. Mind-brain identity theory therefore can't be right. We need to loosen things up to allow the "multiple realisability" of minds in other sorts of physical things than neural brains. (Compare Sober, pp. 300-301.)

Indeed, the Turing reading is excellent for keying so exactly into the central concept of function which the functionalist exploits. What does Turing do? He defines "thinking" purely and strictly in terms of the function of passing the imitation game. Any structure which can perform that function - any structure which can pass the imitation game - thereby has full right to be called a thinking thing. Why? Because thinking just is the function of passing the imitation game. It is not a property of a substance (a subjective experience had by a mental stuff, a spiking frequency of a neurological stuff). It is exactly a function, like trapper of mice or detector of edges. Look at it! Just try to find in the notion of "passing the imitation game" anything structural, anything other than a purely functional notion.

  • Premise 1: Thinking is really a function, not a property or event in a piece of structure.
  • Premise 2: The function thinking = the function passing the imitation game.
therefore
  • Conclusion 3: Any structure which performs the function of passing the imitation game is thereby performing the function of thinking. That is, any structure which passes the imitation game thereby thinks - no matter what that structure might be made of.
  • Premise 4: Computers made of valves (as they were in Turing's time) can or soon will pass the imitation game.
therefore
  • Conclusion 5: Computers made of valves think.

Smart chap Turing. And fifty years ahead of his time. We are talking about computers made with valves, after all. Roomfuls of valves.
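
Turing's move can also be rendered as a one-line definition, which makes its purely functional character vivid. This is a sketch of the syllogism above, not anything from Turing's paper: `passes_imitation_game` is a hypothetical stand-in predicate, and the classes are invented for illustration.

```python
# A sketch of Turing's purely functional definition of thinking.
# "passes_imitation_game" is a hypothetical stand-in: imagine it runs the
# interrogation and reports whether the judge was fooled.

def passes_imitation_game(structure) -> bool:
    # Hypothetical predicate: True if the judge cannot tell this
    # structure from a human interlocutor.
    return getattr(structure, "fools_the_judge", False)

def thinks(structure) -> bool:
    # Premise 2: the function "thinking" just IS the function "passing
    # the imitation game" - so the definition takes exactly one line,
    # and mentions no structure at all.
    return passes_imitation_game(structure)

class ValveComputer:
    """A roomful of valves, as in Turing's day: structure is irrelevant."""
    fools_the_judge = True

class Human:
    """Carbon-based neurons: just another structure that does the job."""
    fools_the_judge = True

print(thinks(ValveComputer()))  # True
print(thinks(Human()))          # True
```

Nothing in `thinks` inspects what the structure is made of; it asks only whether the function is performed. That is the whole of the functionalist point.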

Of course there is much else in the Turing reading than just his anticipation of the "structure" / "function" distinction. But as you read, notice how many of the standard objections he turns away by making the point about function again and again. Someone objects: "But you will never be able to make a computer - with valves, no less! - which can appreciate the taste of a bowl of fresh strawberries". This pushes in Cartesian subjective experience. For what else can a taste be but a Cartesian mental sensation? And what else can appreciating the taste be but a Cartesian mental act of assessing a mental sensation? Turing pushes subjective experience right back out again: they can both be functions. Tasting is the function of transmuting molecular differences into verbal differences. All sorts of entities can perform that function - entities built of tongues and neurons certainly, but entities built of strips of filter paper and valves too. Just start thinking about what you would have to do to make tasting part of the Imitation Game.






Content © 2000, 2001 Massey University | Design © 2000, 2001 Alun David Bestor