The Recent History of Materialism

Week 11 Reading for Foundations of Computing and Communication

 

From:

Searle, John R. (1992). The Rediscovery of the Mind. MIT Press, Cambridge, Massachusetts.

 

[p27]

 

Chapter 2

 

The Recent History of Materialism: The Same Mistake Over and Over

 

I. The Mystery of Materialism

 

What exactly is the doctrine known as "materialism" supposed to amount to? One might think that it would consist in the view that the microstructure of the world is entirely made up of material particles. The difficulty, however, is that this view is consistent with just about any philosophy of mind, except possibly the Cartesian view that in addition to physical particles there are "immaterial" souls or mental substances, spiritual entities that survive the destruction of our bodies and live on immortally. But nowadays, as far as I can tell, no one believes in the existence of immortal spiritual substances except on religious grounds. To my knowledge, there are no purely philosophical or scientific motivations for accepting the existence of immortal mental substances. So leaving aside opposition to religiously motivated belief in immortal souls, the question remains: What exactly is materialism in the philosophy of mind supposed to amount to? To what views is it supposed to be opposed?

 

If one reads the early works of our contemporaries who describe themselves as materialists ‑ J. J. C. Smart (1965), U. T. Place (1956), and D. Armstrong (1968), for example ‑ it seems clear that when they assert the identity of the mental with the physical, they are claiming something more than simply the denial of Cartesian substance dualism. It seems to me they wish to deny the existence of any irreducible mental phenomena in the world. They want to deny that there are any irreducible phenomenological properties, such as consciousness, or qualia. Now why are they so anxious to deny the

[p28] existence of irreducible intrinsic mental phenomena? Why don't they just concede that these properties are ordinary higher‑level biological properties of neurophysiological systems such as human brains?

 

I think the answer to that is extremely complex, but at least part of the answer has to do with the fact that they accept the traditional Cartesian categories, and along with the categories the attendant vocabulary with its implications. I think from this point of view to grant the existence and irreducibility of mental phenomena would be equivalent to granting some kind of Cartesianism. In their terms, it might be a "property dualism" rather than a "substance dualism," but from their point of view, property dualism would be just as inconsistent with materialism as substance dualism. By now it will be obvious that I am opposed to the assumptions behind their view. What I want to insist on, ceaselessly, is that one can accept the obvious facts of physics ‑ for example, that the world is made up entirely of physical particles in fields of force ‑ without at the same time denying the obvious facts about our own experiences ‑ for example, that we are all conscious and that our conscious states have quite specific irreducible phenomenological properties. The mistake is to suppose that these two theses are inconsistent, and that mistake derives from accepting the presuppositions behind the traditional vocabulary. My view is emphatically not a form of dualism. I reject both property and substance dualism; but precisely for the reasons that I reject dualism, I reject materialism and monism as well. The deep mistake is to suppose that one must choose between these views.

 

It is the failure to see the consistency of naive mentalism with naive physicalism that leads to those very puzzling discussions in the early history of this subject in which the authors try to find a "topic‑neutral" vocabulary or to avoid something they call "nomological danglers" (Smart 1965). Notice that nobody feels that, say, digestion has to be described in a "topic‑neutral" vocabulary. Nobody feels the urge to say, "There is

[p29] something going on in me which is like what goes on when I digest pizza." Though they do feel the urge to say, "There is something going on in me which is like what goes on when I see an orange." The urge is to try to find a description of the phenomena that doesn't use the mentalistic vocabulary. But what is the point of doing that? The facts remain the same. The fact is that the mental phenomena have mentalistic properties, just as what goes on in my stomach has digestive properties. We don't get rid of those properties simply by finding an alternative vocabulary. Materialist philosophers wish to deny the existence of mental properties without denying the reality of some phenomena that underlie the use of our mentalistic vocabulary. So they have to find an alternative vocabulary to describe the phenomena.1 But on my account, this is all a waste of time. One should just grant the mental (hence, physical) phenomena to start with, in the same way that one grants the digestive phenomena in the stomach.

 

In this chapter I want to examine, rather briefly, the history of materialism over the past half century. I believe that this history exhibits a rather puzzling but very revealing pattern of argument and counterargument that has gone on in the philosophy of mind since the positivism of the 1930s. This pattern is not always visible on the surface. Nor is it even visible on the surface that the same issues are being talked about. But I believe that, contrary to surface appearances, there really has been only one major topic of discussion in the philosophy of mind for the past fifty years or so, and that is the mind‑body problem. Often philosophers purport to talk about something else ‑ such as the analysis of belief or the nature of consciousness ‑ but it almost invariably emerges that they are not really interested in the special features of belief or consciousness. They are not interested in how believing differs from supposing and hypothesizing, but rather they want to test their convictions about the mind‑body problem against the example of belief. Similarly with consciousness: There is surprisingly little discussion of consciousness as such; rather,

[p30] materialists see consciousness as a special "problem" for a materialist theory of mind. That is, they want to find a way to "handle" consciousness, given their materialism.2

 

The pattern that these discussions almost invariably seem to take is the following. A philosopher advances a materialist theory of the mind. He does this from the deep assumption that some version of the materialist theory of the mind must be the correct one ‑ after all, do we not know from the discoveries of science that there is really nothing in the universe but physical particles and fields of forces acting on physical particles? And surely it must be possible to give an account of human beings in a way that is consistent and coherent with our account of nature generally. And surely, does it not follow from that that our account of human beings must be thoroughgoing materialism? So the philosopher sets out to give a materialist account of the mind. He then encounters difficulties. It always seems that he is leaving something out. The general pattern of discussion is that criticisms of the materialist theory usually take a more or less technical form, but in fact, underlying the technical objections is a much deeper objection, and the deeper objection can be put quite simply: The theory in question has left out the mind; it has left out some essential feature of the mind, such as consciousness or "qualia" or semantic content. One sees this pattern over and over. A materialist thesis is advanced. But the thesis encounters difficulties; the difficulties take different forms, but they are always manifestations of an underlying deeper difficulty, namely, the thesis in question denies obvious facts that we all know about our own minds. And this leads to ever more frenzied efforts to stick with the materialist thesis and try to defeat the arguments put forward by those who insist on preserving the facts. After some years of desperate maneuvers to account for the difficulties, some new development is put forward that allegedly solves the difficulties, but then we find that it encounters new difficulties, only the new difficulties are not so new ‑ they are really the same old difficulties.

 

[p31]

 

If we were to think of the philosophy of mind over the past fifty years as a single individual, we would say of that person that he is a compulsive neurotic, and his neurosis takes the form of repeating the same pattern of behavior over and over. In my experience, the neurosis cannot be cured by a frontal assault. It is not enough just to point out the logical mistakes that are being made. Direct refutation simply leads to a repetition of the pattern of neurotic behavior. What we have to do is go behind the symptoms and find the unconscious assumptions that led to the behavior in the first place. I am now convinced, after several years of discussing these issues, that with very few exceptions all of the parties to the disputes in the current issues in the philosophy of mind are captives of a certain set of verbal categories. They are the prisoners of a certain terminology, a terminology that goes back at least to Descartes if not before, and in order to overcome the compulsive behavior, we will have to examine the unconscious origins of the disputes. We will have to try to uncover what it is that everyone is taking for granted to get the dispute going and keep it going.

 

I would not wish my use of a therapeutic analogy to be taken to imply a general endorsement of psychoanalytic modes of explanation in intellectual matters. So let's vary the therapeutic metaphor as follows: I want to suggest that my present enterprise is a bit like that of an anthropologist undertaking to describe the exotic behavior of a distant tribe. The tribe has a set of behavior patterns and a metaphysic that we must try to uncover and understand. It is easy to make fun of the antics of the tribe of philosophers of mind, and I must confess that I have not always been able to resist the temptation to do so. But at the beginning, at least, I must insist that the tribe is us - we are the possessors of the metaphysical assumptions that make the behavior of the tribe possible. So before I actually present an analysis and a criticism of the behavior of the tribe, I want to present an idea that we should all find acceptable, because the idea is really part of our contemporary scientific

[p32] culture. And yet, I will later on argue that the idea is incoherent; it is simply another symptom of the same neurotic pattern.

 

Here is the idea. We think the following question must make sense: How is it possible for unintelligent bits of matter to produce intelligence? How is it possible for the unintelligent bits of matter in our brains to produce the intelligent behavior that we all engage in? Now that seems to us like a perfectly intelligible question. Indeed, it seems like a very valuable research project, and in fact it is a research project that is widely pursued3 and, incidentally, very well funded.

 

Because we find the question intelligible, we find the following answer plausible: Unintelligent bits of matter can produce intelligence because of their organization. The unintelligent bits of matter are organized in certain dynamic ways, and it is the dynamic organization that is constitutive of the intelligence. Indeed, we can actually artificially reproduce the form of dynamic organization that makes intelligence possible. The underlying structure of that organization is called "a computer"; the project of programming the computer is called "artificial intelligence"; and when operating, the computer produces intelligence because it is implementing the right computer program with the right inputs and outputs.

 

Now doesn't that story sound at least plausible to you? I must confess that it can be made to sound very plausible to me, and indeed I think if it doesn't sound even remotely plausible to you, you are probably not a fully socialized member of our contemporary intellectual culture. Later on I will show that both the question and the answer are incoherent. When we pose the question and give that answer in these terms, we really haven't the faintest idea of what we are talking about. But I present this example here because I want it to seem natural, indeed promising, as a research project.

 

I said a few paragraphs back that the history of philosophical materialism in the twentieth century exhibits a curious pattern, a pattern in which there is a recurring tension between the materialist's urge to give an account of mental phenomena that

[p33] makes no reference to anything intrinsically or irreducibly mental, on the one hand, and the general intellectual requirement that every investigator faces of not saying anything that is obviously false, on the other. To let this pattern show itself, I want now to give a very brief sketch, as neutrally and objectively as I can, of the pattern of theses and responses that materialists have exemplified. The aim of what follows is to provide evidence for the claims made in chapter 1 by giving actual illustrations of the tendencies that I identified.

 

 

II. Behaviorism

 

In the beginning was behaviorism. Behaviorism came in two varieties: "methodological behaviorism" and "logical behaviorism." Methodological behaviorism is a research strategy in psychology to the effect that a science of psychology should consist in discovering the correlations between stimulus inputs and behavioral outputs (Watson 1925). A rigorous empirical science, according to this view, makes no reference to any mysterious introspective or mentalistic items.

 

Logical behaviorism goes even a step further and insists that there are no such items to refer to, except insofar as they exist in the form of behavior. According to logical behaviorism, it is a matter of definition, a matter of logical analysis, that mental terms can be defined in terms of behavior, that sentences about the mind can be translated without any residue into sentences about behavior (Hempel 1949; Ryle 1949). According to the logical behaviorist, many of the sentences in the translation will be hypothetical in form, because the mental phenomena in question consist not of actual occurring patterns of behavior, but rather of dispositions to behavior. Thus, according to a standard behaviorist account, to say that John believes that it is going to rain is simply to say that John will be disposed to close the windows, put the garden tools away, and carry an umbrella if he goes out. In the material mode of speech, behaviorism claims that the mind is just behavior and dispositions to behavior. In the formal mode of speech, it consists in

[p34] the view that sentences about mental phenomena can be translated into sentences about actual and possible behavior.

 

Objections to behaviorism can be divided into two kinds: commonsense objections and more or less technical objections. An obvious commonsense objection is that the behaviorist seems to leave out the mental phenomena in question. There is nothing left for the subjective experience of thinking or feeling in the behaviorist account; there are just patterns of objectively observable behavior.

 

Several more or less technical objections have been made to logical behaviorism. First, the behaviorists never succeeded in making the notion of a "disposition" fully clear. No one ever succeeded in giving a satisfactory account of what sorts of antecedents there would have to be in the hypothetical statements to produce an adequate dispositional analysis of mental terms in behavioral terms (Hampshire 1950; Geach 1957). Second, there seemed to be a problem about a certain form of circularity in the analysis: to give an analysis of belief in terms of behavior, it seems that one has to make reference to desire; to give an analysis of desire, it seems that one has to make reference to belief (Chisholm 1957). Thus, to consider our earlier example, we are trying to analyze the hypothesis that John believes that it is going to rain in terms of the hypothesis that if the windows are open, John will close them, and other similar hypotheses. We want to analyze the categorical statement that John believes that it is going to rain in terms of certain hypothetical statements about what John will do under what conditions. However, John's belief that it is going to rain will be manifested in the behavior of closing the windows only if we assume such additional hypotheses as that John doesn't want the rainwater to come in through the windows and John believes that open windows admit rainwater. If there is nothing he likes better than rain streaming in through the windows, he will not be disposed to close them. Without some such hypothesis about John's desires (and his other beliefs), it looks as if we cannot begin to analyze any sentence about his original beliefs. Similar remarks can be made about the analysis of desires; such analyses seem to require reference to beliefs.

 

[p35]

 

A third technical objection to behaviorism was that it left out the causal relations between mental states and behavior (Lewis 1966). By identifying, for example, the pain with the disposition to pain behavior, behaviorism leaves out the fact that pains cause behavior. Similarly, if we try to analyze beliefs and desires in terms of behavior, we are no longer able to say that beliefs and desires cause behavior.

 

Though perhaps most of the discussions in the philosophical literature concern the "technical" objections, in fact it is the commonsense objections that are the most embarrassing. The absurdity of behaviorism lies in the fact that it denies the existence of any inner mental states in addition to external behavior (Ogden and Richards 1926). And this, we know, runs dead counter to our ordinary experiences of what it is like to be a human being. For this reason, behaviorists were sarcastically accused of "feigning anesthesia"4 and were the target of a number of bad jokes (e.g., first behaviorist to second behaviorist just after making love, "It was great for you, how was it for me?"). This commonsense objection to behaviorism was sometimes put in the form of arguments appealing to our intuitions. One of these is the superactor/superspartan objection (Putnam 1963). One can easily imagine an actor of superior abilities who could give a perfect imitation of the behavior of someone in pain even though the actor in question had no pain, and one can also imagine a superspartan who was able to endure pain without giving any sign of being in pain.

 

 

III. Type Identity Theories

 

Logical behaviorism was supposed to be an analytic truth. It asserted a definitional connection between mental and behavioral concepts. In the recent history of materialist philosophies of mind it was replaced by the "identity theory," which claimed that as a matter of contingent, synthetic, empirical fact, mental states were identical with states of the brain and of the central nervous system (Place 1956; Smart 1965). According to the identity theorists, there was no logical absurdity in supposing that there might be separate mental

[p36] phenomena, independent of material reality; it just turned out as a matter of fact that our mental states, such as pains, were identical with states of our nervous system. In this case, pains were claimed to be identical with stimulations of C‑fibers.5 Descartes might have been right in thinking that there were separate mental phenomena; it just turned out as a matter of fact that he was wrong. Mental phenomena were nothing but states of the brain and central nervous system. The identity between the mind and the brain was supposed to be an empirical identity, just as the identity between lightning and electrical discharges (Smart 1965), or between water and H2O molecules (Feigl 1958; Shaffer 1961), were supposed to be empirical and contingent identities. It just turned out as a matter of scientific discovery that lightning bolts were nothing but streams of electrons, and that water in all its various forms was nothing but collections of H2O molecules.

 

As with behaviorism, we can divide the difficulties of the identity theory into the "technical" objections and the commonsense objections. In this case, the commonsense objection takes the form of a dilemma. Suppose that the identity theory is, as its supporters claim, an empirical truth. If so, then there must be logically independent features of the phenomenon in question that enable it to be identified on the left‑hand side of the identity statement in a different way from the way it is identified on the right‑hand side of the identity statement (Stevenson 1960). If, for example, pains are identical with neurophysiological events, then there must be two sets of features, pain features and neurophysiological features, and these two sets of features enable us to nail down both sides of the synthetic identity statement. Thus, for example, suppose we have a statement of the form:

 

Pain event x is identical with neurophysiological event y.

 

We understand such a statement because we understand that one and the same event has been identified in virtue of two different sorts of properties, pain properties and neurophysiological properties. But if so, then we seem to be confronted with a

[p37] dilemma: either the pain features are subjective, mental, introspective features, or they are not. Well if they are, then we have not really gotten rid of the mind. We are still left with a form of dualism, albeit property dualism rather than substance dualism. We are still left with sets of mental properties, even though we have gotten rid of mental substances. If on the other hand we try to treat "pain" as not naming a subjective mental feature of certain neurophysiological events, then its meaning is left totally mysterious and unexplained. As with behaviorism, we have left out the mind. For we now have no way to specify these subjective mental features of our experiences.

 

I hope it is clear that this is just a repetition of the commonsense objection to behaviorism. In this case we have put it in the form of a dilemma: either materialism of the identity variety leaves out the mind or it does not; if it does, it is false; if it does not, it is not materialism.

 

The Australian identity theorists thought they had an answer to this objection. The answer was to try to describe the so-called mental features in a "topic‑neutral" vocabulary. The idea was to get a description of the mental features that did not mention the fact that they were mental (Smart 1965). This can surely be done: One can mention pains without mentioning the fact that they are pains, just as one can mention airplanes without mentioning the fact that they are airplanes. That is, one can mention an airplane by saying, "a certain piece of property belonging to United Airlines," and one can refer to a yellow‑orange afterimage by saying, "a certain event going on in me that is like the event that goes on in me when I see an orange." But the fact that one can mention a phenomenon without specifying its essential characteristics doesn't mean that it doesn't exist and doesn't have those essential characteristics. It still is a pain or an afterimage, or an airplane, even if our descriptions fail to mention these facts.

 

Another more "technical" objection to the identity theory was this: it seems unlikely that for every type of mental state there will be one and only one type of neurophysiological state with which it is identical. Even if my belief that Denver is the

[p38] capital of Colorado is identical with a certain state of my brain, it seems too much to expect that everyone who believes that Denver is the capital of Colorado must have an identical neurophysiological configuration in his or her brain (Block and Fodor 1972; Putnam 1967). And across species, even if it is true that in all humans pains are identical with human neurophysiological events, we don't want to exclude the possibility that in some other species there might be pains that were identical with some other type of neurophysiological configuration. It seems, in short, too much to expect that every type of mental state is identical with some type of neurophysiological state. And indeed, it seems a kind of "neuronal chauvinism" (Block 1978) to suppose that only entities with neurons like our own can have mental states.

 

A third "technical" objection to the identity theory derives from Leibniz's law. If two events are identical only if they have all of their properties in common, then it seems that mental states cannot be identical with physical states, because mental states have certain properties that physical states do not have (Smart 1965; Shaffer 1961). For example, my pain is in my toe, but my corresponding neurophysiological state goes all the way from the toe to the thalamus and beyond. So where is the pain, really? The identity theorists did not have much difficulty with this objection. They pointed out that the unit of analysis is really the experience of having pain, and that experience (together with the experience of the entire body image) presumably takes place in the central nervous system (Smart 1965). On this point it seems to me that materialists are absolutely right.

 

A more radical technical objection to the identity theory was posed by Saul Kripke (1971), with the following modal argument: If it were really true that pain is identical with C‑fiber stimulation, then it would have to be a necessary truth, in the same way that the identity statement "Heat is identical with the motion of molecules" is a necessary truth. This is because in both cases the expressions on either side of the identity statement are "rigid designators." By this he means that each

[p39] expression identifies the object it refers to in terms of its essential properties. This feeling of pain that I now have is essentially a feeling of pain because anything identical with this feeling would have to be a pain, and this brain state is essentially a brain state because anything identical with it would have to be a brain state. So it appears that the identity theorist who claims that pains are certain types of brain states, and that this particular pain is identical with this particular brain state, would be forced to hold both that it is a necessary truth that in general pains are brain states, and that it is a necessary truth that this particular pain is a brain state. But neither of these seems right. It does not seem right to say either that pains in general are necessarily brain states, or that my present pain is necessarily a brain state; because it seems easy to imagine that some sort of being could have brain states like these without having pains and pains like these without being in these sorts of brain states. It is even possible to conceive a situation in which I had this very pain without having this very brain state, and in which I had this very brain state without having a pain.

 

Debate about the force of this modal argument went on for some years and still continues (Lycan 1971, 1987; Sher 1977). From the point of view of our present interests, I want to call attention to the fact that it is essentially the commonsense objection in a sophisticated guise. The commonsense objection to any identity theory is that you can't identify anything mental with anything nonmental, without leaving out the mental. Kripke's modal argument is that the identification of mental states with physical states would have to be necessary, and yet it cannot be necessary, because the mental could not be necessarily physical. As Kripke says, quoting Butler, "Everything is what it is and not another thing."6

 

In any case, the idea that any type of mental state is identical with some type of neurophysiological state seemed really much too strong. But it seemed that the underlying philosophical motivation of materialism could be preserved with a much weaker thesis, the thesis that for every token instance of a mental

[p40] state, there will be some token neurophysiological event with which that token instance is identical. Such views were called "token‑token identity theories" and they soon replaced type‑type identity theories. Some authors indeed felt that a token‑token identity theory could evade the force of Kripke's modal arguments.7

 

 

IV. Token‑Token Identity Theories

 

The token identity theorists inherited the commonsense objection to type identity theories, the objection that they still seemed to be left with some form of property dualism; but they had some additional difficulties of their own.

 

One was this. If two people who are in the same mental state are in different neurophysiological states, then what is it about those different neurophysiological states that makes them the same mental state? If you and I both believe that Denver is the capital of Colorado, then what is it that we have in common that makes our different neurophysiological squiggles the same belief? Notice that the token identity theorists cannot give the commonsense answer to this question; they cannot say that what makes two neurophysiological events the same type of mental event is that they have the same type of mental features, because it was precisely the elimination or reduction of these mental features that materialism sought to achieve. They must find some nonmentalistic answer to the question, "What is it about two different neurophysiological states that makes them into tokens of the same type of mental state?" Given the entire tradition within which they were working, the only plausible answer was one in the behaviorist style. Their answer was that a neurophysiological state was a particular mental state in virtue of its function, and this naturally leads to the next view.

 

 

V. Black Box Functionalism

 

What makes two neurophysiological states into tokens of the same type of mental state is that they perform the same function in the overall life of the organism. The notion of a

[p41] function is somewhat vague, but the token identity theorists fleshed it out as follows. Two different brain‑state tokens would be tokens of the same type of mental state iff the two brain states had the same causal relations to the input stimulus that the organism receives, to its various other "mental" states, and to its output behavior (Lewis 1972; Grice 1975). Thus, for example, my belief that it is about to rain will be a state in me which is caused by my perception of the gathering of clouds and the increasing thunder; and together with my desire that the rain not come in the windows, it will in turn cause me to close them. Notice that by identifying mental states in terms of their causal relations ‑ not only to input stimuli and output behavior, but also to other mental states ‑ the token identity theorists immediately avoided two objections to behaviorism. One was that behaviorism had neglected the causal relations of mental states, and the second was that there was a circularity in behaviorism, in that beliefs had to be analyzed in terms of desires, desires in terms of beliefs. The token identity theorist of the functionalist stripe can cheerfully accept this circularity by arguing that the entire system of concepts can be cashed out in terms of the system of causal relations.

 

Functionalism had a beautiful technical device with which to make this system of relations completely clear without invoking any "mysterious mental entities." This device is called a Ramsey sentence,8 and it works as follows: Suppose that John has the belief that p, and that this is caused by his perception that p; and, together with his desire that q, the belief that p causes his action a. Because we are defining beliefs in terms of their causal relations, we can eliminate the explicit use of the word "belief" in the previous sentence, and simply say that there is a something that stands in such‑and‑such causal relations. Formally speaking, the way we eliminate the explicit mention of belief is simply by putting a variable, "x," in place of any expression referring to John's belief that p; and we preface the whole sentence with an existential quantifier (Lewis 1972). The whole story about John's belief that p can then be told as follows:

 


[p42]

 

(∃x)(John has x & x is caused by the perception that p & x together with a desire that q causes action a)

 

Further Ramsey sentences are supposed to get rid of the occurrence of such remaining psychological terms as "desire" and "perception." Once the Ramsey sentences are spelled out in this fashion, it turns out that functionalism has the crucial advantage of showing that there is nothing especially mental about mental states. Talk of mental states is just talk of a neutral set of causal relations; and the apparent "chauvinism" of type‑type identity theories ‑ that is, the chauvinism of supposing that only systems with brains like ours can have mental states ‑ is now avoided by this much more "liberal" view.9 Any system whatever, no matter what it was made of, could have mental states provided only that it had the right causal relations between its inputs, its inner functioning, and its outputs. Functionalism of this variety says nothing about how the belief works to have the causal relations that it does. It just treats the mind as a kind of a black box in which these various causal relations occur, and for that reason it was sometimes labeled "black box functionalism."
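
For illustration, the further step might look as follows. This is a simplified sketch of my own in the notation just used, not Lewis's exact construction; here "y" stands in for the perception that p and "z" for the desire that q:

(∃x)(∃y)(∃z)(John has x & John has y & John has z & x is caused by y & x together with z causes action a)

On this fully Ramsified version the psychological vocabulary has disappeared entirely; x, y, and z are identified by nothing more than their positions in the pattern of causal relations.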

 

Objections to black box functionalism revealed the same mixture of the commonsensical and the technical that we have seen before. The commonsense objection was that the functionalist seems to leave out the qualitative subjective feel of at least some of our mental states. There are certain quite specific qualitative experiences involved in seeing a red object or having a pain in the back, and just describing these experiences in terms of their causal relations leaves out these special qualia. A proof of this was offered as follows: Suppose that one section of the population had their color spectra reversed in such a way that, for example, the experience they call "seeing red" a normal person would call "seeing green"; and what they call "seeing green" a normal person would call "seeing red" (Block and Fodor 1972). Now we might suppose that this "spectrum inversion" is entirely undetectable by any of the usual color blindness tests, since the abnormal group makes exactly the

[p43] same color discriminations in response to exactly the same stimuli as the rest of the population. When asked to put the red pencils in one pile and the green pencils in another they do exactly what the rest of us would do; it looks different to them on the inside, but there is no way to detect this difference from the outside.

 

Now if this possibility is even intelligible to us ‑ and it surely is ‑ then black box functionalism must be wrong in supposing that neutrally specified causal relations are sufficient to account for mental phenomena; for such specifications leave out a crucial feature of many mental phenomena, namely, their qualitative feel.

 

A related objection was that a huge population, say the entire population of China, might behave so as to imitate the functional organization of a human brain to the extent of having the right input‑output relations and the right pattern of inner cause‑and‑effect relations. But all the same, the system would still not feel anything as a system. The entire population of China would not feel a pain just by imitating the functional organization appropriate to pain (Block 1978).

 

Another more technical‑sounding objection to black box functionalism was to the "black box" part: Functionalism so defined failed to state in material terms what it is about the different physical states that gives different material phenomena the same causal relations. How does it come about that these quite different physical structures are causally equivalent?

 

 

VI. Strong Artificial Intelligence

 

At this point there occurred one of the most exciting developments in the entire two‑thousand‑year history of materialism. The developing science of artificial intelligence provided an answer to this question: different material structures can be mentally equivalent if they are different hardware implementations of the same computer program. Indeed, given this answer, we can see that the mind just is a computer program

[p44] and the brain is just one of the indefinite range of different computer hardwares (or "wetwares") that can have a mind. The mind is to the brain as the program is to the hardware (Johnson‑Laird 1988). Artificial intelligence and functionalism coalesced, and one of the most stunning aspects of this union was that it turned out that one can be a thoroughgoing materialist about the mind and still believe, with Descartes, that the brain does not really matter to the mind. Because the mind is a computer program, and because a program can be implemented on any hardware whatever (provided only that the hardware is powerful and stable enough to carry out the steps in the program), the specifically mental aspects of the mind can be specified, studied, and understood without knowing how the brain works. Even if you are a materialist, you do not have to study the brain to study the mind.

 

This idea gave birth to the new discipline of "cognitive science." I will have more to say about it later (in chapters 7, 9, and 10); at this point I am just tracing the recent history of materialism. Both the discipline of artificial intelligence and the philosophical theory of functionalism converged on the idea that the mind was just a computer program. I have baptized this view "strong artificial intelligence" (Searle 1980a), and it was also called "computer functionalism" (Dennett 1978).

 

Objections to strong AI seem to me to exhibit the same mixture of commonsense objections and more or less technical objections that we found in the other cases. The technical difficulties and objections to artificial intelligence in either its strong or weak version are numerous and complex. I will not attempt to summarize them. In general, they all have to do with certain difficulties in programming computers in a way that would enable them to satisfy the Turing test. Within the AI camp itself, there were always difficulties such as the "frame problem" and the inability to get adequate accounts of "nonmonotonic reasoning" that would mirror actual human behavior. From outside the AI camp, there were objections such as those of Hubert Dreyfus (1972) to the effect that the

[p45] way the human mind works is quite different from the way a computer works.

 

The commonsense objection to strong AI was simply that the computational model of the mind left out the crucial things about the mind such as consciousness and intentionality. I believe the best‑known argument against strong AI was my Chinese room argument (Searle 1980a) that showed that a system could instantiate a program so as to give a perfect simulation of some human cognitive capacity, such as the capacity to understand Chinese, even though that system had no understanding of Chinese whatever. Simply imagine that someone who understands no Chinese is locked in a room with a lot of Chinese symbols and a computer program for answering questions in Chinese. The input to the system consists in Chinese symbols in the form of questions; the output of the system consists in Chinese symbols in answer to the questions. We might suppose that the program is so good that the answers to the questions are indistinguishable from those of a native Chinese speaker. But all the same, neither the person inside nor any other part of the system literally understands Chinese; and because the programmed computer has nothing that this system does not have, the programmed computer, qua computer, does not understand Chinese either. Because the program is purely formal or syntactical and because minds have mental or semantic contents, any attempt to produce a mind purely with computer programs leaves out the essential features of the mind.
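
The purely formal character of such a program can be made vivid with a toy example. What follows is a minimal sketch in Python of my own devising, not part of the original argument, and its tiny rule book is invented: the system answers Chinese questions by string matching alone, and nothing in it stands for the meaning of the symbols it shuffles.

# A purely formal "Chinese room": answers are produced by matching
# input symbols against a rule book, with no understanding anywhere.
# The rule book is a hypothetical stand-in for the program in the story.
RULE_BOOK = {
    "你叫什么名字?": "我没有名字。",      # "What is your name?" -> "I have no name."
    "天空是什么颜色?": "天空是蓝色的。",  # "What color is the sky?" -> "The sky is blue."
}

def answer(question: str) -> str:
    # Pure syntax: the symbols are treated as uninterpreted strings,
    # and the function would work identically on any strings whatever.
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "Please say it again."

print(answer("天空是什么颜色?"))  # prints: 天空是蓝色的。

However elaborate the rule book is made, the moral is unchanged: the manipulation is defined over the shapes of the symbols, never over their meanings.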

 

In addition to behaviorism, type identity theories, token identity theories, functionalism, and strong AI, there were other theories in the philosophy of mind within the general materialist tradition. One of these, which dates back to the early 1960s in the work of Paul Feyerabend (1963) and Richard Rorty (1965), has recently been revived in different forms by such authors as P. M. Churchland (1981) and S. Stich (1983). It is the view that mental states don't exist at all. This view is called "eliminative materialism" and I now turn to it.

 

[p46]

 

VII. Eliminative Materialism

 

In its most sophisticated version, eliminative materialism argued as follows: our commonsense beliefs about the mind constitute a kind of primitive theory, a "folk psychology." But as with any theory, the entities postulated by the theory can only be justified to the extent that the theory is true. Just as the failure of the phlogiston theory of combustion removed any justification for believing in the existence of phlogiston, so the failure of folk psychology removes the rationale for folk psychological entities. Thus, if it turns out that folk psychology is false, then we would be unjustified in believing in the existence of beliefs, desires, hopes, fears, etc. According to the eliminative materialists, it seems very likely that folk psychology will turn out to be false. It seems likely that a "mature cognitive science" will show that most of our commonsense beliefs about mental states are completely unjustified. This result would have the consequence that the entities that we have always supposed to exist, our ordinary mental entities, do not really exist. And therefore, we have at long last a theory of mind that simply eliminates the mind. Hence, the expression "eliminative materialism."

 

A related argument used in favor of "eliminative materialism" seems to me so breathtakingly bad that I fear I must be misunderstanding it. As near as I can tell, here is how it goes:

 

Imagine that we had a perfect science of neurobiology. Imagine that we had a theory that really explained how the brain worked. Such a theory would cover the same domain as folk psychology, but would be much more powerful. Furthermore, it seems very unlikely that our ordinary folk psychological concepts, such as belief and desire, hope, fear, depression, elation, pain, etc., would exactly match or even remotely match the taxonomy provided by our imagined perfect science of neurobiology. In all probability there would be no place in this neurobiology for expressions like "belief," "fear," "hope" and "desire," and no smooth reduction of these supposed phenomena would be possible.

 

[p47]

 

That is the premise. Here is the conclusion:

 

Therefore, the entities purportedly named by the expressions of folk psychology, beliefs, hopes, fears, desires, etc., do not really exist.

 

To see how bad this argument really is, just imagine a parallel argument from physics:

 

Consider our existing science of theoretical physics. Here we have a theory that explains how physical reality works, and is vastly superior to our commonsense theories by all the usual criteria. Physical theory covers the same domain as our commonsense theories of golf clubs, tennis rackets, Chevrolet station wagons, and split‑level ranch houses. Furthermore, our ordinary folk physical concepts such as "golf club," "tennis racket," "Chevrolet station wagon," and "split‑level ranch house" do not exactly, or even remotely, match the taxonomy of theoretical physics. There simply is no use in theoretical physics for any of these expressions and no smooth type reduction of these phenomena is possible. The way that an ideal physics ‑ indeed the way that our actual physics ‑ taxonomizes reality is really quite different from the way our ordinary folk physics taxonomizes reality.

 

Therefore, split‑level ranch houses, tennis rackets, golf clubs, Chevrolet station wagons, etc., do not really exist.

 

I have not seen this mistake discussed in the literature. Perhaps it is so egregious that it has simply been ignored. It rests on the obviously false premise that for any empirical theory and corresponding taxonomy, unless there is a type-type reduction of the entities taxonomized to the entities of better theories of basic science, the entities do not exist. If you have any doubts that this premise is false, just try it out on anything you see around you ‑ or on yourself!10

 

With eliminative materialism, once again, we find the same pattern of technical and commonsense objections that we noted earlier. The technical objections have to do with the fact

[p48] that folk psychology, if it is a theory, is nonetheless not a research project. It isn't itself a rival field of scientific research, and indeed, the eliminative materialists who attack folk psychology, according to their critics, are often unfair. According to its defenders, folk psychology isn't such a bad theory after all; many of its central tenets are quite likely to turn out to be true. The commonsense objection to eliminative materialism is just that it seems to be crazy. It seems crazy to say that I never felt thirst or desire, that I never had a pain, or that I never actually had a belief, or that my beliefs and desires don't play any role in my behavior. Unlike the earlier materialist theories, eliminative materialism doesn't so much leave out the mind, it denies the existence of anything to leave out in the first place. When confronted with the challenge that eliminative materialism seems too insane to merit serious consideration, its defenders almost invariably invoke the heroic‑age‑of‑science maneuver (P. S. Churchland 1987). That is, they claim that giving up the belief that we have beliefs is analogous to giving up the belief in a flat earth or sunsets, for example.

 

It is worth pointing out in this entire discussion that a certain paradoxical asymmetry has come up in the history of materialism. Earlier type‑type identity theories argued that we could get rid of mysterious, Cartesian mental states because such states were nothing but physical states (nothing "over and above" physical states); and they argued this on the assumption that types of mental states could be shown to be identical with types of physical states, that we would get a match between the deliverances of neurobiology and our ordinary notions such as pain and belief. Now in the case of eliminative materialism, it is precisely the alleged failure of any such match that is regarded as the vindication of the elimination of these mental states in favor of a thoroughgoing neurobiology. Earlier materialists argued that there aren't any such things as separate mental phenomena, because mental phenomena are identical with brain states. More recent materialists argue that there aren't any such things as separate mental phenomena

[p49] because they are not identical with brain states. I find this pattern very revealing, and what it reveals is an urge to get rid of mental phenomena at any cost.

 

 

VIII. Naturalizing Content

 

After half a century of this recurring pattern in debates about materialism, one might suppose that the materialists and the dualists would think there is something wrong with the terms of the debate. But so far the induction seems not to have occurred to either side. As I write this, the same pattern is being repeated in current attempts to "naturalize" intentional content.

 

Strategically the idea is to carve off the problem of consciousness from the problem of intentionality. Perhaps, one admits, consciousness is irreducibly mental and thus not subject to scientific treatment, but maybe consciousness does not matter much anyway and we can get along without it. We need only to naturalize intentionality, where "to naturalize intentionality" means to explain it completely in terms of ‑ to reduce it to ‑ nonmental, physical phenomena. Functionalism was one such attempt at naturalizing intentional content, and it has been rejuvenated by being joined to externalist causal theories of reference. The idea behind such views is that semantic content, that is, meanings, cannot be entirely in our heads because what is in our heads is insufficient to determine how language relates to reality. In addition to what is in our heads, "narrow content," we need a set of actual physical causal relations to objects in the world, we need "wide content." These views were originally developed around problems in the philosophy of language (Putnam 1975b), but it is easy to see how they extend to mental contents generally. If the meaning of the sentence "Water is wet" cannot be explained in terms of what is inside the heads of speakers of English, then the belief that water is wet is not a matter solely of what is in their heads either. Ideally one would like an account of intentional content stated solely in terms of causal relations between people, on

[p50] the one hand, and objects and states of affairs in the world, on the other.

 

A rival to the externalist causal attempt to naturalize content, and I believe an even less plausible account, is that intentional contents can be individuated by their Darwinian, biological, teleological function. For example, my desires will have a content referring to water or food iff they function to help me obtain water or food (Millikan 1984).

 

So far no attempt at naturalizing content has produced an explanation (analysis, reduction) of intentional content that is even remotely plausible. Consider the simplest sort of belief. For example, I believe that Flaubert was a better novelist than Balzac. Now, what would an analysis of that content, stated in terms of brute physical causation or Darwinian natural selection, without using any mental terms, look like? It should be no surprise to anyone that these attempts do not even get off the ground.

 

Once again such naturalized conceptions of content are subject to both technical and commonsense objections. The most famous of the technical problems is probably the disjunction problem (Fodor 1987). If a certain concept is caused by a certain sort of object, then how do we account for cases of mistaken identity? If "horse" is caused by horses or by cows that are mistakenly identified as horses, then do we have to say that the analysis of "horse" is disjunctive, that it means either horse or certain sorts of cows?

 

As I write this, naturalistic (externalist, causal) accounts of content are all the rage. They will all fail for reasons that I hope by now are obvious. They will leave out the subjectivity of mental content. By way of technical objections there will be counterexamples, such as the disjunction cases, and the counterexamples will be met with gimmicks ‑ nomological relations and counterfactuals, or so I would predict ‑ but the most you could hope from the gimmicks, even if they were successful in blocking the counterexamples, would be a parallelism between the output of the gimmick and intuitions about mental content. You still would not get at the essence of mental content.

 

[p51]

 

I do not know if anyone has yet made the obvious commonsense objection to the project of naturalizing intentional content, but I hope it is clear from the entire discussion what it will be. In case no one has done it yet, here goes: Any attempt to reduce intentionality to something nonmental will always fail because it leaves out intentionality. Suppose for example that you had a perfect causal externalist account of the belief that water is wet. This account is given by stating a set of causal relations in which a system stands to water and to wetness and these relations are entirely specified without any mental component. The problem is obvious: a system could have all of these relations and still not believe that water is wet. This is just an extension of the Chinese room argument, but the moral it points to is general: You cannot reduce intentional content (or pains or "qualia") to something else, because if you could they would be something else, and they are not something else. The opposite of my view is stated very succinctly by Fodor: "If aboutness is real, it must really be something else" (1987, p. 97). On the contrary, aboutness (i.e., intentionality) is real, and it is not something else.

 

A symptom that something is radically wrong with the project is that the intentional notions are inherently normative. They set standards of truth, rationality, consistency, etc., and there is no way that these standards can be intrinsic to a system consisting entirely of brute, blind, nonintentional causal relations. There is no normative component to billiard ball causation. Darwinian biological attempts at naturalizing content try to avoid this problem by appealing to what they suppose is the inherently teleological, normative character of biological evolution. But this is a very deep mistake. There is nothing normative or teleological about Darwinian evolution. Indeed, Darwin's major contribution was precisely to remove purpose and teleology from evolution, and substitute for it purely natural forms of selection. Darwin's account shows that the apparent teleology of biological processes is an illusion.

 

It is a simple extension of this insight to point out that notions such as "purpose" are never intrinsic to biological organisms (unless of course those organisms themselves have

[p52] conscious intentional states and processes). And even notions like "biological function" are always made relative to an observer who assigns a normative value to the causal processes. There is no factual difference about the heart that corresponds to the difference between saying

 

1. The heart causes the pumping of blood.

 

and saying,

 

2. The function of the heart is to pump blood.

 

But 2 assigns a normative status to the sheer brute causal facts about the heart, and it does this because of our interest in the relation of this fact to a whole lot of other facts, such as our interest in survival. In short, the Darwinian mechanisms and even biological functions themselves are entirely devoid of purpose or teleology. All of the teleological features are entirely in the mind of the observer.11

 

 

IX. The Moral So Far

 

My aim so far in this chapter has been to illustrate a recurring pattern in the history of materialism. This pattern is made graphic in table 2.1. I have been concerned not so much to defend or refute materialism as to examine its vicissitudes in the face of certain commonsense facts about the mind, such as the fact that most of us are, for most of our lives, conscious. What we find in the history of materialism is a recurring tension between the urge to give an account of reality that leaves out any reference to the special features of the mental, such as consciousness and subjectivity, and the need at the same time to account for our "intuitions" about the mind. It is, of course, impossible to do these two things. So there is a series of attempts, almost neurotic in character, to cover over the fact that some crucial element about mental states is being left out. And when it is pointed out that some obvious truth is being denied by the materialist philosophy, the upholders of this view almost invariably resort to certain rhetorical strategies

 

[p53]

 

Table 2.1

The general pattern exhibited by recent materialism.

Theory: Logical behaviorism
Common‑sense objections: Leaves out the mind (superspartan/superactor objections)
Technical objections: 1. Circular; needs desires to explain beliefs, and conversely. 2. Can't do the conditionals. 3. Leaves out causation.

Theory: Type identity theory
Common‑sense objections: Leaves out the mind, or else it leads to property dualism
Technical objections: 1. Neural chauvinism. 2. Leibniz's law. 3. Can't account for mental properties. 4. Modal arguments.

Theory: Token identity theory
Common‑sense objections: Leaves out the mind (absent qualia)
Technical objections: Can't identify the mental features of mental content

Theory: Black box functionalism
Common‑sense objections: Leaves out the mind (absent qualia and spectrum inversion)
Technical objections: Relation of structure and function is unexplained

Theory: Strong AI (Turing machine functionalism)
Common‑sense objections: Leaves out the mind (Chinese room)
Technical objections: Human cognition is nonrepresentational and therefore noncomputational

Theory: Eliminative materialism (rejection of folk psychology)
Common‑sense objections: Denies the existence of the mind
Technical objections: Unfair to folk psychology; defense of folk psychology

Theory: Naturalizing intentionality
Common‑sense objections: Leaves out intentionality
Technical objections: Disjunction problem

 

 

[p54] designed to show that materialism must be right, and that the philosopher who objects to materialism must be endorsing some version of dualism, mysticism, mysteriousness, or general antiscientific bias. But the unconscious motivation for all of this, the motivation that somehow never manages to surface, is the assumption that materialism is necessarily inconsistent with the reality and causal efficacy of consciousness, subjectivity, etc. That is, the basic assumption behind materialism is essentially the Cartesian assumption that materialism implies antimentalism and mentalism implies antimaterialism.

 

There is something immensely depressing about this whole history, because it all seems so pointless and unnecessary. It is all based on the false assumption that the view of reality as entirely physical is inconsistent with the view that the world really contains subjective ("qualitative," "private," "touchy-feely," "immaterial," "nonphysical") conscious states such as thoughts and feelings.

 

The weird feature of this entire discussion is that materialism inherits the worst assumption of dualism. In denying the dualist's claim that there are two kinds of substances in the world, or the property dualist's claim that there are two kinds of properties in the world, materialism inadvertently accepts the categories and the vocabulary of dualism. It accepts the terms in which Descartes set the debate. It accepts, in short, the idea that the vocabulary of the mental and the physical, of material and immaterial, of mind and body, is perfectly adequate as it stands. It accepts the idea that if we think consciousness exists, we are accepting dualism. What I believe ‑ as is obvious from this entire discussion ‑ is that the vocabulary, and the accompanying categories, are the source of our deepest philosophical difficulties. As long as we use words like "materialism," we are almost invariably forced to suppose that they imply something inconsistent with naive mentalism. I have been urging that in this case, one can have one's cake and eat it too. One can be a "thoroughgoing materialist" and not in any way deny the existence of (subjective, internal, intrinsic, often conscious) mental phenomena.

[p55] However, since my use of these terms runs dead counter to over three hundred years of philosophical tradition, it would probably be better to abandon this vocabulary altogether.

 

If one had to describe the deepest motivation for materialism, one might say that it is simply a terror of consciousness. But why should this be so? Why should materialists have a fear of consciousness? Why don't materialists cheerfully embrace consciousness as just another material property among others? Some, in fact, such as Armstrong and Dennett, claim to do so. But they do this by so redefining "consciousness" as to deny its central feature, namely, its subjective quality. The deepest reason for the fear of consciousness is that consciousness has the essentially terrifying feature of subjectivity. Materialists are reluctant to accept that feature because they believe that to accept the existence of subjective consciousness would be inconsistent with their conception of what the world must be like. Many think that, given the discoveries of the physical sciences, a conception of reality that denies the existence of subjectivity is the only one it is possible to have. Again, as with "consciousness," one way to cope is to redefine "subjectivity" so that it no longer means subjectivity but means something objective (for an example, see Lycan 1990a).

 

I believe all of this amounts to a very large mistake, and in chapters 4, 5, and 6, I will examine in some detail the character and the ontological status of consciousness.

 

 

X. The Idols of the Tribe

 

I said earlier in this chapter that I would explain why a certain natural‑sounding question was really incoherent. The question is: How do unintelligent bits of matter produce intelligence? We should first note the form of the question. Why are we not asking the more traditional question: How do unconscious bits of matter produce consciousness? That question seems to me perfectly coherent. It is a question about how the brain works to cause conscious mental states even though the

[p56] individual neurons (or synapses or receptors) in the brain are not themselves conscious. But in the present era, we are reluctant to ask the question in that form because we lack "objective" criteria of consciousness. Consciousness has an ineliminable subjective ontology, so we think it more scientific to rephrase the question as one about intelligence, because we think that for intelligence we have objective, impersonal criteria. But now we immediately encounter a difficulty. If by "intelligence" we mean anything that satisfies the objective third‑person criteria of intelligence, then the question contains a false presupposition. For if intelligence is defined behavioristically, then it is simply not the case that neurons are not intelligent. Neurons, like just about everything else in the world, behave in certain regular, predictable patterns. Furthermore, considered in a certain way, neurons do extremely sophisticated "information processing." They take in a rich set of signals from other neurons at their dendritic synapses; they process this information at their somata and send out information through their axonal synapses to other neurons. If intelligence is to be defined behavioristically, then neurons are pretty intelligent by anybody's standards. In short, if our criteria of intelligence are entirely objective and third‑person ‑ and the whole point of posing the question in this way was to get something that satisfied those conditions ‑ then the question contains a presupposition that on its own terms is false. The question falsely presupposes that the bits do not meet the criteria of intelligence.

 

The answer to the question, not surprisingly, inherits the same ambiguity. There are two different sets of criteria for applying the expression "intelligent behavior." One of these sets consists of third‑person or "objective" criteria that are not necessarily of any psychological interest whatever. But the other set is essentially mental and involves the first‑person point of view. "Intelligent behavior" on the second set of criteria involves thinking, and thinking is essentially a mental process. Now, if we adopt the third‑person criteria for intelligent behavior, then of course computers ‑ not to mention

[p57] pocket calculators, cars, steam shovels, thermostats, and indeed just about everything in the world ‑ engage in intelligent behavior. If we are consistent in adopting the Turing test or some other "objective" criterion for intelligent behavior, then the answers to such questions as "Can unintelligent bits of matter produce intelligent behavior?" and even "How exactly do they do it?" are ludicrously obvious. Any thermostat, pocket calculator, or waterfall produces "intelligent behavior," and we know in each case how it works. Certain artifacts are designed to behave as if they were intelligent, and since everything follows laws of nature, everything will have some description under which it behaves as if it were intelligent. But this sense of "intelligent behavior" is of no psychological relevance at all.

 

In short, we tend to hear both the question and the answer as oscillating between two different poles: (a) How do unconscious bits of matter produce consciousness? (a perfectly good question to which the answer is: in virtue of specific ‑ though largely unknown ‑ neurobiological features of the brain); and (b) How do "unintelligent" (by first‑ or third‑person criteria?) bits of matter produce "intelligent" (by first‑ or third‑person criteria?) behavior? But to the extent that we make the criteria of intelligence third‑person criteria, the question contains a false presupposition, and this is concealed from us because we tend to hear the question on interpretation (a).