Reading: Minds, Brains, and Programs by John Searle
Minds, Brains, and Programs by John R. Searle
Searle, John R. “Minds, Brains, and Programs.” Readings in Cognitive Science, 1988, pp. 20–31.
Instructions:
Read the following essay. While reading, think about the answers to the questions in the boxes. Click on the tabs above for optional considerations.
Objectives:
- Explain Searle's Chinese Room thought experiment.
- Analyze the arguments put forth by Searle.
- Debate the ideas presented by Searle.
- Demonstrate an understanding of the arguments for and against artificial intelligence.
- Communicate ideas in the video and reading to your classmates.
What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (Artificial Intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, its programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.
I have no objection to the claims of weak AI, at least as far as this article is concerned. My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that an appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. When I hereafter refer to AI, I have in mind the strong version, as expressed by these two claims.
One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test with the following thought experiment. Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view - that is, from the point of view of somebody outside the room in which I am locked - my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view - from the point of view of someone reading my "answers" - the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.
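To make the notion of purely formal, shape-based symbol manipulation concrete, here is a minimal sketch in Python. It is an editorial illustration, not part of Searle's text: the symbols and the rulebook are invented, and Searle does not specify any particular rule format. The point is only that the program pairs input shapes with output shapes; nothing in it represents what any symbol means.

```python
# A minimal sketch of "formal symbol manipulation": the rulebook pairs input
# shapes with output shapes. The symbols and rules below are invented purely
# for illustration; nothing in the program encodes what any symbol means.

# Hypothetical rulebook: "when you see these squiggles, hand back those squoggles."
RULEBOOK = {
    "你好吗": "我很好",              # the program has no idea this is a greeting and a reply
    "故事讲了什么": "讲了一家餐馆",  # ...or that this concerns a story about a restaurant
}

def answer(question_symbols: str) -> str:
    """Return the output symbols the rulebook pairs with the input symbols.

    The lookup is driven entirely by the shape (string identity) of the input;
    there is no semantics anywhere in this function.
    """
    return RULEBOOK.get(question_symbols, "不知道")  # the default reply is also just a shape

if __name__ == "__main__":
    print(answer("你好吗"))  # prints the paired shape, with zero understanding
```

From outside the room, the replies may look competent; inside, there is only string matching.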
Comprehension Questions:
- What is the difference between strong and weak AI?
- To which type of AI does Searle direct his objections?
- What does Searle mean by "formal symbols"?
Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.
As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, a computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.
As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same - or perhaps more of the same - as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program. On the basis of these two assumptions we assume that even if the program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested - though certainly not demonstrated - by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements.
As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding. Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles - that is not the point at all. Rather, whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all.
Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean. But in what does this consist and why couldn't we give it to a machine, whatever it is? I will return to this question later, but first I want to continue with the example.
I have had occasion to present this example to several workers in artificial intelligence, and, interestingly, they do not seem to agree on what the proper reply to it is. I get a surprising variety of replies, and in what follows I will consider the most common of these (specified along with their geographic origins).
But first I want to block some common misunderstandings about "understanding": in many of these discussions one finds a lot of fancy footwork about the word "understanding." My critics point out that there are many different degrees of understanding; that "understanding" is not a simple two place predicate; that there are even different kinds and levels of understanding, and often the law of excluded middle doesn't even apply in a straightforward way to statements of the form "x understands y"; that in many cases it is a matter for decision and not a simple matter of fact whether x understands y; and so on. To all of these points I want to say: of course, of course. But they have nothing to do with the points at issue. There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument.
I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business. We often attribute "understanding" and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. We say, "The door knows when to open because of its photoelectric cell," "The adding machine knows how (understands how, is able) to do addition and subtraction but not division," and "The thermostat perceives changes in the temperature." The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door "understands instructions" from its photoelectric cell is not at all the sense in which I understand English. If the sense in which programmed computers understand stories is supposed to be the metaphorical sense in which the door understands, and not the sense in which I understand English, the issue would not be worth discussing. But Newell and Simon (1963) write that the kind of cognition they claim for computers is exactly the same as for human beings. I like the straightforwardness of this claim, and it is the sort of claim I will be considering. I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.
Now to the replies:
The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."
My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still "the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.
So there are really two subsystems in the man; one understands English, the other Chinese, and "it's just that the two systems have little to do with each other." But, I want to reply, not only do they have little to do with each other, they are not even remotely alike. The subsystem that understands English (assuming we allow ourselves to talk in this jargon of "subsystems" for a moment) knows that the stories are about restaurants and eating hamburgers, he knows that he is being asked questions about restaurants and that he is answering questions as best he can by making various inferences from the content of the story, and so on. But the Chinese system knows none of this. Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle." All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end. The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese. And it doesn't meet that argument to postulate subsystems within the man, because the subsystems are no better off than the man was in the first place; they still don't have anything even remotely like what the English-speaking man (or subsystem) has. Indeed, in the case as described, the Chinese subsystem is simply a part of the English subsystem, a part that engages in meaningless symbol manipulation according to rules in English.
Comprehension Questions:
- What does Searle mean by the word "understanding"?
- What does it mean to internalize all the elements of the Chinese Room?
The Robot Reply (Yale). "Suppose we wrote a different kind of program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like. The robot would, for example, have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would have genuine understanding and other mental states."
The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980]. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to the original program. To see this, notice that the same thought experiment applies to the robot case. Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.
The brain simulator reply (Berkeley and M.I.T.). "Suppose we design a program that doesn't represent information that we have about the world, such as the information in scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"
Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI. However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.
Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
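The water-pipe variant can be made concrete with a toy program. The sketch below is hypothetical: the "wiring" and thresholds are invented, and real brains are vastly more complicated. It simulates only the formal structure the brain simulator reply appeals to, namely which units "fire" given which inputs; on Searle's argument, capturing that formal structure is not the same as having the brain's causal powers.

```python
# A toy simulation of the *formal* structure of a network of "synapses":
# which units fire, given which inputs. The wiring and thresholds are invented
# for illustration; the same formal structure could equally be realized by
# water pipes and valves, as in Searle's variant of the thought experiment.

# Hypothetical wiring: unit -> list of (upstream unit, connection weight).
WIRING = {
    "u1": [("in_a", 1.0), ("in_b", 1.0)],
    "u2": [("in_b", 1.0)],
    "out": [("u1", 0.6), ("u2", 0.6)],
}
THRESHOLD = 1.0

def step(activations: dict) -> dict:
    """Compute one round of 'firings' from the current activations."""
    new = dict(activations)
    for unit, inputs in WIRING.items():
        total = sum(weight * activations.get(src, 0.0) for src, weight in inputs)
        new[unit] = 1.0 if total >= THRESHOLD else 0.0
    return new

if __name__ == "__main__":
    state = {"in_a": 1.0, "in_b": 1.0}   # formal "input symbols"
    state = step(step(state))            # propagate the firings through the net
    print(state["out"])                  # a formal "output symbol": 1.0
```

Whether such a simulation runs on silicon, on paper, or in plumbing makes no difference to its formal description, which is exactly the point Searle presses against the brain simulator reply.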
The other minds reply (Yale). "How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."
This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.
Let us now return to the question I promised I would try to answer: granted that in my original example I understand the English and I do not understand the Chinese, and granted therefore that the machine doesn't understand either English or Chinese, still there must be something about me that makes it the case that I understand English and a corresponding something lacking in me that makes it the case that I fail to understand Chinese. Now why couldn't we give those somethings, whatever they are, to a machine?
I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program. It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality. Perhaps other physical and chemical processes could produce exactly these effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll.
But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. And any other causal properties that particular realizations of the formal model have, are irrelevant to the formal model because we can always put the same formal model in a different realization where those causal properties are obviously absent. Even if, by some miracle, Chinese speakers exactly realize the program, we can put the same program in English speakers, water pipes, or computers, none of which understand Chinese, the program notwithstanding.
What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences. All the arguments for the strong version of artificial intelligence that I have seen insist on drawing an outline around the shadows cast by cognition and then claiming that the shadows are the real thing.
By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question and answer fashion, and I begin with that old chestnut of a question:
"Could a machine think?"
The answer is, obviously, yes. We are precisely such machines.
''Yes, but could an artifact, a man-made machine, think?"
Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.
"OK, but could a digital computer think?"
If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
"Why not?"
Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
In defense of this dualism the hope is often expressed that the brain is a digital computer (early computers, by the way, were often called "electronic brains"). But that is no help. Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.
Basic Questions (Optional)
This is an optional, non-graded, non-credit area. Explore the following for a better understanding of philosophy. Answers to these questions can be found in the video and in the reading.
- Briefly describe Searle's Chinese Room thought experiment.
- How would you describe the difference between manipulating Chinese symbols according to formal rules and responding to messages in English?
- What is intentionality?
- What are the four objections to Searle's thought experiment?
Advanced Questions (Optional)
This is an optional, non-graded, non-credit area. Explore the following for a better understanding of philosophy.
- Think about the ways in which cognitive processing (for example, problem solving, decision making, driving a car) occurs at the subconscious level. To what degree is conscious awareness present or absent in activities that are considered to be intelligent?
- Suppose you're lying on your deathbed and your doctor offers you the possibility of downloading your mind into a computer. Would you do it?
- What things can machines never do? What things do people claim machines can never do?
- Taking into account Searle's view of AI, do you think that computers could have a mind or be conscious like human beings? Why or why not?
Additional Considerations (Optional)
This is an optional, non-graded, non-credit area. Explore the following for a better understanding of philosophy.
Why People Think Computers Can't by Marvin Minsky
Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening.
Indeed, when computers first appeared, most of their designers intended them to do nothing but huge, mindless computations. That's why the things were called "computers". Yet even then, a few pioneers -- especially Alan Turing -- envisioned what's now called "Artificial Intelligence" - or "AI". They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.
Today, with robots everywhere in industry and movie films, most people think AI has gone much further than it has. Yet still, "computer experts" say machines will never really think. If so, how could they be so smart, and yet so dumb?
Most people assume that computers can't be conscious, or self-aware; at best they can only simulate the appearance of this. Of course, this assumes that we, as humans, are self-aware. But are we? I think not. I know that sounds ridiculous, so let me explain.
If by awareness we mean knowing what is in our minds, then, as every clinical psychologist knows, people are only very slightly self-aware, and most of what they think about themselves is guess-work. We seem to build up networks of theories about what is in our minds, and we mistake these apparent visions for what's really going on. To put it bluntly, most of what our "consciousness" reveals to us is just "made up". Now, I don't mean that we're not aware of sounds and sights, or even of some parts of thoughts. I'm only saying that we're not aware of much of what goes on inside our minds.
When people talk, the physics is quite clear: our voices shake the air; this makes your ear-drums move -- and then computers in your head convert those waves into constituents of words. These somehow then turn into strings of symbols representing words, so now there's somewhere in your head that "represents" a sentence. What happens next?
When light excites your retinas, this causes events in your brain that correspond to texture, edges, color patches, and the like. Then these, in turn, are somehow fused to "represent" a shape or outline of a thing. What happens then?
We all comprehend these simple ideas. But there remains a hard problem, still. What entity or mechanism carries on from there? We're used to saying simply, that's the "self". What's wrong with that idea? Our standard concept of the self is that deep inside each mind resides a special, central "self" that does the real mental work for us, a little person deep down there to hear and see and understand what's going on. Call this the "Single Agent" theory. It isn't hard to see why every culture gets attached to this idea. No matter how ridiculous it may seem, scientifically, it underlies all principles of law, work, and morality. Without it, all our canons of responsibility would fall, of blame or virtue, right or wrong. What use would solving problems be, without that myth; how could we have societies at all?
The trouble is, we cannot build good theories of the mind that way. In every field, as Scientists we're always forced to recognize that what we see as single things - like rocks or clouds, or even minds - must sometimes be described as made of other kinds of things. We'll have to understand that Self, itself, is not a single thing.
Then, is it possible to program a computer to be self-conscious? People usually expect the answer to be "no". What if we answered that machines are capable, in principle, of even more and better consciousness than people have?
I think this could be done by providing machines with ways to examine their own mechanisms while they are working. In principle, at least, this seems possible; we already have some simple AI programs that can understand a little about how some simpler programs work. (There is a technical problem about the program being fast enough to keep up with itself, but that can be solved by keeping records.) The trouble is, we still know far too little, yet, to make programs with enough common sense to understand even how today's simple AI problem-solving programs work. But once we learn to make machines that are smart enough to understand such things, I see no special problem in giving them the "self-insight" they would need to understand, change, and improve themselves.
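As an editorial aside, Minsky's "keeping records" suggestion can be sketched in a few lines. The example below is hypothetical and deliberately trivial: a solver appends a trace entry for each step it takes, and a separate routine inspects the trace afterwards, which sidesteps the problem of a program keeping up with itself in real time.

```python
# A hypothetical, deliberately trivial sketch of the "keeping records" idea:
# the solver logs each step it takes, and a separate routine examines that log
# afterwards instead of trying to watch the solver in real time.

trace = []   # the record the program keeps about its own activity

def solve(n):
    """'Solve' a toy problem (summing 1..n), recording each step as it goes."""
    total = 0
    for i in range(1, n + 1):
        total += i
        trace.append(f"added {i}, running total {total}")
    return total

def reflect():
    """Examine the recorded steps and report something about how the solving went."""
    return f"took {len(trace)} steps; last step was: {trace[-1]}"

if __name__ == "__main__":
    print(solve(5))     # 15
    print(reflect())    # "took 5 steps; last step was: added 5, running total 15"
```

The gap between this sort of bookkeeping and the common-sense "self-insight" Minsky has in mind is, of course, exactly the part we do not yet know how to build.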
This might not be so wise to do. But what if it turns out that the only way to make computers much smarter is to make them more self-conscious? For example, it might turn out to be too risky to assign a robot to undertake some important, long-range task, without some "insight" about its own abilities. If we don't want it to start projects it can't finish, we'd better have it know what it can do. If we want it versatile enough to solve new kinds of problems, it may need to be able to understand how it already solves easier problems. In other words, it may turn out that any really robust problem solver will need to understand itself enough to change itself. Then, if that goes on long enough, why can't those artificial creatures reach for richer mental lives than people have? Our own evolution must have constrained the wiring of our brains in many ways. But we now have more options, since we can wire machines in any way we wish.
It will be a long time before we learn enough about common sense reasoning to make machines as smart as people are. Today, we already know quite a lot about making useful, specialized, "expert" systems. We still don't know how to make them able to improve themselves in interesting ways. But when we answer such questions, then we'll have to face an even stranger one. When we learn how, then should we build machines that might be somehow "better" than ourselves? We're lucky that we have to leave that choice to future generations. I'm sure they won't want to build the things that well unless they find good reasons to.
Just as Evolution changed man's view of Life, AI will change mind's view of Mind. As we find more ways to make machines behave more sensibly, we'll also learn more about our mental processes. In its course, we will find new ways to think about "thinking" and about "feeling". Our view of them will change from opaque mysteries to complex yet still comprehensible webs of ways to represent and use ideas. Then those ideas, in turn, will lead to new machines, and those, in turn, will give us new ideas. No one can tell where that will lead and only one thing's sure right now: there's something wrong with any claim to know, today, of any basic differences between the minds of men and those of possible machines.
Minsky, Marvin. "Why People Think Computers Can't." AI Magazine, vol. 3, no. 4, Fall 1982.
Student Lounge Q & A
This is the place to ask and answer questions about the course and about technical issues. This is a lounge so anyone can ask or answer questions. Remember, you are part of a community of learners. As such, you are expected to help each other out as the need arises.
Q & A about Intro. to Philosophy
Ask and answer questions about the course here.
Help each other out with technical issues here.