It’s an unseasonably warm spring day on campus and you’ve got a hankering for an ice cream cone. Will you pick chocolate or something more exotic? Standing indecisively, you look up to see a crazy little dog named Russell catching a ball in midair. Russell’s well known around Berkeley. He’s a robot and he fetches like nobody’s business. You marvel at his abilities and then go for chocolate. Which is more likely: that you possess free will and used it to make your ice cream choice? Or that a dog like Russell could exist?

Philosopher John Searle has thought a lot about the nature of consciousness and the prospects for the kind of artificial intelligence that would produce a conscious robot. For Searle, questions about human consciousness and thinking machines are intimately intertwined. Author of 17 books, including his most recent, Freedom and Neurobiology: Reflections on Free Will, Language, and Political Power, Searle is the Willis S. and Marion Slusser Professor of Philosophy. He has been at UC Berkeley since 1959. Although he grapples daily with questions such as “What is the nature of causation?” and “How do brains cause minds?” he can discuss his process as though he’s talking to his neighbor across the back fence.

I met Searle in his office, where a poster of his downhill-ski racing days while at Oxford hangs above a bulging bookshelf.
Q: In Freedom and Neurobiology as well as some of your other work, you borrow from the natural sciences to further your philosophical inquiries. Why?
Searle: I don’t make a sharp distinction between philosophy and other disciplines, and so when I am working on a problem I am very opportunistic. My general strategy is to use any weapon that works when tackling a philosophical problem. There are many puzzling philosophical questions that have related issues in neurobiology. So I learn what neurobiology I can. Some philosophical problems have scientific solutions, but many do not. For example, questions about the good society and leading an ethical life are beyond the scope of the natural sciences.
I think some of the problems of consciousness will have neurobiological solutions. We don’t know the solutions yet, but we have made progress on several questions: How do neurobiological processes in the brain cause conscious experiences? How are those conscious experiences realized in the brain—that is, where are they exactly in the brain?
One of the tasks of philosophical analysis is to get these questions into a good enough shape so that they can be solved using scientific methods. To some extent that has already happened. But it took a long time because many neurobiologists had made a philosophical mistake. They thought that consciousness was not a scientific problem at all, but rather the domain of philosophers and theologians.
An even worse mistake was the theory that consciousness is just a computer program, which has nothing to do with neurobiology.
Q: This isn’t news to you, but people get really exercised about ideas. Philosophical debate is almost a contact sport. In fact, you’ve been referred to as a “philosophical bruiser.”
Searle: Philosophy is about issues that matter desperately to people. Years ago I wrote a refutation that made a lot of people mad. I refuted what I call “Strong Artificial Intelligence.” This is the idea that the brain is just a digital computer and that consciousness is just a computer program running in the brain. You can refute this Strong AI thesis in about two minutes.
Here is how it goes. Let’s get a watch so you can time me. I don’t speak Chinese. I can’t understand a word. It just looks like squiggles to me. Now, imagine that I am locked in a room and people give me questions in Chinese. I go through the steps of a computer program written in English to see what I’m supposed to do in order to answer the questions. I respond to the questions with the appropriate Chinese squiggles. I give the right answers. I do what a computer does. But I still don’t understand a word of Chinese. And the conclusion is that if I don’t understand Chinese solely on the basis of implementing the Chinese-understanding program, then neither does any other digital computer, solely on that basis, because the computer has nothing that I don’t have.
Computation by itself isn’t the same as thinking because computation is defined in terms of syntax, specifically the manipulation of symbols, which are usually thought of as 0s and 1s but could be Chinese squiggles or anything else. But real understanding is more than just syntax; it is also semantics. To understand something, you have to have meaningful semantic content.
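Searle’s point — that a system can give the right answers by rule-following alone, with no understanding — can be sketched as a toy program. Everything here is invented for illustration: the rule table, the Chinese strings, and the function name are not part of Searle’s argument, which of course concerns a far richer program. The sketch only shows that symbol-to-symbol lookup never involves meaning.

```python
# A toy "Chinese Room": answers arrive by pure symbol manipulation
# (table lookup), and semantics never enters at any step.
# The rule book below is a made-up stand-in for Searle's program.

RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "天空是什么颜色": "蓝色",    # "What color is the sky?" -> "Blue"
}

def room(question: str) -> str:
    """Return whatever squiggles the rules dictate for the input squiggles."""
    return RULE_BOOK.get(question, "不知道")  # default: "I don't know"

print(room("你好吗"))  # a "right answer," produced without understanding
```

The program passes its tiny test in exactly the sense the man in the room does: by syntax alone. Scaling the table up changes nothing about where the meaning is (nowhere).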
Q: You dispensed with the argument pretty quickly, but is there any utility for you in engaging in these sorts of debates, even if you find the arguments on the other side weak?
Searle: The beauty of arguing with people about artificial intelligence is that its adherents are committed to rationality. They may cheat like crazy in actual argument, but they recognize that there are steps to the argument, and they recognize that there is a distinction between valid and invalid arguments.
In philosophy, part of your task is to make sure that the truth has a chance of prevailing. That means you’ve got to try to refute obvious falsehoods. Now, it’s an obvious falsehood to say there’s nothing to my mind except 0s and 1s. I felt I had to refute that. Though I don’t think it’s the most important thing I’ve done, the Chinese Room Argument is probably my best-known work.
Q: Is this leap, from syntax to semantics and from computation to meaning, the same as the problem of programming common sense into a computer? Why is common sense so hard to replicate artificially?
Searle: This is a different but also interesting problem. The assumption behind traditional AI is that all our thinking is done the way a digital computer operates—by a linear manipulation of symbols according to an algorithm. But there is a lot of thinking that isn’t done that way. The assumption that these AI guys make is that all our reasoning is done by mathematical algorithms. It’s not. A lot of thinking and acting are done using what I call “background abilities.” You just have certain skills.
Here’s my favorite example: My dog Russell was really good at catching tennis balls bounced off walls. Suppose you’re going to build a robot dog. If you use traditional AI you would have to program the robot—this is what they think Russell did—so that it would compute the trajectory of the ball by solving a set of very complicated equations. Do you think Russell did that? I don’t. He did what I would do. He consciously tried to figure out where that ball was going and then put his mouth there. He developed a skill.
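The “traditional AI” picture Searle rejects — the dog explicitly solving the equations of motion to predict the ball — would look something like the sketch below. All the specifics are assumptions for illustration: the function names, the numbers, and the simplification to free flight with no wall bounce or air resistance. Searle’s claim is precisely that Russell does nothing like this.

```python
import math

# What "traditional AI" imagines the dog computes: an explicit solution
# of the projectile equations, predicting where the ball will be.
# Hypothetical values throughout; wall bounces and drag are ignored.

G = 9.81  # gravitational acceleration, m/s^2

def time_to_height(y0: float, vy0: float, y_target: float) -> float:
    """Solve y0 + vy0*t - 0.5*G*t**2 = y_target for the later (positive) root."""
    a, b, c = -0.5 * G, vy0, y0 - y_target
    disc = b * b - 4 * a * c
    return (-b - math.sqrt(disc)) / (2 * a)

def intercept_point(x0: float, y0: float, vx0: float, vy0: float,
                    mouth_height: float = 0.3) -> float:
    """Horizontal position where the ball descends to mouth height."""
    t = time_to_height(y0, vy0, mouth_height)
    return x0 + vx0 * t

# Ball released 1.5 m up, moving 5 m/s horizontally and 2 m/s upward:
print(round(intercept_point(0.0, 1.5, 5.0, 2.0), 2))
```

Even this stripped-down version demands algebra the dog plainly never does — which is the force of Searle’s “background abilities” point: the skill is real, the computation is the theorist’s projection.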
Q: So is it impossible to build a Russell robot?
Searle: It is not impossible to build a robot that behaves like Russell. The point is that if you want to build a robot that has conscious experiences like Russell, you have to build a conscious robot and we don’t know how to do that. If the definition of a machine is “a physical system capable of performing certain functions,” then we are machines. There’s no reason in principle why you couldn’t build an artificial human or an artificial dog. We haven’t the faintest idea how to go about doing it, but there’s no reason you couldn’t build a thinking machine out of nonbiological materials. My rejection of Strong Artificial Intelligence is not a rejection of the possibility of creating a conscious machine. I am a conscious machine. So are you. The question is whether computation as standardly defined is sufficient for having consciousness. It’s not.
We can do mathematics, a lot of equation solving. But we should not then suppose that the way we solve equations on a machine is the way we catch a ball, drive our car, eat, or make love. When we do those things we’re not just doing computations.
If you’re going to create consciousness, you have to be able to duplicate and not just simulate the causal powers of the brain. That’s a different business altogether. We’re not yet in the business of creating consciousness because we don’t know how the brain does it.
Q: Let’s dig into this problem of consciousness. We may not possess one of the aspects of consciousness that, at least to me, helps define what it is to be human: free will. You have raised the question of free will and suggested two hypotheses, one according to which free will exists, and one according to which it doesn’t. If I’m not mistaken, you first had to refute the 17th-century mind/body dualism of Descartes.
Searle: Descartes thought it obvious that we are conscious, that our minds are distinct from our bodies, and that our minds are free. So we have free will.
The problem with Descartes’s dualism is that nobody ever succeeded in making any sense of how there could be causal relations between the mind and the body if they are two different kinds of substances in two different metaphysical realms.
Q: But do you doubt that consciousness can move bodies?

Searle: I decide to raise my arm and the damn thing goes up. Mental causation is just not a problem for me. However, I also know that anything that can move my body has to cause certain neurobiological changes. We know that one and the same event must be both a conscious decision—the decision to raise my arm—and it must also have biochemical features. That’s just how nature works.
Q: Did it take science to resolve Descartes’s dualism of mind and body?
Searle: I’d say that most scientists don’t give a damn about this problem and would prefer not to think about it. But I believe that in the long run my views will be substantiated, when we have an adequate scientific account of how the brain works to cause consciousness and how consciousness is realized in the brain. Already there are some first-rate neurobiologists doing this sort of research.
We are now at the point where the question of the relations of the mind and the body can stop being philosophical questions and can be solved scientifically.
Q: It’s not a proper word, but is it fair to say that a lot of the neurobiologists working on this problem are “anti-free-willists”? From this, together with their findings about the causal relationship between the brain and consciousness, are they concluding that rationality and free will are all biologically determined?
Searle: A lot of them are. You can see why, if you’re in the hard sciences, you would think free will must be an illusion. Maybe they’re right. The problem is that the assumption of free will is not something we can do without.
Q: Wait a minute. You’re saying that free will may well be an illusion, but in order to live we have to assume it’s not?
Searle: We cannot get up in the morning, we cannot get along in life without the assumption of free will. If you are in a restaurant and you are given a choice between steak and chicken, you can’t say, “I’m a determinist so I will just wait and see what I decide” because the refusal to exercise free choice is intelligible to you only as an exercise of free choice. But the assumption of free will may be false. If it’s false, evolution has played the most massive practical joke in the 15-billion-year history of the universe because rational decision-making is very expensive to us, just in terms of how much blood flow to the brain it demands. And we put in an awful lot of effort raising our young so they can make better rational decisions.
The notion that free will is an illusion is based on the assumption that for any action we perform, the set of causes immediately prior to that action is sufficient to fix that action and absolutely no other. In other words, for any event that occurs, the events prior to it are sufficient to determine it. I can’t prove to you that this deterministic view is false. It’s an empirical question.
Q: But what do you think?
Searle: If forced to choose today on the basis of the evidence we have, we would choose the hypothesis that free will is an illusion.
Q: It’s a highly disturbing hypothesis, don’t you think?
Searle: It is! There are some problems in philosophy I can solve, but I can’t solve this one. I don’t know the answer. But it is certainly possible that free will is an illusion.
Q: You’ve presented another possibility in Freedom and Neurobiology. What is it?
Searle: This one is based on quantum mechanics. With quantum mechanics we’ve got a revolutionary conception of reality that goes against tradition. It says that at the most fundamental level there is an element of indeterminacy, of randomness.
Q: Even so, randomness is not equal to freedom, so how does free will come in here?
Searle: You’re right. I always thought that the invocation of quantum mechanics in the discussion of free will was hopelessly confused, because randomness is not the same as freedom. But then it occurred to me that, strictly speaking, I was committing the fallacy of composition. I was assuming that because events are random at the micro level, they would therefore have to be random at the macro level. That may not be true. There is, at least logically, another possibility: a quantum mechanical explanation of consciousness, in which conscious decision-making would be indeterminate but nonrandom.
Q: Please talk more about this, because it’s the first thing you’ve said that gives me hope that free will isn’t an illusion.
Searle: Well, do I think it’s likely? No. When philosophers talk about quantum mechanics, it’s usually hot air. I can’t tell you how much sloppiness there is out there in this regard. But is free will logically possible given what we know about the world? I can argue that it’s possible under the assumption I just described. That’s the best I can do. But it just sounds crazy to me.
If we try to look at ourselves from outside, just as parts of the natural world, like trees and stones, then determinism seems obvious. It just seems like we are physical systems like any other, as determined as any other. The brain is as determined as the liver, for god’s sake.
But if you look at the world from inside, from the point of view of our own consciousness, you cannot act, you cannot reason, except on the assumption of freedom. The assumption of freedom is built into the very structure of your thought. That doesn’t mean it’s true. But if it’s false, all the same, when you walk out that door you’ll behave as if you had free will. You can’t avoid it.
Q: Did it upset you on some level, these conclusions you’ve drawn about free will?
Searle: Yes. Philosophical ideas often upset me. I was very sorry when I finally had to come to the conclusion that God doesn’t exist, for example. That really upset me. I was a teenager when I came to that conclusion and that really hurt. But you have to follow the ideas where they go.
I just try to go absolutely, ruthlessly through the steps in the argument, and I try not to give up. I just keep pushing the logical conclusions and I try to not be frightened about where it leads. As Wittgenstein once said, “When I am afraid of the truth, as I am now, it is not the whole truth that I am afraid of.” Just keep going until you get the whole truth.