
Five Questions for David Patterson

July 12, 2018
by Pat Joseph

1. The Turing Award is often called the Nobel Prize of Computing. Counting faculty and alumni, Berkeley claims more Turing laureates than almost any other university in the world. That surprises a lot of people. Should it?

Let’s just say our competitors aren’t burdened with an overdeveloped case of humility.

If we look at where and when the research was done, the work behind 7 of the 51 Turing Awards given so far was done at Berkeley in the 1970s and 1980s. No other single group has more than 3! As a Stanford CS professor remarked, you can make the case that the greatest team of computer researchers ever assembled at one place and time was at Berkeley in the 1980s.

2. Since retiring in 2016, you have consulted for Google, developing chips for artificial intelligence and machine learning. A number of high-profile figures in science and tech—including the late physicist Stephen Hawking, Sun Microsystems’ Bill Joy, and Tesla’s Elon Musk—have issued dire warnings about the dangers of AI. Do you share their sense of alarm?

If the fear is of realizing Skynet of Terminator movie fame, then I think that is a very long way off. But I have an AI colleague who says that even if it’s a problem for the end of this century, why not start laying the groundwork now, so that safeguards are ready when we get close? An example is Asimov’s Three Laws of Robotics.

If the fear is of the impact on white-collar jobs, then I share those concerns. Since people are working on machine learning around the world, we can’t stop progress even if it were a good idea to try. I hope people from many disciplines would join forces to address this looming issue, ideally at an institution like Berkeley.

3. Your Turing Award, which you share with your friend and collaborator (and former Stanford president) John Hennessy, recognizes the creation of so-called RISC processors, which have long since become the industry standard. What made RISC revolutionary?

The acronym stands for Reduced Instruction Set Computer. Software has a vocabulary it uses to talk to hardware, called an “instruction set.” The prevailing wisdom for large computers was that the instruction set should consist of a large number of “five-dollar” polysyllabic words (“complex instructions”). Hennessy and I thought that was the wrong way to go for microprocessors, and we argued instead for a small number of monosyllabic words (“reduced instructions”). The question was how many extra reduced instructions would software need, and how much faster could a RISC read those simple instructions?

It turned out that RISC needed only 25 percent more instructions, but it could read instructions five to six times faster, so the net was RISC was about four times faster.
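The arithmetic behind that “about four times faster” claim can be sketched in a few lines. The 25 percent instruction overhead and the five-to-six-times speed advantage are Patterson’s figures from the interview; taking 5.5 as the midpoint of that range is my assumption.

```python
# Back-of-the-envelope check of the RISC speedup figures quoted above.
# Figures from the interview; the 5.5x midpoint is an assumption.

risc_instruction_ratio = 1.25    # RISC needs ~25% more instructions than CISC
per_instruction_speedup = 5.5    # RISC executes each instruction 5-6x faster

# Net speedup = (speed advantage per instruction) / (extra instructions needed)
net_speedup = per_instruction_speedup / risc_instruction_ratio

print(round(net_speedup, 1))  # 4.4 -- "about four times faster"
```

Even at the low end of the range (5x per instruction), the net comes out to 5 / 1.25 = 4x, which matches the claim.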

4. Computer Architecture: A Quantitative Approach, the classic textbook you coauthored with Hennessy, is now in its sixth edition. In a field defined by rapid change, what makes it so enduring?

The quantitative approach to designing computers is still much better than the more qualitative approach of other textbooks. John and I also take a new edition seriously, touching almost every page and significantly changing half of them. For example, the latest edition has a new chapter on machine-learning hardware, helping to keep the book at the forefront.

5. In addition to being an eminent computer scientist, you’re also a former wrestler and powerlifting champion. Have to ask: How much do you bench press?

My joke is that I’m the answer to the question, “Who is the strongest Turing laureate?”

On my 50th birthday, I benched 325 pounds. I weighed about 30 pounds less on my 65th birthday—which was good for my heart but not for my strength—but I still pressed 280 pounds.

I’ve proposed that the best weightlifting metric is to multiply the weight lifted by your age, called “pound years.” Surprisingly, the Olympic committee is not yet convinced of its obvious superiority.
