When ChatGPT was released last December, its capabilities were startling and alarming, capturing our collective imagination while stoking fears about what the future of artificial intelligence (AI) might hold for humankind. Now, as we grapple with AI’s perils and possibilities, I have never been more grateful for all that Berkeley brings to complicated, existential challenges and opportunities.
For many, AI’s newfound attributes seemed to arrive with little warning. Yet members of our faculty and our graduate students have, for many years, been at the forefront of the technology. ChatGPT’s chief engineer, John Schulman, received his doctorate from Berkeley in 2016, and we are, today, recognized as the world’s preeminent center for AI research, teaching, and learning. That is due not only to the excellence of our computer scientists and engineers, but also to our ability and desire to transcend the traditional borders between academic disciplines in order to explore AI’s every aspect. The societal stakes are high, and Berkeley is uniquely able to go all-in with both academic excellence and an unwavering focus on the public interest.
Take, for example, computer science Professor Stuart Russell. Renowned for his foundational work on AI, he is the coauthor of the field’s primary textbook, Artificial Intelligence: A Modern Approach, first published in 1995 and now, in its fourth edition, used in more than 1,500 universities in 135 countries. Yet Russell’s concerns and interests extend well beyond his hard-science areas of expertise, which include machine learning, probabilistic reasoning, and long-range decision making. Long concerned with ensuring that scientific advances are consistent with fundamental human interests, he founded our Center for Human-Compatible Artificial Intelligence in 2016. There, the computer sciences and social sciences are being marshaled so that AI will be developed and applied in ways that are always controlled by and beneficial for humans.
“ChatGPT is just a taste of what’s to come. Real AI is still on the horizon,” Russell told me. “But we have built something we don’t really understand at all, and there is now an urgent need for regulations about the properties and protections these systems must have before they are released into the wild. While there are literally trillions of dollars of market capital pushing this technology forward, I believe Berkeley has the human capital necessary to steer the global conversation and ensure that humans remain in control.”
Engineering Professor Ken Goldberg is another pioneer and chairs the Berkeley AI Research Lab with more than 65 affiliated faculty. With research interests in art, robotics, and social media, he, too, is a polymath with a public perspective. Responding to the recent explosion of interest, Goldberg curated and hosted a public lecture series featuring Berkeley faculty exploring everything from AI’s economic and legal aspects to the sensory-motor challenges that could impede AI’s application in the real world. The talks, available on our website, are must-see viewing.
Goldberg notes that despite the name of ChatGPT’s corporate parent—OpenAI—the software’s elements are proprietary and shrouded in secrecy. With that in mind, a group of Berkeley faculty members recently released Koala, a chatbot of their own invention with abilities similar to ChatGPT’s but with at least one essential difference: Koala is what’s known as an open-source model, designed to encourage collaboration by virtue of its accessibility and transparency.
“Open access and the public interest are core values at Berkeley,” Goldberg told me. “Society now faces profound questions, such as: Is the ability to reason, to be creative no longer uniquely human? How will this revolutionary breakthrough shift our place and purpose in the universe? This is the time for changemakers who challenge conventional wisdom in support of the greater good.”
Goldberg and Russell are not alone. I share their excitement and concerns. And as a professor of literature, I, in turn, am not alone among my colleagues in the arts and humanities. There, too, is a rising tide of academic activity related to AI’s ability to produce more than passable prose and works of art, not to mention the intellectual pull of a whole host of cultural, sociological, philosophical, and ethical challenges and conundrums.
As it happens, I have long had an interest in literature that gives non-human characters human agency. Novelists are often the first to venture into the future and have been thinking about the rights of, and our responsibility for, proto-human figures ever since Frankenstein. I’ve been drawn in as a scholar, an administrator, and a human, and I am awed by what I’m seeing as our faculty and students rise to meet the multidisciplinary demands of this moment. I am grateful that the fates conspired to bring the Division of Computing, Data Science, and Society into existence in 2019; our newest academic unit could not be more perfectly positioned to bring real smarts to artificial intelligence.
It is shaping up to be another moment in the sun for Berkeley. Fiat Lux.