On a gray Thursday in Berkeley, Nina Beguš joined our call with an eagerness to discuss her forthcoming book, Artificial Humanities: A Fictional Perspective on Language in AI, and the larger project that runs through her teaching, collaborations, and industry-facing work. The premise is simple and demanding: bring the humanities to the AI workbench, where assumptions are formed and systems are shaped.
How Berkeley became the right place to write
Beguš traced a path that helps explain her urgency. After earning a Ph.D. at Harvard, she moved to the University of Washington’s School of Medicine, drawn by its long tradition of humanistic and ethical approaches to science and technology. She had been working on AI since arriving in the U.S. “about fifteen years ago,” and kept running into practical questions that theory alone couldn’t satisfy.
From there, Beguš joined the Berggruen Institute as part of a convened team of humanists, technologists, and artists, which put her directly “at the table where the technology is being made.” The experience was clarifying: her theoretical concerns belonged in the same room as product decisions. As she consulted for big tech and startups in the Bay Area, first through ToftH and then through her own company InterpretAI, the proximity made her next step inevitable. “I realized humanistic knowledge is very much needed in the industry, and had a huge urge to write this book.”
A postdoc at Berkeley gave her the time to write. The campus “vibe was really good”: people reached out, projects overlapped, and the scale of the Cal network produced cross-pollination with industry. As Beguš noted, the university educates a striking number of the people who go on to work in Silicon Valley.
What “artificial humanities” actually means
Beguš’s definition in the interview is plain: artificial humanities is both the study and practice of using humanistic methods to shape AI, analyze it, critique it, develop it, and use it as a tool for creative or humanistic expression. The order is significant. Much ethics and policy work enters late in the development process, at the end of the pipeline, while artificial humanities starts with basic questions during the building itself: What is being built? Which frames are smuggling in answers? Which assumptions are being mistaken for facts?
The field Beguš describes is not “computers applied to culture.” Applying computational methods to humanistic materials is the direction of digital humanities. Artificial humanities flips the direction, drawing on interpretive methods, narrative analysis, history, and philosophy to inform development decisions. “We should use humanities to address the most pressing issues around AI development.” That position was unusual a decade ago when she coined the term; today it is easier to recognize why it’s needed.
Fiction isn’t ornamental: it is a way of exploring the human condition
When Beguš talks about fiction, she starts with something ordinary and true: people are “hungry for stories.” In labs and offices, teams will stop and listen if a story carries the idea. That hunger is not superfluous; it’s a clue. Fiction is where a culture imagines knowledge, agency, and other minds and perspectives. If those imaginings go unexamined, their metaphors become hidden, embedded design choices.
Beguš returns to the Pygmalion myth to show how an ancient frame still shapes what technologists expect of “made beings.” Contemporary films like Ex Machina and Her portray artificial personhood, and their pull reveals how easily a human-like frame narrows what a team can imagine a system to be. The point is not to banish metaphor but to test the metaphor before it hardens into a feature. That is where literary training becomes practical: notice the frame, name what it reveals, and name what it hides.
Fiction matters for another reason Beguš emphasized: it helps us stretch what we can responsibly imagine. As AI assumes tasks once reserved for humans, “breaking the limits of the imaginable” becomes part of the work. In industry, she has seen fiction serve as more than superficial inspiration: used as an innovation tool, it surfaces consequences and design openings that metrics miss.
Active research threads
Beguš’s curiosity keeps branching out. In the interview she described several active research threads:
- Latent spaces made legible. Explaining and interpreting latent spaces in computational models is both the focus of a new paper in the Antikythera journal, based on the Venice Biennale exhibition of the collaborative piece “Latent Spacecraft,” and an ongoing two-year project with Sorbonne partners. While latent spaces are inherently technical, mathematical constructs, they also carry cultural, social, aesthetic, and political implications, shaping what models can “see,” “imagine,” and ultimately produce. An exhibition on latent spaces will be brought to Berkeley next fall, and a new Venice exhibition is in preparation.
- Cultural diagnostics for AI. Literary studies have built “diagnostic tools” that transfer well into AI. This thread asks how interpretive methods, with their attentiveness to context, genre, and rhetorical moves, can evaluate systems that are increasingly judged on qualitative grounds. A larger shift is underway: quantitative metrics still matter, but more of AI’s impact now demands qualitative evaluation. The humanities and interpretive social sciences are equipped for that turn, naming harms and values that numbers alone can’t capture.
- Interdisciplinary lab collaboration. Beguš is co-writing a project with Eoin Brodie’s group at Lawrence Berkeley National Laboratory. Her role in that group is “more of a philosopher,” offering a philosophy-of-science approach to the scientific work in the lab. The collaboration treats the familiar split between human nature and technology as a problem to think through, not a given. She recalls a memorable moment when, after a session reading Kant on mechanisms and organisms, a student pointed to Brodie and said, “There’s the Newton of the blade of grass.”
- Narration and AI. A new grant with the University of Bergen will build a lexicon for narration and AI and run workshops at Berkeley and at a CS conference, convening humanities scholars and computer scientists who study how stories and AI systems meet. Relatedly, Beguš is part of TEXT – Center for Contemporary Cultures of Text at Aarhus University, where research focuses on co-writing and creativity. To showcase professional writers’ views on AI, Beguš gathered creative writers to reflect on AI as a challenge and opportunity in a public-facing volume, First Encounters with AI: Writers on Writing.
Big picture concerns, and why variety matters
Asked about public worries, Beguš didn’t hesitate: labor and displacement; extractive dynamics between the center and the periphery; cultural and social bias; and the democratic gap between how many people are touched by AI and how few have a say. She added a market critique: if one “helpful chatbot assistant” becomes the single marketable face of AI, that’s less a technical inevitability than a mirror of “market economy and poor imagination.” A handful of dominant actors reduces variety; a healthier field would cultivate many builders and many forms.
Beguš examines a philosophical question: what is language, and how might machines produce it differently from humans? The shock of 2017–18, when large language models expanded into industry and more general use, made the question concrete: “Nobody really expected there is enough in language about language” for machines to both perform language and meta-analyze it, as a result of a sequence-prediction neural architecture and an enormous amount of text. That surprise should open an inquiry.
Beguš connected those questions to work beyond computer science. Animal communication research, once under-supported, has been steadily revising what counts as language-like behavior. At the same time, machines now use language convincingly. Pressure arrives from both sides, nature and technology, “dethroning the human” and forcing more careful definitions. A narrow, human-only picture of language can blind us to possibilities, and to pitfalls.
For Cal alums in industry, Beguš recommended one practice: interrogate metaphors. If a team begins talking about a system as if it “understands” or “wants,” pause and ask what that language buys you, and what it hides. The term “artificial intelligence” itself is a metaphor.
Campus texture: why place matters
In our conversation, Beguš kept returning to Berkeley’s campus culture. She takes part in residential college life at Bowles Hall, frequents shared spaces in interdisciplinary centers, and prizes a density of disciplines that resists hyper-fragmentation. Put people from different backgrounds in the same room often enough, and synthesis follows. Beguš started a research group because “so many people were interested in working in this framework.” Students and collaborators who first came to talk with her began talking to each other, and projects were born. Some work on other ancient myths, such as that of Midas, used by UC Berkeley roboticist Stuart Russell for AI safety; some work on disentangling the liminality of human–technology relations, portrayed in Pierre Huyghe’s work, also featured on the cover of Artificial Humanities; and some use this research to explore better approaches in medicine.
Asked what she cut from the manuscript, Beguš laughed: plenty. Publishers want books that aren’t too long. Sections on medical uses of AI, neurotechnologies, and ethics were trimmed and re-routed into other work. Artificial Humanities is a lifetime project, and new papers and books have grown from its framework. In the paper “Experimental Narratives,” Beguš compared crowdsourced stories with generated ones, outlining cultural and synthetic imaginaries, biases, and narrative skills in both. Her dream project, First Encounters with AI: Writers on Writing, in which fifteen writers reflect on AI’s entrance into the space of writing, is forthcoming.
The writing process surprised her with its joy. Papers can be done in focused sprints; a book demands immersion and gives space for reflection. “Even the painful part” of facing a blank page felt like part of a beautiful process. Don’t expect this to be her last book.
At Cal, good work starts with identifying pressing questions and recognizing essential topics early. Fiction, data, philosophy, and code all share the same air: each processes, stores, and transfers information. UC Berkeley continues to build what it has always built best: an intellectual culture of care and inclusion of many perspectives.
Photo Courtesy of Nina Beguš

