
Automatic Writing

August 30, 2011
by Jyoti Madhusoodanan
[Image: an artist's depiction of artificial intelligence]

The artificial intelligence behind Wikipedia

If you’ve ever read through a few Wikipedia entries on U.S. cities and thought they sounded, well … robotic, you might be right. Chances are the article was written by a piece of software. Automated programs called bots have been working behind the scenes since the online public encyclopedia was born. They’ve become such important entities that one, named Antivandalism Bot, was even nominated for a spot on the governing committee (it didn’t win). And, like good politicians, the bots are stirring up controversy.

“Bots aren’t just programs—they are participants with lives and responsibilities,” says Stuart Geiger, a doctoral student in Berkeley’s School of Information. “If we’re interested in understanding how Wikipedia works, we have to include these bots in our stories and our histories.” Geiger, who describes himself as a kid who grew up on the Internet, has been investigating the social structure of Wikipedia for several years. He recently summarized his findings in a chapter of the book Critical Point of View: A Wikipedia Reader, published by the nonprofit Institute of Network Cultures in Amsterdam.

From their humble beginnings as lines of code that “did the tasks no human wanted to do,” bots have become integral players in the editorial and social structure of Wikipedia, according to Geiger. They organize global workflows, enforce antivandalism policies and editorial guidelines such as the three-revert rule (no more than three reverts to the same page within 24 hours), monitor user activity, and continually curate the data they gather.
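To make that concrete: a rule-checking bot of this kind can boil down to a short script. The sketch below, in Python (the language most Wikipedia bots are written in), counts revert-tagged edits to a page through the public MediaWiki API; treating the “mw-undo” and “mw-rollback” change tags as reverts is a simplifying assumption for illustration, not how any particular Wikipedia bot actually works.

```python
# A minimal sketch (not Wikipedia's actual enforcement code) of checking the
# three-revert rule: no more than three reverts by one editor to the same
# page within 24 hours. Uses the standard MediaWiki revisions API; treating
# revert-related change tags as "reverts" is a simplifying assumption.
from collections import Counter
from datetime import datetime, timedelta, timezone

import requests

API = "https://en.wikipedia.org/w/api.php"
REVERT_TAGS = {"mw-undo", "mw-rollback", "mw-manual-revert"}

def revert_counts(title: str, window_hours: int = 24) -> Counter:
    """Count revert-tagged edits per user on one page in a recent window."""
    resp = requests.get(API, params={
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|tags",
        "rvlimit": 100,
        "format": "json",
    })
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    revisions = next(iter(pages.values())).get("revisions", [])

    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    counts = Counter()
    for rev in revisions:
        ts = datetime.fromisoformat(rev["timestamp"].replace("Z", "+00:00"))
        if ts >= cutoff and REVERT_TAGS.intersection(rev.get("tags", [])):
            counts[rev["user"]] += 1
    return counts

if __name__ == "__main__":
    for user, n in revert_counts("Python (programming language)").items():
        if n > 3:
            print(f"{user} may have broken the three-revert rule ({n} reverts)")
```

A real enforcement bot would page through full revision histories and compare revision content rather than trust change tags, but the shape of the job is the same: watch the edit stream, count, and flag.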

“The antivandalism bots, especially, are critical to the concept of Wikipedia as a free online encyclopedia that anyone can edit,” Geiger says. “If these bots didn’t exist, that structure could not survive. We would be overrun by spam in hours.”

Yet Geiger’s favorite example of a bot’s “rise to power” is not the crime-fighting anti-spam bot, but the more modest HagermanBot. “It was originally written as a program that would sign and date user comments if the person forgot to do it.” Though considered good editorial practice, signing comments wasn’t enforced until the bot came along.
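HagermanBot’s core job is simple enough to caricature in a few lines. This toy version (not the bot’s actual source) appends Wikipedia’s real {{unsigned}} template when a comment lacks anything that looks like a signature; the regex is a rough stand-in for the bot’s real detection logic.

```python
# A toy illustration (not HagermanBot's actual code) of the core idea:
# given a new talk-page comment and the editor who posted it, append an
# {{unsigned}} attribution if the text doesn't already end with a signature.
# The signature-detection regex is a rough heuristic, not the bot's logic.
import re
from datetime import datetime, timezone

# A typical signature ends like: [[User:Name|Name]] ... 12:34, 30 August 2011 (UTC)
SIGNATURE = re.compile(r"\[\[User.*?\]\].*\d{4} \(UTC\)\s*$")

def sign_if_unsigned(comment: str, username: str) -> str:
    """Append an {{unsigned}} note when a comment lacks a signature."""
    if SIGNATURE.search(comment):
        return comment  # already signed; leave it alone
    timestamp = datetime.now(timezone.utc).strftime("%H:%M, %d %B %Y (UTC)")
    return f"{comment} {{{{unsigned|{username}|{timestamp}}}}}"

print(sign_if_unsigned("I think this section needs a source.", "ExampleUser"))
```

A production bot would parse wiki markup far more carefully and handle indented replies, templates, and opt-outs; the point is only how small the original job was.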

As one of the first bots that imposed a social norm rather than just an editorial standard, HagermanBot created intense controversy over whether bots should be allowed to use the information they were monitoring to edit people’s comments. “[HagermanBot] started out so humbly, and it stepped into this quagmire that was so out of its league,” says Geiger. “It really provoked the debate on what bots could and couldn’t do with the information they monitored.” If knowledge is power, information-storing bots were growing into powerful social actors.

Heated discussions about the rights of human and robotic editors eventually led to the insertion of a clause in people’s user agreements, giving contributors the choice of whether they wanted HagermanBot or similar bots editing their comments.

There’s a limit to how much bots can do, though. “Bots don’t replace humans. Their role is really to augment the human experience,” Geiger clarifies. “The goal is to have a bot do one thing that a human does, and do it well enough that you can’t even tell it’s a robot doing it.”
