
Facebook’s “Compassion Team”—Academics Try to Convince a Billion Users to Play Nice

January 23, 2014
by Ben Christopher

Imagine a gentler Internet. Imagine a world wide web where comment sections aren’t the lowest-common-denominator rhetorical melees we know them to be but forums for reasoned debate and thoughtful discussion. Imagine a life online in which social media sites serve as breeding grounds for empathy, introspection, and compassion, rather than for bullying, smut, and smarm.

Now imagine a pig with wings.

Despite the odds, Facebook has been trying to boost the social intelligence of its social network since early 2012. Facing mounting public concern over cyberbullying and a growing number of disgruntled users alienated by the site’s tendency towards snark and callousness, the social media leviathan hired a small army of academics from psychology and sociology departments at universities such as UC Berkeley, Stanford, Yale, and Columbia to form a “compassion research team.” The team’s mandate: convince the one billion-plus people who use Facebook to play nice.

“If you and I are having a conversation, I can tell very quickly if I have said something that you find off-color or offensive, even if you don’t say anything at all,” says Emiliana Simon-Thomas, a UC Berkeley neuroscientist and science director at the Greater Good Science Center, which has partnered with Facebook for the project. “It’s not surprising that it’s harder to resolve these more nuanced issues through virtual communication because we don’t have those subtle cues to respond to.”

The first goal of the project, says Paul Piff, a post-doctoral researcher at Berkeley who is also a member of the compassion research team, was to tackle the social media site’s massive complaint backlog. This presented a unique problem. While Facebook’s Terms of Usage are quite straightforward—no hate speech, no threats of violence, no harassment—the site was still receiving formal complaints about seemingly innocuous photos and status updates. And it was receiving them by the millions.

“These aren’t pictures depicting porn or violence. It’s usually a friend’s photo, often with the person who filed the complaint in it. But they don’t like the way they look in it, it’s embarrassing, it violates their privacy,” Piff says. Rather than rewriting the Terms of Usage or simply ignoring all those hurt feelings, Facebook had its compassion research team sit down with Facebook engineers to design tools that help Facebook subscribers communicate those feelings. Or, in Piff’s words: “To build empathy into the site.”

He walks me through the process: pick a photo someone else has uploaded to Facebook in which I’m tagged, untag myself from it, and then try to report the image.

I do, but first I’m stymied by a text box.

“Instead of giving you a tool that automatically reports it to Facebook, it tries to channel your experience in a way that makes you more aware of why you don’t want that photo there,” Piff says.

“Why don’t you like this photo?” the text box reads. I’m offered five options. A chaise lounge it’s not, but in that brief moment of reflection a person is forced to consider what it is about the photo that so offends.

Being able to identify one’s emotions is a fundamental aspect of emotional intelligence, Simon-Thomas says. And it’s often the first step towards resolving a potentially vitriolic online confrontation.

“If someone writes a status update that says, ‘Sally is fat,’ and Sally sees that and feels sad, but instead responds, ‘F— Off!’ that’s not going to do anything. The original poster is only going to feel attacked,” she says. “But if Sally has the opportunity to realize that the message does make her sad and she is prompted to send a private message explaining that to the other person, that brings the entire experience to the opposite end of the emotional spectrum. Inherently, a person who receives that message is going to have at least a small empathic response.”

Just as important, Piff says, the person gains the feeling of being heard by Facebook’s newly empathy-enabled algorithm.

In compassionate coding, as in conversation, getting the wording just right was a big challenge. Take the first of those five options. Initially, users were allowed to select “I’m embarrassed.” But after looking at early user data, the compassion team switched over to the less personalized “it’s embarrassing.” As Piff puts it, “People far preferred something that prioritized the process over the feeling.”


After I select the process that’s bothering me (I, too, choose “it’s embarrassing”), I am brought to yet another window. According to Piff, this multi-step process runs counter to the prevailing wisdom among engineers at Facebook—and throughout Silicon Valley—who assume that the best user experience is the most seamless and streamlined one.

Not so when you’re dealing with “emotionally-laden scenarios,” Piff says.

“Engineers talk about wanting to do things in two or three clicks,” he says. “What we learned is that if you make people feel like they’re being heard, that they’re engaging in a process that’s tailored to their experience, they’re more likely to come away with a positive experience.”

And so I’m brought to the second box, which contains a pre-written message directed to the poster of the objectionable picture (in my case, it’s my father).

“The best way to remove this photo is to ask [my dad] to take it down,” the header tells me, before moving down to the message itself: “This photo is a little embarrassing to me and I would prefer that people don’t see it on Facebook. Would you please take it down?”
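For readers curious about the mechanics, the two-step flow described above can be sketched in a few lines of Python. This is purely an illustration, not Facebook’s code: the list of reasons is paraphrased from the dialog, only the “it’s embarrassing” message is quoted from the article, and every function name and fallback string below is an assumption.

```python
# Illustrative sketch only: this is NOT Facebook's implementation.
# It models the two-step "social reporting" flow described in the article:
# step 1 asks the reporter to name what bothers them; step 2 drafts a
# polite, pre-written request to the person who posted the photo.

# The five reasons shown to the reporter (paraphrased; the real labels
# and their order are an assumption).
REASONS = [
    "It's embarrassing",
    "It's a bad photo of me",
    "It makes me sad",
    "It violates my privacy",
    "It shouldn't be on Facebook",
]

# Pre-written messages keyed by reason. Only the "It's embarrassing"
# text is quoted from the article; the fallback below is a placeholder.
DRAFT_MESSAGES = {
    "It's embarrassing": (
        "This photo is a little embarrassing to me and I would prefer "
        "that people don't see it on Facebook. Would you please take it down?"
    ),
}

def start_report(photo_owner: str) -> None:
    """Walk a reporter through the two-step flow at the command line."""
    # Step 1: make the reporter name what is bothering them before anything
    # is sent anywhere -- the "moment of reflection" the researchers wanted.
    print("Why don't you like this photo?")
    for i, reason in enumerate(REASONS, start=1):
        print(f"  {i}. {reason}")
    choice = int(input("Pick a number: "))
    reason = REASONS[choice - 1]

    # Step 2: offer a pre-written, polite request addressed to the poster,
    # rather than a one-click report straight to Facebook.
    draft = DRAFT_MESSAGES.get(
        reason,
        "I'd rather this photo weren't on Facebook. Would you please take it down?",
    )
    print(f"\nThe best way to remove this photo is to ask {photo_owner} to take it down.")
    print(f"Suggested message: {draft}")

if __name__ == "__main__":
    start_report(photo_owner="the person who posted it")
```

The point of the sketch is the ordering: the reporter has to name what is bothering them before any message, let alone a formal report, goes anywhere.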

Here too, the wording is key.

“We had different versions: Would you mind…? Would you take it down?” Piff says. But after sifting through real-world user data on how frequently people actually took down photos once they received a complaint (for any privacy defenders out there who also inexplicably use Facebook, the researchers were only able to see the metadata, not the text of the messages or the identities of the messengers), the numbers were clear. In plenty of situations, that pretty please seemed to make all the difference.
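The comparison Piff describes can be imagined as a simple tally over anonymized metadata: which wording was sent, and whether the photo later came down. The sketch below is hypothetical; the variant labels, record layout, and numbers are invented for illustration and say nothing about Facebook’s actual data.

```python
# Illustrative sketch only: the researchers saw metadata (which wording was
# sent and whether the photo later came down), not message text or identities.
# Variant names and the sample records are invented for illustration.
from collections import defaultdict

# Each record: (wording_variant, photo_was_removed)
records = [
    ("would_you_mind", True),
    ("would_you_mind", False),
    ("would_you_take_it_down", True),
    ("would_you_please_take_it_down", True),
    ("would_you_please_take_it_down", True),
]

def takedown_rates(rows):
    """Return the share of requests in each wording variant that led to removal."""
    sent = defaultdict(int)
    removed = defaultdict(int)
    for variant, was_removed in rows:
        sent[variant] += 1
        if was_removed:
            removed[variant] += 1
    return {variant: removed[variant] / sent[variant] for variant in sent}

print(takedown_rates(records))
# e.g. {'would_you_mind': 0.5, 'would_you_take_it_down': 1.0, ...}
```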

Depending on how psychologically or socially important you think these online conflicts are, that difference might be pretty significant. According to Facebook, some 3.9 million people use the tools developed by the compassion research team every week.

And after working on the kindness campaign for some two years, the researchers have helped develop a sizeable toolbox of conflict-mitigation and empathy-building software tweaks designed to make Facebook gentler, kinder, and more emotionally complex. Last spring, Berkeley psychologist and Greater Good executive committee member Dacher Keltner led a project to design a set of more subtly expressive emoticons (the cartoon facial expressions, beginning with the smiley face, that Internet users exchange in lieu of using their words). These days, a team led by Yale’s Marc Brackett is tackling the pernicious problem of the moment, online bullying.

Meanwhile, Cal researchers like Piff and Simon-Thomas are considering different ways to cool down Facebook comment sections and status updates.

“Shared links are the most common problem at the moment,” says Simon-Thomas. “A lot of the time the issue is that someone is mildly irritated by a particular person who is showing up in their newsfeed. Everyone can relate to that. Unlike the photo issue, we don’t necessarily want to encourage messaging as a way of resolving that issue. It doesn’t help for me to reach out and say, ‘hey, I’m tired of your annoying posts.’”

Instead, the team decided it would be better to just make it really easy to unsubscribe from a particularly tiresome poster.


Still, heated, often nasty debate and confrontation remain a perennial problem.

“What would it look like to build tools to make people more mindful of what they’re posting or which better enable them to resolve a dispute?” asks Piff. “Say you post an article about Israel and Palestine and all of a sudden a big fight unfolds on your Facebook wall and the conversation has turned into virtual fists. Currently, there isn’t a lot that anyone can do beyond an individual deciding to close their computer.”

Of course, the entire premise of the compassion research team project rests on the assumption that these virtual fistfights are always a bad thing and that the social niceties and empathic sensitivities that guide our “real life” interactions should also dominate our online interactions. They view Internet vitriol as a swamp that should be dredged of its spite and cruelty, rather than a release valve for our darker selves. Isn’t it possible that being a jerk online is what allows many of us to keep from losing it in the middle of morning traffic? Or better yet, if the only way to get out of a nasty comment war is to close the computer, maybe people should learn to just close the computer.

Both Simon-Thomas and Piff readily admit that the compassion research team isn’t going to change the world by getting Facebook users to say “please.” But neither believes in such a psychological bifurcation between “online” and “real” life. That’s particularly true of social media, where few people “make a useful distinction” between their face-to-face interactions and those they make in a Facebook chat window.

“If you have a device that people use to communicate with one another and that communication is devoid of the basic principles of pro-social interaction, those people will adopt those habits of communicating,” Simon-Thomas says. “Maybe for older users there might be a real difference between in-person socializing and virtual socializing. But for kids who have been on Facebook since the beginning of their social lives, the lines are way more blurry.”

“People really care about these (online) interactions,” Piff says. “It’s not the case that these relationships are more superficial than those they have in ‘real’ life. I think there’s a lot of reason to think that by constructing a particular kind of culture in one facet of a person’s life, that will permeate into other facets of life.”

I start to ask Piff whether he takes his own online interactions so seriously, but, despite his research, it turns out he isn’t the right person to ask. He doesn’t use Facebook.
