Many years ago, I worked in a large city and often had to walk several blocks from one large office complex to another during the course of an average workday. One afternoon I was trudging between buildings, head bent, lost in thought; I passed the entrance to a small, dark alleyway just as a new Porsche roared up from the gloom. The car fishtailed to a stop a few inches from my kneecaps, and I froze, immobile with fear. The driver was a budding Master of the Universe—thirtyish, well dressed, obviously used to money, privilege, and a certain quantum of power. He looked through the windshield at me with distaste, raised his hand slightly from the wheel, and gestured: a shooing motion, as though I were a small, bothersome animal.
I’m not completely clear about what happened after that, except that red lava seemed to swell through my brain, and my reaction was extreme. I do know that the incident ended with the scared driver gunning his vehicle into traffic, almost sideswiping another car; and I recall several passersby stopping to gawk before hurrying on.
I’m not proud of my reaction, but when I think back on it, it’s not so much abject shame that I feel. It’s more relief—relief that no one had smartphones back then. We had flip phones, barely good enough for calling and texting, and totally unable to record video or stream data to the Internet.
My response, though not laudable, was human. I can forgive myself. But would anyone else? Especially anyone who saw it on social media? Because that’s what would happen today. The pedestrians who scurried by then would now stop, haul out their iPhones, shoot footage, and post it immediately to Facebook, where it would propagate endlessly, metastasizing like a virtual tumor. By the time I got back to my office, I’d be—I almost said famous. I’d be infamous.
Granted, my behavior wasn’t racist or xenophobic like that of so many Americans who have lately suffered ignominy after their bigotry was caught on camera and shared with the world. I wasn’t like the lawyer who ranted against Spanish-speaking customers in the New York deli or the drunk in Chicago harassing a woman for wearing a T-shirt imprinted with the Puerto Rican flag or the white San Franciscan who called the cops on a little African-American girl who was selling bottled water without a permit.
No, I was just momentarily unhinged by a combination of adrenaline and righteous indignation. Nevertheless, my rage against the machine would no doubt have cost me my job and made it hard to find another one. And, being single at the time, I would have resigned myself to terminal loneliness. No one watching a video of that encounter would’ve wanted anything to do with me.
In sum, our smartphones have become more than a platform for communication, news, and entertainment. Linked to social media, they are a whip, a goad, a tool for vigilantes and social justice warriors alike.
We have entered an era of “sousveillance”—literally, watching from below. Where surveillance suggests the all-seeing eye of Big Brother, sousveillance, a term coined by wearable computing pioneer Steven Mann, is about all of us, armed with cameras, watching the watchers.
This is not a bad thing, necessarily. As Lisa Nakamura, a professor of American culture and the coordinator of digital studies at the University of Michigan, observes, smartphones are classic “weapons of the weak” that provide a powerful means for the powerless to confront authority and expose its abuses and even criminality. What is Black Lives Matter, after all, but a social awakening and revolution amplified by cell phone footage—video after video of police encounters where unarmed black men wind up dead? If all we had to go on were the body cams worn by police, there would not be a Black Lives Matter. As Mann says, surveillance without sousveillance is only half-truth.
And yet, even with ubiquitous cameras, the whole truth is hard to come by. Part of the problem is context and point of view. Imagine how different my encounter might have looked if the Porsche had been equipped with a dashcam, for example. Or how it would look to observers who only saw my freak-out, but not the recklessness or arrogance that provoked it.
That inherent subjectivity was the impetus behind The Rashomon Project, an effort led by Ken Goldberg, a UC Berkeley engineering professor and chair of the University’s Department of Industrial Engineering and Operations Research. An open-source toolkit, Rashomon, named after Kurosawa’s classic film told from multiple conflicting viewpoints, was designed to allow the simultaneous display of multiple time-coordinated videos and photos of protests, riots, and other events marked by violence and discord.
Says Goldberg, “Rashomon attempted to address the problem of the multiple perspectives that crop up when you have so many people recording and sharing videos of the same event from a variety of angles. We developed a tool that allowed the synchronization of all these points of view.”
Events subjected to such analysis include the UC Davis pepper-spraying incident from 2011, and a police raid on Istanbul’s Taksim Square in 2013. The effect of these time-synched collages is somewhat mesmerizing; they feel as much like art exhibitions as documentary displays. And while neither appears to offer any revelatory counter-interpretations of events—the Davis kids really did get lacquered with pepper spray, the Turkish protesters really were hosed down with water cannons—it’s not hard to imagine such a tool shifting perceptions of history.
After all, as any sports fan knows, a call that looks sound from one angle can appear baseless on slow-motion replay or when shown from a different camera angle. As fans also know, video review rarely settles an argument once and for all.
So, if truth remains elusive, where does this obsessive, 24/7, wall-to-wall recording of everybody by everybody else leave us? In a sense, we’re living in a new kind of panopticon, where we have to presume that not only is the jailer always watching, but so too are our fellow inmates. Those who are slow to recognize this may pay a steep price.
Take the case of environmental scientist Jennifer Schulte, whose tirade against the African-American family barbecuing at Oakland’s Lake Merritt was captured and posted to the Internet by one of the family. Nakamura, the digital studies professor at Michigan, says she wasn’t particularly shocked by the racist overtones of the incident. She noted that racism has always been a defining quality of the national character, “and cell phones haven’t made it better or worse. The main thing that struck me was more [Schulte’s] lack of disposition for living in a cell phone world. There is growing awareness in the culture generally—and especially among younger people—that you’re always ‘on’ in public. [Younger] people instinctively know when to tone it down, so when [racist incidents are recorded], the reaction is as much scorn about the [recorded subjects’] ignorance as it is outrage at the racism.”
But if highlighting racism is largely beside the point, what are we left with but the ritual of public shaming—enduring disgrace for a moment of mere stupidity or weakness—where social media becomes judge, jury, and executioner?
It’s a theme journalist Jon Ronson explored in his 2015 book, So You’ve Been Publicly Shamed. The author cited as an example the case of Justine Sacco, a young PR executive who, in 2013, sent this ill-considered tweet as she boarded a flight for South Africa: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”
To be sure, it was a remarkably lame, unfunny joke, but she insisted that’s how she meant it—as a joke about white privilege. Instead, people read it as a celebration of the same. By the time her plane landed in Cape Town, she was already tied to the social media whipping post and branded as the worst kind of racist twit. Her career went up in smoke and her name was dragged through the mud.
So, is there anything that can protect us from this kind of overkill, say, policy-wise? Probably not.
“The genie’s out of the bottle,” says Brandie Nonnecke, research and development manager for Berkeley’s Center for Information Technology Research in the Interest of Society. “Every day you have 1 billion people putting up content or commenting on it,” she says. “With that kind of volume and with existing automated processes, it’s incredibly difficult to identify and flag hate speech, disinformation, or violent content. That’s why [Facebook CEO] Mark Zuckerberg said he’s hiring 20,000 more monitors. Government is under increasing pressure to mitigate the negative impacts [of content on social media], but it’s incredibly difficult to do in practice. The simple fact is that the systems can be manipulated for positive or negative outcomes, and we have to move forward [with regulation] carefully if we’re not going to infringe on basic First Amendment rights.”
And yet, technology aside, this is hardly a new problem. Public shaming has been with us since we were hunters and gatherers. It’s as American as The Scarlet Letter. What has changed is the scope of the opprobrium. When Justine Sacco’s humor missed the mark, the world piled on.
It’s one thing to lose face at Thanksgiving dinner, after you’ve had too much pinot noir and ranted to your nephews and nieces about the evil machinations of the Illuminati. It’s another when one of the scamps surreptitiously records your screed and posts it to YouTube.
The old media saw, “If it bleeds, it leads,” holds true for social media. If you bleed—or make someone else bleed, literally or figuratively—be prepared to find yourself widely featured on Facebook and trending on Twitter. That’s just how it is. Adapt or die. (Or at least, wish you had.)