As More Extremists Radicalize Online, Can Violence Be Prevented?

October 23, 2019
by Bryan Schatz

The Christchurch mosque shooting was the clearest turning point: a mass murder that was, as the New York Times put it at the time, “of, and for, the Internet.” The gunman had teased the shooting on Twitter, announced it on 8chan, the anonymous fringe forum that serves as a megaphone for extremist political views and hateful ideology, and live streamed the attack on Facebook. On YouTube, Reddit, and elsewhere, the video of the shooting was repeatedly uploaded faster than the sites’ moderators could take it down. 8chan viewers reportedly watched the shooting unfold live, reacting with horror in some cases and cheering it on in others.

By lurking in dark corners of the internet, extremists are able to avoid the kind of shaming that could “disincentivize their sharing of those beliefs.”

This wasn’t the first mass shooting bearing the Internet’s fingerprints. The Tree of Life synagogue shooter was a fan of Gab, the social media site beloved by extremists. The man who sent explosives to critics of President Trump last year shared his beliefs via far-right Facebook and Twitter memes. But Christchurch was perhaps the most direct portrayal of how radicalization and the glorification of gun violence are colliding online and leading to horrific shootings in the real world. It also laid bare this fact: little is known about how entrenched the problem is or, importantly, what can legally be done about threats made online before they come to fruition.

“As we started to see more and more of this happen, it started to feel like an area that was being under-addressed with legal research and moves that can be taken to curtail gun violence,” says Zeke Wald, a student at Berkeley Law who is leading the university’s Gun Violence Prevention Project. Now in its second year, the GVPP, in partnership with the Giffords Law Center, a public-interest law organization, hopes to stop gun violence through policy. This year’s project is studying the intersection of gun violence and online radicalization, the existing laws for countering threats posed online, and what new policy protections can cover gaps in the legislation.

This is relatively uncharted territory. “As all of these stories started to come out, this picture started to form for us around what we saw as a dearth in the gun violence prevention movement,” says Wald. What can be done at the point when an individual goes online and declares their intent to go on a shooting spree? “As these conversations are happening on these forums, what does the law say happens next? That is not something we’ve seen a really comprehensive research project on.”

So that’s what they are setting out to do. GVPP’s first step will be doing a nationwide sweep of existing laws to determine what tools are available today, and where there are gaps. Wald anticipates the project orienting itself towards three potential directions: a “public enforcement” route where, through legislation, law enforcement is given more tools to deal with threats; a “private enforcement” route, which would include litigation to put pressure on individuals and web platforms; and what Wald calls “community enforcement,” which would be an effort to bring the toxic forums out of their dark corners and into public view where citizens can air their disapproval. As it stands, says Wald, by lurking in dark corners of the internet, extremists are able to avoid the kind of shaming that could “disincentivize their sharing of those beliefs.”

“What are the options available to individuals who are being directly threatened? Or to users who see what they consider to be threatening content, even if it’s not directly toward them? Or to law enforcement?”

Throughout, GVPP will analyze shootings such as the one earlier this year in Christchurch, and the two others since that were announced on 8chan via manifesto. In one, a 19-year-old man, inspired by the Christchurch shooter, posted a white nationalist letter before opening fire in a synagogue in Poway, California, in April, killing one and wounding three. In another, a person identifying himself as the gunman of the El Paso shooting in August posted a four-page message just before the massacre began and encouraged his “brothers” on the site to spread the message online. “More and more of these are starting to be seen as finding an origin in online engagement,” says Wald.

In 2014, California passed a “red flag” law creating the Gun Violence Restraining Order, which allows law enforcement to temporarily take away someone’s firearms when they present an immediate threat to themselves or others. It came about in part because of the Santa Barbara massacre, in which a 22-year-old gunman who had posted a 141-page manifesto drenched in extremist messaging killed six people in a stabbing and shooting spree. There were warning signs, but law enforcement had little authority to do anything. His “parents actually warned law enforcement that they were worried that he was a risk to himself or to others,” notes Wald. “And law enforcement was able to receive that warning and have a conversation with him. But there was nothing else that they could do from there. They weren’t able to search his apartment for weapons. And then he went on to kill six people,” says Wald, who was attending UC Santa Barbara at the time of the shooting spree. “One of the impetuses behind the gun violence restraining order was, when there’s really credible evidence that someone is an imminent threat to themselves or to others, let’s have a mechanism by which law enforcement can say, ‘For 14 days, we’re going to take away your access to firearms.’”

This is one tool that’s available in California, says Wald. But he adds that it has its limitations. As the restraining orders stand now, law enforcement agents, and in some cases parents or household members, are the only ones who can request them, which likely excludes the Internet users who actually see threats posted online. The tool also isn’t being used very frequently throughout the state: as a relatively new law, it is not yet well known, and the strict limits on who can request the orders further narrow its reach.

This year’s GVPP aims to identify all the laws that are currently available throughout the country and determine where and how they can be expanded and adopted. Wald wants to know: “What are the options available to individuals who are being directly threatened? What are the options that are available to independent platforms, if they actually want to do something themselves about content that’s posted on their site? Or to users who see what they consider to be threatening content, even if it’s not directly toward them? Or to law enforcement?”

There is, of course, controversy in advocating for law enforcement intervention when someone makes threats online. These efforts could be in direct opposition to the work of data privacy and free speech advocates. A crucial question, according to Julia Weber, a senior legal counsel at the Giffords Law Center who is overseeing the student-run project, will be, “How do we avoid incursion into civil liberties around speech and data privacy, and at the same time, develop that information that may be necessary to prevent gun violence?”

“It’s incredibly clear that this isn’t something that’s going away on its own. I think it’s important that we get that conversation, very loudly, happening, regardless of where it ends up going.”

The clearest real-world example of the conflict between gun safety and data privacy began in the wake of the 2015 San Bernardino massacre, when a married couple killed 14 people and wounded 22 others. The FBI attempted to compel Apple to help break into the shooters’ iPhone as part of its investigation. It was a drawn-out battle that eventually saw the ACLU defending Apple against the FBI and stating: “If the government has its way, then it will have won the authority to turn American tech companies against their customers and, in the process, undercut decades of advances in our security and privacy.”

“We can’t, in our enthusiasm for wanting to prevent tragedies, acquiesce to erosion of our civil liberties,” allows Weber. “But in the data privacy conversation, there may be such enthusiasm for avoiding either private or governmental intrusion into our online interactions that I think sometimes those conversations don’t include what the cost of that may be—making it harder for law enforcement to pursue dangerous situations for the public. Each of the movements and each of the conversations need to be infused with information about each other.”

For this reason, Wald and his colleagues will be collaborating with the Digital Rights Project, another student-led pro bono project working with the ACLU of Northern California. “We’re seeing this really well-founded move towards limiting the amount of personal information that platforms have on users,” says Wald. Responding to citizens’ fears about the data they’re inadvertently providing to aggregators, advocates and lawmakers are pushing legislation—and litigation—to limit the kind of information that can be gathered and to prevent personal information from being shared. Wald worries that could end up protecting the privacy of violent extremists. “We want to work with the folks who are pushing that movement forward to try to avoid creating a situation where we accidentally create a safe haven, where someone can go on 8chan and post, you know, ‘I’m gonna go shoot all the Muslims.’ And there’s nothing that can be done.”

The idea, says Wald, is that once they get a clear idea of what regulations already exist, what has been effective and what hasn’t, they can begin crafting sample policy proposals that aim to balance these competing ideas. “It’s incredibly clear that this isn’t something that’s going away on its own. We had the Gilroy shooting, we had the shootings in Texas and Ohio, and it’s just continuing to proliferate and grow. I think it’s important that we get that conversation, very loudly, happening, regardless of where it ends up going,” says Wald. “Part of the power of those Internet spaces is that they can operate in the shadows, and the more we shine a light on them, the less, hopefully, we’re going to be seeing the same kinds of things happen.”