
In the Age of Information, Can We Weed Out the Fake News?

August 20, 2020
by Apoorva Tadepalli

In mid-April, the United Nations Secretary-General formally identified a parallel “pandemic” to COVID-19: a “misinfo-demic,” or false news about the virus. Conspiracy theories, dangerous fake health advice, and discrimination and stigma related to the virus—from its origin to how it can be prevented or cured—have all spread like wildfire. The World Health Organization even added a “mythbusters” section to its website.

Various actors have been responsible for the spread of this misinformation, from conservative news to massive text message campaigns. But social media is particularly conducive to the spread of conspiracies and false information, and more advanced technology has brought even more advanced deception.

Hany Farid is a professor in the School of Information at UC Berkeley. // Photo by Bob Margot

For months, Dr. Hany Farid, professor at UC Berkeley’s School of Information and resident deepfake expert, had been analyzing pictures, videos, and text to understand the spread of misinformation as it pertained to the 2020 election. Now, with pandemic news dominating the media, he’s leading a new survey to assess COVID-related mis- and disinformation, in order to advise digital platforms on how to stem this flood. With the survey ongoing and the results yet to be published, Farid spoke to California about what he’s seen so far, the stakes of the issue, and how he thinks social media needs to move forward.

Can you start by describing the project and how you gather the information?

Hany Farid: What we wanted to understand was, what was the penetration of this misinformation—that is, how many people are seeing this misinformation, and then, are people believing it? You can imagine several scenarios: One is it’s out there, but most people aren’t seeing it because they’re focused on other things. Or, people are seeing it, but no one really believes it. Or, they are both seeing it and believing it, and that would be a serious problem. 

What we’re seeing in the early stages is that these things are somewhat geographical. So the type of misinformation that is widespread [in the U.S.] is different from what it is in Italy or France or China or India. So we’re trying to localize these issues. We’ve done the U.S. study, and we’re currently carrying out the study in Western Europe, Central and South America, and North Africa and the Middle East.

What kind of mis- and disinformation have you seen?

HF: We started by looking at the most popular disinformation headlines on Facebook. These formed three rough categories: health-related (“Do this to prevent COVID”), political (“This is being done to hurt the Republicans and Trump”), and conspiratorial (“5G towers are responsible for the spread,” “There is no virus,” or “This is a bioweapon coming out of China”). There’s obviously a little bit of crossover in those categories, but they’re all dangerous for different reasons. The health ones are dangerous because people are drinking bleach to prevent themselves from being infected; the political ones are bad because we’re in an election year, and it’s adding to an already polarized society; and the conspiracies are dangerous because they just lead to additional craziness. In the UK there were reports that people were taking down 5G cell towers because of the conspiracy. For many years there’s been this idea that this type of misinformation is relatively harmless because it’s online and what happens online stays online—but it turns out that’s not true. The online and offline worlds are interacting with each other in a weird way.

We did a survey on Mechanical Turk, an online marketplace. We have about one thousand responses. We showed [participants] 40 headlines: 20 are real, 20 are fake, and we don’t tell them which is which. We ask them three questions: Have you heard of this story? Do you believe it to be true? And do you know someone who believes or is likely to believe this? The first question gives us the penetration. Depending on which of the 20 fake headlines we’re talking about, we’re seeing a penetration between 30 and 40 percent. For example, when we ask people, “Have you heard of gargling or drinking bleach curing the virus?” 25 percent of respondents said yes, they had heard it. “Is COVID manmade, and not a natural virus?” 85 percent have heard this. “No one is sick from COVID; this is purely the media.” 45 percent have heard this. These are huge penetrations; way higher than we expected. In fairness, the people we are asking are people on Mechanical Turk—they tend to be more technologically sophisticated; they’re online more. So maybe it’s a little bit of oversampling. But this stuff is getting out there.

The second [question] gives us the uptake—how much this is being believed. When we ask them, “Do you believe it?” the numbers range from 5 percent up to 15 or 20 percent, averaging around 10 percent. The more political and conspiratorial headlines have a higher uptake. When we look at the question, “Do you know somebody who believes or is likely to believe this?”, that number more than doubles, to about 30 percent who say “yes.” We all have that person in our families who sends us crazy conspiracies and fake news, and anyone who’s on Facebook knows this, too. That number is really disturbing—you’re talking about one in three.
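[To make these tallies concrete, here is a minimal sketch of how per-headline penetration and uptake rates can be computed from the three survey questions; the data and field names are hypothetical, not Farid’s actual results:]

```python
# Minimal sketch of tallying survey responses per headline. All data and
# field names here are hypothetical, not Farid's actual study data.
from collections import defaultdict

# Each response: (headline_id, heard_of_it, believes_it, knows_a_believer)
responses = [
    ("bleach-cure", True, False, True),
    ("bleach-cure", False, False, False),
    ("manmade-virus", True, True, True),
    ("manmade-virus", True, False, False),
]

tallies = defaultdict(lambda: {"n": 0, "heard": 0, "believe": 0, "knows": 0})
for headline, heard, believes, knows in responses:
    t = tallies[headline]
    t["n"] += 1
    t["heard"] += heard       # penetration: have you heard of this story?
    t["believe"] += believes  # uptake: do you believe it to be true?
    t["knows"] += knows       # do you know someone likely to believe it?

for headline, t in tallies.items():
    print(
        f"{headline}: penetration {t['heard'] / t['n']:.0%}, "
        f"uptake {t['believe'] / t['n']:.0%}, "
        f"knows-a-believer {t['knows'] / t['n']:.0%}"
    )
```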

While absolutely some mainstream news outlets have been either intentionally or unintentionally spreading misinformation, we know that this problem is primarily a social media problem: Twitter, YouTube, Facebook. And we’re seeing a phenomenal amount of misinformation on all three. 

How can this issue best be addressed?

HF: We are seeing misinformation sowing unrest, and we have to get a handle on it. Once people start believing something, setting the record straight is very difficult. There’s a well-known phenomenon called the boomerang effect: When you tell someone that what they think is wrong, it backfires; it only further entrenches them. It’s certainly true of conspiracy theories. This is the world we live in now—where facts have become something other than what they should be. So while we would like to think about mechanisms to set the record straight, we’re also not naive about the way human psychology works.  

We think the way to deal with this is at the root. You have to stop this stuff as it’s making its way up. Now you get into some complicated issues. There’s going to be one side of the aisle that says people have the right to believe things that are wrong and stupid—that that’s their First Amendment right. There are others, like me, who say there’s a cost associated with that, to society and democracy and our overall well-being.

But that’s not really what the issue is. The issue is not that people are posting this stuff; the issue is that Facebook and Twitter and TikTok and YouTube are amplifying it. It’s algorithmic amplification. While you may have a freedom of speech to express yourself online, you don’t have a freedom of reach, of having a private social media company amplify your voice to millions of viewers. I contend that the argument should not focus on what should and should not be said online, but should focus on how social media amplifies the outrageous, conspiratorial, and divisive, because it drives engagement and profits. What we argue is, we should be changing the way we do algorithmic recommendations, so we’re not simply optimizing for engagement, trying to keep people on platforms as long as possible to deliver ads to them so we can make money. We should be more responsible, and that means promoting content that is trusted, not inflammatory and conspiratorial and hateful and dangerous. And to do that you have to be able to identify this material—that’s one of the things we’re working on. But that’s not nearly as hard as getting the Facebooks of the world to change the way they think about their social responsibility, to say, “We have a responsibility to do better, and while we respect users’ right to hold ideas that we think are simply not true, they don’t have the right to have us amplify that to the world.” That’s where we should focus our attention. 

In late 2019 and early 2020, before this pandemic, I was really worried about the 2020 election. I still am, but it’s nothing compared to what we’re seeing now. Now we’re talking about a global pandemic that’s taken hundreds of thousands of lives … It’s going to be harder and harder for our society and our policymakers and our health workers to figure out how to get out of this. And it’s the perfect storm—we’re all at home, we’re all online, we’re all scared, we’re all looking for information, and the trolls and the stupidity are out. On top of all that, Facebook has sent all the moderators home! Of course nobody could have predicted the pandemic. You can’t blame Zuckerberg for that, but you can blame Zuckerberg and Facebook for ignoring years of people telling them, “Your platform has been weaponized to do awful things to society—you’ve got to do better.”

You’ve said that “the half-life of a social media post is measured in hours not days or weeks.” Can you say more about what that means? 

HF: When someone posts something on Facebook or Twitter or YouTube, it has a certain shelf life. By half-life I mean, how long does it take to get halfway to the total number of views that this post will eventually get? When something gets posted on Twitter, it gets seen very quickly. That’s the nature of Twitter; people aren’t going back and looking at stuff, for the most part, three days later—it’s happening at that moment. So when Facebook says, “When someone gets around to telling us about the bad stuff, and when we eventually get around to looking at it, we’ll take action,” and that’s 48 hours, it doesn’t matter! Everyone has seen it! How many posts are people seeing on a daily basis on Facebook and Twitter and YouTube? Hundreds, thousands? You can’t correct the record at that point. 
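[To make that definition concrete, here is a minimal sketch of computing a post’s half-life from its view timestamps; the numbers are hypothetical, not data from Farid’s research:]

```python
# Minimal sketch: a post's half-life is the time by which it has
# accumulated half of all the views it will ever get.
# Hypothetical view timestamps, in hours since the post went up.
view_times_hours = [0.1, 0.3, 0.5, 0.8, 1.2, 1.5, 2.0, 3.5, 6.0, 48.0]

def half_life(view_times):
    """Return the time at which cumulative views reach half the total."""
    ordered = sorted(view_times)
    median_index = (len(ordered) + 1) // 2 - 1  # the view that crosses 50%
    return ordered[median_index]

print(f"half-life: {half_life(view_times_hours)} hours")
# Prints "half-life: 1.2 hours": most of the audience saw the post almost
# immediately, so a 48-hour moderation turnaround arrives far too late.
```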

Technically speaking, on these platforms, what would cutting the head off the snake look like? Something that better assesses what a post is saying and makes sure its visibility is minimal? 

HF: Exactly. Or, for example, if someone is constantly spreading misinformation, to stop amplifying their posts because they are not a trustworthy source.

The problem is that’s not how social media works. Social media works on an algorithm that says, “Our job is to maximize engagement, because that’s how we make a shit ton of money.” It doesn’t care about the content; it only cares about what grabs people’s attention. And what we know is this crap does actually engage people. So there are frankly really simple things we can do—once we’ve identified the misinformation, we can go back to see who’s responsible for spreading it: “You can still have a voice, I’m not going to kick you off the platform, but I’m going to stop amplifying you, and that’s not asking for too much.”
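[As a toy illustration of what “stop amplifying, don’t remove” could look like inside a ranking algorithm; the scoring scheme and account names are hypothetical, not any platform’s actual system:]

```python
# Toy sketch of "freedom of speech, not freedom of reach": posts from a
# repeat spreader of flagged misinformation stay up, but their algorithmic
# amplification is scaled down. Hypothetical scheme, not any platform's
# actual ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float  # what an engagement-only ranker would maximize

# Hypothetical count of each author's posts previously flagged as misinformation.
flagged_history = {"reliable_account": 0, "repeat_spreader": 12}

def amplification_weight(author: str) -> float:
    """Shrink reach as an author's record of flagged posts grows."""
    return 1.0 / (1.0 + flagged_history.get(author, 0))

def rank(posts):
    # The account keeps its voice; its reach is what gets reduced.
    return sorted(
        posts,
        key=lambda p: p.engagement_score * amplification_weight(p.author),
        reverse=True,
    )

feed = rank([Post("repeat_spreader", 9.5), Post("reliable_account", 4.0)])
print([p.author for p in feed])  # ['reliable_account', 'repeat_spreader']
```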

You mentioned a difference between people who are misinformed because they happen to see certain things on Twitter or Facebook and people who are misinformed because they actively seek out a certain kind of information.

HF: This is the difference between dis- and misinformation. Misinformation is spread without malice. [People spreading misinformation] are the people we think we can help. The hope is that that is a significant portion of the population.

How do you think this might look different if Trump weren’t president?

HF: The fact that we have flat-earthers who have gained traction, and people who say climate change isn’t an issue, and people who say they know how to fix this virus, is incredibly dangerous. When we don’t trust the people who have spent decades studying and understanding these issues and are trying to give information to policymakers and the public, we are in a shit ton of trouble. And frankly that has been going on well before Donald Trump. He has been particularly egregious with the response, dismissing science and facts and evidence for political or ideological expediency, but this problem has been growing for as long as I’ve been an academic, and I think that’s really worrisome. We can absolutely disagree on any number of things—abortion, taxes, trade, immigration. But we can’t disagree on the goddamn facts!

The problem is that politicians have always spun facts, and we’ve gone from spinning facts to just making them up. Prior to Facebook and Twitter and YouTube, it mattered what the New York Times and CNN said, because that was the way politicians communicated with the public. What Donald Trump figured out is that he doesn’t care about New York Times op-eds, because that’s not where the people who follow him are getting their information. Social media in its early days was this fantastic democratization of access to information; you take the corporate overlords out of the system. But it turns out there was value to having editorial oversight. And I think that’s been the core problem—we’re no longer being held responsible for outright lies.

We’ve now been living through the information revolution for over 20 years, but the problem is we’ve confused information with knowledge. The online ecosystem is simply polluted with crap! It’s harder and harder to gain knowledge, which is quite the paradox when you have access at your fingertips to what would have been considered a supercomputer thirty years ago. We thought this was only going to be a boon—turns out, not so much, because Mark Zuckerberg is an asshole. Because what he has valued is his profit over everything else. Our regulators have failed to see the power of these platforms, failed to regulate them in any meaningful way. So this is the nightmare that we have now.

Have you been seeing a lot of parallels with your other work in deepfakes? 

HF: We’ve been worried about all aspects of mis- and disinformation, whether they come in the form of deepfakes or trolls on Twitter or fake news stories. There’s no doubt that the fake videos are particularly powerful … but, to me, the deepfake videos and images are just part of this larger ecosystem. To my knowledge we haven’t seen them around COVID—probably because it happened so fast and there hasn’t been time—but it turns out you don’t really need them! You can just share a story saying, “Drink bleach, you’ll be fine!” and that’s it. Because of the fear and anxiety and speed, people are sponges right now, in a very dangerous way. 

I do have to ask: your survey is being funded by Facebook, and the survey tool you use is an Amazon tool?

HF: Yes. I do receive funding from Facebook, absolutely. I have a big lab and lots of different projects and people. This particular survey is not being funded by Facebook. We are running it on Amazon, but that’s just Amazon Mechanical Turk, so there’s no corporate conflict there. But you’re absolutely right. It’s been really interesting being funded by Facebook—I’ve been a pretty vocal critic of them for many years, and I continue to be a vocal critic of them. When they reached out for help, I thought, if they’re genuine about this, let’s see if we can actually help. It felt a little unfair to beat them over the head for not doing a good job, and then, when they came to ask for help, to say “no.” It was not an easy decision, and I have to say I struggled with it. I told them, “Please understand: This doesn’t come with a gag order.” And it hasn’t. I’ve remained very critical of them, despite the fact that my lab is funded by them.
