Building Trust

A Q&A with Professor Coye Cheshire on Social Media, Misinformation, and Political Activism.

June 17, 2024
by Nathalia Alcantara
Portrait of Coye Cheshire. Photo by Julie Cheshire.

From real-time raw images of war to the sight of Pope Francis wearing an AI-generated puffer coat, social media content pulls us in every direction. In this sea of information, the line between reality and the distortions of our digital echo chambers has never been blurrier. Despite its undeniable benefits in democratizing information, social media’s fragmented reality forces us to question whether anything we see online is real.

The quandaries of online trust have intrigued Berkeley School of Information Professor Coye Cheshire since the 1990s, back in the early days of online marketplaces such as eBay. “At that time, the idea of conducting online social and financial transactions with other people was still novel, unproven, and quite risky,” he says.

One day, as he helped his parents buy some crafts online and wondered whether the seller would actually send the promised items, he found a new focus for his work. Cheshire’s mixed-methods research explores the intricacies of how people build trust and cooperate in online environments—a complicated dynamic today, to say the least. 

With deepfakes rapidly becoming a fact of life and conspiracy theories in no short supply, it’s easy to despair over the future of our digital interactions. But that’s not the entire story, Cheshire says. While these challenges are real, there is hope on the horizon. California sat down with him to explore how online distrust impacts our perceptions and behaviors.

This interview has been edited for length and clarity.

We were surprised to see you share a rather hopeful outlook for the future of our digital landscape. You describe it as potentially more ‘civil, democratic, and trustworthy.’ How do you envision us moving toward that future?

My hopeful outlook comes from a desire to help us get there—not because I think that the optimistic outcome is necessarily the most likely. I think that we are in a particularly challenging period in the evolution of online information sharing. And things may get worse before they can begin to improve. But I strongly believe that our societies have a greater interest in creating civil, democratic systems than tyrannical, discourteous environments. It takes time… often longer than we would like, but positive change does happen. A big part of getting us to a healthier, more prosocial, democratic future is highlighting the problems with online disinformation, talking about why these problems matter, and sharing actionable solutions.

What are some of the actionable solutions tech companies could use to combat online misinformation? 

Some of the most effective solutions tend to leverage social accountability (e.g., up/down voting content, verifications, and corroborations). Some forms of social media incorporate these into their designs and prioritize content based on how the collective rates the information. On a platform like Reddit, for example, we have observed posts about protest activities that are challenged and receive no additional verification, resulting in ‘downvoting’ that de-prioritizes the content since it is likely misinformation. For fast-evolving events, this kind of social accountability can be powerful, though it’s certainly not perfect. Other platforms have leaned more heavily into automated moderation for known misinformation, such as using natural language processing and AI tools to tag and label certain types of information as misleading or contested.

Is there any major social media platform doing reasonably well on that front?

When it comes to top-down actions, I actually think that Meta is making some pretty significant advances. There is a lot more to be done, but I do want to acknowledge Meta’s progress in identifying potential misinformation across its social media properties (Instagram, Facebook, and Threads). While Facebook in particular has been involved in some of the most egregious examples of mis/disinformation and the spreading of intentionally fake news stories, I think that Meta has been leveraging large, third-party fact-checking networks to identify hoaxes and false information fairly quickly.

Before social media, our exposure to images of violent conflict was largely curated by traditional media. Today, many of us are exposed to raw war imagery directly through our devices. How does this impact us? 

The ability for citizens to directly share media about conflicts and social confrontations is a huge triumph for the free expression of information. I think many people are now familiar with several high-profile atrocities and abuses of power from just the past few years that would never have been believed or addressed if not for the images, video, and audio shared on social media by those on the ground. I think that there are several ways that this can affect us politically, but the one I would highlight here is the fact that we simply do not all watch or use the same sites, platforms, or news sources.

We know from a variety of empirical research studies that those who are more factually informed about the reality of important events have very different media consumption habits than those who are less factually informed about the same events. And yet, those who are less informed do not necessarily believe that they are less informed. So, while the internet has democratized our ability to share information about what is happening when it happens, political views are ultimately shaped by the limited scope of what people consume, not by the full range of what exists.

In what ways has misinformation surfaced during recent student protests, and what have been its most noticeable consequences?

Many of the misinformation issues that we have seen in recent student protests include confused reports about activities, embellishments and unsubstantiated assumptions about motivations, misleading headlines, and flat-out fake stories. One of the most difficult and frustrating consequences of misinformation about social behavior is that public opinions are formed very quickly based on initial accounts of protest activities. Misinformation also tends to be highly simplistic, providing uncomplicated targets of blame for negative events.

Unfortunately, we know that opinions and beliefs change slowly with the release of more accurate information, and often opinions do not change at all. Finally, a lot of research in our field shows that the speed of spread and repetition of misinformation during fast-moving events is greatly enhanced by social media platforms that are primarily designed to prioritize newness and novelty over accuracy.

Does social media help explain the widespread campus protests?

Despite the negative role that social media can play in the dissemination of misinformation, these platforms can also be amazing tools for coordination and mobilization. They can create a public record of intention—where organizers and participants share information to help manage protest efforts in a way that potentially lets the public see exactly what was being organized and why. Furthermore, these platforms provide efficient pathways for sharing real-time images, videos, and audio of events as they unfold. On the flip side, some universities and law enforcement officers monitor social media to try to stay ahead of potential problems and surveil participants—even if no laws or rules have been broken.

The existence of social media does not itself cause or encourage protests on the ground, but it does allow information to spread quickly. What current research shows, however, is that the ‘summaries’ of what is happening as reported by news organizations and primary media sources have an outsized effect on what the public believes about these protests. Most people do not closely follow the up-to-the-minute minutiae of protests. Instead, the summary framing of primary media coverage is the main determinant of what people believe is happening on campuses. This can create a different kind of misinformation, where the summary coverage may focus on brief moments of drama or disruption rather than on the more complex reasons that such disruptions may have occurred.

Interesting. It seems like a snowball effect: protesters hesitate to talk to the media, which makes sense given what you’ve described. But the more they hesitate, the more surface-level the coverage gets.

During social unrest, I think that much of the reticence to talk to the media likely stems from frustrations over past events where coverage tended to focus on anything salacious, violent, or disruptive (and this is precisely the kind of content that gets clicked and consumed, leading to so-called doomscrolling). Protesters may realize that they can give a full, complete interview that tells their version of events, but they have no discretion over how that interview will be cut and edited to fit a particular narrative. So, the in-the-moment calculation to say nothing is probably a rational individual decision, which collectively may lead to a suboptimal outcome of more surface-level coverage, as you suggest. Obviously this is not a new problem, but for fast-moving events where disruption is used to call attention to a perceived injustice, it can be particularly challenging. Hence, the ability to directly share one’s experiences and views on social media becomes even more attractive.

What future trends can we expect in the use of social media for political activism?

My hope is that political activists will continue to coordinate in open, public ways using social media to provide a clear and unfiltered view of efforts as they are planned and as they occur. My fear, however, is that the attractiveness of software surveillance tools that monitor social media will potentially push activism coordination into more private channels. When that occurs, peaceful protests can inadvertently appear confrontational by virtue of their secrecy.

What social cues are amplified or lost in digital environments as compared to real world interactions?

Most cues that depend on subtlety get lost in technology-mediated interactions. Things like sarcasm, as well as any kind of non-verbal communication. On the other hand, strong emotions such as anger and joy tend to be overstated in tech-mediated interactions to the point where they can overshoot the intended response. Examples include online forums where terse statements (perhaps in all caps) can come across as far more confrontational than intended. The same can be said for joy and laughter—consider how many times we use the term ‘lol!’ and never even crack a smile while typing it. Taken together, there are many ways for us to misunderstand and misattribute emotions and the severity of situations when interpreting social communications in online environments.

It’s fascinating to think about how many of our day-to-day interactions are now mediated by technology, and how this will only increase. Do you worry about how well humans can adapt to spending so much time interacting in the digital world?

One of the things that I am actually not concerned about is our ability to adapt, respond, and reflect over time as technology advances. AI and machine learning algorithms are joining a long list of technological advances that were equally feared and lauded when they were introduced—from the printing press and industrialization to the automobile, television, and of course the internet. The thing that I do fear (which is mirrored in all of the earlier technologies I just mentioned) is how technologies are heavily influenced by commercial and economic motives rather than by the needs and benefits of society and our environment as a whole.

For example, I recently attended our annual human-computer interaction conference, where Kate Crawford gave a keynote speech on AI and human values. One of the fascinating (and frustrating) examples that came up was how current AI tools are trained by scouring metadata from online images. A picture of a person eating salad, for example, might have a series of hidden words (metadata) that describe the meaning and purpose of the image. However, these metadata tags were primarily created to optimize today’s search algorithms for commercial purposes. As a result, we are now training today’s AI tools based on the logic and categorization schemes that are used to sell us things. So, my fear is not that we cannot adapt, but that we won’t be aware of *why* we are adapting and for what purpose.
