We may never know the true number of Facebook users who suffered data breaches as a result of Cambridge Analytica’s antics, or what it all means in terms of personal security. And Facebook CEO Mark Zuckerberg certainly didn’t provide a great deal of insight when he testified before Congress today.
Underlying the brouhaha are a couple of overriding questions: Who’s to blame, and how can it be fixed? And, perhaps: Is Facebook’s time done? Is the breach one of trust as much as of data, and is it so damaging that the social media giant will founder?
Taking the last question first: no—Facebook seems in no imminent danger of dissolution. #DeleteFacebook, the meme and movement that launched shortly after news of the breach hit the internet, apparently is sputtering. Many people have talked of bailing from the platform, but relatively few, it seems, are actually doing it. That’s because Facebook so dominates the social media landscape that it no longer functions as a mere service or communications platform. It’s more of a utility, like power and water.
“A lot of people may want to delete, but they can’t afford to,” says Daniel Griffin, a PhD student at Cal’s School of Information and the co-director of the university’s Center for Technology, Society & Policy. “For some, it’s the basic means for connecting to the internet. Or they may need it to run their business, or it’s the main way they stay in touch with family members. You go to the webpage for the Federal Trade Commission [the federal agency that regulates social media companies, including Facebook], and it has a ‘follow us on Facebook’ link. That kind of says it all.”
Still, the general consensus—including among Facebook executives—is that the breach is intolerable and must be remedied. But Zuckerberg et al. also seem somewhat disingenuous when they express shock, yes shock, that such things could happen. That was brought home last month by the leaked “growth-at-any-cost” memo by Facebook Vice President Andrew Bosworth, in which he opined that “…Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good…”
Further, doing such “de facto good” requires revenue, and a lot of it. Facebook, after all, is a for-profit, publicly traded company, one of the biggest on the planet, not a soup kitchen. And the coin of the realm, says Berkeley Law adjunct professor and Center for Technology, Society & Policy faculty director Chris Hoofnagle, is data. In a recent blog post on l’affaire Facebook, Hoofnagle laid out his thesis in the headline: “Facebook and Google Sell Your Data.”
Hoofnagle explains that Facebook’s business practices can seem abstruse to the average social media user, noting in particular the difficulty of understanding APIs, or application programming interfaces. These tools allow third parties to promote their products and services on social media platforms, tailoring messages to individual users. Such tailoring is accomplished, of course, by accessing the rich trove of data logged in each user’s account. And that’s the rub, says Hoofnagle.
“One lesson emerging from this …is that developers often get more data than web users,” Hoofnagle writes in his blog, “and this is the way that Facebook and Google sell your data. [They] reward developers for working on their platforms by making personal data available—often much more data than are needed for the API, extension, or application functionality. When [social platforms] make your data available to developers, it is a transfer of value. Developers want access to personal data, and Facebook and Google gain the network effects and other benefits from a larger developer network…”
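The over-sharing pattern Hoofnagle describes can be made concrete with a small sketch. This is a hypothetical illustration, not Facebook’s actual Graph API: the field names, functions, and data are invented to show the difference between a permissive API that hands developers the whole profile and a stricter design that scopes access to requested fields.

```python
# Hypothetical sketch of API over-sharing; field names and API shapes
# are illustrative, not Facebook's real interfaces.

FULL_PROFILE = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "birthday": "1990-01-01",
    "likes": ["hiking", "cooking"],
    "friend_ids": [101, 102, 103],
}

def permissive_api(user_id):
    """Returns the entire profile, regardless of what the app needs."""
    return dict(FULL_PROFILE)

def scoped_api(user_id, fields):
    """A stricter design: the app must name each field it requests."""
    return {f: FULL_PROFILE[f] for f in fields if f in FULL_PROFILE}

# A quiz app that only needs a display name still receives everything
# from the permissive design, including the user's friends:
over_shared = permissive_api(42)
minimal = scoped_api(42, fields=["name"])
print(sorted(over_shared))  # five fields, friend_ids among them
print(sorted(minimal))      # just the one field the app asked for
```

Under the permissive design, every installed app becomes a potential exfiltration point for data it never needed, which is the dynamic the Cambridge Analytica episode exposed.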
Sometimes developers use that data to interest you in copper-bottomed pans or trips to Fiji, but as the Cambridge Analytica scandal demonstrates, sometimes they use it to mobilize portions of the electorate with fake news and inflammatory memes.
Facebook has announced some mitigative changes to its platform since news of the breach broke, including requiring political advertisers to verify their identities, allowing users to view in-depth information on political advertisers, and letting users see all the active and archived ads supported by any account.
But the main issue, says Jennifer King, a PhD student at the School of Information specializing in privacy in online social networks, isn’t that Facebook’s technology is hampered by bugs and unintended consequences. Rather, the problem is the inverse: the platform is functioning precisely as designed. To a very real degree, says King, Facebook’s business plan is predicated on third party developers having free and easy access to heaps of user data.
“I was concerned about something like this [the Cambridge Analytica scandal] happening as far back as 2010, and I wasn’t alone,” says King. “But I always figured the information would be used [for advertising or marketing], not weaponized for political manipulation. My stomach hit the floor when I first started reading stories about Cambridge Analytica.”
And it’s not just Facebook, says King. It’s the whole API-centric network. No matter the platform, the system is geared to encourage the propagation and use of apps, the more the merrier. That’s because speed and volume are essential to any platform’s success. Indeed, speed in building a user base remains the paramount goal of any social media company, CEO bromides about concerns over user privacy and security notwithstanding. The industry was just born that way.
“In order to build platforms as quickly as possible, these companies felt they needed to allow the maximum amount of data to be shared as quickly as possible with as few safeguards as possible,” says King. “What really worries me at this point is that this isn’t just about Cambridge Analytica. I’m not trying to fear monger, but there could be hundreds, thousands of other nasty actors out there doing the same thing. Some could end up being worse than Cambridge Analytica. We just don’t know. Social media companies keep logs on their data, so they can usually identify the biggest [aggregators of user information] on their platform, but they don’t have reliable ways of figuring out what’s being done with that information.”
There is a potential “resource- and time-intensive fix,” says King: clamp down on API developers.
“Just allow them access to a small pool of data,” King says. “Or sell access but hold on to all the data. [Social media companies] could have a deep history on you and your searches, but they wouldn’t allow third party developers to access it.”
The platforms could also ratchet up scrutiny of third party developers, says King.
“They could require developers to undergo background and credit checks before they’re allowed to submit an app, and then police them very closely,” says King.
Given those recommendations, Facebook’s recent moves to improve security seem less than aggressive—or impressive. So is the time ripe for an alt-Facebook? Could some neo-Zuckerberg launch a competing platform that could do what #DeleteFacebook couldn’t: siphon away users?
King observes there have been such attempts, including by Path, a social media company that initially restricted users to a network of 50 friends as a means of assuring privacy and data security. That limit was eventually raised to 150 friends, and then eliminated altogether. But the company hasn’t been free of its own privacy controversies; in 2012, it was pilloried for storing phone data without the knowledge of platform users, and in 2013 it paid an $800,000 fine to the FTC for storing data from underage members. Perhaps more to the point, the site has achieved little in the way of market penetration; it has never come close to challenging Facebook’s primacy.
“Facebook was a slate cleaner,” says King.
Moreover, says Griffin, newcomers could make the problem worse, not better. Because they would be smaller and hungrier, they wouldn’t have the resources and culture that would allow them to emphasize privacy above, say, quarterly revenue targets.
“Facebook has all these users now, so they’re not as worried about growing as they once were,” Griffin says, “but growing is what younger companies must be worried about. Big companies are potentially better at protecting data because they’re usually more capable technically, and they aren’t as loose as smaller competitors.”
As an academician, Griffin says, he’s personally worried about the unintended impacts of Facebook’s responses to its data woes.
“There are concerns that these calls for blocking access to Facebook’s data will affect legitimate, important research,” he says. That includes research of immediate relevance, such as the degree to which voters are influenced by social media manipulation.
“There’s been this tension on Twitter, for example, about how much of what Cambridge Analytica did was real and how much was pseudo-science,” Griffin says. “How much influence did they really have? Did they just change conversations, or did they push the needle enough to change votes? Groups like Data and Society are doing great work on media manipulation, but you need broad access to data for that kind of research.”
Megan Graham, a teaching fellow at Berkeley’s Samuelson Law, Technology & Public Policy Clinic, says Facebook has modified its policies at various times due to privacy concerns, an indication that it is fairly responsive to public pressure. Indeed, users can already protect their privacy on the platform to a fair degree, but that requires some expertise in negotiating complicated menus full of options and settings.
“One thing Facebook has talked about is centralizing many of these privacy settings, providing an interface that’s easily accessed from the homepage,” Graham says. “That seems like a good move, if not necessarily a robust one. For me, the biggest takeaway is that the more you learn about the ways companies use data, the more proactive you can become about protecting your own information. That doesn’t remove the onus for the companies, but it does make you a more effective advocate.”
Of course, the problem with Facebook might be solved by upending its revenue model. Instead of making money by providing access to user information, the company could charge subscriptions for service, augmented by advertising unmoored from data mining. But inertia militates against such a fix.
“I think people are generally aware of the trade they’re making,” King says. “They understand that when a social service is free, they are the product. Some people say they’d be willing to pay, but many people absolutely won’t pay, and for most people, one of the first questions remains, ‘Is it free?’ It’s possible we could build a better social network, but it’s unlikely that it will happen at the scale represented by Facebook’s aspirations. And Facebook’s aspirations aren’t necessarily in our best interests. I study how people communicate on social media, and frankly, it makes me pessimistic. It’s hard to keep things from going poorly online.”
Posted on April 10, 2018 - 3:06pm