Facebook and Google Need to Own Their Role in Spreading Misinformation -- And Fix It

CNN/Stylemagazine.com Newswire | 10/9/2017, 7:39 a.m.

By Jonathon Morgan

(CNN) -- Every day we learn more about the scale and effectiveness of Russian influence operations during the 2016 US election. The implications for our political process are ominous, but the problem runs much deeper. For years, hostile foreign governments, terrorist groups, and armies of internet trolls have manipulated public discussion, both in the United States and around the world.

We must understand Russia's manipulative attack during last year's presidential election in a broader context: as a relatively traditional propaganda operation that achieved unprecedented reach because social media companies are either unable to defend themselves or choose not to.

Every time our society falls victim to an information attack, we blame the attacker but not the weapons. It's hard to accept that the platforms we use to chat with friends and share baby photos can play such a powerful role in shaping democracy. Platforms like Google, Facebook, and Twitter have fundamental flaws that must be fixed if we're going to protect ourselves in the future.

At its peak in 2013, ISIS mobilized tens of thousands of social media accounts to spread its message and radicalize Americans. The group created horrific but compelling content, and used its social bot armies to force that content into the mainstream. It manipulated the underlying mechanics of social media, tricking platforms like Twitter, Tumblr, and YouTube into amplifying its message to the widest possible audience.

ISIS lit a fuse with its message of hate and violence, but these social media platforms powered the explosion, making the message simple to share and letting it spread nearly instantaneously. Years later, tech companies have finally acknowledged the severity of the problem, establishing the Global Internet Forum to Counter Terrorism in June 2017.

The terrorist group's years-long, highly visible manipulation of our social platforms was a clear warning that hostile actors could use our online gathering places to target Americans. The warning was ignored, and in 2016 we were attacked again. In October 2016, Obama administration intelligence officials formally accused Russia of meddling in US elections. By March 2017, New Knowledge, along with independent researchers from our volunteer collective, Data for Democracy, had discovered 30,000 accounts that exhibited bot-like behavior in posting on the Trump campaign's Facebook page alone. These fake accounts, run by software programs, are designed to imitate real users and manufacture popular support by repeatedly posting messages to social media.
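As a rough illustration of what "bot-like behavior" means in practice, here is a minimal Python sketch assuming two crude signals: posting frequency and verbatim repetition. The thresholds and the looks_bot_like helper are hypothetical, invented for this example; they are not the method New Knowledge or Data for Democracy actually used.

```python
from collections import Counter

def looks_bot_like(posts, hours_observed):
    """Flag an account as bot-like using two crude, illustrative signals:
    inhuman posting frequency and near-verbatim repetition.
    `posts` is a list of message strings from one account."""
    if not posts:
        return False
    # Signal 1: posting far faster than a typical human (threshold is a guess).
    posts_per_hour = len(posts) / max(hours_observed, 1)
    # Signal 2: the same message posted over and over.
    repetition_ratio = Counter(posts).most_common(1)[0][1] / len(posts)
    return posts_per_hour > 10 or repetition_ratio > 0.5

# An account that posted one slogan 400 times in a day trips both signals.
print(looks_bot_like(["Vote now!"] * 400, hours_observed=24))  # True
```

Real detection work combines many more signals, but even heuristics this simple surface accounts that no human posting pattern resembles.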

Yet only now, 11 months after the 2016 election, are we beginning to realize the cost. We have recently learned that millions of Americans were served propaganda by a foreign adversary, disguised as news stories and as persuasive comments by fake neighbors. The tech companies that enabled this continue to accept only limited responsibility, reframing the problem as an issue of free speech rather than admitting that their users were fooled and manipulated.

Yet again, as the country reeled from the deadliest mass shooting in US history, Google and Facebook promoted fake news about the attacks. 4chan, an anonymous message board infamous for its racist and misogynist content, falsely accused an innocent man of being the Las Vegas shooter.

Despite 4chan's reputation as a hotbed of offensive memes and wild conspiracy theories, Google promoted the content in its Top Stories search results and Facebook promoted 4chan messages on its crisis response page -- which was supposed to be a resource for families checking to see if their loved ones were injured or killed in the attack. Both companies released statements expressing regret for the mistakes and ultimately corrected the issues.

These information attacks continue because manipulating these platforms is cheap and easy. Media platforms like Google, Twitter, and Facebook rely on trust. If enough people trust that a piece of information is valuable, it is promoted to more users. It doesn't matter where the information was published, who authored it, or what it contains.

At one time, this kind of democratic sharing of ideas was the great promise of the internet, where information flowed freely, unconstrained by traditional gatekeepers like government or media companies.

However, the internet is not a democracy. Information is controlled by a cartel of media companies that are motivated not to help their users share information, but to profit from their attention. Google, Facebook, and Twitter know that their businesses profit by deeply understanding their users, and the type of content that captures each individual user's attention.

Their algorithms learn what information attracts attention, then promote that content to exactly the users who are most likely to consume and share it, which in turn gives the content more attention and generates more profit. As a society, we communicate with each other using products that aren't designed to help us communicate -- they're designed to keep us watching.
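A toy simulation makes this feedback loop concrete. The scoring rule, numbers, and run_feed function below are illustrative assumptions, not any platform's real algorithm; the point is only that ranking by raw engagement compounds whatever grabs clicks, true or false.

```python
# A toy model of the attention feedback loop: items that get engagement are
# shown more, which earns them still more engagement. Veracity never enters.

def run_feed(items, rounds=3):
    for _ in range(rounds):
        # Rank purely by accumulated engagement.
        items.sort(key=lambda item: item["engagement"], reverse=True)
        for rank, item in enumerate(items):
            # Higher-ranked items are seen by more users and gain more clicks.
            impressions = 1000 // (rank + 1)
            item["engagement"] += impressions * item["click_rate"]
    return items

feed = [
    {"title": "Careful, accurate report", "click_rate": 0.02, "engagement": 10},
    {"title": "Outrage-bait hoax",        "click_rate": 0.10, "engagement": 10},
]
for item in run_feed(feed):
    print(item["title"], round(item["engagement"]))
```

After a few rounds the hoax dominates the feed, even though both items started with identical engagement: its higher click rate compounds at every step.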

The flaws in this system are baked into its design. These flaws have been repeatedly exploited, and we're still vulnerable. We're vulnerable to content that grabs our attention, regardless of its veracity or substance. We're vulnerable to targeting systems that understand our preferences, our beliefs, and our politics, and can show attention-grabbing, influential content to exactly the people most likely to be manipulated by it. And we're vulnerable to automation -- fake accounts that invent, share, and amplify information to fool tech company algorithms into believing that information will be valuable in capturing their users' attention.

Addressing these complex problems will require significant investment from tech companies, who need to rethink how information is shared and promoted on their platforms. They need new models for promoting content that assess the quality, not just quantity, of user engagement.
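What "quality, not just quantity" might mean is easier to see in a hedged sketch. The field names, weights, and quality_score function below are invented for illustration, assuming hypothetical signals such as average read time and the age of the accounts doing the sharing; this is not a real ranking API.

```python
# One hypothetical way to score quality of engagement: discount raw clicks
# and reward signals that are harder for bots to fake, such as time actually
# spent reading and the age of the accounts sharing the post.

def quality_score(post):
    raw = post["clicks"] + post["shares"]
    # Fraction of readers who stayed well past a glance (deep engagement).
    depth = post["read_seconds_avg"] / (post["read_seconds_avg"] + 30)
    # Established accounts weigh more than freshly created bot accounts.
    trust = min(post["sharer_account_age_days_median"] / 365, 1.0)
    return raw * depth * trust

viral_bot_post = {"clicks": 50000, "shares": 20000,
                  "read_seconds_avg": 3,
                  "sharer_account_age_days_median": 14}
solid_report = {"clicks": 8000, "shares": 1000,
                "read_seconds_avg": 90,
                "sharer_account_age_days_median": 900}

print(round(quality_score(viral_bot_post)))  # ~244: heavily discounted
print(round(quality_score(solid_report)))    # 6750: wins despite fewer clicks
```

Under a rule like this, a post with enormous raw numbers but bot-aged sharers and three-second reads loses to a less viral piece that people actually read.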

In the short term, they need to follow through on promises to flag false or misleading information, and commit to labeling automated accounts so users can distinguish between what real humans believe and what bots want us to believe.

Most importantly, tech companies need to accept that they don't operate in a vacuum -- Facebook and Google have revolutionized how we communicate as a society. These platforms are where Americans engage in public discourse, and are therefore fundamental to our modern democracy. Every tech worker and executive has an ethical responsibility to protect our democratic ideals.

It's also time for a sober conversation about government regulation. As a society, we need to decide whether a small number of giant tech companies should be restricted in mining the data they collect about every aspect of our lives. We need to decide if we trust these companies to control the information we consume, even when that gives them the power to shape our elections.

Together, the tech industry and government need to deal with the structural problems in our media platforms. Otherwise, we can expect to be victims again.