Facebook and TikTok are approving ads with 'blatant' misinformation about voting in midterms, researchers say
Originally Published: 21 OCT 22 08:29 ET
Updated: 21 OCT 22 13:51 ET
By Jennifer Korn
(CNN) -- Facebook and TikTok failed to block advertisements with "blatant" misinformation about when and how to vote in the US midterms, as well as about the integrity of the voting process, according to a new report from human rights watchdog Global Witness and the Cybersecurity for Democracy Team (C4D) at New York University.
In an experiment, the researchers submitted 20 ads with inaccurate claims to Facebook, TikTok and YouTube. The ads were targeted to battleground states such as Arizona and Georgia. YouTube detected and rejected every test submission and suspended the channel used to post them, but the other two platforms fared noticeably worse, according to the report.
TikTok approved 90% of the ads that contained blatantly false or misleading information, the researchers found. Facebook, meanwhile, approved a "significant number," according to the report, though noticeably fewer than TikTok.
The ads, submitted in both English and Spanish, included false statements that voting days would be extended and that social media accounts could double as a means of voter verification. The ads also contained claims designed to discourage turnout, such as assertions that election results could be hacked or that the outcome was pre-decided.
The researchers withdrew any ads that made it through the approval process before they could run, so the ads containing misinformation were never shown to users.
"YouTube's performance in our experiment demonstrates that detecting damaging election disinformation isn't impossible," Laura Edelson, co-director of NYU's C4D team, said in a statement with the report. "But all the platforms we studied should have gotten an 'A' on this assignment. We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters."
In response to the report, a spokesperson for Facebook-parent Meta said the tests "were based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world." The spokesperson added: "Our ads review process has several layers of analysis and detection, both before and after an ad goes live."
A TikTok spokesperson said the platform "is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies."
Google said it has "developed extensive measures to tackle misinformation on our platforms, including false claims about elections and voting procedures." The company added: "We know how important it is to protect our users from this type of abuse -- particularly ahead of major elections like those in the United States and Brazil -- and we continue to invest in and improve our enforcement systems to better detect and remove this content."
While limited in scope, the experiment could renew concerns, with just weeks to go before the midterms, about the steps some of the biggest social platforms are taking to combat not just misinformation about candidates and issues but also seemingly clear-cut misinformation about the voting process itself.
TikTok, whose influence in US politics and the scrutiny it faces have grown in recent election cycles, launched an Elections Center in August to "connect people who engage with election content to authoritative information," including guidance on where and how to vote, and added labels to clearly identify content related to the midterm elections, according to a company blog post.
Last month, TikTok announced additional safeguards for political content ahead of the midterms. The platform began requiring "mandatory verification" for political accounts based in the United States and rolled out a blanket ban on all political fundraising.
"As we have set out before, we want to continue to develop policies that foster and promote a positive environment that brings people together, not divide them," Blake Chandlee, President of Global Business Solutions at TikTok, said in a blog post at the time. "We do that currently by working to keep harmful misinformation off the platform, prohibiting political advertising, and connecting our community with authoritative information about elections."
Meta said in September that its midterm plan would include removing false claims about who can vote and how, as well as calls for violence linked to an election. But Meta stopped short of banning claims of rigged or fraudulent elections, and the company told The Washington Post that such claims will not be removed from content involving the 2020 election. Looking forward, Meta has banned US ads that "call into question the legitimacy of an upcoming or ongoing election," including the midterms, according to company policy.
Google also took steps in September to protect against election misinformation, elevating trustworthy information and displaying it more prominently across services including search and YouTube.
The big social media companies typically rely on a mix of artificial intelligence systems and human moderators to vet the vast volume of posts on their platforms. But even among platforms with similar approaches and objectives, the study is a reminder that content enforcement can differ wildly in practice.
According to the researchers, the only ad they submitted that TikTok rejected claimed that people had to have received a Covid-19 vaccination in order to vote. Facebook, by contrast, accepted that submission.