This is extracted for classroom use from the CalMatters explainer on voting issues.
Social media and technology platforms are hotspots for spreading false claims, yet no government regulations dictate how they must address them. Building on what they did in 2020, Facebook, YouTube, TikTok and Twitter are all boosting more reliable information about candidates and voting in their users’ feeds. They’re also removing false claims, labeling falsehoods and, in some cases, discouraging users from spreading them.
Going into the 2022 election, none of the top platforms completely bans election misinformation or has a distinct disinformation policy, according to a study from the Anti-Defamation League. The efforts also lack transparency: None of the platforms allows third parties to fully audit its efforts.
Starting with the 2020 election, both Twitter and TikTok have blocked political ads in the final days of campaigning. Facebook will ban new ones during the week before Election Day. Google has also previewed new features that will elevate reliable sources in election-related searches.
In August, Twitter renewed its civic integrity policy, allowing the platform to label and restrict tweets containing misleading election information. Also in August, TikTok, whose user base has grown from 700 million to 1.5 billion since the second quarter of 2020, launched an elections center that connects users to reputable election information.
But protecting voters at this scale comes with challenges, experts say. Fact-checking, for example, can be time-intensive. And when platforms remove unreliable content, users can grow suspicious and seek out unmonitored information elsewhere. Labeling such content, on the other hand, can cause it to spread even further, a New York University study found.
Some platforms also make exceptions for prominent people who spread misinformation. Facebook grants newsworthiness exemptions, while Twitter gives public interest exemptions.