How social media companies can self-regulate toxic, hate speech
By May Habib, Special to CalMatters
May Habib is the CEO of Writer, a San Francisco-based company, may@writer.com.
Even if you apply First Amendment principles, the 17 companies (and counting) that deplatformed former President Donald Trump did not act unlawfully. But at a time when Big Tech has near-monopolistic influence over what we see and say, the power to censor should not be in the hands of so few.
Big Tech companies must self-regulate, and they are uniquely capable of doing so.
How can they protect free speech while drawing hard lines against hateful rhetoric or calls to violence, which are not protected under the First Amendment?
The answer: Use artificial intelligence to enforce objective policies on users’ hateful, inflammatory, racist or aggressive speech; apply those policies consistently, whether they result in limiting or labeling content; and be transparent about how decisions are made.
I know the first call to action – objectively tagging language that is toxic – is possible, because we do it at Writer, an AI writing assistant for the enterprise. In our case, we’re trying to keep colleagues from being passive-aggressive with each other, but we applied the same principles to a second model trained exclusively on Twitter data.
Defining “toxic” as content that is aggressive, mean, offensive, racist, sexist or hateful (on either side of a debate), we analyzed millions of tweets from Oct. 20, 2020, through Jan. 6, 2021, the date of the insurrection at the U.S. Capitol. On that day, 32% of all tweets were toxic, a 40% jump from the previous average. Trump himself contributed “just” four toxic tweets that day, but by virtue of his 88 million followers, tweets and retweets like “Get smart Republicans. FIGHT!” have outsized impact. Indeed, nearly a third of all toxic tweets in the U.S. on Jan. 6 contained at least one of these four phrases: “stop the steal,” “stolen,” “rigged” or “fight.”
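To make that arithmetic concrete, here is a minimal Python sketch of such a daily tally, assuming tweets have already been scored by a toxicity classifier; the data structure, field names and sample tweets are hypothetical, not the actual analysis pipeline.

```python
# Illustrative sketch only: given tweets already scored by a toxicity
# classifier, compute the daily toxicity rate and the share of toxic tweets
# containing the flagged rally phrases. Field names and samples are made up.

FLAGGED_PHRASES = ("stop the steal", "stolen", "rigged", "fight")

def daily_toxicity_stats(tweets):
    """tweets: list of dicts like {"text": str, "is_toxic": bool}."""
    toxic = [t for t in tweets if t["is_toxic"]]
    flagged = [
        t for t in toxic
        if any(phrase in t["text"].lower() for phrase in FLAGGED_PHRASES)
    ]
    return {
        "toxicity_rate": len(toxic) / len(tweets) if tweets else 0.0,
        "flagged_share_of_toxic": len(flagged) / len(toxic) if toxic else 0.0,
    }

# Tiny example; the article reports roughly 32% of all tweets toxic on Jan. 6,
# with nearly a third of toxic tweets containing at least one flagged phrase.
sample = [
    {"text": "Get smart Republicans. FIGHT!", "is_toxic": True},
    {"text": "Beautiful day for a hike.", "is_toxic": False},
    {"text": "The election was rigged and stolen.", "is_toxic": True},
]
print(daily_toxicity_stats(sample))
```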
It’s clear that Trump’s language on Twitter has been toxic for a long time, and like a deadly virus, his influence spread across the entire platform, increasing its overall toxicity.
The artificial intelligence behind our analysis is not rocket science: human labeling of online content that is itself hateful, inflammatory, racist or aggressive – our definition of “toxic.” Is “fake news” inflammatory? If it’s in reference to a factually inaccurate “stolen election,” it is in our models. Yes, editorial decisions on datasets are required to build robust, trustworthy models, as are frequent updates to include ever-evolving dog whistles on both sides of a debate, whether “MAGA” as shorthand for “delusional” or “illegal alien” to refer to “immigrant.”
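As a rough illustration of how human-labeled data becomes a classifier, here is a toy scikit-learn sketch; the examples, labels and model choice are placeholders, not the model described above.

```python
# Illustrative sketch only: a toy "toxicity" classifier trained on a handful
# of human-labeled examples. A real system needs a large, carefully curated
# and frequently updated dataset, as the paragraph above notes.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled training data: 1 = toxic, 0 = not toxic (made-up examples).
texts = [
    "They rigged it. FIGHT or lose your country!",
    "Stop the steal, these traitors must pay.",
    "Great turnout at the county fair this weekend.",
    "New bill proposes funding for rural broadband.",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Estimated probability that a new post is toxic; human editorial review and
# retraining for newly coined dog whistles would still be required.
print(model.predict_proba(["The election was stolen, fight back"])[:, 1])
```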
A transparent, open, collective effort to create a standard for healthy communication is necessary. It’s also how Big Tech can bring its AI expertise to solve the problem we have on social media. By balancing First Amendment principles with a user’s toxicity and reach – how much influence does a user’s content have on the overall social media platform? – I hope we can make decisions like the one Twitter had to make more transparent and fair.
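As a hedged sketch of what balancing toxicity against reach could look like in practice, the snippet below weights a user’s toxicity rate by the logarithm of their audience and maps the result to escalating, publicly documented actions; the formula, thresholds and tiers are hypothetical, not any platform’s actual policy.

```python
# Illustrative sketch only: combine a user's toxicity with their reach into an
# "impact" score that triggers escalating, documented interventions.

import math

def impact_score(toxicity_rate: float, followers: int) -> float:
    """toxicity_rate: share of the user's posts labeled toxic (0-1),
    weighted by log-scaled reach so huge audiences amplify the score."""
    return toxicity_rate * math.log10(max(followers, 0) + 1)

def moderation_action(score: float) -> str:
    if score >= 4.0:
        return "suspend pending review"
    if score >= 2.0:
        return "limit amplification and label posts"
    if score >= 1.0:
        return "label posts"
    return "no action"

# The same toxicity rate leads to very different actions for an account with
# 88 million followers versus a small one.
for followers in (88_000_000, 300):
    score = impact_score(0.6, followers)
    print(followers, round(score, 2), moderation_action(score))
```

Log scaling is one design choice among many: it keeps a few hundred followers from counting as much as tens of millions while still letting small but relentlessly toxic accounts register.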
Jack Dorsey himself tweeted that banning Trump set a dangerous precedent and was a result of Twitter’s failure to “promote a healthy conversation.” I couldn’t agree more, and it’s possible for the tech community to work together to create a new open standard around the definition of healthy.
Though the boot to Trump is good riddance, an exodus of conservative voices from the mainstream social platforms serves no one. Communities are stronger when more voices are represented. To earn back trust, Big Tech needs to be clear about what it will and won’t tolerate in its terms of service, apply those rules consistently over time and across all people, and be utterly transparent about why and how it arrives at decisions.
We have the technology and the know-how to get there. Do we have the will? There are 88 million reasons that we should.