In summary

A recent U.S. Supreme Court ruling involving free speech protections and immigration law reaffirmed a longstanding view that could help lawmakers in California and elsewhere attempting to regulate harmful social media algorithms.

Guest Commentary written by

Karl Manheim

Karl Manheim is an emeritus professor at Loyola Marymount University in Los Angeles. He has taught artificial intelligence law since 2018.

Jeffery Atik

Jeffery Atik is a law professor at Loyola Marymount University in Los Angeles. He has taught artificial intelligence law since 2018.

Proponents of social media regulation in California and elsewhere recently received important support from an unlikely place: a U.S. Supreme Court decision on immigration law.  

United States v. Hansen involved the prosecution of a fraudster who tricked migrants into believing that he could arrange legal residency for them in the United States. The court rebuffed the assertion that applying an immigration statute criminalizing the encouragement of migrants to illegally enter or reside in the U.S. would unconstitutionally abridge freedom of speech.

By doing so, the court reaffirmed the longstanding view that speech encouraging the commission of a crime is not protected speech. This means the decision provides further authority for ongoing legislative efforts to regulate social media platforms, even as companies attempt to avoid accountability for harms they cause by invoking the First Amendment.

Courts and state legislatures across the country are currently reviewing the liability of social media giants such as Meta, YouTube and TikTok for the damage they have caused to children and other users. These efforts have focused on the use of artificial intelligence-driven recommendations that concentrate and target vulnerable groups with harmful content in the pursuit of profits.

One law proposed in New Jersey would prohibit social media platforms from using certain practices or features that cause children to become addicted to the platform, with penalties up to $250,000. A California bill would go further, prohibiting the use of any “design, algorithm, or feature” that causes a child to hurt themself or others, or become addicted to the platform. Congress is also considering taking action, for what it’s worth.

Social media companies have mounted a vigorous campaign to convince state legislatures, Congress and the courts that this type of regulation violates the Constitution. Undoubtedly much of what appears on social media is protected by the First Amendment. But the platforms advance a vision of the First Amendment that would provide them with almost absolute immunity.

They deserve no such treatment. As we have argued before, the outputs of AI-driven algorithms do not constitute protected speech and can properly be regulated. Certain algorithmic tools used by social media platforms are benign and often socially beneficial, such as AI-based moderation engines that identify and remove hateful and otherwise offensive content.

But the outputs of AI-based recommendation engines that select and push masses of targeted content can lead to addiction and other harmful effects. The urge to regulate this aspect is justified – and the First Amendment does not limit government’s protective powers.

There is no human speaker behind the output of an AI algorithm. Nor is there any intent to communicate (AI has no intent). A recommendation algorithm mostly reflects a user's personal data back at them. And even if we look past the algorithm to the platform's own intent, that intent is not to communicate. Platforms seek to maximize ad-generated revenue through “user engagement.”

The Hansen case is admittedly different from the usual context of social media regulation. Yet the Supreme Court restated long-established First Amendment doctrine that counters the absolutist view asserted by the social media giants.

The mere use of human language does not by itself make something protected speech. The ruling upheld government control of speech that encourages or induces illegal acts – and AI-driven recommendations can likewise encourage or induce users to become addicted to content that cumulatively inflicts harm.

If social media enjoys absolute First Amendment protection, as the platforms seemingly claim, the next step is for AI to have such rights as well. In that case, all bets are off.
