California’s AI employment laws look tough, but they leave workers exposed
Guest Commentary written by
Alberto Rocha
Alberto Rocha leads policy development at the Algorithmic Consistency Initiative.
When Gov. Gavin Newsom last year vetoed Senate Bill 7, the “No Robo Bosses Act,” which would have required human review before an algorithm could fire or discipline a California worker, his message was unmistakable: protecting livelihoods from automated decisions would impose too great a “burden” on innovation.
This was not a minor policy disagreement. It was a signal that Sacramento is willing to let algorithmic systems — many built and controlled by out-of-state tech giants — make life-altering decisions about Californians without meaningful guardrails.
Over the past two years, California lawmakers have processed more than 30 AI-related bills, earning headlines about the state’s leadership on safety, transparency and consumer protection. Yet the laws that survived lobbying pressure and gubernatorial vetoes share a common flaw: they rely almost entirely on delayed paperwork — training data summaries, incident reports and audits that arrive long after the damage is done.
When an algorithm quietly denies someone a job, demotes them or ends their employment, the harm is immediate and personal. Waiting months or years for a redacted transparency report does nothing to prevent that harm, or to hold anyone accountable when it occurs.
This is not hypothetical. Major AI hiring platforms already influence decisions at Fortune 500 companies with California operations. Lawsuits filed in 2025 and early 2026 allege some of these systems generate opaque scores that exclude older workers or perpetuate racial bias — yet the underlying logic of those decisions remains hidden from the affected individuals and from regulators.
Last year, Big Tech companies spent more than $4.6 million lobbying in California. The result: most of the strongest protections in technology bills were either watered down or pushed to distant effective dates, some not until 2030. By then, the patterns of algorithmic decision-making will be deeply embedded in the state’s economy.
We don’t need more deferred disclosure. We need architectural authority — engineering constraints that make discriminatory or arbitrary outcomes impossible at the moment of decision.
One credible path forward comes from the Luevano Standard, a framework that modernizes the lessons of the landmark Luevano v. Campbell consent decree, a court decision that ended discriminatory federal hiring tests in the 1980s.
The standard requires that algorithmic employment decisions be predictable and tied to job-relevant criteria, rather than hidden statistical correlations. It also mandates runtime enforcement, meaning legal and ethical rules are checked continuously by the system itself, to block unlawful actions before they happen.
Finally, the standard calls for forensic auditability, so every decision produces a clear, technical record of how it was reached to make accountability possible without reverse-engineering proprietary models.
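To make these two ideas concrete, here is a minimal sketch of what runtime enforcement and forensic auditability could look like inside a decision system. Every name, rule and weight below is hypothetical, chosen only for illustration; none comes from the Luevano Standard itself or from any proposed statute.

```python
import json
import time

# Hypothetical policy, enforced at the moment of decision:
# protected attributes may never influence the outcome, and only
# job-relevant criteria may be used as inputs.
PROTECTED_ATTRIBUTES = {"age", "race", "gender", "zip_code"}
JOB_RELEVANT_CRITERIA = {"certifications", "attendance_rate", "skill_score"}


def decide_employment_action(candidate: dict) -> dict:
    """Return a decision plus a forensic audit record explaining it."""
    used = set(candidate)

    # Runtime enforcement: the system refuses to act, rather than
    # reporting a violation months later, if any protected or
    # non-job-relevant input would shape the outcome.
    blocked = used & PROTECTED_ATTRIBUTES
    if blocked:
        raise ValueError(f"Blocked: protected inputs {sorted(blocked)}")
    irrelevant = used - JOB_RELEVANT_CRITERIA
    if irrelevant:
        raise ValueError(f"Blocked: non-job-relevant inputs {sorted(irrelevant)}")

    # A deliberately simple, predictable scoring rule tied to
    # job-relevant criteria, not hidden statistical correlations.
    score = (candidate["skill_score"] * 0.6
             + candidate["attendance_rate"] * 0.3
             + candidate["certifications"] * 0.1)
    outcome = "advance" if score >= 0.5 else "do_not_advance"

    # Forensic auditability: every decision emits a record of its
    # inputs, rule and result that a regulator or worker can review
    # without reverse-engineering a proprietary model.
    audit_record = {
        "timestamp": time.time(),
        "inputs": candidate,
        "rule": "0.6*skill + 0.3*attendance + 0.1*certs >= 0.5",
        "score": score,
        "outcome": outcome,
    }
    return {"outcome": outcome, "audit": json.dumps(audit_record)}
```

A real system would be far more complex, but the structure is the point: the check happens before the action, and the explanation is produced with the decision, not reconstructed after a lawsuit.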
This is not anti-innovation. It is the opposite. Verifiable constraints would create a safe harbor for responsible companies and protect Californians from unchallengeable, black-box judgments.
The proposed California Algorithmic Accountability & Fairness Act — as detailed in the Luevano Standard — could make these requirements mandatory for high-stakes systems used in employment, credit, housing and insurance. Without that kind of structural change, Sacramento’s current approach risks becoming a hollow victory: lots of press releases, very little protection.
Californians deserve more than symbolic legislation. When an algorithm can end a career in a millisecond and the state’s response is to wait five years for a report, the conclusion is hard to avoid: some people’s livelihoods matter less than some companies’ convenience.
It’s time for lawmakers and the governor to move beyond promises of future transparency. Workers, families and communities are being judged by machines right now. They need real safeguards today — not in 2030.