California shouldn’t buy Big Tech talking point that AI regulation will hurt innovation
Guest Commentary written by Sunny Gandhi
Sunny Gandhi is the vice president of political affairs at Encode Justice.
If you’ve encountered headlines about California’s proposed legislation to establish safety guardrails around artificial intelligence, you might think this is a debate between Big Tech and “slow” government.
You might think this is a debate between those who would protect technological innovation and those who would regulate it away.
Or you might think this is a debate over whether AI development will stay in California or leave.
These arguments could not be more wrong.
Let me be clear: Senate Bill 1047 is about ensuring that the most powerful AI models — those with the potential to cause catastrophic harm — are developed responsibly. We’re talking about AI systems that could potentially create bioweapons, crash critical infrastructure or engineer damage on a societal scale.
These aren’t science fiction scenarios. They’re real possibilities that demand immediate attention.
In fact, the bill has been endorsed by many of the scientists who invented the field decades ago, including Yoshua Bengio and Geoffrey Hinton, the so-called “godfathers of AI.”
Critics, particularly from Silicon Valley, argue that any regulation will drive innovation out of California. This argument is not just misleading — it’s dangerous. The bill only applies to companies spending hundreds of millions on the most advanced AI models. For most startups and researchers, it’s business as usual. They will feel no impact from this bill.
Fearmongering is nothing new. We’ve seen this kind of pushback many times before. But this time, major tech companies like Google and Meta have already made grand promises about AI safety on the global stage. Now that they are finally facing a bill that would codify those verbal commitments, they are showing their hand by lobbying against common sense safety requirements and crying wolf about startups leaving the state.
These exaggerated threats ring hollow: the likelihood of massive success — especially for startups — is much better in Silicon Valley than anywhere else in the country.
Some of the most vehement opposition comes from the “effective accelerationist” wing of Silicon Valley. These tech zealots dream of a world where AI develops unchecked, regardless of the consequences. They list concepts like sustainability, social responsibility and ethics as enemies to be vanquished. They feverishly dream of a world where technology replaces humans, ushering in “the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.”
We’ve seen this kind of polarization play out before, albeit less intensely. Social media companies promised to connect the world, but their unregulated growth led to mental health crises, election interference and the erosion of privacy.
We can’t afford to repeat these mistakes with AI. The stakes are simply too high.
Californians understand this. Recent polling shows that 66% of voters don’t trust tech companies to prioritize AI safety on their own. Nearly 9 in 10 say it’s important for California to develop AI safety regulations, and 82% support the core provisions captured in SB 1047.
The public overwhelmingly supports policies like SB 1047; only the loud voices of Big Tech are attempting to drown out the opinions of most Californians.
As a young person, I often feel mischaracterized as anti-technology, as one of this century's Luddites. I reject that completely. I'm a digital native who sees AI's immense potential to solve global challenges. I am deeply optimistic about the future of technology. But I also understand the need for guardrails.
My generation is the one that will inherit the world shaped by today’s decisions. We deserve a say in how this technology develops.
For lawmakers and ultimately Gov. Gavin Newsom, the choice isn’t between innovation and safety. It’s between a future where AI’s benefits are shared widely and one where its harms fall disproportionately on the shoulders of vulnerable groups and young people like me.
SB 1047 is a step towards the former, a future where California leads not just in technological innovation but in ethical innovation.