In summary
ChatGPT maker OpenAI and Common Sense Media had rival ballot initiatives designed to protect kids from chatbots. Today they merged their efforts.
Kids' safety advocate Common Sense Media and ChatGPT maker OpenAI joined forces today to advance a ballot measure that would amend the California Constitution to protect kids from companion chatbots online.
The two previously planned to place competing initiatives before voters, each stipulating that the one that got the most “yes” votes would win. OpenAI’s proposal largely reflected existing law, while the Common Sense measure included new bans on what AI systems children could access.
The merged measure is known as the Parents & Kids Safe AI Act. It would, among other things:
- Require chatbot developers to use technology to estimate a user’s age range and to apply filters and protective settings for users predicted to be under 18
- Require AI systems to undergo independent audits to identify child safety risks and report them to the California attorney general
- Ban child-targeted advertising and the sale or sharing of kids’ data without a parent’s consent
- Stop manipulation through emotional dependency by preventing AI systems from promoting isolation from family or friends, simulating romantic relationships with kids, or claiming that they’re sentient
A Common Sense spokesperson said the measure was filed Thursday afternoon. It’s not yet visible on the attorney general’s website, but you can read a copy obtained by CalMatters here. As described in a press release, the combined measure drops two provisions of Common Sense Media’s original initiative: a ban on student smartphones in K-12 California schools and a prohibition on minors using chatbots capable of engaging in erotic or sexually explicit talk.
The initiative must receive 546,651 signatures to come before voters in November. California Secretary of State Shirley Weber has until June 25 to determine whether it reaches that threshold and qualifies for the ballot.
Common Sense put forward its original ballot initiative, the California Kids AI Safety Act, last fall, not long after Gov. Gavin Newsom vetoed a bill the nonprofit had authored that contained similar provisions.
In response, OpenAI put forward a competing ballot measure in December 2025 that mirrors a bill Newsom signed into law last October, which requires companion chatbot providers to enact a suicidal ideation protocol and to inform people every three hours that they’re speaking with AI. Critics called that move manipulative and designed to thwart stronger protections for kids.
Common Sense Media research has found that seven in 10 teens have used companion chatbots and that the tech is too dangerous to be used by minors. In promoting its original ballot initiative, the group warned that without action the tech could lead to more harm and addiction for young people. In one well-publicized case, the parents of California teen Adam Raine sued OpenAI, alleging Raine was coached by OpenAI’s ChatGPT to commit suicide.
OpenAI’s willingness to compromise marks a contrast with how tech companies banded together to get their way in a policy fight in 2020. That year, major gig economy players like DoorDash, Instacart, Lyft, and Uber spent $200 million bankrolling a successful ballot initiative regulating gig work, Proposition 22. It effectively exempted them from a state law that would have required the companies to provide full employment benefits to their drivers.
Sen. Steve Padilla, the Chula Vista Democrat who carried the chatbot bill signed by Newsom, called the merged ballot measure a significant breakthrough. But he added that he thinks the matter should be handled by lawmakers and the governor instead of directly by voters. Since the ballot initiative would amend the state constitution, Padilla said it “would create an unnecessarily high-bar to revise and update that law in the future. Moreover, legislative hearings will provide the broader public an opportunity to comment and provide input on this important issue.”
In recent weeks, Padilla has proposed a bill that would impose a four-year moratorium on the sale of toys with companion chatbots inside them. OpenAI has signed a partnership with Barbie-maker Mattel, though the partnership has yet to produce any products.
OpenAI’s fight at the California ballot box isn’t limited to kids’ online safety. One proposed ballot initiative would give a state commission the power to slow or stop AI model development if commission members suspect a catastrophic risk of harm to Californians. Two other proposals target corporate conversions from nonprofit to for-profit companies, like the one OpenAI has planned. Those initiatives would compel nonprofits that restructure in that way to dedicate all of their assets to the public benefit of humanity. To that end, they would create a commission with the power to shut down AI models and to host competitions inviting the public to propose ways AI can help humanity. Under one of the initiatives, the commission would also have the power to revoke nonprofit conversions.
OpenAI was founded about a decade ago with a charter stating its purpose was to benefit humanity. Its plans to convert to a public benefit corporation led to heavy criticism from nonprofits and scrutiny by attorneys general in California and Delaware. Both states eventually reached agreements with OpenAI to allow a restructuring after the company agreed to place roughly 25% of its assets into a nonprofit.