
By Colin Lecher
WHAT THE BILLS WOULD DO
Senate Bill 243 by Democratic Sens. Josh Becker of Menlo Park and Steve Padilla of Chula Vista would require tech companies that create artificial intelligence chatbots, like ChatGPT, to clearly inform users that the bot is not human, and to create a protocol for responding to users who express suicidal ideation. The bill would also task companies with instituting “reasonable measures” to prevent minors from interacting with sexually explicit material while using the bots.
Assembly Bill 1064 by San Ramon Democratic Assemblymember Rebecca Bauer-Kahan, the Leading Ethical AI Development (LEAD) for Kids Act, would outlaw the use of companion chatbots for children entirely unless the bot “is not foreseeably capable” of harming a child, such as by encouraging self-harm or drug use.
WHO SUPPORTS THEM
Initially, several tech watchdog and child safety groups, including the American Academy of Pediatrics, supported SB 243. But many withdrew their support, saying last-minute changes hopelessly watered it down. Some moved their support to the broader AB 1064. Groups including Common Sense Media and Tech Oversight California are now supporting that legislation, while the Computer and Communications Industry Association (CCIA) trade group says SB 243 would at least not create “an overbroad ban on AI products” like AB 1064.
WHO OPPOSES THEM
The CCIA initially testified against SB 243, arguing it was overly broad and would stifle innovation. It altered its position after the last-minute changes. On the other side of the debate, while groups like Common Sense Media supported earlier versions of SB 243, they have since dropped that support to back the other measure.
WHY IT MATTERS
The bills come as lawmakers face immense pressure to rein in tech chatbots following several high-profile tragedies. News reports nationwide have shown how AI-powered chatbots have fed users’ delusions, sometimes leading to real-world violence.
Some recent scandals have also drawn attention specifically to children’s relationships with chatbots. In August, the New York Times reported on the case of 16-year-old Adam Raine, who spoke to ChatGPT about suicide before ending his life. Watchdog groups have also accused ChatGPT of giving other harmful information to teens. Meta, Facebook’s parent company, faced criticism recently after leaked internal documents obtained by Reuters showed the company allowed its AI chatbots to have “romantic or sensual” conversations with children.
GOVERNOR’S CALL