AI is already harming our children. Are California lawmakers going to do something?
Guest Commentary written by
Robert Fellmeth
Robert Fellmeth is the Price Professor of Public Interest Law and executive director of the Children’s Advocacy Institute at the University of San Diego School of Law.
Parents beware. The money-lusting billionaires in Silicon Valley who, through social media, have already caused unprecedented child suffering — including depression, eating disorders, suicide, drug-related deaths, invasions of privacy and sex trafficking — have unleashed a new horror.
They are called artificial intelligence companions, led by a service called Character.ai, an AI-driven platform that permits the creation of fictitious character chatbots. These companion chatbots engage in personal and evolving conversations like real people. The chatbots even have profile pictures and bios and can be custom-tailored for their user.
This new technology is already hurting our kids. According to numerous reports, Character.ai’s chatbots sometimes try to persuade children to kill themselves, to avoid consulting doctors, to engage in sexualized conversations, and to embrace anorexia and eating disorders.
In one widely reported instance, a Character.ai chatbot suggested a child should murder his parents because they tried to limit screen time.
Some of the people putting this invention before children in California and beyond are already so rich that their grandchildren’s grandchildren won’t be able to spend it all. I think I speak on behalf of angry and frustrated parents everywhere when I ask the titans of Big Tech, what the hell is wrong with you?
Is money that, at this point, amounts to bragging rights in a parked account so important that you trot out technologies to children without first making sure they are 100% safe to use?
Have you learned nothing from the social media catastrophe?
This is more dangerous than social media algorithms custom-delivering publicly available videos that exploit teens' anxieties to keep them online. Companion chatbots are one-on-one, private conversations that evolve just like real ones. It is AI direct-to-child.
Making this available to children without first ensuring it is safe is not just grossly negligent — it is sociopathic.
Companion chatbots are the satire-shattering example of Mark Zuckerberg’s infamous quote that tech companies should “move fast and break things.”
But our children are not “things” for tech to “break.” Children are our love, our future, our responsibility. We measure our humanity by how we treat them.
If a human engaged in private conversations with scores of children and urged them to hurt themselves, kill their parents, starve themselves or avoid doctors, we would lock that person up.
Where is Washington, D.C.? Sacramento? Are our lawmakers again going to permit an addictive technology that children can access and stand by as another generation gets hurt?
Technology products that children use must be safe before children use them. This could not be more obvious.
This is also obvious: Every elected official has a choice. Stand with Big Tech, or stand with parents and children.
Standing with parents and kids means, first, not being influenced by Character.ai’s promises of self-reform. We have been down this road before with social media. Second, standing with parents and children means saying, “never again.”
It means rejecting the practice of moving fast and breaking children, and cutting off companies' access to kids until they can either prove their products are safe, or until laws hold them financially responsible when they cause harm. There is no "other side" that warrants kids being used in Big Tech's experiments again.
And there is nothing more urgent on any lawmaker's to-do list than protecting our children from technologies that have the power to hurt them.