Children are in the crosshairs of artificial intelligence. Who will we blame?
Guest Commentary written by
Sasha Costanza-Chock
Sasha Costanza-Chock is the author of “Design Justice: Community-Led Practices to Build the Worlds We Need” and a faculty associate at the Berkman Klein Center for Internet & Society at Harvard Law School.
I recently watched the documentary The AI Doc: Or How I Became an Apocaloptimist, which follows real-life director and father-to-be Daniel Roher as he freaks out about whether it’s a good idea to have a child in the age of artificial intelligence.
He interviews AI doomers — who think AI will kill us all — then AI utopians — who think it will save us all. Finally, he seeks answers from CEOs of leading Bay Area AI companies: Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario and Daniela Amodei of Anthropic.
In one of the film’s most intense moments, Center for Humane Technology co-founder Tristan Harris tells Roher: “I know people who work on AI risk who don’t expect their children to make it to high school.”
This gut-punch of a line illustrates a core problem: the families of elementary school students recently killed in Iran already know their children won’t make it to high school. The worst things AI doomers can imagine have already happened there.
On the opening day of the US-Israeli war on Iran, multiple Tomahawk cruise missiles struck the Shajareh Tayyebeh elementary school in Iran. At least 168 people were killed, the majority of them children, according to Amnesty International.
US Central Command has used Palantir’s Maven Smart System for target identification throughout the campaign. Anthropic’s flagship AI, Claude, is integrated with Palantir’s systems and was used in Iran, as well as in the illegal US military operation to capture Venezuelan President Nicolás Maduro.
Anthropic isn’t the only AI firm with Pentagon ties. In July 2025, the Defense Department awarded contracts of up to $200 million each to Anthropic, OpenAI, Google, and xAI.
Roher puts the CEOs of these firms on camera, yet he never asks them about their military contracts or any of the well-documented harms of their AI systems. To be fair, the film was completed before the war in Iran began.
But these companies building AI ‘kill chains’ are also causing harm to people in California.
A campaign by a group called Purge Palantir highlights how Palantir powers the ICE deportation machine targeting immigrant communities across the state and country. And massive new data centers are draining California’s water and straining its energy grid during a climate crisis.
AI tenant screening algorithms are driving up rents and pushing people out of their homes in California.
None of this appears in the film. The AI doomers, utopians and CEOs never mention existing AI harms. Consequently, the film ignores the myriad ways communities are already holding AI companies accountable. Resistance to harmful AI systems is real and growing in California.
The Stop LAPD Spying Coalition has led the fight against predictive policing and won, forcing the Los Angeles Police Department to shut down both Operation LASER, built on the Palantir platform, and the PredPol program that uses AI to target Black and brown neighborhoods for extreme policing.
The Writers Guild of America went on strike and won groundbreaking protections against the use of AI to replace creative workers.
No Tech for Apartheid, led by Google and Amazon workers in Silicon Valley, has built awareness of tech companies’ military contracts with the Israeli Defense Forces in the mass killing of Palestinian people. And in September 2025, after sustained worker pressure under No Azure for Apartheid, Microsoft blocked Israel from using its cloud and AI services for mass surveillance of Palestinians.
In Monterey Park, residents blocked a massive AI data center and organized to put a permanent ban on the June 2026 ballot.
In the film, Roher wants to know if it’s a good time to bring a child into the world. The mothers of Minab in Iran want to know who will be held accountable for the AI-supported mass murder of their children.
The question is not whether AI will harm ‘our’ children someday. The question is whose children are already being harmed. And will we demand accountability?
AI is already harming our children. Are California lawmakers going to do something?