Guest Commentary written by

Sasha Costanza-Chock

Sasha Costanza-Chock is the author of “Design Justice: Community-Led Practices to Build the Worlds We Need” and a faculty associate at the Berkman Klein Center for Internet & Society at Harvard Law School.

I recently watched the documentary “The AI Doc: Or How I Became an Apocaloptimist,” which follows real-life director and father-to-be Daniel Roher as he agonizes over whether it’s a good idea to have a child in the age of artificial intelligence.

He interviews AI doomers — who think AI will kill us all — then AI utopians — who think it will save us all. Finally, he seeks answers from CEOs of leading Bay Area AI companies: Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario and Daniela Amodei of Anthropic.

In one of the film’s most intense moments, Center for Humane Technology co-founder Tristan Harris tells Roher: “I know people who work on AI risk who don’t expect their children to make it to high school.”

This gut-punch of a line illustrates a core problem: the families of elementary school students recently killed in Iran already know their children won’t make it to high school. The worst things AI doomers can imagine have already happened there.

On the opening day of the US-Israeli war on Iran, multiple Tomahawk cruise missiles struck the Shajareh Tayyebeh elementary school in Iran. At least 168 people were killed, the majority of them children, according to Amnesty International. 

US Central Command has used Palantir’s Maven Smart System for target identification throughout the campaign. Anthropic’s flagship AI, Claude, is integrated with Palantir’s systems and was used in Iran, as well as in the illegal US military operation to capture Venezuelan President Nicolás Maduro.

Anthropic isn’t the only AI firm involved with the Pentagon. In July 2025, the Pentagon awarded contracts of up to $200 million each to Anthropic, OpenAI, Google, and xAI. 

Roher puts the CEOs of these firms on camera, yet he never asks them about their military contracts or any of the well-documented harms of their AI systems. To be fair, the film was completed before the war in Iran began. 

But these companies building AI ‘kill chains’ are also causing harm to people in California. 

A campaign by a group called Purge Palantir highlights how Palantir powers the ICE deportation machine targeting immigrant communities across the state and country. And massive new data centers are draining California’s water and straining its energy grid during a climate crisis.

AI tenant screening algorithms are driving up rents and pushing people out of their homes in California. 

None of this appears in the film. The AI doomers, utopians, and CEOs never mention existing AI harms. Consequently, the film ignores the myriad ways communities are already holding AI companies accountable. Resistance to harmful AI systems is real and growing in California.

The Stop LAPD Spying Coalition has led the fight against predictive policing and won, forcing the Los Angeles Police Department to shut down both Operation LASER, built on the Palantir platform, and the PredPol program, which used AI to target Black and brown neighborhoods for extreme policing.

The Writers Guild of America went on strike and won groundbreaking protections against the use of AI to replace creative workers. 

No Tech for Apartheid, led by Google and Amazon workers in Silicon Valley, has built awareness of tech companies’ military contracts with the Israel Defense Forces and their role in the mass killing of Palestinian people. And in September 2025, after sustained worker pressure organized as No Azure for Apartheid, Microsoft blocked Israel from using its cloud and AI services for mass surveillance of Palestinians.

In Monterey Park, residents blocked a massive AI data center and organized to put a permanent ban on the June 2026 ballot.

In the film, Roher wants to know if it’s a good time to bring a child into the world. The mothers of Minab in Iran want to know who will be held accountable for the AI-supported mass murder of their children.

The question is not whether AI will harm ‘our’ children someday. The question is whose children are already being harmed. And will we demand accountability?