In summary
Attorney General Rob Bonta said his office is looking into whether a new AI image editing tool from Elon Musk’s company violates California law.
California Attorney General Rob Bonta today announced an investigation into whether and how Elon Musk’s X and xAI broke the law in recent weeks by enabling the spread of nude or sexual imagery without consent.
xAI reportedly updated its Grok artificial intelligence tool last month to allow image editing. Users on the social media platform X, which is integrated with the tool, began using Grok to remove clothing from pictures of women and children.
“The avalanche of reports detailing the non-consensual sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” Bonta said in a written statement. “This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further.”
Bonta urged Californians who want to report depictions of themselves or their children undressed or committing sexual acts to visit oag.ca.gov/report. In an emailed response, xAI did not address questions about the investigation.
Research obtained by Bloomberg found that X users utilizing Grok posted more non-consensual nude or sexual imagery than users of any other website. In a post on X, Musk promised “consequences” for people who made illegal content with the tool. On Friday, Grok limited image editing to paying subscribers.
One potential route for Bonta to prosecute xAI is a law that took effect just two weeks ago and establishes legal liability for the creation and distribution of “deepfake” pornography.
X and xAI appear to be violating the provisions of that law, known as AB 621, said Sam Dordulian, who previously worked in the sex crimes unit of the Los Angeles District Attorney’s Office and now works in private practice representing people in cases involving deepfakes and revenge porn.
Assemblymember Rebecca Bauer-Kahan, author of the law, told CalMatters in a statement last week that she reached out to prosecutors, including the attorney general’s office and the city attorney of San Francisco, to remind them that they can act under the law. What’s happening on X, Bauer-Kahan said, is what AB 621 was designed to address.
“Real women are having their images manipulated without consent, and the psychological and reputational harm is devastating,” the San Ramon Democrat said in an emailed statement. “Underage children are having their images used to create child sexual abuse material, and these websites are knowingly facilitating it.”
A global concern
Bonta’s inquiry also comes shortly after a call for an investigation by Gov. Gavin Newsom, backlash from regulators in the European Union and India and bans on X in Malaysia, Indonesia, and potentially the United Kingdom. As Grok app downloads rise in Apple and Google app stores, lawmakers and advocates are calling for the smartphone makers to prohibit the application.
Why xAI built the feature the way it did and how it will respond to the controversy around it is unclear, and answers may not be forthcoming: a recent analysis concluded that Grok is the least transparent of the major AI systems available today.
“The psychological and reputational harm is devastating.”
Rebecca Bauer-Kahan, Democratic Assemblymember, San Ramon
Evidence of concrete harm from deepfakes is piling up. In 2024, the FBI warned that the use of deepfake tools to extort young people is a growing problem that has led to instances of self-harm and suicide. Multiple audits have found child sexual abuse material inside the training data of AI models, making them capable of generating explicit photos. A 2024 Center for Democracy and Technology survey found that 15% of high school students had heard of or seen sexually explicit imagery of someone they know at school in the past year.
The investigation announced today is the latest action by the attorney general to push AI companies to keep kids safe. Late last year, Bonta endorsed a bill that would have prevented chatbots that discuss self-harm or engage in sexually explicit conversations from interacting with people under 18. He also joined attorneys general from 44 other states in sending a letter questioning why companies like Meta and OpenAI allow their chatbots to have sexually inappropriate conversations with minors.
California has passed roughly half a dozen laws since 2019 to protect people from deepfakes. The new law by Bauer-Kahan amends and strengthens a 2019 law, most significantly by allowing district attorneys to bring cases against companies that “recklessly aid and abet” the distribution of deepfakes without the consent of the person depicted nude or committing sexual acts. That means the average person can ask the attorney general or the district attorney where they live to file a case on their behalf. It also increases the maximum amount that a judge can award a person from $150,000 to $250,000. Under the law, a public prosecutor is not required to prove that an individual depicted in an AI-generated nude or sexual image suffered actual harm to bring a case to court. Websites that refuse to comply within 30 days can face penalties of $25,000 per violation.
In addition to those measures, two 2024 laws (AB 1831 and SB 1381) expand the state’s definition of child pornography to make possession or distribution of artificially generated child sexual abuse material illegal. Another requires social media platforms to give people an easy way to request the immediate removal of a deepfake and defines the posting of such material as a form of digital identity theft. A California measure limiting the use of deepfakes in elections was signed into law last year but was struck down by a federal judge last summer following a lawsuit by X and Elon Musk.
Future reforms
Every new state law gives lawyers like Dordulian another avenue to address harmful uses of deepfakes, but he said more needs to be done to help people protect themselves. His clients face challenges proving violations of existing laws, he said, because those laws require that explicit material be distributed, for example via a messaging app or social media platform, before protections kick in. In his experience, people who use nudify apps typically know each other, so distribution doesn’t always take place, and when it does, it can be hard to prove.
For example, he said, he has a client who works as a nanny and alleges that the father of the children she cares for made images of her using photos she posted on Instagram. The nanny found the images on his iPad. The discovery was disturbing and caused her emotional trauma, but because deepfake laws don’t apply, Dordulian has to sue on the basis of negligence, emotional distress and other laws that were never created to address deepfakes. Similarly, victims told CNBC last year that the distinction between creating and distributing deepfakes left a gap in the law in a number of U.S. states.
“The law needs to keep up with what’s really happening on the ground and what women are experiencing, which is just the simple act of creation itself is the problem,” Dordulian said.
California is at the forefront of passing laws to protect people from deepfakes, but existing law isn’t meeting the moment, said Jennifer Gibson, cofounder and director of Psst, a group created a little over a year ago that provides pro bono legal services to tech and AI workers interested in whistleblowing. A California law that went into effect Jan. 1 protects whistleblowers inside AI companies, but only if they work on catastrophic risks that could kill more than 50 people or cause more than $1 billion in damages. If the law also protected people who work on deepfakes, Gibson said, the former X employees who last year told Business Insider they had witnessed Grok generating illegal sexually explicit material would have had protections had they shared that information with authorities.
“There needs to be a lot more protection for exactly this kind of scenario in which an insider sees that this is foreseeable, knows that this is going to happen, and they need somewhere to go to report to both to keep the company accountable and protect the public,” Gibson said.