In summary
California State University’s $17 million contract with ChatGPT’s maker OpenAI is up for renewal. Some students and faculty say equal access to AI is important for preparing students for the workforce. Others say the rollout of AI tools has been confusing and opens the door to cheating. Some faculty have banned AI from their classes altogether and even started a petition to end the contract.
When California State University paid OpenAI $17 million last year to give campuses unlimited access to a high-powered educational version of ChatGPT, the goal was to help students learn to use artificial intelligence for their education and future careers. However, the announcement came as a surprise to faculty and students, who were left on their own to figure out how to use AI ethically.
Afraid students would use ChatGPT Edu to cheat, many professors turned to in-class tests using blue books and scantrons, or employed faulty AI detectors like Turnitin to catch AI-generated work. Meanwhile, other faculty have embraced ChatGPT and made it part of their curriculum. All of this has left students confused about the use of AI in their courses.
A recent Cal State survey of over 94,000 students and university employees found 52% of faculty said AI has had a negative effect on their teaching and 67% of students felt their professors don’t teach them how to use AI effectively.
Now, as Cal State approaches the end of its 18-month contract with OpenAI this July, the university system has not announced whether it will renew the deal. Some faculty at San Francisco State University have begun a petition calling on Cal State Chancellor Mildred Garcia to end the partnership.
The Cal State Chancellor’s office points out that the AI survey found 64% of students, faculty and staff said AI has affected their learning experience at their university positively, and 63% said they’ve seen more opportunities on their campus to learn about AI.
“Our systemwide AI survey results reflect what we are seeing across our universities — widespread engagement with AI tools and technologies,” wrote Cal State spokesperson Amy Bentley-Smith in an email.
The university system left it up to campuses to dictate the proper uses of the chatbot while offering tools and training on a website called AI Commons. But students and faculty say those resources have not been enough. As of April, only 0.7% of students and 16% of faculty had completed the voluntary training, according to data provided by Bentley-Smith.
Assemblymember Mike Fong introduced Assembly Bill 2392 in February, which would require Cal State and California Community Colleges, and request that University of California campuses, provide training on any AI product deployed on their campuses.
Last August, Fong and the Assembly Standing Committee on Higher Education questioned Cal State officials about planning around the AI initiative.
“During the joint hearing on higher education and privacy, discussions revealed that California State University campuses have adopted AI tools without consistent guidance or training, raising concerns around data privacy, academic integrity, and equitable use,” said Fong in an email to CalMatters.
While a few students and faculty testified at the hearing, others have continued to echo those issues.
“I’m not sure [Cal State] realized how much new work it would require, how much revision to the old way of doing things it would require,” said Ryan Jenkins, the chair of the AI Task Force for Cal Poly San Luis Obispo’s faculty union chapter.
Students want to be a part of AI decisions
Cal State Northridge communications major Katie Karroum was shocked when she saw the announcement about ChatGPT Edu last year. As the vice president of systemwide affairs for the Cal State Student Association, she would have expected the chancellor’s office to meet with the student organization that represents over 470,000 students throughout the state.
“We were not consulted when the contract was signed, and we weren’t even given a heads up,” Karroum said.
Cal State chose OpenAI as the least-costly option, according to assistant vice chancellor of academic technology services Leslie Kennedy. The contract aimed to give everyone free access to ChatGPT Edu across all 23 campuses. Previously, campuses and individuals were paying for their own upgraded ChatGPT accounts, which allow users to generate content like images and research reports without the limitations of the free version.
The contract with OpenAI was signed in January 2025, revealed later that month at a Board of Trustees meeting, and formally announced through a systemwide press release in February 2025, which is how Karroum found out.

In a meeting of the Cal State Student Association last October, student representatives from each campus told Karroum that they saw a lack of justice for students accused of using generative AI to cheat, and that they were concerned about the data collected from the chatbot being shared.
ChatGPT Edu at Cal State is set by default not to use data for training models, though users can opt in to sharing their data, according to testing by CalMatters.
Students have also complained about the absence of a consistent AI policy in their classes, according to an open letter published by Karroum. At most campuses, professors get to decide their classroom policies, including about AI.
Yagmur Wernimont, a sophomore at Cal Poly San Luis Obispo, said that although AI is used for automation and robotics in her intended agriculture field, she still does not use the technology herself because she thinks “it’s making us dumber” and doesn’t promote learning. She also said she fell behind while a classmate used ChatGPT to score 100% on an assignment.
While her professor told the class verbally at the beginning of the quarter not to use AI, the rule was not on the syllabus, nor was there a clear consequence for using AI. Wernimont said this may have given students a loophole for using it.
At Cal State Bakersfield, Emily Callahan, dean of students for academic integrity, said there has been a steady uptick of students reported for improper use of AI. She said students are using the chatbot to gain an unfair advantage over others.
Wernimont has also witnessed a divide between professors over AI. While one of her professors required the use of Google NotebookLM, an AI-powered note-taking app, an English teacher told Wernimont’s class that she was sad students would be using AI for writing, but shared a presentation on ways to cite the tool anyway.
“They’re all having different ways and ideas how to do it,” she said. “And it’s kind of conflicting as a student.”
Kennedy said the university system hasn’t excluded anybody from the discussion around AI. The Chancellor’s Office started a generative AI committee in 2024 that includes students and faculty.
“It was the committee’s recommendations that served as the basis for the CSU to identify, evaluate, and negotiate with multiple companies who at the time offered plans designed specifically to help bring AI tools to higher education institutions,” said Cal State’s chief information officer Ed Clark in an email. “Their assessment and feedback have been and continue to be essential to how the CSU implements its AI strategy that is both cost-effective and secure.”
A new board formed after the implementation of ChatGPT Edu focuses on California’s workforce by including representatives from technology companies. Cal State Student Association President Tara Al-Rehani said that while she is part of that board, it makes no final policy or guidance decisions on AI use.
Karroum said although students need to learn how to use AI, she doesn’t like feeling part of an experiment.
“I think that we’re being treated as, like, test rats right now because there’s no policy and there’s no guidance,” Karroum said.
Faculty introduce new classroom policies on AI
Faculty leaders said they also were caught off guard by the ChatGPT deal. According to the Cal State survey, 59% of faculty regularly use AI in teaching and research, and 68% said they include an explicit statement on AI use.
According to a repository of more than 200 AI syllabus policies housed on Cal Poly San Luis Obispo’s website, one criminal justice professor from Cal State Fullerton describes in the syllabus when, why and how students should use AI. The professor also includes an example of a good AI disclosure statement from a student who outlined their use of ChatGPT for an assignment.
The AI Commons website states that faculty ultimately decide how to implement generative AI into their curriculum, taking into consideration, as with any new technology, whether it might improve teaching and learning in their classrooms.
Jenkins, who teaches philosophy at Cal Poly San Luis Obispo, gives exams in class using blue books and scantrons to prevent students from cheating with AI. When ChatGPT was first released in 2022, Jenkins tested the chatbot by giving it a reading quiz. It answered every question correctly, alarming Jenkins that his students might use the technology while taking tests online. Today, Jenkins tells his students to treat AI like any other source when using its outputs for an assignment, but still proctors exams in class.
“The bread and butter of philosophy is reflecting on your own ideas and trying to sort out what you believe and why,” Jenkins said. “If you have a tool that does that for you, then you’re being denied an opportunity to practice that skill.”

Jenkins said he does not have an AI statement in his syllabus because neither the department nor Cal Poly has provided one to use. On its website, Cal Poly San Luis Obispo links to the AI Commons as well as an AI statement builder from Pepperdine University for faculty to use. But the university does not require any specific statement from professors.
At Cal State Fullerton, Shelli Wynants helps faculty decide how to use AI in their classrooms through her role in the university’s faculty development center. She also teaches students in her child and adolescent studies courses to critically review AI output and make sure they remain “the thinker and the decision maker” in the process.
Wynants said she refers to AI as an “assistant” or “teammate,” but emphasizes it should never replace human judgment. She has found that many of her students who plan careers in teaching want to learn how to use AI responsibly for the sake of their future students. “These students need to get up to speed because they’re going to be the ones teaching students digital literacy,” she said.
At the August 2025 hearing, representatives of the Academic Senate, Cal State Student Association, California Faculty Association and Cal State Employees Union spoke to the Assembly committee about their discontent over the contract with OpenAI.
“We understand all these criticisms and concerns, and they’re valid,” said Cal State’s chief information officer Ed Clark at the meeting. “The best way to deal with those concerns is to have our universities participate in helping to shape the future of these technologies. We can’t just sit back and let it go by.”
Students still need support, even with AI chatbots
Staff at university tutoring centers are struggling to advise students who say faculty accuse them of cheating for using the very AI tools the university system wants them to learn. According to the Cal State AI survey, 78% of students, faculty and staff said the ethical use of AI is a major concern.

Seher Vora, the coordinator for San Jose State University’s writing center, created an AI Writer Toolbox after conversations with tutors about students who were being penalized by professors for using AI. The toolbox helps students work with AI responsibly, including how to cite AI use properly and how to avoid using the chatbot to generate work that is not their own.
The toolbox also includes a disclosure tool that allows students to fill out a form outlining their use of AI for an assignment. The form generates a certificate for students to submit with their work.
The writing center at San Jose State advises students to check with their professors if they are unsure which uses of AI are acceptable. Vora hopes her work with the toolbox will encourage education around AI, for both students and faculty.
“We have to stay on top of it,” she said. “It’s changing every day.”
