Instead of policing student use of AI, California teachers need to reinvent homework
Guest Commentary written by
William Liang
William Liang is a high school sophomore and student journalist living in the Bay Area.
The education sector has sleepwalked into a quagmire. While many California high schools and colleges maintain academic integrity policies expecting students to submit original work, a troubling reality has emerged: Generative AI has fundamentally compromised the traditional take-home essay and other forms of homework that measure student thinking.
A striking gulf exists between how teachers, professors and administrators think students use generative AI in written work and how we actually use it. As a student, the assumption I’ve encountered from authority figures is that if an essay is written with the help of ChatGPT, there will be some sort of evidence — a distinctive “voice,” limited complexity or susceptibility to detection software.
This is a dangerous fallacy.
AI detection companies like Turnitin, GPTZero and others have capitalized on this misconception, claiming they can identify AI-generated content with accuracy as high as 98%. But these glossy statistics rest on the naive assumption that students submit raw, unmodified ChatGPT responses.
A 2023 University of Maryland study found that these detectors perform only slightly better than random guessing. I surveyed students at high schools across the Bay Area and found that the vast majority consistently use AI on writing tasks, regardless of detection measures. Not a single student reported submitting unaltered AI text. Meanwhile, almost all of my teachers told me they regularly overturn or disregard AI detectors' conclusions.
The reality is sobering: It’s very easy for students to use AI to do the lion’s share of the thinking while still submitting work that looks like our own. We can manually edit AI responses to be more “bursty,” employ one of the myriad programs that “humanize” text or blend AI-generated ideas with our own prose.
The problem isn’t technological — even perfect detection software couldn’t prevent intellectual plagiarism when students can harvest ideas from AI and put them in their own words. Ultimately, the problem is behavioral.
When cheating is easy and the consequences are remote, people cheat. Noor Akbari, co-founder and CEO of Rosalyn.ai, put it perfectly: “When enough players in a competitive game can cheat with a high upside and low risk, other players will feel forced to cheat as well.”
We saw this play out during COVID when online learning made cheating ridiculously easy. One study found that cheating jumped 20% during the pandemic.
The fallout is already obvious. The latest Education Recovery Scorecard shows that U.S. students remain nearly half a grade level behind in math and reading compared to pre-pandemic levels. And while California scored much better than the national average, that’s because we were already so far behind in 2019.
Sure, AI could theoretically help learning — brainstorming ideas, assisting with editing, helping English learners with sentence structure. But come on. Most students use it because thinking is hard and AI practically makes it unnecessary.
We now find ourselves in an absurd middle ground where academic policy in response to generative AI ignores human nature. Thankfully, many teachers are adapting. But we must face an uncomfortable truth that nearly every form of unsupervised assessment can now be compromised by AI. The only reliable solution is to fundamentally rethink our approach to student evaluation.
There’s three specific changes schools and universities could implement.
First, shift major writing assessments to supervised environments. Whether through timed in-class essays, proctored computer labs or oral defense exercises where students must explain their reasoning, we need approaches that verify whether students can produce and defend original thinking.
Second, we should integrate oral components into major exams. Students should have to explain and defend their written work in one-on-one discussions with teachers, which could help reveal gaps in understanding when they've outsourced their thinking.
Third, educational institutions must shift emphasis from policing AI use to explicitly teaching the cognitive processes AI can’t replace: critical thinking, creative problem-solving and effective communication. These skills are best developed through iterative practice with immediate feedback, not through homework completed in isolation.
This isn’t about resisting technological progress. It’s about ensuring educational institutions fulfill their core mission: teaching students how to think independently and communicate effectively. By acknowledging that traditional assignments no longer reliably serve this purpose, we can begin adapting our educational approaches to maintain academic integrity and prepare students for a world where human judgment remains irreplaceable.
California Voices editors confirmed that AI was not used to help write this guest commentary. “Every word of this piece is mine,” Liang told us. Given the subject, we had to ask.