The First Case of Many
A family is suing a school to change a student’s grade. A.I. is to blame—but only in part.
Given the pace at which generative A.I. tools have flooded the market and the much slower rate at which school districts and universities develop new policies, it was only a matter of time before a case focused on A.I. and cheating made its way to court.
That day came on Tuesday, when an attorney asked a federal judge to require Hingham High School, in Massachusetts, to raise the AP U.S. History grade of a student who had been penalized for allegedly using A.I. to research and outline a class project. The student's attorney argued that because the student handbook contained no A.I. policy, using A.I. wasn't cheating—and that the low grade the student received in that course would unfairly prevent him from applying to selective colleges. Hingham school officials have argued that the use of A.I. was clearly prohibited by policies laid out in class and by existing policies against plagiarism.
The case against the Hingham school system turns on the question of whether what the student did constituted cheating, according to the existing school policies: Was a student allowed to use A.I. tools as this student did, or not? And is it, in fact, plagiarism to use research and an outline generated by a chatbot? But the ruling in this case won't change the tricky truth about A.I. tools, which is that in most cases teachers don't know, or can't prove, that students are using A.I. tools when they've been told not to.
A.I. detection tools like Turnitin, which the teacher used in the Hingham case (along with ChatZero and the Chrome plugin Draftback), are considered inaccurate enough that OpenAI withdrew its own tool from the market. In her testimony on Tuesday, the teacher also explained that several books mentioned in the student's project did not exist—a clear sign that they were likely invented by A.I. But signs like this will disappear quickly as tools for "humanizing" A.I. prose and checking for imaginary sources become widespread. And because A.I. tools are integrated into existing platforms like Grammarly to "help" students write, it won't be clear to teachers—and sometimes to students themselves—what role A.I. tools have played in their work.
One way to alleviate concerns about cheating is to allow students to use A.I. in the classroom, and some educators are already taking this approach. But we’re far from agreement about what it would actually look like to embrace A.I. in the classroom in ways that foster learning rather than undercutting it. Two years ago, if you asked five teachers if it would be cheating for one student to ask another student to do their research or outlining for them, there would likely have been broad agreement that this was, in fact, cheating. But with A.I., those same instructors don’t always agree about where it’s appropriate to bring A.I. into the classroom. Is there pedagogical value in prompting a chatbot to generate a first draft, which you then revise? How about to get feedback on your draft? To outline your paper from your ideas? While embracing A.I. in the classroom across the board may solve the cheating problem, it’s less clear what it would mean for learning.
And while learning isn’t a focus of the Hingham complaint, it’s the main concern for the teachers I’ve talked to, and their concerns about how students are using A.I. tools are supported by the data. At its recent OpenEducation forum, OpenAI reported that the majority of ChatGPT users are now students. While we don’t know exactly how students are using these tools, we do know that for the most part these tools are being shared on social media as productivity tools that will save you time, rather than as educational tools. In my own recent classroom experience, research tools currently being promoted to students, like Perplexity and Consensus, can potentially provide educational value in helping students identify interesting sources. But the same tools also act as customer service agents, returning first a research question, then sources, and finally the draft of the paper itself. In other words, the tool that might be a learning tool is also a pretty good cheating tool.
While the Hingham case won't solve the problems posed by generative A.I. in the classroom, any conversation about how to navigate these challenges should address the factors that brought this case to court in the first place. Those factors are less about A.I. and more about a family's belief that one low grade will exclude their child from the future they want for him, which begins with admission to an elite college. The system of high-stakes college admissions existed long before ChatGPT was released in 2022. But the Hingham case tells us less about district cheating policies and more about what happens when an education system that so often emphasizes grades over learning ends up on a collision course with a technology that enables students to bypass learning in favor of grades.