‘Like playing Whack-A-Mole’: How some local college professors are grappling with AI-written essays

“ChatGPT is just the tip of the iceberg.”

The OpenAI website ChatGPT on a laptop computer. Gabby Jones/Bloomberg

College students are returning to classrooms across Boston, and professors are becoming increasingly aware of a new challenge: identifying AI-written work. 

Educators are focused on one tool in particular, ChatGPT, which generates text in response to prompts users type into a chat interface. The AI can produce detailed short answers, and even paper-length responses, on seemingly any subject. 

Now, some are worried students might try to submit ChatGPT-generated responses as their own work.

David Richard, a professor at Emerson College and CEO of Big Fish PR, started experimenting with ChatGPT when it first became available. He said he fed it one of his assignments, to write an Apple press release, and the resulting statement scored a B+.


The AI, Richard added, probably poses the biggest challenge at the high school and middle school levels; at the collegiate level, only professors who assign papers are likely to have reason to worry. 


“You can’t police it,” Richard said. “It’s going to become too difficult to tell what’s computer generated versus what’s been generated in terms of composition.” 

Edward Tian, a senior at Princeton University, is trying to address this with GPTZero, a tool he's developing to detect when ChatGPT has been used to write a piece of text. The service is already available.

Janna Kellinger, a professor of education at the University of Massachusetts Boston, said a fellow faculty member raised concerns this past fall about students using ChatGPT, but she hasn't had issues with it herself. Her daughters, however, have used it for fun.

“They just kind of dismissed it as a fun thing to play with, but they hadn’t really thought about it in terms of plagiarizing,” Kellinger said, adding that they asked it to write an essay about the big toe and another about the video game “Zelda.”

Kellinger acknowledged there are plenty of reasons a student might turn to ChatGPT to plagiarize, but said those underlying issues deserve more attention than the AI itself.

“Is it because they’re insecure about their writing skills?” Kellinger said. “Is it because there’s a lot of pressure put on them? Is it because they lack time management skills?”


For those concerned about ChatGPT use in their classes, Richard said there are some solutions, but ultimately they will all likely involve removing internet-connected computers from the equation. He suggested professors have students write a first paper in class, in a controlled environment.

“It’s kind of funny because there’s this discussion of, do we even need classrooms where everything’s virtual? From our COVID years where students are just taking classes online,” Richard said. “Well, you can’t enforce it if students are online.”

Totally banning ChatGPT and other text-writing AI also isn’t practical, Kellinger added, saying it would be “like playing a game of Whack-A-Mole.” 

Instead, Kellinger said educators should teach students the ethics of proper and improper use of the AI, because it can be a learning tool. For example, she said ChatGPT helped one of her students with writer's block brainstorm a character they were trying to create.

“There are ways that it can be used almost as a thought partner,” Kellinger said.

Richard also said ChatGPT should be considered a tool, calling it the “spell-check” or “autocorrect” of the future. He added that he can see similar programs going commercial in the next two to three years, bringing AI-text systems to the forefront.


“ChatGPT is just the tip of the iceberg,” Richard said. “It’s going to become this arms race that I don’t think that the professors in the academic community are going to be able to win, where they’ll be able to absolutely tell that something was generated by AI or by the students themselves.”

