Teachers are alarmed that students could use the AI chatbot ChatGPT to cheat


Teachers and professors across the education system are close to panic in the face of an artificial intelligence revolution that could enable cheating on a massive scale.

The source of the alarm is ChatGPT, an artificial intelligence bot released a few weeks ago that lets users ask questions and, moments later, receive strikingly human-sounding written answers.

Almost immediately, educators began experimenting with the tool. While the bot’s answers to academic questions aren’t perfect, they come very close to what teachers expect from many of their students. How long, educators wondered, before students start using the site to write essays or computer code for them?

Māra Corey, an English teacher at Irondale High School in New Brighton, Minn., said she immediately discussed the issue with her students so they could understand how leaning on such tools might hinder their learning.

“Some of them were surprised that I knew about it,” she said. She wasn’t worried that the conversation would plant bad ideas in their heads. “Expecting teenagers not to notice a flashy new tool that will save them time is a fool’s errand.”


Within days of its launch, more than a million people had tried ChatGPT. Some asked innocuous questions, like how to explain to a 6-year-old that Santa Claus isn’t real. Others made requests demanding complex responses, such as writing working software code.

For some students, the temptation is obvious and overwhelming. One senior at a Midwestern school, who spoke on the condition of anonymity for fear of expulsion, said he had already used the text generator twice to cheat on his schoolwork. He got the idea after seeing people on Twitter raving about how good the generator was after its Nov. 30 release.

He was staring at a take-home computer science quiz that asked him to define certain terms. He typed them into the ChatGPT box and, moments later, the definitions appeared. He copied them by hand onto his quiz sheet and turned in the assignment.

Later that day, he used the generator to help him write code for a homework question in the same class. He was stumped, but ChatGPT was not. It produced a block of code that worked perfectly, he said. After that, the student said, he was hooked, and he planned to use ChatGPT to cheat instead of Chegg, the homework-help site he had used in the past.

He said he wasn’t worried about getting caught because he didn’t think his professor could tell the answers were computer-generated. He added that he had no regrets.

“It’s kind of on the professor to write better questions,” he said. “Use it to your own advantage. … Just don’t use it to get through the whole course.”

What is ChatGPT, the viral social media AI?

The tool was created by OpenAI, an artificial intelligence lab launched several years ago with funding from Elon Musk and others. The bot is powered by a “large language model,” AI software trained to predict the next word in a sentence by analyzing vast amounts of internet text and finding patterns through trial and error. ChatGPT was also refined by humans to make its answers more conversational, and many have noted its ability to generate paragraphs that are often humorous or even philosophical.
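The core idea — predicting the next word from patterns observed in text — can be illustrated with a toy sketch. This hypothetical bigram counter is nothing like the scale or sophistication of a real large language model, but it shows the same next-word-prediction principle:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# In this tiny corpus, "the" is followed by "cat" twice and "mat" once,
# so "cat" is the model's best guess for the next word after "the".
model = train_bigrams("the cat sat on the mat because the cat was tired")
print(predict_next(model, "the"))  # -> cat
```

A real model learns far subtler patterns across billions of sentences, but the training objective — guess what comes next — is the same.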

However, some of its responses are false or bigoted, such as when a user got it to write rap lyrics that included the line: “If you see a woman in a lab coat, she’s probably just there to clean the floor.” Its creators acknowledge that ChatGPT is not perfect and may provide misleading answers.

Educators assume these tools will improve over time and that awareness of them among students will grow. Some said teachers would adjust their assessments to account for possible cheating — for example, requiring students to write papers by hand or during class, where they can be monitored. Others are devising questions that demand deeper thought, which are harder for bots to answer.

The stakes are high. Many teachers agree that learning to write can happen only when students grapple with ideas and translate them into sentences. Students start out not knowing what they want to say, and as they write, they figure things out. “The process of writing transforms our knowledge,” said Joshua Wilson, a professor in the School of Education at the University of Delaware. “That really goes away if all you do is jump to the final product.”

Wilson added that while universities are talking about this, many high school teachers remain oblivious.

“The average K-12 teacher — they’re just trying to get to [the end of the semester],” he said. “This is a wave that’s about to hit them.”


Department heads at Sacred Heart University in Connecticut had discussed how to tackle artificial intelligence, and faculty members knew they had to find ways to deal with it, said David K. Thomson, a history professor at the school.

Thomson said that by experimenting with the site, he realized it handled well the kinds of questions that appear on many take-home tests, such as one asking students to compare the development of the northern and southern American colonies before the American Revolution. “It’s not perfect,” he said. “But students aren’t perfect either.”

But when he asked more sophisticated questions, such as how Frederick Douglass constructed his argument against the institution of slavery, the bot’s answers were far less convincing. Professors, he said, must design assessments that reward analytical reasoning, not just facts that can be looked up.

At the same time, others see potential gains. The technology is an opportunity for teachers to think more deeply about the assignments they give — and to talk with students about why it’s important to create their own work — said Joshua Eyler, an assistant professor at the University of Mississippi who directs the Center for Excellence in Teaching & Learning and who derides the “moral panic.”

“This is kind of a calculator moment for the teaching of writing,” Eyler said. “Just as calculators changed the way we teach math, this is the same kind of moment for teaching writing.”

“Predictably, what we saw was a sort of moral panic. There is a great fear that students will use these tools to cheat.”

Michael Feldstein, an education consultant and publisher of the blog e-Literate, said that alongside the panic has come a sense of curiosity among educators. Some professors in business-oriented fields, he said, see AI-generated writing as a tool that might be useful. A marketing student might use it to draft marketing copy in school, he said — and again at a future job. If it works, he asked, what’s wrong with that?

“They don’t care whether students are going to be the next Hemingway. If the goal is communication, it’s just another tool,” Feldstein said. The important thing, he said, is that the tool be used as part of learning, not as a substitute for it.

As educators consider how to live with technology, some companies are considering ways to beat it.

Turnitin, a company that has created widely used software for detecting plagiarism, is now looking at how to detect AI-generated material.

Automated essays differ from student-written work in many ways, company officials said. Students write in their own voice, something ChatGPT’s output lacks. Essays written with AI read like an average of everyone, and no individual student is average, so an AI essay won’t sound like that student’s work, said Eric Wang, vice president of AI at Turnitin.

“They tend to be probabilistically vanilla,” he said.
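That “vanilla” quality can be quantified. One common heuristic — a simplified sketch of the general idea, not Turnitin’s actual method — scores how statistically predictable a text is. Because machine-generated text tends to stick to likely words, it scores as more predictable (lower perplexity) than human prose:

```python
import math
from collections import Counter

def build_unigram_model(reference_text):
    """Word frequencies from a reference corpus (toy stand-in for a real LM)."""
    words = reference_text.lower().split()
    return Counter(words), len(words)

def perplexity(text, counts, total):
    """exp(average negative log-probability per word).

    Predictable, "vanilla" wording scores low; surprising wording scores high.
    """
    vocab = len(counts) + 1  # one extra bucket for unseen words
    words = text.lower().split()
    nll = 0.0
    for w in words:
        p = (counts.get(w, 0) + 1) / (total + vocab)  # Laplace smoothing
        nll -= math.log(p)
    return math.exp(nll / len(words))

counts, total = build_unigram_model(
    "the cat sat on the mat the dog sat on the mat")
common = perplexity("the dog sat on the mat", counts, total)
rare = perplexity("zebra quantum flamboyant nebula", counts, total)
print(common < rare)  # True: the familiar sentence is far more predictable
```

Real detectors score texts against a large neural language model rather than word counts, but the signal is the same: text that hugs the model’s most probable choices looks machine-made.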


But detecting cheaters who use this technology will be difficult.

Sasha Luccioni, a research scientist at the open-source AI start-up Hugging Face, said OpenAI should let the public inspect how ChatGPT works, because only then can scientists build truly robust tools to catch cheaters.

“You’re working with a black box,” she said. “Unless you actually have [access to] these layers and how they connect, it’s very difficult to create a meaningful [cheating-detection] tool.”

Hugging Face hosts a detection tool built for an earlier model, GPT-2, and says it can potentially help teachers spot ChatGPT-generated text, though it may be less accurate on newer models.

Scott Aaronson, a visiting researcher at OpenAI, said the company is exploring various ways to combat abuse, including watermarks and models that distinguish bot-generated text from human writing. Some question whether the watermarking approach will be enough.
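The watermarking idea can be sketched in miniature. The following toy illustration — an assumption-laden simplification, not OpenAI’s actual scheme — seeds a random generator with the previous word, marks half the vocabulary “green,” and has the generator prefer green words; a detector then measures how often each word falls in its predecessor’s green list:

```python
import hashlib
import random

# Hypothetical eight-word vocabulary for illustration only.
VOCAB = ["alpha", "bravo", "charlie", "delta",
         "echo", "foxtrot", "golf", "hotel"]

def green_list(prev_word, vocab):
    """Deterministically mark half the vocabulary 'green', seeded by prev word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: len(vocab) // 2])

def generate_watermarked(start, length, vocab):
    """Toy 'model' that always picks a green word (a real LM would bias logits)."""
    words = [start]
    for _ in range(length):
        words.append(min(green_list(words[-1], vocab)))
    return words

def green_fraction(words, vocab):
    """Detector: fraction of words in their predecessor's green list."""
    hits = sum(1 for prev, w in zip(words, words[1:])
               if w in green_list(prev, vocab))
    return hits / max(1, len(words) - 1)

wm = generate_watermarked("seed", 20, VOCAB)
print(green_fraction(wm, VOCAB))  # 1.0: every word was drawn from a green list
print(green_fraction("the cat sat on the mat".split(), VOCAB))  # 0.0 here
```

Unwatermarked text lands in green lists only about half the time by chance, so a suspiciously high green fraction flags machine output — which also hints at the scheme’s fragility: paraphrasing or swapping words dilutes the signal.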

“We are still running experiments to determine the best approach or combination of approaches,” Aaronson said in an email.

ChatGPT has its own ideas about a solution. Asked how to deal with the possibility of cheating, the bot offered several suggestions: educate students about the consequences of cheating, proctor exams, make questions more complex, and give students the support they need so they don’t feel compelled to cheat.

“Ultimately, it is important to communicate clearly with students about your expectations of academic integrity and take steps to prevent cheating,” the bot wrote. “This can help create a culture of honesty and integrity in your classroom.”


An earlier version of this article incorrectly said that Hugging Face built a detection tool for a chatbot model called GPT-2. It hosts the tool. The article has been corrected.