While the University is implementing artificial intelligence as a learning tool in some courses, it's also working on reframing its academic integrity policy to address cases where students use these technologies without authorization.
Before artificial intelligence made the jump into the hands of the public, students had to cheat on writing assignments the old-fashioned way: by discreetly wiring money to ghostwriters and copying and pasting.
But since the viral rise of AI chatbot ChatGPT, some students are turning to a new generation of free, artificially intelligent ghostwriters: chatbots that can create personalized, well-written content in seconds from as little as a single prompt.
ChatGPT produces brand-new content that has largely been able to evade conventional plagiarism detectors, even though the unauthorized use of ChatGPT and other generative AI technologies violates the University's academic integrity policy, an ASU spokesperson wrote in an email.
ASU is still grappling with how to identify, address and prevent academic integrity violations involving generative AI. Currently, the University is dealing with such incidents on a case-by-case basis until the Office of the Provost develops a more concrete policy.
“Generative AI technologies offer both challenges to academic integrity and opportunities to improve teaching and learning,” an ASU spokesperson wrote in an email. “ASU is cognizant of both sides of this rapidly evolving debate and developing a framework of policies and recommendations to positively employ these powerful technologies to enhance learner outcomes.”
READ MORE: ChatGPT worries professors, excites them for future of AI
This spring, more than one in five students reported using AI to help complete their assignments or tests in a survey by the higher education research website BestColleges.
Because ChatGPT produces written content, much of the alarm it's caused in academia is concentrated in writing-centric subjects. However, Kathleen Hicks, the director of online programs for ASU's English department, has found that ChatGPT's writing is more detectable than the initial panic made it seem.
“When you don’t know how to prompt (ChatGPT) very well, the writing that it produces is very formulaic,” she said. “And so in that case, there are some hallmarks that you can notice in the responses that ChatGPT produces.”
Human writing tends to be more creative and varied than ChatGPT’s writing, but it typically comes with more mistakes, like typos, grammatical errors and overly complicated sentences. These quirks are small ways for professors to sense that a paper was written by a human, not an AI chatbot.
Along with the human "soul" that ChatGPT's writing isn't able to capture, Hicks said its content could be riddled with what AI researchers call hallucinations: factual inaccuracies, fake citations, irrelevant answers and plain nonsense. This is because ChatGPT is a large language model, meaning it operates by selecting strings of words that are likely to answer a user's prompt based on the data it's been trained on.
“(Generative AI systems) can’t tell the difference between truth versus falsity,” said Subbarao Kambhampati, a computer science professor who researches AI at ASU. “Basically, they’re generating text that is like the text people have written on the web, which it has been trained on. And just because it’s like that text doesn’t mean it’s actually factual.”
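A toy sketch can make this concrete. The snippet below is purely illustrative: the word list and probabilities are invented for the example, and real models like ChatGPT score tens of thousands of possible tokens with a neural network rather than a hand-written table. But the core loop, picking each next word in proportion to how likely it looks given the text so far, is the same idea.

```python
import random

# Hypothetical probabilities a language model might assign to the next
# word after the prompt "The capital of France is". Values are invented
# for illustration, not taken from any real model.
next_word_probs = {
    "Paris": 0.90,
    "located": 0.05,
    "beautiful": 0.03,
    "Lyon": 0.02,  # plausible-sounding but wrong: a hallucination risk
}

# Sample one word in proportion to its probability, as a model does at
# each generation step. Nothing here checks whether the word is true.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print(random.choices(words, weights=weights, k=1)[0])
```

In this sketch, "Lyon" is simply a lower-probability continuation, not a flagged falsehood, which is why a model built this way can produce confident-sounding errors.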
ASU plans to implement the plagiarism checker Turnitin's new AI detection tool in Canvas this summer. But not everyone has welcomed the decision, due to the tool's rate of false positives: cases where it incorrectly flags writing by students as being generated by AI.
Even though Turnitin advertised that its tool had a false positive rate of less than 1% when it launched in April, the company later disclosed that the rate was actually 4% on a sentence-by-sentence level. Some students have already been wrongly flagged.
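A back-of-the-envelope calculation shows why a 4% sentence-level rate alarms instructors. The sketch below assumes, purely for illustration, that the 4% rate applies independently to every sentence of a fully human-written essay, an assumption real detectors may not satisfy:

```python
# Chance that an essay of n human-written sentences has at least one
# sentence falsely flagged, given a 4% per-sentence false positive rate
# and (an assumed) independence between sentences.
false_positive_rate = 0.04

for n in (10, 25, 50):
    p_at_least_one_flag = 1 - (1 - false_positive_rate) ** n
    print(f"{n} sentences: {p_at_least_one_flag:.0%} chance of a false flag")
```

Under those assumptions, a 25-sentence essay would have roughly a 64% chance of containing at least one falsely flagged sentence, even though every word is the student's own.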
Instead of relying on AI detectors, Andrew Maynard, a professor of advanced technology transitions who studies responsible innovation, is reformatting his courses to make it more difficult for students to cheat using generative AI. Rather than standard essays that ChatGPT can create in seconds, Maynard is developing assignments that test his students’ creativity and critical thinking, like videos and voice recordings.
“I don’t think I’m willing to penalize a student even if there’s a 2% possibility that the algorithm got it wrong,” Maynard said. “I would rather change my whole approach to learning.”
While some schools have responded by banning ChatGPT outright or requiring that writing be done in class, Maynard, Kambhampati and Hicks agree this isn't feasible, or even advisable.
“ChatGPT is here, and it’s here to stay,” Maynard said. “It’s a reality of life. If you ban it, you’re basically removing the ability for students to learn in a much richer environment. And certainly, when the students graduate and go on to get jobs, they’re going to need to know how to use ChatGPT.”
ASU is actually incorporating the AI chatbot as a learning tool in some courses, from English to prompt engineering. But even as some professors use it as a source of educational innovation, the threat generative AI poses to academic integrity remains.
“I don’t see it as a greater threat than any other threats to academic integrity that already exist,” Hicks said. “Cheating is not new. And the methods that students can use to cheat continue to evolve, and we continue to respond.”
Edited by Shane Brennan, Angelina Steel