
To the editor:

I have been following conversations about AI-generated writing with a wait-and-see attitude since the beginning of the year, but I recently had cause to reconsider the urgency of the subject after an incident in one of my own classes. I agree with Ali Lincoln’s recent piece “ChatGPT: A Different Kind of Ghostwriting” that the ethics of AI text generation are gray, but I disagree with her premise that we currently know enough to conclude that it is a valuable tool for editing and writing; I suspect more questions need to be answered first.

This past spring semester, a student in one of my literature courses submitted an annotated bibliography of six journal articles. At first glance it looked like a good submission, except that all of the citations were missing URLs and none of the articles were ones I had encountered before, although I am familiar with the topic the student was researching. After some checking, I discovered that every one of the six sources was invented; none of them exist. When confronted, the student confessed to having used an AI service to create the submission. What is particularly noteworthy about this instance of AI plagiarism is that the citations listed the titles of real, high-quality journals that have previously published articles on similar topics, and most of the authors attributed to these imaginary articles were real literary scholars.

Two questions have stayed with me since this incident: How much of what these services produce is scraped from copyrighted works without acknowledgement of, or compensation to, the authors and publishers? And what happens when texts full of invented information and imaginary citations attributed to real authors and journals proliferate across the web?

The first question is not easy for a member of the public without AI expertise to answer, but what I have found so far has serious implications for intellectual property rights. Both questions raise the possibility that we are entering a world where authors’ ownership of their intellectual property is diluted to the point of meaninglessness, and where the reputations of scholars and journals are degraded even further, eroding the public’s sense of what counts as credible. Educators who have wholeheartedly embraced AI technology in the classroom, even just for brainstorming and drafting, are asking students to use tools that may be stealing the ideas of others or simply inventing things wholesale.

Conversations about AI in the classroom need to address more explicitly the opaque nature of technologies such as ChatGPT, particularly in the wake of the revelations about the data breach at OpenAI. Most of these AI generation services state in their terms of service that users should attribute work created through the service to the AI, yet the services themselves do not provide clear attribution for the many sources across the web used to generate these texts, nor do they clearly flag invented material. My ask is that we bring these questions to the forefront as we consider what responsible use of AI in college classrooms should look like.

--Mary Nestor
Senior Lecturer
Department of English
Clemson University
