AI-Written Essays: Cheating or Leveraging Technology?
I recently had a conversation with a colleague about artificial intelligence (AI), specifically its use in classroom settings. We acknowledged very different perspectives on it, particularly regarding students essentially using it to write their essays for them. Had it not been for a few experiences I had shortly before that conversation, I might have held a very different view.
Historically, I’ve always had a cynical perspective on AI. I know. Not very consistent with critical thinking (i.e., scepticism is good, cynicism is bad). Of course, I blame it on being of that generation that saw "T2: Judgment Day" way too many times. You know, when SkyNet takes over, we’re going to say, "I told you so!" However, as reality sets in, my primary concerns rest less on robotic Arnies trying to wipe us all out and more on the potential of AI dumbing down younger cohorts and the generations that follow. Of course, that’s not to say that there aren’t other concerns, but they’re not the point of this piece.
Nevertheless, that cynicism/scepticism was my primary reaction to AI until a friend of mine, knowing my dislike for just "Googling things" as a default, asked whether I ever use AI as an alternative. No, typically, I don’t—well, I didn’t anyway. Let’s call it moral opposition. Maybe even educated moral opposition. If anything was worth looking up online—other than what year Jimmy Stewart appeared in that film about going to Washington—it was probably worth thinking about "properly" and not depending on an AI. Sure, I would use sources on the internet that I knew to be credible, but nothing other than me would do the thinking.
My friend told me to ask him a question, and he’d pop it in. "Does smoking cause cancer?" I asked. I knew that question should yield a convoluted answer. I anticipated that the AI would say "Yes" and likely quote some stats; but the real answer is "No"—smoking doesn’t cause cancer ("cause" is a loaded term in this context) but is strongly correlated with it—so strongly, in fact, that it’s just easier to say "cause," as we often do. It requires a thorough explanation (one longer than that provided here), which I did not expect to receive back. He sent me the results. I was amazed. It was a perfect answer. It used terms I would have used; it explained things the way I would have. Had it been an essay, I would have given it an "A." That’s perhaps the crux of the issue we now face, but I digress.
"Does it always respond like that?" I asked.
"God, no," he said. "But I asked it to think critically, because I knew you’d only accept an answer like that."
Wow. Well done, AI. I’m impressed. So, what’s that mean for our students?
Many institutions and many teachers/lecturers have a "no AI" policy and typically enforce it well, if for no other reason than that students are typically very poor at disguising their AI use. I’d confidently say that I can identify when a student has used AI to write their essay 95 percent of the time. That’s not because I’m brilliant; rather, it’s because such students not only depend on AI to write the essay, they also depend on AI to be accurate (e.g., failing to double-check and edit the response provided). First, AI isn’t always right and sometimes contradicts itself depending on how questions are posed. Moreover, AI is not very good at referencing. So, regardless of whether AI is used or not, wrong is wrong. Second, I spot the AI use easily because students often fail to put responses into their own words, reformat the bullet points out of their essays, reference the sources from the text at the end, and remove other associated "tells" of AI use. If they were more diligent about their AI use, they might fool me and other educators like me. But, until then, they’re "caught" fairly frequently (without the need for some fancy third-party detection app).
But what happens if they do edit? What happens if they do put things in their own words? What if they read through it numerous times and polish it up nicely, consistent with the requirements of the assignment? Well, to me, if they go through all that, it sounds like they’re at significant risk of learning something… and that’s a good thing, right?
Forty years ago, you’d have to go to the library to find a physical copy of a journal article or put in a request for an interlibrary loan. Luckily, I did my Ph.D. during a time when we had the internet, and I could click a few links to get the desired article. I’d read it, copy-paste the citation into my references section, rewrite the relevant learnings in my own words, and progress my rationale. Is this use of AI not a similar progression? Maybe. But to do it right, you still have to think. AI has just made it easier. Sure, I might be simplifying things to an extent, but if you can use AI to write an essay in a manner that fools me, then there’s a good chance that you’ve used it in a way that has made you learn something and reflects your understanding of it—and that’s not a bad thing, provided that nothing is plagiarised, everything is appropriately referenced, and the student has learned something.
So, is it cheating? If you use it properly, as exemplified above, then I would lean toward "No"—though it’s certainly not the most honest approach, nor one of great integrity, in writing the essay. That said, it does reflect an ability to use a newly developed tool well. For example, instead of collecting multiple papers to inform the stance of your argument, you allow AI to recommend a body of literature—but you ultimately decide what information to use or not. Does it dumb down the learning process? It depends… using it properly implies that it doesn’t make decisions for you—you still make the decisions—it just provides you information, as any other source might. In fact, consulting multiple AIs and comparing their answers could itself be usefully educational.
The reality is, in years to come (if not already), people will regularly be using AI to answer big questions in their workplaces. If they don’t, they might risk being "left behind" in terms of their ability to make complex decisions and solve complex problems in a "timely" manner, particularly relative to those who are using AI. Educators need to acknowledge this potential and accept it. Perhaps we are better placed showing students how to use it correctly, so that it isn’t "cheating." Chances are, they’re going to use it anyway, so we might as well show them how to do it well and ethically. That said, I’ll still mark down students for its use when I catch it. But, overall, I think it’s a useful tool that we can leverage to facilitate our decision-making, as long as we don’t place the decision-making responsibility on it… that is, until the robotic Arnies come for us.
