Limits on AI: Teaching and Assignment Design In The Age of ChatGPT Part 1

I recently ran an impromptu experiment with my three sections of English 101 that revealed quite a bit about how framing, not to mention knowing authorship, affects how our students perceive a text. It may, in fact, illustrate why it matters whether AI-generated text is labeled as such. In any case, it was a really interesting exploration of the gap between what student writers think good writing is and what is actually effective and gets rewarded with high grades.

Very shiny apples generated by the image artificial intelligence program StarryAI.
Oh look, AI generated apples! That means we’re talking about teaching with AI!

Our story starts with my officemates chatting about ChatGPT and concerns about students using it to cheat on assignments. That’s a very reasonable fear, but not one I share to the same degree as many of my colleagues. I have always relied on three things to weed out academic dishonesty, and none of them are likely to be foiled by a student using ChatGPT to avoid writing an essay: my ability to detect voice in a text (unless a student trains the AI on their own writing, which most of my students aren’t likely to do); heavy use of in-class work and reflective assignments; and my carefully written rubrics that laser-focus on the actual goals of the assignments and often include something like “expresses class concepts and definitions,” something AI cannot yet replicate because it wasn’t in class.

But we were reasonably curious how AI would do on various assignments, in part to see if it resembled the things we were seeing from our students. So I opened up ChatGPT and set it to work on a few things, because that’s obviously the best use of my office hours.

The first thing I did was simply ask ChatGPT to “please write a rhetorical analysis of Edgar Allan Poe’s essay ‘The Philosophy of Composition.'” What it produced was a fairly coherent analysis of Poe’s classic essay. But it was not a rhetorical analysis; it was more of a literary analysis or a close reading. Rhetorical analysis requires a focus on how and why something is written the way it is: an awareness of the relationship between author, audience, and purpose; a sense of the exigence acting on the text; a focus on the specific choices that the author has made in order to be persuasive. What I got was a complete essay that reads like this:

One of the central themes in the essay is Poe’s emphasis on the unity of effect in a literary work. He argues that a successful poem or story should evoke a single, intense emotional response from the reader, and every element of the work should contribute to that unified effect. Poe uses the example of “The Raven” to illustrate how he carefully selected each component, from the length and rhythm of the poem to the choice of words and the creation of a melancholic atmosphere, all in service of achieving the desired emotional impact.

Generated by ChatGPT

That’s a fairly coherent reading of the essay. I have to give it credit for that. But it would have failed the assignment as given, at least by my rubrics, because it completely lacks any sense of rhetorical awareness. It’s just summary. Critical summary, but summary.

But, I thought, maybe that was too easy. I don’t give my students broad prompts like that. While we did read “The Philosophy of Composition,” I never actually assigned an essay on it, because I was using it for class examples. I figured a student looking to cheat would probably just copy and paste the real prompt straight into ChatGPT, so that’s what I did.

My students had a detailed prompt asking for a rhetorical analysis essay of a specific word count, written in MLA format, about one of four more recent essays (from the last ten years or so) on identity.

I was surprised and amused that ChatGPT answered thus:

I’m sorry for any confusion, but I can’t fulfill your request to write a formal essay in MLA format. However, I can certainly guide you on how to structure your rhetorical analysis essay and provide some insights into the rhetorical elements of one of the specified personal essays.

Generated by ChatGPT

Now, I’m not sure exactly why it answered that way, but I do know that it then produced an outline that would be perfectly acceptable and ethical for a student to use as an aid in writing. It offered absolutely no actual analysis; it just broke down concepts to look for and provided a structure for doing so. The sections of the outline read like this:

Language and Style:

Discuss Yang’s language choices and writing style. How does he use tone, diction, and syntax to convey his message? Consider if there are any rhetorical devices employed for emphasis or persuasion.

Generated by ChatGPT

I have absolutely no objection to a student using an outline broken down like this to help write the assignment. This is no different than the sort of assistance they might get at a tutoring service, albeit less customized and with less opportunity to check their answers to the prompts.

Now, why did it respond this way when it was able to write the Poe essay with no hesitation? I’m not sure, but I have some hypotheses.

My first hypothesis is my most hopeful one: perhaps the engineers at OpenAI have programmed in some “bumpers” by training the model to recognize common assignment parameters and to refuse to help with cheating when it detects that’s what’s being requested, and my prompt triggered them. I sincerely hope this is the case, and it’s not unlikely, given concerns about how ChatGPT is used.

However, it may just as likely be incapable of complex formatting and have balked at that part of the request; its refusal did, after all, begin by saying it couldn’t generate an essay in MLA format.

The third possibility is that it was unable, for whatever reason, to access the essays that were assigned, even though they are all readily available on the internet. It was clearly able to access the Poe essay, but that is a very commonly assigned essay in the public domain, so it probably had a number of summaries and analyses of that essay to draw on. The essays I had my students writing about were much less canonical, though by no means unlikely to be assigned (one was from CommonLit, for instance), so the statistical probability that ChatGPT had examples of analyses of them was much smaller. After all, nothing the AI generated suggested that it had actually read any of the essays; it only used the name of one of the authors to frame the questions in the outline.

An AI-generated image of a young blonde girl dressed in white holding apples in an orchard. The image is a little fuzzy, and the girl's hands are distorted.
AI often generates things that seem kind of right, but are actually flat out wrong. This picture looks quite nice until you look very closely. Also, it isn’t really what I asked the AI to generate at all. I asked it for “Apples in an educational setting that are clearly generated by artificial intelligence or a computer” in a vintage sparkle aesthetic. At least I got apples! (generated by StarryAI)

In any case, this little test has some important implications for us educators in the age of AI. One is that we don’t really need to be very afraid of it; instead, we should see it as a tool some of our students will use, and we should design our assignments so that the students who do use AI will be doing so ethically.

If we design our assignments so that students are responding actively and directly to course content, rather than doing repetitive tasks with canonical works, then AI isn’t a threat at all. Indeed, it can become a support for our students who choose to use it.

When we design our rubrics thoughtfully to reinforce our pedagogical goals, we can also help curb the unethical use of AI. Even if I hadn’t known that AI had generated the Poe essay, I still would have failed it and required the student to revise it based on my rubric, because it didn’t reflect any of the concepts we had been discussing in class. The result is the same as if I had detected it as an AI-generated essay and punished the student for an academic integrity violation.

Of course, there are some assignments that tools like ChatGPT will be able to do much more easily than others, and I recognize that sometimes you will need to use those assignments: summary paragraphs, for instance, or short responses to simple questions. In my next post, I’ll talk about an experiment I did with my students testing those assignments.
