This blog post is part of a series of posts covering community members’ experiments with AI in the classroom and the workplace.
For their first essay assignment, students in Color! (IDSC 250) are asked to play the role of tutor. The twist: their tutee is ChatGPT.
Color! is an interdisciplinary class co-taught by Marty Baylor (PHYS), Julia Strand (PSYC), and Jason Decker (PHIL/CSLI) that covers the physics, psychology, and philosophy of color. In addition to learning about the physics of electromagnetic radiation and the neuroanatomy behind color perception, students spend classes experimenting with lasers, inducing color illusions, and mapping the many philosophical stances toward color. For their first essay assignment, students also get the chance to engage with generative AI (GenAI) in a unique way: as its tutor.
For this assignment, students are first given a definition of the color yellow generated by ChatGPT. Then, using what they have learned from the class, students write an essay-length response breaking down ChatGPT’s answer: where it is right, where it is wrong, and how it misrepresents or oversimplifies the truth.
“The assignment is about information literacy,” Strand explains. “Generative AI tools like ChatGPT often produce answers that sound like good answers, that have the outward appearance of a good answer, but are actually quite superficial.” Strand and her colleagues illustrate this through contrast. In class, students are exposed to the intricacies, nuances, and controversies surrounding our current understanding of color. They quickly learn that apparently simple questions about color rarely have simple answers, a point further underscored by the interdisciplinary nature of the class. Equipped with this understanding, students then apply it by correcting and complicating ChatGPT’s simple answer to a deceptively simple question. In this way, students learn what information literacy looks and feels like, and how to calibrate their expectations of AI.
Strand notes that this approach works especially well for the topic of color because of the dearth of quality information about it. Much of the readily available information about color is oversimplified or simply misguided, and it is exactly this information that is used to train GenAI tools. As a result, tools like ChatGPT tend to parrot the same clichés and mistakes that circulate on the internet. This makes GenAI tools wrong in ways that create useful teaching opportunities.
We can experience a version of this lesson in our own everyday experimentation with AI, Strand points out. When we turn to GenAI for guidance on an unfamiliar topic, we might walk away impressed, thinking that we now understand it. For topics we already know well, though, we are far more sensitive to the mistakes and omissions these tools make. Comparing these two experiences reveals how important human expertise is for using GenAI. Far from replacing it, GenAI needs human expertise to be used responsibly and accurately, and therefore to its fullest potential.