Last updated Winter/Spring 2023

Other pages on this site discuss, in purely practical terms, how instructors might approach crafting course policies around AI, designing AI-resistant writing assignments, and designing assignments that incorporate AI tools. This page seeks to complicate those considerations a bit by addressing some of the ethical factors around AI tools and their use in academic work.

The purpose of this page is not to draw clear boundaries around “ethical” and “unethical” uses of AI in the classroom, nor is it to encourage or discourage certain practices around AI. The goal here is simply to recap a few of the major issues raised by AI, to help instructors make informed and nuanced decisions about how to engage it in their classes.

Copyright and Intellectual Property

Generative AI tools like ChatGPT are trained on enormous amounts of text, much of it publicly available online, which means that any actual substance in AI-generated writing is really a pastiche of ideas expressed by human writers elsewhere on the internet. This arguably makes anything these tools produce a work of mosaic plagiarism. Most users understand this conceptually, but it’s easy to overlook in practice, because the models that power these tools retain no record of the sources they draw from, so the writing they produce reads like original content. Nevertheless, it’s important to keep in mind that everything “written” by an AI from material generally available on the internet reflects the uncredited intellectual labor of many human beings.
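
To make the “pastiche” point concrete, here is a minimal sketch of a purely statistical text generator. This is a toy bigram chain, not an accurate model of how ChatGPT works, and the training corpus is invented for illustration; it simply shows how a generator can produce fluent-seeming “new” text that consists entirely of recombined human writing:

```python
import random
from collections import defaultdict

# A deliberately tiny "language model": a bigram chain that can only
# recombine word pairs it has seen in its training text. Real tools are
# vastly more sophisticated, but the core principle is the same: new
# text assembled from the statistics of existing human writing.
corpus = (
    "the cat sat on the mat . "
    "the dog slept on the rug . "
    "the cat chased the dog ."
).split()

# Record which words follow each word in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generate" a sentence by repeatedly sampling a plausible next word.
random.seed(7)
word = "the"
generated = [word]
for _ in range(8):
    word = random.choice(transitions[word])
    generated.append(word)

print(" ".join(generated))
# Every word pair in the output comes verbatim from the corpus, even
# though the sentence as a whole may never have been written by anyone.
```

Even in this toy case, nothing in the generated text exists independently of the human-written training material, which is the crux of the mosaic-plagiarism concern.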

Legally, the questions surrounding AI and copyright are just beginning to work their way into the courts, and no concrete resolutions are expected in the immediate future. Pedagogically, though, instructors should consider how the emerging discourse around AI use can complicate or even undermine students’ understanding of their obligations to engage and document their sources. More broadly, we might also reconsider how we teach students to understand, value, and acknowledge the contributions of other authors to their work, and how AI writing complicates that task.


Availability and Equity

Many commentators have argued that AI editing tools — that is, tools that can proofread and correct students’ writing without necessarily generating text for them — can be used to improve equity in education. The hope is that, if these tools allow all students to produce clear and relatively error-free English prose, they will level the playing field for students who enter college with below-average English writing skills. This presents obvious benefits for English language learners and students from underprivileged educational backgrounds (who are, statistically, more likely to be BIPOC and/or low-income students), as well as students with disabilities such as dyslexia and ADHD. Having to master standard written English while simultaneously learning to engage course material at a college level steepens the difficulty curve for these students, exacerbating the “achievement gap” between them and their peers.

This argument does, however, raise several questions that are difficult to answer, particularly while this technology is so new. If students are allowed or even encouraged to use AI editing tools, to what extent is their writing their own? Will students who rely on AI editors ever actually learn higher-order writing skills, and, if not, will that skill deficit hinder them as they move through college and beyond, even in a world where AI is commonplace?

Regardless of how one approaches these questions, though, it’s also important to keep in mind that AI tools are rarely free. Most of the major tools operate on a subscription model, offering minimal or no services for free and a more sophisticated set of services to paid users. In practice, then, the availability of AI might widen equity gaps rather than narrow them: students with greater financial resources gain access to increasingly powerful tools that can assist them with their academic writing, while lower-income students are left with weaker versions of the same resources, or none at all.

On the individual course level, instructors might consider the degree to which their expectations for student writing rest on surface-level qualities of grammar, diction, and tone, and the degree to which students who can afford it might raise their grades through AI use alone. This is, admittedly, a complicated issue with no easy resolution. It’s neither desirable nor realistic for an instructor to simply declare that grammar and other surface-level qualities “don’t matter” in their writing assignments. But ignoring the fact that AI tools let a privileged subset of students produce reliably error-free prose in an “academic” register far more easily than their peers means overlooking an increasingly significant factor in the achievement gap between affluent and low-income students.

On an institutional level, the uneven availability of these tools raises the question of whether and how the college should help low-income students access the AI resources their more affluent peers can afford. This, too, is not a simple question, but it’s one that colleges will likely face increasing pressure to address as these tools become more common and their impact on student writing is more fully understood by schools, students, and the general public.

Finally, on the detection side of AI, a recent Stanford study found that AI detection tools such as GPTZero have an alarmingly high false-positive rate when evaluating writing by non-native writers of English. This makes it imperative to use these tools with discretion, and not to treat a report that a given piece was “likely” written by AI as conclusive proof.


Privacy and Data Mining

All major AI tools require users to create an account in order to access their services. This means that any personal information users enter will be available to the owners of that service and to any parties with whom they share data. Instructors should keep this in mind if they design assignments that require students to use AI, particularly if the AI tools in question resist or prevent the creation of anonymous accounts. (Some tools, for example, require only an email address to create an account, while others require a verified phone number and/or a credit card number, even for free accounts.)

Additionally, many AI companies stipulate that both the text users enter into the tool and the content the tool generates in response remain available for the company to use thereafter. This content is often used to train the AI itself, but it may be sold to third parties as well. This raises concerns for students if an assignment requires them to feed their own writing into an AI tool, but it should also concern instructors who use AI in their assignments. Many AI-based assignments (including some suggested on this very site) involve feeding writing prompts into an AI tool so that students can examine the results; instructors should be aware that, in doing so, they are handing their own intellectual property (i.e., the writing prompt) over to the AI’s owners, while simultaneously teaching the AI to generate better and more “authentic” responses to that assignment.


Bias

When AI tools remix content from the internet, they can inadvertently replicate both the implicit and explicit biases in that content, as the sketch below illustrates. This is a problem that human writers face as well, of course, but humans can make their writing and research methods far more transparent than an AI can, which makes bias in human writing easier to look for, critically examine, and address. Since both the writing and the “research” done by AI tools happen inside the machine, users see only the results, with little to no insight into the sources the AI used or overlooked in shaping its words.
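
As a hedged sketch of how this inheritance happens, the toy word-statistics counter below (again, not a real language model; the corpus and its gendered associations are invented for illustration) shows that a statistical generator can only echo whatever patterns its training text contains:

```python
from collections import defaultdict

# Toy training text with a skewed pattern baked in:
# nurses are "she", engineers are "he".
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was tired ."
).split()

# Record which word follows each two-word context (a trigram counter).
transitions = defaultdict(list)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    transitions[(w1, w2)].append(w3)

# The "model" can only reproduce the associations in its training data.
print(transitions[("nurse", "said")])     # ['she', 'she']
print(transitions[("engineer", "said")])  # ['he']
```

A large model trained on the open internet behaves analogously at scale, which is why skewed patterns in training data resurface as skewed output, invisibly to the user.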

Furthermore, the fact that an AI is technically incapable of holding an ideology of its own can lead readers to assume that AI-written text is more likely to be objective or unbiased, which can make them less inclined to scrutinize facts and ideas espoused by an AI than they would those advanced by a human writer.

How much of a concern this raises for instructors depends on the kind of work they allow or encourage students to do with AI. But, if nothing else, it underscores the importance of teaching critical reading skills and training students to look for implicit biases in the material they read, whether that material was written by a human or an AI.
