Plagiarism and Artificial Intelligence

Last Updated Winter/Spring 2024

Introduction

Most writers already use some forms of AI writing tools without even thinking about it; consider, for example, how often you use grammar checks or predictive text algorithms on an average day. The newest generation of AI tools, however, can perform extremely sophisticated writing tasks with very little input or effort from the human user, and this raises some difficult issues for students and instructors.

At what point does a student’s use of artificial intelligence in their writing stop being the legitimate use of a tool and become plagiarism or academic misconduct?  There is no single answer to this question, and there are few, if any, hard rules around AI that all instructors would agree on. Given the newness of this technology, there’s a lot we all need to learn about what AI writing tools can do and how students can, should, and shouldn’t use them in their academic work.

For students, this can make the choice to use AI tools a new and somewhat complicated spin on the academic integrity issues discussed throughout this site. As with any issue related to academic integrity, your first response should be to make sure you understand your instructor’s expectations for a given class and assignment.  However, the newness of AI and the relatively uncharted range of things it can do means that instructors don’t always know how to articulate what they consider appropriate or inappropriate use of AI. This places a heavier responsibility on students to think through the ethical implications of their use of AI.

This is not just a matter of avoiding the consequences of accidentally or deliberately plagiarizing; it is also about making sure that you actually receive the benefits of the education you’re spending your time and resources pursuing.  AI can unquestionably make many writing tasks easier, but as a student you should consider the degree to which making a given assignment easier is better or more beneficial for you in the long run. This page is intended to help you navigate these issues and make informed decisions about how to use AI writing tools ethically in your academic work.

Things AI Can Do:

For our purposes here, it’s easiest to think of AI writing tools in two broad categories:

AI editing tools help writers improve text that they have written themselves.  This category includes a number of tools that nearly all writers use routinely, such as spell checkers and bibliography generators like Zotero or EasyBib.  In their most sophisticated form, though, AI editors can proofread and correct the grammar in whole texts or even revise them entirely to make the writing more “formal,” more “academic,” or otherwise more appropriate for the intended audience (at least to the degree the AI understands that audience).

Generative AI tools actually create new text (or, in some cases, images, PowerPoint slides, charts, music, or video) for their users.  Again, this category includes tools that many of us use regularly, like the predictive text algorithm integrated into most messaging apps.  At the far end of this category, though, are tools like ChatGPT and Gemini, which can theoretically write entire essays if they’re simply fed the prompt.

Some of the simpler and more common ways to use these tools don’t present much of an ethical quandary.  Very few instructors would object to students using the spelling and grammar checks built into their word processors.*  Conversely, most instructors would consider it plagiarism if a student fed their assignment prompt into ChatGPT and submitted the essay that the AI generated as their own work.  

In between these two extremes, though, lies a whole range of more complex uses for AI tools that aren’t so easy to label as acceptable or unacceptable in academic contexts. We give several examples of this in the Sample Scenarios below, but before we get to that, we should consider some general principles you and your instructor might use to decide which uses of AI are acceptable.

*A major exception here is introductory language classes (e.g. Spanish 101-104). Since learning basic spelling, grammar, and phrasing is a major part of the curriculum in these courses, many instructors would object to the use of spell check or predictive text.

How to Evaluate Your Use of AI

First and foremost, it’s important to reiterate that your instructor has the final say on what does and does not constitute plagiarism or academic misconduct on a given assignment.  So, if your instructor explicitly says that a given use of AI is acceptable or unacceptable, then there’s nothing more you need to consider.  If your instructor’s expectations are unclear, though, or if you’re considering using an AI tool that seems to fall outside the guidelines your instructor has given, you’ll also need to evaluate the ethics yourself.

The good news is that, while the uses of artificial intelligence in academic writing may be new, the same five basic principles around plagiarism that we discussed elsewhere on this site still apply here.  So, to help you consider whether a given use of AI tools is ethical or not, you can start by considering…

Education: What am I supposed to learn from this assignment?  How is it intended to help me develop my writing or thinking skills, or to better understand the course material?  Will a particular use of AI undermine or defeat the purpose of this assignment?

Attribution of Credit: Could a particular use of AI tools cause me to take credit for ideas that aren’t my own?  How might an AI obscure my use of sources or draw on sources that I’m not aware of and therefore can’t document properly?

Maintaining a Scholarly Discourse: Will a particular use of AI help me to build on the ideas of others and express new ideas of my own?  Or will it cause me to simply restate ideas that have already been articulated elsewhere?

Academic Integrity: What aspects of this assignment does the instructor expect to be the result of my effort alone?  Would a particular use of AI cause me to deceive my instructor into thinking I put intellectual labor into my writing that I did not? Is a particular use of AI likely to produce false data or misinformation, which I would take responsibility for by submitting it under my name?

Intellectual Property: Could a particular use of AI cause me to appropriate text or ideas that are owned by other people?

Finally, if considering these questions still leaves you unsure about a given use of AI, ask yourself this: If I asked a human being to do the work that this AI is doing for me, would it still be okay? If the answer is “no,” or even “I’m not sure,” then your best bet would be to avoid that use of AI.

Sample Scenarios:

Unlike the scenarios described elsewhere on this site, we’re not providing analysis to go with these examples.  This is because the applications for AI in academic writing are so new that there isn’t a clear consensus or general practice that we can point to.  Instead, we’re providing these scenarios to help you and your instructors open a dialogue about which uses of AI are acceptable or unacceptable in their classes.  We encourage you to think about each of these situations using the questions outlined above and to ask your instructors what they think.

Scenario A:

A student is extremely insecure about their grammar or writing style.  So, for an essay in their History class, they feed their draft into an AI tool that promises to “tune” their language to make it more formal and academic.  The tool makes several dozen changes to the punctuation, sentence structure, and phrasing throughout the draft, returning a new version of the essay that reads to the student as a more polished version of the essay they wrote. The student then submits the AI-polished version of the essay to their instructor with no additional changes on their part.

Scenario B:

A student in a Psychology class is writing a literature review that discusses the existing scholarship around violence in video games.  To get started, they go to an AI text generator and ask “do psychologists believe that video games cause violent behavior?”  The AI writes three or four paragraphs in response that summarize the various ways that psychologists have answered that question, though it cites no sources and provides no details about particular experiments, dates, or psychologists.  The student then uses the AI’s response as a kind of outline: they write their literature review following the main ideas expressed by the AI, plugging in references to specific articles and sources they’ve found through their own research.  The final draft that the student submits to their instructor is much more detailed than the AI’s version, and the actual text is almost entirely written by the student, but it makes basically the same points in the same order as the draft written by the AI.

Scenario C:

A student in a Philosophy class is struggling to keep up with the reading, which they find dense and confusing.  To make things easier, they ask an AI chatbot to summarize the readings for them.  For each reading, the AI creates a summary that seems to cover the major ideas, but in much simpler language.  At first, the student uses these summaries as a guide to help them read the assigned texts, but as the term goes on the student finds that they only read the summaries, and that seems to be enough to participate in class discussion and keep up with the written work in the course.

Scenario D:

A student in an Economics class is having difficulty getting started on an essay assignment with a fairly open prompt, so they enter the prompt into a generative AI tool multiple times to see a range of different topics and approaches.  Ultimately, the student decides to write on one of the topics the AI came up with, but as soon as they choose it, they delete the AI’s essay and write the paper themselves.

Scenario E:

A student in a Political Science class is writing a position paper in response to the argument in one of their readings.  They create a detailed outline that includes multiple quotes from the reading and long bullet points laying out the student’s response.  They paste this outline into a text generator and ask it to “write a position paper that follows this outline.”  Roughly 90% of the resulting essay is text written by the student, but the AI has added connecting words, punctuation, and transition sentences at the beginning of each paragraph.  The student revises this essay, making various changes to both their language and the AI’s, before turning in the final draft.

Scenario F:

A student in a Religious Studies class is assigned to write an annotated bibliography. The assignment calls for the student to find six academic sources on their topic and write a one-paragraph description of each source. The student will later use these sources in a formal research essay. The student asks an AI to list “the six most important academic sources” on their topic, and the AI gives them a list of six books. The student then asks the AI to write a one-paragraph summary of each book. The AI can’t manage to write citations, so the student writes those themselves, using information on Amazon.com (this also allows the student to confirm that all six books really exist, and weren’t just made up by the AI). The student then turns in the annotated bibliography, which contains their citations of the six sources found by the AI and the descriptions written by the AI.