1. Using AI is an ethical choice.
Generative AI tools can be useful in a lot of situations: they can help you generate new ideas, organize information, and speed up or even complete intellectual tasks you’d rather not do yourself. But there are also a variety of reasons not to use AI tools in each of these situations. In academic contexts, using AI might be prohibited. Even if it’s allowed, using AI in your academic work might prevent you from learning, which is the whole purpose of academic work. Beyond that, the basic operation of AI technology raises ethical questions about copyright, sustainability, and inclusion.
The point here is not to say that AI is good or bad; that would be too simple. The point is that every time you use AI, you make an ethical choice: you decide that the potential benefits of the technology outweigh the potential drawbacks in that situation. You are, in effect, deciding which you value more, the benefits to you or the potential consequences. Even if you choose not to think about the ethics of using AI, that too is a choice, and these choices both reflect and determine your values.
So, for example, if you say you value learning and hard work, but you consistently use AI in a way that prevents you from learning or engaging in hard work, then, in practice, you’re saying that you value the convenience of AI more. There may be situations where this is a reasonable decision (not everything in life needs to be difficult or a learning experience), but it’s important to consider, at least from time to time, what this says about your beliefs.
2. AI can’t think for you because AI can’t think.
At its core, generative AI is a set of pattern-recognition algorithms: basically a sophisticated version of the predictive text in your word processor. It can't think, write, or create in the ways humans do.
When humans create, our work reflects some aspect of our thoughts, memories, and learning. We process what we know, then use words, images, sounds, etc. to articulate a very particular way of understanding what we have just processed. However, when you ask a generative AI tool to produce content, it uses its algorithms to guess the sort of thing you’re probably asking for, then uses the same algorithms to filter through its library of content and assemble a set of words, pixels, musical notes, etc. that roughly corresponds to that kind of thing. But, crucially, the AI has no real understanding of what you asked or what it produced. It’s not thinking about the content it generates; it’s just assembling it based on patterns.
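To make the idea of "assembling content based on patterns" concrete, here is a deliberately tiny sketch of predictive text: a program that counts which word tends to follow which in a sample of text, then "writes" by always picking the most common next word. This is an illustration of the general principle only; real generative AI systems use vastly larger models and data, and the sample corpus and function names here are invented for the example.

```python
# Toy "predictive text": learn which word follows which, then generate
# text by repeatedly choosing the most frequent successor. The program
# has no understanding of cats, dogs, or mats; it only tracks patterns.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count how often each word follows each other word (bigram counts).
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def generate(start, length=5):
    """Extend `start` by always choosing the most common next word."""
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # no known successor; stop early
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # produces grammatical-looking but meaningless text
```

Notice that the output looks superficially like English because it follows the statistical patterns of the sample text, yet the program never "knows" what it is saying. Scaled up enormously, that is the same gap between fluent-seeming output and actual understanding described above.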
3. Don’t let AI do work for you if you have anything to gain by doing that work yourself.
This rule is especially important in an academic context. The purpose of going to college is to learn, and learning requires work. Sometimes, the work you do in your classes can seem purely transactional (“My prof asked for a summary of this article, I gave them a summary, obligation fulfilled!”), but every activity and assignment your instructors give you is designed to make you learn through the process of completing it. Using AI to do the intellectual work of an assignment therefore makes that assignment worthless to you.
If you use AI to write a paper, you might get a good grade, but you won’t learn how to write a good paper. You’ll have the grade, but you won’t have the skill. Similarly, if you use AI to generate a piece of code, you might get a working program, but you won’t learn how to write working programs. In the long run, relying on AI to do your work for you will make you less competent, less employable, and less able to solve problems independently. Learning is a process, and sometimes that process involves struggle and difficulty. AI can deprive you of the opportunity to engage in that struggle, and that’s a real loss, even if you don’t notice it at the time.
4. If you don’t know how to create a good version of something on your own, you’re unlikely to produce one with AI.
Even if you don’t value learning for learning’s sake, though, there’s a very practical reason not to let AI do your intellectual work for you in your college classes: if you don’t learn how to produce content for yourself, you’ll have no way of evaluating the content that AI creates for you.
You may have noticed that it’s usually pretty easy to spot AI-generated content, because most AI-generated content is pretty bad. GenAI creates text, images, music, code, and so on by remixing the different versions of that content in its training set, and more often than not that’s what the content it produces feels like: a weak remix of other people’s work. AI is also prone to making mistakes: AI-generated code frequently has errors, while AI-generated writing frequently contains misinformation or reflects cultural biases. The common term for these mistakes is “hallucinations,” but really they’re just places where the algorithm guessed wrong as it tried to predict what a response to your prompt might look like. Again, because AI can’t think, it can’t really distinguish between an error and a correct response.
Knowing this, most AI guides will insist that you should “collaborate” with AI rather than just accepting whatever it produces. Edit the text it generates to make it seem more interesting. Check its facts. Look for errors in its code. In short, find ways to fix and improve the AI’s content, or at least make it more suitable for your purposes. This technique really can work, but it presents a problem: how do you make the AI’s content better if you don’t know what “better” looks or sounds like? How do you spot errors in code if you’ve never written code yourself? How do you improve the structure, tone, or depth of an academic essay if you’ve never written an academic essay?
You can certainly try, but any actual improvement will be purely coincidental.
5. AI isn’t neutral.
AI systems have no consciousness. They have no desires, agendas, or biases of their own; they simply run operations according to their programming. However, all AI systems are created, programmed, and owned by humans, and humans absolutely have values, desires, and biases that inevitably influence our creations. Therefore, to use a generative AI tool ethically and effectively, it’s important to consider how the perspectives of the humans who programmed it might affect the output it generates for you.
First, consider the kinds of subconscious biases that might be buried in the AI’s training data. As it runs its predictive algorithm and assembles the content you asked for, what assumptions might that content carry, and where might they show up in the output? Second, consider how the very conscious motivations of the company who owns that tool might affect the outcome. How and why might the programmers manipulate the ways that their tool will respond to your requests? To what degree might their motivations conflict with yours? Asking these questions can help you decide when to use AI, what specific tools to use, and what to look for as you edit and improve AI-written content.
6. AI isn’t free.
A general rule for using any internet service is that, if the service is free to use, then you’re the product. In the case of generative AI technology, companies may offer you access to their tools for free because they can profit from selling the personal data you give them, and they can use the documents and requests you feed into their tools to help improve their models. In other words, you’re paying the companies who own these tools with your data and your thoughts in exchange for the ability to use them.
There are some exceptions to this, of course. The AI tools available to Carleton students through the college, for example (which you technically pay for with your tuition), will not use your data or your interactions with the AI to train their models. However, it’s also worth considering the cost required for AI technology to exist at all.
The data centers that power generative AI models consume enormous amounts of electricity and water, both to train the models and to operate them, which presents significant sustainability issues. The data sets that the models draw on also contain the creative and intellectual work of many thousands of human creators, the vast majority of whom receive no payment for the AI’s use of their work.
Do these problems mean that it’s simply immoral or unethical to use AI at all? Again, that’s a decision you need to make for yourself–not just once, but every time you use the technology.
7. AI changes, and your perspective on AI should change with it.
As you probably know, AI technology is constantly changing. Tools and platforms add new capabilities, get better at the things they can already do (by some measures, at least), and offer new features and options. Alongside the technology, our understanding of how to use AI also changes constantly, as people figure out new approaches, techniques, and applications for the tools we already have.
At the same time, the conversation around AI is always changing as we learn more about how the technology works and how its use and proliferation affect our lives, our learning, and our environment. This means that your opinions about AI shouldn’t be fixed either. Even if you’re confident that you’ve figured out how this technology should (or shouldn’t) fit into your life right now, you should keep an open mind and try to stay informed, so your opinions and your actions with regard to AI will continue to reflect your values.