AI is ready and able to disrupt education. Just give it a prompt and watch what it can do. Carleton’s leaders, librarians, and technologists are working alongside this new reality to make sure it supports—not subverts—human intelligence.
Last fall, an audacious new presence arrived on campus. First, it raced through the library, devouring information like a fire in the stacks. Then it showed up in everyone’s class and sat quietly in the back of the room, conspicuously lurking but never contributing anything new to the discussion. Very quickly, though, it became bold and the words poured out. ChatGPT smooth-talked its way into every dorm, promising homework help and answers to college life’s burning questions. Who are the 10 most important philosophers? What is a good title for my psychology essay? Where should an international relations major study abroad? And, How can you tell if a girl likes you?
Its responses were kind of basic, but they sure sounded good, and they were helpful enough, especially if you asked more questions. More fun than a Magic 8 Ball, a better conversationalist than Google, ace at producing an outline for a paper, and so very polite, ChatGPT, OpenAI’s generative text engine, turned the first somewhat normal post-pandemic school year into yet another exercise in “new normal,” aka, nothing will ever be normal again.
“Artificial intelligence [AI] and machine learning have been with us for a long time. Graphic design and animation classes have used AI for a while now. It’s a wonderful tool for speeding up video editing—which is scary for people who might want to go into that field,” says Wiebke Kuhn, Carleton’s director of academic technology. “But AI writing tools are a relatively new development, and the wheels are coming off.”
ChatGPT, Google Bard, and other generative text AI tools are known as large language models (LLMs). These super-charged, brainiac chatbots produce text as quickly as you can read it. Trained on massive volumes of data, LLMs generate written content in response to a prompt by predicting, one word at a time, what is most likely to come next. ChatGPT’s training data includes much of the internet: publications, books (fiction and nonfiction, open source and pirated), movie scripts, social media, forums, and chats. It has a firm grasp of grammar and produces well-formed sentences shaped into an essay, blog post, technical paper, sonnet, or nearly any other written form. It can even mimic the writing style of Hemingway or J. K. Rowling. In other words, yes, it can do your homework.
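For the technically curious, the same model behind the chat window can be reached with a few lines of code. Here is a minimal sketch using OpenAI’s Python client, sending a prompt like the one in the vignette below; the model name is illustrative, since the available options change often:

```python
# A minimal sketch: sending a haiku prompt to an LLM programmatically,
# using OpenAI's Python client library.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; offerings change often
    messages=[
        {"role": "user", "content": "Write a haiku about Northfield, Minnesota."}
    ],
)

print(response.choices[0].message.content)
```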
Many digital applications run on LLMs or related AI models, including language translators, predictive text functions, the programming tool GitHub Copilot, the mathematics tool Wolfram Alpha, the art-generation tool DALL-E, the music-generating platform AIVA, and the customer-service chatbots that become helpful only when you ask to chat with a human. Like these other AI tools, ChatGPT is so very good at what it does, and that is exactly what alarms educators.
Prompt
Write a haiku about Northfield, Minnesota.
ChatGPT
In Northfield’s embrace,
Colleges bloom in the sun’s glow,
Historic and warm.
OK, so maybe not that good.
“Writing is still a skill that is only about 150 years old for most populations. For some people, writing is extremely anxiety-inducing and AI may present a helpful tool. But it is a challenge for higher education because we focus on intellectual work rather than hands-on labor. AI targets intellectual work,” says Kuhn. “It is daunting for students who come here to hone those skills to see that AI can create content. It is boring content, though. It’s rote and predictable. It is not always factually correct; it ‘hallucinates’ sources. But it is improving quickly. We as humans will struggle mightily to determine whether something we read or see is real or not. What does that mean for our future society? We have to wrestle with this question.”
Across the Carleton community, the AI question is an urgent and ongoing discussion. At the Perlman Center for Learning and Teaching, the December 2022 Winter Break Conference—just days after ChatGPT’s release—centered on how instructors can guide students through this new era with a focus on critical thinking and digital literacy. A cross-campus Community of Practice group formed to monitor developments surrounding the technology. AI is part of the required first-year Argument and Inquiry seminar. Information Technology Services is focused on cybersecurity. Administrators are considering AI’s potential role in operations, including admissions, HR, and student services, while prioritizing data privacy and the human touch.
Provost Michelle Mattson says that the arrival of ChatGPT heralded a challenging new era for higher education. “We are making sure that AI and academic integrity stay at the forefront of our institutional attention. Faculty are deeply engaged in new ways of thinking about pedagogy and how students can use it productively,” she says. “We have many people who are closely involved with technology and watching how these tools are evolving.”
Carleton does not plan to ban AI, and it does not have an official AI policy at this time, although its extensive plagiarism policy encompasses the use of generative text. The college prioritizes academic freedom and trusts instructors to develop their own AI policies, acknowledging that judicious use may be acceptable in some disciplines while verboten in others. Cybersecurity is an urgent priority.
“We have real responsibilities to everyone in our campus community as well as alumni and donors,” says Mattson. “FERPA [the Family Educational Rights and Privacy Act] already governs how we protect information, but with AI we need to train faculty and staff about the potential for exposing proprietary information to these models.”
Carleton does not currently use AI in admissions, alumni and donor relations, or any other institutional processes, and Mattson says there are currently no plans to change that. She has noticed other schools using it for things like website chatbots, but that doesn’t fit Carleton’s institutional identity. Relationships—real, human connections—will remain cornerstones of the Carleton experience.
Prompt
Can AI replace original human thought?
ChatGPT
While AI systems can perform complex tasks and make decisions, their operations are fundamentally different from human thought processes. AI operates based on algorithms, data, and patterns, while human thinking involves consciousness, emotions, and subjective experiences.
ChatGPT reached 100 million users within two months of its release. Compare that to TikTok (nine months), Instagram (30 months), or Spotify (55 months) and it’s easy to see why educators need to stay vigilant. “We are proceeding with great caution, but we have no choice but to engage with AI on some level,” says Mattson. “Our students are smart and they will figure out how to use AI productively. Future employers will want to see that they understand how to use this tool to bring efficiency and capability to their work.”
In the world beyond campus, businesses, governments, and creatives are considering how to integrate AI and machine learning into their processes, and how to view digital resources when dynamic creation tools make it harder to discern whether content has been generated by a human or a machine. The internet is already polluted by reams of dull generative text and absurd AI-generated deepfake images, and LLMs will make it much worse.
Meanwhile, the people who created AI tools are now sounding alarms about potential dangers that make the concerns of educators seem trifling. Last June, hundreds of tech leaders signed onto this objectively terrifying statement from the nonprofit Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
How can we peabrains possibly control a technology that has more information than any human mind could ever absorb? I’m not even going to ask ChatGPT; this sounds like a job for the librarians.
Prompt
Did Jesse James attend Carleton?
ChatGPT
There is a common misconception that Jesse James attended Carleton because the college is located in Northfield, Minnesota, which is the same town where James and his gang attempted to rob a bank in 1876. However, the college did not exist at the time of the robbery.
Common misconception?
Oh, and Carleton was founded in 1866, a decade before the raid.
“It’s a really amazing tool, but it’s important to keep in mind the many, many things that it cannot do. It’s not even that great at basic information. I asked it, ‘What should I have for lunch today?’ And it said I should go to Chipotle,” says Sarah Calhoun ’01, Reference Librarian for Humanities and Digital Scholarship. “There is no Chipotle in Northfield.”
AI’s real problem with hallucinations, its tendency to fabricate facts and invent sources that don’t exist, is a huge caveat for anyone using it for academic work. All those terabytes of information get jumbled together, and the same machine learning processes that enable AI to fit data into neat paragraphs also enable it to generate some real nonsense.
“ChatGPT is not the most effective way to conduct research,” says Calhoun. “The librarians are invited to visit the first-year seminars, and we talk about the issues and limitations of AI and share information about how to search effectively. We can teach students how to search history databases like a historian. We know how to navigate print resources as well as those that are ‘born digital.’ I do one-on-one consultations with students to help them navigate what can be a gigantic ocean of info they can drown in.”
Carleton’s archives encompass primary sources that don’t exist online: newspaper collections going back to the 1850s, academic journals from around the world, government documents, art, media, data, and statistics that are highly specialized, proprietary, or subscriber-only. LLMs do not have access to this content.
Sean Leahy, Reference and Instructional Librarian for Social Sciences, says that the library and IT teams are working to help Carleton understand the opportunities AI tools offer while protecting the college’s unique resources.
“Right now, we’re in a learning phase,” he says. “We’re testing the potential of this new tool to see what it can do, how it can support research. We are following the faculty to support them where they are, and we’re keeping information literacy at the forefront.”
Another concern is ethics. “As people use these tools, they need to know there are issues surrounding bias that have been introduced through training data,” Leahy says. “They need to be aware of privacy in terms of individuals and materials that are protected by copyright. It’s important to understand that as students and faculty put their data out there, it becomes part of the pool of information these models learn from.”
So maybe just don’t do it. Or at least think long and hard before you release your groundbreaking study, strategic plan, or uncopyrighted manuscript into the world via an AI tool. On the other hand, gatekeeping information presents its own troubling ethics.
“As librarians, we absolutely support open access. Our work is centered on making sure that students can access the information they need to do their best work. Keeping scholarship open is essential,” says Leahy. “But what worries me is that by giving these technologies access, we undermine our work or empower AI to put things behind a new paywall, and the end result is the opposite of our intentions.”
Prompt
Is AI smarter than my professor?
ChatGPT
AI excels in specific domains and tasks, but it doesn’t possess general intelligence, consciousness, or human qualities such as wisdom, intuition, and empathy. Professors bring a wealth of human experience, expertise, and mentorship that are valuable in education and guidance.
Technology is supposed to help us. In a prophetic 1991 essay for Scientific American, “The Computer for the 21st Century,” Xerox PARC chief technologist Mark Weiser described his vision for “ubiquitous computing,” a world in which AI-powered connected technologies support higher levels of human endeavor and connection by automating myriad rote tasks—allowing us to step away from the computer. “Only when things disappear in this way are we freed to use them without thinking and so to focus beyond them on new goals,” he wrote, hopeful that technology would free us from the screen and allow us more latitude for human experiences. “There is more information available at our fingertips during a walk in the woods than in any computer system.”
George Cusack hopes that as his students experiment with AI, they come to appreciate their own minds and offscreen experiences more. The director of Writing Across the Curriculum and senior lecturer in English says his students were unnerved when they first saw what ChatGPT can do. “When we examined it more closely, they agreed that AI-generated essays didn’t really say much. It can’t generate new ideas, only remix the information it has. At its best, AI will produce well-crafted but vacuous prose. Another thing to understand is that a student’s essay is inspired and informed by the actual class they are taking, and AI hasn’t taken that class,” he says. By examining these shortcomings, his students learn how to write better. “Human thought is kind of messy. It doesn’t have an easy structure, and it is full of surprising ideas. I didn’t close the door on using AI for certain tasks, but it’s no substitute for original work.”
Cusack compares the impact of ChatGPT to the arrival of the internet. “It will probably change the course of civilization, for good and for bad, just as the internet did. Would any of us give up the internet? Very few,” he says. “The hope is that we will learn how to use AI to do some rote things faster, and that will free up time for true creativity.”
At Carleton, that’s already happening. Henry Burkhardt ’26 is a digital humanities assistant working with the librarians. “In one project, we are using machine learning models to cluster historical documents to identify trends. Using AI can reduce the amount of time needed to sift through huge amounts of data; instead of reading hundreds of documents, these tools can categorize them so that you might only need to read the top 10 or 20 most relevant ones.”
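The specific models behind the project aren’t detailed here, but the workflow Burkhardt describes maps onto a standard pattern: turn each document into a vector of word statistics, cluster the vectors, then read the documents closest to each cluster’s center first. A minimal sketch with scikit-learn, using invented toy documents:

```python
# Cluster documents by vocabulary, then surface the most representative
# ones per cluster -- a rough sketch of the workflow Burkhardt describes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [  # stand-ins for hundreds of historical documents
    "Minutes of the college board, autumn 1866",
    "Letter describing the 1876 bank raid in Northfield",
    "Ledger of wheat and grain prices along the Cannon River",
    "Commodity report on regional grain markets, 1870s",
    "Student letter describing daily chapel services",
    "Sermon delivered at the campus chapel, 1871",
]

# Represent each document as a TF-IDF vector of its word frequencies.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Group similar documents; the cluster count is a judgment call.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Rank each cluster's members by distance to its center, so a researcher
# can start with the most representative documents instead of reading all.
distances = kmeans.transform(vectors)
for label in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == label)[0]
    ranked = members[np.argsort(distances[members, label])]
    print(f"Cluster {label}: start with documents {ranked[:2].tolist()}")
```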
On his own time, Burkhardt has enlisted AI to develop a sweet executive function hack. “I have really bad ADHD, so I developed my own use case to help manage deadlines—it’s a little nerdy,” he admits. “I have a chart set up with all my syllabi and deadlines, and then I asked ChatGPT to create a list of commands I could paste into a to-do app on my phone. I can add things, like ask it to make me a haircut appointment. As a proudly ADHD person, I’m excited to make these kinds of administrative things easier.”
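Burkhardt’s exact setup isn’t published, but the shape of the hack is simple: flatten a chart of deadlines into text, then ask the model to emit one line per task in whatever format the to-do app accepts. A minimal sketch of that prompt-building step, with invented file and column names:

```python
# Build a ChatGPT prompt from a deadlines chart -- the shape of
# Burkhardt's hack, with invented file and column names.
import csv

rows = []
with open("syllabi_deadlines.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: course, task, due
        rows.append(f"{row['course']}: {row['task']} (due {row['due']})")

prompt = (
    "Turn each deadline below into a one-line to-do item I can paste "
    "into my to-do app, formatted as 'task !due_date #course':\n"
    + "\n".join(rows)
)
print(prompt)  # paste into ChatGPT, or send it via an API call
```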
With the administrivia under control, he can focus on writing—without the help of AI. “I think that Carleton students at large aren’t interested in using AI to do work for them. I mean, really, what are you paying for? Plus, there are very few things that I think AI would do better than me.”
Other digital humanities assistants have come to the same conclusion. Math major Erin Watson ’24 says she is a bit ahead of the average student in terms of AI use and understanding because her dad is a tech blogger and Northwestern professor who encouraged her to experiment with the models early on.
“I use GitHub Copilot to speed up coding. I did 99 percent of the work, and it formatted everything nicely, but it doesn’t help with real thought. It doesn’t understand the big picture of what I’m trying to do, and it’s definitely not a substitute for learning,” she says. “It’s useful the way Wikipedia is useful; for example, I can use AI to apply the triangular distribution function, but I still have to understand the function to know what to ask AI to do.”
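For readers who haven’t met it, the triangular distribution describes a quantity with a known minimum, most-likely value, and maximum. A minimal sketch with NumPy, using invented parameters; the point, as Watson says, is that the tool only helps if you already understand what it’s computing:

```python
# Sample from a triangular distribution -- the kind of routine task
# Watson describes offloading to a tool she still has to understand.
import numpy as np

rng = np.random.default_rng(seed=0)

# Invented parameters: minimum 2, most likely 3, maximum 6.
samples = rng.triangular(left=2.0, mode=3.0, right=6.0, size=100_000)

# Sanity check: the mean of a triangular distribution is (min + mode + max) / 3.
print(samples.mean())           # empirical mean, close to 3.67
print((2.0 + 3.0 + 6.0) / 3)    # theoretical mean: 3.666...
```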
Watson isn’t at all worried about AI’s impact on her job prospects. “I don’t see it replacing any real job that Carleton students go on to do. Any work that it can create is just mimicry,” she says. “I hear students joke: ‘I don’t want to write this paper, I should ask ChatGPT to do it.’ But they won’t. That’s not what we’re here for.”