ChatGPT, Google Bard, Claude, Poe, and, and, and. The news is full of new AI tools every day, along with both cautions and enthusiastic ideas about how to use them. As of August 31, only 18% of Americans had used ChatGPT. Generative AI tools such as Google Bard can make your routine work a little easier, more efficient, and, maybe, more effective. At the same time, let's remind ourselves of some significant cautions.
We encourage faculty to review the most recent East Laird Times, which has specific advice about AI syllabus statements and using AI tools for course design and/or academic work.
How Generative AI works
A brief reminder on how generative AI, especially generative AI based on large language models, works.
Generative AI relies on two elements: the data it is trained on and an algorithm that queries the data it has available. The product is an assembly of words that are statistically likely to appear together. The more data available on a particular topic, and the larger the sample size, the more reliable this assembly is. This is a simplified explanation; for a more thorough and complex one, try this New Yorker article.
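The "statistically probable assembly" idea can be sketched with a toy bigram model. This is a drastic simplification, not how a real large language model works, and the tiny corpus below is made up for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; real models train on billions of documents.
corpus = "grate the zucchini and salt the zucchini and drain the zucchini well".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Pick each next word in proportion to how often it followed the previous one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("grate"))
```

With more recipe text in the "corpus," the word counts get more representative and the generated phrases more plausible; with too little text on a topic, the model's choices get arbitrary, which is the sparse-data problem described below.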
When we add our questions, prompts, and follow-up clarifications to an engine like Google Bard, our language is also added to the data — and the assembly of words that comes back as an answer becomes part of the available data. For example, asking Bard about zucchini recipes will bring back a set of suggestions, reassembled from the wealth of zucchini recipes out there. Asking the question again may not bring a lot of new information because, well, there is only so much you can do with zucchini.
When we ask questions about, shall we say, more obscure research or food recipes, the information may be sparser or downright incorrect because the engine pulls from a much smaller data pool — and here is where it also starts making connections that are incorrect, leading to so-called hallucinations.
Using AI for your own work
So, how can you use Google Bard and other generative AI engines to make your life easier?
- Identify what kinds of writing tasks you struggle with. Are you worried about how your emails may sound? Do you need a list of Tuesdays and Thursdays for this term to add to your syllabus? Do you need some examples of bad writing or problematic work so that students can critique it?
- Start up Google Bard and write down your general idea. For instance: if you need to pull together a presentation about how AI can make your life easier, type in: “How can Bard make office work easier?”
- Pick one or two of the options that resonate with you and ask more specific questions, especially if you need more specific information or instructions.
Chances are that you already know the options or answers yourself, but it might have taken you longer to reassemble them from memory. Or you may discover that Bard hallucinated in response to that particular question.
When it comes to emails or parts of reports or other texts (that do not contain proprietary or identifiable information), you can paste the text into an AI engine and ask it to rewrite the text to be more concise, more formal, or less formal. Before doing this, always be sure that this use of AI is acceptable to the person you're sharing the writing with (e.g., a professor or supervisor). And be sure to verify that any facts or quotations are accurate. This use of the tool may be helpful for folks who do not consider themselves strong writers, who are dyslexic, or who struggle with organizing material.
If you spend time on tasks that are almost but not quite automatic — e.g., generating a list of dates for your new syllabus, or coming up with a certain number of bad examples — ask Google Bard to help. If nothing else, the response will make your own idea generation faster.
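The syllabus-dates task is also easy to check by hand. As a sketch of what you would be asking Bard to do, here is one way to generate every Tuesday and Thursday in a term; the term dates below are placeholders, not Carleton's actual academic calendar:

```python
from datetime import date, timedelta

def class_meetings(start, end, weekdays=(1, 3)):
    """List every date between start and end (inclusive) that falls on the
    given weekdays. Monday is 0, so (1, 3) means Tuesday and Thursday."""
    day, out = start, []
    while day <= end:
        if day.weekday() in weekdays:
            out.append(day)
        day += timedelta(days=1)
    return out

# Hypothetical two-week stretch of a fall term; substitute your own dates.
for d in class_meetings(date(2023, 9, 11), date(2023, 9, 22)):
    print(d.strftime("%a %b %d"))
```

Because the correct answer is easy to verify, tasks like this are a low-risk place to practice prompting and to spot-check an AI engine's output against something you can confirm yourself.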
Remember to be cautious
You may have heard President Byerly talk about the distinction between routine work and proprietary work. Getting started on a generic email or asking for some ideas about a general topic is routine work that may benefit from the use of a generative AI engine like Google Bard. Proprietary information — that is, information specific to your own ideas or research, or to Carleton policies or plans — or information that can identify you or others should not be added to your prompts or questions. As mentioned above, writers should check with their professors and/or supervisors before using AI to generate text they will present as their own writing.
Special note regarding Generative AI: Although some providers claim they won't incorporate user data into their learning models, it is advisable, and considered best practice, to avoid putting any medium- or high-risk data into an AI platform such as ChatGPT or Google Bard. Medium- and high-risk data include personal and identifiable information, Social Security numbers, bank information, FERPA-protected data, and Carleton proprietary data and information.
There has been an increase in security breaches at higher education institutions for a variety of reasons, and these breaches consistently target high-risk data like Social Security numbers and health records. If such information from Carleton were to end up in an AI engine, the college could be held legally and financially liable.
Learning more
- Join Carleton’s AI Community of Practice — we have a relatively active Google Group and a couple of meetings each term. The next meeting will be on Sept. 20, focusing on tools we are already using on campus that have AI components; for example, we will learn about some Adobe Creative Cloud features, Google Translate, ArcGIS features, and writing aids.
- ITS has started collecting some AI information in our Service Catalog.
- For some excellent strategies connected to writing, check out these Writing Across the Curriculum resources.
- The most recent East Laird Times has specific advice about AI syllabus statements and using AI tools for course design and/or academic work.
- Try it out — start with Google Bard, using your Carleton account, and share your experiences with others.