Artificial Intelligence in Teaching & Learning

Information for Students and Faculty about ChatGPT and Other AI Tools

Generative AI and Information Literacy

Generative artificial intelligence (AI) is a relatively new technology that is developing quickly. Like the internet in general, AI tools like ChatGPT, Gemini, and Claude are not simply good or bad when it comes to finding and using information. Instead, they offer some new ways to access and interact with information.

The purpose of this guide is to help you learn about generative AI tools and to think critically about how they can help or hinder your academic work.

Students, please first check with your professor about whether content produced by ChatGPT or another generative AI tool is acceptable before using it for any course assignments. 

How Does It Know So Much?

Where does the information come from?

Like other large language models, ChatGPT was trained on a large body of text, which allows it to generate text in response to a prompt. Partial descriptions of the training data exist, and ChatGPT itself will offer a partial list when asked. However, the complete set of texts used to train ChatGPT has not been disclosed.

When ChatGPT answers a question, it does not automatically cite where the information came from. This is because it generates text by predicting likely word sequences based on patterns drawn from a wide variety of sources, so a response usually does not come from any single source. As a result, you typically cannot trace the response back to its origin.

Can ChatGPT provide references?

The short answer is: maybe. When prompted, ChatGPT will provide references. But these references may not be where the information actually came from and, more importantly, may not be real sources. ChatGPT can easily hallucinate (make up) citations that sound plausible. This is an issue with other generative AI tools, like Gemini and Claude, as well.

Don't Believe Everything You Read

It's always important to evaluate information for credibility, regardless of where you find it. This is especially true for generative AI responses. Here are two strategies for evaluating information provided by generative AI tools.

Verify Citations

If a generative AI tool provides a reference, first confirm that the source exists. Try copying the citation into a search tool like Google Scholar or the Library's Discovery Search tool. Do a Google search for the lead author. Check for the publication in the Library Publications list.

Second, if the source is real, read the abstract or the full article to check that it contains what ChatGPT says it does.

Lateral Reading

Don't take what ChatGPT tells you at face value. Look to see if other reliable sources contain the same information and can confirm what ChatGPT says. You could start by searching for a Wikipedia entry on the topic or doing an internet search to see if a person that ChatGPT mentions actually exists.

When you consult multiple sources, you get the full benefit of lateral reading and reduce the risk of bias or misinformation that comes from relying on a single source.

This three-minute video from the University of Louisville describes lateral reading to evaluate information on websites. You can adapt the steps in this video to fact-check information from AI chatbots.