AI Resource Guide

A guide to artificial intelligence (AI) resources.

Limitations of GenAI Use in Research

While Generative AI (GenAI) has introduced new opportunities, there are limitations in current GenAI technology that impact academic researchers.

GenAI models “make up facts” and present them to users as authoritative; these fabrications are commonly known as ‘hallucinations.’ One study found that an average of 60% of AI-generated citations were incorrect. Controversies around GenAI hallucinations and fake citations have led to high-profile and embarrassing retractions.

Researchers should independently verify all sources and citations provided by GenAI. Need help verifying a source? Visit our Using Periodicals guide or Ask a Librarian.
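For readers comfortable with a little scripting, citation checking can be partly automated. The sketch below is a minimal Python example, not a library service; the DOI shown is a hypothetical placeholder. It queries the public Crossref REST API to confirm that a cited DOI is actually registered and resolves to the claimed work.

```python
import requests

def verify_doi(doi: str):
    """Look up a DOI in the Crossref registry; return its metadata, or None if unregistered."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        # An unregistered DOI is a strong sign the citation was hallucinated
        return None
    resp.raise_for_status()
    return resp.json()["message"]

# The DOI below is a hypothetical placeholder; substitute the one you are checking.
record = verify_doi("10.1234/example.doi")
if record is None:
    print("DOI not found in Crossref; treat the citation as unverified.")
else:
    # Compare the registered title (and authors, venue, year) against what the chatbot claimed
    print("Registered title:", record.get("title", ["(no title)"])[0])
```

Note that a registered DOI is only a first step: confirm that the registered title, authors, and venue actually match the citation as generated, since hallucinated citations often attach real DOIs to the wrong works.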

When GenAI models reference external sources, the results are frequently inaccurate, drawn from inappropriate sources, or misrepresentative of the original work. GenAI models are trained on vast amounts of data scraped from the open web and do not reliably differentiate between factual and non-factual information. For example, Google’s AI Overview feature advised users to eat glue and rocks, drawing its answers from Reddit and The Onion. GenAI models have also been shown to overgeneralize and misrepresent studies when summarizing them.

Researchers should independently evaluate sources and verify conclusions provided by GenAI. Need help evaluating a source? Ask a Librarian.

The output of GenAI models reflects the limitations and biases of their source material and training, resulting in generated text and images that uncritically replicate biases, stereotypes, and bigotry. GenAI models have also been shown to exhibit bias against users based on their names and dialects.

GenAI users should be alert to potential biases and critical of how those biases can shape their work and their own thinking. Need help evaluating a source? Ask a Librarian.

GenAI models are trained to seek the approval of their users, resulting in chatbots that are flattering, agreeable, and unwilling to correct the user, a behavior commonly known as ‘sycophancy.’ After rolling back an update to GPT-4o due to sycophancy, OpenAI wrote that the model’s desire to please the user led to “validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions[.]” Reports have documented GenAI chatbots fueling delusions and psychosis in users.

GenAI sycophancy can harm the research process because chatbots are unlikely to correct a false premise or challenge assumptions, reinforcing confirmation bias: “the tendency to seek information that aligns with existing beliefs.” Researchers should seek out opposing perspectives and be wary of an overly complimentary chatbot.
