
Generative AI and Chatbots

This guide offers advice on generative AI chatbots and tools, and how best to use them to support your work.

Hallucination and misinformation

Generative AI chatbots model human language, and their goal is to mimic human conversation. They do not distinguish between true and false information or between biased and unbiased information, so they may give incorrect or biased responses. They may also "hallucinate," fabricating information or conflating ideas, events, people, and concepts.

Be careful about asking AI chatbots questions on topics where you don't have the expertise to judge the accuracy of the response.

Exercise caution

Currently, AI chatbot responses often include a mix of correct and incorrect information. The technology is changing rapidly and has improved over time, but it is still important to be careful about the information you enter into, and the answers you get from, popular standalone chatbots such as ChatGPT, Bing Copilot, and Google Gemini. You may not want to rely on them for:

  • Medical, legal, or financial advice
  • High-stakes decision making
  • Answering very specific questions or questions that require up-to-date information
  • Finding scholarly sources on academic or controversial topics

The AI Tools for Research page lists machine learning and AI-powered tools to try as alternatives to popular, standalone chatbots for conducting scholarly research and supporting different parts of the research process.
