Generative AI chatbots model human language, and their goal is to mimic human conversation, not to distinguish true from untrue or biased from unbiased information. For these reasons, they may give incorrect or biased responses, and they may "hallucinate," fabricating information or conflating ideas, events, people, and concepts.
Be careful about asking an AI chatbot questions when you don't have the expertise to judge the accuracy of its response.
Currently, AI chatbot responses often mix correct and incorrect information. The technology is changing rapidly and has improved over time, but it's important to be careful about the information you enter into, and take away from, popular standalone chatbots such as ChatGPT, Bing Copilot, and Google Gemini. You may not want to rely on them for:
The AI Tools for Research page lists machine-learning and AI-powered tools to try as alternatives to popular standalone chatbots for conducting scholarly research and supporting different stages of the research process.