Here are suggested questions to ask when assessing the quality of AI tools for research:
- Do you have access? Is there a cost to use this tool?
- How much scholarly information does this tool access? Are its sources comprehensive for your discipline or topic? How does it handle retracted research? Does it have access to actual data sources?
- Are the tool's results and recommendations relevant? Do the most relevant items sort to the top of search results?
- Are summaries, extracted information, and other AI outputs accurate? Are they sufficiently detailed?
- How well do the conversational features work? Do they exhibit common pitfalls of generative AI chatbots (e.g., vagueness, hallucinations, reliance on biased or limited training data)?
- How will my chatbot conversations and personal data be used? Does the tool's creator share my values on data security, ethics, and privacy?
- What am I hoping to learn? Will using this tool help me achieve that goal, or will it undermine my learning? Does this tool introduce more work to double-check AI outputs?
Instructors may use this customizable activity to empower students to ask and answer critical questions about AI research tools.
The activity can be delivered in various formats: students can complete the full report card on their own, or divide up the criteria to complete as a group.
This activity is shared under a Creative Commons CC BY-NC-SA license, so instructors are free to use and adapt it to suit their teaching.
For questions, or to share an adaptation that worked particularly well, please contact Caitlin Shanley (cshanley@temple.edu) and Olivia Given Castello (olivia.castello@temple.edu) of Temple University Libraries.