Artificial Intelligence (AI)

Evaluating AI Tools and Output

When using artificial intelligence, it is important to evaluate the tool itself and the tool’s output critically. Ask yourself these questions:

  • What is the purpose of the tool?
  • How is this tool funded? Does the funding impact the credibility of the output?
  • What, if any, ethical concerns do you have about this tool? 
  • Does the tool ask you to upload existing content, such as an image or paper? If so, are there copyright concerns? Is there a way to opt out of including your uploaded content in the training corpus?
  • What is the privacy policy? 
  • What corpus or data was used to train the tool, or what data is the tool accessing? Consider how comprehensive the data set is (for example, does it include paywalled information like that in library databases and electronic journals?), whether it is current enough for your needs, any bias in the data set, and algorithmic bias.
  • If reproducibility is important to your research, does the tool support it?
  • Is the information the tool creates or presents credible? Because generative AI generates new content in addition to, or instead of, returning search results, it is important to read across sources to determine credibility.
  • If any evidence is cited, are the citations real or "hallucinations" (made-up citations; see the glossary)?

This work, "Artificial Intelligence (AI)," is adapted from "Artificial Intelligence (AI)" by University of Texas, used under CC BY-NC 4.0.
"Artificial Intelligence (AI)" is licensed under CC BY-NC 4.0 by Gateway Technical College Libraries.