Sometimes, but be skeptical.
As an AI model is trained on more data, it generally becomes more likely to provide accurate information. However, it is well documented that generative AI models can hallucinate: they can generate fictional, erroneous, or unsubstantiated information in response to queries, which can in turn spread misinformation. AI models can also reproduce racial, gender, and class stereotypes and biases present in their training data. It is therefore important to cross-check and verify information provided by AI tools against other reliable sources.
Key Takeaways
1. Generative AI models can sometimes hallucinate and fabricate information.
2. Algorithmic bias can have harmful impacts on users.
3. Verify AI outputs by checking them against other trusted sources.
Additional Resources
National Institute of Standards and Technology: Toward a Standard for Identifying and Managing Bias in AI
The Brookings Institution: Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms
U.S. Equal Employment Opportunity Commission: Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems