Basic definitions
There is no single, agreed-upon definition of artificial intelligence (AI)[1]. AI is both a technological concept and a category of tools powered by advanced mathematical models and data.
- An algorithm can be understood as “rules” used to organize, evaluate and assess matches and patterns in data.
- Algorithms are developed by humans and coupled with data to make an AI Model.
- An AI system comprises an AI model along with the humans, organizations and other technologies that enable the model to run. AI systems can be thought of as advanced inference engines, used to produce predictions, inform decisions and take automated actions[2].
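The algorithm-model-system relationship described above can be sketched with a toy Python example (all names and data here are invented for illustration; real AI models are far more complex): an algorithm learns a rule from historical data, producing a model that is then used for inference on new data.

```python
# Toy illustration of "algorithm + data = model" and "model + inference = prediction".
# Names and numbers are invented for illustration only.

def train_threshold_model(examples):
    """Algorithm: a simple rule that learns a cutoff from labeled data.

    `examples` is a list of (score, approved) pairs; the learned "model"
    is just the midpoint between the highest rejected score and the
    lowest approved score.
    """
    approved = [score for score, ok in examples if ok]
    rejected = [score for score, ok in examples if not ok]
    return (max(rejected) + min(approved)) / 2  # the "model": a learned parameter

def predict(model, score):
    """Inference: apply the learned model to new data to produce a prediction."""
    return score >= model

# Historical data: the "data" half of algorithm + data = model.
history = [(520, False), (580, False), (640, True), (700, True)]
model = train_threshold_model(history)   # learned cutoff: 610.0

print(predict(model, 650))  # a prediction about a new case, not a guarantee
```

The surrounding "system" would include everything this sketch omits: the people choosing the training data, the organization deciding how predictions are used, and the infrastructure the model runs on.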
Legally, the National Artificial Intelligence Initiative Act of 2020 defines AI as follows:
- “Machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—(A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.”
Example use cases[3]
- Finance: decisions about loan approvals, investment recommendations
- National security: analyzing video surveillance footage
- Healthcare: diagnostic imaging
- Criminal justice: facial recognition
- Transportation: self-driving cars
- Other, everyday uses: autocorrect, ChatGPT, customer service chatbots, smart assistants, facial recognition, wearable fitness trackers, product recommendations.
Benefits
- Efficiency: Repetitive tasks can be automated with AI, saving time and limited resources.
- Data analysis: AI systems can analyze large amounts of data faster than humans can. They can also make predictions about future outcomes based on historical information.
- Personalization: AI systems can be tailored to specific interests, preferences and behaviors.
Some things to watch out for
- Uncertainty and error: AI model outputs are predictions, made by matching patterns learned from past events or instances against new data. Because these outputs are probabilities, they may not always be accurate.
- Bias and discrimination: AI models process vast amounts of historical data, organizing it by rules and using labels often provided by humans, and so can inherit some element of bias from that data and those labels.
- Lack of transparency/opaqueness: Some AI models are black boxes: they cannot adequately explain to human users how they arrive at their conclusions or why their outputs should or should not be relied upon.
- Privacy[4]: Institutions using automated systems should seek user permission and consent regarding collection, use, access, transfer, and deletion of personal data.
- Disinformation[5]: From deepfake videos to online bots on social media, there is the danger of AI systems undermining social trust.
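The uncertainty point above can be made concrete with a minimal sketch (the weights and the `approval_probability` function are invented for illustration, not drawn from any real system): a model typically outputs a probability, and a downstream decision rule converts it into a yes/no answer that can be wrong.

```python
# Sketch: a model outputs a probability; a threshold turns it into a decision.
# The weights below are invented for illustration only.
import math

def sigmoid(x):
    """Squash any real number into a probability between 0 and 1."""
    return 1 / (1 + math.exp(-x))

# Hypothetical learned parameters for a loan-approval model.
weight, bias = 0.01, -6.0

def approval_probability(credit_score):
    """The model's raw output is a probability, not a certainty."""
    return sigmoid(weight * credit_score + bias)

p = approval_probability(650)   # about 0.62 for these invented weights
decision = p >= 0.5             # a threshold forces a hard yes/no
print(round(p, 2), decision)
```

A 0.62 probability means the model expects to be wrong in roughly 4 out of 10 similar cases; treating the hard yes/no decision as certain hides that residual uncertainty.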
Legal implications
AI needs to be regulated, but how remains an open question.
- At the federal level, legislation regulating AI writ large does not yet exist. Current legislative bills propose to protect data privacy, require disclosure of AI use, require risk management when AI is used by the federal government, honor copyright and promote AI research and development.
- Individual states and cities have enacted their own laws and rules regulating AI and its use. While awaiting federal action, state legislatures across the country have introduced AI bills targeting data privacy, bias and discrimination, facial recognition technology and deepfakes[6].
Additional Resources
American Association for the Advancement of Science (AAAS) Center for Scientific Evidence in Public Issues https://www.aaas.org/programs/epi-center/AI
“Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.” (2023) https://sagroups.ieee.org/global-initiative/wp-content/uploads/sites/542/2023/01/ead1e.pdf
Hoffmann, M. and Frase, H. "Adding Structure to AI Harm: An Introduction to CSET's AI Harm Framework." Center for Security and Emerging Technology. (July 2023). https://cset.georgetown.edu/publication/adding-structure-to-ai-harm/?utm_source=Center+for+Security+and+Emerging+Technology
“How Should We Regulate AI? Practical strategies for regulation and risk management from the IEEE1012 Standard for System, Software, and Hardware Verification and Validation.” (2023) https://ieeeusa.org/product/how-should-we-regulate-ai/
“Trustworthy Evidence for Trustworthy Technology: An Overview of Evidence for Assessing the Trustworthiness of Autonomous and Intelligent Systems.” (2022) https://ieeeusa.org/assets/public-policy/committees/aipc/IEEE_Trustworthy-Evidence-for-Trustworthy-Technology_Sept22.pdf
Sources
1 Firth-Butterfield, K. and Silverman, K. “Artificial Intelligence – Foundational Issues and Glossary.” American Association for the Advancement of Science (2022). https://doi.org/10.1126/aaas.adf0782
2 Mittelsteadt, M. “Artificial Intelligence: An Introduction for Policymakers.” Mercatus Center at George Mason University (2023). https://www.mercatus.org/research/research-papers/artificial-intelligence-intro-for-policymakers
3 West, D. and Allen, J. “How artificial intelligence is transforming the world.” Brookings Institution (2018). https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/
4 “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” White House (2022). https://www.whitehouse.gov/ostp/ai-bill-of-rights/
5 Littman, M. et al. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University (2021). http://ai100.stanford.edu/2021-report
6 Heath, R. “Exclusive: States are introducing 50 AI-related bills per week.” Axios (2024). https://www.axios.com/2024/02/14/ai-bills-state-legislatures-deepfakes-bias-discrimination