Yes, but how it is regulated is important.
AI is not a single technology but a family of distinct tools and systems. The United States currently regulates AI as discrete products and applications, but because the technology is complicated, the boundaries of government responsibility are not always clear-cut. AI also poses significant ethical risks for users, developers, and society at large.
At the federal level, specific AI legislation does not yet exist. The German Marshall Fund maintains an AI policy tracker that maps federal actions and initiatives on AI, comparing their goals, applications, enforcement, timing and legal outcomes. The National Conference of State Legislatures likewise maintains a tracker summarizing the legislation enacted and resolutions adopted in the states so far.
While federal AI policy is still being developed, President Biden issued an Executive Order establishing new standards for AI safety and security. The order aims to protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and ensure responsible and effective government use of AI.
Individual states and cities have each enacted their own laws and rules regulating AI and how it is used.
American Association for the Advancement of Science, Center for Scientific Responsibility: Decision Tree for the Responsible Application of Artificial Intelligence
Select Committee on Artificial Intelligence of the National Science and Technology Council: National Artificial Intelligence Research and Development Strategic Plan 2023 Update
White House: Blueprint for an AI Bill of Rights