AI Regulation and Policy in the United States
The United States AI regulatory landscape is defined by a patchwork of federal executive actions, sector-specific agency guidance, and state-level legislation — without a single comprehensive federal AI statute as of 2024. This page maps the institutional structure, legal instruments, classification frameworks, and active policy tensions that shape how AI systems are developed, deployed, and governed across the country. Professionals navigating procurement, compliance, or deployment decisions must contend with overlapping jurisdictions and evolving standards from bodies including NIST, the FTC, the FDA, and the White House Office of Science and Technology Policy (OSTP).
- Definition and Scope
- Core Mechanics or Structure
- Causal Relationships or Drivers
- Classification Boundaries
- Tradeoffs and Tensions
- Common Misconceptions
- Regulatory Compliance Sequence
- Reference Table: Key Instruments and Bodies
Definition and Scope
AI regulation in the United States encompasses the legal rules, executive directives, agency guidance documents, and voluntary standards that govern the design, training, testing, deployment, and monitoring of AI systems. Scope extends across federal and state jurisdictions, covering both general-purpose AI and domain-specific applications in health care, finance, employment, housing, and critical infrastructure.
The foundational federal policy instrument is Executive Order 14110, signed in October 2023, which directed 16 federal agencies to produce guidance, assessments, and standards within defined timelines. EO 14110 does not carry the force of statute but obligated agencies including the Department of Commerce, the Department of Health and Human Services, and the Department of Defense to act within 90 to 365 days on specific deliverables.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0), published in January 2023, defines voluntary standards for trustworthy AI covering four core functions: Govern, Map, Measure, and Manage. The AI RMF is referenced in federal procurement guidance and forms the backbone of sector-specific compliance programs, though it carries no mandatory enforcement mechanism at the federal level.
State-level regulation adds a second layer. Illinois, California, Colorado, and Texas have each enacted AI-related statutes targeting algorithmic employment decisions, automated decision tools, and consumer privacy. The Colorado AI Act (SB 205), signed in May 2024, requires developers and deployers of "high-risk" AI systems to perform impact assessments and disclose AI use to affected consumers — making Colorado the first state to pass comprehensive risk-tiered AI legislation.
Core Mechanics or Structure
The US AI governance structure operates through four distinct instrument types:
Executive Orders and Presidential Directives establish agency mandates and inter-agency coordination requirements without passing through Congress. EO 14110 created the AI Safety Institute Consortium (AISIC) under NIST and directed the Office of Management and Budget (OMB) to issue AI procurement standards for federal agencies, which it did in March 2024 via OMB Memorandum M-24-10.
Agency Guidance and Rulemaking translates executive direction into sector-specific compliance requirements. The FDA regulates AI-enabled medical devices under the Software as a Medical Device (SaMD) framework, with over 950 AI/ML-enabled device authorizations issued as of 2023. The FTC applies Section 5 of the FTC Act to AI-related deceptive practices and has issued enforcement guidance on algorithmic bias and deepfakes.
Voluntary Standards issued by NIST, the IEEE, and ISO/IEC provide technical benchmarks that agencies, contractors, and courts increasingly reference. The companion NIST AI RMF Playbook provides 130+ suggested actions mapped to the four RMF functions.
State Statutes create binding obligations for developers and deployers operating within specific state jurisdictions, often with private rights of action and penalty structures ranging from $500 to $10,000 per violation depending on the statute.
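As a rough illustration, the sketch below (Python) models these instrument types as records with the two attributes that matter most for compliance triage: whether the instrument binds private parties, and whether it carries a private right of action. The class names, fields, and example values are illustrative assumptions keyed to the reference table at the end of this page, not an official schema.

```python
from dataclasses import dataclass
from enum import Enum

class InstrumentType(Enum):
    """The four instrument types described above."""
    EXECUTIVE_ORDER = "executive order"
    AGENCY_GUIDANCE = "agency guidance / rulemaking"
    VOLUNTARY_STANDARD = "voluntary standard"
    STATE_STATUTE = "state statute"

@dataclass
class Instrument:
    name: str
    kind: InstrumentType
    binds_private_parties: bool    # does it create obligations for private actors?
    private_right_of_action: bool  # can affected individuals sue directly?

# Illustrative entries; attributes follow the reference table on this page.
INSTRUMENTS = [
    Instrument("EO 14110", InstrumentType.EXECUTIVE_ORDER, False, False),
    Instrument("NIST AI RMF 1.0", InstrumentType.VOLUNTARY_STANDARD, False, False),
    Instrument("Illinois BIPA", InstrumentType.STATE_STATUTE, True, True),
]

# Example: which instruments expose a private deployer to direct lawsuits?
suable = [i.name for i in INSTRUMENTS if i.private_right_of_action]
print(suable)  # ['Illinois BIPA']
```

The two boolean fields capture the structural point of this section: executive orders and voluntary standards shape behavior without directly binding private parties, while state statutes can do both.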
Causal Relationships or Drivers
Three structural forces drive the current regulatory trajectory:
Incident-driven federal agency activation. The FTC's 2023 enforcement sweep targeting AI-generated voice cloning fraud and the FDA's 2021 action plan for AI/ML-based Software as a Medical Device were each triggered by documented harm events, not proactive legislative cycles. This reactive posture produces regulatory gaps between harm occurrence and rule issuance.
Congressional gridlock. As of 2024, over 40 AI-related bills, including the AI Labeling Act and the NO FAKES Act, had been introduced in the 118th Congress without a single comprehensive AI statute reaching the floor for a vote. This legislative stasis pushes governance weight onto executive agencies and state legislatures.
International regulatory pressure. The European Union's AI Act, the first binding risk-tiered AI regulation globally, entered into force in August 2024. US companies with EU market exposure face extraterritorial compliance obligations, which in turn pressures domestic policy toward alignment with EU risk classification models. For context on how AI systems are structured at the technical level, see the coverage of AI system components and architecture.
Classification Boundaries
US AI regulation does not yet employ a single unified risk classification system, but agency practice and emerging state law reveal four functional tiers:
- Prohibited or restricted applications: Biometric surveillance of workers (regulated under Illinois BIPA), real-time facial recognition by law enforcement (restricted in 16 US cities as of 2023), and AI-generated nonconsensual intimate imagery (criminalized in 48 states).
- High-risk applications: AI systems used in employment screening, credit scoring, health diagnostics, and public benefit administration. Subject to impact assessment requirements under Colorado SB 205 and proposed federal rules.
- Limited-risk applications: AI systems with transparency obligations, such as chatbots required to disclose their non-human identity under California's B.O.T. Act (SB 1001, effective 2019).
- Minimal-risk applications: General productivity tools, spam filters, and recommendation engines operating without significant individual-level impact. No current mandatory federal requirements.
For a broader look at how AI ethics and responsible AI principles intersect with classification decisions, see the coverage of AI ethics and responsible AI, which supplies the definitional context underpinning risk-tier logic.
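The four-tier scheme above lends itself to a simple triage sketch. The following Python fragment, a minimal illustration only, maps example use cases from this page to tiers and attaches rough obligation labels; the function and the mapping are assumptions for exposition, not any statute's actual legal test.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = 1  # e.g., nonconsensual intimate imagery
    HIGH = 2        # e.g., employment screening, credit scoring
    LIMITED = 3     # e.g., chatbots with disclosure duties
    MINIMAL = 4     # e.g., spam filters, productivity tools

# Illustrative keyword-to-tier mapping; a real determination turns on
# statutory definitions and facts, not labels.
USE_CASE_TIERS = {
    "employment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "health_diagnostics": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> list[str]:
    """Rough obligation labels per tier (exposition only, not legal advice)."""
    if tier is RiskTier.PROHIBITED:
        return ["do not deploy"]
    if tier is RiskTier.HIGH:
        return ["impact assessment (e.g., Colorado SB 205)", "consumer disclosure"]
    if tier is RiskTier.LIMITED:
        return ["AI-identity disclosure"]
    return []  # minimal risk: no current mandatory federal requirements

print(obligations(USE_CASE_TIERS["credit_scoring"]))
# ['impact assessment (e.g., Colorado SB 205)', 'consumer disclosure']
```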
Tradeoffs and Tensions
Preemption vs. state innovation. Federal preemption of state AI laws would create regulatory uniformity but would extinguish California's, Colorado's, and Illinois's more stringent protections. Industry groups including TechNet and the Chamber of Commerce favor federal preemption; civil rights organizations oppose it, citing state laws as essential floors.
Voluntary vs. mandatory standards. NIST's AI RMF is widely adopted but carries no enforcement mechanism. OMB M-24-10 mandates its use for federal agencies, but private-sector adoption remains uneven. NIST documented 1,200+ organizations engaged with the AI RMF as of mid-2024, yet compliance verification mechanisms are absent.
Speed vs. precision. Executive actions deploy rapidly but lack permanence and democratic accountability. Statutory AI law provides durability but typically lags technological change by 3 to 7 years based on historical sector analogies (telecommunications, financial services).
Innovation vs. harm prevention. The National AI Initiative Act of 2020 explicitly prioritizes US AI leadership, creating tension with precautionary regulatory models. This dynamic is central to debates over mandatory pre-deployment testing requirements, particularly for large language models and generative AI systems.
Common Misconceptions
Misconception: The US has no AI law. Correction: Dozens of state statutes address AI directly, and federal agencies enforce existing consumer protection, civil rights, and financial regulation statutes against AI-based violations. Statutes including the FTC Act and Title VII apply to AI system outputs.
Misconception: NIST AI RMF compliance equals legal compliance. Correction: The AI RMF is a voluntary framework. Aligning with it does not satisfy state statutory obligations or sector-specific federal rules. OMB M-24-10 makes it mandatory for federal agencies, not for private entities.
Misconception: AI regulation only affects large technology companies. Correction: Colorado SB 205 applies to any company deploying a "high-risk AI system" affecting Colorado residents, regardless of company size. Illinois BIPA has resulted in class actions against employers with as few as 50 employees.
Misconception: Executive Order 14110 created binding regulations. Correction: EO 14110 directed agencies to act; the resulting agency guidance documents vary in legal force. Some constitute binding rules (issued through notice-and-comment rulemaking); others are non-binding guidance subject to revision without public process.
Regulatory Compliance Sequence
The following sequence reflects the operational steps organizations follow when assessing US AI regulatory exposure, drawn from OMB M-24-10, NIST AI RMF 1.0, and state statute structures; a minimal code sketch of the sequence as a working checklist follows the list:
- Identify system classification — Determine whether the AI system falls into a prohibited, high-risk, limited-risk, or minimal-risk category under applicable state law and federal sector guidance.
- Map jurisdictional exposure — Identify which states' residents are affected and which federal agencies have sector authority (FDA, FTC, CFPB, EEOC, FHFA).
- Conduct impact assessment — Required under Colorado SB 205 for high-risk systems; recommended under NIST AI RMF for all systems. Document data sources, model logic, and potential discriminatory outputs.
- Inventory data governance obligations — Cross-reference AI inputs against state privacy laws (CCPA/CPRA) and federal sector rules (HIPAA, GLBA).
- Apply transparency and disclosure requirements — Determine whether chatbot disclosure (California SB 1001), employment AI notice (New York City Local Law 144), or automated decision explanation obligations apply.
- Establish monitoring and incident protocols — OMB M-24-10 requires federal agencies to maintain AI use inventories and annual impact assessments; private-sector equivalents are emerging in state law.
- Document governance structure — Assign accountability roles consistent with NIST AI RMF's "Govern" function; record risk acceptance decisions.
- Review against updated agency guidance annually — AI regulatory instruments are updated on sub-annual cycles; static compliance postures generate exposure gaps.
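Here is the promised sketch of the sequence as a working checklist, assuming a simple linear workflow in Python; the data structure and helper function are illustrative scaffolding, not a prescribed compliance tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComplianceStep:
    name: str
    done: bool = False
    notes: str = ""

def new_checklist() -> list[ComplianceStep]:
    """Build the eight-step sequence described above, in order."""
    return [ComplianceStep(n) for n in (
        "Identify system classification",
        "Map jurisdictional exposure",
        "Conduct impact assessment",
        "Inventory data governance obligations",
        "Apply transparency and disclosure requirements",
        "Establish monitoring and incident protocols",
        "Document governance structure",
        "Review against updated agency guidance",
    )]

def next_open_step(checklist: list[ComplianceStep]) -> Optional[ComplianceStep]:
    """Return the first incomplete step, enforcing the sequence order."""
    return next((s for s in checklist if not s.done), None)

steps = new_checklist()
steps[0].done = True
steps[0].notes = "High-risk: employment screening affecting CO residents"
print(next_open_step(steps).name)  # Map jurisdictional exposure
```

In practice each step would carry jurisdiction-specific sub-items; the linear structure mirrors the ordering logic of the list, where classification and jurisdiction mapping gate everything downstream.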
For professionals engaged with procurement decisions, AI system procurement and vendor evaluation covers how regulatory compliance evidence factors into vendor selection frameworks. The full landscape of AI systems this regulation applies to is indexed at the Artificial Intelligence Systems Authority.
Reference Table: Key Instruments and Bodies
| Instrument / Body | Type | Scope | Enforcement |
|---|---|---|---|
| EO 14110 (Oct 2023) | Executive Order | Federal agencies | Agency-level; no private right of action |
| NIST AI RMF 1.0 (Jan 2023) | Voluntary Standard | All sectors | None directly; mandatory for federal agencies via OMB M-24-10 |
| OMB M-24-10 (Mar 2024) | Agency Memo | Federal agencies | Agency accountability |
| Colorado SB 205 (May 2024) | State Statute | High-risk AI, CO residents | AG enforcement (exclusive; no private right of action) |
| Illinois BIPA (740 ILCS 14) | State Statute | Biometric data, IL residents | Private right of action; $1,000–$5,000/violation |
| NYC Local Law 144 (2023) | Municipal Law | Automated employment tools, NYC | Agency enforcement; $500–$1,500/violation |
| FTC Act §5 | Federal Statute | Deceptive/unfair AI practices | FTC civil penalties |
| FDA SaMD Framework | Regulatory Guidance | AI medical devices | FDA premarket review; 510(k)/PMA |
| National AI Initiative Act (2020) | Federal Statute | Federal R&D coordination | No direct private enforcement |