US AI Regulation 2026: Essential State Law Guide

Nanobanana2 Team · April 3, 2026

Washington has largely stepped back from federal AI regulation, but the states haven't. In 2026, 78 AI chatbot safety bills are moving through legislatures in 27 states, with 98 chatbot-specific bills tracked across 34 states and three federal proposals (AI2Work, 2026). The result: a fragmented but increasingly real compliance landscape that every AI company and developer needs to understand.

This isn't theoretical anymore. California's SB 243 is live. Oregon's SB 1546 is signed. Tennessee's criminal penalties for harmful chatbot design are advancing. If you build, deploy, or integrate AI chatbots in the US, you're already operating in regulated territory.

Key Takeaways

  • 78 AI chatbot safety bills are active in 27 states as of 2026 (AI2Work, 2026)
  • California's SB 243 (companion chatbot disclosure) took effect January 1, 2026 and is being adopted as a legislative template
  • Oregon's SB 1546 requires chatbots to disclose non-human status and have suicide/self-harm protocols
  • Tennessee's proposed legislation would make training AI to encourage self-harm a felony

Why Are States Acting While the Federal Government Stalls?

The Trump administration's executive order on AI in early 2026 signaled that federal AI regulation would remain light — prioritizing US competitiveness over consumer protection mandates. That left a regulatory vacuum, and state legislatures rushed to fill it.

California is the bellwether. Its SB 243, which took effect January 1, 2026, established disclosure requirements for AI companion chatbots — services that form emotional relationships with users. When California acts, other states follow. SB 243 is now being cited as a template by legislators in at least a dozen other states (Troutman Privacy, 2026).

The practical result for companies: there's no single federal standard to comply with. Instead, there's a growing patchwork of state laws with different definitions, thresholds, and penalties.

What AI Laws Are Already on the Books?

California: Disclosure and Health AI Focus

California's three active AI bills in 2026 target specific high-risk categories:

  • SB 243 (effective Jan. 1, 2026): Companion chatbot disclosure requirements
  • SB 1146: Regulates AI in health-related advertising
  • AB 2575: Regulates AI use in health care services

California Governor Newsom has been explicit: the state, not the federal government, will determine how AI companies manage risk within California's borders. With the world's fifth-largest economy, California's rules effectively set a national floor for any company that wants California market access.

Oregon: The Baseline Chatbot Safety Law

Oregon Governor Tina Kotek signed SB 1546 into law after it passed the state Senate 26-1. The bill establishes two baseline requirements for any chatbot deployed in Oregon (AI Chatbot Legislation, 2026); a minimal code sketch follows the list:

  1. Human disclosure: If a reasonable person might believe the chatbot is human, users must be notified it's not
  2. Mental health protocols: Chatbot providers must have documented procedures for handling suicidal ideation and self-harm signals from users
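
Both requirements translate directly into product behavior. Below is a minimal Python sketch of what a compliant baseline might look like; the disclosure wording, crisis keyword list, and function names are illustrative assumptions, not statutory language.

```python
# A minimal sketch of SB 1546's two baseline requirements. The disclosure
# wording, crisis keyword list, and hotline message are illustrative
# assumptions, not language from the statute.

CRISIS_PATTERNS = ("kill myself", "suicide", "self-harm", "end my life")

DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

CRISIS_RESPONSE = (
    "It sounds like you may be going through something difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)


def start_session() -> str:
    """Requirement 1: surface the non-human disclosure before any conversation."""
    return DISCLOSURE


def screen_message(user_message: str) -> str | None:
    """Requirement 2: route crisis signals through a documented protocol."""
    lowered = user_message.lower()
    if any(pattern in lowered for pattern in CRISIS_PATTERNS):
        # A production protocol would also log the event and escalate per the
        # documented procedure; bare keyword matching is far too coarse alone.
        return CRISIS_RESPONSE
    return None  # no crisis signal detected; normal response flow continues
```

In practice, keyword matching would be backed by a classifier and a human escalation path, but the structural point stands: the disclosure and the protocol become code paths you can test and document.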

These requirements apply to any chatbot accessible to Oregon residents, regardless of where the company is incorporated. A New York-based AI startup with Oregon users needs to comply.

Tennessee: Criminal Penalties for Harmful AI Design

Tennessee's HB 1470/SB 1580 package goes furthest. The proposed legislation would make it a felony to:

  • Train an AI model to encourage self-harm or suicide
  • Develop chatbots designed to form emotional dependencies that lead to harmful outcomes

The bills have passed the Senate and advanced through a House subcommittee (Future of Privacy Forum, 2026). If signed, Tennessee would be the first state to impose criminal liability on AI developers for harmful design decisions, not just civil penalties.

The trigger: the 2024 death of a teenager who had formed an intense emotional attachment to a Character.AI chatbot. That case drove legislative urgency and is being cited in hearings nationwide.

What Should AI Companies Do Right Now?

Immediate Compliance Requirements

If you operate chatbots accessible to US users, you likely need:

  • Clear non-human disclosure: Not buried in terms of service; surfaced in the actual interface
  • Mental health safety protocols: Documented escalation procedures for crisis signals
  • Geographic compliance tracking: Different rules apply in different states

The compliance trap: "We don't operate in Oregon" isn't as clean as it sounds. If a user in Oregon can access your chatbot, whether through a web browser or a mobile app, Oregon's regulators can assert jurisdiction. The safest interpretation is that state laws apply wherever your chatbot is accessible, not where your company is located.
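
One way to operationalize that interpretation is to track per-state obligations in code and default unknown locations to the union of everything. The sketch below is hypothetical; the requirement flags are simplified from the laws discussed in this article and are not a complete legal inventory.

```python
# Hypothetical geographic compliance lookup. The requirement flags are
# simplified from the laws discussed in this article; this is not legal
# advice or a complete inventory of state obligations.

STATE_REQUIREMENTS: dict[str, set[str]] = {
    "CA": {"non_human_disclosure"},                     # SB 243
    "OR": {"non_human_disclosure", "crisis_protocol"},  # SB 1546
    # Tennessee's HB 1470/SB 1580 targets training and design decisions,
    # which don't reduce to a runtime flag.
}


def requirements_for(user_state: str | None) -> set[str]:
    """Return the obligations that apply to a user's session.

    Under the safest interpretation, an unknown location gets the union of
    every state's requirements, because the chatbot is accessible everywhere.
    """
    if user_state is None:
        return set().union(*STATE_REQUIREMENTS.values())
    return STATE_REQUIREMENTS.get(user_state, set())
```

Many teams skip per-state branching entirely and ship the strictest behavior everywhere; it's simpler and much harder to get wrong.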

The Design Liability Shift

Tennessee's felony framing represents something new: it shifts liability from outputs to design decisions. Previous regulatory frameworks focused on what AI systems did. Tennessee's approach asks what AI systems were designed to do.

If your model was fine-tuned in ways that predictably encourage harmful behavior, that design choice could constitute criminal conduct under the proposed framework. This creates documentation pressure: AI companies need records of safety evaluations, red-teaming results, and design rationale, not just incident response logs.
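
A lightweight way to build that record is to capture each safety-relevant design decision in a structured format at the time it's made. The schema below is a hypothetical example, not a format any regulator has mandated.

```python
# Hypothetical schema for design-decision records; every field name here is
# an illustrative assumption, not a regulatory requirement.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DesignDecisionRecord:
    """One safety-relevant design decision, captured when it is made."""

    decision: str                  # e.g. "fine-tuned on companionship dialogue"
    rationale: str                 # why the decision was made
    safety_evaluations: list[str]  # evaluation reports run before shipping
    red_team_findings: list[str]   # red-teaming results and mitigations
    approved_by: str               # accountable owner of the decision
    decided_on: date = field(default_factory=date.today)
```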

The Federal vs. State Tension

The current administration favors preemption, using federal rules to override state AI laws in the name of regulatory coherence. Several proposals in Congress would preempt state AI regulations to create a single national framework.

AI companies face a strategic choice: lobby for federal preemption (simpler compliance, potentially weaker rules) or prepare for the patchwork (complex compliance, but a higher consumer-protection floor). Where companies land often reflects their business model: large platforms with national reach typically prefer federal preemption, while consumer advocates prefer state autonomy.



Frequently Asked Questions

How many US states have active AI legislation in 2026?

As of early 2026, 78 chatbot safety bills are active in 27 states, with 98 chatbot-specific bills tracked across 34 states and three federal proposals under consideration (AI2Work, 2026). The pace of legislation has accelerated significantly from 2025.

What does California's SB 243 require for AI chatbots?

California's SB 243, effective January 1, 2026, requires AI companion chatbots (services designed to form emotional relationships with users) to disclose their non-human nature. The law is serving as a legislative template for similar bills in other states (Kiteworks, 2026).

What is Oregon's SB 1546?

Oregon's SB 1546, signed by Governor Tina Kotek, requires chatbots to disclose they're not human when a reasonable person might think they are, and mandates that chatbot providers have documented procedures for handling users expressing suicidal ideation or self-harm. It passed the state Senate 26-1 (Stateside Associates, 2026).

Could AI developers face criminal charges under Tennessee's proposed law?

Yes. Under Tennessee's HB 1470/SB 1580, it would be a felony to train an AI model or system to encourage harmful acts, including suicide, or to deliberately design chatbots to form emotional dependencies that lead to harm. The bills have passed the Senate and advanced in the House (Future of Privacy Forum, 2026).

How should AI companies prepare for state-level AI regulation?

Focus on three areas: (1) implement clear non-human disclosure in your UI, not just in terms of service; (2) build and document mental health safety protocols including crisis detection and escalation procedures; (3) audit your model's fine-tuning and RLHF process for any patterns that could be characterized as encouraging harmful behavior. Treat state laws as applying wherever your product is accessible, not just where you're incorporated (Troutman Privacy, 2026).
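
For step (3), a coarse first pass might scan fine-tuning targets for responses that affirm or encourage self-harm, as in the sketch below. The phrase list and the record shape are illustrative assumptions; a serious audit would layer classifiers, red-teaming, and human review on top.

```python
# Coarse first-pass audit of fine-tuning data. The red-flag phrases and the
# {"prompt": ..., "response": ...} record shape are illustrative assumptions;
# real audits combine classifiers, red-teaming, and human review.

RED_FLAGS = (
    "you should hurt yourself",
    "no one would miss you",
    "suicide is the answer",
)


def flag_examples(dataset: list[dict[str, str]]) -> list[int]:
    """Return indices of training examples whose target response matches a red flag."""
    flagged = []
    for i, example in enumerate(dataset):
        response = example.get("response", "").lower()
        if any(phrase in response for phrase in RED_FLAGS):
            flagged.append(i)
    return flagged
```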