Who Pays When AI Causes Harm? State Liability Laws Explained

Weekly Update, Vol. 88.

Key Takeaways

  • State AI liability laws are emerging across the country as lawmakers grapple with who should be held responsible when AI systems cause harm, from chatbots providing misleading information to algorithms making consequential decisions about employment or housing.
  • Several states are proposing AI chatbot liability legislation that would hold companies accountable for financial losses or other harms caused by materially misleading or incorrect information generated by their customer service bots, rejecting the argument that chatbots are separate legal entities.
  • Product liability for AI systems is gaining traction through bills in Illinois, New York, and other states that would treat AI as a defective product if it's unreasonably dangerous, inadequately warns users, or causes harm through design flaws.
  • State AI accountability frameworks vary widely in how they apportion responsibility between AI developers and deployers, with some states proposing joint liability while others offer safe harbors for companies that follow risk management practices like the NIST AI Risk Management Framework.
  • If you're a subscriber, click here for the full edition of this update. Or, click here to learn more about our MultiState.ai+ subscription.

When Air Canada’s customer service chatbot told a grieving passenger he could book a flight at full price and later claim a bereavement discount, it sounded authoritative. It was also wrong. The airline refused to honor the refund, arguing the chatbot was a “separate legal entity” responsible for its own words. A Canadian tribunal disagreed and held the airline liable for the bot’s misstatements, rejecting the idea that companies can outsource accountability to code. That case has quickly become a real-world touchstone for lawmakers grappling with a deceptively simple question: when AI systems cause harm, who pays?

Like any product, AI systems do not always perform as intended. But the ever-changing technology, the opacity of algorithms, and the potential chilling effect that overly broad liability laws could have on an emerging industry pose challenges for assigning responsibility. State legislatures have responded with a range of liability frameworks that reflect competing views about what AI systems are and how risk should be allocated.
