Dueling Approaches Shape Connecticut's AI Policy
Key highlights this week:
We’re tracking 984 bills in all 50 states related to AI during the 2025 legislative session.
Montana becomes the first state to enact a “Right to Compute Act.”
North Dakota enacted a political deepfake law.
Lawmakers in Arkansas and Tennessee sent their governors sexual deepfake legislation.
Connecticut lawmakers are navigating a delicate balance between encouraging artificial intelligence innovation and imposing meaningful regulatory guardrails, a tension reflected in two major bills recently advanced by the General Law Committee. The competing visions underscore a broader policy divide on whether the state should fill the vacuum left by Congress and the White House and lead on AI regulation, or develop more of a consensus among states to avoid being hostile to the industry.
The General Law Committee approved passage of SB 2, the second attempt at a comprehensive regulatory bill sponsored by Sen. James Maroney (D). The General Law Committee in the Connecticut Legislature is a joint standing committee with jurisdiction over consumer protection matters. We wrote about the bill back in February when the committee first released its draft. The proposal retains many of the regulatory provisions Sen. Maroney introduced last year, imposing certain obligations on developers and deployers of high-risk AI systems, as well as "integrators," those that integrate a high-risk AI system into a product or service.
After feedback from stakeholders, the committee softened certain provisions without altering the bill's regulatory core. The definition of "consequential decision" was amended to clarify that it applies to a decision relating to "access" to employment, education, essential government services, or health care services. The committee also deleted from the definition automated task allocation that limits, segregates, or classifies employees for assigning material terms and conditions of employment. The issues raised are similar to those faced by the California Privacy Protection Agency in its revisions to proposed regulations on automated decision-making technology.
The committee also amended the definition of “high-risk artificial intelligence system” to delete an exemption for AI that is “intended to perform any narrow procedural task” or used to “detect decision-making patterns, or deviations from decision-making patterns,” a change suggested by the Electronic Privacy Information Center (EPIC). It also adds exemptions for narrow, low-risk tasks such as sorting documents, detecting duplicate applications, or flagging unusual decision-making patterns that are not intended to make or influence decisions without sufficient human oversight.
Despite pushback from industry witnesses, the committee retained many of the obligations designed to mitigate risks, such as documentation and disclosures. Developers will have to provide a general statement of "intended" uses rather than "reasonably foreseeable” uses. The amended bill also allows any documentation a developer provides to satisfy compliance with the bill if it is "reasonably similar in scope and effect" to the required documentation, and it allows contracting with a third party to assist with compliance. Another addition to the bill requires deployers to disclose how a consumer can exercise their rights under the law.
The committee made several changes to requirements for general-purpose AI deployers, such as narrowing the scope to only released models capable of being used by a high-risk AI system and deleting many of the technical documentation requirements. The committee added a new section to establish a legislative coordination and innovation support initiative to aid AI policy implementation and small business adoption.
The bill received resistance in a contentious hearing from Dan O'Keefe, commissioner of the state Department of Economic and Community Development. O’Keefe told the committee, "It's irrational to be the first state in the region,” echoing concerns voiced by Gov. Ned Lamont (D) that Connecticut would be an outlier in AI regulation.
Gov. Lamont has his own AI bill (CT SB 1249), a proposal more focused on fostering innovation that was also passed out of the General Law Committee. The bill would establish an investment fund for artificial intelligence and quantum technology and create an AI regulatory sandbox program, similar to the one developed in Utah last year. The bill also has some regulatory measures, clarifying that the use of AI is not a defense to a tort or unfair or deceptive act furthered by the AI, as well as directing an inventory of AI use in state government. The governor recently touted his bill on X, boasting that Connecticut has “always led in innovation — now we’re leading the next chapter.”
Sen. Maroney doesn’t disagree, arguing that guardrails and innovation need not be at odds. "In fact, we believe that good guardrails, good regulations, help to promote innovation and we can make Connecticut a leader in responsible AI innovation.” But he will need the support of Gov. Lamont, whose threatened veto was enough to kill Sen. Maroney’s bill last year.
According to Sen. Maroney, his bill will now go to the Judiciary Committee, then the Appropriations Committee, before a planned mid-May debate in the Senate. He still faces an uphill climb to get his proposal passed before the scheduled adjournment on June 4. But even if his bill fails again this year due to gubernatorial pushback, Sen. Maroney has established himself as an influential leader in state AI regulation, organizing a multi-state AI working group with lawmakers from nearly every state. And the changes in CT SB 2 could indicate similar shifts in the two dozen or so algorithmic discrimination and AI consumer protection bills introduced in 18 states this session.
Recent Developments
Major Policy Action
Arkansas: The General Assembly passed several AI-related bills before adjourning on Wednesday, including measures to prohibit sexual deepfakes (AR HB 1529), add sexual deepfakes to child pornography criminal provisions (AR HB 1877), grant intellectual property rights to content generated by generative AI models (AR HB 1876), and require governments to develop policies on AI and automated decision tools (AR HB 1958). These bills now head to the governor’s desk for her signature to become law.
Montana: On Wednesday, Gov. Gianforte (R) signed the “Right to Compute Act” (MT SB 212), which prohibits the government from restricting the ability to privately own or make use of computational resources for lawful purposes.
Nevada: The Assembly unanimously passed a bill (NV AB 325) to prohibit a public utility from using artificial intelligence to make a final decision regarding whether to reduce or shut down utility service in response to a disaster or emergency.
North Dakota: Gov. Kelly Armstrong (R) signed a bill (ND HB 1167) into law to prohibit political deepfakes without a disclaimer after the measure passed both chambers unanimously.
Oregon: The House passed a bill (OR HB 2299) to modify the crime of unlawful dissemination of an intimate image to include the disclosure of digitally created, manipulated, or altered images.
Tennessee: The General Assembly passed legislation (TN SB 741) that would create a criminal offense for possessing, distributing, or producing technology, software, or digital tools designed for the purpose of creating sexual content involving a minor. Gov. Bill Lee (R) has ten days (excluding Sundays) to sign or veto the bill.
Notable Proposals
Alabama: A consumer data privacy bill (AL HB 283) was amended last week to create an exemption for artificial intelligence models “provided that no personally identifiable data is present in the model or can be extracted from the model." The proposal creates rights for consumers over data collected from them, with certain obligations for data controllers.
Maine: Lawmakers have drafted a bill (ME LD 1727) to require the disclosure of chatbots engaged in commercial transactions with consumers. Another group of lawmakers has drafted a bill (ME LD 1690) to require disclaimers for political deepfake ads.
North Carolina: A bipartisan group of lawmakers has introduced a bill (NC HB 934) that would protect an artificial intelligence developer from liability for errors when certain “learned professionals” use the program or product for professional services. The proposal also includes a prohibition on the use of deepfakes to harass, extort, threaten, or cause harm to an individual, or to injure a political candidate.