Scaled Back AI Governance Bill Sent to Texas Governor
Key highlights this week:
We’re tracking 1,046 bills in all 50 states related to AI during the 2025 legislative session.
The New York Legislature passed its version of an AI safety bill, the RAISE Act, sending the measure to the governor’s desk ahead of next week’s adjournment.
Comprehensive AI regulation falls short in Connecticut again.
Texas lawmakers approved a pared-back AI regulation bill, as well as several AI-related measures addressing sexual deepfakes, state use of AI, and AI use by health care entities. These bills are the topic of this week’s deep dive.
Last week, the Texas Legislature approved the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), an AI regulation bill significantly scaled down from the ambitious “red state model” that sponsor Rep. Giovanni Capriglione (R) originally proposed. While the bill does impose limited guardrails and transparency measures, it represents a retreat from robust regulation, a victory for industry groups wary of new compliance burdens, and a reflection of the political headwinds facing AI legislation across the country. The governor has until June 22, 2025, to decide whether to sign the bill into law.
TRAIGA (TX HB 149) would establish a modest legal framework focused on preventing harmful or deceptive uses of AI, particularly by public entities or in sensitive contexts.
Prohibited Uses of AI
Rather than imposing an oversight framework of requirements on certain “high-risk” AI systems, as other algorithmic discrimination proposals do, TRAIGA specifies prohibited uses that apply to all artificial intelligence systems, including:
Prohibition on harmful activities: TRAIGA prohibits the development or deployment of AI systems intended to encourage self-harm, criminal activity, or the production of child sexual abuse material.
Prohibition on social scoring: TRAIGA prohibits government entities from using AI-based systems that rate or categorize individuals based on behavior or characteristics in ways that affect their rights or opportunities.
Prohibition on infringing rights: Rep. Capriglione added an amendment to ensure that AI cannot be used to infringe any right guaranteed under the U.S. Constitution.
Prohibition on discrimination: The bill includes an anti-discrimination provision, but it is watered down compared to the anti-discrimination sections in other AI bills. Under TRAIGA, AI cannot be used to discriminate against protected classes, but the bill specifies that disparate impact on populations is “not sufficient by itself to demonstrate an intent to discriminate.” Insurers and financial institutions are exempt and are instead required to follow applicable insurance and financial laws.
Disclosures Limited to Government Use
TRAIGA contains disclosure requirements, but they are limited to AI used by state agencies or health care providers. Those entities will be required to disclose to individuals when they are interacting with an AI system, such as a chatbot, that deals directly with the public. TRAIGA also includes some privacy protections, clarifying that AI systems cannot use biometric identifiers without consent, a nod to the state’s existing biometric privacy laws.
Regulatory Sandbox
Under the bill, Texas would also set up a regulatory sandbox program, administered by the Texas Department of Information Resources, that allows companies to test AI systems without full regulatory exposure for up to 36 months. This is similar to a program already running in Utah and is aimed at promoting innovation in sectors such as finance, health care, and education.
Enforcement
The Texas Attorney General (and in some cases, state agencies) will enforce violations of TRAIGA. There is no private right of action, so individuals cannot file lawsuits to enforce the law. The AG may investigate and, if necessary, seek civil penalties for violations.
TRAIGA 2.0 Less Onerous Than TRAIGA 1.0
Rep. Capriglione served on the informal AI Policymaker Working Group with Connecticut Sen. James Maroney (D), which has been studying artificial intelligence policy for the last two years. When the Texas legislative session began, Rep. Capriglione introduced a measure (TX HB 1709) that he thought could be a model for other Republican lawmakers in contrast to the more regulation-heavy policies Sen. Maroney and other Democrats were working on. “Our goal and our obligation is to create a model, a red state model, that other states can go and look at, that's our goal,” said Capriglione last fall.
While HB 1709 (also called TRAIGA) was not as onerous as proposals in California and Connecticut, it still imposed numerous obligations on developers and deployers of “high-risk” AI systems: a duty of reasonable care to prevent foreseeable risks of algorithmic discrimination, annual impact assessments, disclosure of system limitations, adherence to certain risk management standards, and disclosure to consumers when AI is used for consequential decisions. This regulatory regime drew industry pushback against requirements seen as vague and overly broad, eventually leading Rep. Capriglione to abandon the bill and start fresh with TX HB 149 (or TRAIGA 2.0).
One of a Dozen AI Bills on Governor’s Desk
The veto pen killed a legislature-approved algorithmic discrimination bill in Virginia, and a lack of gubernatorial support thwarted efforts in Connecticut again this year. But in Texas, Gov. Greg Abbott (R) is expected to sign HB 149, one of several AI bills the legislature sent to his desk this year. Among them are bills:
Criminalizing the production or distribution of sexual deepfake images without consent (HB 449);
Requiring age verification for websites that create sexual deepfakes (HB 581);
Creating an agency to assist state agencies in integrating generative AI tools (HB 2818);
Requiring social media platforms to take down sexual deepfakes upon request (HB 3133);
Requiring AI training for certain state employees (HB 3512);
Including deepfake content in prohibitions on child sexual abuse material (SB 20);
Providing civil and criminal remedies for the distribution of sexual deepfakes without consent (SB 441);
Prohibiting the use of AI to make an adverse determination in utilization review (SB 815);
Requiring health care practitioners who use AI for diagnostic purposes to review all information for accuracy (SB 1188);
Including computer-generated content in provisions criminalizing possession of sexual content of children (SB 1621);
Regulating the use of AI by government entities (SB 1964); and
Creating civil and criminal liabilities for using deepfakes for financial fraud (SB 2373).
Ultimately, Texas may indeed serve as a model for AI regulation, not just for red states but also for blue states wary of scaring away tech companies. By focusing narrowly on clearly harmful use cases instead of imposing a broad compliance framework, the legislation offers a pragmatic approach that may survive the current political environment.
Recent Developments
Major Policy Action
United States: After criticism from some Republican members of Congress, the Senate Commerce Committee removed an outright preemption of state and local AI laws from the House-passed reconciliation bill and replaced it with a provision (page 26) that would deny a state the funding promised under the Biden-era BEAD program to build out broadband infrastructure if that state chooses to regulate AI over the next ten years. It is still unclear whether this version of an AI moratorium would survive the Senate’s “Byrd Rule.” Even so, preemption could resurface in an anticipated separate AI regulatory bill from Sen. Ted Cruz (R).
New York: On Thursday, lawmakers passed Assemblymember Alex Bores’ RAISE Act (NY AB 6453 / SB 6953, which we analyzed upon introduction), an AI safety bill, similar to last year’s CA SB 1047, that would regulate the development and deployment of AI “frontier models.”
Connecticut: The AI regulation bill (CT SB 2) pushed by Sen. James Maroney (D) failed to pass before the legislature adjourned last week, once again due to a lack of support from Gov. Ned Lamont (D). Sen. Maroney expressed some satisfaction that the legislature was able to pass a bill prohibiting the distribution of non-consensual sexual deepfakes (CT HB 7287).
Rhode Island: Attorney General Peter Neronha (D) issued an Advance Notice of Proposed Rulemaking earlier this month to solicit feedback on potential regulations regarding deceptive trade practices involving artificial intelligence. Stakeholders can submit comments until July 23.
California: The Senate approved a bill (CA SB 243) designed to protect minors from some potential harms arising from interactions with companion chatbots. The bill would require companion bot operators to take steps to prevent bots from encouraging increased engagement and to maintain a protocol for addressing suicidal ideation or self-harm.
Florida: Gov. Ron DeSantis (R) signed a measure known as “Brooke’s Law” (FL HB 1161) that requires online platforms to take down deepfake sexual content within 48 hours of a request. The bill was named for Brooke Curry, the daughter of a former Jacksonville mayor, whose social media photos were altered by AI to create sexual content without her consent; she subsequently pushed for greater online protections.
Nevada: Gov. Joe Lombardo (R) last week signed two legislature-approved sexual deepfake bills. NV SB 213 adds sexual deepfakes to criminal revenge porn laws, and NV SB 263 expands child pornography provisions to include computer-generated content and increases penalties.
Pennsylvania: The Senate passed a bill (PA SB 649) that would create the crime of digital forgery for using a digital likeness to commit fraud. The Senate amended the bill before sending it to the House, adding exceptions for “constitutionally protected activity.”
Notable Proposals
New York: Assembly Democrats introduced the New York Artificial Intelligence Act (NY A 8884) on Monday, a companion to NY S 1169. The proposal would require developers and deployers to take reasonable care to prevent foreseeable risks of algorithmic discrimination, disclose AI use in consequential decisions to consumers with an opportunity to opt out, and place liability on developers and deployers for the accuracy of consequential decisions made by AI.
Washington: The city of Seattle advanced a bill to ban landlords’ use of rent-setting algorithms. Philadelphia, Minneapolis, and two California cities (Berkeley and San Francisco) have already banned such algorithms, which aggregate and analyze private data on the housing market.