How to Define AI?

We’ve established that artificial intelligence (AI) is the issue du jour in state capitols today and that state lawmakers are solidly in the education stage of the policymaking process around AI regulation. But it’s helpful to take a step back and ask: what are we even talking about? What is “artificial intelligence”?

Policymakers and industry insiders have yet to settle on a universal definition. Google, whose researchers have led much of the academic development on AI over the years, defines AI broadly as “a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.”

But lawmakers need a more precise and narrow definition of AI in order to regulate the technology in legislation. So far, states have proposed various definitions. For example, a Rhode Island bill (RI HB 6285) defines AI as “any technology that can simulate human intelligence, including, but not limited to, natural language processing, training language models, reinforcement learning from human feedback and machine learning systems.” A pending bill (MI HB 5143) in Michigan defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

The recently introduced “New York Artificial Intelligence Bill of Rights” (NY AB 8129) sidesteps the issue by not using the term AI within the bill itself (despite the title) and instead defining an “automated system.” While the definition the bill supplies for automated systems is detailed (and includes “artificial intelligence techniques”), the bill’s legislative intent section spells it out more succinctly as “any system making decisions without human intervention.”

One option for lawmakers is to define AI broadly, which could pull many established and vetted technologies within the scope of AI regulation, including techniques in use long before ChatGPT and image generators like DALL-E burst onto the public consciousness last year. Another option is to define AI, or use another term like “automated system” or “automated decision making,” narrowly tailored to the legislative provision or the problem lawmakers are seeking to solve. For example, in the legislation signed into law this week by Gov. Hochul (see details below), New York lawmakers didn’t need to define AI in order to outlaw “deepfakes.” They simply added “an image created or altered by digitization” to the state’s unlawful dissemination or publication of an intimate image statute.

Settling on a definition of AI, or defining the technology into actionable subsets, will be a fundamental job for policymakers and industry moving forward. And for companies that have used algorithms to make decisions for years, keeping an eye on how states define these terms will be paramount to keeping established techniques out of the regulatory crosshairs.

Recent Policy Developments

  • New York: Gov. Hochul signed into law this week legislation (NY SB 1042) that adds “deep fake” images created by digitization within the definition of unlawful dissemination or publication of an intimate image. Lawmakers also introduced a flurry of AI bills this month that offer a preview for next year's session. The proposed laws look to regulate AI from several different angles, including disclosure (NY AB 8098, NY AB 8158), discrimination (NY SB 7623, NY AB 8129), and job protections (NY AB 7634, NY AB 8138). Another proposed New York bill (NY AB 8110) would regulate the admissibility of evidence created or processed by AI while one (NY AB 8105) would require an oath of responsible use from users of certain high-impact advanced artificial intelligence systems. 

  • New Jersey: Gov. Murphy signed an executive order establishing an advisory Artificial Intelligence Task Force to study the impact of AI on society. The order also calls on state agencies to develop guidelines for the ethical use of AI to improve government efficiency. 

  • Illinois: The House will hold a joint committee hearing on artificial intelligence on November 2 between the Judiciary-Civil Committee and the Cybersecurity, Data Analytics, and IT Committee. The Senate also passed a bill this week to add a definition of “digitally altered sexual image” to clarify a deepfake law that passed earlier this year.

  • Indiana: The Commerce and Economic Development Interim Study Committee held a hearing to educate lawmakers on artificial intelligence and its impacts. “We’ve had the pants scared off of us a little bit, in some regards,” said Committee Chair Sen. Scott Baldwin. The committee will meet again on November 1 to discuss draft recommendations for the General Assembly.

  • Michigan: Lawmakers have introduced a package of bills aimed at regulating the use of artificial intelligence in political ads. The measures would require disclaimers on AI-generated ads that run within 90 days of an election and create penalties for attempting to deceive voters. A group advocating for the bills produced a campaign ad using deepfakes that depicts Michigan politicians saying things they would never say.
