Ask the expert: How are states placing guardrails around AI? 

U.S. state legislatures are stepping up to place guardrails around artificial intelligence, or AI, technologies in response to the lack of meaningful regulation from the federal government. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap.

All 50 states introduced AI-related legislation in 2025, and several have already enacted laws governing the technology's use.

Here, Anjana Susarla, Omura-Saxena Professor in Responsible AI in Michigan State University's Broad College of Business, explains four aspects of AI that stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition and generative AI.

Answers are excerpts from an article originally published in The Conversation.

How is the government currently using AI, and what challenges come along with this?

The oversight and responsible use of AI are especially critical in the public sector. Predictive AI — AI that performs statistical analysis to make forecasts — has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole.

But the widespread use of algorithmic decision-making could have major hidden costs. Potential algorithmic harms posed by AI systems used for government services include racial and gender biases.
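To make the idea of algorithmic harm concrete, the short Python sketch below applies the "four-fifths rule," a common heuristic in disparate-impact analysis, to a hypothetical eligibility model's approval decisions. It is a minimal illustration only; the records, group labels and numbers are invented, not drawn from any actual state audit.

```python
# Minimal sketch: checking a hypothetical eligibility model's decisions
# for approval-rate disparity across demographic groups.
# All data below is invented for illustration.
from collections import defaultdict

# Hypothetical audit records: (demographic_group, model_approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, was_approved in decisions:
    totals[group] += 1
    approved[group] += was_approved  # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
print("Approval rates by group:", rates)

# "Four-fifths rule" heuristic: flag any group whose approval rate
# falls below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact: {group} at {rate:.0%} vs. {best:.0%}")
```

Run on the invented records above, the sketch flags group_b, whose 25% approval rate is well under four-fifths of group_a's 75%. Real audits of government AI systems involve far more than this single metric, but the example shows why transparency about decision outcomes matters.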

How have states started to regulate the use of generative AI?

The widespread use of generative AI has prompted concerns from lawmakers in many states. Utah's Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose that they are using a generative AI system to interact with someone whenever that person asks whether AI is being used. The legislature subsequently narrowed the law's scope to interactions that could involve dispensing advice or collecting sensitive information.

Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models. A foundation model is an AI model trained on extremely large datasets that can be adapted to a wide range of tasks without additional training.

How are states working to address potential challenges and harms that come with the use of AI?

Recognizing the potential for algorithmic harms, state legislatures have introduced bills focused on public sector use of AI, with emphasis on transparency, consumer protections and recognizing risks of AI deployment.

Several states have required AI developers to disclose risks posed by their systems, while others have set requirements that AI developers adopt risk management frameworks — methods for addressing security and privacy in the development process — for AI systems involved in critical infrastructure.

How are states attempting to regulate the use of AI in health care?

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI and clinicians’ use of AI.

Bills focused on transparency define disclosure requirements for the information that AI system developers and the organizations deploying their systems must provide.

Consumer protection bills aim to keep AI systems from unfairly discriminating against some people and ensure that users of the systems have a way to contest decisions made using the technology.

Bills covering insurers provide oversight of the payers’ use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.

What are the potential harms of using facial recognition technology?

In the U.S., a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is the protection of individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases.

Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and, consequently, is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities: the software was less likely to correctly identify darker faces.
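The study's actual methodology was more involved, but the short Python sketch below illustrates the kind of disaggregated error analysis it popularized: computing a system's error rate separately for each demographic group rather than reporting a single overall accuracy figure. The evaluation records here are invented for illustration.

```python
# Minimal sketch: disaggregating identification errors by group.
# All records below are invented, not from any real benchmark.

# Hypothetical evaluation records: (skin_tone_group, correctly_identified)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", False),
]

groups = sorted({g for g, _ in results})
for group in groups:
    outcomes = [ok for g, ok in results if g == group]
    error_rate = 1 - sum(outcomes) / len(outcomes)
    print(f"{group}: error rate {error_rate:.0%} over {len(outcomes)} trials")
```

On these invented records, the darker group's error rate is three times the lighter group's, a disparity an aggregate accuracy number would hide. This is why several state laws require vendors to publish bias test reports broken out by demographic group.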

Bias also creeps into the data used to train these algorithms. It can be introduced, for example, when the teams that guide the development of facial recognition software lack diversity.

By the end of 2024, 15 U.S. states had enacted laws to limit the potential harms from facial recognition. Common elements of these state-level regulations include requirements that vendors publish bias test reports and disclose their data management practices, as well as mandates for human review in the use of these technologies.

What are states doing to try to regulate AI in the absence of federal regulation?

In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers’ compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights and consumer protections.

Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025. The plan says "The federal government should not allow AI-related federal funding to be directed toward states with burdensome AI regulations..."

The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration’s definition of burdensome against needed federal funding for AI.

By Paige Higley
