Key Takeaways
- The regulatory battle is intensifying, with federal efforts to centralize AI oversight clashing with state-led authority.
- The NAIC is firmly defending state-based insurance regulation, citing consumer protection and legal precedent.
- State activity is accelerating, with 38 states enacting AI-related laws and many targeting high-stakes uses like claims and underwriting decisions.
- The NAIC’s AI Model Bulletin is gaining traction, pushing insurers to implement formal governance programs, risk controls, and audit frameworks.
- Insurers must navigate uncertainty while prioritizing responsible AI use, as litigation risk and scrutiny around automated decisions continue to rise.
March accelerated the race to define who governs AI in insurance. A new federal framework seeking to curb state-level regulation was met with a firm response from the National Association of Insurance Commissioners (NAIC), which reinforced its stance on state authority. As AI adoption surges, insurers face a fast-evolving regulatory environment in which the rules, and the rulemakers, are still being decided.
Where State-Level Insurance AI Regulation Stands
States have been busy enacting regulations to oversee the way businesses use AI, both inside and outside of the insurance industry.
The National Conference of State Legislatures [1] tracks state-level AI legislation. As of 2025, 38 states had adopted or enacted AI measures, and more rules are coming: that same year, every state, along with Washington, D.C., the Virgin Islands and Puerto Rico, introduced AI-related legislation.
State laws address a wide range of AI-related issues, and not all of them are immediately relevant to insurance companies – but some are. The use of AI in claims denials and prior authorization decisions has been a particularly hot topic. According to Avalere Health [2], at least 29 states have passed laws addressing the use of AI in healthcare, and some of these laws prohibit AI from being the sole basis for adverse determinations.
A Quick Background on the NAIC Model Bulletin Adoption
In 2023, the NAIC adopted a Model Bulletin on the Use of Artificial Intelligence Systems by Insurers [3]. The bulletin reminds insurers holding certificates of authority that they must comply with all applicable insurance laws and regulations, including those addressing unfair discrimination or trade practices, when implementing AI. Without proper controls, the bulletin warns, AI could “increase the risk of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes for consumers.”
To reduce the risk, insurers are urged to maintain a written AI Systems Program (AIS Program) that covers certain points, including governance, risk management controls and internal audit functions.
Many states are on board with these guidelines. According to JDSupra [4], 24 states had adopted the NAIC’s bulletin as of March 2025.
The Federal-State Tug-of-War
While states are rolling out their own plans to regulate AI, the federal government has other ideas.
In December 2025, an executive order titled Ensuring a National Policy Framework for Artificial Intelligence [5] called for the elimination of state laws that obstruct a national AI policy. According to the order, “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” The order also called for an evaluation of state AI laws and possible restrictions on state funding.
On March 20, 2026, the White House followed up with a National AI Legislative Framework [5]. Section VII, “Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws,” says that states should not be permitted to:
- Regulate AI development.
- Unduly burden Americans’ use of AI for activities that are lawful when performed without the use of AI.
- Penalize AI developers for a third party’s unlawful conduct involving their AI models.
The NAIC Response
In response to the federal push to restrict state-level AI regulations, the NAIC [6] has reaffirmed its support of state-based AI oversight. An issue brief argues that federal preemption would undermine consumer protections and the McCarran-Ferguson Act, which allows states to regulate insurance.
The NAIC has also launched an AI Evaluation Tool [7] pilot to help regulators assess how insurers use AI and whether the governance practices in place are sufficient to manage risks. Nine states are participating in the pilot program.
Embracing Responsible AI
The current explosion of AI tools spans a wide range of models, systems and applications. Some are more controversial than others, and many concerns center on AI used to make high-stakes coverage decisions in underwriting, prior authorization or claims denials. Litigation is already brewing.
This does not mean that AI cannot be used responsibly. Liberate’s AI system of action can help insurers manage the bottom line by increasing efficiency and improving the customer experience, all under the oversight of thoughtful governance and guardrails.
Sources:
1. https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation
2. https://advisory.avalerehealth.com/insights/state-oversight-of-ai-in-healthcare-status-and-impacts
4. https://www.jdsupra.com/legalnews/nearly-half-of-states-have-now-adopted-7290555
6. https://content.naic.org/sites/default/files/ai-issue-brief.pdf
7. https://content.naic.org/sites/default/files/call_materials/Pilot%20Project%20Summary.pdf