
The Evidence Problem: State AI Laws Are Asking for Documents Most Enterprises Don’t Have

Colorado, Connecticut, and Maryland are turning AI governance into recurring work with deadlines, documentation requirements, and user rights obligations. The question for enterprise teams is not whether frameworks exist, but whether the evidence to satisfy them is ready.

Short Scenario

A product team launches an AI-assisted hiring tool. It ingests resumes, scores candidates, and flags whom to advance. The model performs well in testing. Legal clears the launch. Once the regime is in force, a compliance inquiry arrives, whether from a regulator, an internal audit, or a procurement diligence process.

The request covers the impact assessment conducted before deployment, training data documentation, performance metrics, discrimination risk evaluation, vendor documentation provided to the deployer, applicant notices, and any explanation or appeal process.

None of this is about whether the model worked. It is about whether governance was treated as a system requirement from the start.

Several U.S. states are establishing AI governance regimes that regulate certain systems not because they are “AI,” but because they materially affect people’s rights, opportunities, or access to essential services. Colorado’s enacted SB 24-205 (the Colorado AI Act), Connecticut’s pending SB 2, and Maryland’s enacted AI Governance Act for state agencies represent the most developed frameworks. A parallel track is forming through California’s ADMT regulations and a separate frontier-model transparency regime under SB 53.

These frameworks share a common logic: define a category of systems called “high-risk” or “high-impact,” attach governance obligations to that category, and require evidence that those obligations were met.

The shared trigger is consequential decisions: those with legal or similarly significant effects in domains such as financial or lending services, housing, insurance, education, employment, healthcare, or access to essential goods and services. Colorado and Connecticut focus on private-sector developers and deployers. Maryland focuses on public-sector agencies. California spans both, depending on the provision.

Key deadlines: Colorado’s core obligations take effect June 30, 2026. Connecticut’s SB 2 would take effect February 1, 2026 if enacted. Maryland’s agency inventory deadline was December 1, 2025, with impact assessments for certain existing systems due by February 1, 2027. California’s frontier-model obligations under SB 53 are effective January 1, 2026, with ADMT rules following January 1, 2027. Organizations not yet in scope for every regime may already have suppliers, customers, or public-sector counterparts that are.

How the Mechanism Works

Classification: “High-Risk” and “Consequential Decisions”

The governance trigger is not the presence of AI. It is the role the system plays. Colorado and Connecticut both use the framing of “high-risk AI systems” that make, or are a substantial factor in making, consequential decisions. Once a system crosses that threshold, it becomes a governed system with documented controls rather than a standard software feature.

In practice, classification is harder than it appears. Many systems sit at the edges: they inform rather than decide, or they contribute to a workflow where a human nominally makes the final call. Getting classification right is the prerequisite to everything that follows.

Developer Obligations vs. Deployer Obligations

Both Colorado and Connecticut split responsibilities between developers (those who create or provide the AI system) and deployers (those who use it in an operational context affecting people).

Developers are responsible for reasonable care, for providing deployers with the technical documentation needed to conduct assessments, and for publishing statements about high-risk systems and risk management practices. Colorado adds a notification requirement: developers must alert the Attorney General and known deployers within 90 days of discovering, or receiving a credible report, that a system has caused or is likely to cause algorithmic discrimination.

Deployers carry the implementation burden: a risk management policy and program for each high-risk system, comprehensive impact assessments, annual reviews, consumer notices, and rights processes for adverse decisions. Deployers cannot complete their obligations without adequate documentation from developers. Gaps in vendor-supplied materials are a compliance blocker, not just a legal footnote.

Evidence Artifacts

Compliance is not a checkbox. Required artifacts typically include a risk management policy and program; a comprehensive impact assessment per high-risk AI system covering purpose, data categories, performance metrics, discrimination evaluation, and safeguards; documentation packages flowing from developers to deployers; and public statements about high-risk system categories. These artifacts must be maintained over time, not produced once at launch.
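
As a concrete illustration, the sketch below models one of these artifacts, a per-system impact assessment record, as a Python dataclass. The field names are hypothetical rather than statutory terms; they mirror the categories above, plus the review dates needed to show the artifact is maintained over time.

```python
# Minimal sketch of an impact assessment as a maintained artifact.
# Field names are illustrative, not statutory language.
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    system_id: str                          # links to the AI system inventory
    purpose: str                            # intended use in the consequential decision
    data_categories: list[str]              # categories of input and training data
    performance_metrics: dict[str, float]   # e.g. {"auc": 0.91}
    discrimination_evaluation: str          # summary of, or pointer to, testing results
    safeguards: list[str]                   # mitigations and oversight controls
    completed_on: date
    next_review_due: date                   # annual review is tracked, not assumed

    def review_overdue(self, today: date) -> bool:
        """Flag assessments that have lapsed past their scheduled review."""
        return today > self.next_review_due
```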

Transparency and User-Facing Controls

Colorado and Connecticut both require AI interaction disclosures for systems intended to interact with consumers, and consumer notice when a high-risk system is used in a consequential decision context. Both include rights to explanation, correction, and appeal or human review following adverse consequential decisions. Connecticut SB 2 adds watermarking requirements for AI-generated content under specified circumstances.

These obligations require operational readiness across support, legal, and product teams, including the ability to field appeals, trace decisions, and enable meaningful human review.

Public Sector Governance

Maryland requires state agencies to maintain inventories of high-risk AI systems, adopt procurement and deployment policies, and conduct impact assessments on a defined schedule. California’s government inventory requirement mandates statewide visibility into, and reporting on, high-risk automated decision systems.

Framework Alignment as a Defense

Colorado and Connecticut both reference the NIST AI Risk Management Framework as a basis for asserting reasonable care or an affirmative defense. This creates an incentive to build one internal governance program mapped across jurisdictions rather than separate compliance tracks per state.
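
One way to act on that incentive is a crosswalk from internal controls to the four NIST AI RMF functions (Govern, Map, Measure, Manage), so that each jurisdiction’s obligation draws evidence from one shared program. The mapping below is a hypothetical sketch, not an official crosswalk; the control names are illustrative.

```python
# Hypothetical crosswalk: internal controls -> NIST AI RMF functions.
# The RMF functions are real; the control names are illustrative.
CONTROL_TO_RMF = {
    "risk_management_policy": "GOVERN",
    "system_inventory": "MAP",
    "impact_assessment": "MAP",
    "discrimination_testing": "MEASURE",
    "performance_monitoring": "MEASURE",
    "incident_escalation": "MANAGE",
    "annual_review": "MANAGE",
}

def evidence_for(function: str) -> list[str]:
    """Internal controls that produce evidence for one RMF function."""
    return [control for control, fn in CONTROL_TO_RMF.items() if fn == function]
```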

A Second Scenario: The Vendor Problem

An enterprise deploys a third-party AI model to score commercial loan applications. The vendor provides a model card and a brief technical summary. When the deployer’s compliance team begins its impact assessment, it finds the vendor documentation does not include discrimination testing results across protected classes, does not describe training data sources with enough specificity to evaluate potential bias, and does not provide the performance metrics expected for the impact assessment.

The deployer cannot complete its assessment without that information.

Procurement did not require it at contract time. The compliance deadline is fixed.

This failure mode follows directly from the developer-deployer split these frameworks create. Procurement processes that do not require documentation upfront will surface the gap at the worst possible time.

What Changes Operationally

Inventory as a prerequisite. Teams need to identify where AI is used in consequential decision contexts, and who is the developer versus deployer for each system. Without this map, everything else is guesswork.
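
A minimal inventory entry, sketched below with hypothetical field names, captures the two facts everything else depends on: whether the system touches a consequential decision domain, and which role the organization plays for it.

```python
# Sketch of an AI system inventory entry. Domain strings and role labels
# are illustrative, not statutory categories.
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    DEVELOPER = "developer"
    DEPLOYER = "deployer"
    BOTH = "both"

@dataclass
class InventoryEntry:
    system_id: str
    name: str
    vendor: str | None       # None if built in-house
    decision_domain: str     # e.g. "employment", "lending", "housing"
    consequential: bool      # does it affect a consequential decision?
    role: Role               # determines which obligations attach

def high_risk_candidates(entries: list[InventoryEntry]) -> list[InventoryEntry]:
    """First-pass triage: systems touching consequential decisions."""
    return [e for e in entries if e.consequential]
```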

Impact assessment as a standard workflow. Each high-risk system requires an assessment covering purpose, data categories, performance, discrimination risk evaluation, and safeguards, with ongoing monitoring and annual review.

Documentation supply chain. Developers need a repeatable documentation packet for deployers. Deployers need procurement requirements that make that packet contractually required.
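
Because the vendor scenario above is at bottom a missing-fields problem, packet completeness can be checked mechanically at procurement time. A sketch, with a hypothetical required-item list drawn from the assessment inputs discussed earlier:

```python
# Sketch: validate a vendor documentation packet before the deployer's
# impact assessment depends on it. The required keys are illustrative.
REQUIRED_VENDOR_DOCS = {
    "training_data_sources",
    "known_limitations",
    "performance_metrics",
    "discrimination_testing_results",
    "intended_use_statement",
}

def missing_docs(packet: dict[str, str]) -> set[str]:
    """Return required items the vendor packet does not cover."""
    return REQUIRED_VENDOR_DOCS - packet.keys()
```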

User rights operations. Explanation, correction, and appeal rights require operational infrastructure: logging decisions, tracing inputs, routing appeals, and enabling human override with traceability.
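
The sketch below shows, with hypothetical fields, the per-decision record those rights depend on: enough to explain the outcome, trace the inputs as seen at decision time, and route an appeal to a named human reviewer.

```python
# Sketch of a per-decision audit record supporting explanation and appeal.
# All field names are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    decision_id: str
    system_id: str
    subject_id: str                     # the person affected
    inputs_ref: str                     # pointer to inputs as seen at decision time
    model_version: str
    outcome: str                        # e.g. "advance", "decline"
    decided_at: datetime
    appealed: bool = False
    human_reviewer: str | None = None   # set when an appeal is routed

def open_appeal(record: DecisionRecord, reviewer: str) -> DecisionRecord:
    """Route an adverse decision to human review, preserving the trace."""
    record.appealed = True
    record.human_reviewer = reviewer
    return record
```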

Incident and escalation paths. Colorado’s notification obligation requires a mechanism to classify and investigate potential algorithmic discrimination, and to determine within 90 days whether the threshold for notification to the Attorney General has been crossed.
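
Because the clock starts at discovery or credible report, the escalation path needs a dated trigger from which the deadline is computed. A minimal sketch:

```python
# Sketch: track the 90-day window from the date a potential
# algorithmic-discrimination issue was discovered or credibly reported.
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(days=90)  # Colorado's notification period

def notification_deadline(discovered_on: date) -> date:
    return discovered_on + NOTIFICATION_WINDOW

def days_remaining(discovered_on: date, today: date) -> int:
    """Days left to determine whether AG notification is required."""
    return (notification_deadline(discovered_on) - today).days
```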

Technical Implications

System boundary clarity. “Substantial factor” determinations require architectural clarity on where model outputs flow, how they are used, and whether they drive automated decisions or provide information to a human.
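
One way to build that clarity is to tag each place a model output is consumed and triage which flows could make the system a substantial factor. The labels below are an internal engineering taxonomy for triage, not the legal test.

```python
# Sketch: tag each output flow with how it is consumed. These labels are
# an internal triage taxonomy, not statutory definitions.
from enum import Enum

class OutputUse(Enum):
    AUTOMATED_DECISION = "output directly determines the outcome"
    RANKED_INPUT = "output scores or ranks subjects a human then reviews"
    INFORMATIONAL = "output is one signal among many for a human decider"

def needs_classification_review(use: OutputUse) -> bool:
    """Flows that plausibly make the system a 'substantial factor'."""
    return use in {OutputUse.AUTOMATED_DECISION, OutputUse.RANKED_INPUT}
```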

Measurement and monitoring hooks. Impact assessments include performance metrics and discrimination evaluation, implying telemetry, evaluation datasets, and monitoring infrastructure that can be referenced over time and produced for compliance review.
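
As one example of a statistic such hooks might compute, the sketch below calculates the adverse impact ratio between two groups’ selection rates, a common screening measure in employment analysis; the statutes do not prescribe this particular test.

```python
# Sketch: adverse impact ratio between a comparison group and a reference
# group. A common screening statistic, not a statutory requirement; ratios
# below 0.8 are often flagged for review (the "four-fifths" heuristic).
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Selection rate of group A divided by selection rate of group B."""
    return (selected_a / total_a) / (selected_b / total_b)

# Example: 30 of 100 group-A applicants advanced vs. 50 of 100 in group B:
# adverse_impact_ratio(30, 100, 50, 100) -> 0.6, below the 0.8 heuristic.
```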

Provenance and data documentation. Connecticut SB 2 requires disclosures including training data, limitations, and mitigation measures. Even when training is outsourced, deployers need sufficient vendor documentation to support their own obligations.

Human review and override design. Appeal and human review requirements mean decision workflows must support meaningful human intervention, including the ability to correct inputs, re-run or override decisions, and maintain traceability.
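
In data terms, meaningful intervention implies an override record like the hypothetical one sketched below: the correction itself, who made it and why, and the link back to the automated decision it changes.

```python
# Sketch: an override record preserving traceability from the human
# decision back to the automated one it corrects. Fields are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class OverrideRecord:
    original_decision_id: str   # the automated decision under review
    reviewer: str
    corrected_inputs: dict      # inputs the affected person corrected
    new_outcome: str
    rationale: str              # why the reviewer departed from the model
    overridden_at: datetime
```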

Risks and Open Questions

Definition edges. Many systems sit in gray zones between informing and deciding. Inconsistent internal classification across teams and products is a material compliance risk.

Annual review standardization. Annual review requirements are clear in principle and ambiguous in practice. What constitutes adequate review across different products, data sources, and operational contexts has not been standardized.

Multi-regime harmonization. Affirmative defenses tied to recognized frameworks encourage convergence, but jurisdiction-specific differences will still require management.

Frontier model and downstream coordination. California’s SB 53 obligations apply to frontier model developers and include incident reporting and safety framework requirements. How these interact with enterprise deployment realities, where model developers, platform providers, and deployers are distinct parties, has not been fully resolved.

Further Reading

Colorado Legislature: SB 24-205

Future of Privacy Forum: Connecticut SB 2 analysis

TrustArc: Colorado AI Law SB 24-205 compliance guide

Maryland General Assembly: SB 818 fiscal note

Orrick AI Law Center: U.S. AI law tracker (Maryland entry)

StackCybersecurity: AI state laws overview

Baker Botts: Texas TRAIGA overview

Securiti: AI roundup October 2025

Ryan Specialty blog: AI regulations and business exposures