When Governance Becomes a Data-Flow Problem
The Evidence Question
An enterprise can publish an AI policy, assign an oversight committee, and adopt a governance framework, yet still fail a simple operational test: where did the data go, who could access it, how long was it kept, and what evidence exists to prove those answers?
That gap between what governance documents say and what systems can actually demonstrate is becoming the central problem in enterprise AI compliance. Across federal procurement, state regulation, standards development, and litigation, the same questions keep surfacing. And they all resolve to the same operational layer: data-flow mapping, retention boundaries, and access controls.
The Short Version
Between March and April 2026, GSA published a draft procurement clause with specific data ownership, segregation, and disclosure requirements for federal AI contractors. NIST launched a new AI risk management profile for critical infrastructure. The White House released a national AI policy framework recommending federal preemption of state laws. A federal court allowed AI hiring bias claims to proceed in Mobley v. Workday.
The authorities are different, the mandates are different, but the operational question is the same: can the organization produce evidence of how AI data is handled?
What the GSA Clause Requires
The clearest source of operational specificity is GSA’s draft GSAR 552.239-7001, published March 6, 2026. It applies to any GSA Schedule contract involving AI capabilities and reaches any contractor using AI tools in government contract performance.
The data ownership terms: Government Data, defined to include all inputs and outputs in a government context, belongs to the government. Contractors cannot use it to train, fine-tune, or improve models, or to inform business decisions. At contract end, all Government Data must be securely deleted and the contractor must certify deletion in writing.
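A written deletion certification is only useful if it can later be verified. One way to make it auditable is a machine-readable record with a content hash, so an audit can detect after-the-fact tampering. This is a minimal sketch; the field names and hashing choice are illustrative assumptions, not anything the draft clause prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone

def deletion_certificate(contract_id, data_stores, certifier):
    """Build a machine-readable deletion certification record.

    Field names are illustrative only; the draft clause requires written
    certification but does not prescribe a format.
    """
    record = {
        "contract_id": contract_id,
        "certified_by": certifier,
        "certified_at": datetime.now(timezone.utc).isoformat(),
        "data_stores_deleted": sorted(data_stores),
    }
    # A hash over the canonical JSON lets a later audit detect tampering.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["record_sha256"] = hashlib.sha256(canonical).hexdigest()
    return record

# Hypothetical contract number and store names, for illustration.
cert = deletion_certificate(
    "GS-00F-0001",
    ["s3://gov-inputs", "vector-db/gov"],
    "compliance@contractor.example",
)
```

The design choice worth noting is that the certificate names the specific data stores wiped: a bare "all data deleted" attestation cannot be cross-checked against a data-flow map, while an enumerated list can.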
The processing-evidence requirements: for systems using intermediary processing such as reasoning, retrieval, or agentic workflows, GSAR 552.239-7001 requires summarized intermediate processing actions and decision points, model routing decisions with accompanying rationale, and data retrieval methods with complete source attribution, including direct links and relevant excerpts from materials used in generation. That means governance is tied to reconstructing what data entered the system, what happened to it, and what sources contributed to the output.
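In practice this means each request through an agentic pipeline needs a per-step trace: what the step did, which model handled it, why that route was chosen, and which sources fed the output. A minimal sketch of such a trace record follows; the schema and step names are assumptions, since the draft clause describes the evidence required but not a format.

```python
from datetime import datetime, timezone

def log_processing_step(trace, step_type, model, rationale, sources):
    """Append one intermediate-processing record to a request trace.

    Captures the three evidence categories described in the draft clause:
    the processing action, the routing decision with rationale, and
    source attribution (links plus excerpts). Schema is illustrative.
    """
    trace.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "step": step_type,        # e.g. "routing", "retrieval", "generation"
        "model": model,           # which model handled this step
        "rationale": rationale,   # why this route or tool was chosen
        "sources": sources,       # [{"url": ..., "excerpt": ...}, ...]
    })
    return trace

# Hypothetical two-step trace for one request.
trace = []
log_processing_step(trace, "routing", "router-v2",
                    "query classified as policy lookup", [])
log_processing_step(trace, "retrieval", "embed-small",
                    "top-3 passages by cosine similarity",
                    [{"url": "https://example.gov/policy.pdf",
                      "excerpt": "Section 4.2 ..."}])
```

Because each record carries both the routing rationale and the retrieved excerpts, the full trace is enough to reconstruct what data entered the system and what sources contributed to the output, which is the reconstruction standard the clause implies.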
The retention requirements: all relevant logs, forensic images, and incident artifacts must be preserved for a minimum of 90 calendar days after a security incident involving Government Data.
The access-control requirements: GSAR 552.239-7001 mandates “eyes-off” handling, restricting human review of Government Data except where strictly necessary. Any human access must be logged, justified, limited to the minimum necessary, and visible to the government. Government Data must be logically segregated from non-government customer data through access controls, policy enforcement points, labeling, and encryption.
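The enforcement point for “eyes-off” handling can be sketched as a single gate: records carry a data label, non-government data passes through normally, and any human read of government-labeled data requires a justification and leaves an audit entry. The labels, field names, and logger are illustrative assumptions, not the clause’s terms.

```python
import logging

AUDIT = logging.getLogger("gov-data-access")

def access_government_data(record, requester, justification=None):
    """Gate human access to a labeled record; log any granted access.

    A minimal sketch of "eyes-off" handling: the "government" label is a
    hypothetical segregation marker, and denied-by-default human access
    becomes an audited exception rather than a normal read path.
    """
    if record.get("label") != "government":
        return record["payload"]      # non-government data: normal path
    if justification is None:
        raise PermissionError("eyes-off: human access requires justification")
    AUDIT.info("human access: requester=%s justification=%s record=%s",
               requester, justification, record["id"])
    return record["payload"]

# Hypothetical labeled record.
gov_record = {"id": "r-001", "label": "government", "payload": "salary table"}
```

The point of routing every read through one gate is that the audit log and the segregation labels come from the same code path, so the access evidence the government can demand is produced as a side effect of enforcement rather than assembled after the fact.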
The disclosure timelines are tight: 30 days to identify all AI systems used in performance, 7 days to report material changes affecting bias or safety guardrails, and 72 hours to report security incidents to CISA.
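Three different clocks running from three different trigger events is exactly the kind of detail that slips when tracked in prose. A small sketch of deadline computation, with the windows taken from the figures above and the event-type keys as illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Reporting windows from the draft clause, keyed by hypothetical event types.
WINDOWS = {
    "ai_inventory": timedelta(days=30),        # identify AI systems in performance
    "guardrail_change": timedelta(days=7),     # material bias/safety changes
    "security_incident": timedelta(hours=72),  # report to CISA
}

def report_deadline(event_type, event_time):
    """Return the latest time a report can be filed for a trigger event."""
    return event_time + WINDOWS[event_type]

# An incident detected June 1, 09:00 UTC must reach CISA by June 4, 09:00 UTC.
incident = datetime(2026, 6, 1, 9, 0, tzinfo=timezone.utc)
deadline = report_deadline("security_incident", incident)
```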
OMB has declared compliance with the clause “material to contract eligibility and payment,” language that could trigger False Claims Act liability. GSAR 552.239-7001 is currently in draft (deferred from MAS Refresh 31 to Refresh 32 after industry pushback from BSA, the U.S. Chamber of Commerce, and multiple law firms), but the direction is established.
Where the Same Pattern Appears Elsewhere
NIST’s April 7 concept note for a Trustworthy AI in Critical Infrastructure Profile extends governance requirements to operational technology. It covers all 16 critical infrastructure sectors and explicitly includes use cases such as AI-powered digital twins, autonomous robots with deterministic fail-safe controllers, and AI-enabled compliance monitoring. The profile will define trustworthiness requirements that operators must communicate across their supply chains, meaning governance evidence will need to flow beyond the organization into vendor and partner relationships.
In Mobley v. Workday, Judge Rita Lin’s March 6 ruling allowed core age discrimination claims against an AI hiring system to proceed under the ADEA. Baker Botts’ analysis frames the implication: employers using AI-assisted screening should be prepared to explain what the system does, how it is configured, and what monitoring exists to detect disparate impact. The exact discovery expectations are not yet standardized, but the direction points toward operational evidence about data flows, not policy statements about fairness.
Why This Matters Now
The compliance timeline is compressing. Colorado’s AI Act takes effect June 30, 2026. The EU AI Act’s transparency and high-risk rules begin August 2, 2026. California’s ADMT regulations take effect January 1, 2027. GSAR 552.239-7001, once finalized, will apply via mass modification with a 60-day acceptance window.
These regimes do not align cleanly. GSAR 552.239-7001 requires that AI systems “must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.” The EU AI Act requires providers of high-risk systems to implement safeguards against harmful outputs. An organization operating under both faces a compliance conflict that policy language cannot resolve. It requires architectural workload segregation.
Federal preemption of state AI laws has been recommended by the White House but not legislated, which means enterprises must comply with state requirements that may later be overridden. That uncertainty makes data-flow controls more operationally valuable, not less. Mapping where AI data goes, enforcing retention boundaries, and producing access evidence are jurisdiction-neutral capabilities. An organization that builds these once can configure them to satisfy GSA requirements, Colorado’s impact assessments, the EU AI Act’s high-risk obligations, and future federal legislation with the same underlying infrastructure. The alternative, separate compliance programs per jurisdiction, does not scale.
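The build-once, configure-per-regime claim can be made concrete: a single data-flow record, evaluated against per-regime checks that all read from the same fields. The regime names below are real, but the rule logic is deliberately illustrative; each regime’s actual obligations are far richer than a one-line predicate.

```python
# One data-flow record for a hypothetical system, with illustrative fields.
FLOW = {
    "system": "resume-screener",
    "data_categories": ["applicant_pii"],
    "stores": ["s3://hr-inbox"],
    "retention_days": 90,
    "human_access_logged": True,
}

# Per-regime checks over the SAME record. Rule contents are placeholder
# assumptions, standing in for each regime's real evidence requirements.
REGIME_CHECKS = {
    "GSAR 552.239-7001": lambda f: f["human_access_logged"],
    "Colorado AI Act":   lambda f: "applicant_pii" in f["data_categories"],
    "EU AI Act":         lambda f: f["retention_days"] <= 365,
}

def evaluate(flow):
    """Map one data-flow record to a pass/fail verdict per regime."""
    return {name: check(flow) for name, check in REGIME_CHECKS.items()}
```

The structural point is that adding a jurisdiction means adding an entry to the check table, not instrumenting the systems again; the data-flow record itself is the jurisdiction-neutral asset.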
What Remains Uncertain
GSAR 552.239-7001 is in draft and the final language may change after substantial industry feedback. But the operational requirements around data ownership, processing evidence, and access control reflect a direction that is unlikely to reverse.
Whether federal preemption passes Congress is unknown. Colorado enforcement begins in two months. Organizations cannot wait for legislative clarity.
The NIST CI Profile is a concept note, not a finished standard. Its use cases signal where governance is heading for operational technology, but specific control requirements have not been drafted.
And a fundamental question remains: how does an organization certify data-flow controls across AI systems that depend on closed-source models with opaque data handling? GSAR 552.239-7001 makes the contractor responsible for service provider compliance even when the service provider is not a party to the contract. That turns third-party opacity from an abstract governance concern into a specific contractual liability. The governance obligations are becoming precise. The ability to fulfill them across third-party AI infrastructure has not caught up.
Further Reading
- GSA Draft GSAR 552.239-7001, “Basic Safeguarding of Artificial Intelligence Systems,” March 6, 2026
- White House: National Policy Framework for Artificial Intelligence, March 20, 2026
- NIST: Concept Note for AI RMF Profile on Trustworthy AI in Critical Infrastructure, April 7, 2026
- Baker Botts: AI Legal Watch, April 2026
- Holland & Knight: “GSA’s Proposed AI Clause: A Deep Dive,” March 2026