G360 Technologies

Treasury’s New AI Risk Framework Gives the Financial Sector a Governance Playbook

Here is the problem every bank technology team knows. The model gets built fast. Governance takes forever. And when regulators ask for evidence that someone thought carefully about bias testing, data lineage, and explainability before deploying an AI credit underwriting system, the answer is usually a collection of emails, a few internal memos, and a hope that nobody asks too many follow-up questions.

In February 2026, the U.S. Department of the Treasury released the Financial Services AI Risk Management Framework (FS AI RMF) to close that gap. Built with the Cyber Risk Institute and shaped by input from more than 100 financial institutions and public agencies, the framework does something most AI governance guidance has not attempted. It gets specific.

Where the NIST Framework Ends

The NIST AI Risk Management Framework, released in 2023, gave the industry a solid four-function structure: Govern, Map, Measure, and Manage. It defined the vocabulary of AI risk. What it deliberately did not do was tell a bank’s compliance team what evidence to produce when an examiner walks in the door.

That is not a criticism. The NIST framework was designed to be cross-sector and principles-based, which is exactly what made it adoptable across industries. But financial services is not a cross-sector environment. Banks operate under model risk management requirements. They have consumer protection obligations. They run under continuous supervisory scrutiny. Abstract principles do not satisfy an OCC examiner asking for validation documentation.

The FS AI RMF takes the NIST structure and fills in that operational layer. The same four functions remain, but with substantially more operational detail about how they can be implemented.

230 Controls Is a Lot. Here Is How It Works.

The framework introduces roughly 230 control objectives, a number that looks daunting until you understand how implementation is structured. Institutions do not start by reading all 230. They start by taking a questionnaire.

The AI Adoption Stage Questionnaire classifies an organization by the extent and risk profile of its current AI deployments. The answers determine which controls apply. A community bank running a single vendor fraud detection tool has a very different control set than a large bank with internal model development teams building credit and trading systems from scratch.

From there, the toolkit has four main components:

Risk and Control Matrix: Maps risk statements to the relevant control objectives, organized by adoption stage. This is where institutions figure out which controls actually apply to them.

Guidebook and Control Reference Guide: Operational guidance on how to implement each control, including concrete examples of evidence that satisfies the requirement, intended to support audit preparation and supervisory review.

Quick Start Guide: A smaller control set for institutions early in their AI adoption. It establishes a governance baseline without requiring the full framework on day one.

Adoption Stage Questionnaire: The entry point that determines scope. Everything else flows from this.

The implementation sequence follows a five-step loop: Assess, Customize, Implement, Integrate, Evolve. The last step matters. AI systems drift, get retrained, expand into new use cases. The framework expects controls to evolve alongside the systems they govern.
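The scoping logic described above, where questionnaire answers determine which controls apply, can be sketched as a simple catalog lookup. The stage numbers, control IDs, and names below are hypothetical illustrations, not taken from the FS AI RMF itself:

```python
# Illustrative sketch of questionnaire-driven control scoping.
# Control IDs, names, and stage thresholds are invented for illustration;
# the real Adoption Stage Questionnaire defines its own categories.

CONTROL_CATALOG = {
    "GOV-01": {"name": "AI governance charter", "min_stage": 1},
    "MAP-04": {"name": "Use-case risk inventory", "min_stage": 1},
    "MEA-12": {"name": "Bias testing before deployment", "min_stage": 2},
    "MAN-27": {"name": "In-house model validation program", "min_stage": 3},
}

def applicable_controls(adoption_stage: int) -> list[str]:
    """Return the control IDs in scope for a given adoption stage."""
    return sorted(
        cid for cid, ctrl in CONTROL_CATALOG.items()
        if adoption_stage >= ctrl["min_stage"]
    )
```

A community bank at stage 1 would see only the baseline governance controls, while a stage-3 institution building models in-house picks up the full set, which is the proportionality mechanism the framework describes.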

What the Controls Actually Cover

The control set spans the full AI lifecycle across several operational domains. For a bank deploying AI in lending decisions, this translates into concrete governance requirements across four areas.

Model lifecycle management

Controls address model design, testing, monitoring, drift detection, explainability thresholds, and rollback procedures. For credit underwriting, this means documented processes for catching when a model stops performing as expected and clear steps for what happens next.
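One common way to implement the drift-detection piece of those controls is the Population Stability Index (PSI), which compares a model's score distribution in production against the distribution seen at validation. The bin proportions and the 0.2 alert threshold below are illustrative conventions, not framework mandates:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned score distributions."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # bin proportions at validation time
current  = [0.10, 0.20, 0.30, 0.40]  # bin proportions observed in production

score = psi(baseline, current)
if score > 0.2:  # common rule of thumb: PSI above 0.2 signals significant shift
    print(f"PSI={score:.3f}: trigger model review and rollback procedure")
```

The point is not the specific statistic but the documented trigger: when the check fires, a defined rollback or review process takes over, which is exactly the evidence trail the controls ask for.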

Consumer protection

Fairness, explainability, and data documentation requirements sit here. If your model is making credit decisions, you need to be able to explain those decisions, document what data went into training it, and demonstrate that fairness testing was actually performed rather than assumed.
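Demonstrating that fairness testing was actually performed means producing a concrete metric. One widely used check is the adverse impact ratio with the four-fifths rule drawn from EEOC guidance; the group labels and approval counts below are illustrative, and a real program would test multiple metrics:

```python
# Minimal adverse impact ratio (four-fifths rule) check, a hypothetical
# sketch of fairness-testing evidence, not the framework's prescribed method.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of approved applications in a group."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

reference_group = [True] * 80 + [False] * 20  # 80% approved
protected_group = [True] * 60 + [False] * 40  # 60% approved

ratio = adverse_impact_ratio(protected_group, reference_group)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"AIR = {ratio:.2f}: below 0.8 threshold, flag for fairness review")
```

Storing the computed ratio, the test data vintage, and the review outcome is what turns "fairness was assumed" into documented evidence.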

Resilience and security

Cybersecurity exposure, adversarial risks, and vendor dependencies all get coverage. AI systems introduce additional security considerations, including model inversion, adversarial inputs, and dependencies on external foundation models.

Third-party governance

This one is particularly relevant for smaller institutions. Most community banks and regional institutions are not building models in-house. They are buying or licensing them from technology vendors. The framework requires meaningful oversight of those vendor systems, which creates a real operational challenge when vendors are not forthcoming about their internal model practices.
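In practice, meaningful vendor oversight often reduces to checking whether a vendor's due diligence file contains the evidence artifacts the institution's control set requires. The artifact names below are hypothetical placeholders, not items from the FS AI RMF:

```python
# Sketch of a third-party evidence gap check. Artifact names are invented
# for illustration; an institution would derive its own list from its
# applicable controls.

REQUIRED_ARTIFACTS = {
    "model_card",               # intended use and known limitations
    "training_data_summary",    # data lineage documentation
    "fairness_test_report",     # evidence bias testing was performed
    "incident_response_contact" # escalation path if the model misbehaves
}

def missing_evidence(vendor_file: set[str]) -> set[str]:
    """Return the required artifacts the vendor has not supplied."""
    return REQUIRED_ARTIFACTS - vendor_file

supplied = {"model_card", "training_data_summary"}
gaps = missing_evidence(supplied)
```

A non-empty gap set is where the operational challenge described above becomes visible: the institution knows exactly what is missing, but only the vendor can produce it.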

Why This Matters Right Now

The framework arrives as AI moves from internal automation into decisions that directly affect consumers: credit approvals, fraud flags, customer service responses. As these systems take on higher-stakes functions, the gap between governance-as-principle and governance-as-practice becomes a regulatory liability.

What the FS AI RMF represents, beyond its specific controls, is a coordination mechanism. Treasury and the Cyber Risk Institute developed it with input from financial institutions, regulatory bodies, and standards organizations. The goal appears to be establishing a shared understanding of what adequate AI risk management looks like before formal regulation arrives and forces the conversation.

For institutions that have already built robust model risk management programs, much of this is not new territory. The framework was deliberately designed to align with existing MRM practices rather than replace them. What it adds is a structured way to extend those practices to AI-specific risks that traditional model validation was not designed to catch: algorithmic fairness, adversarial robustness, foundation model dependencies.

The Open Questions

The framework is voluntary. That status will matter a great deal for how quickly institutions adopt it. Voluntary frameworks in financial services have a history of becoming effectively mandatory once regulators begin referencing them in examination guidance, but that process takes time and the current framework offers no guarantee of that trajectory.

Vendor transparency remains an unsolved problem. The framework correctly identifies third-party AI oversight as a priority control area. It does not solve the practical reality that many vendors treat their model internals as proprietary and are not interested in producing the documentation their customers’ regulators want to see.

For institutions with global operations, the framework adds another governance layer to reconcile against international AI regulatory regimes that are moving on their own timelines with their own requirements.

And 230 controls, even proportionally applied, represent a real infrastructure investment. Legacy systems, limited governance staff, and tight technology budgets are not abstract concerns for the institutions this framework most needs to reach.

The Bottom Line

For most banks, the question of whether to adopt AI is already settled. The question now is how to govern it in a way that survives regulatory scrutiny and actually reduces risk rather than just producing documentation that says it does.

The FS AI RMF does not answer every question. It does provide the most operationally specific public guidance the financial sector has received so far on how to translate AI governance commitments into artifacts that mean something. For institutions still treating AI governance as a policy exercise, it is a signal that the policy-only approach has a limited shelf life.

Further Reading

U.S. Department of the Treasury

Cyber Risk Institute

NIST AI Risk Management Framework

JD Supra

Cooley FinInsights

Captain Compliance

Dunn Ixer