G360 Technologies

NIST’s Cyber AI Profile Draft: How CSF 2.0 Is Being Extended to AI Cybersecurity

A security team is asked to “do a CSF assessment” for a new AI assistant that connects to internal content and external model APIs. Everyone agrees CSF is the right backbone, but the team keeps getting stuck on the same questions: What counts as an AI asset? Where do prompts, model access, and training data fit?

How do you describe AI-specific threats without creating a parallel framework?

NIST’s new draft profile is an attempt to make that mapping concrete.

In December 2025, NIST published an Initial Preliminary Draft of the Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596 iprd), positioned as a CSF 2.0 Community Profile focused on AI-related cybersecurity risk.

The draft ran a public comment period from December 16, 2025, through January 30, 2026. NIST also scheduled a follow-on workshop on January 14, 2026, to discuss the preliminary draft.

The Cyber AI Profile is designed to integrate into existing cybersecurity programs rather than replace them. It is organized around the NIST Cybersecurity Framework (CSF) 2.0 and coordinated with other NIST risk frameworks that organizations already use.

Two pieces of context matter for how to read the document:

It is an “Initial Preliminary Draft.” NIST explicitly framed it as an early release to share current thinking and solicit feedback before an Initial Public Draft and a final profile.

It intentionally avoids a narrow definition of “AI.” The draft uses “AI systems” broadly, covering stand-alone AI systems and AI embedded into other applications, infrastructure, and processes.

NIST ties the profile into a larger set of NIST AI risk work, including the AI Risk Management Framework (AI RMF 1.0), released January 26, 2023, and the Generative AI Profile (NIST AI 600-1), published July 26, 2024.

How The Mechanism Works

At its core, the Cyber AI Profile is a structured overlay on CSF 2.0.

It starts with CSF 2.0 outcomes. The profile is organized by the CSF 2.0 Functions and their Categories and Subcategories. In the draft, this is implemented as a set of tables aligned to each CSF Function: GOVERN, IDENTIFY, PROTECT, DETECT, RESPOND, and RECOVER.

It adds three AI Focus Areas. For each CSF outcome, the profile layers AI cybersecurity considerations through three Focus Areas: Secure (cybersecurity of AI system components and the ecosystem they rely on), Defend (use of AI capabilities to improve cyber defense activities), and Thwart (resilience against adversaries using AI to enhance attacks). These Focus Areas are meant to structure AI-related cybersecurity risk without creating a separate framework taxonomy.

It uses table columns to connect each outcome to AI-specific guidance. For each CSF Subcategory, the draft provides: general considerations (baseline cybersecurity considerations); focus-area-specific considerations that describe AI-relevant threats, mitigations, and implementation details under Secure, Defend, and Thwart; proposed priority signals for focus-area work (the draft uses a 1–3 scale to indicate where organizations may focus first); and example informative references, with NIST noting the list is incomplete and undergoing further literature review.
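As a rough sketch of how one profile row could be captured in structured form for internal tooling, the following uses illustrative field names and example text; it is not NIST's schema, which the draft publishes only as tables:

```python
from dataclasses import dataclass, field

@dataclass
class FocusAreaEntry:
    """Considerations for one Focus Area: Secure, Defend, or Thwart."""
    considerations: str
    priority: int  # the draft's proposed 1-3 scale for where to focus first

@dataclass
class ProfileRow:
    """One CSF 2.0 Subcategory row of the profile (illustrative structure)."""
    subcategory: str                 # e.g. a CSF 2.0 Subcategory identifier
    general: str                     # baseline cybersecurity considerations
    focus_areas: dict = field(default_factory=dict)   # name -> FocusAreaEntry
    references: list = field(default_factory=list)    # informative references

# Hypothetical example row (wording is illustrative, not quoted from the draft)
row = ProfileRow(
    subcategory="ID.AM-01",
    general="Hardware and software inventories are maintained.",
)
row.focus_areas["Secure"] = FocusAreaEntry(
    considerations="Include AI models, datasets, and AI service "
                   "dependencies in asset inventories.",
    priority=1,
)
```

Capturing the rows this way would let a team filter by Focus Area or priority when planning assessments, which is exactly the kind of tooling-oriented use NIST's format questions are probing.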

It explicitly solicits feedback on usability and structure. The draft asks how stakeholders would use the profile, whether Focus Areas should be presented together or separately, which delivery formats are preferred (including tooling-oriented formats), and what glossary terms and informative references should be added.

What This Actually Forces Into The Open

This draft matters because it takes a problem many enterprises already have and forces it into a consistent control language: how to treat AI systems as part of normal cybersecurity risk management while still acknowledging that AI introduces distinct attack surfaces and failure modes.

The immediate consequence is visibility. Teams that have been running AI pilots without formal asset classification now have to answer: where is the model hosted, who can access it, what data does it touch, and what happens if it gets compromised or starts behaving unexpectedly? The profile does not allow those questions to stay vague. CSF mapping requires explicit answers, which means AI systems that were treated as “innovation projects” become governed infrastructure with incident response obligations.

The structure is also a signal. By publishing this as a CSF 2.0 Community Profile, NIST is making a specific governance move: AI cybersecurity risk is expected to map to the same enterprise cybersecurity outcomes used for everything else, including governance, asset identification, protective controls, detection, response, and recovery. Organizations that built AI security programs in parallel to their existing cybersecurity frameworks now have a forcing function to consolidate.

The timing is deliberate. The draft was published in December 2025, with an immediate comment window and a January 2026 workshop, indicating NIST is actively pulling industry input to refine both the content and the practical form factor before the next draft stage. The speed suggests NIST expects this to move quickly from draft to operational guidance.

Implications for Enterprises

Operational Implications

Program integration becomes easier to frame, but more explicit in what it demands. Teams that already operate CSF-based assessments can use the profile to structure AI cybersecurity discussions in familiar CSF terms instead of inventing AI-only assessment categories. The trade-off is that AI systems can no longer be evaluated in isolation. If a marketing team deploys a chatbot that connects to a third-party API, that deployment now requires the same level of asset documentation, access control review, and incident response planning as any other system that handles enterprise data.

Inventory and dependency mapping pressure increases. The profile’s CSF alignment pushes organizations toward an explicit view of AI systems and their dependencies as governed assets, including embedded AI, not only obvious stand-alone deployments. This is where the friction shows up. Teams have to identify not just the chatbot, but the API it calls, the authentication mechanism it uses, the data sources it accesses, and the logging infrastructure that captures its behavior. Many organizations do not have that level of visibility today, especially for AI integrations that were deployed quickly or embedded into existing tools.
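A minimal sketch of what that dependency mapping might look like as data, using the marketing-chatbot example from above; every asset and dependency name here is hypothetical:

```python
# Hypothetical inventory entry for an embedded AI deployment
chatbot = {
    "asset": "marketing-chatbot",
    "type": "ai-embedded",
    "depends_on": [
        {"name": "vendor-llm-api",        "kind": "external-api"},
        {"name": "oauth-service-account", "kind": "credential"},
        {"name": "product-docs-index",    "kind": "data-source"},
        {"name": "chat-audit-log",        "kind": "logging"},
    ],
}

def unlogged_assets(inventory):
    """Flag AI assets whose inventory records no logging dependency."""
    return [
        a["asset"]
        for a in inventory
        if not any(d["kind"] == "logging" for d in a["depends_on"])
    ]
```

Even a simple record like this makes gaps queryable: an AI integration with no logging dependency, or no credential entry, surfaces immediately instead of during an incident.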

Incident response and recovery planning must include AI artifacts. The profile’s RESPOND and RECOVER alignment makes it harder to treat AI incidents as “product issues” rather than operational security events with rollback and recovery considerations. If a model starts producing incorrect outputs or gets poisoned through adversarial inputs, the organization needs a documented process for detection, containment, root cause analysis, and recovery. That includes knowing how to roll back to a previous model version, how to validate that the issue is resolved, and how to communicate the incident internally and externally.

Technical Implications

Access control expands to models, prompts, and AI integrations. The draft frames AI cybersecurity as including the components and interfaces that make AI systems operational, including how systems are accessed and how data flows into and out of them. This expands the scope of what needs to be controlled. Model weights, training datasets, fine-tuning pipelines, prompt templates, and API keys all become access-controlled resources. Organizations that treated model access as a developer convenience rather than a security boundary now have to implement the same rigor they apply to database credentials or admin consoles.
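A minimal deny-by-default sketch of that expanded scope, with hypothetical role and resource names standing in for real IAM policy:

```python
# Hypothetical policy: AI resources gated like any other credentialed asset
POLICY = {
    "model-weights:prod":   {"ml-release"},
    "prompt-templates":     {"ml-release", "app-dev"},
    "llm-api-key:vendor":   {"app-runtime"},
    "training-data:raw":    {"data-eng"},
}

def can_access(role: str, resource: str) -> bool:
    """Deny by default; allow only roles explicitly granted on the resource."""
    return role in POLICY.get(resource, set())
```

The point of the sketch is the scope, not the mechanism: prompt templates and vendor API keys appear in the same policy table as production model weights, instead of living outside access governance as developer conveniences.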

Detection and monitoring must account for AI behavior. CSF DETECT mapping encourages monitoring for anomalous AI behavior and AI-relevant security signals, not only traditional infrastructure telemetry. This is a different kind of monitoring problem. Traditional security monitoring looks for unauthorized access, malware, or data exfiltration. AI monitoring also has to detect model drift, adversarial inputs, prompt injection attempts, and outputs that indicate the model has been compromised or is behaving unexpectedly. Teams need tooling that can baseline normal model behavior and flag deviations, which is not a capability most SOCs have today.
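As a toy example of baselining one behavioral signal, here a daily refusal rate, and flagging large deviations; the metric, numbers, and threshold are all illustrative, and real drift detection would track many signals:

```python
import statistics

def drift_alert(baseline, current, k=3.0):
    """Return True when `current` deviates more than k sigma from baseline."""
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return abs(current - mean) > k * sd

# Baseline: daily fraction of prompts the model refused in normal operation
baseline = [0.021, 0.019, 0.023, 0.020, 0.022, 0.018, 0.021]

drift_alert(baseline, 0.020)  # ordinary day -> False
drift_alert(baseline, 0.15)   # sudden refusal-rate spike -> True
```

A spike like the second case does not prove compromise, but it is the kind of AI-specific signal, distinct from access logs or malware telemetry, that CSF DETECT mapping asks teams to baseline and watch.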

Tooling format becomes part of adoption. NIST’s explicit questions about delivery formats and reference tooling indicate the profile is intended to be used in operational workflows, not only read as narrative guidance. If NIST delivers this as structured data or integrates it into GRC platforms, adoption accelerates. If it remains a PDF, teams have to manually translate guidance into their own assessment templates, which slows implementation and creates inconsistency across organizations.

Risks and Open Questions

The reference base is still under construction. NIST notes that informative references are incomplete and tied to ongoing literature review, which can limit how directly teams can map profile guidance to implementation evidence today. Organizations that want to show compliance or maturity against the profile will struggle to identify authoritative sources for specific controls until NIST completes the reference mapping.

The glossary is not yet populated. The draft indicates the glossary will be expanded in future drafts and explicitly requests feedback on terms to include. That means definitional consistency is still a work item, not a solved layer. Teams assessing the same AI system could use different terminology for the same components, which creates gaps in assessments and makes it harder to compare maturity across organizations.

The focus area presentation could affect usability. NIST is actively asking whether Secure, Defend, and Thwart should be shown together or separately. That choice can materially change how practitioners use the profile for assessments versus for roadmap planning. If all three Focus Areas are shown together for each CSF Subcategory, assessments become comprehensive but dense. If they are presented separately, teams can focus on one area at a time but risk losing sight of how the areas interact.

Comment handling is not private. The draft states that comments are subject to FOIA, which can shape how candid organizations are in giving examples or sharing incident-informed feedback. Organizations that have experienced AI security incidents may be reluctant to describe those incidents in public comments, which means NIST may not get the most operationally useful feedback during the comment period.

Further Reading

NIST IR 8596 iprd, Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), Initial Preliminary Draft

NIST CSRC, NIST releases prelim draft of Cyber AI profile

NIST News, Draft NIST Guidelines Rethink Cybersecurity for the AI Era

NIST Cybersecurity Framework (CSF) 2.0

NIST AI Risk Management Framework (AI RMF 1.0)

NIST AI 600-1, Generative AI Profile

NCCoE, Cyber AI Profile Project Page

NCCoE, Cybersecurity and AI Workshop Concept Paper