NIST AI RMF: The Framework That Quietly Became Standard
The NIST AI Risk Management Framework (AI RMF 1.0, NIST AI 100-1) was released on January 26, 2023. It is voluntary. Nobody will fine you for ignoring it. And yet, in the three years since publication, it has become the de facto standard for AI risk management in the United States — referenced in federal procurement requirements, adopted by state regulators as a compliance benchmark, and embedded in corporate governance programs across industries.
How did a voluntary framework achieve what mandatory regulations often cannot? By being genuinely useful.
The Four Functions, Explained Without Jargon
The AI RMF is organized around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Each contains categories and subcategories with specific outcomes. Together, they form a lifecycle approach to AI risk — not a one-time assessment, but a continuous process.
GOVERN: The Function Everyone Skips
GOVERN is where most organizations stumble, and it is the function that matters most. Section 5 of the framework states that GOVERN "cultivates and implements a culture of risk management within organizations designing, developing, deploying, or using AI systems." It is not about governing individual AI systems. It is about governing how your organization approaches AI risk as a whole.
GOVERN covers organizational policies and procedures, roles and responsibilities, workforce diversity and AI expertise, organizational culture around risk, and stakeholder engagement processes. NIST describes GOVERN as cross-cutting — it "informs and is informed by" the other three functions. We call it the gravity function because without it, MAP, MEASURE, and MANAGE float untethered. You can have brilliant risk identification and precise measurement, but without the governance structures to act on findings, the exercise is academic.
In practice, GOVERN means having a named executive accountable for AI risk, documented policies that define acceptable use and risk thresholds, training programs that build AI literacy across the organization (not just in the data science team), and feedback mechanisms that allow affected stakeholders to surface concerns.
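To make that concrete, here is a minimal sketch of a GOVERN outcome expressed as a machine-readable record. NIST does not prescribe any schema; the class, field names, and example values below are illustrative assumptions, not framework requirements.

```python
from dataclasses import dataclass

# Illustrative sketch only: the AI RMF does not define a schema for GOVERN
# outcomes. Field names and example values are assumptions for this sketch.

@dataclass
class GovernancePolicy:
    accountable_executive: str          # named owner of AI risk (GOVERN 2)
    acceptable_use: list[str]           # documented acceptable-use statements
    risk_appetite: str                  # plain-language risk threshold
    training_required_roles: list[str]  # who must complete AI literacy training
    feedback_channel: str               # how affected stakeholders raise concerns (GOVERN 5)

policy = GovernancePolicy(
    accountable_executive="Chief Risk Officer",
    acceptable_use=["No fully automated adverse decisions about individuals"],
    risk_appetite="High-impact systems require executive sign-off before deployment",
    training_required_roles=["engineering", "product", "legal", "support"],
    feedback_channel="ai-feedback@example.com",
)
```

The point is not the code; it is that each GOVERN outcome ends up with a named owner and an auditable artifact behind it.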
Most organizations start with MAP or MEASURE because they feel more concrete. That is backwards. Start with GOVERN. Establish who owns AI risk, what your risk appetite is, and how decisions get made. Everything else flows from there.
MAP: Understanding Context Before Quantifying Risk
MAP is the framework's risk assessment function. Its purpose is to establish context — to understand the conditions, constraints, and stakeholders surrounding an AI system before attempting to quantify risk.
MAP has five categories: establishing the context and intended purpose of the system (MAP 1), categorizing the AI system (MAP 2), understanding its capabilities, targeted usage, and expected benefits and costs against appropriate benchmarks (MAP 3), mapping risks and benefits for all system components, including third-party software and data (MAP 4), and characterizing impacts on individuals, groups, communities, organizations, and society (MAP 5).
The critical insight in MAP is its insistence on breadth. NIST explicitly calls for input from internal teams, external collaborators, end users, and affected communities. This is not participatory theater. AI risks are distributed — they affect people who were never consulted during development. MAP forces organizations to look beyond technical performance and consider real-world impact.
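One way to honor that breadth in practice is to make stakeholder input a required part of the mapping record itself. A minimal sketch follows; the intake structure, field names, and validation rules are our invention, loosely tracking the five MAP categories, not a NIST-defined format.

```python
from dataclasses import dataclass

# Sketch of a MAP intake record. Fields loosely follow the five MAP
# categories; the names and validation rules are assumptions, not NIST's.

@dataclass
class MapRecord:
    system_name: str
    intended_purpose: str              # MAP 1: context and intended use
    system_category: str               # MAP 2: how the system is categorized
    benefits_and_costs: str            # MAP 3: capabilities vs. benchmarks
    third_party_components: list[str]  # MAP 4: risks/benefits incl. third parties
    affected_groups: list[str]         # MAP 5: impacts on people and communities
    consulted_stakeholders: list[str]  # who actually gave input

    def gaps(self) -> list[str]:
        """Flag omissions that would reduce the mapping to participatory theater."""
        issues = []
        if not self.affected_groups:
            issues.append("No affected groups identified (MAP 5 incomplete)")
        if not self.consulted_stakeholders:
            issues.append("No stakeholder input recorded; mapping is internal-only")
        return issues
```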
MEASURE: Quantifying What You Found
MEASURE takes the risks identified in MAP and applies metrics, benchmarks, and evaluation methods. It covers quantitative and qualitative assessment approaches, effectiveness of existing controls, tracking of identified risks over time, and performance metrics tied to trustworthiness characteristics.
NIST identifies seven trustworthiness characteristics: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. MEASURE asks you to evaluate your AI systems against each.
The challenge we see most often: organizations measure what is easy (accuracy, latency, throughput) and ignore what is hard (fairness, explainability, real-world validity). MEASURE pushes against this tendency by defining expected outcomes across all trustworthiness dimensions.
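Tooling can push back on this tendency too. A minimal sketch, assuming a simple scorecard format of our own design: the check refuses any assessment that does not record a status, even "not yet measured", for all seven characteristics.

```python
# The seven AI RMF trustworthiness characteristics. The scorecard format and
# the example statuses are illustrative assumptions, not NIST requirements.

CHARACTERISTICS = {
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
}

def validate_scorecard(scorecard: dict[str, str]) -> None:
    """Reject any assessment that silently skips a characteristic."""
    missing = CHARACTERISTICS - scorecard.keys()
    if missing:
        raise ValueError(f"Scorecard incomplete, missing: {sorted(missing)}")

validate_scorecard({
    "valid_and_reliable": "accuracy 0.94 on holdout set",
    "safe": "red-team review scheduled",
    "secure_and_resilient": "penetration test passed",
    "accountable_and_transparent": "model card published",
    "explainable_and_interpretable": "feature-attribution summaries reviewed",
    "privacy_enhanced": "PII stripped at ingestion",
    "fair_with_harmful_bias_managed": "not yet measured",  # a visible gap, not a silent one
})
```

Making "not yet measured" a recorded status turns the hard dimensions into visible gaps rather than silent omissions.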
MANAGE: Acting on What You Know
MANAGE is the response function. Once risks are identified (MAP) and quantified (MEASURE), MANAGE defines how the organization responds — through mitigation, monitoring, incident response, and communication.
MANAGE covers resource allocation for risk treatment, risk response strategies (accept, mitigate, transfer, avoid), incident planning and response procedures, and ongoing monitoring and reassessment triggers.
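As a sketch of how those response strategies might be encoded in a risk register (the record structure and trigger names are assumptions, not part of the framework):

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    """The four classic risk responses named above."""
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskTreatment:
    risk_id: str
    response: Response
    owner: str                    # who is accountable for executing the response
    reassess_triggers: list[str]  # events that force re-running MAP and MEASURE

treatment = RiskTreatment(
    risk_id="RISK-017",           # hypothetical identifier
    response=Response.MITIGATE,
    owner="ML Platform Team",
    reassess_triggers=["model retrained", "drift alert", "spike in user complaints"],
)
```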
The framework is explicit that MANAGE is not a one-time activity: risk management is a continuous process applied throughout the AI lifecycle. This aligns directly with the EU AI Act's Article 9(2), which requires that risk management for high-risk systems be a "continuous iterative process" run throughout the system's entire lifecycle.
The GOVERN Gap: Why It Matters
In July 2024, NIST released AI 600-1, the Generative AI Profile, extending the AI RMF to address risks specific to large language models and generative systems. Even in this extension, GOVERN remains the foundational function — with specific guidance on governing generative AI risks including hallucination, data provenance, and content authenticity.
Across implementations we have supported, the GOVERN function is consistently the weakest. Organizations allocate 60-70% of their effort to MAP and MEASURE — the technical and analytical work — and treat GOVERN as overhead. The result: risk assessments that produce excellent documentation and change nothing, because the governance structures to act on findings do not exist.
If you take one lesson from NIST AI RMF, make it this: governance precedes measurement. Build the decision-making infrastructure first.
Relationship to Other NIST Frameworks
The AI RMF does not exist in isolation within the NIST ecosystem. It is designed to complement:
NIST Cybersecurity Framework (CSF 2.0) — Uses a similar function-based structure (Identify, Protect, Detect, Respond, Recover, plus the new Govern function). Organizations already using CSF will find the AI RMF structurally familiar.
NIST Privacy Framework — Provides complementary controls for data minimization, consent management, and individual rights that intersect with AI-specific privacy risks.
NIST SP 800-53 — For federal agencies and government contractors, SP 800-53 security controls map to AI RMF outcomes, creating a path from existing security compliance to AI risk management.
If your organization already uses NIST frameworks, adopting the AI RMF is an extension, not a replacement.
Voluntary Framework, Growing Expectations
The AI RMF is voluntary, but "voluntary" does not mean "optional" in practice. Federal agencies increasingly reference it in procurement requirements. Executive Order 14110 (October 2023) directed agencies to use NIST frameworks. State regulations — including Colorado's AI Act effective February 2026 — reference NIST risk management approaches in their compliance guidance.
For AI compliance purposes, adopting NIST AI RMF positions you to meet current and future requirements. Our NIST AI RMF compliance guide provides a step-by-step implementation approach. Organizations pursuing ISO 42001 certification find significant overlap, reducing duplicative effort. Our framework comparison details these overlaps.
Where to Start
If your AI governance program is nascent, begin with three GOVERN outcomes: establish an AI risk management policy that embeds the trustworthiness characteristics (GOVERN 1.2), document roles, responsibilities, and lines of communication (GOVERN 2.1), and create a process for collecting and integrating stakeholder feedback (GOVERN 5.1). These three outcomes create the infrastructure for everything that follows.
Then pick your highest-risk AI system and run it through MAP. Not every system — one system. Learn the process, document what works and what does not, and iterate before scaling. Frameworks are tools. Their value is in the using, not the reading.
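If it helps to see the sequence as a single artifact, here is a starter checklist sketch. The subcategory labels follow AI RMF numbering; the tracking structure and wording are our own.

```python
# Starter plan matching the sequence above. Subcategory labels follow AI RMF
# numbering; the done-tracking structure itself is an illustrative assumption.

starter_plan = [
    ("GOVERN 1.2", "AI risk management policy written and approved"),
    ("GOVERN 2.1", "Roles, responsibilities, and lines of communication documented"),
    ("GOVERN 5.1", "Stakeholder feedback channel live and monitored"),
    ("MAP pilot", "Highest-risk system mapped end to end; lessons documented"),
]

done: set[str] = set()

def mark_done(step_id: str) -> None:
    done.add(step_id)
    remaining = [step for step, _ in starter_plan if step not in done]
    print(f"Completed {step_id}; remaining: {remaining or 'none, ready to scale'}")

mark_done("GOVERN 1.2")
```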
Implement NIST AI RMF across your AI portfolio with structured workflows and automated tracking. Start your free trial to see how Starkguard maps to each function.