
NIST AI Risk Management Framework Guide

What is NIST AI RMF?

NIST AI Risk Management Framework (AI RMF) 1.0 is a voluntary framework published by the US National Institute of Standards and Technology to help organizations manage risks associated with artificial intelligence systems.

Official Name: NIST AI Risk Management Framework (AI RMF 1.0)

Published: January 26, 2023

Developed By: National Institute of Standards and Technology (NIST), US Department of Commerce

Framework Type: Voluntary risk management framework (non-certifiable)

Primary Audience:

  • US federal agencies and contractors
  • Organizations subject to Executive Order 14110 (Safe, Secure, and Trustworthy AI)
  • Enterprises seeking structured AI risk management

Why NIST AI RMF Matters for AI Governance

1. Federal Mandate for US Government

Executive Order 14110 (October 30, 2023) requires federal agencies to:

  • Use NIST AI RMF as the foundation for AI risk management
  • Report AI risk management activities to OMB (Office of Management and Budget)
  • Implement AI governance practices aligned with the framework

This matters because:

  • Federal contractors working with US government agencies increasingly face NIST AI RMF compliance requirements in procurement
  • OMB memoranda reference NIST AI RMF as the baseline for federal AI governance
  • Agency-specific guidance (DOD, DHS, HHS, etc.) builds on NIST AI RMF foundations

| Sector | NIST AI RMF Status | Compliance Driver |
|---|---|---|
| US Federal Agencies | Mandatory (via EO 14110) | OMB policy |
| Defence Contractors | Strongly recommended | DOD procurement requirements |
| Critical Infrastructure | Recommended | CISA guidance, voluntary frameworks |
| Private Sector | Voluntary | Best practice, investor due diligence |

2. Designed for Risk-Based Decision Making

Unlike prescriptive standards that mandate specific controls, NIST AI RMF is principles-based:

Risk-Based Philosophy:

  • Organizations define their own risk tolerance
  • Framework provides structure, not mandates
  • Emphasis on context-specific risk assessment

Comparison:

| Framework | Approach | Certification? | Flexibility |
|---|---|---|---|
| ISO 42001 | Prescriptive (Annex SL structure) | ✅ Yes | Lower (must meet specific clauses) |
| NIST AI RMF | Principles-based | ❌ No | Higher (adapt to context) |
| EU AI Act | Legal requirements | ⚠️ Partial | Lower (mandated for high-risk) |

Best for:

  • Organizations needing flexibility in implementation
  • US federal contractors requiring alignment without certification burden
  • Enterprises seeking scalable governance that grows with AI maturity

3. Harmonized with Other NIST Frameworks

NIST AI RMF integrates seamlessly with established NIST frameworks:

| Framework | Relationship to AI RMF |
|---|---|
| NIST Cybersecurity Framework (CSF) | Shares the function-based structural philosophy (IDENTIFY, PROTECT, DETECT, RESPOND, RECOVER) |
| NIST Privacy Framework | Crosswalk available for AI + privacy integration |
| NIST RMF (Risk Management Framework) | AI RMF extends federal risk management practice (SP 800-37) to AI-specific risks |

Benefit: If you already use NIST CSF for cybersecurity, AI RMF will feel familiar. Many organizations integrate both frameworks under a unified risk management program.

4. No Certification, But Conformance Attestation

While NIST AI RMF is not certifiable (no accredited certification bodies), organizations can:

Self-Attestation:

  • Declare conformance with NIST AI RMF
  • Document alignment in governance reports
  • Use SignalBreak evidence as proof

Third-Party Assessment:

  • Engage assessors (e.g., MITRE Corporation, consulting firms) for independent conformance review
  • No formal certification, but assessment report provides external validation

Federal Procurement:

  • Some RFPs require NIST AI RMF conformance attestation as qualification criterion
  • Assessment reports strengthen bids

How SignalBreak Maps to NIST AI RMF

SignalBreak provides automated evidence generation for all 4 NIST AI RMF functions. Your workflows, scenarios, and provider monitoring demonstrate conformance with:

The 4 Core Functions

| Function | Purpose | SignalBreak Evidence | Status |
|---|---|---|---|
| GOVERN | Establish AI governance culture, structures, and accountability | Workflow owners, governance platform | 🟡 Partial |
| MAP | Understand AI system context, categorize risks, assess impacts | Workflow inventory, provider mapping, scenarios | 🟢 Implemented |
| MEASURE | Analyze, assess, benchmark, and monitor AI risks | Provider monitoring (5,000+ signals), risk scoring | 🟡 Partial |
| MANAGE | Allocate resources, prioritize risks, respond to incidents | Risk prioritization, scenario impacts | 🟢 Implemented |

Overall Alignment: Typically 50-75% for organizations with SignalBreak evidence (varies by maturity)


NIST AI RMF Function Details

GOVERN: Governance and Accountability

Requirement: Cultivate and direct a culture and structure for responsible AI development and use, with clear roles, responsibilities, and accountability.

Subcategories (11 total, 5 key):

GOVERN 1.1: Legal and Regulatory Requirements

Description: Organization understands and documents applicable laws, regulations, and policies regarding AI.

SignalBreak Evidence:

  • Provider compliance tracking: 2 providers with governance data tracked (OpenAI, Anthropic)
  • Regulatory mappings: EU AI Act, ISO 42001, GDPR considerations in workflow documentation

Example from Evidence Pack:

"SignalBreak tracks 13+ AI regulations including EU AI Act, California SB 1047, and Colorado AI Act. Provider profiles document regulatory compliance status (GDPR-compliant processing, US Cloud Act implications)."

Audit Readiness: ✅ Fully evidenced — Provider registry demonstrates awareness of third-party legal obligations.


GOVERN 1.2: Responsible AI Principles

Description: Organization defines and documents AI principles such as fairness, transparency, explainability, safety, and accountability.

SignalBreak Evidence:

  • Workflow AI capability types: Text Generation, Image Analysis, Code Generation, etc. (documents intended AI use)
  • Criticality classification: Critical, High, Medium, Low (prioritizes safety based on business impact)

Example from Evidence Pack:

"6 workflows assessed for AI characteristics including criticality (4 Critical, 2 High, 0 Medium, 0 Low). Criticality framework demonstrates risk-based prioritization aligned with responsible AI principles."

Audit Readiness: 🟡 Partial — Workflow categorization demonstrates responsible AI awareness, but formal AI principles document (fairness policy, explainability requirements, etc.) needed for full conformance.

Gap Remediation: Create AI Principles Policy document referencing:

  • Fairness (bias mitigation in model selection)
  • Transparency (disclosure of AI use in workflows)
  • Safety (criticality-based testing requirements)
  • Accountability (workflow ownership per GOVERN 1.3)

Estimated effort: 2-4 days (draft), 1-2 weeks (stakeholder review + approval)


GOVERN 1.3: Accountability and Responsibility

Description: Workforce accountability and responsibility for AI system outcomes are clearly defined and documented.

SignalBreak Evidence:

  • Workflow owners: Each workflow should have assigned owner (currently gap in many implementations)
  • Accountability structure: Who is responsible for AI system failures?

Gap: 🔴 Critical gap — SignalBreak workflows support owner field, but many organizations don't populate it. Zero workflows with documented accountability structures is common starting state.

Audit Readiness: ❌ Not evidenced without workflow owner assignment.

Gap Remediation:

  1. Assign workflow owners to all AI systems (individual or team)
  2. Document accountability matrix:
    • Who is responsible for AI output quality?
    • Who approves model changes?
    • Who responds to AI incidents?

Template:

| Workflow | Owner | Responsible for | Accountable to |
|---|---|---|---|
| Customer Support Summarisation | Jane Smith (CX Lead) | Output quality, user feedback | VP Customer Experience |
| Email Classification | IT Team | Uptime, accuracy | CIO |
| Code Review Agent | Engineering Manager | Security, false positive rate | CTO |

Estimated effort: 4-8 hours (small org), 2-4 weeks (enterprise with change management)


GOVERN 1.4: Organizational AI Risk Culture

Description: An organizational culture is established that prioritizes AI risk management throughout the AI lifecycle.

SignalBreak Evidence:

  • Governance platform operational: SignalBreak adoption demonstrates governance investment
  • Signal monitoring: 5,000+ signals tracked shows continuous risk awareness

Example from Evidence Pack:

"Governance platform operational with continuous signal monitoring (5,005 signals tracked). Platform adoption indicates organizational commitment to AI risk visibility."

Audit Readiness: 🟡 Partial — Platform use demonstrates technical culture, but organizational culture (training, communication, incentives) requires supplementary evidence.

Gap Remediation: Document AI risk culture activities:

  • Training: AI governance training for staff (attendance records)
  • Communication: Internal newsletters, town halls on AI risks
  • Incentives: Performance goals tied to AI risk management (e.g., "Maintain Green risk status")

Estimated effort: Ongoing (cultural change is continuous)


GOVERN 1.5: Organizational Policies and Practices

Description: Transparent and standardized practices, including reporting, are in place for determining how risks are managed based on impacts.

SignalBreak Evidence:

  • Workflow business context: 6 workflows with documented business context
  • Evidence pack generation: Standardized reporting on AI risks (monthly/quarterly cadence)

Example from Evidence Pack:

"6 workflows with documented business context including intended use, dependencies, and stakeholders. Evidence Pack generation provides standardized AI risk reporting with transparent methodology."

Audit Readiness: ✅ Fully evidenced — Evidence packs demonstrate standardized risk reporting practices.

Best Practice: Generate evidence packs monthly and present at management reviews to demonstrate GOVERN 1.5 conformance.


MAP: Context and Risk Identification

Requirement: Understand the business context, categorize AI systems, identify and assess AI risks, and understand potential impacts.

Subcategories (9 total, 4 key):

MAP 1.1: Mission, Goals, and Context

Description: Context is established and understood for AI systems, including their purposes, environment, and constraints.

SignalBreak Evidence:

  • Comprehensive workflow mapping: 6-8 AI workflows documented with:
    • Workflow name and description
    • AI capability type (purpose)
    • Provider bindings (technology environment)
    • Criticality level (business constraints)

Example from Evidence Pack:

"6 AI workflows comprehensively mapped with complete metadata: ID, name, AI capability, provider bindings, criticality, owner. Workflow registry provides transparent system inventory meeting MAP 1.1 requirements."

Audit Readiness: ✅ Fully evidenced — Workflow registry satisfies context documentation requirements.

Key audit questions NIST AI RMF assessors ask:

| Question | SignalBreak Answer |
|---|---|
| ❓ "What AI systems do you operate?" | Workflow registry (Evidence Pack Appendix) |
| ❓ "What are their purposes?" | AI capability types (Text Generation, Image Analysis, etc.) |
| ❓ "What's the operating environment?" | Provider bindings (OpenAI, Anthropic, etc.) |
| ❓ "What are the constraints?" | Criticality levels, fallback configurations |

MAP 1.2: AI System Categorization

Description: AI systems are categorized based on characteristics such as scope, complexity, risk level, and impact.

SignalBreak Evidence:

  • Complete categorization system: Workflows categorized by:
    • AI capability (functional categorization)
    • Criticality (risk-based categorization)
    • Provider tier (complexity/maturity categorization)

Example from Evidence Pack:

"Complete workflow categorization system active with 3 dimensions: AI capability (8 types), Criticality (4 levels), Provider tier (Tier 1-4). Categorization enables risk-based resource allocation."

Audit Readiness: ✅ Fully evidenced — Multi-dimensional categorization exceeds NIST AI RMF minimum requirements.

NIST AI RMF Categorization Guidance:

| NIST Dimension | SignalBreak Implementation |
|---|---|
| System scope | Workflow-level tracking (scoped to business function) |
| Complexity | Provider tier (Tier 1 = enterprise, Tier 4 = experimental) |
| Risk level | Criticality (Critical, High, Medium, Low) |
| Impact | Scenario impacts (business continuity, cost, downtime) |

MAP 1.5: Impacts to Individuals and Communities

Description: AI system impacts to individuals, groups, communities, organizations, and society are identified and evaluated.

SignalBreak Evidence:

  • Scenario-based impact modeling: 4+ impact scenarios documented and assessed
  • Business impact quantification: Downtime hours, cost estimates, customer satisfaction impact

Example from Evidence Pack:

"4 impact scenarios documented and assessed with business impact quantification: downtime hours (24-72h), cost ranges (£15k-50k), likelihood (Medium: 2-4 incidents/year). Impact methodology demonstrates MAP 1.5 conformance."

Audit Readiness: ✅ Fully evidenced — Scenario impacts provide concrete evidence of impact evaluation.

NIST AI RMF Impact Categories:

| Impact Type | Example | SignalBreak Evidence |
|---|---|---|
| Individuals | Customer unable to get support due to AI chatbot failure | Customer satisfaction impact in scenario findings |
| Organizations | Business disruption from provider outage | Downtime hours, revenue impact estimates |
| Society | Systemic risks from AI dependency | Concentration risk analysis (>35% single provider) |

Gap for societal impacts: SignalBreak focuses on organizational impacts. For societal impact assessment (e.g., environmental footprint of AI, labor displacement), supplement with:

  • Carbon footprint analysis of AI provider data centers
  • Workforce impact assessment (AI augmentation vs. replacement)

Estimated effort: 1-2 weeks for societal impact add-on


MAP 1.6: Third-Party AI Risks

Description: Risks from third-party entities (e.g., AI vendors, data providers) are documented and managed.

SignalBreak Evidence:

  • External provider tracking: 4+ external providers tracked with risk profiles
  • Continuous monitoring: 5,000+ provider change signals detected
  • Concentration risk: Provider concentration analysis (identifies single points of failure)

Example from Evidence Pack:

"4 external providers tracked with comprehensive risk profiles including Tier classification, SLA, incident history. Concentration risk analysis identifies supply chain vulnerabilities (max 25% OpenAI concentration)."

Audit Readiness: ✅ Fully evidenced — Provider monitoring demonstrates robust third-party risk management.

NIST AI RMF Third-Party Risk Factors:

| Risk Factor | SignalBreak Evidence |
|---|---|
| Vendor reliability | Provider tier (Tier 1 = 99.9%+ SLA, proven track record) |
| Service availability | Historical uptime metrics, incident count |
| Data handling | Provider compliance (GDPR, SOC 2, ISO 27001) |
| Vendor concentration | Concentration analysis (warns if >35% single provider) |
| Change management | Signal detection (API changes, model deprecations, pricing) |
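
The concentration analysis referenced above (flagging when a single provider exceeds roughly 35% of workflow bindings) can be reproduced from any workflow inventory. A minimal sketch, assuming concentration is simply each provider's share of primary workflow bindings; the exact SignalBreak calculation may differ:

```python
from collections import Counter

# Hypothetical workflow -> primary provider bindings (illustrative data only)
BINDINGS = {
    "customer-support-summarisation": "OpenAI",
    "email-classification": "Anthropic",
    "code-review-agent": "OpenAI",
    "internal-chatbot": "Ollama",
}

CONCENTRATION_THRESHOLD = 0.35  # warn above 35% single-provider share

counts = Counter(BINDINGS.values())
for provider, count in counts.most_common():
    share = count / len(BINDINGS)
    status = "concentration risk" if share > CONCENTRATION_THRESHOLD else "ok"
    print(f"{provider}: {share:.0%} of workflows ({status})")
```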

MEASURE: Analysis and Assessment

Requirement: Use quantitative, qualitative, or mixed-method tools and techniques to analyze, assess, benchmark, and monitor AI risks and related impacts.

Subcategories (13 total, 4 key):

MEASURE 1.1: AI System Performance and Impacts

Description: Appropriate AI system metrics are identified and tracked to measure impacts.

SignalBreak Evidence:

  • Risk scoring system: Decision Readiness Score (0-100 scale) operational across workflows
  • Trend tracking: Historical score trajectory shows improvement/degradation

Example from Evidence Pack:

"Criticality scoring system operational with weighted methodology: Critical workflows = 40 points impact, High = 25 points, Medium = 10 points, Low = 5 points. Score trend tracking demonstrates MEASURE 1.1 conformance."

Audit Readiness: ✅ Fully evidenced — Risk scoring provides quantitative metrics for AI impacts.

NIST AI RMF Metric Categories:

| Category | SignalBreak Metric | Frequency |
|---|---|---|
| Performance | Provider availability (%), incident count | Real-time (5-min polls) |
| Impact | Risk score (0-100), RAG status | Monthly (evidence pack) |
| Business | Estimated downtime (hours), cost impact (£) | Per scenario |

MEASURE 2.1: AI System Testing and Evaluation

Description: AI systems are tested and evaluated for performance, accuracy, safety, and security.

SignalBreak Evidence: 🟡 Partial — Provider monitoring active (observability), but formal testing protocols for AI systems incomplete.

Gap: SignalBreak monitors third-party providers (external AI services) but doesn't test your workflows (how you use AI).

What's Missing:

  • Accuracy testing: Does the AI chatbot give correct answers? (your responsibility)
  • Safety testing: Can users trick the AI into harmful outputs? (red teaming)
  • Bias testing: Does the AI treat all user groups fairly? (fairness evaluation)

Audit Readiness: 🟡 Partial — Provider health monitoring covers vendor reliability, but workflow-level testing needed for full MEASURE 2.1.

Gap Remediation: Implement AI testing procedures for each critical workflow:

Template: AI Testing Checklist

| Test Type | Frequency | Owner | Pass Criteria |
|---|---|---|---|
| Accuracy | Monthly | Workflow owner | >95% correct responses (sample test set) |
| Safety (Red Team) | Quarterly | Security team | Zero successful prompt injections |
| Bias | Annually | Compliance team | <5% disparity across demographic groups |
| Performance | Weekly | Operations team | <200ms p95 latency |

Estimated effort:

  • Setup: 1-2 weeks (develop test sets, define pass criteria)
  • Ongoing: 4-8 hours/month per workflow
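
To make the accuracy row in the checklist above concrete, the check can be scripted against a curated test set and run monthly by the workflow owner. A minimal sketch, assuming a hypothetical `call_workflow()` wrapper around your own AI workflow (not a SignalBreak API):

```python
# Monthly accuracy check for one AI workflow (MEASURE 2.1 evidence).
TEST_SET = [
    {"question": "What is your refund window?", "expected_keyword": "30 days"},
    {"question": "Do you ship internationally?", "expected_keyword": "yes"},
]
PASS_THRESHOLD = 0.95  # pass criterion from the checklist above


def call_workflow(question: str) -> str:
    """Placeholder for your real workflow call (e.g., an OpenAI or Anthropic request)."""
    raise NotImplementedError


def run_accuracy_check() -> bool:
    correct = sum(
        1
        for case in TEST_SET
        if case["expected_keyword"].lower() in call_workflow(case["question"]).lower()
    )
    accuracy = correct / len(TEST_SET)
    print(f"Accuracy: {accuracy:.0%} (pass threshold {PASS_THRESHOLD:.0%})")
    return accuracy >= PASS_THRESHOLD
```

Keep the resulting test reports alongside your evidence packs so assessors can see pass/fail records over time.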

MEASURE 2.6: Mechanisms for Continuous Monitoring

Description: Mechanisms exist for ongoing monitoring of AI system performance and impacts.

SignalBreak Evidence:

  • Continuous provider monitoring: 5,000+ signals tracked via 47 sources across 21 providers
  • Automated signal detection: Status changes, API updates, model deprecations, pricing changes
  • Real-time alerting: Critical provider outages detected within 5 minutes

Example from Evidence Pack:

"Provider change monitoring active with 5,005 signals tracked. Continuous monitoring infrastructure includes 5-minute status polling, automated signal classification, and real-time alerting for critical events."

Audit Readiness: ✅ Fully evidenced — Continuous monitoring far exceeds NIST AI RMF baseline (many organizations still use manual quarterly reviews).

NIST AI RMF Monitoring Dimensions:

| Dimension | SignalBreak Implementation | Frequency |
|---|---|---|
| Performance | Provider availability tracking | Every 5 minutes |
| Changes | API updates, model changes, pricing | Real-time (signal detection) |
| Incidents | Provider outages, degradations | Real-time (status page polling) |
| Trends | Historical uptime, incident frequency | Monthly (evidence pack) |

Competitive advantage: Most organizations monitor AI systems reactively (wait for users to report issues). SignalBreak monitors proactively (detect provider issues before they affect users).
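
For teams that want to see how the 5-minute status polling described above works in principle, it can be approximated with a simple loop against providers' public status endpoints. A rough sketch, assuming Statuspage-style JSON endpoints (verify the exact URLs for the providers you use); the real SignalBreak pipeline adds signal classification and alert routing on top:

```python
import time

import requests  # third-party: pip install requests

# Statuspage-style JSON endpoints (assumed URLs; confirm for your providers).
STATUS_ENDPOINTS = {
    "OpenAI": "https://status.openai.com/api/v2/status.json",
    "Anthropic": "https://status.anthropic.com/api/v2/status.json",
}
POLL_INTERVAL_SECONDS = 5 * 60  # 5-minute cadence described above


def poll_once() -> None:
    for provider, url in STATUS_ENDPOINTS.items():
        try:
            indicator = requests.get(url, timeout=10).json()["status"]["indicator"]
        except Exception as exc:  # network error, unexpected payload, etc.
            indicator = f"unreachable ({exc})"
        if indicator != "none":  # "none" means operational in the Statuspage format
            print(f"ALERT: {provider} status indicator = {indicator}")


if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_INTERVAL_SECONDS)
```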


MEASURE 2.9: AI System Security and Resilience

Description: AI system security and resilience are assessed.

SignalBreak Evidence: 🟡 Partial — Provider security tracking (SOC 2, ISO 27001 compliance) for some providers, but not comprehensive.

Gap: Only 2 of 4 providers have complete security assessment data in typical implementations.

What's Missing:

  • Penetration testing of AI endpoints
  • Adversarial testing (can attackers manipulate AI outputs?)
  • Data security (how is training data protected?)

Audit Readiness: 🟡 Partial — Provider compliance tracking covers vendor security, but workflow-level security assessment needed.

Gap Remediation: Expand security assessment to all providers:

Provider Security Checklist:

| Provider | SOC 2 Type 2 | ISO 27001 | Penetration Test | Adversarial Test | Data Residency |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | | ⏳ Needed | US |
| Anthropic | ✅ | ✅ | | ⏳ Needed | US |
| Ollama (self-hosted) | N/A | N/A | ⏳ Needed | ⏳ Needed | On-prem |
| Google Vertex | ✅ | ✅ | | ⏳ Needed | EU |

Estimated effort:

  • Provider compliance: 2-4 hours per provider (review attestations)
  • Adversarial testing: 1-2 weeks per critical workflow (engage security firm)

MANAGE: Risk Response and Mitigation

Requirement: Allocate resources to manage AI risks, prioritize risks, plan responses, and implement risk treatment strategies.

Subcategories (10 total, 4 key):

MANAGE 1.1: Risk Prioritization

Description: AI risks are prioritized based on likelihood, impact, and organizational risk tolerance.

SignalBreak Evidence:

  • Criticality-based prioritization: Critical > High > Medium > Low
  • Impact severity: Critical impacts = 40 points, High = 25 points, Medium = 10 points, Low = 5 points
  • Risk score: Weighted sum of impacts provides overall risk level (0-100)

Example from Evidence Pack:

"Criticality-based prioritization system active with transparent weighting: Critical workflows receive 40 points per impact, High = 25 points, Medium = 10 points, Low = 5 points. Prioritization enables risk-based resource allocation."

Audit Readiness: ✅ Fully evidenced — Risk scoring provides an objective prioritization methodology.

NIST AI RMF Prioritization Factors:

| Factor | SignalBreak Evidence |
|---|---|
| Likelihood | Provider incident frequency, historical availability |
| Impact | Criticality level, scenario impacts (downtime, cost) |
| Risk tolerance | RAG thresholds (Red >70, Amber 30-70, Green <30) |
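
The impact weights and RAG thresholds above can be expressed directly in code. A minimal sketch, assuming impact points are summed per workflow and capped at 100; SignalBreak's exact aggregation may differ:

```python
IMPACT_POINTS = {"Critical": 40, "High": 25, "Medium": 10, "Low": 5}


def risk_score(impact_severities: list[str]) -> int:
    """Weighted sum of impact points, capped at 100."""
    return min(100, sum(IMPACT_POINTS[s] for s in impact_severities))


def rag_status(score: int) -> str:
    """Map a 0-100 score to RAG: Red >70, Amber 30-70, Green <30."""
    if score > 70:
        return "Red"
    return "Amber" if score >= 30 else "Green"


score = risk_score(["Critical", "High"])  # e.g., one Critical and one High impact
print(score, rag_status(score))           # 65 Amber
```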

Key audit questions:

| Question | SignalBreak Answer |
|---|---|
| ❓ "How do you prioritize AI risks?" | Criticality-based scoring with weighted impacts |
| ❓ "Who decides what's critical?" | Workflow owners assign criticality, CIO approves |
| ❓ "How often do you re-prioritize?" | Monthly via evidence pack regeneration |

MANAGE 1.2: Risk Treatment

Description: AI risks are managed based on appropriate treatment strategies (accept, mitigate, transfer, avoid).

SignalBreak Evidence: 🔴 Critical gap — Risk identification and prioritization implemented, but treatment execution missing.

Gap: Zero workflows have documented mitigation strategies in typical implementations.

What's Missing:

  • Mitigation plans: How will you reduce risk? (e.g., add fallback provider)
  • Treatment decisions: Accept, mitigate, transfer, avoid for each risk
  • Implementation tracking: Are mitigations actually deployed?

Audit Readiness: ❌ Not evidenced — Recommendations exist (Evidence Pack p.5), but a formal risk treatment process is needed.

Gap Remediation: Create Risk Treatment Register:

| Risk ID | Risk Description | Likelihood × Impact | Treatment Strategy | Mitigation Action | Owner | Timeline | Status |
|---|---|---|---|---|---|---|---|
| R-001 | OpenAI outage affects customer support chatbot | High × Critical = Critical | Mitigate | Add Anthropic fallback provider | CX Lead | 30 days | In Progress |
| R-002 | Anthropic rate limiting impacts email classifier | Medium × High = Medium | Accept | Monitor usage, upgrade plan if needed | IT Manager | Ongoing | Accepted |
| R-003 | Ollama server failure stops internal chatbot | Low × Medium = Low | Avoid | Migrate to cloud provider (OpenAI) | Engineering Manager | 90 days | Planned |

Estimated effort:

  • Setup: 4-8 hours (create register, document treatment decisions)
  • Ongoing: 2-4 hours/month (update status, track implementation)
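
If you prefer to keep the register above in version control rather than a spreadsheet, it can live as structured data with a small script to surface open treatments. A sketch using the example rows above; the field names are illustrative, not a SignalBreak schema:

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    risk_id: str
    description: str
    treatment: str  # Accept | Mitigate | Transfer | Avoid
    action: str
    owner: str
    status: str  # Planned | In Progress | Accepted | Done


REGISTER = [
    RiskEntry("R-001", "OpenAI outage affects customer support chatbot",
              "Mitigate", "Add Anthropic fallback provider", "CX Lead", "In Progress"),
    RiskEntry("R-002", "Anthropic rate limiting impacts email classifier",
              "Accept", "Monitor usage, upgrade plan if needed", "IT Manager", "Accepted"),
    RiskEntry("R-003", "Ollama server failure stops internal chatbot",
              "Avoid", "Migrate to cloud provider (OpenAI)", "Engineering Manager", "Planned"),
]

# Surface treatments that still need implementation tracking (MANAGE 1.2 evidence).
for entry in REGISTER:
    if entry.status not in ("Done", "Accepted"):
        print(f"{entry.risk_id}: {entry.action} (owner: {entry.owner}, status: {entry.status})")
```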

MANAGE 1.3: Risk Documentation and Reporting

Description: AI risk information is documented and reported to appropriate personnel.

SignalBreak Evidence:

  • Scenario documentation: 4+ risk scenarios formally documented with business impacts
  • Evidence pack reporting: Monthly/quarterly reports to management with risk findings
  • Stakeholder communication: Evidence packs provide transparent risk communication

Example from Evidence Pack:

"4 risk scenarios formally documented with comprehensive impact analysis: scenario description, affected workflows, impact severity (Critical/High/Medium/Low), estimated downtime, cost impact, likelihood. Documentation supports MANAGE 1.3 reporting requirements."

Audit Readiness: ✅ Fully evidenced — Evidence packs provide a comprehensive risk documentation and reporting mechanism.

NIST AI RMF Reporting Audiences:

| Audience | Report Format | Frequency | SignalBreak Evidence |
|---|---|---|---|
| Management | Evidence Pack (executive summary, findings) | Monthly/Quarterly | Risk score, RAG status, top recommendations |
| Technical teams | Evidence Pack (detailed findings, provider signals) | Monthly | Provider health, signal analysis, impact scenarios |
| Board | Evidence Pack (score trajectory, strategic risks) | Quarterly | Trend analysis, concentration risks, maturity assessment |

Best Practice: Present evidence pack at quarterly management review meetings. Document review outcomes (decisions, resource allocation) separately to demonstrate management engagement.


MANAGE 2.2: Transparency and Documentation

Description: AI system lifecycle management is transparent and well-documented.

SignalBreak Evidence:

  • Workflow lifecycle tracking: Creation date, last modified, owner, status
  • Change log: Provider binding changes, model upgrades, configuration updates

Example from Evidence Pack:

"Workflow lifecycle tracking operational with metadata: creation date, last modified timestamp, owner assignment, active status. Change tracking enables transparency per MANAGE 2.2 requirements."

Audit Readiness: ✅ Fully evidenced — Workflow registry provides lifecycle transparency.

NIST AI RMF Lifecycle Stages:

| Stage | SignalBreak Evidence |
|---|---|
| Design | Workflow creation (initial configuration, provider selection) |
| Development | Provider binding changes (model selection, fallback configuration) |
| Deployment | Workflow status (active/inactive), criticality level |
| Monitoring | Continuous provider health tracking, signal detection |
| Decommissioning | Workflow deletion (archived in audit log) |

Scoring Methodology (NIST AI RMF Perspective)

How SignalBreak Calculates NIST AI RMF Alignment

SignalBreak generates NIST AI RMF Alignment Reports that assess conformance with the 4 core functions:

Function Scoring:

| Function | Weighting | Assessment Criteria |
|---|---|---|
| GOVERN | 25% | Governance structures, accountability, culture |
| MAP | 25% | System inventory, categorization, impact assessment |
| MEASURE | 25% | Monitoring infrastructure, testing protocols, metrics |
| MANAGE | 25% | Risk prioritization, treatment, documentation |

Overall Alignment Calculation:

Alignment % = (GOVERN score × 0.25) + (MAP score × 0.25) + (MEASURE score × 0.25) + (MANAGE score × 0.25)
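
Expressed as code, the equal-weighted calculation is straightforward. A small sketch using the weights from the table above, with function scores on a 0-100 scale:

```python
WEIGHTS = {"GOVERN": 0.25, "MAP": 0.25, "MEASURE": 0.25, "MANAGE": 0.25}


def overall_alignment(function_scores: dict[str, float]) -> float:
    """Equal-weighted average of the four function scores (each 0-100)."""
    return sum(function_scores[fn] * weight for fn, weight in WEIGHTS.items())


# Example: strong MAP/MANAGE evidence, weaker GOVERN/MEASURE
print(overall_alignment({"GOVERN": 40, "MAP": 80, "MEASURE": 50, "MANAGE": 70}))  # 60.0
```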

Typical Ranges:

| Alignment | Organization Profile | Characteristics |
|---|---|---|
| 0-30% | Early stage | Workflows registered, minimal governance |
| 30-50% | Developing | Basic monitoring, some accountability structures |
| 50-75% | Mature | Most functions implemented, minor gaps |
| 75-100% | Advanced | Comprehensive governance, formal testing, continuous improvement |

SignalBreak Evidence Contribution:

  • Workflows + Providers: ~40% alignment (MAP 1.1, MAP 1.2, MAP 1.6)
  • Scenarios + Impacts: ~20% alignment (MAP 1.5, MANAGE 1.1, MANAGE 1.3)
  • Provider Monitoring: ~15% alignment (MEASURE 2.6)
  • Governance Structures: 0-25% alignment (GOVERN functions — organization-dependent)

Control Categories and What They Assess

NIST AI RMF organizes requirements into 4 core functions (detailed above). Additionally, the NIST AI RMF Playbook provides subcategories (43 total) that break down each function:

Full Subcategory Breakdown

GOVERN (11 subcategories)

| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| GOVERN-1.1 | Legal/regulatory | 🟡 Partial |
| GOVERN-1.2 | Organizational policies | ✅ Full |
| GOVERN-1.3 | Accountability | ❌ Gap |
| GOVERN-1.4 | Culture | 🟡 Partial |
| GOVERN-1.5 | Transparency | ✅ Full |
| GOVERN-2.1 | Roles and responsibilities | ❌ Gap |
| GOVERN-2.2 | Teams | 🟡 Partial |
| GOVERN-3.1 | Resources | 🟡 Partial |
| GOVERN-3.2 | Capabilities | ✅ Full |
| GOVERN-4.1 | AI risk culture | 🟡 Partial |
| GOVERN-4.2 | Incident reporting | 🟡 Partial |

Strengths: Policy documentation, transparency via evidence packs
Gaps: Accountability structures, formal roles/responsibilities


MAP (9 subcategories)

| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| MAP-1.1 | System context | ✅ Full |
| MAP-1.2 | Categorization | ✅ Full |
| MAP-1.3 | Requirements | 🟡 Partial |
| MAP-1.4 | Risks and benefits | 🟡 Partial |
| MAP-1.5 | Impact assessment | ✅ Full |
| MAP-1.6 | Third-party risks | ✅ Full |
| MAP-2.1 | AI system lifecycle | ✅ Full |
| MAP-2.2 | Data lifecycle | ❌ Gap |
| MAP-3.1 | Interdependencies | ✅ Full |

Strengths: System inventory, categorization, third-party tracking
Gaps: Data lifecycle management (training data provenance, data quality)


MEASURE (13 subcategories)

| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| MEASURE-1.1 | Metrics | ✅ Full |
| MEASURE-1.2 | Data quality | ❌ Gap |
| MEASURE-1.3 | Environmental impacts | ❌ Gap |
| MEASURE-2.1 | Testing/evaluation | 🟡 Partial |
| MEASURE-2.2 | AI system performance | ✅ Full |
| MEASURE-2.3 | Human-AI interaction | ❌ Gap |
| MEASURE-2.4 | Harmful bias | ❌ Gap |
| MEASURE-2.5 | Explainability | ❌ Gap |
| MEASURE-2.6 | Continuous monitoring | ✅ Full |
| MEASURE-2.7 | Incidents | 🟡 Partial |
| MEASURE-2.8 | Data security | 🟡 Partial |
| MEASURE-2.9 | Security/resilience | 🟡 Partial |
| MEASURE-3.1 | AI system output | 🟡 Partial |

Strengths: Metrics, performance tracking, continuous monitoring
Gaps: Data quality, bias testing, explainability (require domain-specific tools)


MANAGE (10 subcategories)

| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| MANAGE-1.1 | Risk prioritization | ✅ Full |
| MANAGE-1.2 | Risk treatment | ❌ Gap |
| MANAGE-1.3 | Risk documentation | ✅ Full |
| MANAGE-2.1 | Risk communication | ✅ Full |
| MANAGE-2.2 | Transparency | ✅ Full |
| MANAGE-2.3 | Records management | ✅ Full |
| MANAGE-3.1 | Third-party risk | ✅ Full |
| MANAGE-3.2 | Third-party data | 🟡 Partial |
| MANAGE-4.1 | Incident response | 🟡 Partial |
| MANAGE-4.2 | Incident analysis | 🟡 Partial |

Strengths: Risk documentation, communication, third-party tracking
Gaps: Formal risk treatment execution, incident response procedures


How to Improve Your NIST AI RMF Alignment

Step 1: Achieve 50%+ Alignment (Baseline)

Current State: Organizations with SignalBreak typically start at 30-50% alignment.

Quick Wins (0-30 days):

  1. Assign workflow owners (GOVERN 1.3)

    • Populate owner field for all workflows
    • Document accountability (who approves model changes?)
    • Impact: +10-15% alignment
  2. Document AI principles (GOVERN 1.2)

    • Create 1-page AI Principles Policy
    • Reference in workflow documentation
    • Impact: +5% alignment
  3. Generate monthly evidence packs (MANAGE 1.3)

    • Establish regular reporting cadence
    • Present at management reviews
    • Impact: +5% alignment

Target: 50-60% alignment after quick wins


Step 2: Close GOVERN Gaps (60-75% Alignment)

Focus: Governance structures and accountability

Actions (30-90 days):

  1. Create AI Governance Committee

    • Executive sponsor (CIO, CTO, or CDO)
    • Cross-functional members (legal, compliance, engineering, product)
    • Quarterly meetings to review evidence packs
    • Impact: +10% alignment (GOVERN 2.1, GOVERN 2.2)
  2. Formalize roles and responsibilities

    • Document who is responsible for:
      • AI system design approvals
      • Model selection and changes
      • Incident response
      • Compliance attestation
    • Impact: +5% alignment (GOVERN 2.1)
  3. Implement AI risk culture training

    • Training for all staff using AI systems
    • Attendance tracking for audit evidence
    • Impact: +5% alignment (GOVERN 4.1)

Target: 65-75% alignment after GOVERN improvements


Step 3: Implement Testing Protocols (75-85% Alignment)

Focus: MEASURE function (currently weakest for most organizations)

Actions (3-6 months):

  1. Develop AI testing procedures (MEASURE 2.1)

    • Accuracy testing (sample test sets, pass criteria)
    • Safety testing (red teaming, prompt injection attempts)
    • Bias testing (fairness evaluation across demographics)
    • Impact: +10% alignment
  2. Conduct security assessments (MEASURE 2.9)

    • Penetration testing of AI endpoints
    • Adversarial testing (can attackers manipulate outputs?)
    • Data security reviews (training data protection)
    • Impact: +5% alignment
  3. Expand provider security tracking (MEASURE 2.8)

    • Verify SOC 2, ISO 27001 for all providers
    • Document data residency (US, EU, etc.)
    • Impact: +3% alignment

Target: 78-85% alignment after MEASURE improvements


Step 4: Formalize Risk Treatment (85-95% Alignment)

Focus: MANAGE 1.2 (critical gap for most organizations)

Actions (6-12 months):

  1. Create Risk Treatment Register (MANAGE 1.2)

    • Document treatment strategy for each risk (accept, mitigate, transfer, avoid)
    • Assign owners and timelines
    • Track implementation status
    • Impact: +10% alignment
  2. Develop incident response procedures (MANAGE 4.1)

    • AI-specific incident playbooks (provider outage, model failure, bias discovery)
    • Tabletop exercises (test response plans)
    • Impact: +5% alignment
  3. Implement data lifecycle management (MAP 2.2)

    • Track training data sources (provenance)
    • Document data quality assessments
    • Impact: +5% alignment (addresses MEASURE 1.2 gap as well)

Target: 88-95% alignment after MANAGE improvements


Evidence Requirements for Conformance Attestation

What Third-Party Assessors Will Request

When you engage an assessor for NIST AI RMF conformance evaluation, expect requests for:

1. GOVERN Evidence

| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| AI Principles Policy | ❌ | Document defining fairness, transparency, safety, accountability |
| AI Governance Committee charter | ❌ | Committee structure, roles, meeting cadence |
| Workflow ownership matrix | 🟡 | Owner field populated in all workflows |
| AI risk culture training records | ❌ | Training attendance, course materials |
| Management review records | ❌ | Minutes from quarterly reviews |

2. MAP Evidence

| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| AI system inventory | ✅ | Workflow registry (Evidence Pack Appendix) |
| System categorization | ✅ | Criticality, AI capability, provider tier |
| Impact assessments | ✅ | Scenario impacts (Evidence Pack findings) |
| Third-party risk profiles | ✅ | Provider concentration analysis |
| Interdependency mapping | ✅ | Workflow provider bindings |

3. MEASURE Evidence

| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| Monitoring infrastructure | ✅ | Provider health logs, signal detection |
| Performance metrics | ✅ | Availability %, incident count |
| Testing procedures | ❌ | Accuracy, safety, bias testing protocols |
| Testing results | ❌ | Test reports, pass/fail records |
| Security assessments | 🟡 | Provider SOC 2, ISO 27001 attestations |

4. MANAGE Evidence

| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| Risk prioritization methodology | ✅ | Risk scoring (Evidence Pack Section 2) |
| Risk treatment register | ❌ | Treatment strategies, implementation status |
| Risk documentation | ✅ | Scenario documentation (Evidence Pack findings) |
| Incident response procedures | ❌ | AI incident playbooks |
| Incident records | 🟡 | Provider outages detected (needs incident response documentation) |

Assessment Process Overview

Step 1: Self-Assessment (Internal)

  • Generate latest SignalBreak evidence pack
  • Review against NIST AI RMF Playbook (43 subcategories)
  • Document gaps and remediation plans
  • Duration: 2-4 weeks
  • Cost: Internal effort only

Step 2: Third-Party Assessment (External)

  • Engage assessor (MITRE Corporation, consulting firm, or Big 4)
  • Provide evidence pack + supplementary documentation
  • Assessor conducts interviews, document review
  • Duration: 4-8 weeks
  • Cost: £15k-40k (varies by organization size, assessor)

Step 3: Conformance Attestation

  • Assessor issues conformance report
  • Report details alignment %, gaps, recommendations
  • Use for federal procurement, investor due diligence
  • Validity: 12 months (re-assess annually)

Timeline and Costs

Typical NIST AI RMF Conformance Journey

| Phase | Duration | Estimated Cost | Key Activities |
|---|---|---|---|
| 0. Baseline | 1 month | £0 (internal effort) | Generate first SignalBreak evidence pack, identify gaps |
| 1. Quick wins | 1-3 months | £2k-5k (consulting support) | Assign owners, create AI Principles, establish reporting cadence |
| 2. GOVERN improvements | 3-6 months | £5k-10k (committee setup, training) | AI Governance Committee, roles documentation, culture training |
| 3. MEASURE improvements | 3-6 months | £10k-20k (testing tools, security assessments) | Testing protocols, security assessments, provider tracking |
| 4. MANAGE improvements | 6-12 months | £5k-10k (internal effort, workshops) | Risk treatment register, incident response procedures |
| 5. Third-party assessment | 1-2 months | £15k-40k (assessor) | External conformance evaluation |
| Total (to attestation) | 12-18 months | £37k-85k | First-time conformance (no existing governance) |

Annual Maintenance: £10k-20k (re-assessment, evidence pack generation, training)


How SignalBreak Reduces Conformance Cost

| Cost Category | Without SignalBreak | With SignalBreak | Savings |
|---|---|---|---|
| Data gathering | 60h @ £100/h = £6k | 4h (review evidence pack) = £400 | £5.6k |
| Monitoring infrastructure | £15k/year (Datadog + custom dashboards) | Included in SignalBreak | £15k/year |
| Impact assessment | 40h @ £100/h = £4k | Automated (scenario analysis) = £400 | £3.6k |
| Evidence documentation | 30h @ £100/h = £3k | Evidence pack generation = £300 | £2.7k |

Total estimated savings: £27k+ in first year


NIST AI RMF vs Other Frameworks

Complementary Use with ISO 42001 and EU AI Act

NIST AI RMF is not mutually exclusive with other frameworks. In fact, it's designed to complement:

| Framework | Relationship to NIST AI RMF | Use Both? |
|---|---|---|
| ISO 42001 | Management system structure (NIST provides risk methodology) | ✅ Yes — ISO for certification, NIST for US federal alignment |
| EU AI Act | Legal compliance (NIST supports conformity assessment) | ✅ Yes — EU AI Act mandates risk management, NIST is a recognised method |
| NIST CSF | Cybersecurity framework (AI RMF extends to AI-specific risks) | ✅ Yes — Integrated risk management across cyber + AI |

SignalBreak supports all three simultaneously — evidence packs include:

  • ISO 42001 clause mapping
  • NIST AI RMF function alignment
  • EU AI Act risk classification

See Governance Overview for multi-framework strategy.


When to Choose NIST AI RMF

Choose NIST AI RMF if:

  • ✅ You're a US federal contractor or supplier
  • ✅ You're subject to Executive Order 14110 requirements
  • ✅ You need flexibility without certification burden
  • ✅ You already use NIST CSF (familiar structure)
  • ✅ You want risk-based approach (not prescriptive)

Don't choose NIST AI RMF if:

  • ❌ You need third-party certification (choose ISO 42001 instead)
  • ❌ You're only in EU with no US nexus (EU AI Act may suffice)
  • ❌ You're a small startup with <5 AI workflows (overhead may not justify)

Hybrid approach: Many organizations use NIST AI RMF for risk methodology, then pursue ISO 42001 certification when they need formal attestation for enterprise sales.


Common Questions

Is NIST AI RMF mandatory for US federal contractors?

Not universally, but increasingly expected.

Current State (2026):

  • Executive Order 14110 mandates NIST AI RMF for federal agencies
  • OMB memoranda reference NIST AI RMF as baseline for AI procurement
  • Individual agencies (DOD, DHS, HHS) are incorporating NIST AI RMF into RFPs

Practical Impact:

  • Defence contractors: Many DOD RFPs now require "NIST AI RMF conformance attestation" as qualification criterion
  • Civilian agencies: NIST AI RMF mentioned in evaluation criteria (competitive advantage)
  • Non-federal: Not mandatory, but demonstrates best practice

Recommendation: If you bid on federal contracts involving AI, assume NIST AI RMF conformance will be required or preferred within 12-24 months.


Can SignalBreak alone get me NIST AI RMF conformance?

No, but it provides ~50-60% of evidence.

What SignalBreak provides:

  • ✅ AI system inventory (MAP 1.1, MAP 1.2)
  • ✅ Third-party risk tracking (MAP 1.6, MANAGE 3.1)
  • ✅ Continuous monitoring (MEASURE 2.6)
  • ✅ Risk prioritization (MANAGE 1.1)
  • ✅ Risk documentation (MANAGE 1.3)

What you still need:

  • ❌ AI Governance Committee (GOVERN 2.1)
  • ❌ Testing protocols (MEASURE 2.1, MEASURE 2.4)
  • ❌ Risk treatment execution (MANAGE 1.2)
  • ❌ Incident response procedures (MANAGE 4.1)

Analogy: SignalBreak is like Jira for NIST AI RMF — it tracks your AI systems and risks, but you still need governance processes around it.


How does NIST AI RMF differ from NIST CSF?

NIST Cybersecurity Framework (CSF) and NIST AI RMF are related but distinct:

| Aspect | NIST CSF | NIST AI RMF |
|---|---|---|
| Focus | Cybersecurity risks | AI-specific risks |
| Functions | IDENTIFY, PROTECT, DETECT, RESPOND, RECOVER (5) | GOVERN, MAP, MEASURE, MANAGE (4) |
| Scope | IT systems, networks, data | AI systems, models, training data |
| Overlap | IDENTIFY ≈ MAP, DETECT ≈ MEASURE | ~40% conceptual overlap |
| Use Case | Security operations, incident response | AI governance, trustworthy AI |

Can I use both? ✅ Yes — and you should if you have AI systems.

Integration Strategy:

  • Use NIST CSF for securing AI infrastructure (API keys, data encryption, access control)
  • Use NIST AI RMF for managing AI-specific risks (bias, explainability, third-party models)

SignalBreak supports both by monitoring provider security (CSF DETECT function) and AI system risks (AI RMF MEASURE function).


What's the difference between conformance attestation and certification?

Key Difference:

| Aspect | Certification (ISO 42001) | Conformance Attestation (NIST AI RMF) |
|---|---|---|
| Issuing Body | Accredited certification body (BSI, SGS, etc.) | Third-party assessor (no accreditation required) |
| Standard | Normative (must meet specific requirements) | Voluntary (principles-based) |
| Audit Rigor | Stage 1 + Stage 2 audits, surveillance audits | Single assessment, no ongoing surveillance |
| Certificate Validity | 3 years (annual surveillance) | Typically 12 months (annual re-assessment) |
| Market Recognition | Globally recognised | Primarily US federal procurement |
| Cost | £18k-43k (first certification) | £15k-40k (first assessment) |

Which one do you need?

  • ISO 42001 certification: If you're selling to global enterprises requiring third-party certification
  • NIST AI RMF attestation: If you're bidding on US federal contracts
  • Both: Large vendors serving both markets pursue dual compliance

SignalBreak evidence packs support both pathways.


How often should I re-assess NIST AI RMF conformance?

Minimum: Annually (to keep conformance attestation current)
Recommended: Quarterly self-assessment, annual third-party assessment
Best practice: Continuous self-assessment via monthly evidence packs

Why quarterly?

  • AI systems change frequently (new workflows, provider changes, model updates)
  • Quarterly aligns with typical management review cadence
  • Federal agencies may request current attestation (within 12 months)

Cost-benefit:

  • Annual third-party: £15k-40k (required for attestation)
  • Quarterly self-assessment: £0 (internal effort, automated via SignalBreak)
  • Additional value: Early detection of conformance gaps, always procurement-ready

Exception: If you have <10 AI workflows with stable providers, annual may suffice. For >20 workflows or high-risk use cases, quarterly is essential.


Next Steps

  1. Generate NIST AI RMF Alignment Report:

    • Navigate to Dashboard → Governance → NIST AI RMF
    • Click "Generate Report"
    • Review current alignment % and function-level gaps
  2. Close critical gaps:

    • Assign workflow owners (GOVERN 1.3)
    • Create Risk Treatment Register (MANAGE 1.2)
    • Document AI Principles Policy (GOVERN 1.2)
  3. Establish governance rhythm:

    • Monthly evidence pack generation
    • Quarterly management reviews
    • Annual third-party assessment
  4. Engage assessor (when ready):

    • Target 60%+ alignment before external assessment
    • Provide evidence pack as demonstration of maturity
    • Expect 12-18 month timeline to first conformance attestation


External Resources


Last updated: 2026-01-26
Based on: NIST AI RMF 1.0 (January 2023), Executive Order 14110 (October 2023)
