NIST AI Risk Management Framework Guide
What is NIST AI RMF?
NIST AI Risk Management Framework (AI RMF) 1.0 is a voluntary framework published by the US National Institute of Standards and Technology to help organizations manage risks associated with artificial intelligence systems.
Official Name: NIST AI Risk Management Framework (AI RMF 1.0)
Published: January 26, 2023
Developed By: National Institute of Standards and Technology (NIST), US Department of Commerce
Framework Type: Voluntary risk management framework (non-certifiable)
Primary Audience:
- US federal agencies and contractors
- Organizations subject to Executive Order 14110 (Safe, Secure, and Trustworthy AI)
- Enterprises seeking structured AI risk management
Why NIST AI RMF Matters for AI Governance
1. Federal Mandate for US Government
Executive Order 14110 (October 30, 2023) requires federal agencies to:
- Use NIST AI RMF as the foundation for AI risk management
- Report AI risk management activities to OMB (Office of Management and Budget)
- Implement AI governance practices aligned with the framework
This matters because:
- Federal contractors working with US government agencies increasingly face NIST AI RMF compliance requirements in procurement
- OMB memoranda reference NIST AI RMF as the baseline for federal AI governance
- Agency-specific guidance (DOD, DHS, HHS, etc.) builds on NIST AI RMF foundations
| Sector | NIST AI RMF Status | Compliance Driver |
|---|---|---|
| US Federal Agencies | Mandatory (via EO 14110) | OMB policy |
| Defence Contractors | Strongly recommended | DOD procurement requirements |
| Critical Infrastructure | Recommended | CISA guidance, voluntary frameworks |
| Private Sector | Voluntary | Best practice, investor due diligence |
2. Designed for Risk-Based Decision Making
Unlike prescriptive standards that mandate specific controls, NIST AI RMF is principles-based:
Risk-Based Philosophy:
- Organizations define their own risk tolerance
- Framework provides structure, not mandates
- Emphasis on context-specific risk assessment
Comparison:
| Framework | Approach | Certification? | Flexibility |
|---|---|---|---|
| ISO 42001 | Prescriptive (Annex SL structure) | ✅ Yes | Lower (must meet specific clauses) |
| NIST AI RMF | Principles-based | ❌ No | Higher (adapt to context) |
| EU AI Act | Legal requirements | ⚠️ Partial | Lower (mandated for high-risk) |
Best for:
- Organizations needing flexibility in implementation
- US federal contractors requiring alignment without certification burden
- Enterprises seeking scalable governance that grows with AI maturity
3. Harmonized with Other NIST Frameworks
NIST AI RMF integrates seamlessly with established NIST frameworks:
| Framework | Relationship to AI RMF |
|---|---|
| NIST Cybersecurity Framework (CSF) | Shares the same function-based structure philosophy (IDENTIFY, PROTECT, DETECT, RESPOND, RECOVER) |
| NIST Privacy Framework | NIST publishes crosswalks linking the AI RMF to the Privacy Framework for AI + privacy integration |
| NIST RMF (Risk Management Framework, SP 800-37) | Complements the federal RMF for information systems with AI-specific risk guidance |
Benefit: If you already use NIST CSF for cybersecurity, AI RMF will feel familiar. Many organizations integrate both frameworks under a unified risk management program.
4. No Certification, But Conformance Attestation
While NIST AI RMF is not certifiable (no accredited certification bodies), organizations can:
Self-Attestation:
- Declare conformance with NIST AI RMF
- Document alignment in governance reports
- Use SignalBreak evidence as proof
Third-Party Assessment:
- Engage assessors (e.g., MITRE Corporation, consulting firms) for independent conformance review
- No formal certification, but assessment report provides external validation
Federal Procurement:
- Some RFPs require NIST AI RMF conformance attestation as qualification criterion
- Assessment reports strengthen bids
How SignalBreak Maps to NIST AI RMF
SignalBreak provides automated evidence generation for all 4 NIST AI RMF functions. Your workflows, scenarios, and provider monitoring demonstrate conformance with:
The 4 Core Functions
| Function | Purpose | SignalBreak Evidence | Status |
|---|---|---|---|
| GOVERN | Establish AI governance culture, structures, and accountability | Workflow owners, governance platform | 🟡 Partial |
| MAP | Understand AI system context, categorize risks, assess impacts | Workflow inventory, provider mapping, scenarios | 🟢 Implemented |
| MEASURE | Analyze, assess, benchmark, and monitor AI risks | Provider monitoring (5,000+ signals), risk scoring | 🟡 Partial |
| MANAGE | Allocate resources, prioritize risks, respond to incidents | Risk prioritization, scenario impacts | 🟢 Implemented |
Overall Alignment: Typically 50-75% for organizations with established SignalBreak evidence; new deployments often start at 30-50% (varies by maturity)
NIST AI RMF Function Details
GOVERN: Governance and Accountability
Requirement: Cultivate and direct a culture and structure for responsible AI development and use, with clear roles, responsibilities, and accountability.
Key subcategories (5):
GOVERN 1.1: Legal and Regulatory Requirements
Description: Organization understands and documents applicable laws, regulations, and policies regarding AI.
SignalBreak Evidence:
- Provider compliance tracking: 2 providers with governance data tracked (OpenAI, Anthropic)
- Regulatory mappings: EU AI Act, ISO 42001, GDPR considerations in workflow documentation
Example from Evidence Pack:
"SignalBreak tracks 13+ AI regulations including EU AI Act, California SB 1047, and Colorado AI Act. Provider profiles document regulatory compliance status (GDPR-compliant processing, US Cloud Act implications)."
Audit Readiness: ✅ Fully evidenced — Provider registry demonstrates awareness of third-party legal obligations.
GOVERN 1.2: Responsible AI Principles
Description: Organization defines and documents AI principles such as fairness, transparency, explainability, safety, and accountability.
SignalBreak Evidence:
- Workflow AI capability types: Text Generation, Image Analysis, Code Generation, etc. (documents intended AI use)
- Criticality classification: Critical, High, Medium, Low (prioritizes safety based on business impact)
Example from Evidence Pack:
"6 workflows assessed for AI characteristics including criticality (4 Critical, 2 High, 0 Medium, 0 Low). Criticality framework demonstrates risk-based prioritization aligned with responsible AI principles."
Audit Readiness: 🟡 Partial — Workflow categorization demonstrates responsible AI awareness, but a formal AI principles document (fairness policy, explainability requirements, etc.) is needed for full conformance.
Gap Remediation: Create AI Principles Policy document referencing:
- Fairness (bias mitigation in model selection)
- Transparency (disclosure of AI use in workflows)
- Safety (criticality-based testing requirements)
- Accountability (workflow ownership per GOVERN 1.3)
Estimated effort: 2-4 days (draft), 1-2 weeks (stakeholder review + approval)
GOVERN 1.3: Accountability and Responsibility
Description: Workforce accountability and responsibility for AI system outcomes is clearly defined and documented.
SignalBreak Evidence:
- Workflow owners: Each workflow should have assigned owner (currently gap in many implementations)
- Accountability structure: Who is responsible for AI system failures?
Gap: 🔴 Critical gap — SignalBreak workflows support an owner field, but many organizations don't populate it; zero workflows with documented accountability structures is a common starting state.
Audit Readiness: ❌ Not evidenced without workflow owner assignment.
Gap Remediation:
- Assign workflow owners to all AI systems (individual or team)
- Document accountability matrix:
- Who is responsible for AI output quality?
- Who approves model changes?
- Who responds to AI incidents?
Template:
| Workflow | Owner | Responsible for | Accountable to |
|---|---|---|---|
| Customer Support Summarisation | Jane Smith (CX Lead) | Output quality, user feedback | VP Customer Experience |
| Email Classification | IT Team | Uptime, accuracy | CIO |
| Code Review Agent | Engineering Manager | Security, false positive rate | CTO |
Estimated effort: 4-8 hours (small org), 2-4 weeks (enterprise with change management)
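Before an assessment, a quick check over a workflow registry export can make this gap visible. The sketch below is illustrative only; the record fields (`name`, `criticality`, `owner`) are assumptions, not SignalBreak's export schema.

```python
# Minimal sketch (not SignalBreak's API): list workflows still missing an
# assigned owner, the most common GOVERN 1.3 gap. Field names are illustrative.
PRIORITY = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

workflows = [
    {"name": "Customer Support Summarisation", "criticality": "Critical", "owner": "Jane Smith"},
    {"name": "Email Classification", "criticality": "High", "owner": None},
    {"name": "Code Review Agent", "criticality": "Critical", "owner": None},
]

# Sort unowned workflows so the highest-criticality gaps surface first.
unowned = sorted(
    (w for w in workflows if not w.get("owner")),
    key=lambda w: PRIORITY[w["criticality"]],
)
for w in unowned:
    print(f"GOVERN 1.3 gap: '{w['name']}' ({w['criticality']}) has no accountable owner")
```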
GOVERN 1.4: Organizational AI Risk Culture
Description: Organizational culture is established that prioritizes AI risk management throughout the AI system lifecycle.
SignalBreak Evidence:
- Governance platform operational: SignalBreak adoption demonstrates governance investment
- Signal monitoring: 5,000+ signals tracked shows continuous risk awareness
Example from Evidence Pack:
"Governance platform operational with continuous signal monitoring (5,005 signals tracked). Platform adoption indicates organizational commitment to AI risk visibility."
Audit Readiness: 🟡 Partial — Platform use demonstrates technical culture, but organizational culture (training, communication, incentives) requires supplementary evidence.
Gap Remediation: Document AI risk culture activities:
- Training: AI governance training for staff (attendance records)
- Communication: Internal newsletters, town halls on AI risks
- Incentives: Performance goals tied to AI risk management (e.g., "Maintain Green risk status")
Estimated effort: Ongoing (cultural change is continuous)
GOVERN 1.5: Organizational Policies and Practices
Description: Transparent and standardized practices, including reporting, are in place for determining how risks are managed based on impacts.
SignalBreak Evidence:
- Workflow business context: 6 workflows with documented business context
- Evidence pack generation: Standardized reporting on AI risks (monthly/quarterly cadence)
Example from Evidence Pack:
"6 workflows with documented business context including intended use, dependencies, and stakeholders. Evidence Pack generation provides standardized AI risk reporting with transparent methodology."
Audit Readiness: ✅ Fully evidenced — Evidence packs demonstrate standardized risk reporting practices.
Best Practice: Generate evidence packs monthly and present at management reviews to demonstrate GOVERN 1.5 conformance.
MAP: Context and Risk Identification
Requirement: Understand the business context, categorize AI systems, identify and assess AI risks, and understand potential impacts.
Key subcategories (4):
MAP 1.1: Mission, Goals, and Context
Description: Context is established and understood for AI systems, including their purposes, environment, and constraints.
SignalBreak Evidence:
- Comprehensive workflow mapping: 6-8 AI workflows documented with:
- Workflow name and description
- AI capability type (purpose)
- Provider bindings (technology environment)
- Criticality level (business constraints)
Example from Evidence Pack:
"6 AI workflows comprehensively mapped with complete metadata: ID, name, AI capability, provider bindings, criticality, owner. Workflow registry provides transparent system inventory meeting MAP 1.1 requirements."
Audit Readiness: ✅ Fully evidenced — Workflow registry satisfies context documentation requirements.
Key audit questions NIST AI RMF assessors ask:
| Question | SignalBreak Answer |
|---|---|
| ❓ "What AI systems do you operate?" | Workflow registry (Evidence Pack Appendix) |
| ❓ "What are their purposes?" | AI capability types (Text Generation, Image Analysis, etc.) |
| ❓ "What's the operating environment?" | Provider bindings (OpenAI, Anthropic, etc.) |
| ❓ "What are the constraints?" | Criticality levels, fallback configurations |
MAP 1.2: AI System Categorization
Description: AI systems are categorized based on characteristics such as scope, complexity, risk level, and impact.
SignalBreak Evidence:
- Complete categorization system: Workflows categorized by:
- AI capability (functional categorization)
- Criticality (risk-based categorization)
- Provider tier (complexity/maturity categorization)
Example from Evidence Pack:
"Complete workflow categorization system active with 3 dimensions: AI capability (8 types), Criticality (4 levels), Provider tier (Tier 1-4). Categorization enables risk-based resource allocation."
Audit Readiness: ✅ Fully evidenced — Multi-dimensional categorization exceeds NIST AI RMF minimum requirements.
NIST AI RMF Categorization Guidance:
| NIST Dimension | SignalBreak Implementation |
|---|---|
| System scope | Workflow-level tracking (scoped to business function) |
| Complexity | Provider tier (Tier 1 = enterprise, Tier 4 = experimental) |
| Risk level | Criticality (Critical, High, Medium, Low) |
| Impact | Scenario impacts (business continuity, cost, downtime) |
MAP 1.5: Impacts to Individuals and Communities
Description: AI system impacts to individuals, groups, communities, organizations, and society are identified and evaluated.
SignalBreak Evidence:
- Scenario-based impact modeling: 4+ impact scenarios documented and assessed
- Business impact quantification: Downtime hours, cost estimates, customer satisfaction impact
Example from Evidence Pack:
"4 impact scenarios documented and assessed with business impact quantification: downtime hours (24-72h), cost ranges (£15k-50k), likelihood (Medium: 2-4 incidents/year). Impact methodology demonstrates MAP 1.5 conformance."
Audit Readiness: ✅ Fully evidenced — Scenario impacts provide concrete evidence of impact evaluation.
NIST AI RMF Impact Categories:
| Impact Type | Example | SignalBreak Evidence |
|---|---|---|
| Individuals | Customer unable to get support due to AI chatbot failure | Customer satisfaction impact in scenario findings |
| Organizations | Business disruption from provider outage | Downtime hours, revenue impact estimates |
| Society | Systemic risks from AI dependency | Concentration risk analysis (>35% single provider) |
Gap for societal impacts: SignalBreak focuses on organizational impacts. For societal impact assessment (e.g., environmental footprint of AI, labor displacement), supplement with:
- Carbon footprint analysis of AI provider data centers
- Workforce impact assessment (AI augmentation vs. replacement)
Estimated effort: 1-2 weeks for societal impact add-on
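For context, the figures quoted in the evidence pack example above translate into an indicative annual exposure band; the arithmetic below is purely illustrative.

```python
# Illustrative arithmetic only: annual exposure band implied by the example
# scenario above (2-4 incidents/year at £15k-50k per incident).
incidents_per_year = (2, 4)
cost_per_incident_gbp = (15_000, 50_000)

low = incidents_per_year[0] * cost_per_incident_gbp[0]    # £30,000
high = incidents_per_year[1] * cost_per_incident_gbp[1]   # £200,000
print(f"Indicative annual exposure: £{low:,} to £{high:,}")
```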
MAP 1.6: Third-Party AI Risks
Description: Risks from third-party entities (e.g., AI vendors, data providers) are documented and managed.
SignalBreak Evidence:
- External provider tracking: 4+ external providers tracked with risk profiles
- Continuous monitoring: 5,000+ provider change signals detected
- Concentration risk: Provider concentration analysis (identifies single points of failure)
Example from Evidence Pack:
"4 external providers tracked with comprehensive risk profiles including Tier classification, SLA, incident history. Concentration risk analysis identifies supply chain vulnerabilities (max 25% OpenAI concentration)."
Audit Readiness: ✅ Fully evidenced — Provider monitoring demonstrates robust third-party risk management.
NIST AI RMF Third-Party Risk Factors:
| Risk Factor | SignalBreak Evidence |
|---|---|
| Vendor reliability | Provider tier (Tier 1 = 99.9%+ SLA, proven track record) |
| Service availability | Historical uptime metrics, incident count |
| Data handling | Provider compliance (GDPR, SOC 2, ISO 27001) |
| Vendor concentration | Concentration analysis (warns if >35% single provider) |
| Change management | Signal detection (API changes, model deprecations, pricing) |
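A concentration check of this kind reduces to counting provider bindings per workflow and flagging any provider above the stated 35% threshold. The bindings in this sketch are illustrative data, not output from SignalBreak.

```python
# Illustrative concentration check: count primary provider bindings per
# workflow and flag any provider above the 35% threshold mentioned above.
from collections import Counter

primary_provider = {
    "Customer Support Summarisation": "OpenAI",
    "Email Classification": "Anthropic",
    "Code Review Agent": "OpenAI",
    "Internal Chatbot": "Ollama",
}

counts = Counter(primary_provider.values())
total = sum(counts.values())
for provider, n in counts.most_common():
    share = n / total
    warning = "  <- concentration risk (>35%)" if share > 0.35 else ""
    print(f"{provider}: {share:.0%} of workflows{warning}")
```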
MEASURE: Analysis and Assessment
Requirement: Use quantitative, qualitative, or mixed-method tools and techniques to analyze, assess, benchmark, and monitor AI risks and related impacts.
Subcategories (13 total, 4 key):
MEASURE 1.1: AI System Performance and Impacts
Description: Appropriate AI system metrics are identified and tracked to measure impacts.
SignalBreak Evidence:
- Risk scoring system: Decision Readiness Score (0-100 scale) operational across workflows
- Trend tracking: Historical score trajectory shows improvement/degradation
Example from Evidence Pack:
"Criticality scoring system operational with weighted methodology: Critical workflows = 40 points impact, High = 25 points, Medium = 10 points, Low = 5 points. Score trend tracking demonstrates MEASURE 1.1 conformance."
Audit Readiness: ✅ Fully evidenced — Risk scoring provides quantitative metrics for AI impacts.
NIST AI RMF Metric Categories:
| Category | SignalBreak Metric | Frequency |
|---|---|---|
| Performance | Provider availability (%), incident count | Real-time (5-min polls) |
| Impact | Risk score (0-100), RAG status | Monthly (evidence pack) |
| Business | Estimated downtime (hours), cost impact (£) | Per scenario |
MEASURE 2.1: AI System Testing and Evaluation
Description: AI systems are tested and evaluated for performance, accuracy, safety, and security.
SignalBreak Evidence: 🟡 Partial — Provider monitoring active (observability), but formal testing protocols for AI systems incomplete.
Gap: SignalBreak monitors third-party providers (external AI services) but doesn't test your workflows (how you use AI).
What's Missing:
- Accuracy testing: Does the AI chatbot give correct answers? (your responsibility)
- Safety testing: Can users trick the AI into harmful outputs? (red teaming)
- Bias testing: Does the AI treat all user groups fairly? (fairness evaluation)
Audit Readiness: 🟡 Partial — Provider health monitoring covers vendor reliability, but workflow-level testing is needed for full MEASURE 2.1 conformance.
Gap Remediation: Implement AI testing procedures for each critical workflow:
Template: AI Testing Checklist
| Test Type | Frequency | Owner | Pass Criteria |
|---|---|---|---|
| Accuracy | Monthly | Workflow owner | >95% correct responses (sample test set) |
| Safety (Red Team) | Quarterly | Security team | Zero successful prompt injections |
| Bias | Annually | Compliance team | <5% disparity across demographic groups |
| Performance | Weekly | Operations team | <200ms p95 latency |
Estimated effort:
- Setup: 1-2 weeks (develop test sets, define pass criteria)
- Ongoing: 4-8 hours/month per workflow
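A minimal accuracy harness for the monthly check in the checklist above could look like the sketch below. The `call_workflow` stub and test cases are placeholders you would replace with your own workflow invocation and representative inputs; this is not a SignalBreak or provider API.

```python
# Accuracy-test sketch for the monthly check above, using the >95% pass
# criterion from the checklist.
def call_workflow(prompt: str) -> str:
    raise NotImplementedError("Replace with your workflow invocation")

TEST_SET = [
    {"input": "What is your refund window?", "expected": "30 days"},
    {"input": "Do you ship internationally?", "expected": "yes"},
]
PASS_THRESHOLD = 0.95  # >95% correct responses, per the checklist

def run_accuracy_test() -> bool:
    # Count test cases whose response contains the expected answer.
    correct = sum(
        1 for case in TEST_SET
        if case["expected"].lower() in call_workflow(case["input"]).lower()
    )
    accuracy = correct / len(TEST_SET)
    print(f"Accuracy: {accuracy:.1%} (threshold {PASS_THRESHOLD:.0%})")
    return accuracy >= PASS_THRESHOLD
```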
MEASURE 2.6: Mechanisms for Continuous Monitoring
Description: Mechanisms exist for ongoing monitoring of AI system performance and impacts.
SignalBreak Evidence:
- Continuous provider monitoring: 5,000+ signals tracked via 47 sources across 21 providers
- Automated signal detection: Status changes, API updates, model deprecations, pricing changes
- Real-time alerting: Critical provider outages detected within 5 minutes
Example from Evidence Pack:
"Provider change monitoring active with 5,005 signals tracked. Continuous monitoring infrastructure includes 5-minute status polling, automated signal classification, and real-time alerting for critical events."
Audit Readiness: ✅ Fully evidenced — Continuous monitoring far exceeds NIST AI RMF baseline (many organizations still use manual quarterly reviews).
NIST AI RMF Monitoring Dimensions:
| Dimension | SignalBreak Implementation | Frequency |
|---|---|---|
| Performance | Provider availability tracking | Every 5 minutes |
| Changes | API updates, model changes, pricing | Real-time (signal detection) |
| Incidents | Provider outages, degradations | Real-time (status page polling) |
| Trends | Historical uptime, incident frequency | Monthly (evidence pack) |
Competitive advantage: Most organizations monitor AI systems reactively (wait for users to report issues). SignalBreak monitors proactively (detect provider issues before they affect users).
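Conceptually, the 5-minute polling described above reduces to a loop like the sketch below. The status-page URLs and JSON shape are assumptions based on common status-page formats; SignalBreak's actual collectors are not shown here.

```python
# Conceptual sketch of 5-minute status polling. The URLs and JSON shape below
# follow a common status-page format but are assumptions; providers differ.
import json
import time
import urllib.request

STATUS_ENDPOINTS = {
    "openai": "https://status.openai.com/api/v2/status.json",
    "anthropic": "https://status.anthropic.com/api/v2/status.json",
}
POLL_INTERVAL_SECONDS = 300  # 5 minutes

def poll_once() -> None:
    for provider, url in STATUS_ENDPOINTS.items():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                payload = json.load(resp)
            indicator = payload.get("status", {}).get("indicator", "unknown")
            if indicator not in ("none", "unknown"):
                print(f"ALERT: {provider} reports degraded status ({indicator})")
        except Exception as exc:  # unreachable status pages are a signal too
            print(f"WARN: could not reach {provider} status page: {exc}")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_INTERVAL_SECONDS)
```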
MEASURE 2.9: AI System Security and Resilience
Description: AI system security and resilience are assessed.
SignalBreak Evidence: 🟡 Partial — Provider security tracking (SOC 2, ISO 27001 compliance) for some providers, but not comprehensive.
Gap: Only 2 of 4 providers have complete security assessment data in typical implementations.
What's Missing:
- Penetration testing of AI endpoints
- Adversarial testing (can attackers manipulate AI outputs?)
- Data security (how is training data protected?)
Audit Readiness: 🟡 Partial — Provider compliance tracking covers vendor security, but workflow-level security assessment is needed.
Gap Remediation: Expand security assessment to all providers:
Provider Security Checklist:
| Provider | SOC 2 Type 2 | ISO 27001 | Penetration Test | Adversarial Test | Data Residency |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ⏳ Needed | US |
| Anthropic | ✅ | ✅ | ✅ | ⏳ Needed | US |
| Ollama (self-hosted) | N/A | N/A | ⏳ Needed | ⏳ Needed | On-prem |
| Google Vertex | ✅ | ✅ | ✅ | ⏳ Needed | EU |
Estimated effort:
- Provider compliance: 2-4 hours per provider (review attestations)
- Adversarial testing: 1-2 weeks per critical workflow (engage security firm)
MANAGE: Risk Response and Mitigation
Requirement: Allocate resources to manage AI risks, prioritize risks, plan responses, and implement risk treatment strategies.
Key subcategories (4):
MANAGE 1.1: Risk Prioritization
Description: AI risks are prioritized based on likelihood, impact, and organizational risk tolerance.
SignalBreak Evidence:
- Criticality-based prioritization: Critical > High > Medium > Low
- Impact severity: Critical impacts = 40 points, High = 25 points, Medium = 10 points, Low = 5 points
- Risk score: Weighted sum of impacts provides overall risk level (0-100)
Example from Evidence Pack:
"Criticality-based prioritization system active with transparent weighting: Critical workflows receive 40 points per impact, High = 25 points, Medium = 10 points, Low = 5 points. Prioritization enables risk-based resource allocation."
Audit Readiness: ✅ Fully evidenced — Risk scoring provides objective prioritization methodology.
NIST AI RMF Prioritization Factors:
| Factor | SignalBreak Evidence |
|---|---|
| Likelihood | Provider incident frequency, historical availability |
| Impact | Criticality level, scenario impacts (downtime, cost) |
| Risk tolerance | RAG thresholds (Red >70, Amber 30-70, Green <30) |
Key audit questions:
| Question | SignalBreak Answer |
|---|---|
| ❓ "How do you prioritize AI risks?" | Criticality-based scoring with weighted impacts |
| ❓ "Who decides what's critical?" | Workflow owners assign criticality, CIO approves |
| ❓ "How often do you re-prioritize?" | Monthly via evidence pack regeneration |
MANAGE 1.2: Risk Treatment
Description: AI risks are managed based on appropriate treatment strategies (accept, mitigate, transfer, avoid).
SignalBreak Evidence: 🔴 Critical gap — Risk identification and prioritization implemented, but treatment execution missing.
Gap: Zero workflows have documented mitigation strategies in typical implementations.
What's Missing:
- Mitigation plans: How will you reduce risk? (e.g., add fallback provider)
- Treatment decisions: Accept, mitigate, transfer, avoid for each risk
- Implementation tracking: Are mitigations actually deployed?
Audit Readiness: ❌ Not evidenced — Recommendations exist (Evidence Pack p.5), but formal risk treatment process needed.
Gap Remediation: Create Risk Treatment Register:
| Risk ID | Risk Description | Likelihood × Impact | Treatment Strategy | Mitigation Action | Owner | Timeline | Status |
|---|---|---|---|---|---|---|---|
| R-001 | OpenAI outage affects customer support chatbot | High × Critical = Critical | Mitigate | Add Anthropic fallback provider | CX Lead | 30 days | In Progress |
| R-002 | Anthropic rate limiting impacts email classifier | Medium × High = Medium | Accept | Monitor usage, upgrade plan if needed | IT Manager | Ongoing | Accepted |
| R-003 | Ollama server failure stops internal chatbot | Low × Medium = Low | Avoid | Migrate to cloud provider (OpenAI) | Engineering Manager | 90 days | Planned |
Estimated effort:
- Setup: 4-8 hours (create register, document treatment decisions)
- Ongoing: 2-4 hours/month (update status, track implementation)
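To keep the register machine-readable alongside the table, a lightweight structure like the sketch below works; the field names are assumptions chosen to mirror the template columns.

```python
# Lightweight register structure mirroring the template columns above.
from dataclasses import dataclass

@dataclass
class RiskTreatment:
    risk_id: str
    description: str
    rating: str      # likelihood x impact, e.g. "High x Critical = Critical"
    strategy: str    # Accept | Mitigate | Transfer | Avoid
    action: str
    owner: str
    timeline: str
    status: str      # Planned | In Progress | Accepted | Done

register = [
    RiskTreatment("R-001", "OpenAI outage affects customer support chatbot",
                  "High x Critical = Critical", "Mitigate",
                  "Add Anthropic fallback provider", "CX Lead", "30 days", "In Progress"),
]
for r in register:
    print(f"{r.risk_id}: {r.strategy} -> {r.action} ({r.status})")
```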
MANAGE 1.3: Risk Documentation and Reporting
Description: AI risk information is documented and reported to appropriate personnel.
SignalBreak Evidence:
- Scenario documentation: 4+ risk scenarios formally documented with business impacts
- Evidence pack reporting: Monthly/quarterly reports to management with risk findings
- Stakeholder communication: Evidence packs provide transparent risk communication
Example from Evidence Pack:
"4 risk scenarios formally documented with comprehensive impact analysis: scenario description, affected workflows, impact severity (Critical/High/Medium/Low), estimated downtime, cost impact, likelihood. Documentation supports MANAGE 1.3 reporting requirements."
Audit Readiness: ✅ Fully evidenced — Evidence packs provide comprehensive risk documentation and reporting mechanism.
NIST AI RMF Reporting Audiences:
| Audience | Report Format | Frequency | SignalBreak Evidence |
|---|---|---|---|
| Management | Evidence Pack (executive summary, findings) | Monthly/Quarterly | Risk score, RAG status, top recommendations |
| Technical teams | Evidence Pack (detailed findings, provider signals) | Monthly | Provider health, signal analysis, impact scenarios |
| Board | Evidence Pack (score trajectory, strategic risks) | Quarterly | Trend analysis, concentration risks, maturity assessment |
Best Practice: Present evidence pack at quarterly management review meetings. Document review outcomes (decisions, resource allocation) separately to demonstrate management engagement.
MANAGE 2.2: Transparency and Documentation
Description: AI system lifecycle management is transparent and well-documented.
SignalBreak Evidence:
- Workflow lifecycle tracking: Creation date, last modified, owner, status
- Change log: Provider binding changes, model upgrades, configuration updates
Example from Evidence Pack:
"Workflow lifecycle tracking operational with metadata: creation date, last modified timestamp, owner assignment, active status. Change tracking enables transparency per MANAGE 2.2 requirements."
Audit Readiness: ✅ Fully evidenced — Workflow registry provides lifecycle transparency.
NIST AI RMF Lifecycle Stages:
| Stage | SignalBreak Evidence |
|---|---|
| Design | Workflow creation (initial configuration, provider selection) |
| Development | Provider binding changes (model selection, fallback configuration) |
| Deployment | Workflow status (active/inactive), criticality level |
| Monitoring | Continuous provider health tracking, signal detection |
| Decommissioning | Workflow deletion (archived in audit log) |
Scoring Methodology (NIST AI RMF Perspective)
How SignalBreak Calculates NIST AI RMF Alignment
SignalBreak generates NIST AI RMF Alignment Reports that assess conformance with the 4 core functions:
Function Scoring:
| Function | Weighting | Assessment Criteria |
|---|---|---|
| GOVERN | 25% | Governance structures, accountability, culture |
| MAP | 25% | System inventory, categorization, impact assessment |
| MEASURE | 25% | Monitoring infrastructure, testing protocols, metrics |
| MANAGE | 25% | Risk prioritization, treatment, documentation |
Overall Alignment Calculation:
Alignment % = (GOVERN score × 0.25) + (MAP score × 0.25) + (MEASURE score × 0.25) + (MANAGE score × 0.25)
Typical Ranges:
| Alignment | Organization Profile | Characteristics |
|---|---|---|
| 0-30% | Early stage | Workflows registered, minimal governance |
| 30-50% | Developing | Basic monitoring, some accountability structures |
| 50-75% | Mature | Most functions implemented, minor gaps |
| 75-100% | Advanced | Comprehensive governance, formal testing, continuous improvement |
SignalBreak Evidence Contribution:
- Workflows + Providers: ~40% alignment (MAP 1.1, MAP 1.2, MAP 1.6)
- Scenarios + Impacts: ~20% alignment (MAP 1.5, MANAGE 1.1, MANAGE 1.3)
- Provider Monitoring: ~15% alignment (MEASURE 2.6)
- Governance Structures: 0-25% alignment (GOVERN functions — organization-dependent)
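A minimal sketch of the equal-weight calculation above, with example function scores (the inputs are illustrative, not SignalBreak output):

```python
# Equal-weight alignment calculation from the formula above.
FUNCTION_WEIGHTS = {"GOVERN": 0.25, "MAP": 0.25, "MEASURE": 0.25, "MANAGE": 0.25}

def overall_alignment(function_scores: dict[str, float]) -> float:
    """Weighted sum of the four function scores (each 0-100)."""
    return sum(function_scores[fn] * weight for fn, weight in FUNCTION_WEIGHTS.items())

example = {"GOVERN": 40, "MAP": 85, "MEASURE": 55, "MANAGE": 60}
print(f"Overall alignment: {overall_alignment(example):.0f}%")  # 60%
```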
Control Categories and What They Assess
NIST AI RMF organizes requirements into 4 core functions (detailed above). The NIST AI RMF Playbook breaks each function down further into subcategories; the coverage tables below summarize the 43 subcategories most relevant to SignalBreak evidence:
Full Subcategory Breakdown
GOVERN (11 subcategories)
| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| GOVERN-1.1 | Legal/regulatory | ✅ Full |
| GOVERN-1.2 | Responsible AI principles and policies | 🟡 Partial |
| GOVERN-1.3 | Accountability | ❌ Gap |
| GOVERN-1.4 | Culture | 🟡 Partial |
| GOVERN-1.5 | Transparency | ✅ Full |
| GOVERN-2.1 | Roles and responsibilities | ❌ Gap |
| GOVERN-2.2 | Teams | 🟡 Partial |
| GOVERN-3.1 | Resources | 🟡 Partial |
| GOVERN-3.2 | Capabilities | ✅ Full |
| GOVERN-4.1 | AI risk culture | 🟡 Partial |
| GOVERN-4.2 | Incident reporting | 🟡 Partial |
Strengths: Policy documentation, transparency via evidence packs
Gaps: Accountability structures, formal roles/responsibilities
MAP (9 subcategories)
| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| MAP-1.1 | System context | ✅ Full |
| MAP-1.2 | Categorization | ✅ Full |
| MAP-1.3 | Requirements | 🟡 Partial |
| MAP-1.4 | Risks and benefits | 🟡 Partial |
| MAP-1.5 | Impact assessment | ✅ Full |
| MAP-1.6 | Third-party risks | ✅ Full |
| MAP-2.1 | AI system lifecycle | ✅ Full |
| MAP-2.2 | Data lifecycle | ❌ Gap |
| MAP-3.1 | Interdependencies | ✅ Full |
Strengths: System inventory, categorization, third-party tracking
Gaps: Data lifecycle management (training data provenance, data quality)
MEASURE (13 subcategories)
| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| MEASURE-1.1 | Metrics | ✅ Full |
| MEASURE-1.2 | Data quality | ❌ Gap |
| MEASURE-1.3 | Environmental impacts | ❌ Gap |
| MEASURE-2.1 | Testing/evaluation | 🟡 Partial |
| MEASURE-2.2 | AI system performance | ✅ Full |
| MEASURE-2.3 | Human-AI interaction | ❌ Gap |
| MEASURE-2.4 | Harmful bias | ❌ Gap |
| MEASURE-2.5 | Explainability | ❌ Gap |
| MEASURE-2.6 | Continuous monitoring | ✅ Full |
| MEASURE-2.7 | Incidents | 🟡 Partial |
| MEASURE-2.8 | Data security | 🟡 Partial |
| MEASURE-2.9 | Security/resilience | 🟡 Partial |
| MEASURE-3.1 | AI system output | 🟡 Partial |
Strengths: Metrics, performance tracking, continuous monitoring
Gaps: Data quality, bias testing, explainability (require domain-specific tools)
MANAGE (10 subcategories)
| Subcategory | Focus | SignalBreak Coverage |
|---|---|---|
| MANAGE-1.1 | Risk prioritization | ✅ Full |
| MANAGE-1.2 | Risk treatment | ❌ Gap |
| MANAGE-1.3 | Risk documentation | ✅ Full |
| MANAGE-2.1 | Risk communication | ✅ Full |
| MANAGE-2.2 | Transparency | ✅ Full |
| MANAGE-2.3 | Records management | ✅ Full |
| MANAGE-3.1 | Third-party risk | ✅ Full |
| MANAGE-3.2 | Third-party data | 🟡 Partial |
| MANAGE-4.1 | Incident response | 🟡 Partial |
| MANAGE-4.2 | Incident analysis | 🟡 Partial |
Strengths: Risk documentation, communication, third-party tracking
Gaps: Formal risk treatment execution, incident response procedures
How to Improve Your NIST AI RMF Alignment
Step 1: Achieve 50%+ Alignment (Baseline)
Current State: Organizations with SignalBreak typically start at 30-50% alignment.
Quick Wins (0-30 days):
Assign workflow owners (GOVERN 1.3)
- Populate owner field for all workflows
- Document accountability (who approves model changes?)
- Impact: +10-15% alignment
Document AI principles (GOVERN 1.2)
- Create 1-page AI Principles Policy
- Reference in workflow documentation
- Impact: +5% alignment
Generate monthly evidence packs (MANAGE 1.3)
- Establish regular reporting cadence
- Present at management reviews
- Impact: +5% alignment
Target: 50-60% alignment after quick wins
Step 2: Close GOVERN Gaps (60-75% Alignment)
Focus: Governance structures and accountability
Actions (30-90 days):
Create AI Governance Committee
- Executive sponsor (CIO, CTO, or CDO)
- Cross-functional members (legal, compliance, engineering, product)
- Quarterly meetings to review evidence packs
- Impact: +10% alignment (GOVERN 2.1, GOVERN 2.2)
Formalize roles and responsibilities
- Document who is responsible for:
- AI system design approvals
- Model selection and changes
- Incident response
- Compliance attestation
- Impact: +5% alignment (GOVERN 2.1)
Implement AI risk culture training
- Training for all staff using AI systems
- Attendance tracking for audit evidence
- Impact: +5% alignment (GOVERN 4.1)
Target: 65-75% alignment after GOVERN improvements
Step 3: Implement Testing Protocols (75-85% Alignment)
Focus: MEASURE function (currently weakest for most organizations)
Actions (3-6 months):
Develop AI testing procedures (MEASURE 2.1)
- Accuracy testing (sample test sets, pass criteria)
- Safety testing (red teaming, prompt injection attempts)
- Bias testing (fairness evaluation across demographics)
- Impact: +10% alignment
Conduct security assessments (MEASURE 2.9)
- Penetration testing of AI endpoints
- Adversarial testing (can attackers manipulate outputs?)
- Data security reviews (training data protection)
- Impact: +5% alignment
Expand provider security tracking (MEASURE 2.8)
- Verify SOC 2, ISO 27001 for all providers
- Document data residency (US, EU, etc.)
- Impact: +3% alignment
Target: 78-85% alignment after MEASURE improvements
Step 4: Formalize Risk Treatment (85-95% Alignment)
Focus: MANAGE 1.2 (critical gap for most organizations)
Actions (6-12 months):
Create Risk Treatment Register (MANAGE 1.2)
- Document treatment strategy for each risk (accept, mitigate, transfer, avoid)
- Assign owners and timelines
- Track implementation status
- Impact: +10% alignment
Develop incident response procedures (MANAGE 4.1)
- AI-specific incident playbooks (provider outage, model failure, bias discovery)
- Tabletop exercises (test response plans)
- Impact: +5% alignment
Implement data lifecycle management (MAP 2.2)
- Track training data sources (provenance)
- Document data quality assessments
- Impact: +5% alignment (addresses MEASURE 1.2 gap as well)
Target: 88-95% alignment after MANAGE improvements
Evidence Requirements for Conformance Attestation
What Third-Party Assessors Will Request
When you engage an assessor for NIST AI RMF conformance evaluation, expect requests for:
1. GOVERN Evidence
| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| AI Principles Policy | ❌ | Document defining fairness, transparency, safety, accountability |
| AI Governance Committee charter | ❌ | Committee structure, roles, meeting cadence |
| Workflow ownership matrix | 🟡 | Owner field populated in all workflows |
| AI risk culture training records | ❌ | Training attendance, course materials |
| Management review records | ❌ | Minutes from quarterly reviews |
2. MAP Evidence
| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| AI system inventory | ✅ | Workflow registry (Evidence Pack Appendix) |
| System categorization | ✅ | Criticality, AI capability, provider tier |
| Impact assessments | ✅ | Scenario impacts (Evidence Pack findings) |
| Third-party risk profiles | ✅ | Provider concentration analysis |
| Interdependency mapping | ✅ | Workflow provider bindings |
3. MEASURE Evidence
| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| Monitoring infrastructure | ✅ | Provider health logs, signal detection |
| Performance metrics | ✅ | Availability %, incident count |
| Testing procedures | ❌ | Accuracy, safety, bias testing protocols |
| Testing results | ❌ | Test reports, pass/fail records |
| Security assessments | 🟡 | Provider SOC 2, ISO 27001 attestations |
4. MANAGE Evidence
| Evidence Type | SignalBreak Provides? | What You Need |
|---|---|---|
| Risk prioritization methodology | ✅ | Risk scoring (Evidence Pack Section 2) |
| Risk treatment register | ❌ | Treatment strategies, implementation status |
| Risk documentation | ✅ | Scenario documentation (Evidence Pack findings) |
| Incident response procedures | ❌ | AI incident playbooks |
| Incident records | 🟡 | Provider outages detected (needs incident response documentation) |
Assessment Process Overview
Step 1: Self-Assessment (Internal)
- Generate latest SignalBreak evidence pack
- Review against the NIST AI RMF Playbook subcategories (see the coverage tables above)
- Document gaps and remediation plans
- Duration: 2-4 weeks
- Cost: Internal effort only
Step 2: Third-Party Assessment (External)
- Engage assessor (MITRE Corporation, consulting firm, or Big 4)
- Provide evidence pack + supplementary documentation
- Assessor conducts interviews, document review
- Duration: 4-8 weeks
- Cost: £15k-40k (varies by organization size, assessor)
Step 3: Conformance Attestation
- Assessor issues conformance report
- Report details alignment %, gaps, recommendations
- Use for federal procurement, investor due diligence
- Validity: 12 months (re-assess annually)
Timeline and Costs
Typical NIST AI RMF Conformance Journey
| Phase | Duration | Estimated Cost | Key Activities |
|---|---|---|---|
| 0. Baseline | 1 month | £0 (internal effort) | Generate first SignalBreak evidence pack, identify gaps |
| 1. Quick wins | 1-3 months | £2k-5k (consulting support) | Assign owners, create AI Principles, establish reporting cadence |
| 2. GOVERN improvements | 3-6 months | £5k-10k (committee setup, training) | AI Governance Committee, roles documentation, culture training |
| 3. MEASURE improvements | 3-6 months | £10k-20k (testing tools, security assessments) | Testing protocols, security assessments, provider tracking |
| 4. MANAGE improvements | 6-12 months | £5k-10k (internal effort, workshops) | Risk treatment register, incident response procedures |
| 5. Third-party assessment | 1-2 months | £15k-40k (assessor) | External conformance evaluation |
| Total (to attestation) | 12-18 months | £37k-85k | First-time conformance (no existing governance) |
Annual Maintenance: £10k-20k (re-assessment, evidence pack generation, training)
How SignalBreak Reduces Conformance Cost
| Cost Category | Without SignalBreak | With SignalBreak | Savings |
|---|---|---|---|
| Data gathering | 60h @ £100/h = £6k | 4h (review evidence pack) = £400 | £5.6k |
| Monitoring infrastructure | £15k/year (Datadog + custom dashboards) | Included in SignalBreak | £15k/year |
| Impact assessment | 40h @ £100/h = £4k | Automated (scenario analysis) = £400 | £3.6k |
| Evidence documentation | 30h @ £100/h = £3k | Evidence pack generation = £300 | £2.7k |
Total estimated savings: £27k+ in first year
NIST AI RMF vs Other Frameworks
Complementary Use with ISO 42001 and EU AI Act
NIST AI RMF is not mutually exclusive with other frameworks; it is designed to complement them:
| Framework | Relationship to NIST AI RMF | Use Both? |
|---|---|---|
| ISO 42001 | Management system structure (NIST provides risk methodology) | ✅ Yes — ISO for certification, NIST for US federal alignment |
| EU AI Act | Legal compliance (the AI RMF's risk methodology can support the required risk management system) | ✅ Yes — EU AI Act mandates risk management; NIST AI RMF is a widely used method for meeting it |
| NIST CSF | Cybersecurity framework (AI RMF extends to AI-specific risks) | ✅ Yes — Integrated risk management across cyber + AI |
SignalBreak supports all three simultaneously — evidence packs include:
- ISO 42001 clause mapping
- NIST AI RMF function alignment
- EU AI Act risk classification
See Governance Overview for multi-framework strategy.
When to Choose NIST AI RMF
Choose NIST AI RMF if:
- ✅ You're a US federal contractor or supplier
- ✅ You're subject to Executive Order 14110 requirements
- ✅ You need flexibility without certification burden
- ✅ You already use NIST CSF (familiar structure)
- ✅ You want risk-based approach (not prescriptive)
Don't choose NIST AI RMF if:
- ❌ You need third-party certification (choose ISO 42001 instead)
- ❌ You're only in EU with no US nexus (EU AI Act may suffice)
- ❌ You're a small startup with <5 AI workflows (overhead may not justify)
Hybrid approach: Many organizations use NIST AI RMF for risk methodology, then pursue ISO 42001 certification when they need formal attestation for enterprise sales.
Common Questions
Is NIST AI RMF mandatory for US federal contractors?
Not universally, but increasingly expected.
Current State (2026):
- Executive Order 14110 mandates NIST AI RMF for federal agencies
- OMB memoranda reference NIST AI RMF as baseline for AI procurement
- Individual agencies (DOD, DHS, HHS) are incorporating NIST AI RMF into RFPs
Practical Impact:
- Defence contractors: Many DOD RFPs now require "NIST AI RMF conformance attestation" as qualification criterion
- Civilian agencies: NIST AI RMF mentioned in evaluation criteria (competitive advantage)
- Non-federal: Not mandatory, but demonstrates best practice
Recommendation: If you bid on federal contracts involving AI, assume NIST AI RMF conformance will be required or preferred within 12-24 months.
Can SignalBreak alone get me NIST AI RMF conformance?
No, but it provides ~50-60% of evidence.
What SignalBreak provides:
- ✅ AI system inventory (MAP 1.1, MAP 1.2)
- ✅ Third-party risk tracking (MAP 1.6, MANAGE 3.1)
- ✅ Continuous monitoring (MEASURE 2.6)
- ✅ Risk prioritization (MANAGE 1.1)
- ✅ Risk documentation (MANAGE 1.3)
What you still need:
- ❌ AI Governance Committee (GOVERN 2.1)
- ❌ Testing protocols (MEASURE 2.1, MEASURE 2.4)
- ❌ Risk treatment execution (MANAGE 1.2)
- ❌ Incident response procedures (MANAGE 4.1)
Analogy: SignalBreak is like Jira for NIST AI RMF — it tracks your AI systems and risks, but you still need governance processes around it.
How does NIST AI RMF differ from NIST CSF?
NIST Cybersecurity Framework (CSF) and NIST AI RMF are related but distinct:
| Aspect | NIST CSF | NIST AI RMF |
|---|---|---|
| Focus | Cybersecurity risks | AI-specific risks |
| Functions | IDENTIFY, PROTECT, DETECT, RESPOND, RECOVER (5) | GOVERN, MAP, MEASURE, MANAGE (4) |
| Scope | IT systems, networks, data | AI systems, models, training data |
| Overlap | IDENTIFY ≈ MAP, DETECT ≈ MEASURE | ~40% conceptual overlap |
| Use Case | Security operations, incident response | AI governance, trustworthy AI |
Can I use both? ✅ Yes — and you should if you have AI systems.
Integration Strategy:
- Use NIST CSF for securing AI infrastructure (API keys, data encryption, access control)
- Use NIST AI RMF for managing AI-specific risks (bias, explainability, third-party models)
SignalBreak supports both by monitoring provider security (CSF DETECT function) and AI system risks (AI RMF MEASURE function).
What's the difference between conformance attestation and certification?
Key Difference:
| Aspect | Certification (ISO 42001) | Conformance Attestation (NIST AI RMF) |
|---|---|---|
| Issuing Body | Accredited certification body (BSI, SGS, etc.) | Third-party assessor (no accreditation required) |
| Standard | Normative (must meet specific requirements) | Voluntary (principles-based) |
| Audit Rigor | Stage 1 + Stage 2 audits, surveillance audits | Single assessment, no ongoing surveillance |
| Certificate Validity | 3 years (annual surveillance) | Typically 12 months (annual re-assessment) |
| Market Recognition | Globally recognised | Primarily US federal procurement |
| Cost | £18k-43k (first certification) | £15k-40k (first assessment) |
Which one do you need?
- ISO 42001 certification: If you're selling to global enterprises requiring third-party certification
- NIST AI RMF attestation: If you're bidding on US federal contracts
- Both: Large vendors serving both markets pursue dual compliance
SignalBreak evidence packs support both pathways.
How often should I re-assess NIST AI RMF conformance?
Minimum: Annually (to keep conformance attestation current)
Recommended: Quarterly self-assessment, annual third-party assessment
Best practice: Continuous self-assessment via monthly evidence packs
Why quarterly?
- AI systems change frequently (new workflows, provider changes, model updates)
- Quarterly aligns with typical management review cadence
- Federal agencies may request current attestation (within 12 months)
Cost-benefit:
- Annual third-party: £15k-40k (required for attestation)
- Quarterly self-assessment: £0 (internal effort, automated via SignalBreak)
- Additional value: Early detection of conformance gaps, always procurement-ready
Exception: If you have <10 AI workflows with stable providers, annual may suffice. For >20 workflows or high-risk use cases, quarterly is essential.
Next Steps
Generate NIST AI RMF Alignment Report:
- Navigate to Dashboard → Governance → NIST AI RMF
- Click "Generate Report"
- Review current alignment % and function-level gaps
Close critical gaps:
- Assign workflow owners (GOVERN 1.3)
- Create Risk Treatment Register (MANAGE 1.2)
- Document AI Principles Policy (GOVERN 1.2)
Establish governance rhythm:
- Monthly evidence pack generation
- Quarterly management reviews
- Annual third-party assessment
Engage assessor (when ready):
- Target 60%+ alignment before external assessment
- Provide evidence pack as demonstration of maturity
- Expect 12-18 month timeline to first conformance attestation
Related Documentation
- Governance Overview — Comparison of ISO 42001, NIST AI RMF, EU AI Act
- ISO 42001 Guide — Certifiable AI management system
- EU AI Act Guide — Legal compliance requirements
- Evidence Packs Guide — How to generate and use evidence packs
- Risk Scoring Methodology — Understanding your score
External Resources
- NIST AI RMF 1.0: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (PDF, free)
- NIST AI RMF Playbook: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook (subcategory-level implementation guidance)
- Executive Order 14110: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
- OMB AI Guidance: https://www.whitehouse.gov/omb/briefing-room/ (search "artificial intelligence")
- MITRE ATLAS (AI Threats): https://atlas.mitre.org/ (complements MEASURE 2.9 security assessment)
Last updated: 2026-01-26
Based on: NIST AI RMF 1.0 (January 2023), Executive Order 14110 (October 2023)