Report Types

SignalBreak generates 10 governance reports for different stakeholders across 4 maturity phases.


Overview

SignalBreak's Governance Intelligence reports transform your AI infrastructure data into actionable insights for different audiences. Each report type addresses specific governance needs, from daily operations to boardroom presentations to regulatory compliance.

Report Phases

Reports are organized by maturity and complexity:

| Phase | Focus | Audience | Report Count |
|---|---|---|---|
| Phase 1 | Core Operations & Executive | DevOps, C-suite | 2 reports |
| Phase 2 | Compliance & Audit | GRC, Legal, Auditors | 3 reports |
| Phase 3 | Advanced Compliance & Risk | Enterprise, Federal, Procurement | 3 reports |
| Phase 4 | Deep-Dive Analysis | Technical, Strategic | 2 reports |

Phase 1 Reports (Available Now)

1. Operational Brief 📊

Target audience: DevOps, Platform Engineers, AI/ML Teams

Purpose: Weekly digest of operational AI governance status for technical teams managing day-to-day AI infrastructure.

What it contains:

  • Active signals (last 7 days): Provider changes affecting your workflows
  • At-risk workflows: Workflows without fallbacks or single-provider dependencies
  • Upcoming changes: Model deprecations, pricing changes with deadlines
  • Active scenarios: Running business continuity tests
  • Quick wins: Low-effort, high-impact improvements you can make this week

When to use:

  • Monday morning team stand-ups
  • Weekly sprint planning
  • Incident response prioritization
  • Quarterly OKR planning

Report length: 2-3 pages

Example use case:

Use case: Platform team needs to prioritize this week's tasks

Report shows:
- 3 critical workflows with no fallback (high risk)
- OpenAI deprecating gpt-3.5-turbo-0301 in 30 days (12 workflows affected)
- Quick win: Add fallback to "Customer Support Chatbot" (15 mins)

Action: Team spends Monday morning adding fallbacks to 3 critical workflows
Result: Reduced single-point-of-failure risk from 15 to 12 workflows

Key sections:

  1. Executive Summary: 3-sentence overview of operational status
  2. Critical Alerts: Signals requiring immediate action (< 7 days)
  3. Workflow Health: At-risk workflows table with remediation steps
  4. Provider Changes: Upcoming deprecations/pricing/policy changes
  5. Recommendations: Prioritized quick wins ranked by impact/effort

2. Board Summary 👔

Target audience: C-suite (CEO, CTO, CFO), Board Members, Executives

Purpose: High-level governance overview for leadership to understand AI risk posture without technical details.

What it contains:

  • Governance maturity score (0-100): Overall AI governance health (Green/Amber/Red)
  • Provider concentration risk: Dependency on single providers (charts)
  • Top exposures: Highest-risk workflows ranked by business impact
  • 90-day trend: Governance score trajectory (improving/stable/declining)
  • Key performance indicators (KPIs): Critical metrics for decision-making

When to use:

  • Monthly board meetings
  • Quarterly business reviews
  • Investor due diligence
  • Executive risk committee updates
  • Budget/resource allocation decisions

Report length: 2 pages (designed for executive brevity)

Example use case:

Use case: CFO presenting AI risk to board

Report shows:
- Overall Score: 72/100 (Amber) - up from 65 last quarter
- Concentration Risk: 85% of workflows use OpenAI (critical dependency)
- Top Exposure: Fraud detection system (£50k/day revenue at risk)
- Trend: Improving (+7 points in 90 days)

Action: Board approves £50k budget to diversify providers
Result: Reduced OpenAI concentration from 85% to 60% in Q2

Key sections:

  1. RAG Status Dashboard: Red/Amber/Green governance indicators
  2. Governance Maturity: Score with maturity level (L1-L5)
  3. Concentration Risk Chart: Provider dependency breakdown
  4. Top 5 Exposures: Critical workflows with business impact estimates
  5. 90-Day Trend: Line chart showing governance score trajectory
  6. Executive Actions: 3-5 prioritized recommendations

RAG Status Table (unique to Board Summary):

| Category | Status | Score | Action Required |
|---|---|---|---|
| Provider Diversity | 🔴 Red | 35/100 | High concentration (85% OpenAI) |
| Fallback Coverage | 🟡 Amber | 65/100 | 45% of critical workflows have fallbacks |
| Signal Response | 🟢 Green | 90/100 | All signals reviewed within 48h |
| Governance Maturity | 🟡 Amber | 72/100 | L3 (Defined) - target L4 by Q3 |

Phase 2 Reports (Available)

3. ISO 42001 Alignment 📋

Target audience: GRC (Governance, Risk, Compliance), Compliance Officers, Internal Audit

Purpose: Gap analysis against ISO 42001:2023 AI Management System standard to demonstrate compliance readiness.

What it contains:

  • Control mapping: All 39 ISO 42001 clauses (4.1 - A.18)
  • Compliance status: Red/Amber/Green per control
  • Evidence trail: Specific data points proving compliance
  • Gap analysis: Missing controls with remediation guidance
  • Overall compliance percentage: (Green controls / Total controls) × 100

When to use:

  • ISO 42001 certification preparation
  • External audit readiness
  • Client/partner RFPs requiring ISO compliance
  • Quarterly compliance reviews
  • Management system documentation

Report length: 8-12 pages

Example use case:

Use case: Compliance preparing for ISO 42001 certification audit

Report shows:
- 28/39 controls Green (72% compliant)
- 8/39 controls Amber (needs improvement)
- 3/39 controls Red (critical gaps)

Red control: "A.12 - Data Protection Impact Assessment (DPIA)"
Gap: "No DPIA documented for high-risk AI systems"
Evidence needed: "DPIA for fraud detection + customer chatbot"

Action: Compliance completes 2 DPIAs over 3 weeks
Result: Red → Green, overall compliance 80% → 85%

Key sections:

  1. Executive Summary: Overall compliance score and readiness level
  2. Control Mapping Table: All 39 clauses with Red/Amber/Green status
  3. Evidence Matrix: Data sources proving each control
  4. Gap Analysis: Detailed breakdown of Amber/Red controls
  5. Remediation Roadmap: Prioritized action plan to reach 100%
  6. Certification Readiness: Estimated timeline to audit-ready state

ISO 42001 Clauses Covered:

  • Context (4.1-4.4): Organization context, stakeholder needs, scope, AI management system
  • Leadership (5.1-5.3): Top management commitment, policy, roles/responsibilities
  • Planning (6.1-6.4): Risk assessment, objectives, change management
  • Support (7.1-7.6): Resources, competence, awareness, communication, documented information
  • Operation (8.1-8.3): Operational planning, AI system lifecycle, third-party management
  • Performance (9.1-9.3): Monitoring, internal audit, management review
  • Improvement (10.1-10.2): Nonconformity, continual improvement
  • Controls (A.1-A.18): 18 specific AI controls (data quality, bias, explainability, etc.)

4. EU AI Act Readiness 🇪🇺

Target audience: Legal, Compliance, Data Protection Officers (DPOs), Product Teams

Purpose: Regulatory compliance assessment against EU AI Act (Regulation 2024/1689) to identify high-risk AI systems and demonstrate readiness.

What it contains:

  • AI system inventory: All workflows classified by risk level (High/Limited/Minimal/Unacceptable)
  • Article compliance: Status per EU AI Act article (Art 6, Art 9, Art 10-15, etc.)
  • High-risk system analysis: Mandatory requirements for high-risk AI (if applicable)
  • Transparency requirements: Disclosure obligations per risk category
  • Remediation roadmap: Steps to full compliance

When to use:

  • EU market entry planning
  • Product compliance assessment
  • Regulatory audit preparation
  • Customer/partner compliance questionnaires
  • Legal risk assessment

Report length: 10-15 pages

Example use case:

Use case: SaaS company selling to EU customers needs compliance

Report shows:
- 2 High-Risk AI systems identified:
  1. "Credit Scoring Model" (Art 6: High-risk - financial scoring)
  2. "Resume Screening AI" (Art 6: High-risk - employment decisions)

Compliance status:
- Art 9 (Risk Management System): Partial - no documented risk assessment
- Art 10 (Data Governance): Partial - training data not documented
- Art 13 (Transparency): Non-Compliant - no user disclosures

Action: Legal creates compliance package over 8 weeks
Result: All high-risk systems compliant, safe to sell in EU

Key sections:

  1. Executive Summary: Risk profile and compliance readiness
  2. System Inventory: All AI workflows with risk classifications
  3. High-Risk Analysis: Detailed assessment if high-risk systems present
  4. Article Compliance Table: All applicable articles with status
  5. Transparency Requirements: Required disclosures per system
  6. Compliance Roadmap: Timeline and actions to full compliance

EU AI Act Risk Categories:

| Category | Definition | Examples | Requirements |
|---|---|---|---|
| Unacceptable Risk | Prohibited AI practices | Social scoring, subliminal manipulation | Banned - cannot deploy |
| High-Risk | Significant impact on rights/safety | Credit scoring, CV screening, medical diagnosis | Strict requirements (Art 8-15) |
| Limited-Risk | Transparency obligations | Chatbots, deepfakes, emotion recognition | User disclosure required |
| Minimal-Risk | Low or no risk | Spam filters, inventory management | No specific requirements |

5. Audit Pack 📁

Target audience: External Auditors, Regulators, Certification Bodies

Purpose: Evidence bundle with integrity verification for third-party audits. Provides a tamper-evident governance trail.

What it contains:

  • All governance reports: Operational, Board, ISO 42001, EU AI Act (all generated at same timestamp)
  • Data manifest: List of all reports with SHA-256 integrity hashes
  • Verification instructions: How auditors can verify report authenticity
  • Timestamp & version: SignalBreak version used for report generation
  • Export format: ZIP archive with markdown files + checksums

When to use:

  • External compliance audits (ISO, SOC 2, etc.)
  • Regulatory inquiries or investigations
  • Customer/partner due diligence
  • Insurance underwriting (cyber insurance)
  • Legal discovery/evidence preservation

Report length: Bundle of all reports (25-40 pages total)

Example use case:

Use case: External auditor requires governance evidence trail

Report shows:
- 5 reports generated at 2025-01-15 14:23:00 GMT
- Each report has SHA-256 hash for integrity verification
- SignalBreak version: v2.1.3

Auditor actions:
1. Downloads audit-pack-2025-01-15.zip
2. Verifies SHA-256 hashes match manifest
3. Reviews all 5 reports
4. Confirms no tampering (hashes valid)

Result: Auditor accepts evidence as authentic, audit proceeds smoothly

Key sections (Manifest):

  1. Table of Contents: All included reports with page counts
  2. Manifest: Report filenames + SHA-256 hashes
  3. Verification Guide: Step-by-step hash verification instructions
  4. Generation Metadata: Timestamp, SignalBreak version, tenant
  5. Individual Reports: Each report as separate markdown file

Integrity Verification:

```bash
# Auditors verify report authenticity

# On Windows (PowerShell)
CertUtil -hashfile operational-brief.md SHA256

# On macOS/Linux
shasum -a 256 operational-brief.md

# Compare output to manifest hash
# If match: Report is authentic
# If mismatch: Report was tampered with
```
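To check the whole bundle at once rather than file by file, auditors can verify every report against the manifest in one pass. A minimal sketch, assuming the bundle includes a checksum list in the standard `<hash>  <filename>` format (the `manifest.sha256` filename below is illustrative; use whichever checksum file your Audit Pack actually contains):

```bash
# Verify every report in the Audit Pack in one pass (macOS/Linux).
# Assumes a checksum file in standard "<hash>  <filename>" format;
# the name "manifest.sha256" is illustrative.
unzip audit-pack-2025-01-15.zip -d audit-pack
cd audit-pack
shasum -a 256 -c manifest.sha256
# Output shows "OK" per report; any "FAILED" line means the file no longer
# matches the hash recorded at generation time.
```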

Phase 3 Reports (Available)

6. SOC 2 Type II Readiness 🔒

Target audience: Enterprise Sales, Security Teams, Compliance, Prospective Customers

Purpose: Trust Services Criteria (TSC) assessment for enterprise sales and SOC 2 certification readiness.

What it contains:

  • 5 Trust Services Categories: Security, Availability, Processing Integrity, Confidentiality, Privacy
  • Control objectives: 64 common criteria + 25 additional points of focus
  • Evidence mapping: How SignalBreak data demonstrates control effectiveness
  • Gaps & remediation: Missing controls with implementation guidance
  • Readiness score: (Met criteria / Total criteria) × 100

When to use:

  • SOC 2 Type II certification preparation
  • Enterprise customer RFPs (security questionnaires)
  • Security program assessment
  • Vendor risk reviews
  • Trust Center documentation

Report length: 12-18 pages

Example use case:

Use case: SaaS startup pursuing enterprise customers needs SOC 2

Report shows:
- Overall Readiness: 78% (70 of 89 criteria met)
- Security (CC1-CC9): 85% met
- Availability (A1): 100% met (fallback coverage strong)
- Processing Integrity (PI1): 60% met (gaps in AI accuracy monitoring)

Gap: "PI1.4 - Processing is complete, accurate, and timely"
Evidence needed: "AI model output quality monitoring"

Action: Implement quality monitoring dashboards over 4 weeks
Result: PI1 readiness 60% → 95%, overall readiness 78% → 88%

Key sections:

  1. Executive Summary: Overall readiness and certification timeline
  2. TSC Scorecard: 5 categories with Met/Partial/Not Met status
  3. Control Objectives: Detailed breakdown of 89 criteria
  4. Evidence Matrix: SignalBreak data mapped to each control
  5. Gap Analysis: Critical gaps blocking certification
  6. Remediation Roadmap: Prioritized action plan with timelines

Trust Services Categories:

| Category | Common Criteria | Focus | SignalBreak Evidence |
|---|---|---|---|
| Security (CC) | CC1-CC9 | Control environment, risk assessment, monitoring | Workflow access controls, audit logs, signal monitoring |
| Availability (A) | A1 | System uptime and continuity | Fallback coverage, provider health tracking |
| Processing Integrity (PI) | PI1 | Accurate, complete, timely processing | AI output quality, workflow monitoring |
| Confidentiality (C) | C1 | Confidential data protection | Model access controls, data residency tracking |
| Privacy (P) | P1-P8 | Personal data handling | PII processing workflows, consent tracking (if applicable) |

7. NIST AI RMF Alignment 🏛️

Target audience: US Federal Agencies, Defense Contractors, Highly Regulated Industries (Healthcare, Finance)

Purpose: AI Risk Management Framework (NIST AI 100-1) alignment assessment for US government and regulated industries.

What it contains:

  • 4 Core Functions: GOVERN, MAP, MEASURE, MANAGE
  • 47 Subcategories: Detailed practices for each function
  • Implementation status: Implemented / Partial / Not Implemented per subcategory
  • Evidence mapping: How SignalBreak data proves implementation
  • Maturity assessment: Current maturity level per function

When to use:

  • Federal procurement (RFPs, contracts)
  • FedRAMP compliance preparation
  • Healthcare AI compliance (FDA premarket approval)
  • Financial services AI (OCC guidance)
  • Executive Order 14110 compliance (Federal AI use)

Report length: 15-20 pages

Example use case:

Use case: Healthcare AI startup bidding on federal contract

Report shows:
- Overall NIST AI RMF Alignment: 68%
- GOVERN: 85% implemented (strong governance foundation)
- MAP: 70% implemented (AI system inventory complete)
- MEASURE: 55% implemented (gaps in performance tracking)
- MANAGE: 60% implemented (incident response needs work)

Gap: "MEASURE 2.11 - AI system performance is regularly monitored"
Evidence needed: "Automated model quality monitoring"

Action: Implement monitoring dashboards for 3 critical AI models
Result: MEASURE function 55% → 80%, overall 68% → 75%
Federal contracting officer accepts proposal

Key sections:

  1. Executive Summary: Overall alignment percentage and maturity
  2. Function Overview: 4 core functions with implementation status
  3. Subcategory Mapping: All 47 practices with evidence
  4. Gap Analysis: Not Implemented subcategories with recommendations
  5. Maturity Roadmap: Path from current state to full implementation
  6. Federal Compliance: Mapping to Executive Order 14110 requirements

NIST AI RMF Functions:

| Function | Focus | Subcategories | Key Practices |
|---|---|---|---|
| GOVERN | Organizational AI governance | 12 | AI governance structure, policies, roles, risk culture |
| MAP | AI risk identification | 11 | System inventory, context analysis, impact assessment |
| MEASURE | AI risk quantification | 14 | Performance metrics, bias testing, transparency, documentation |
| MANAGE | AI risk mitigation | 10 | Risk response, incident management, third-party oversight |

8. Vendor Risk Assessment ⚖️

Target audience: Procurement, Security, Risk Management, Third-Party Risk Teams

Purpose: Per-provider risk scoring to quantify supply chain risk and inform vendor selection/diversification decisions.

What it contains:

  • Provider risk profiles: Each AI provider scored on 6 risk dimensions
  • Overall risk score (0-100): Higher = more risk (inverse of governance score)
  • Risk dimensions: Operational, Data, Concentration, Continuity, Compliance, Financial
  • Aggregated view: Portfolio-level risk summary across all providers
  • Comparison matrix: Providers ranked by risk level

When to use:

  • Vendor selection (RFPs, procurement decisions)
  • Third-party risk assessments
  • Contract negotiations (SLA requirements)
  • Budget/resource allocation
  • Supplier diversification strategy

Report length: 8-12 pages (depends on provider count)

Example use case:

Use case: Procurement evaluating 3 AI providers for contract renewal

Report shows:

Provider A (OpenAI):
- Overall Risk Score: 75/100 (High)
- Concentration Risk: 85% of workflows (critical dependency)
- Continuity Risk: No fallbacks for 12 critical workflows
- Financial Risk: Recent 30% price increase

Provider B (Anthropic):
- Overall Risk Score: 45/100 (Medium)
- Concentration Risk: 10% of workflows (healthy)
- Continuity Risk: All critical workflows have fallbacks
- Compliance Risk: Strong data residency options (EU/US)

Provider C (Google):
- Overall Risk Score: 60/100 (Medium)
- Operational Risk: 3 outages in last 90 days
- Data Risk: Limited regional availability

Action: Procurement negotiates multi-year contract with Provider B
+ diversification strategy to reduce Provider A dependency
Result: Concentration risk reduced from 85% to 55% over 6 months

Key sections:

  1. Executive Summary: Aggregate portfolio risk and top concerns
  2. Provider Risk Profiles: Each provider scored on 6 dimensions
  3. Risk Heatmap: Visual matrix comparing all providers
  4. Concentration Analysis: Dependency breakdown by provider
  5. Recommendations: Diversification opportunities and cost trade-offs

Risk Dimensions Explained:

| Dimension | What It Measures | High Risk Indicators |
|---|---|---|
| Operational | Provider reliability and uptime | Frequent outages, slow response times, API instability |
| Data | Data sovereignty and residency | Limited regional options, unclear data handling, GDPR concerns |
| Concentration | Workflow dependency | >50% of workflows on single provider, critical systems single-sourced |
| Continuity | Business continuity preparedness | No fallbacks, single-provider dependencies, long RTO/RPO |
| Compliance | Regulatory alignment | Missing certifications (SOC 2, ISO), poor audit trail |
| Financial | Cost volatility | Frequent price changes, unclear pricing, high cost per transaction |

Phase 4 Reports (Coming Soon)

9. Provider Deep-Dive 🔍

Target audience: Technical Leads, Solution Architects, AI/ML Engineers

Purpose: Detailed technical analysis of single provider to inform architecture decisions and risk mitigation.

What it will contain:

  • Dependency mapping: All workflows, models, and API calls for this provider
  • Usage analytics: API call volume, latency, error rates, cost breakdown
  • Failure scenarios: Impact analysis if this provider fails
  • Migration plan: Step-by-step guide to reduce/eliminate dependency
  • Alternative providers: Comparison matrix with migration effort estimates

When to use:

  • Provider migration planning
  • Architecture reviews
  • Incident postmortems
  • Cost optimization initiatives
  • Contract renegotiations

Report length: 10-15 pages

Planned availability: Q2 2025


10. Third-Party AI Exposure 🔗

Target audience: Procurement, Security, Vendor Management, Legal

Purpose: Map AI dependencies hidden in vendor software to understand indirect AI exposure.

What it will contain:

  • Vendor AI inventory: Which vendors use AI in their products
  • Indirect exposure: AI dependencies you don't directly control
  • Cascading risk: Impact if vendor's AI provider fails
  • Contract gaps: Missing AI-specific terms in vendor contracts
  • Remediation strategy: How to mitigate third-party AI risk

When to use:

  • Vendor due diligence
  • Third-party risk assessments
  • Contract negotiations
  • Supply chain risk mapping
  • Regulatory compliance (EU AI Act subprocessor requirements)

Report length: 12-18 pages

Planned availability: Q3 2025


Choosing the Right Report

By Audience

| Audience | Primary Report(s) | Secondary Report(s) |
|---|---|---|
| DevOps / Platform Engineers | Operational Brief | Provider Deep-Dive |
| C-suite / Board | Board Summary | - |
| GRC / Compliance | ISO 42001, EU AI Act | SOC 2, NIST AI RMF |
| Legal | EU AI Act | ISO 42001 |
| External Auditors | Audit Pack | ISO 42001, SOC 2 |
| Procurement / Vendors | Vendor Risk | Third-Party AI Exposure |
| Enterprise Sales | SOC 2 | Board Summary |
| Federal / Defense | NIST AI RMF | SOC 2 |

By Use Case

| Use Case | Recommended Report(s) | Frequency |
|---|---|---|
| Weekly operations | Operational Brief | Weekly |
| Board meetings | Board Summary | Monthly/Quarterly |
| ISO 42001 certification | ISO 42001 Alignment | Quarterly (until certified) |
| EU market entry | EU AI Act Readiness | One-time + annual |
| External audit | Audit Pack | As needed |
| Enterprise sales | SOC 2 | Quarterly + RFPs |
| Federal procurement | NIST AI RMF | Per contract |
| Vendor selection | Vendor Risk | Per procurement cycle |
| Provider migration | Provider Deep-Dive | As needed |
| Third-party risk | Third-Party AI Exposure | Annual |

Report Features

Generation

How reports are generated:

  1. Navigate to Dashboard → Governance → Reports
  2. Select report type
  3. Click "Generate Report"
  4. Wait 5-30 seconds (depends on data volume)
  5. View in-browser or download as markdown/PDF

Generation time:

  • Fast (5-10s): Operational Brief, Board Summary
  • Medium (10-20s): ISO 42001, EU AI Act, SOC 2
  • Slow (20-30s): Audit Pack, NIST AI RMF, Vendor Risk

Formats

Available export formats:

  • Markdown (.md): For version control, GitHub, internal wikis
  • PDF (.pdf): For sharing with stakeholders, printing
  • HTML: For embedding in internal tools
  • JSON (Audit Pack only): For programmatic consumption

Scheduling

Automated report generation:

  • Configure in Settings → Reports → Schedules
  • Frequency: Daily, Weekly, Monthly, Quarterly
  • Delivery: Email, Slack, Webhook
  • Recipients: Email addresses or Slack channels

Example schedule:

Operational Brief: Weekly (Monday 9am)
Board Summary: Monthly (1st of month, 8am)
ISO 42001: Quarterly (end of quarter)

Best Practices

1. Start with Operational Brief

Why: Simplest report, immediate value for teams managing AI daily.

Recommended workflow:

  1. Week 1: Generate Operational Brief, review with team
  2. Week 2-4: Act on quick wins, monitor progress
  3. Month 2: Add Board Summary for leadership visibility
  4. Month 3+: Add compliance reports based on needs

2. Schedule Regular Generation

Why: Reports become stale quickly as your AI landscape changes.

Recommended frequencies:

  • Operational Brief: Weekly (Monday morning prep)
  • Board Summary: Monthly or quarterly (before board meetings)
  • ISO 42001 / EU AI Act: Quarterly (track compliance progress)
  • SOC 2 / NIST AI RMF: Quarterly (certification prep)
  • Vendor Risk: Bi-annually or per procurement cycle

3. Share Reports Widely

Why: Governance is a team sport. Different stakeholders need different reports.

Who should see what:

  • Everyone: Operational Brief (transparency builds trust)
  • Leadership: Board Summary (executive visibility)
  • Compliance: ISO 42001, EU AI Act, SOC 2, NIST AI RMF
  • Procurement: Vendor Risk Assessment
  • External auditors: Audit Pack only (on demand)

4. Track Trends Over Time

Why: A single report is a snapshot; trends show progress.

What to track:

  • Board Summary: Governance score trend (aim for consistent improvement)
  • ISO 42001: Compliance percentage (track gap closure)
  • Operational Brief: At-risk workflow count (aim to reduce)
  • Vendor Risk: Provider concentration (aim for diversification)

How to track:

  • Save reports to version control (Git); see the sketch after this list
  • Name files with timestamps: board-summary-2025-01-15.md
  • Generate quarterly comparison charts
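A minimal sketch of that version-control workflow, assuming reports are downloaded as markdown and committed to a dedicated Git repository (the paths and filenames are illustrative):

```bash
# Commit this month's Board Summary under a timestamped filename so
# quarter-over-quarter trends can be diffed later.
DATE=$(date +%F)                                  # e.g. 2025-01-15
cp ~/Downloads/board-summary.md "reports/board-summary-${DATE}.md"
git add "reports/board-summary-${DATE}.md"
git commit -m "Board Summary snapshot ${DATE}"
```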

5. Use Reports in Decision-Making

Why: Reports are only valuable if they drive action.

Examples:

  • Operational Brief shows 15 workflows with no fallback → Sprint to add fallbacks
  • Board Summary shows 85% concentration on OpenAI → Budget allocated for diversification
  • ISO 42001 shows Red controls → Compliance prioritizes gap remediation
  • Vendor Risk shows high financial risk → Procurement negotiates price lock-ins

FAQ

Can I customize reports?

Not currently. Report templates are fixed to ensure consistency and compliance alignment.

Future feature: Custom report builder (Q3 2025) will allow:

  • Custom branding (logo, colors)
  • Section selection (include/exclude specific sections)
  • Custom KPIs and metrics

How long does report generation take?

Typical times:

  • Operational Brief: 5-10 seconds
  • Board Summary: 10-15 seconds
  • ISO 42001 / EU AI Act: 15-20 seconds
  • SOC 2 / NIST AI RMF: 20-25 seconds
  • Audit Pack: 25-30 seconds (bundles all reports)

Factors affecting speed:

  • Data volume (number of workflows, signals, providers)
  • Report complexity (Audit Pack is slowest)
  • LLM API latency (Claude Sonnet used for generation)

Are reports updated in real-time?

No. Reports are snapshots at generation time.

How it works:

  • Generate report at 10:00 AM → Uses data as of 10:00 AM
  • Add new workflow at 10:30 AM → Not included in 10:00 AM report
  • Re-generate report at 11:00 AM → New workflow included

Best practice: Re-generate reports before important meetings or presentations to ensure latest data.


Can I share reports with external auditors?

Yes. Use the Audit Pack for external sharing.

Why Audit Pack:

  • Includes integrity hashes (tamper-evident)
  • Bundled format (all reports together)
  • Verification instructions included
  • Designed for third-party auditors

Security:

  • Reports contain tenant-specific data (workflows, providers)
  • Only share with trusted parties (auditors, compliance consultants)
  • Use encrypted channels (email encryption, secure file sharing)

Do reports contain sensitive data?

Yes. Reports include:

  • Workflow names and descriptions
  • Provider names and models
  • Business context and impact estimates
  • Governance scores and maturity levels

What's NOT included:

  • API keys or credentials
  • Actual AI prompts or responses
  • Customer data or PII
  • Financial details (beyond high-level cost estimates)

Access control:

  • Reports require authentication (logged-in user)
  • Role-based: Not all users can generate all report types
  • Audit trail: All report generations logged

Can I generate reports via API?

Yes (Enterprise plans only).

Endpoint:

```http
POST /api/reports/generate
Content-Type: application/json

{
  "reportType": "operational",
  "format": "pdf"
}
```

Response:

```json
{
  "reportId": "abc-123",
  "downloadUrl": "https://signalbreak.com/api/reports/abc-123/download",
  "expiresAt": "2025-01-16T14:23:00Z"
}
```
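As a hedged example (not an official client), the endpoint above can be called from a scheduled job with curl; the bearer-token header below is an assumption, so confirm the authentication scheme documented for your Enterprise plan:

```bash
# Generate the weekly Operational Brief from a cron job (e.g. Monday 08:00).
# The Authorization header format is an assumption; use the auth mechanism
# documented for your Enterprise plan.
curl -X POST https://signalbreak.com/api/reports/generate \
  -H "Authorization: Bearer $SIGNALBREAK_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"reportType": "operational", "format": "pdf"}'
# The response includes a time-limited downloadUrl; fetch the file
# before the expiresAt timestamp.
```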

Use cases:

  • Automated report generation (cron jobs)
  • Integration with BI tools (Tableau, PowerBI)
  • Custom workflows (Zapier, n8n)

Related Documentation

  • Governance: Understand governance maturity scoring and how it feeds into Board Summary
  • Workflows: Workflow configuration impacts Operational Brief and Vendor Risk reports
  • Signals: Active signals appear in Operational Brief and Board Summary
  • Scenarios: Scenario execution results feed into risk assessments

Support

Need help with reports?

Common requests:

  • Report interpretation and recommendations
  • Custom report requirements (Enterprise feature)
  • Compliance mapping assistance (ISO, EU AI Act, NIST)
  • API integration for automated report generation

Enterprise support:

  • Quarterly report review sessions with governance advisor
  • Custom compliance mapping for industry-specific regulations
  • White-label reports with your company branding

Last updated: January 2025
