Report Types
SignalBreak offers 10 governance report types for different stakeholders, organized into 4 maturity phases (8 reports are available today; 2 are planned).
Overview
SignalBreak's Governance Intelligence reports transform your AI infrastructure data into actionable insights for different audiences. Each report type addresses specific governance needs, from daily operations to boardroom presentations to regulatory compliance.
Report Phases
Reports are organized by maturity and complexity:
| Phase | Focus | Audience | Report Count |
|---|---|---|---|
| Phase 1 | Core Operations & Executive | DevOps, C-suite | 2 reports |
| Phase 2 | Compliance & Audit | GRC, Legal, Auditors | 3 reports |
| Phase 3 | Advanced Compliance & Risk | Enterprise, Federal, Procurement | 3 reports |
| Phase 4 | Deep-Dive Analysis | Technical, Strategic | 2 reports |
Phase 1 Reports (Available Now)
1. Operational Brief 📊
Target audience: DevOps, Platform Engineers, AI/ML Teams
Purpose: Weekly digest of operational AI governance status for technical teams managing day-to-day AI infrastructure.
What it contains:
- Active signals (last 7 days): Provider changes affecting your workflows
- At-risk workflows: Workflows without fallbacks or single-provider dependencies
- Upcoming changes: Model deprecations, pricing changes with deadlines
- Active scenarios: Running business continuity tests
- Quick wins: Low-effort, high-impact improvements you can make this week
When to use:
- Monday morning team stand-ups
- Weekly sprint planning
- Incident response prioritization
- Quarterly OKR planning
Report length: 2-3 pages
Example use case:
Use case: Platform team needs to prioritize this week's tasks
Report shows:
- 3 critical workflows with no fallback (high risk)
- OpenAI deprecating gpt-3.5-turbo-0301 in 30 days (12 workflows affected)
- Quick win: Add fallback to "Customer Support Chatbot" (15 mins)
Action: Team spends Monday morning adding fallbacks to 3 critical workflows
Result: Single-point-of-failure workflows reduced from 15 to 12
Key sections:
- Executive Summary: 3-sentence overview of operational status
- Critical Alerts: Signals requiring immediate action (< 7 days)
- Workflow Health: At-risk workflows table with remediation steps
- Provider Changes: Upcoming deprecations/pricing/policy changes
- Recommendations: Prioritized quick wins ranked by impact/effort
2. Board Summary 👔
Target audience: C-suite (CEO, CTO, CFO), Board Members, Executives
Purpose: High-level governance overview for leadership to understand AI risk posture without technical details.
What it contains:
- Governance maturity score (0-100): Overall AI governance health (Green/Amber/Red)
- Provider concentration risk: Dependency on single providers (charts)
- Top exposures: Highest-risk workflows ranked by business impact
- 90-day trend: Governance score trajectory (improving/stable/declining)
- Key performance indicators (KPIs): Critical metrics for decision-making
When to use:
- Monthly board meetings
- Quarterly business reviews
- Investor due diligence
- Executive risk committee updates
- Budget/resource allocation decisions
Report length: 2 pages (designed for executive brevity)
Example use case:
Use case: CFO presenting AI risk to board
Report shows:
- Overall Score: 72/100 (Amber) - up from 65 last quarter
- Concentration Risk: 85% of workflows use OpenAI (critical dependency)
- Top Exposure: Fraud detection system (£50k/day revenue at risk)
- Trend: Improving (+7 points in 90 days)
Action: Board approves £50k budget to diversify providers
Result: Reduced OpenAI concentration from 85% to 60% in Q2
Key sections:
- RAG Status Dashboard: Red/Amber/Green governance indicators
- Governance Maturity: Score with maturity level (L1-L5)
- Concentration Risk Chart: Provider dependency breakdown
- Top 5 Exposures: Critical workflows with business impact estimates
- 90-Day Trend: Line chart showing governance score trajectory
- Executive Actions: 3-5 prioritized recommendations
RAG Status Table (unique to Board Summary):
| Category | Status | Score | Action Required |
|---|---|---|---|
| Provider Diversity | 🔴 Red | 35/100 | High concentration (85% OpenAI) |
| Fallback Coverage | 🟡 Amber | 65/100 | 45% of critical workflows have fallbacks |
| Signal Response | 🟢 Green | 90/100 | All signals reviewed within 48h |
| Governance Maturity | 🟡 Amber | 72/100 | L3 (Defined) - target L4 by Q3 |
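The Red/Amber/Green ratings correspond to score bands. Below is a minimal sketch of that mapping, using illustrative thresholds consistent with the table above (below 50 = Red, 50-79 = Amber, 80 and above = Green); the exact cut-offs SignalBreak applies are not documented here.

```python
def rag_status(score: int) -> str:
    """Map a 0-100 governance score to a Red/Amber/Green status.

    Thresholds are illustrative assumptions consistent with the example
    table above, not SignalBreak's documented cut-offs.
    """
    if score >= 80:
        return "Green"
    if score >= 50:
        return "Amber"
    return "Red"

# Example: the four categories from the RAG table above
for category, score in [("Provider Diversity", 35), ("Fallback Coverage", 65),
                        ("Signal Response", 90), ("Governance Maturity", 72)]:
    print(f"{category}: {score}/100 -> {rag_status(score)}")
```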
Phase 2 Reports (Available)
3. ISO 42001 Alignment 📋
Target audience: GRC (Governance, Risk, Compliance), Compliance Officers, Internal Audit
Purpose: Gap analysis against ISO 42001:2023 AI Management System standard to demonstrate compliance readiness.
What it contains:
- Control mapping: All 39 ISO 42001 clauses and controls (clauses 4.1-10.2 plus Annex A controls A.1-A.18)
- Compliance status: Red/Amber/Green per control
- Evidence trail: Specific data points proving compliance
- Gap analysis: Missing controls with remediation guidance
- Overall compliance percentage: (Green controls / Total controls) × 100
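As a worked illustration of the compliance formula, the short sketch below tallies hypothetical control statuses and applies (Green controls / Total controls) × 100; the clause IDs and statuses are placeholders, not real assessment data.

```python
from collections import Counter

# Hypothetical control statuses keyed by ISO 42001 clause/control ID (truncated for illustration)
control_status = {"4.1": "Green", "5.2": "Green", "7.3": "Amber", "A.12": "Red"}

counts = Counter(control_status.values())
compliance_pct = 100 * counts["Green"] / len(control_status)
print(dict(counts))                                   # {'Green': 2, 'Amber': 1, 'Red': 1}
print(f"Overall compliance: {compliance_pct:.0f}%")   # 2/4 Green -> 50%; 28/39 Green -> 72%, as in the example below
```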
When to use:
- ISO 42001 certification preparation
- External audit readiness
- Client/partner RFPs requiring ISO compliance
- Quarterly compliance reviews
- Management system documentation
Report length: 8-12 pages
Example use case:
Use case: Compliance preparing for ISO 42001 certification audit
Report shows:
- 28/39 controls Green (72% compliant)
- 8/39 controls Amber (needs improvement)
- 3/39 controls Red (critical gaps)
Red control: "A.12 - Data Protection Impact Assessment (DPIA)"
Gap: "No DPIA documented for high-risk AI systems"
Evidence needed: "DPIA for fraud detection + customer chatbot"
Action: Compliance completes 2 DPIAs over 3 weeks
Result: Red → Green; overall compliance improves from 72% to 74% (29/39 controls Green)
Key sections:
- Executive Summary: Overall compliance score and readiness level
- Control Mapping Table: All 39 clauses with Red/Amber/Green status
- Evidence Matrix: Data sources proving each control
- Gap Analysis: Detailed breakdown of Amber/Red controls
- Remediation Roadmap: Prioritized action plan to reach 100%
- Certification Readiness: Estimated timeline to audit-ready state
ISO 42001 Clauses Covered:
- Context (4.1-4.4): Organization context, stakeholder needs, scope, AI management system
- Leadership (5.1-5.3): Top management commitment, policy, roles/responsibilities
- Planning (6.1-6.4): Risk assessment, objectives, change management
- Support (7.1-7.6): Resources, competence, awareness, communication, documented information
- Operation (8.1-8.3): Operational planning, AI system lifecycle, third-party management
- Performance (9.1-9.3): Monitoring, internal audit, management review
- Improvement (10.1-10.2): Nonconformity, continual improvement
- Controls (A.1-A.18): 18 specific AI controls (data quality, bias, explainability, etc.)
4. EU AI Act Readiness 🇪🇺
Target audience: Legal, Compliance, Data Protection Officers (DPOs), Product Teams
Purpose: Regulatory compliance assessment against EU AI Act (Regulation 2024/1689) to identify high-risk AI systems and demonstrate readiness.
What it contains:
- AI system inventory: All workflows classified by risk level (High/Limited/Minimal/Unacceptable)
- Article compliance: Status per EU AI Act article (Art 6, Art 9, Art 10-15, etc.)
- High-risk system analysis: Mandatory requirements for high-risk AI (if applicable)
- Transparency requirements: Disclosure obligations per risk category
- Remediation roadmap: Steps to full compliance
When to use:
- EU market entry planning
- Product compliance assessment
- Regulatory audit preparation
- Customer/partner compliance questionnaires
- Legal risk assessment
Report length: 10-15 pages
Example use case:
Use case: SaaS company selling to EU customers needs compliance
Report shows:
- 2 High-Risk AI systems identified:
1. "Credit Scoring Model" (Art 6: High-risk - financial scoring)
2. "Resume Screening AI" (Art 6: High-risk - employment decisions)
Compliance status:
- Art 9 (Risk Management System): Partial - no documented risk assessment
- Art 10 (Data Governance): Partial - training data not documented
- Art 13 (Transparency): Non-Compliant - no user disclosures
Action: Legal creates compliance package over 8 weeks
Result: All high-risk systems compliant, safe to sell in EU
Key sections:
- Executive Summary: Risk profile and compliance readiness
- System Inventory: All AI workflows with risk classifications
- High-Risk Analysis: Detailed assessment if high-risk systems present
- Article Compliance Table: All applicable articles with status
- Transparency Requirements: Required disclosures per system
- Compliance Roadmap: Timeline and actions to full compliance
EU AI Act Risk Categories:
| Category | Definition | Examples | Requirements |
|---|---|---|---|
| Unacceptable Risk | Prohibited AI practices | Social scoring, subliminal manipulation | Banned - cannot deploy |
| High-Risk | Significant impact on rights/safety | Credit scoring, CV screening, medical diagnosis | Strict requirements (Art 8-15) |
| Limited-Risk | Transparency obligations | Chatbots, deepfakes, emotion recognition | User disclosure required |
| Minimal-Risk | Low or no risk | Spam filters, inventory management | No specific requirements |
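To make the classification step concrete, here is a deliberately simplified, hypothetical sketch that buckets a workflow's declared use case into the categories above via keyword matching; real EU AI Act classification requires legal analysis of Annex III and the prohibited-practices list, so treat this as illustration only.

```python
# Hypothetical, simplified mapping from use-case keywords to EU AI Act risk categories.
# Real classification requires legal review, not keyword matching.
PROHIBITED_KEYWORDS = {"social scoring", "subliminal manipulation"}
HIGH_RISK_KEYWORDS = {"credit scoring", "resume screening", "recruitment", "medical diagnosis"}
LIMITED_RISK_KEYWORDS = {"chatbot", "deepfake", "emotion recognition"}

def classify(use_case: str) -> str:
    text = use_case.lower()
    if any(k in text for k in PROHIBITED_KEYWORDS):
        return "Unacceptable Risk"
    if any(k in text for k in HIGH_RISK_KEYWORDS):
        return "High-Risk"
    if any(k in text for k in LIMITED_RISK_KEYWORDS):
        return "Limited-Risk"
    return "Minimal-Risk"

print(classify("Credit scoring model for loan approvals"))   # High-Risk
print(classify("Customer support chatbot"))                   # Limited-Risk
```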
5. Audit Pack 📁
Target audience: External Auditors, Regulators, Certification Bodies
Purpose: Evidence bundle with integrity verification for third-party audits. Provides tamper-proof governance trail.
What it contains:
- All governance reports: Operational, Board, ISO 42001, EU AI Act (all generated at same timestamp)
- Data manifest: List of all reports with SHA-256 integrity hashes
- Verification instructions: How auditors can verify report authenticity
- Timestamp & version: SignalBreak version used for report generation
- Export format: ZIP archive with markdown files + checksums
When to use:
- External compliance audits (ISO, SOC 2, etc.)
- Regulatory inquiries or investigations
- Customer/partner due diligence
- Insurance underwriting (cyber insurance)
- Legal discovery/evidence preservation
Report length: Bundle of all reports (25-40 pages total)
Example use case:
Use case: External auditor requires governance evidence trail
Report shows:
- 5 reports generated at 2025-01-15 14:23:00 GMT
- Each report has SHA-256 hash for integrity verification
- SignalBreak version: v2.1.3
Auditor actions:
1. Downloads audit-pack-2025-01-15.zip
2. Verifies SHA-256 hashes match manifest
3. Reviews all 5 reports
4. Confirms no tampering (hashes valid)
Result: Auditor accepts evidence as authentic, audit proceeds smoothly
Key sections (Manifest):
- Table of Contents: All included reports with page counts
- Manifest: Report filenames + SHA-256 hashes
- Verification Guide: Step-by-step hash verification instructions
- Generation Metadata: Timestamp, SignalBreak version, tenant
- Individual Reports: Each report as separate markdown file
Integrity Verification:
# Auditors verify report authenticity
# On Windows (PowerShell)
CertUtil -hashfile operational-brief.md SHA256
# On macOS/Linux
shasum -a 256 operational-brief.md
# Compare output to manifest hash
# If match: Report is authentic
# If mismatch: Report was tampered with
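Auditors who prefer to script the check can recompute hashes programmatically. The sketch below assumes a manifest formatted like shasum output (one "<sha256>  <filename>" entry per line); adapt the filename and format to the actual bundle contents.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical manifest format: "<sha256>  <filename>" per line (like shasum output)
for line in Path("manifest.txt").read_text().splitlines():
    expected, filename = line.split(maxsplit=1)
    status = "OK" if sha256_of(filename) == expected else "TAMPERED"
    print(f"{filename}: {status}")
```

Phase 3 Reports (Available)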
6. SOC 2 Type II Readiness 🔒
Target audience: Enterprise Sales, Security Teams, Compliance, Prospective Customers
Purpose: Trust Services Criteria (TSC) assessment for enterprise sales and SOC 2 certification readiness.
What it contains:
- 5 Trust Services Categories: Security, Availability, Processing Integrity, Confidentiality, Privacy
- Control objectives: 64 common criteria + 25 additional points of focus
- Evidence mapping: How SignalBreak data demonstrates control effectiveness
- Gaps & remediation: Missing controls with implementation guidance
- Readiness score: (Met criteria / Total criteria) × 100
When to use:
- SOC 2 Type II certification preparation
- Enterprise customer RFPs (security questionnaires)
- Security program assessment
- Vendor risk reviews
- Trust Center documentation
Report length: 12-18 pages
Example use case:
Use case: SaaS startup pursuing enterprise customers needs SOC 2
Report shows:
- Overall Readiness: 78% (70 of 89 criteria met)
- Security (CC1-CC9): 85% met
- Availability (A1): 100% met (fallback coverage strong)
- Processing Integrity (PI1): 60% met (gaps in AI accuracy monitoring)
Gap: "PI1.4 - Processing is complete, accurate, and timely"
Evidence needed: "AI model output quality monitoring"
Action: Implement quality monitoring dashboards over 4 weeks
Result: PI1 readiness 60% → 95%, overall readiness 78% → 88%
Key sections:
- Executive Summary: Overall readiness and certification timeline
- TSC Scorecard: 5 categories with Met/Partial/Not Met status
- Control Objectives: Detailed breakdown of 89 criteria
- Evidence Matrix: SignalBreak data mapped to each control
- Gap Analysis: Critical gaps blocking certification
- Remediation Roadmap: Prioritized action plan with timelines
Trust Services Categories:
| Category | Common Criteria | Focus | SignalBreak Evidence |
|---|---|---|---|
| Security (CC) | CC1-CC9 | Control environment, risk assessment, monitoring | Workflow access controls, audit logs, signal monitoring |
| Availability (A) | A1 | System uptime and continuity | Fallback coverage, provider health tracking |
| Processing Integrity (PI) | PI1 | Accurate, complete, timely processing | AI output quality, workflow monitoring |
| Confidentiality (C) | C1 | Confidential data protection | Model access controls, data residency tracking |
| Privacy (P) | P1-P8 | Personal data handling | PII processing workflows, consent tracking (if applicable) |
7. NIST AI RMF Alignment 🏛️
Target audience: US Federal Agencies, Defense Contractors, Highly Regulated Industries (Healthcare, Finance)
Purpose: AI Risk Management Framework (NIST AI 100-1) alignment assessment for US government and regulated industries.
What it contains:
- 4 Core Functions: GOVERN, MAP, MEASURE, MANAGE
- 47 Subcategories: Detailed practices for each function
- Implementation status: Implemented / Partial / Not Implemented per subcategory
- Evidence mapping: How SignalBreak data proves implementation
- Maturity assessment: Current maturity level per function
When to use:
- Federal procurement (RFPs, contracts)
- FedRAMP compliance preparation
- Healthcare AI compliance (FDA premarket approval)
- Financial services AI (OCC guidance)
- Executive Order 14110 compliance (Federal AI use)
Report length: 15-20 pages
Example use case:
Use case: Healthcare AI startup bidding on federal contract
Report shows:
- Overall NIST AI RMF Alignment: 68%
- GOVERN: 85% implemented (strong governance foundation)
- MAP: 70% implemented (AI system inventory complete)
- MEASURE: 55% implemented (gaps in performance tracking)
- MANAGE: 60% implemented (incident response needs work)
Gap: "MEASURE 2.11 - AI system performance is regularly monitored"
Evidence needed: "Automated model quality monitoring"
Action: Implement monitoring dashboards for 3 critical AI models
Result: MEASURE function 55% → 80%, overall 68% → 75%
Federal contracting officer accepts proposal
Key sections:
- Executive Summary: Overall alignment percentage and maturity
- Function Overview: 4 core functions with implementation status
- Subcategory Mapping: All 47 practices with evidence
- Gap Analysis: Not Implemented subcategories with recommendations
- Maturity Roadmap: Path from current state to full implementation
- Federal Compliance: Mapping to Executive Order 14110 requirements
NIST AI RMF Functions:
| Function | Focus | Subcategories | Key Practices |
|---|---|---|---|
| GOVERN | Organizational AI governance | 12 | AI governance structure, policies, roles, risk culture |
| MAP | AI risk identification | 11 | System inventory, context analysis, impact assessment |
| MEASURE | AI risk quantification | 14 | Performance metrics, bias testing, transparency, documentation |
| MANAGE | AI risk mitigation | 10 | Risk response, incident management, third-party oversight |
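The example above reports per-function percentages alongside an overall figure of 68%. One plausible way to reconcile them is a subcategory-weighted average using the counts from this table, sketched below; the actual aggregation method is not documented, so treat this as an illustration rather than the product's formula.

```python
# Per-function implementation rates from the example above, weighted by the
# subcategory counts in the table (12 + 11 + 14 + 10 = 47). Illustrative only.
functions = {
    "GOVERN":  (0.85, 12),
    "MAP":     (0.70, 11),
    "MEASURE": (0.55, 14),
    "MANAGE":  (0.60, 10),
}

total_subcats = sum(count for _, count in functions.values())
overall = sum(rate * count for rate, count in functions.values()) / total_subcats
print(f"Overall alignment: {overall:.0%}")  # ~67%, close to the 68% reported in the example
```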
8. Vendor Risk Assessment ⚖️
Target audience: Procurement, Security, Risk Management, Third-Party Risk Teams
Purpose: Per-provider risk scoring to quantify supply chain risk and inform vendor selection/diversification decisions.
What it contains:
- Provider risk profiles: Each AI provider scored on 6 risk dimensions
- Overall risk score (0-100): Higher = more risk (inverse of governance score)
- Risk dimensions: Operational, Data, Concentration, Continuity, Compliance, Financial
- Aggregated view: Portfolio-level risk summary across all providers
- Comparison matrix: Providers ranked by risk level
When to use:
- Vendor selection (RFPs, procurement decisions)
- Third-party risk assessments
- Contract negotiations (SLA requirements)
- Budget/resource allocation
- Supplier diversification strategy
Report length: 8-12 pages (depends on provider count)
Example use case:
Use case: Procurement evaluating 3 AI providers for contract renewal
Report shows:
Provider A (OpenAI):
- Overall Risk Score: 75/100 (High)
- Concentration Risk: 85% of workflows (critical dependency)
- Continuity Risk: No fallbacks for 12 critical workflows
- Financial Risk: Recent 30% price increase
Provider B (Anthropic):
- Overall Risk Score: 45/100 (Medium)
- Concentration Risk: 10% of workflows (healthy)
- Continuity Risk: All critical workflows have fallbacks
- Compliance Risk: Strong data residency options (EU/US)
Provider C (Google):
- Overall Risk Score: 60/100 (Medium)
- Operational Risk: 3 outages in last 90 days
- Data Risk: Limited regional availability
Action: Procurement negotiates multi-year contract with Provider B
+ diversification strategy to reduce Provider A dependency
Result: Concentration risk reduced from 85% to 55% over 6 months
Key sections:
- Executive Summary: Aggregate portfolio risk and top concerns
- Provider Risk Profiles: Each provider scored on 6 dimensions
- Risk Heatmap: Visual matrix comparing all providers
- Concentration Analysis: Dependency breakdown by provider
- Recommendations: Diversification opportunities and cost trade-offs
Risk Dimensions Explained:
| Dimension | What It Measures | High Risk Indicators |
|---|---|---|
| Operational | Provider reliability and uptime | Frequent outages, slow response times, API instability |
| Data | Data sovereignty and residency | Limited regional options, unclear data handling, GDPR concerns |
| Concentration | Workflow dependency | >50% of workflows on single provider, critical systems single-sourced |
| Continuity | Business continuity preparedness | No fallbacks, single-provider dependencies, long RTO/RPO |
| Compliance | Regulatory alignment | Missing certifications (SOC 2, ISO), poor audit trail |
| Financial | Cost volatility | Frequent price changes, unclear pricing, high cost per transaction |
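As an illustration of how the six dimensions might roll up into the 0-100 overall risk score, the sketch below uses equal weights and hypothetical per-dimension scores; the real report's weighting, inputs, and risk bands are not documented here.

```python
# Hypothetical per-dimension risk scores (0-100, higher = more risk) for one provider.
# Equal weights and the High/Medium/Low bands are assumptions for illustration.
dimensions = {
    "Operational": 55,
    "Data": 40,
    "Concentration": 90,
    "Continuity": 85,
    "Compliance": 60,
    "Financial": 70,
}

overall_risk = sum(dimensions.values()) / len(dimensions)
band = "High" if overall_risk >= 70 else "Medium" if overall_risk >= 40 else "Low"
print(f"Overall risk score: {overall_risk:.0f}/100 ({band})")
```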
Phase 4 Reports (Coming Soon)
9. Provider Deep-Dive 🔍
Target audience: Technical Leads, Solution Architects, AI/ML Engineers
Purpose: Detailed technical analysis of single provider to inform architecture decisions and risk mitigation.
What it will contain:
- Dependency mapping: All workflows, models, and API calls for this provider
- Usage analytics: API call volume, latency, error rates, cost breakdown
- Failure scenarios: Impact analysis if this provider fails
- Migration plan: Step-by-step guide to reduce/eliminate dependency
- Alternative providers: Comparison matrix with migration effort estimates
When to use:
- Provider migration planning
- Architecture reviews
- Incident postmortems
- Cost optimization initiatives
- Contract renegotiations
Report length: 10-15 pages
Planned availability: Q2 2025
10. Third-Party AI Exposure 🔗
Target audience: Procurement, Security, Vendor Management, Legal
Purpose: Map AI dependencies hidden in vendor software to understand indirect AI exposure.
What it will contain:
- Vendor AI inventory: Which vendors use AI in their products
- Indirect exposure: AI dependencies you don't directly control
- Cascading risk: Impact if vendor's AI provider fails
- Contract gaps: Missing AI-specific terms in vendor contracts
- Remediation strategy: How to mitigate third-party AI risk
When to use:
- Vendor due diligence
- Third-party risk assessments
- Contract negotiations
- Supply chain risk mapping
- Regulatory compliance (EU AI Act subprocessor requirements)
Report length: 12-18 pages
Planned availability: Q3 2025
Choosing the Right Report
By Audience
| Audience | Primary Report(s) | Secondary Report(s) |
|---|---|---|
| DevOps / Platform Engineers | Operational Brief | Provider Deep-Dive |
| C-suite / Board | Board Summary | - |
| GRC / Compliance | ISO 42001, EU AI Act | SOC 2, NIST AI RMF |
| Legal | EU AI Act | ISO 42001 |
| External Auditors | Audit Pack | ISO 42001, SOC 2 |
| Procurement / Vendors | Vendor Risk | Third-Party AI Exposure |
| Enterprise Sales | SOC 2 | Board Summary |
| Federal / Defense | NIST AI RMF | SOC 2 |
By Use Case
| Use Case | Recommended Report(s) | Frequency |
|---|---|---|
| Weekly operations | Operational Brief | Weekly |
| Board meetings | Board Summary | Monthly/Quarterly |
| ISO 42001 certification | ISO 42001 Alignment | Quarterly (until certified) |
| EU market entry | EU AI Act Readiness | One-time + annual |
| External audit | Audit Pack | As needed |
| Enterprise sales | SOC 2 | Quarterly + RFPs |
| Federal procurement | NIST AI RMF | Per contract |
| Vendor selection | Vendor Risk | Per procurement cycle |
| Provider migration | Provider Deep-Dive | As needed |
| Third-party risk | Third-Party AI Exposure | Annual |
Report Features
Generation
How reports are generated:
1. Navigate to Dashboard → Governance → Reports
2. Select report type
3. Click "Generate Report"
4. Wait 5-30 seconds (depends on data volume)
5. View in-browser or download as markdown/PDF
Generation time:
- Fast (5-10s): Operational Brief, Board Summary
- Medium (10-20s): ISO 42001, EU AI Act, SOC 2
- Slow (20-30s): Audit Pack, NIST AI RMF, Vendor Risk
Formats
Available export formats:
- Markdown (.md): For version control, GitHub, internal wikis
- PDF (.pdf): For sharing with stakeholders, printing
- HTML: For embedding in internal tools
- JSON (Audit Pack only): For programmatic consumption
Scheduling
Automated report generation:
- Configure in Settings → Reports → Schedules
- Frequency: Daily, Weekly, Monthly, Quarterly
- Delivery: Email, Slack, Webhook
- Recipients: Email addresses or Slack channels
Example schedule:
Operational Brief: Weekly (Monday 9am)
Board Summary: Monthly (1st of month, 8am)
ISO 42001: Quarterly (end of quarter)
Best Practices
1. Start with Operational Brief
Why: Simplest report, immediate value for teams managing AI daily.
Recommended workflow:
- Week 1: Generate Operational Brief, review with team
- Week 2-4: Act on quick wins, monitor progress
- Month 2: Add Board Summary for leadership visibility
- Month 3+: Add compliance reports based on needs
2. Schedule Regular Generation
Why: Reports become stale quickly as your AI landscape changes.
Recommended frequencies:
- Operational Brief: Weekly (Monday morning prep)
- Board Summary: Monthly or quarterly (before board meetings)
- ISO 42001 / EU AI Act: Quarterly (track compliance progress)
- SOC 2 / NIST AI RMF: Quarterly (certification prep)
- Vendor Risk: Bi-annually or per procurement cycle
3. Share Reports Widely
Why: Governance is a team sport. Different stakeholders need different reports.
Who should see what:
- Everyone: Operational Brief (transparency builds trust)
- Leadership: Board Summary (executive visibility)
- Compliance: ISO 42001, EU AI Act, SOC 2, NIST AI RMF
- Procurement: Vendor Risk Assessment
- External auditors: Audit Pack only (on demand)
4. Track Report Trends Over Time
Why: A single report is a snapshot; trends show progress.
What to track:
- Board Summary: Governance score trend (aim for consistent improvement)
- ISO 42001: Compliance percentage (track gap closure)
- Operational Brief: At-risk workflow count (aim to reduce)
- Vendor Risk: Provider concentration (aim for diversification)
How to track:
- Save reports to version control (Git)
- Name files with timestamps:
board-summary-2025-01-15.md
- Generate quarterly comparison charts
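A minimal sketch for extracting a trend from saved reports, assuming each Board Summary contains a line such as "Overall Score: 72/100" (the exact wording may differ, so adjust the pattern to the real report text):

```python
import re
from pathlib import Path

# Assumes reports are saved as board-summary-YYYY-MM-DD.md and contain a line
# like "Overall Score: 72/100" (hypothetical pattern; adjust to the real text).
pattern = re.compile(r"Overall Score:\s*(\d+)/100")

for report in sorted(Path("reports").glob("board-summary-*.md")):
    match = pattern.search(report.read_text())
    if match:
        print(f"{report.stem}: {match.group(1)}/100")
```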
5. Use Reports in Decision-Making
Why: Reports are only valuable if they drive action.
Examples:
- Operational Brief shows 15 workflows with no fallback → Sprint to add fallbacks
- Board Summary shows 85% concentration on OpenAI → Budget allocated for diversification
- ISO 42001 shows Red controls → Compliance prioritizes gap remediation
- Vendor Risk shows high financial risk → Procurement negotiates price lock-ins
FAQ
Can I customize reports?
Not currently. Report templates are fixed to ensure consistency and compliance alignment.
Future feature: Custom report builder (Q3 2025) will allow:
- Custom branding (logo, colors)
- Section selection (include/exclude specific sections)
- Custom KPIs and metrics
How long does report generation take?
Typical times:
- Operational Brief: 5-10 seconds
- Board Summary: 10-15 seconds
- ISO 42001 / EU AI Act: 15-20 seconds
- SOC 2 / NIST AI RMF: 20-25 seconds
- Audit Pack: 25-30 seconds (bundles all reports)
Factors affecting speed:
- Data volume (number of workflows, signals, providers)
- Report complexity (Audit Pack is slowest)
- LLM API latency (Claude Sonnet used for generation)
Are reports updated in real-time?
No. Reports are snapshots at generation time.
How it works:
- Generate report at 10:00 AM → Uses data as of 10:00 AM
- Add new workflow at 10:30 AM → Not included in 10:00 AM report
- Re-generate report at 11:00 AM → New workflow included
Best practice: Re-generate reports before important meetings or presentations to ensure latest data.
Can I share reports with external auditors?
Yes. Use the Audit Pack for external sharing.
Why Audit Pack:
- Includes integrity hashes (tamper-proof)
- Bundled format (all reports together)
- Verification instructions included
- Designed for third-party auditors
Security:
- Reports contain tenant-specific data (workflows, providers)
- Only share with trusted parties (auditors, compliance consultants)
- Use encrypted channels (email encryption, secure file sharing)
Do reports contain sensitive data?
Yes. Reports include:
- Workflow names and descriptions
- Provider names and models
- Business context and impact estimates
- Governance scores and maturity levels
What's NOT included:
- API keys or credentials
- Actual AI prompts or responses
- Customer data or PII
- Financial details (beyond high-level cost estimates)
Access control:
- Reports require authentication (logged-in user)
- Role-based: Not all users can generate all report types
- Audit trail: All report generations logged
Can I generate reports via API?
Yes (Enterprise plans only).
Endpoint:
POST /api/reports/generate
Content-Type: application/json
{
"reportType": "operational",
"format": "pdf"
}
Response:
{
"reportId": "abc-123",
"downloadUrl": "https://signalbreak.com/api/reports/abc-123/download",
"expiresAt": "2025-01-16T14:23:00Z"
}
Use cases:
- Automated report generation (cron jobs)
- Integration with BI tools (Tableau, PowerBI)
- Custom workflows (Zapier, n8n)
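A minimal automation sketch against the endpoint shown above; the bearer-token authentication is an assumption, since authentication details are not documented here.

```python
import requests

API_BASE = "https://signalbreak.com/api"   # base URL taken from the sample response above
TOKEN = "YOUR_API_TOKEN"                    # authentication method is an assumption

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Request a PDF Operational Brief, mirroring the documented request body
resp = requests.post(f"{API_BASE}/reports/generate",
                     json={"reportType": "operational", "format": "pdf"},
                     headers=headers, timeout=60)
resp.raise_for_status()
report = resp.json()

# Download the finished report before the link expires
pdf = requests.get(report["downloadUrl"], headers=headers, timeout=60)
pdf.raise_for_status()
with open("operational-brief.pdf", "wb") as f:
    f.write(pdf.content)
print(f"Saved report {report['reportId']}, link expires at {report['expiresAt']}")
```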
Related Features
- Governance: Understand governance maturity scoring and how it feeds into Board Summary
- Workflows: Workflow configuration impacts Operational Brief and Vendor Risk reports
- Signals: Active signals appear in Operational Brief and Board Summary
- Scenarios: Scenario execution results feed into risk assessments
Support
Need help with reports?
- 📧 Email: support@signal-break.com
- 💬 Live Chat: Click chat icon (bottom right) for instant support
- 📚 Knowledge Base: docs.signal-break.com
- 🎥 Video Tutorial: Understanding SignalBreak Reports (10 mins)
Common requests:
- Report interpretation and recommendations
- Custom report requirements (Enterprise feature)
- Compliance mapping assistance (ISO, EU AI Act, NIST)
- API integration for automated report generation
Enterprise support:
- Quarterly report review sessions with governance advisor
- Custom compliance mapping for industry-specific regulations
- White-label reports with your company branding
Last updated: January 2025