Frequently Asked Questions (FAQ)
Getting Started
What is SignalBreak?
SignalBreak is an AI governance SaaS platform that helps organisations manage dependencies on AI providers. It monitors provider changes (deprecations, outages, policy updates, pricing changes), assesses risk impacts on your workflows, and provides governance evidence for compliance frameworks like ISO 42001, NIST AI RMF, and EU AI Act.
Core Capabilities:
- AI workflow dependency mapping
- Real-time provider signal monitoring
- Risk scenario assessment
- Governance compliance reporting
- MIT AI Risk Repository integration
Who is SignalBreak for?
SignalBreak is designed for:
Primary Users:
- AI/ML Engineers — Managing AI workflow dependencies and fallback strategies
- Risk & Compliance Teams — Demonstrating AI governance maturity
- CTOs & Engineering Leaders — Reducing AI provider concentration risk
Industries:
- Financial services (regulatory compliance)
- Healthcare (patient safety, HIPAA)
- Legal (client confidentiality)
- Government (security clearances)
- Any organisation with mission-critical AI systems
How long does onboarding take?
Quick Start: 15 minutes
- Sign up (2 minutes)
- Connect 2-3 AI providers (5 minutes)
- Import first 5 workflows (5 minutes)
- Configure fallback bindings (3 minutes)
Full Setup: 1-2 hours
- Map all AI workflows and systems
- Configure provider bindings with fallback chains
- Create initial risk scenarios
- Generate first evidence pack
Best Practice: Start with 3-5 mission-critical workflows, then expand.
Do I need to integrate with my codebase?
No. SignalBreak is a non-invasive monitoring platform—it doesn't require code changes or runtime integration.
What you DO need:
- List of AI workflows (name, provider, criticality)
- Provider API keys (for self-hosted discovery—optional)
- Team member access
What you DON'T need:
- Code changes
- SDK installation
- Runtime instrumentation
- Database modifications
Workflows & Provider Bindings
What is a workflow in SignalBreak?
A workflow is any business process or system that depends on AI capabilities. Examples:
- Customer support chatbot
- Legal document analysis
- Image captioning pipeline
- Code review assistant
- Voice transcription service
Key Fields:
- AI Capability — Type of AI task (text generation, vision, embeddings, etc.)
- Criticality — Mission-critical, Important, or Nice-to-have
- Provider Bindings — Primary and fallback AI providers
- Human-in-Loop — Whether human review is required
How many workflows should I create?
- Minimum (Free Plan): 5 workflows
- Recommended: 10-20 workflows covering critical AI dependencies
- Enterprise: Unlimited
Best Practice:
- Start with mission-critical workflows (customer-facing, revenue-generating)
- Add important workflows (internal tools, automation)
- Gradually expand to nice-to-have workflows
Avoid: Creating one workflow per API call—group related API usage into logical workflows (e.g., "Customer Support Chatbot" not "Chat Completion Call #1, #2, #3...").
Should I configure fallback bindings for every workflow?
- Mission-critical workflows: Yes, always configure fallback bindings.
- Important workflows: Highly recommended (reduces downtime risk).
- Nice-to-have workflows: Optional (manual fallback acceptable).
Example Fallback Strategy:
| Workflow | Primary | Fallback 1 | Fallback 2 |
|---|---|---|---|
| Customer Support Bot | Claude 3.5 Sonnet | GPT-4o mini | Gemini 1.5 Pro |
| Legal Doc Analysis | GPT-4o | Claude 3.5 Sonnet | — |
| Image Captioning | GPT-4 Vision | Claude 3.5 Sonnet (multimodal) | — |
Result: 99.9%+ uptime even during provider outages.
What happens if I don't configure a fallback?
Without fallback bindings:
- Workflow fails when primary provider is unavailable
- Manual intervention required to switch providers
- Downtime measured in minutes to hours (depending on team response time)
With automatic fallback:
- System switches to fallback provider after 3 failed requests
- Downtime reduced to seconds
- No manual intervention required
Bottom Line: Fallback bindings are essential for mission-critical workflows.
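For intuition, here is a minimal sketch of the failover pattern described above, assuming each provider exposes an OpenAI-compatible chat endpoint. The provider list, URLs, keys, and threshold constant are illustrative placeholders, not SignalBreak's internal implementation.

```python
import requests

# Illustrative fallback chain; endpoints and keys are placeholders.
PROVIDERS = [
    {"name": "primary", "url": "https://api.primary.example/v1/chat/completions", "key": "sk-..."},
    {"name": "fallback", "url": "https://api.fallback.example/v1/chat/completions", "key": "sk-..."},
]
MAX_FAILURES = 3  # mirrors the "switch after 3 failed requests" behaviour above

def complete(payload: dict) -> dict:
    for provider in PROVIDERS:
        failures = 0
        while failures < MAX_FAILURES:
            try:
                resp = requests.post(
                    provider["url"],
                    json=payload,
                    headers={"Authorization": f"Bearer {provider['key']}"},
                    timeout=10,
                )
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                failures += 1
        # Threshold reached: move on to the next provider in the chain.
    raise RuntimeError("All providers in the fallback chain failed")
```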
Can I use self-hosted AI models (Ollama, vLLM)?
Yes. SignalBreak supports self-hosted AI providers via the Self-Hosted Connections feature.
Supported Platforms:
- Ollama
- vLLM
- LM Studio
- Text Generation Inference (TGI)
- OpenAI-compatible APIs
Setup:
- Go to Providers → Self-Hosted Connections → Add Connection
- Enter connection details (endpoint URL, API key)
- Run discovery to detect available models
- Configure workflow bindings to use discovered models
Note: Self-hosted discovery requires network access to your AI infrastructure (VPN or public endpoint).
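To sanity-check a connection before adding it, you can list the endpoint's models yourself. A minimal sketch assuming an OpenAI-compatible server (Ollama's /v1 compatibility layer, vLLM, or LM Studio); the URL and key are placeholders for your own infrastructure.

```python
import requests

# Placeholder endpoint: Ollama's default local port with its OpenAI-compatible
# /v1 layer. Swap in your vLLM / LM Studio / TGI endpoint as appropriate.
ENDPOINT = "http://localhost:11434/v1"
API_KEY = "not-needed-for-local-ollama"

resp = requests.get(
    f"{ENDPOINT}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=5,
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])  # model IDs you can bind workflows to
```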
Provider Signals
How often does SignalBreak poll for new signals?
Frequency:
- Status Pages: Every 5 minutes
- Changelogs: Every hour
- Social Media: Every hour
Latency:
- Incidents: Detected within 5 minutes
- Deprecations/Policy Changes: Detected within 1 hour
- Capability Announcements: Detected within 1 hour
Note: The change detection algorithm creates signals only when status changes (not on every poll), reducing noise by ~99%.
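For intuition on how change detection suppresses noise, here is a minimal sketch of hash-based change detection. The in-memory dict stands in for persistent storage; this illustrates the general technique, not SignalBreak's actual pipeline.

```python
import hashlib

# Stand-in for persistent storage of the last-seen content hash per source.
last_hashes: dict[str, str] = {}

def detect_change(source_url: str, content: str) -> bool:
    """Return True (i.e. create a signal) only when the content hash changes."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    if last_hashes.get(source_url) == digest:
        return False  # same content as last poll: no signal, no noise
    last_hashes[source_url] = digest
    return True
```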
Why do some signals say "No Affected Workflows"?
This means the signal doesn't match any of your workflow's provider bindings.
Common Reasons:
- Signal is from a provider you don't use — e.g., you see a Cohere signal but only use OpenAI
- Signal affects a specific model you don't use — e.g., GPT-3.5 deprecation but you only use GPT-4
- Signal is informational only — e.g., pricing change for a tier you're not on
Action: Signals with no affected workflows can be safely dismissed or ignored.
Can I manually create signals?
Not currently. Signals are automatically detected from provider sources.
Workaround: Create a Scenario instead to manually track risks not captured by automated monitoring.
Example Use Case:
- Internal policy: "We don't want to use AI for X use case"
- Vendor relationship concern: "Our OpenAI rep mentioned potential price increase next quarter"
- Third-party intelligence: "Security researcher discovered vulnerability in Y model"
Go to Scenarios → Create Scenario → Enter details.
How accurate is the MIT domain classification?
Average Confidence: 85%
Most Accurate Classifications:
- Deprecations → Domain 7.3 (Lack of capability)
- Security incidents → Domain 2.2 (Security vulnerabilities)
- Policy changes → Domain 2.1 (Privacy), Domain 6.5 (Governance)
Less Reliable Classifications:
- Generic capability announcements (ambiguous risk domain)
- Marketing-heavy content (difficult to extract technical substance)
Fallback: If MIT domain classification fails, signal is still usable—domain mapping is optional enrichment.
What if I disagree with a signal's severity classification?
Current: Severity is automatically assigned and not user-editable.
Workaround: Create a Scenario from the signal and assign your preferred severity at the scenario level.
Future Feature (Planned): Manual severity override + feedback mechanism to improve classification models.
Governance & Compliance
Which compliance frameworks does SignalBreak support?
Primary Frameworks:
- ISO 42001:2023 — AI Management System (certifiable)
- NIST AI RMF — AI Risk Management Framework (US federal)
- EU AI Act — Regulation (EU) 2024/1689 (mandatory)
Additional Coverage:
- SOC 2 (security controls)
- ISO 27001 (information security)
- GDPR (data privacy—indirectly via provider analysis)
Tracked Regulations (13 total): See Governance Overview
Can SignalBreak help me get ISO 42001 certified?
Yes, partially. SignalBreak provides:
✅ Evidence Collection: Automated evidence for 8 of 10 key ISO 42001 clauses
✅ Gap Analysis: Identifies missing controls
✅ Evidence Packs: PDF reports for auditors
❌ Not Provided: Full AIMS implementation, policy writing, formal certification process
Certification Timeline:
- With SignalBreak: 12-18 months (saves ~40% consulting hours)
- Without SignalBreak: 18-24 months
Cost:
- With SignalBreak: £18k-43k (certification body fees + reduced consulting)
- Without SignalBreak: £25k-60k (full consulting engagement)
What is an Evidence Pack?
An Evidence Pack is a consulting-grade PDF report that demonstrates AI governance maturity for auditors, regulators, and board members.
Contents (10 Sections):
- Executive Summary
- Governance Scorecard (risk score, RAG status)
- Provider Dependency Analysis
- Signal Analysis (last 90 days)
- Key Findings & Recommendations
- ISO 42001 Clause Mapping
- NIST AI RMF Function Mapping
- EU AI Act Readiness Assessment
- Remediation Roadmap
- Methodology & Data Sources
Generation Time: 30-60 seconds
Length: 40-80 pages (PDF)
Frequency: Generate monthly or before audits
Use Cases:
- ISO 42001 certification audits
- NIST AI RMF conformance assessments
- EU AI Act compliance validation
- Board/investor presentations
- RFP responses
How often should I generate Evidence Packs?
Recommended Frequency:
| Use Case | Frequency |
|---|---|
| Ongoing monitoring | Monthly |
| Pre-audit preparation | 2 weeks before audit |
| Board reporting | Quarterly |
| RFP responses | As needed |
| Post-incident review | Immediately after major incident |
Best Practice: Generate monthly to track governance maturity trends over time.
Does SignalBreak replace my GRC platform?
No. SignalBreak is AI-specific risk monitoring, not a general GRC platform.
SignalBreak Covers:
- AI provider dependencies
- AI workflow risk assessment
- AI governance frameworks (ISO 42001, NIST AI RMF, EU AI Act)
- MIT AI Risk Repository integration
SignalBreak Does NOT Cover:
- General IT risk management
- Financial risk
- Physical security
- HR/employee compliance
- Non-AI third-party vendors
Integration Approach: Export SignalBreak scenarios and evidence to your existing GRC platform for holistic risk tracking.
Risk Scoring & Scenarios
How is the governance risk score calculated?
The governance risk score (0-100) is calculated from:
Inputs (Weighted):
- Provider Concentration (30%) — Single-provider dependency risk
- Untreated Risks (25%) — MIT domain risks without mitigation
- High-Severity Signals (20%) — Recent critical/warning signals
- Fallback Coverage (15%) — % of mission-critical workflows with fallback
- Scenario Maturity (10%) — Documented response plans
RAG Status:
- Green: Score < 30 (low risk)
- Amber: Score 30-70 (moderate risk)
- Red: Score > 70 (high risk)
Example Calculation:
- Concentration: 58% (single provider) → 30 × 0.58 = 17.4
- Untreated Risks: 4 risks untreated → 25 × 0.4 = 10.0
- High-Severity Signals: 3 signals → 20 × 0.15 = 3.0
- Fallback Coverage: 75% → 15 × 0.25 = 3.75
- Scenario Maturity: 8 scenarios → 10 × 0.2 = 2.0
Total Score: 36.15 → Amber (Moderate Risk)
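The calculation is a straight weighted sum over normalised factor values (each between 0 and 1). Here is a minimal sketch that reproduces the worked example above; the factor names and the gap/immaturity framing are illustrative, not SignalBreak's internal code.

```python
WEIGHTS = {
    "provider_concentration": 30,
    "untreated_risks": 25,
    "high_severity_signals": 20,
    "fallback_gap": 15,         # 1 - fallback coverage
    "scenario_immaturity": 10,  # lower with more documented scenarios
}

def governance_score(factors: dict[str, float]) -> tuple[float, str]:
    """Weighted sum of normalised factors (0-1 each) -> (score, RAG status)."""
    score = sum(WEIGHTS[name] * value for name, value in factors.items())
    status = "Green" if score < 30 else ("Amber" if score <= 70 else "Red")
    return round(score, 2), status

# Reproduces the worked example: 17.4 + 10.0 + 3.0 + 3.75 + 2.0 = 36.15 -> Amber
print(governance_score({
    "provider_concentration": 0.58,
    "untreated_risks": 0.4,
    "high_severity_signals": 0.15,
    "fallback_gap": 0.25,       # 75% coverage -> 25% gap
    "scenario_immaturity": 0.2,
}))
```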
What's the difference between a Signal and a Scenario?
| Aspect | Signal | Scenario |
|---|---|---|
| Definition | Detected provider event | Formal risk assessment |
| Source | Automated monitoring | User-created or signal-derived |
| Lifecycle | Immutable (historical record) | Editable (draft → active → resolved) |
| Purpose | Awareness & alerting | Response planning & mitigation |
| Example | "OpenAI GPT-3.5 deprecation detected" | "Migrate workflows from GPT-3.5 to GPT-4o mini by March 2025" |
Workflow:
- SignalBreak detects Signal (e.g., deprecation)
- You create Scenario from signal (click "Create Scenario" button)
- Scenario includes mitigation actions, assigned owner, due date
- Scenario tracked until resolved
Should I create a scenario for every signal?
No. Only create scenarios for signals that require formal response planning.
Create Scenario If:
- Signal severity is Critical or Warning
- Signal affects mission-critical workflows
- Signal requires coordinated team response
- Signal needs audit trail for compliance
Don't Create Scenario If:
- Signal severity is Info
- Signal has no affected workflows
- Response is trivial (e.g., acknowledge pricing change)
Rule of Thumb: Create scenarios for ~20-30% of signals (high-impact events only).
Billing & Subscriptions
What's included in the Free plan?
Free Plan Limits:
- 5 workflows
- 10 scenarios
- 1 team member
- 7 days signal history
- 1 evidence pack per month
- Basic provider monitoring (all 8 providers)
- MIT Risk Repository access
Best For: Individual developers, small side projects, proof-of-concept
When should I upgrade to Professional?
Upgrade to Professional when you hit Free plan limits:
Professional Plan:
- 50 workflows
- 100 scenarios
- 10 team members
- 90 days signal history
- Unlimited evidence packs
- Priority support
Cost: £49/month (annual billing) or £59/month (monthly billing)
Best For: Startups, SMEs, teams with 5-20 AI workflows
What does Enterprise include?
Enterprise Plan Features:
- Unlimited workflows, scenarios, team members
- Unlimited signal history (full retention)
- SSO (SAML/OIDC)
- IP allowlist
- Custom SLAs
- Dedicated account manager
- On-premise deployment option (future)
Cost: Custom pricing (contact sales)
Best For: Large enterprises, regulated industries (finance, healthcare), >50 AI workflows
Can I cancel my subscription anytime?
Yes. SignalBreak subscriptions are month-to-month (or annual with discount).
Cancellation Process:
- Go to Settings → Billing
- Click Manage Subscription (opens Stripe Customer Portal)
- Click Cancel Subscription
- Subscription ends at end of current billing period
Data Retention: Your data is retained for 90 days after cancellation (in case you want to reactivate).
Do you offer discounts for non-profits or education?
Yes. SignalBreak offers:
- Non-profits: 50% discount on Professional/Enterprise plans
- Educational Institutions: 50% discount on Professional/Enterprise plans
- Open Source Projects: Free Professional plan (public repos only)
Application: Email support@signalbreak.com with:
- Organisation name and website
- Proof of non-profit status (501(c)(3), charity registration, etc.)
- Use case description
Technical Integration
Does SignalBreak have an API?
Yes. SignalBreak provides a REST API for all platform features.
Base URL: https://signalbreak.vercel.app/api
Authentication: Session-based (cookie) — API key authentication planned for Q2 2026
Endpoints (122 total):
- Workflows management
- Provider signals
- Scenarios & risk assessment
- Governance reports
- Billing & usage
Documentation: API Reference
Are there client libraries (SDKs)?
Status: Planned for Q3 2026
Planned Libraries:
- JavaScript/TypeScript (Node.js, Browser)
- Python
- Go
Current Workaround: Use the REST API directly with fetch or axios (JavaScript) or requests (Python).
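Until the SDKs ship, here is a minimal sketch of calling the API with Python's requests library, using the current session-cookie authentication. The cookie name and value are placeholders you would copy from an authenticated browser session, and the response shape depends on the endpoint (see the API Reference).

```python
import requests

BASE_URL = "https://signalbreak.vercel.app/api"

session = requests.Session()
# Session-based auth: reuse the cookie from an authenticated browser session.
# The cookie name and value below are placeholders, not the real ones.
session.cookies.set("session", "<your-session-cookie>")

resp = session.get(f"{BASE_URL}/workflows", timeout=10)
resp.raise_for_status()
print(resp.json())  # response shape depends on the endpoint; see the API Reference
```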
Can I export my data?
Yes. SignalBreak supports data export via:
1. Evidence Packs (PDF)
- Go to Governance → Evidence Pack → Generate
- Download PDF report with all governance data
2. Audit Log (CSV)
- Go to Settings → Audit Log → Export
- CSV file with all user actions
3. API Export (JSON)
- Use /api/workflows, /api/provider-changes, /api/scenarios endpoints
- Fetch data programmatically and save to JSON
4. Database Export (Enterprise only)
- Contact support for full PostgreSQL dump
Future: One-click "Export All Data" button (GDPR compliance—planned Q2 2026)
Does SignalBreak support webhooks?
Status: Planned for Q2 2026
Planned Webhook Events:
- signal.created — New signal detected
- signal.high_severity — Critical/Warning signal
- scenario.executed — Scenario response activated
- workflow.impacted — Workflow affected by signal
- evidence_pack.generated — Evidence pack ready
Current Workaround: Poll the /api/provider-changes endpoint every 5 minutes to detect new signals.
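A minimal sketch of that polling workaround in Python. It assumes the endpoint returns a JSON list of signals with a stable id field and omits authentication; adapt both to the actual response shape.

```python
import time
import requests

BASE_URL = "https://signalbreak.vercel.app/api"
POLL_INTERVAL = 300  # 5 minutes, matching the recommendation above
seen_ids: set[str] = set()

while True:
    try:
        resp = requests.get(f"{BASE_URL}/provider-changes", timeout=10)
        resp.raise_for_status()
        for signal in resp.json():
            # "id" is an assumed field name; adapt to the actual response shape.
            if signal.get("id") not in seen_ids:
                seen_ids.add(signal["id"])
                print("New signal:", signal)
    except requests.RequestException:
        pass  # transient network error: try again on the next cycle
    time.sleep(POLL_INTERVAL)
```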
Troubleshooting
I'm not seeing any signals. Why?
Possible Causes:
- No providers connected. Fix: Go to Providers → Mark providers as "In Use"
- Providers not monitored yet (first poll takes 5 minutes). Fix: Wait 5 minutes after adding the first provider
- No provider changes in the last 7 days (Free plan shows only 7-day history). Fix: Upgrade to Professional (90-day history) or wait for new signals
- Polling service down (rare). Fix: Check provider health status on the Providers page
Why does my signal show "No Affected Workflows"?
Cause: Signal doesn't match any workflow's provider bindings or models.
Common Scenarios:
- Signal is for a provider you don't use
- Signal affects a specific model you don't use
- Signal is informational (no direct workflow impact)
Fix:
- If signal is relevant: Create a workflow that uses this provider
- If signal is not relevant: Dismiss or ignore (expected behavior)
My workflow shows 0 signal count. Is monitoring working?
Possible Causes:
- Provider has no recent changes (last 90 days). This is normal; some providers (e.g., Cohere, AI21) rarely announce changes.
- Provider added recently (< 24 hours). Wait 24 hours for the first polling cycle to complete.
- Provider not in SignalBreak's monitoring list. Check: Providers page → confirm the provider is marked "Monitored"
Evidence Pack generation failed. What went wrong?
Common Causes:
- Insufficient data (no workflows or scenarios). Fix: Create at least 5 workflows and 2 scenarios before generating
- LLM API timeout (Claude/OpenAI rate limit exceeded). Fix: Wait 5 minutes and retry
- Database query timeout (large dataset). Fix: Contact support for optimization
- Subscription limit (Free plan: 1/month). Fix: Upgrade to Professional (unlimited) or wait until next month
Error Message: Check Governance → Evidence Pack page for specific error details.
I imported workflows via CSV but some failed. Why?
Common Import Errors:
| Error | Cause | Fix |
|---|---|---|
| "Duplicate workflow name" | Workflow with same name exists | Rename in CSV or delete existing workflow |
| "Invalid criticality value" | Must be mission_critical, important, or nice_to_have | Fix CSV spelling/case |
| "Invalid AI capability" | Must match allowed capability types | Check allowed values in API Reference |
| "Provider not found" | Provider doesn't exist in system | Use provider ID from Providers page |
Known Limitation: Bulk import does NOT run fuzzy matching against existing workflows (tracked as TECH-DEBT-001). You may import duplicates—review imported workflows after bulk import.
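Because invalid rows are rejected rather than coerced, it can save a round trip to validate the CSV locally before importing. A minimal sketch that checks the criticality enum and duplicate names; the column headers are assumptions about your CSV layout.

```python
import csv

ALLOWED_CRITICALITY = {"mission_critical", "important", "nice_to_have"}

def validate(path: str) -> list[str]:
    """Return a list of human-readable errors; empty means the CSV looks clean."""
    errors, seen_names = [], set()
    with open(path, newline="") as f:
        # start=2 so reported row numbers match the file (row 1 is the header)
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            name = row.get("name", "").strip()
            if name in seen_names:
                errors.append(f"row {row_num}: duplicate workflow name '{name}'")
            seen_names.add(name)
            if row.get("criticality") not in ALLOWED_CRITICALITY:
                errors.append(f"row {row_num}: invalid criticality '{row.get('criticality')}'")
    return errors

print(validate("workflows.csv") or "CSV looks clean")
```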
MIT domain classification is missing for some signals. Why?
Causes:
- Ollama service unavailable (self-hosted AI classifier down). Check: internal service health dashboard. Impact: classification is delayed, not lost—it will retry on the next enrichment cycle.
- Signal content too ambiguous (generic marketing announcement). Expected: some signals don't map cleanly to MIT domains. Action: domain classification is optional enrichment—the signal is still usable.
- LLM classifier confidence too low (< 50%). Expected: the classifier skips low-confidence predictions to avoid noise. Action: no user action needed.
Fallback: Signal is fully functional without MIT domain mapping—domain is optional context enrichment.
Signal title is generic ("Provider Status Update"). Why?
Cause: Raw content was too noisy or Claude API timed out—fallback title generator used.
What Happened:
- SignalBreak detected content change
- Claude API attempted interpretation
- Claude timed out (10-second limit) or returned an unparseable response
- Fallback: Rule-based title generator created generic title
Fix:
- Check source URL for full details (click signal card)
- System will automatically retry interpretation in next enrichment batch (runs hourly)
- If interpretation still fails after 24 hours, content is likely too noisy to parse
Note: Generic titles are expected for ~5-10% of signals (low-quality content from the provider).
I see duplicate signals for the same event. Why?
Cause: Provider announced same change on multiple channels (e.g., status page + blog post + Twitter).
Expected Behavior: Content hashing should prevent most duplicates—report to support if you see many.
Workaround: Manually dismiss duplicate signals.
Prevention (Future): Improved deduplication logic (planned Q2 2026).
Known Limitations
What features are NOT available yet?
Planned for Q2 2026:
- API key authentication (current: session-based only)
- Webhooks
- Manual signal creation
- User-editable severity classification
- Custom alert rules
- Slack/Teams integrations
Planned for Q3 2026:
- JavaScript/Python SDKs
- On-premise deployment (Enterprise)
- Multi-tenant SSO
- Advanced analytics dashboard
Tracked as Technical Debt:
- Bulk import fuzzy matching (TECH-DEBT-001)
- Claude extractor JSON parsing edge cases (TECH-DEBT-008)
Does SignalBreak monitor ALL AI providers?
Currently Monitored (8 providers):
- OpenAI
- Anthropic
- AWS Bedrock
- Google AI (Gemini)
- Cohere
- AI21 Labs
- Mistral AI
- Perplexity
Not Yet Monitored:
- Hugging Face (community models—too fragmented)
- Replicate (marketplace platform—monitoring planned Q2 2026)
- Azure OpenAI (uses OpenAI models—indirectly covered)
- Self-hosted models (use Self-Hosted Connections feature instead)
Request Coverage: Email support@signalbreak.com to request new provider monitoring.
Can I monitor internal/proprietary AI models?
Yes, via Self-Hosted Connections feature.
Supported:
- Ollama
- vLLM
- LM Studio
- Text Generation Inference (TGI)
- OpenAI-compatible APIs
Limitations:
- No automatic signal detection (internal models don't publish changelogs)
- Manual scenario creation required for internal model risks
- Health monitoring available via /health endpoint polling (see the sketch below)
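A minimal sketch of that health poll, assuming your self-hosted server exposes a plain /health route (common for vLLM and TGI; the exact path varies by platform).

```python
import requests

ENDPOINT = "http://localhost:8000"  # placeholder for your self-hosted server

def is_healthy(base_url: str) -> bool:
    """True if the /health endpoint answers 200 within the timeout."""
    try:
        return requests.get(f"{base_url}/health", timeout=5).status_code == 200
    except requests.RequestException:
        return False

print("healthy" if is_healthy(ENDPOINT) else "unreachable")
```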
Does SignalBreak access my data or code?
No. SignalBreak is a non-invasive monitoring platform.
What SignalBreak Sees:
- Workflow metadata (name, provider, criticality)
- Public provider announcements (changelogs, status pages)
- User-entered scenario descriptions
What SignalBreak Does NOT See:
- Your application code
- Production data (prompts, responses, user inputs)
- API keys or credentials (stored encrypted in Supabase Vault)
- Internal system metrics
Privacy: SignalBreak monitors public provider information only—no access to your private data or systems.
Still Have Questions?
Search the Documentation
- Workflows: Workflows & Provider Bindings
- Signals: Provider Signals
- Governance: Governance Overview
- API: API Reference
- Troubleshooting: Troubleshooting Guide
Contact Support
Email: support@signalbreak.com
Response Time:
- Free Plan: 48-72 hours
- Professional: 24 hours
- Enterprise: 4 hours (SLA-backed)
Include in Your Email:
- SignalBreak account email
- Description of issue
- Steps to reproduce (if applicable)
- Screenshots (if UI issue)
- Workflow/signal ID (if relevant)
Join the Community
Roadmap & Feature Requests:
- Public Roadmap (Link TBD)
- Submit feature requests via support email
Release Notes:
- Changelog (Link TBD)
Social Media:
- Twitter/X: @SignalBreakAI (Link TBD)
- LinkedIn: SignalBreak (Link TBD)
Last Updated: 2026-01-26 Documentation Version: 1.0