Glossary
A comprehensive reference of terms used in SignalBreak.
A
AI Capability
The type of AI task performed by a workflow. SignalBreak supports 10 capability types:
| Capability | Description | Examples |
|---|---|---|
| text_generation | Text completion, chat, summarisation | GPT-4o, Claude 3.5 Sonnet |
| image_generation | Text-to-image synthesis | DALL-E 3, Stable Diffusion |
| image_to_text | Image captioning, OCR, visual Q&A | GPT-4 Vision, Claude 3.5 Sonnet |
| text_to_speech | Speech synthesis from text | OpenAI TTS, ElevenLabs |
| speech_to_text | Audio transcription, voice recognition | Whisper, Deepgram |
| embeddings | Vector representations for semantic search | text-embedding-3, Cohere Embed |
| reasoning | Multi-step problem solving, chain-of-thought | o1, o1-mini |
| code_generation | Code completion, code explanation | GPT-4o, Claude 3.5 Sonnet |
| vision | Image understanding, visual reasoning | GPT-4 Vision, Gemini Pro Vision |
| multimodal | Combined text, image, audio processing | Gemini 1.5 Pro, Claude 3.5 Sonnet |
See: Workflows & Provider Bindings
AI System
A discrete AI application or service that serves a specific business function. Multiple workflows may belong to a single AI system.
Example: "Customer Support AI" system contains:
- Chatbot workflow
- Email response workflow
- Ticket classification workflow
EU AI Act Context: AI systems are classified into risk tiers (Prohibited, High-Risk, Limited-Risk, Minimal-Risk) for compliance purposes.
See: EU AI Act Guide
AI System Safety (MIT Domain 7)
One of seven MIT AI Risk Repository domains, covering system failures, loss of control, and existential risks.
Subdomains:
- 7.1 — AI pursuing goals in conflict with human values
- 7.2 — AI possessing dangerous capabilities
- 7.3 — Lack of capability or robustness
- 7.4 — Lack of transparency or interpretability
- 7.5 — AI welfare and rights
Common Signals: Model deprecations, API outages, performance degradation
See: MIT Risk Framework
Amber Status
Risk status indicating moderate risk (RAG: Red/Amber/Green).
Definition: Risk score between 30 and 70
Action Required: Review within 24 hours, plan mitigation
UI Indicator: Yellow/orange badge
API Key
Credential used to authenticate with AI provider APIs. SignalBreak stores API keys encrypted in Supabase Vault for self-hosted discovery.
Security: SignalBreak never stores API keys for cloud providers (OpenAI, Anthropic, etc.)—only for self-hosted connections.
Future: SignalBreak API keys for programmatic access (planned Q2 2026).
Audit Trail
Timestamped record of all user actions in SignalBreak.
Tracked Events:
- Workflow created/updated/deleted
- Scenario executed
- Provider binding modified
- Evidence pack generated
- User invited/removed
Export: CSV format via Settings → Audit Log → Export
Retention: 365 days (all plans)
B
Binding Role
The role of a provider binding in a workflow's failover strategy.
| Role | Description | Failover Behavior |
|---|---|---|
| Primary | Main provider for this workflow | First choice for all requests |
| Fallback | Backup provider | Used after 3 primary failures |
| Experiment | A/B test provider | Used for specified % of traffic |
Best Practice: Every mission-critical workflow should have Primary + Fallback bindings.
See: Workflows & Provider Bindings
C
Capability
See: AI Capability
Change Type
Classification of provider signals into five categories:
| Type | Description | Severity Bias | Examples |
|---|---|---|---|
| deprecation | End-of-life announcements | Warning/Critical | GPT-3.5 Turbo sunset |
| policy | Terms of service updates | Warning/Info | Usage policy restricted |
| pricing | Cost changes | Info/Warning | Price increase 20% |
| capability | New features or models | Info | Claude 3.7 launched |
| incident | Outages or degraded performance | Critical/Warning | API downtime |
Detection: Rules-based keyword matching + Claude AI interpretation
See: Provider Signals
Circuit Breaker
Failover mechanism that automatically switches to fallback provider after detecting repeated failures.
Threshold: 3 consecutive failed requests to primary provider
Action: Switch to fallback provider
Reset: After 5 minutes of primary provider health restoration
Configuration: Automatic (no user configuration required)
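The behaviour above can be sketched in a few lines. This is an illustrative model (hypothetical class names, time-based reset), not SignalBreak's internal implementation:

```typescript
// Illustrative circuit breaker: trips after 3 consecutive failures,
// retries the primary provider after a 5-minute recovery window.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(
    private readonly failureThreshold = 3,
    private readonly resetAfterMs = 5 * 60_000,
  ) {}

  /** True while requests should be routed to the fallback provider. */
  get isOpen(): boolean {
    if (this.openedAt === null) return false;
    if (Date.now() - this.openedAt >= this.resetAfterMs) {
      this.openedAt = null; // recovery window elapsed; try primary again
      this.failures = 0;
      return false;
    }
    return true;
  }

  recordSuccess(): void {
    this.failures = 0;
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) this.openedAt = Date.now();
  }
}

// Route each request to the primary unless the breaker is open.
async function callWithFailover<T>(
  breaker: CircuitBreaker,
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
): Promise<T> {
  if (breaker.isOpen) return fallback();
  try {
    const result = await primary();
    breaker.recordSuccess();
    return result;
  } catch {
    breaker.recordFailure();
    return fallback();
  }
}
```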
See: Workflows & Provider Bindings
Classification
Process of categorizing provider signals by change type and severity.
Stage 1: Rules-based keyword matching
Stage 2: Claude AI interpretation (refines classification)
Stage 3: MIT domain mapping (Ollama classifier)
Confidence Score: 0.0-1.0 (higher = more certain)
See: Provider Signals
Compliance Framework
Formal standard or regulation for AI governance.
SignalBreak Supported:
- ISO 42001 — AI Management System (certifiable)
- NIST AI RMF — US federal AI risk framework
- EU AI Act — Mandatory EU regulation
- SOC 2 — Security controls
- ISO 27001 — Information security
See: Governance Overview
Concentration Risk
Risk arising from over-dependence on a single AI provider.
Calculation:
Concentration % = (Workflows using primary provider) / (Total workflows) × 100

Risk Levels:
- Low: < 40% (diversified)
- Medium: 40-60% (moderate concentration)
- High: > 60% (single-provider dependency)
Mitigation: Configure fallback bindings to distribute risk across providers.
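As a worked example, a small helper (hypothetical, not part of SignalBreak) that applies the formula and thresholds above:

```typescript
// Compute concentration as the largest single-provider share of
// primary bindings, then map it to the risk levels above.
type Workflow = { primaryProvider: string };

function concentrationPercent(workflows: Workflow[]): number {
  if (workflows.length === 0) return 0;
  const counts = new Map<string, number>();
  for (const w of workflows) {
    counts.set(w.primaryProvider, (counts.get(w.primaryProvider) ?? 0) + 1);
  }
  return (Math.max(...counts.values()) / workflows.length) * 100;
}

function concentrationLevel(pct: number): "Low" | "Medium" | "High" {
  if (pct < 40) return "Low";     // diversified
  if (pct <= 60) return "Medium"; // moderate concentration
  return "High";                  // single-provider dependency
}

// e.g. 7 of 10 workflows on one primary provider → 70% → "High"
```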
See: Governance Scorecard
Criticality
Business impact classification for workflows.
| Level | Definition | RTO Target | Fallback Required? |
|---|---|---|---|
| Mission-Critical | Customer-facing, revenue-generating | < 5 minutes | Yes (mandatory) |
| Important | Internal tools, automation | < 1 hour | Recommended |
| Nice-to-Have | Non-essential features | < 24 hours | Optional |
Impact: Used in risk score calculation (Mission-Critical workflows weighted 4× higher).
See: Workflows & Provider Bindings
D
Decision Readiness Framework
Five-dimension scoring system for assessing AI governance maturity.
Dimensions:
- Workflow Coverage — % of AI systems documented
- Fallback Readiness — % of critical workflows with fallback
- Provider Diversity — Inverse of concentration risk
- Signal Response — Average scenario creation time after critical signals
- Governance Maturity — Framework adoption percentage
Aggregate Score: 0-100 (weighted average)
See: Governance Overview
Deprecation
Planned end-of-life for an AI model, API, or feature.
Typical Timeline: 3-12 months advance notice
Risk: Critical/Warning (high impact)
Action Required: Migrate to replacement model before cutoff date
Example Signals:
- "OpenAI deprecating GPT-3.5 Turbo on 2025-06-30"
- "Anthropic retiring Claude 2.0 API endpoints"
See: Provider Signals - Deprecation
Discovered Model
AI model detected via self-hosted discovery (Ollama, vLLM, etc.).
Discovery Process:
- Connect to self-hosted endpoint
- Query the `/models` or `/v1/models` API
- Extract model identifiers and metadata
- Create tenant-scoped model records
Usage: Discovered models can be used in workflow provider bindings.
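A minimal sketch of the discovery query, assuming an OpenAI-compatible `/v1/models` endpoint returning `{ data: [{ id }] }` (the exact response shape varies by platform):

```typescript
// Query a self-hosted endpoint for its model list and return the IDs.
async function discoverModels(
  baseUrl: string,
  apiKey?: string,
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/v1/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`Discovery failed: HTTP ${res.status}`);
  const body = (await res.json()) as { data?: Array<{ id: string }> };
  return (body.data ?? []).map((m) => m.id);
}

// e.g. await discoverModels("https://ollama.internal.company.com")
//   → ["llama3.2:3b", ...]
```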
Domain (MIT Risk)
Top-level category in MIT AI Risk Repository (7 domains total).
| Domain | Name | Focus |
|---|---|---|
| 1 | Discrimination & Toxicity | Bias, harmful content |
| 2 | Privacy & Security | Data breaches, attacks |
| 3 | Misinformation | False content, deepfakes |
| 4 | Malicious Actors | Cyberattacks, fraud |
| 5 | Human-Computer Interaction | Overreliance, agency loss |
| 6 | Socioeconomic & Environmental | Jobs, inequality, resources |
| 7 | AI System Safety | Failures, control loss |
Purpose: Contextualise provider signals with real-world AI risk categories.
See: MIT Risk Framework
E
Embeddings
Vector representations of text for semantic search and similarity.
Use Cases:
- Document search
- Recommendation systems
- Clustering and classification
Common Models:
- OpenAI: `text-embedding-3-small`, `text-embedding-3-large`
- Cohere: `embed-english-v3.0`
- Voyage AI: `voyage-large-2`
Workflow Capability: embeddings
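Semantic search with embeddings reduces to comparing vectors; a minimal cosine-similarity helper illustrates the core operation (pure illustration, not a SignalBreak API):

```typescript
// Cosine similarity: 1 = same direction, 0 = unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank documents by similarity of their embedding to the query embedding.
```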
Enrichment
Process of adding context and analysis to provider signals using LLMs.
Enrichment Pipeline (v2):
- Interpretation — Severity, categories, affected components
- MIT Domains — Risk domain mapping (1-3 domains)
- Model Impacts — Specific models affected
LLM Gateway: Claude 3.5 Sonnet → GPT-4o mini → Ollama (fallback chain)
Latency: 30-60 seconds per signal
See: Provider Signals - Stage 5 Enrichment
EU AI Act
Regulation (EU) 2024/1689 — Mandatory AI regulation in the European Union.
Risk Tiers:
- Prohibited — Banned AI uses (social scoring, subliminal manipulation)
- High-Risk — Strict requirements (safety components, biometric ID, critical infrastructure)
- Limited-Risk — Transparency obligations (chatbots, deepfakes)
- Minimal-Risk — No specific obligations (spam filters, video games)
Penalties:
- Prohibited AI: Up to €35M or 7% global turnover
- Non-compliance: Up to €15M or 3% global turnover
Phase-in Timeline: Feb 2025 (prohibited practices), Aug 2026 (most provisions, including transparency and Annex III high-risk), Aug 2027 (high-risk AI in regulated products)
See: EU AI Act Guide
Evidence Pack
Consulting-grade PDF report demonstrating AI governance maturity.
Contents (10 Sections):
- Executive Summary
- Governance Scorecard
- Provider Dependency Analysis
- Signal Analysis
- Key Findings
- ISO 42001 Mapping
- NIST AI RMF Mapping
- EU AI Act Readiness
- Remediation Roadmap
- Methodology
Generation Time: 30-60 seconds
Length: 40-80 pages (PDF)
Frequency: Monthly recommended
Use Cases: Audits, board reports, RFP responses
See: Evidence Packs Guide
F
Fallback Binding
Backup AI provider configured for a workflow to ensure uptime during primary provider outages.
Types:
- Automatic — Circuit breaker triggers failover after 3 failures
- Manual — User manually switches provider
Best Practice: Configure automatic fallback for mission-critical workflows.
See: Workflows & Provider Bindings
Feature Gate
Billing limit enforcement mechanism that restricts access based on subscription tier.
Gated Features:
| Feature | Free | Professional | Enterprise |
|---|---|---|---|
| Workflows | 5 | 50 | Unlimited |
| Scenarios | 10 | 100 | Unlimited |
| Signal History | 7 days | 90 days | Unlimited |
| Evidence Packs | 1/month | Unlimited | Unlimited |
403 Response: Returns an error when the limit is exceeded; the response body includes `requiresUpgrade: true`.
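A client-side sketch of handling that response; the `/api/workflows` path and any body fields beyond `requiresUpgrade` are assumptions for illustration:

```typescript
// Create a resource and surface the upgrade prompt on a gated 403.
async function createWorkflow(payload: unknown, token: string): Promise<unknown> {
  const res = await fetch("/api/workflows", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(payload),
  });
  const body = (await res.json()) as { requiresUpgrade?: boolean; error?: string };
  if (res.status === 403 && body.requiresUpgrade) {
    // Plan limit reached: prompt an upgrade instead of retrying.
    throw new Error(body.error ?? "Plan limit reached; upgrade required.");
  }
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return body;
}
```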
See: API Reference - Feature Gates
G
Governance Scorecard
Dashboard summarising AI governance maturity metrics.
Components:
- Risk Score (0-100) — Aggregate risk level
- RAG Status — Red/Amber/Green classification
- Provider Concentration — Single-provider dependency %
- Top Exposures — Highest-severity scenario impacts
- Active Signals — Recent critical/warning signals
API Endpoint: `GET /api/governance/scorecard`
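A minimal call against that endpoint; the response field names below are assumptions based on the components listed above:

```typescript
// Fetch the scorecard; the interface mirrors the documented components.
interface Scorecard {
  riskScore: number;             // 0-100 aggregate (assumed field name)
  ragStatus: "red" | "amber" | "green";
  providerConcentration: number; // single-provider dependency %
}

async function getScorecard(token: string): Promise<Scorecard> {
  const res = await fetch("/api/governance/scorecard", {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return (await res.json()) as Scorecard;
}
```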
See: API Reference - Governance
Green Status
Risk status indicating low risk (RAG: Red/Amber/Green).
Definition: Risk score < 30
Action Required: Continue monitoring (no immediate action)
UI Indicator: Green badge
H
Health Status
Current operational state of an AI provider's services.
Status Values:
- operational — All systems functioning normally
- degraded — Reduced performance or partial outage
- major_outage — Significant service disruption
- under_maintenance — Planned downtime
Monitoring Frequency: Every 5 minutes (status pages)
Sources:
- Provider status pages (Statuspage.io JSON API)
- API response time monitoring
- Community reports (Reddit, Twitter/X)
See: Provider Health Monitoring
Human-in-Loop
Configuration indicating whether human review is required before AI outputs are used.
Values:
- `true` — Human must review AI output before use
- `false` — AI output used directly without review
Impact: High criticality + no human-in-loop = higher risk score
Regulatory Context: EU AI Act requires human oversight for high-risk AI systems.
I
Impact Score
Calculated risk score (0-240) for a workflow affected by a provider signal.
Formula:
Impact Score = Signal Severity × Workflow Criticality × Binding Centrality

Components:
- Signal Severity: Critical=40, Warning=25, Info=10
- Workflow Criticality: Mission-Critical=×4, Important=×2, Nice-to-Have=×1
- Binding Centrality: Primary=×1.5, Fallback=×0.5
RAG Thresholds:
- Red: ≥ 160 (immediate action)
- Amber: 80-159 (plan response within 24h)
- Green: < 80 (monitor)
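The formula and thresholds above translate directly into code; a sketch, not SignalBreak's implementation:

```typescript
// Multipliers and thresholds taken verbatim from this entry.
const SEVERITY = { critical: 40, warning: 25, info: 10 } as const;
const CRITICALITY = { missionCritical: 4, important: 2, niceToHave: 1 } as const;
const CENTRALITY = { primary: 1.5, fallback: 0.5 } as const;

function impactScore(
  severity: keyof typeof SEVERITY,
  criticality: keyof typeof CRITICALITY,
  role: keyof typeof CENTRALITY,
): number {
  return SEVERITY[severity] * CRITICALITY[criticality] * CENTRALITY[role];
}

function rag(score: number): "Red" | "Amber" | "Green" {
  if (score >= 160) return "Red";  // immediate action
  if (score >= 80) return "Amber"; // plan response within 24h
  return "Green";                  // monitor
}

// impactScore("critical", "missionCritical", "primary") → 240 → "Red"
```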
Example:
Critical signal (40) × Mission-Critical workflow (×4) × Primary binding (×1.5) = 240 (Red)

See: Provider Signals - Impact Analysis
Incident (Signal Type)
Provider signal indicating service outage, degraded performance, or operational issue.
Severity: Critical or Warning (high priority)
Typical Duration: Minutes to hours
Action Required: Activate fallback provider immediately (if automatic failover is not configured)
Example Signals:
- "OpenAI API experiencing elevated error rates (15-minute outage)"
- "AWS Bedrock: Intermittent 503 errors in us-east-1"
See: Provider Signals - Incident
ISO 42001
ISO/IEC 42001:2023 — AI Management System standard.
Status: Certifiable (requires third-party audit)
Scope: AI lifecycle management (development, deployment, monitoring, decommissioning)
Key Clauses: 10 core requirements (4.1-10.2)
Certification:
- Timeline: 12-18 months
- Cost: £18k-43k (with SignalBreak evidence)
- Auditor: Accredited certification body (BSI, LRQA, etc.)
SignalBreak Coverage: 8/10 clauses fully supported, 2/10 partial
See: ISO 42001 Guide
L
LLM Gateway
Multi-provider fallback chain for LLM-powered enrichment.
Providers (Priority Order):
- Claude 3.5 Sonnet (Anthropic) — Primary
- GPT-4o mini (OpenAI) — Secondary
- Llama 3.2 3B (Ollama self-hosted) — Tertiary
Failover Logic: Circuit breaker switches after 3 consecutive errors
Use Cases:
- Signal interpretation (Claude Haiku)
- Signal enrichment (Claude Sonnet)
- MIT domain classification (Ollama)
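A simplified sketch of the priority-ordered chain (the per-provider circuit breaker is omitted for brevity, and the provider call functions are placeholders):

```typescript
// Try each provider in priority order; return the first success.
type LlmCall = (prompt: string) => Promise<string>;

async function callGateway(prompt: string, chain: LlmCall[]): Promise<string> {
  let lastError: unknown;
  for (const call of chain) {
    try {
      return await call(prompt);
    } catch (err) {
      lastError = err; // fall through to the next provider in the chain
    }
  }
  throw new Error(`All gateway providers failed: ${String(lastError)}`);
}

// Priority order from this entry:
// await callGateway(prompt, [callClaudeSonnet, callGpt4oMini, callOllama]);
```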
See: Provider Signals - Enrichment
M
MIT AI Risk Repository
Curated database of 1,328 real-world AI incidents and 831 mitigation strategies.
Structure:
- 7 domains (top-level categories)
- 24 subdomains (specific risk types)
- 1,328 incidents (historical failures)
- 831 mitigations (remediation strategies)
Purpose: Contextualise provider signals with historical AI risk patterns.
Source: MIT AI Risk Repository
SignalBreak Integration: Automatic signal classification into MIT domains using Ollama.
See: MIT Risk Framework
Mitigation
Action taken to reduce or eliminate a risk.
Context in SignalBreak:
- Scenario Mitigation — User-defined actions for risk scenarios
- MIT Mitigation — Pre-defined strategies from MIT AI Risk Repository
Example Scenario Mitigations:
- "Migrate workflows from GPT-3.5 to GPT-4o mini by March 2025"
- "Implement automatic fallback to Anthropic Claude"
- "Add human-in-loop review for customer-facing outputs"
See: Scenarios
Model (AI)
Specific version of an AI capability offered by a provider.
Examples:
- OpenAI: `gpt-4o`, `gpt-4o-mini`, `text-embedding-3-small`
- Anthropic: `claude-3-5-sonnet-20241022`, `claude-3-5-haiku-20241022`
- Google: `gemini-1.5-pro`, `gemini-1.5-flash`
SignalBreak Context:
- Workflows bind to specific models via provider bindings
- Signals may reference specific models (e.g., "GPT-3.5 Turbo deprecation")
- Self-hosted discovery detects available models
See: Workflows & Provider Bindings
Multimodal
AI capability that processes multiple modalities (text, image, audio, video) in a single request.
Examples:
- Gemini 1.5 Pro (text + image + audio)
- Claude 3.5 Sonnet (text + image)
- GPT-4o (text + image + audio)
Use Cases:
- Document understanding with diagrams
- Video content analysis
- Audio transcription with context
Workflow Capability: multimodal
N
NIST AI RMF
NIST AI Risk Management Framework — US federal guidance for AI risk management.
Structure: 4 core functions
- GOVERN — Policies, culture, accountability
- MAP — Context, risks, impacts
- MEASURE — Metrics, testing, monitoring
- MANAGE — Response, recovery, communication
Subcategories: 43 total across 4 functions
Compliance: Voluntary framework, but mandatory for US federal agencies (Executive Order 14110)
Attestation: Self-certification (not third-party audit like ISO 42001)
See: NIST AI RMF Guide
O
Ollama
Open-source, self-hosted LLM platform.
SignalBreak Usage:
- MIT domain classification (Llama 3.2 3B)
- Self-hosted model discovery
- LLM gateway tertiary fallback
Discovery: SignalBreak can connect to Ollama endpoints to detect available models.
Outage
Complete or partial unavailability of an AI provider's services.
Detection: Provider status pages, API health checks
Severity: Critical (if affecting production systems)
Response: Automatic failover to fallback provider (if configured)
Example Signals:
- "OpenAI API down globally (complete outage)"
- "AWS Bedrock degraded performance in eu-west-1 (partial outage)"
See: Provider Health Monitoring
P
Policy (Signal Type)
Provider signal indicating changes to terms of service, usage policies, or compliance requirements.
Severity: Info to Warning (depending on impact)
Typical Timeline: Immediate to 30 days
Risk: Compliance violation if usage doesn't align with the new policy
Example Signals:
- "OpenAI updated Usage Policy to restrict military applications"
- "Anthropic introduced new data retention requirements"
See: Provider Signals - Policy
Pricing (Signal Type)
Provider signal indicating cost changes, billing updates, or tier modifications.
Severity: Info to Warning (if a price increase exceeds 20%)
Impact: Budget forecasting, ROI calculations
Action Required: Recalculate AI spend, evaluate alternatives
Example Signals:
- "Anthropic increasing Claude 3.5 Sonnet pricing to $3.00/MTok"
- "Google reducing Gemini Pro pricing by 40%"
See: Provider Signals - Pricing
Provider
AI service provider offering models or APIs.
SignalBreak Monitored (8 providers):
- OpenAI
- Anthropic
- AWS Bedrock
- Google AI (Gemini)
- Cohere
- AI21 Labs
- Mistral AI
- Perplexity
Self-Hosted: Ollama, vLLM, LM Studio, TGI (via Self-Hosted Connections)
See: Providers
Provider Binding
Connection between a workflow and an AI provider/model.
Components:
- Workflow ID — Which workflow uses this binding
- Provider ID — Which provider (OpenAI, Anthropic, etc.)
- Model Class — Model family (e.g., `gpt-4o`)
- Model Name — User-friendly display name
- Binding Role — Primary, Fallback, or Experiment
- Is Active — Whether binding is currently enabled
Example:
```
Workflow: Customer Support Chatbot
├─ Primary Binding: Claude 3.5 Sonnet (Anthropic)
└─ Fallback Binding: GPT-4o mini (OpenAI)
```

See: Workflows & Provider Bindings
R
RAG Status
Red/Amber/Green classification for risk levels.
| Status | Risk Score | Action Required |
|---|---|---|
| Green | < 30 | Monitor (no immediate action) |
| Amber | 30-70 | Review within 24h, plan mitigation |
| Red | > 70 | Immediate action required |
Usage:
- Governance risk score
- Impact score for signal-workflow pairs
- Scenario severity classification
Red Status
Risk status indicating high risk (RAG: Red/Amber/Green).
Definition: Risk score > 70
Action Required: Immediate action (activate response plan)
UI Indicator: Red badge
Risk Score
Aggregate risk metric (0-100) calculated from multiple factors.
Governance Risk Score Inputs (Weighted):
- Provider Concentration (30%)
- Untreated MIT Risks (25%)
- High-Severity Signals (20%)
- Fallback Coverage (15%)
- Scenario Maturity (10%)
Impact Risk Score Inputs:
- Signal severity
- Workflow criticality
- Binding centrality
RAG Mapping:
- Green: < 30
- Amber: 30-70
- Red: > 70
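A hypothetical weighted-average sketch of the governance score. It assumes each input is normalised to 0-100 with higher meaning riskier (so fallback coverage and scenario maturity enter as gaps); the docs above do not specify this normalisation:

```typescript
// Weights taken from the list above; input normalisation is assumed.
interface RiskInputs {
  providerConcentration: number; // 0-100, higher = more concentrated
  untreatedMitRisks: number;     // 0-100
  highSeveritySignals: number;   // 0-100
  fallbackCoverageGap: number;   // 0-100, higher = less coverage
  scenarioImmaturity: number;    // 0-100, higher = less mature
}

function governanceRiskScore(i: RiskInputs): number {
  return (
    i.providerConcentration * 0.3 +
    i.untreatedMitRisks * 0.25 +
    i.highSeveritySignals * 0.2 +
    i.fallbackCoverageGap * 0.15 +
    i.scenarioImmaturity * 0.1
  );
}

function ragStatus(score: number): "Red" | "Amber" | "Green" {
  if (score > 70) return "Red";
  if (score >= 30) return "Amber";
  return "Green";
}
```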
S
Scenario
Formal risk assessment with mitigation plan, owner, and timeline.
Lifecycle:
- Draft — Initial creation, planning phase
- Active — Approved, monitoring for trigger conditions
- Executed — Response plan activated
- Resolved — Mitigation complete, risk addressed
- Archived — Historical record
Components:
- Scenario Name — Brief description
- Scenario Type — Category (outage, deprecation, policy, etc.)
- Impact Severity — Critical, High, Medium, Low
- Likelihood — Certain, Likely, Possible, Unlikely
- Mitigation Actions — Steps to reduce risk
- Owner — Responsible team member
- Due Date — Target resolution date
Creation: From signals (click "Create Scenario" button) or manually
See: Scenarios
Self-Hosted Connection
Configuration for accessing self-hosted AI platforms (Ollama, vLLM, LM Studio).
Setup:
- Enter endpoint URL (e.g., `https://ollama.internal.company.com`)
- Optionally provide an API key
- Run discovery to detect available models
- Configure workflow bindings
Health Monitoring: SignalBreak polls self-hosted endpoints to track availability.
Limitations: No automatic signal detection (internal models don't publish changelogs).
Severity
Risk level assigned to provider signals and scenarios.
Signal Severity (5 levels):
- critical — Service unavailable, data loss, security breach
- high — Major functionality impaired
- medium — Partial degradation, workarounds available
- low — Minor issues, limited impact
- info — Announcements, no immediate impact
Scenario Severity (4 levels):
- critical — Business-critical systems at risk
- high — Significant operational impact
- medium — Moderate disruption
- low — Minimal business impact
See: Provider Signals - Severity
Signal
Detected provider event (change, incident, announcement) requiring attention.
Types: Deprecation, Policy, Pricing, Capability, Incident
Detection: Automated monitoring of provider sources
Enrichment: LLM-powered analysis adds context and MIT domain mapping
Lifecycle: Immutable (historical record) — create a Scenario for response planning
See: Provider Signals
Signal History Retention
Number of days of past signals accessible based on subscription tier.
| Plan | Retention | Notes |
|---|---|---|
| Free | 7 days | Recent signals only |
| Professional | 90 days | Quarterly trend analysis |
| Enterprise | Unlimited | Full historical audit trail |
Feature Gate: List endpoints filter by retention period.
See: Billing Tier Limits
SOC 2
Service Organization Control 2 — Security audit framework for service providers.
Trust Service Criteria:
- Security
- Availability
- Processing Integrity
- Confidentiality
- Privacy
SignalBreak Context: Evidence packs include SOC 2 control mappings for AI-specific risks.
See: Governance Overview
Statuspage.io
Provider status page platform used by OpenAI, Anthropic, AWS, and others.
SignalBreak Integration:
- Polls JSON API every 5 minutes
- Parses component statuses and incidents
- Creates signals only on status changes (not every poll)
Result: ~99% noise reduction vs. naive polling.
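A sketch of the change-only approach against a Statuspage public endpoint (`/api/v2/status.json`); the `onChange` callback stands in for SignalBreak's signal creation:

```typescript
// Poll a status page and fire only when the overall indicator changes.
const lastIndicator = new Map<string, string>();

async function pollStatusPage(
  statusPageUrl: string,
  onChange: (indicator: string) => void,
): Promise<void> {
  const res = await fetch(`${statusPageUrl}/api/v2/status.json`);
  const body = (await res.json()) as { status: { indicator: string } };
  const current = body.status.indicator; // "none" | "minor" | "major" | "critical"
  if (lastIndicator.get(statusPageUrl) !== current) {
    lastIndicator.set(statusPageUrl, current);
    onChange(current); // a signal is created only on a transition
  }
}

// e.g. poll every 5 minutes:
// setInterval(() => pollStatusPage("https://status.openai.com", createSignal), 5 * 60_000);
```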
See: Provider Health Monitoring
Subdomain (MIT Risk)
Specific risk type within a MIT domain (24 subdomains across 7 domains).
Example:
- Domain 7: AI System Safety
- Subdomain 7.1: AI pursuing goals in conflict with human values
- Subdomain 7.2: AI possessing dangerous capabilities
- Subdomain 7.3: Lack of capability or robustness (most common for signals)
Purpose: Granular risk classification for signals and scenarios.
See: MIT Risk Framework
T
Tenant
Organisation account in SignalBreak (isolated data namespace).
Isolation: All data (workflows, signals, scenarios) scoped to tenant via Row-Level Security.
Multi-Tenancy: Each user belongs to one tenant; tenants cannot access each other's data.
Tenant ID: UUID identifier for tenant (used in API queries).
See: API Reference - Authentication
Tier (Provider)
Classification of provider reliability and market position.
SignalBreak Tiers:
- Tier 1 — Industry leaders (OpenAI, Anthropic, Google, AWS)
- Tier 2 — Established players (Cohere, AI21, Mistral)
- Tier 3 — Emerging providers (smaller companies, startups)
Impact: Provider tier affects risk scoring (Tier 1 failures = higher impact).
See: Provider Health Monitoring
W
Webhook (Planned)
HTTP callback for real-time event notifications.
Status: Planned for Q2 2026
Planned Events:
- `signal.created` — New signal detected
- `signal.high_severity` — Critical/Warning signal
- `scenario.executed` — Response plan activated
- `workflow.impacted` — Workflow affected by signal
- `evidence_pack.generated` — Evidence pack ready
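Since the API has not shipped, the payload below is purely hypothetical, sketched from the planned event names:

```typescript
// Hypothetical webhook payload and handler; no field here is a shipped API.
interface WebhookEvent {
  type:
    | "signal.created"
    | "signal.high_severity"
    | "scenario.executed"
    | "workflow.impacted"
    | "evidence_pack.generated";
  timestamp: string; // ISO 8601 (assumed)
  data: Record<string, unknown>;
}

function handleWebhook(event: WebhookEvent): void {
  switch (event.type) {
    case "signal.high_severity":
      // e.g. page the owner of affected mission-critical workflows
      break;
    default:
      // log for the audit trail
      break;
  }
}
```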
Workflow
Business process or system dependent on AI capabilities.
Core Fields:
- Workflow Name — Descriptive name (e.g., "Customer Support Chatbot")
- AI Capability — Type of AI task (text_generation, vision, etc.)
- Criticality — Mission-Critical, Important, Nice-to-Have
- Human-in-Loop — Whether human review required
- Provider Bindings — Primary and fallback providers
Lifecycle:
- Active — Workflow in use
- Inactive — Workflow disabled (soft delete)
See: Workflows & Provider Bindings
Related Documentation
- Features: Workflows | Signals
- Governance: Overview | ISO 42001 | NIST AI RMF | EU AI Act
- Reference: API
- Support: FAQ | Troubleshooting
Last Updated: 2026-01-26
Documentation Version: 1.0