Provider Signals
What Are Signals?
Signals are notifications of changes, announcements, incidents, and updates from your AI providers. SignalBreak continuously monitors provider status pages, changelogs, and announcements to detect events that may impact your AI workflows.
Each signal represents a discrete provider event—such as a model deprecation, policy change, pricing update, service outage, or new capability launch—that requires your attention and possible action.
Why Signals Matter
AI providers change frequently. Without automated monitoring:
- Model deprecations can break production systems with little warning
- Policy changes may introduce compliance risks overnight
- Pricing adjustments affect budget forecasts and cost planning
- Service incidents require immediate workflow failover
- New capabilities create optimization opportunities you might miss
SignalBreak detects these changes automatically, classifies their risk severity, and maps them to MIT AI Risk Repository domains to help you prioritise responses.
Signal Types
SignalBreak classifies provider events into five signal types:
1. Deprecation
Definition: Planned end-of-life for models, APIs, or features.
Examples:
- "OpenAI deprecating GPT-3.5 Turbo on 2025-06-30"
- "Anthropic retiring Claude 2.0 API endpoints"
- "Google sunsetting PaLM 2 models"
Risk Impact: High — Requires workflow migration before cutoff date.
Typical Timeline: 3-12 months advance notice.
Action Required:
- Review affected workflows (SignalBreak automatically identifies these)
- Test replacement models
- Update provider bindings before deprecation date
- Budget for any cost differences
2. Policy
Definition: Changes to terms of service, usage policies, compliance requirements, or data handling rules.
Examples:
- "OpenAI updated Usage Policy to restrict military applications"
- "Anthropic introduced new data retention requirements for Enterprise tier"
- "AWS Bedrock added GDPR-specific data residency options"
Risk Impact: Medium to High — May create compliance gaps or require legal review.
Typical Timeline: Immediate to 30 days.
Action Required:
- Review policy changes against your use case
- Assess compliance risk (GDPR, CCPA, sector-specific regulations)
- Update internal documentation
- Consult legal/compliance team if restrictions affect critical workflows
3. Pricing
Definition: Cost changes, billing updates, tier modifications, or quota adjustments.
Examples:
- "Anthropic increasing Claude 3.5 Sonnet input pricing to $3.00/MTok"
- "OpenAI introducing volume discounts for Enterprise customers"
- "Google reducing Gemini Pro pricing by 40%"
Risk Impact: Low to Medium — Affects budget forecasting and ROI calculations.
Typical Timeline: Immediate to 60 days.
Action Required:
- Recalculate monthly AI spend projections
- Evaluate cost-optimisation opportunities (model downgrading, prompt compression)
- Update stakeholder reports
- Consider switching providers if pricing becomes uncompetitive
4. Capability
Definition: New features, model releases, API updates, or performance improvements.
Examples:
- "Anthropic launched Claude 3.7 Opus with extended 200K context window"
- "OpenAI added real-time voice API support"
- "Google Gemini now supports multi-image reasoning"
Risk Impact: Low — Typically positive, but may require evaluation for adoption.
Typical Timeline: Immediate availability.
Action Required:
- Assess if new capability addresses existing workflow limitations
- Test new features in non-production environment
- Evaluate cost-benefit of upgrading
- Document new capability for future reference
5. Incident
Definition: Service outages, degraded performance, API errors, or operational issues.
Examples:
- "OpenAI API experiencing elevated error rates (15-minute outage)"
- "Anthropic: Claude 3.5 Sonnet response times degraded"
- "AWS Bedrock: Intermittent 503 errors in us-east-1"
Risk Impact: Critical to High — Immediate workflow disruption.
Typical Timeline: Minutes to hours (real-time detection).
Action Required:
- Manually activate fallback providers (if automatic failover is not enabled)
- Monitor incident updates from provider status pages
- Assess user impact and communicate with stakeholders
- Document incident for post-mortem and resilience planning
Severity Levels
Each signal is assigned a severity level based on keyword analysis and AI interpretation:
Critical
- Service completely unavailable (outages, total API failure)
- Data loss or security breach
- Immediate action required to prevent workflow failure
UI Indicator: 🔴 Red badge
Response Time: Immediate (within 15 minutes)
Example Signals:
- "OpenAI API down globally"
- "Anthropic security incident: API keys potentially compromised"
Warning
- Major functionality impaired or significant deprecations
- Breaking API changes
- Policy violations possible without intervention
UI Indicator: 🟡 Yellow badge
Response Time: Within 24 hours
Example Signals:
- "Claude 2.0 deprecating in 30 days"
- "OpenAI Usage Policy updated: military use prohibited"
- "Anthropic API rate limits reduced for free tier"
Info
- Announcements, planned changes, new features
- No immediate workflow impact
- Informational updates for planning and awareness
UI Indicator: 🔵 Blue badge
Response Time: Review within 1 week
Example Signals:
- "Google launches Gemini 2.0 Ultra preview"
- "Anthropic releases transparency report for Q4 2024"
- "OpenAI announces pricing reduction effective next quarter"
How Signals Are Generated
SignalBreak uses a multi-stage pipeline to detect, classify, and enrich provider signals:
Stage 1: Detection & Polling
Frequency: Every 5 minutes for status pages; hourly for changelogs and blogs
Sources Monitored:
- Provider status pages (Statuspage.io JSON API)
- Official changelogs and release notes
- Public announcements and blog posts
- Social media monitoring (Reddit r/LocalLLaMA, Twitter/X)
Detection Logic:
- Content hashing to identify new or changed content
- Deduplication to prevent signal spam
- Change-detection algorithm (only creates signals when status changes, not on every poll)
Result: ~99% noise reduction compared with a naive polling approach.
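The change-detection step can be pictured as a small content-hashing routine. The following is a minimal sketch, assuming an in-memory store keyed by source URL (the store and function names are illustrative, not SignalBreak's actual implementation):

```python
import hashlib

# Illustrative in-memory store; the real service persists hashes between polls.
seen_hashes: dict[str, str] = {}  # source_url -> hash of last-seen content

def detect_change(source_url: str, raw_content: str) -> bool:
    """Return True only when the polled content differs from the last poll.

    Hashing the normalised content means repeated polls of unchanged pages
    produce no new signals -- the ~99% noise reduction noted above.
    """
    digest = hashlib.sha256(raw_content.strip().encode("utf-8")).hexdigest()
    if seen_hashes.get(source_url) == digest:
        return False               # same content as last poll: no signal
    seen_hashes[source_url] = digest
    return True                    # new or changed content: create a signal
```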
Stage 2: Classification (Rules-Based)
Engine: Keyword matching algorithm
Process:
- Scan raw content for change type keywords
- Scan for severity keywords
- Assign classification with confidence score (0.0–1.0)
Priority Order (first match wins):
- Change type: Incident → Deprecation → Policy → Pricing → Capability
- Severity: Critical → Warning → Info

Default for Ambiguous Content:
- Change Type: capability
- Severity: info
Keyword Examples:
| Type | Keywords |
|---|---|
| Deprecation | "deprecat", "sunset", "end of life", "EOL", "retire", "discontinue" |
| Policy | "policy", "terms", "compliance", "GDPR", "legal", "agreement" |
| Pricing | "price", "cost", "rate", "billing", "tier", "quota" |
| Capability | "feature", "model", "version", "update", "release", "launch" |
| Incident | "outage", "degraded", "issue", "error", "down", "unavailable" |

| Severity | Keywords |
|---|---|
| Critical | "critical", "urgent", "immediate", "outage", "down" |
| Warning | "warning", "deprecat", "sunset", "action required", "breaking change" |
| Info | "announce", "update", "release", "new feature", "improvement" |
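In code, the first-match-wins rule is an ordered scan over the keyword lists above. A minimal sketch (keyword lists taken from the tables; the confidence-score calculation is omitted here):

```python
# Ordered by priority: first match wins (Incident -> ... -> Capability).
CHANGE_TYPE_KEYWORDS = [
    ("incident",    ["outage", "degraded", "issue", "error", "down", "unavailable"]),
    ("deprecation", ["deprecat", "sunset", "end of life", "eol", "retire", "discontinue"]),
    ("policy",      ["policy", "terms", "compliance", "gdpr", "legal", "agreement"]),
    ("pricing",     ["price", "cost", "rate", "billing", "tier", "quota"]),
    ("capability",  ["feature", "model", "version", "update", "release", "launch"]),
]

SEVERITY_KEYWORDS = [
    ("critical", ["critical", "urgent", "immediate", "outage", "down"]),
    ("warning",  ["warning", "deprecat", "sunset", "action required", "breaking change"]),
    ("info",     ["announce", "update", "release", "new feature", "improvement"]),
]

def classify(raw_content: str) -> tuple[str, str]:
    """Assign change type and severity; defaults apply to ambiguous content."""
    text = raw_content.lower()

    change_type = "capability"                 # default for ambiguous content
    for label, keywords in CHANGE_TYPE_KEYWORDS:
        if any(kw in text for kw in keywords):
            change_type = label
            break                              # first match wins

    severity = "info"                          # default for ambiguous content
    for label, keywords in SEVERITY_KEYWORDS:
        if any(kw in text for kw in keywords):
            severity = label
            break

    return change_type, severity
```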
Stage 3: Interpretation (Claude API)
Engine: Claude 3.5 Haiku (via Anthropic API)
Purpose: Generate human-readable titles and descriptions from raw provider content.
Input:
- Raw content (truncated to first 3,000 characters)
- Source URL
- Provider name
- Detected change type and severity (from Stage 2)
Output:
- Professional title (max 150 characters)
  - Format: [Provider] [Action] [Subject]
  - Example: "OpenAI Deprecating GPT-3.5 Turbo in March 2025"
- Business-focused description (1-2 sentences, max 300 characters)
  - Example: "GPT-3.5 Turbo will be retired on 2025-03-31. Affected workflows should migrate to GPT-4o mini for cost-effective performance."
- Refined change type and severity suggestions
Fallback: If the Claude API fails or times out (10-second timeout), the pipeline falls back to rule-based title generation.
Configuration:
- Model: claude-3-5-haiku-20241022
- Max tokens: 500
- Temperature: 0.2 (low for consistency)
- Timeout: 10 seconds
Cost: ~$0.001 per signal interpretation (negligible at typical signal volumes).
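The interpretation call maps onto the Anthropic Messages API. Below is a minimal sketch using the anthropic Python SDK with the configuration above; the prompt wording and the rule-based fallback helper are illustrative, not SignalBreak's actual code:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def rule_based_title(provider: str, change_type: str) -> str:
    """Illustrative Stage 3 fallback: rule-based title generation."""
    return f"{provider} {change_type.title()} Update"

def interpret_signal(raw_content: str, provider: str, change_type: str) -> str:
    """Ask Claude 3.5 Haiku for a professional title; fall back to rules on failure."""
    prompt = (
        f"Provider: {provider}\nChange type: {change_type}\n"
        f"Content: {raw_content[:3000]}\n\n"  # truncate to first 3,000 characters
        "Write a professional title (max 150 characters) in the format "
        "[Provider] [Action] [Subject]."
    )
    try:
        response = client.messages.create(
            model="claude-3-5-haiku-20241022",
            max_tokens=500,
            temperature=0.2,   # low for consistency
            timeout=10.0,      # 10-second timeout, per the configuration above
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text.strip()
    except Exception:
        return rule_based_title(provider, change_type)
```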
Stage 4: Domain Classification (MIT Risk Repository)
Engine: Ollama (self-hosted Llama 3.2 3B)
Purpose: Map signals to MIT AI Risk Repository domains to contextualise risk categories.
Process:
- Send signal title + description + change type to Ollama
- Ollama classifies into 1-3 most relevant MIT domains
- Returns domain ID, subdomain ID, confidence score (0.0–1.0), and reasoning
MIT AI Risk Repository Structure:
| Domain | Description | Subdomains |
|---|---|---|
| 1 – Discrimination & Toxicity | Bias, unfairness, toxic outputs | 1.1 Unfair discrimination 1.2 Toxic content 1.3 Unequal performance |
| 2 – Privacy & Security | Data breaches, surveillance | 2.1 Privacy compromise 2.2 Security vulnerabilities |
| 3 – Misinformation | False content, deepfakes | 3.1 False information 3.2 Ecosystem pollution |
| 4 – Malicious Actors | Cyberattacks, fraud, weapons | 4.1 Cyberattacks 4.2 Fraud and manipulation |
| 5 – Human-Computer Interaction | Overreliance, loss of agency | 5.1 Overreliance 5.2 Loss of agency |
| 6 – Socioeconomic & Environmental | Jobs, inequality, resources | 6.1 Power centralisation 6.2 Job displacement 6.3 Environmental impact |
| 7 – AI System Safety | Failures, control loss, risks | 7.1 System failures 7.2 Loss of control 7.3 Lack of capability 7.4 Lack of transparency 7.5 AI welfare |
Repository Scale:
- 7 domains, 24 subdomains
- 1,328 historical incidents (real-world AI failures)
- 831 mitigation strategies
Example Classification:
Signal: "OpenAI deprecating GPT-3.5 Turbo on 2025-06-30"
Classification:
- Domain: 7 – AI System Safety
- Subdomain: 7.3 – Lack of capability or robustness
- Confidence: 0.92
- Reason: "Model deprecation may cause system failures for dependent workflows that have not migrated to alternative models."
Ollama Configuration:
- Model: llama3.2:3b
- Temperature: 0.3 (lower for more consistent classification)
- Max tokens: 500
- URL: Self-hosted (internal network or Cloudflare Access-protected public endpoint)
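Ollama exposes a plain HTTP API, so the classification step reduces to a single POST against its /api/generate endpoint. A minimal sketch follows; the prompt wording and response parsing are illustrative, and it assumes the model returns valid JSON (Ollama's "format": "json" option encourages this):

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434"  # illustrative; self-hosted in production

def classify_mit_domains(title: str, description: str, change_type: str) -> dict:
    """Ask the local Llama 3.2 3B model for 1-3 MIT domain classifications."""
    prompt = (
        f"Signal: {title}\nDescription: {description}\nChange type: {change_type}\n\n"
        "Classify this signal into 1-3 MIT AI Risk Repository domains. Respond as "
        "JSON with domain_id, subdomain_id, confidence (0.0-1.0), and reason."
    )
    response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": "llama3.2:3b",
            "prompt": prompt,
            "stream": False,          # return one JSON object, not a token stream
            "format": "json",         # constrain the model to JSON output
            "options": {"temperature": 0.3, "num_predict": 500},
        },
        timeout=60,
    )
    response.raise_for_status()
    return json.loads(response.json()["response"])  # the model's classification
```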
Stage 5: Enrichment (Unified LLM Service)
Engine: Multi-provider LLM Gateway (Claude → OpenAI → Ollama fallback chain)
Purpose: Perform comprehensive enrichment in a single LLM call:
- Interpretation – Severity, categories, affected components (API/UI)
- MIT Domains – 1-3 relevant risk domains with confidence scores
- Model Impacts – Specific models affected (from tenant's product model registry)
Input:
- Signal ID
- Provider ID and name
- Title and description (from Stage 3)
- Change type (from Stage 2)
- Source URL
Output:
{
"interpretation": {
"service_affected": "OpenAI Chat API",
"symptom": "Model deprecation",
"categories": ["deprecation"],
"severity": "high",
"affects_api": true,
"affects_ui": false,
"confidence": 0.85
},
"mit_domains": [
{
"domain_id": "7",
"subdomain_id": "7.3",
"confidence": 0.92,
"reason": "Model deprecation may cause system failures"
}
],
"model_impacts": [
{
"model_identifier": "gpt-3.5-turbo",
"impact_type": "deprecation",
"severity": "critical",
"reason": "Model will be retired on 2025-06-30"
}
]
}

LLM Gateway Failover:
- Primary: Claude 3.5 Sonnet (Anthropic API)
- Secondary: GPT-4o mini (OpenAI API)
- Tertiary: Llama 3.2 3B (Ollama self-hosted)
Circuit Breaker: Automatically fails over if provider returns errors for 3+ consecutive requests.
Performance: 30-60 seconds per signal (including LLM latency).
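The failover chain plus circuit breaker can be sketched as a small wrapper that tracks consecutive errors per provider and skips any provider once its circuit trips. A minimal illustration, where call_provider stands in for the gateway's real provider clients:

```python
FAILOVER_CHAIN = ["claude-3-5-sonnet", "gpt-4o-mini", "llama3.2:3b"]
TRIP_THRESHOLD = 3  # circuit opens after 3+ consecutive errors

consecutive_errors = {model: 0 for model in FAILOVER_CHAIN}

def call_provider(model: str, payload: dict) -> dict:
    """Stand-in for the real provider clients (Anthropic, OpenAI, Ollama)."""
    ...

def enrich_with_failover(payload: dict) -> dict:
    """Try each provider in order, skipping any whose circuit is open."""
    for model in FAILOVER_CHAIN:
        if consecutive_errors[model] >= TRIP_THRESHOLD:
            continue                               # circuit open: skip provider
        try:
            result = call_provider(model, payload)
            consecutive_errors[model] = 0          # success closes the circuit
            return result
        except Exception:
            consecutive_errors[model] += 1         # count consecutive failures
    raise RuntimeError("All providers in the failover chain are unavailable")
```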
Viewing and Managing Signals
Signals Dashboard
Location: Dashboard → Signals
Features:
Filter Panel
Filter signals by:
- Provider – Show signals only from providers you use
- Change Type – Deprecation, Policy, Pricing, Capability, Incident
- Severity – Critical, Warning, Info
- MIT Domain – Filter by risk domain (1–7)
- Date Range – Last 7 days, 30 days, 90 days, All time
Signal Feed
Displays signals in reverse chronological order (newest first).
Signal Card Format:
[Provider Logo] [Change Type Badge] [Severity Badge] [MIT Domain Badges]
Title: OpenAI Deprecating GPT-3.5 Turbo in March 2025
Description: GPT-3.5 Turbo will be retired on 2025-03-31.
Affected workflows should migrate to GPT-4o mini.
Source: https://openai.com/changelog/2025-01-15
Published: 2025-01-15 14:32 UTC
[View Details] [Create Scenario]

MIT Domain Badges: Colour-coded badges show MIT risk domains:
- Domain 1: Purple (Discrimination & Toxicity)
- Domain 2: Blue (Privacy & Security)
- Domain 3: Yellow (Misinformation)
- Domain 4: Red (Malicious Actors)
- Domain 5: Green (Human-Computer Interaction)
- Domain 6: Orange (Socioeconomic & Environmental)
- Domain 7: Dark Grey (AI System Safety)
Signal Detail View
Click any signal to open the detail panel.
Tabs:
1. Overview
- Full title and description
- Provider information
- Change type and severity
- Publication date
- Source URL (link to provider's announcement)
- Raw content (truncated)
2. Affected Workflows
- List of workflows using this provider/model
- Workflow criticality levels
- Current provider bindings
- Recommended actions per workflow
Example:
Affected Workflows (3)
1. Customer Support Chatbot
Criticality: Mission-Critical
Current Binding: GPT-3.5 Turbo (primary)
Action: Migrate to GPT-4o mini before 2025-03-31
2. Content Summarisation Pipeline
Criticality: Important
Current Binding: GPT-3.5 Turbo (primary)
Action: Test Claude 3.5 Haiku as a cost-effective alternative

3. MIT Risk Context
- Mapped MIT domains with confidence scores
- Related historical incidents from MIT Repository
- Suggested mitigation strategies
- Risk assessment specific to your workflows
Example:
MIT Domain 7.3 – Lack of Capability or Robustness (92% confidence)
Related Incidents: 127 historical incidents involving model deprecations
Mitigation Strategies:
1. Implement multi-provider fallback architecture
2. Conduct quarterly dependency audits
3. Monitor provider roadmaps for EOL announcements

4. Timeline
- Signal detection timestamp
- Classification and enrichment completion
- User actions taken (scenarios created, workflows updated)
Creating Scenarios from Signals
Signals can be converted into risk scenarios for formal assessment and response planning.
Process:
- Click [Create Scenario] button on signal card or detail view
- SignalBreak auto-populates scenario fields:
- Scenario Title: From signal title
- Description: From signal description + enrichment
- Impact Type: Derived from change type (deprecation → workflow disruption)
- Likelihood: Calculated from signal severity + provider reliability
- MIT Domains: Pre-mapped from Stage 4 classification
- Review and adjust impact assessment
- Add custom mitigation actions
- Assign owner and due date
- Save scenario
Result: Signal-derived scenario appears in Scenarios page for tracking and governance reporting.
Use Case: Convert high-severity deprecation signals into formal scenarios for board-level evidence packs.
Signal-to-Workflow Impact Analysis
SignalBreak automatically identifies which workflows are affected by each signal.
Impact Detection Logic
Criteria:
- Provider Match: Signal provider matches workflow's provider binding
- Model Match: (If signal references specific model) Model identifier matches workflow's bound model
- Capability Match: (If signal affects specific AI capability) Workflow uses that capability type
Example:
Signal: "Anthropic updated Claude 3.5 Sonnet system prompt behaviour"
Affected Workflows:
- ✅ Legal Document Analysis (uses Claude 3.5 Sonnet for text generation)
- ❌ Image Captioning (uses GPT-4 Vision for image-to-text)
- ✅ Regulatory Compliance Checker (uses Claude 3.5 Sonnet)
Impact Severity Calculation
For each affected workflow, SignalBreak calculates an impact score (0–100):
Formula:
Impact Score = Signal Severity × Workflow Criticality × Binding Centrality

(A worked sketch of this calculation follows the RAG thresholds below.)

Components:
- Signal Severity: Critical = 40, Warning = 25, Info = 10
- Workflow Criticality: Mission-Critical = ×4, Important = ×2, Nice-to-Have = ×1
- Binding Centrality: Primary = ×1.5, Fallback = ×0.5
Example:
| Signal | Workflow | Severity | Criticality | Binding | Impact Score |
|---|---|---|---|---|---|
| Claude 3.5 Sonnet deprecation | Customer Support Bot | Critical (40) | Mission-Critical (×4) | Primary (×1.5) | 240 |
| GPT-4 pricing increase | Content Summariser | Warning (25) | Important (×2) | Fallback (×0.5) | 25 |
RAG Status:
- Red: Impact score ≥ 160 (immediate action required)
- Amber: 80 ≤ Impact score < 160 (plan response within 24h)
- Green: Impact score < 80 (monitor and review)
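As a worked sketch, the scoring and RAG mapping above translate directly into a pair of small functions (values taken from the components and thresholds defined in this section):

```python
SEVERITY_SCORE = {"critical": 40, "warning": 25, "info": 10}
CRITICALITY_MULTIPLIER = {"mission-critical": 4, "important": 2, "nice-to-have": 1}
BINDING_MULTIPLIER = {"primary": 1.5, "fallback": 0.5}

def impact_score(severity: str, criticality: str, binding: str) -> float:
    """Impact Score = Signal Severity x Workflow Criticality x Binding Centrality."""
    return (SEVERITY_SCORE[severity]
            * CRITICALITY_MULTIPLIER[criticality]
            * BINDING_MULTIPLIER[binding])

def rag_status(score: float) -> str:
    if score >= 160:
        return "red"    # immediate action required
    if score >= 80:
        return "amber"  # plan response within 24h
    return "green"      # monitor and review

# First table row: Critical x Mission-Critical x Primary = 40 * 4 * 1.5 = 240 -> red.
assert impact_score("critical", "mission-critical", "primary") == 240.0
assert rag_status(240.0) == "red"
# Second row: Warning x Important x Fallback = 25 * 2 * 0.5 = 25 -> green.
assert impact_score("warning", "important", "fallback") == 25.0
assert rag_status(25.0) == "green"
```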
Best Practices
1. Review Signals Daily
Recommendation: Check the Signals dashboard at the start of each day.
Why: Providers announce changes at unpredictable times. Daily review ensures you:
- Catch critical incidents within minutes
- Plan deprecation responses before deadlines
- Identify cost-optimisation opportunities early
Time Required: 5–10 minutes for typical signal volume (2–10 signals/day across 5–10 providers).
2. Set Up Fallback Bindings for Critical Workflows
Recommendation: Every mission-critical workflow should have:
- Primary provider (main model)
- Fallback provider (backup model from different provider)
- Automatic failover enabled
Why: When incident signals arrive (outages, degraded performance), automatic failover prevents workflow downtime.
Example:
Workflow: Customer Support Chatbot
Primary: Claude 3.5 Sonnet (Anthropic)
Fallback: GPT-4o mini (OpenAI) – automatic failover after 3 failed requests

Result: 99.9%+ uptime even during provider incidents.
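As an illustration of the binding above, a failover wrapper might retry the primary up to three times before switching to the fallback. This is a sketch only; the call_claude_sonnet and call_gpt4o_mini helpers are hypothetical stand-ins for the bound provider clients:

```python
def call_claude_sonnet(prompt: str) -> str:
    """Stand-in for the primary binding's client call (illustrative only)."""
    ...

def call_gpt4o_mini(prompt: str) -> str:
    """Stand-in for the fallback binding's client call (illustrative only)."""
    ...

def chat_with_failover(prompt: str, max_failures: int = 3) -> str:
    """Use the primary binding; switch to the fallback after 3 failed requests."""
    for _ in range(max_failures):
        try:
            return call_claude_sonnet(prompt)   # primary: Claude 3.5 Sonnet
        except Exception:
            continue                            # one more failed request
    return call_gpt4o_mini(prompt)              # fallback: GPT-4o mini
```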
3. Create Scenarios for High-Severity Signals
Recommendation: Convert any signal with severity = Critical or Warning into a formal scenario.
Why: Scenarios enable:
- Structured response planning
- Governance audit trails
- Evidence for ISO 42001, NIST AI RMF, EU AI Act compliance
When to Create Scenarios:
- Deprecations affecting mission-critical workflows
- Policy changes with compliance implications
- Incidents with user-facing impact
- Pricing changes exceeding 20% of AI budget
4. Map Signals to Your Risk Register
Recommendation: Integrate signal-derived scenarios into your enterprise risk register.
Process:
- Export scenario summaries from SignalBreak
- Add to corporate risk register (e.g., GRC platform, Excel)
- Assign risk owners and mitigation deadlines
- Track in monthly risk review meetings
Benefit: Demonstrates AI governance maturity to auditors, regulators, and board.
5. Monitor MIT Domain Trends
Recommendation: Use the MIT Domain filter to track emerging risk patterns.
Example Analyses:
- Spike in Domain 7.3 signals? → Provider reliability declining, diversify dependencies
- Multiple Domain 2.1 signals? → Privacy risks increasing, review data handling
- Domain 6.2 signals? → Job displacement concerns, assess reputational risk
Frequency: Monthly trend review (15 minutes).
Output: Include trend analysis in quarterly board reports or evidence packs.
6. Archive or Dismiss Low-Priority Signals
Recommendation: After reviewing signals, mark low-priority ones as "Reviewed" or "Not Applicable".
Why: Keeps signal feed focused on actionable items.
Criteria for Dismissal:
- Info-severity capability announcements for features you won't use
- Pricing changes for providers you don't use
- Policy updates that don't affect your use case
How: Click the [Dismiss] button on the signal card (feature planned for a future release).
Troubleshooting
Issue: No Signals Appearing
Possible Causes:
- No providers connected
- Providers not monitored yet (first poll takes 5 minutes)
- Polling service down
Solutions:
- Go to Providers page → Ensure at least one provider is marked "In Use"
- Wait 5 minutes after adding first provider
- Check provider health status on Providers page
Issue: Signal Shows "No Affected Workflows"
Cause: Signal doesn't match any workflow's provider bindings or models.
Solutions:
- If signal is relevant: Create a workflow that uses this provider
- If signal is not relevant: This is expected (e.g., pricing change for a provider you don't use)
Issue: MIT Domain Classification Missing
Cause: Ollama service unavailable or signal text too ambiguous.
Solutions:
- Check Ollama service health (internal monitoring)
- Domain classification is optional—signal is still usable without it
- Manually add domain tags if needed (feature planned for future release)
Issue: Signal Title is Generic ("Provider Status Update")
Cause: Raw content was too noisy or the Claude API timed out, so the fallback title generator was used.
Solutions:
- Check source URL for full details
- Click [View Details] to read raw content
- The system will automatically retry interpretation in the next enrichment batch (runs hourly)
Issue: Duplicate Signals for Same Event
Cause: Provider announced same change on multiple channels (e.g., status page + blog post).
Solutions:
- Content hashing should prevent most duplicates; report to support if you see many
- Manually dismiss duplicate signals
API Integration
Signals can be accessed programmatically via REST API.
Endpoint: List Signals
GET /api/provider-changes
Query Parameters:
- provider_id (optional) – Filter by provider
- change_type (optional) – Filter by type (deprecation, policy, pricing, capability, incident)
- severity (optional) – Filter by severity (critical, warning, info)
- from_date (optional) – ISO 8601 date (e.g., 2025-01-01)
- to_date (optional) – ISO 8601 date
- limit (optional) – Default 50, max 200
- offset (optional) – For pagination
Response:
{
"signals": [
{
"id": "uuid",
"provider_id": 1,
"provider_name": "OpenAI",
"change_type": "deprecation",
"severity": "warning",
"title": "OpenAI Deprecating GPT-3.5 Turbo in March 2025",
"description": "GPT-3.5 Turbo will be retired on 2025-03-31...",
"source_url": "https://openai.com/changelog/...",
"published_at": "2025-01-15T14:32:00Z",
"created_at": "2025-01-15T14:35:12Z",
"mit_domains": ["7.3"]
}
],
"total": 42,
"limit": 50,
"offset": 0
}

Endpoint: Get Signal by ID
GET /api/provider-changes/{id}
Response: Single signal object with enrichment data.
Endpoint: List Workflow Impacts for Signal
GET /api/workflows/{workflow_id}/signals
Response: List of signals affecting the specified workflow.
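For example, listing recent critical signals takes one GET request against /api/provider-changes. A minimal sketch with the requests library; the base URL and bearer-token auth header are placeholders for your deployment (the docs above do not specify the auth scheme):

```python
import requests

BASE_URL = "https://your-signalbreak-instance.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}        # placeholder auth

resp = requests.get(
    f"{BASE_URL}/api/provider-changes",
    headers=HEADERS,
    params={"severity": "critical", "from_date": "2025-01-01", "limit": 50},
    timeout=30,
)
resp.raise_for_status()

# Print one line per signal from the paginated response shown above.
for signal in resp.json()["signals"]:
    print(signal["published_at"], signal["provider_name"], signal["title"])
```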
Related Documentation
- Workflows & Provider Bindings – How to create and manage AI workflows
- Risk Scoring Methodology – How SignalBreak calculates risk scores
- Provider Health Monitoring – Provider status tracking system
- Scenarios & Risk Assessment – Creating formal risk scenarios from signals
- MIT AI Risk Framework – Deep dive into MIT Risk Repository integration
Glossary
| Term | Definition |
|---|---|
| Signal | A detected provider event (change, incident, announcement) |
| Change Type | Classification of signal (deprecation, policy, pricing, capability, incident) |
| Severity | Risk level (critical, warning, info) |
| MIT Domain | Risk category from MIT AI Risk Repository (1–7) |
| Enrichment | LLM-powered analysis adding context to raw signals |
| Classification | Automated categorisation of signal type and severity |
| Interpretation | AI-generated human-readable title and description |
| Impact Score | Calculated risk score for workflow-signal pairs (0–100) |
| Scenario | Formal risk assessment derived from signal |
| Provider Binding | Connection between workflow and AI provider/model |
Frequently Asked Questions
How often does SignalBreak poll for new signals?
Every 5 minutes for status pages, hourly for changelogs and blogs. This ensures near-real-time detection of incidents while avoiding rate limits.
Can I disable signal monitoring for specific providers?
Yes. Go to Providers page → Select provider → Uncheck "Monitor for Changes". Existing signals will remain visible.
Do I need to manually review every signal?
No. Focus on:
- Critical/Warning severity – Immediate action required
- Signals with affected workflows – Direct impact on your systems
- MIT Domain 7 (AI System Safety) – Highest risk category
Info-severity signals for providers you don't use can be safely ignored.
How accurate is the MIT domain classification?
Average confidence: 85%. Classification is most accurate for:
- Deprecations (→ Domain 7.3)
- Security incidents (→ Domain 2.2)
- Policy changes (→ Domain 2.1, 6.5)
Classification may be ambiguous for generic capability announcements.
Can I export signals for compliance reporting?
Yes. Signals are included in Evidence Packs (PDF reports) generated from Governance → Evidence Pack. Evidence packs include signal summaries, affected workflows, and MIT domain mappings.
What happens if I miss a critical signal?
SignalBreak retains all signals indefinitely. You can:
- Review historical signals via date filter
- Check Affected Workflows tab on workflow detail pages
- Generate evidence packs for historical compliance audits
How do I contact support if a signal seems incorrect?
Email support@signalbreak.com with:
- Signal ID
- Provider name
- Why classification seems incorrect
- Expected change type/severity
We use feedback to improve classification models.
Last Updated: 2026-01-26
Applies To: SignalBreak Portal v2.x
Documentation Version: 1.0