# Key Concepts
Understanding these concepts will help you get the most from SignalBreak.
## The SignalBreak Model
```
┌─────────────────────────────────────────────────────────────────┐
│                        YOUR ORGANISATION                        │
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐          │
│  │  Workflow   │    │  Workflow   │    │  Workflow   │          │
│  │  (Chatbot)  │    │  (Search)   │    │  (Copilot)  │          │
│  └──────┬──────┘    └──────┬──────┘    └──────┬──────┘          │
│         │                  │                  │                 │
│         ▼                  ▼                  ▼                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                    Provider Bindings                    │    │
│  └─────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                          AI PROVIDERS                           │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐         │
│  │  OpenAI  │  │ Anthropic│  │  Google  │  │  Cohere  │         │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘         │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                             SIGNALS                             │
│    Outages • Deprecations • Policy Changes • Security • Cost    │
└─────────────────────────────────────────────────────────────────┘
```

## Core Entities
### Workflows
A workflow is an AI-powered system, process, or application in your organisation.
| Examples | Not Workflows |
|---|---|
| Customer support chatbot | The team using the chatbot |
| Document search system | The documents being searched |
| Code assistant integration | The developers using it |
Each workflow has:
- AI Capability — What the AI does (generate, classify, extract, etc.)
- Criticality — How important it is to your business
- Volume — How much it's used
- Constraints — Compliance requirements (data sensitivity, residency, SLAs)
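The workflow attributes above can be sketched as a simple record. This is an illustrative model only — the class, field, and enum names here are hypothetical, not SignalBreak's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Criticality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class Workflow:
    """Illustrative workflow record; field names are hypothetical."""
    name: str
    ai_capability: str       # what the AI does: "generate", "classify", "extract", ...
    criticality: Criticality # how important it is to the business
    monthly_requests: int    # volume
    constraints: list[str] = field(default_factory=list)  # compliance requirements

chatbot = Workflow(
    name="Customer Support Chatbot",
    ai_capability="generate",
    criticality=Criticality.HIGH,
    monthly_requests=120_000,
    constraints=["PII handling", "EU data residency"],
)
print(chatbot.criticality.value)  # → high
```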
### Provider Bindings
A binding connects a workflow to a specific AI provider and model.
```
Workflow: Customer Support Chatbot
├── Primary Binding: Anthropic Claude Sonnet
└── Fallback Binding: OpenAI GPT-4 (automatic failover)
```

Binding roles:
- Primary — The main provider for this workflow
- Fallback — Backup provider if primary fails
- Experiment — Testing a new provider
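The primary/fallback pattern can be sketched as ordinary try-the-next-one logic. This is a minimal illustration of the failover idea, not SignalBreak's implementation — all function names here are made up:

```python
def call_with_failover(bindings, prompt):
    """bindings: list of (role, call_fn) pairs, ordered primary first."""
    last_error = None
    for role, call_fn in bindings:
        try:
            return role, call_fn(prompt)       # first binding that succeeds wins
        except RuntimeError as exc:            # stand-in for a provider API error
            last_error = exc
    raise RuntimeError("all bindings failed") from last_error

def primary(prompt):
    raise RuntimeError("provider outage")      # simulate the primary failing

def fallback(prompt):
    return f"answer to: {prompt}"

role, answer = call_with_failover(
    [("primary", primary), ("fallback", fallback)], "hi"
)
print(role)  # → fallback
```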
### Providers
Providers are AI vendors (OpenAI, Anthropic, Google, etc.) that offer models and APIs.
SignalBreak categorises providers into tiers:
| Tier | Description | Examples |
|---|---|---|
| Tier 1 | Hyperscaler AI services | OpenAI, Anthropic, Google AI |
| Tier 2 | Major cloud platforms | AWS Bedrock, Azure OpenAI |
| Tier 3 | Specialised AI vendors | Cohere, AI21, Stability |
| Tier 4 | Open-source/self-hosted | Ollama, Hugging Face |
## Risk & Signals

### Signals
A signal is an event that could affect your AI systems:
| Signal Type | Description | Example |
|---|---|---|
| Outage | Service unavailable | "OpenAI API returning 500 errors" |
| Degradation | Reduced performance | "Latency increased 3x" |
| Deprecation | Feature/model being retired | "GPT-3.5 sunset announced" |
| Drift | Model behaviour change | "Response quality degradation reported" |
| Policy Change | Terms or usage rules changed | "New content restrictions" |
| Cost Increase | Pricing changes | "Token prices increased 20%" |
| Security | Vulnerability or breach | "Data exposure incident reported" |
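A signal is just an event record carrying a type and a severity. The sketch below shows one plausible shape for such a record; the names are hypothetical, not SignalBreak's API:

```python
from dataclasses import dataclass
from enum import Enum

class SignalType(Enum):
    OUTAGE = "outage"
    DEGRADATION = "degradation"
    DEPRECATION = "deprecation"
    DRIFT = "drift"
    POLICY_CHANGE = "policy_change"
    COST_INCREASE = "cost_increase"
    SECURITY = "security"

@dataclass
class Signal:
    provider: str
    type: SignalType
    severity: int   # e.g. 1 (minor) .. 5 (critical); scale is illustrative
    summary: str

sig = Signal("OpenAI", SignalType.OUTAGE, 4, "API returning 500 errors")
print(sig.type.value)  # → outage
```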
### Risk Domains
Signals are categorised into risk domains:
| Domain | What It Covers |
|---|---|
| Availability | Can you access the service? |
| Drift | Is the model behaving as expected? |
| Policy | Are terms compatible with your use? |
| Cost | Is pricing predictable? |
| External | Regulatory, competitive, market changes |
### Risk Score
Your risk score (0–100) quantifies the potential impact of signals on your workflows.
The score considers:
- Signal severity — How serious is the event?
- Workflow criticality — How important is the affected system?
- Volume — How much usage is impacted?
- Data sensitivity — Are sensitive workloads affected?
- Redundancy — Do you have fallbacks?
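To make the inputs concrete, here is a toy weighted score combining the five factors above. The actual SignalBreak formula is not published, so the weights and multipliers below are purely illustrative:

```python
def risk_score(severity, criticality, volume_share, sensitive, has_fallback):
    """Toy 0-100 score. severity and criticality in 0..1;
    volume_share is the fraction of usage affected."""
    base = 100 * severity * criticality   # how serious x how important
    base *= 0.5 + 0.5 * volume_share      # scale by affected usage
    if sensitive:
        base *= 1.2                       # sensitive workloads weigh more
    if has_fallback:
        base *= 0.6                       # redundancy reduces exposure
    return round(min(base, 100))

print(risk_score(severity=0.8, criticality=0.9, volume_share=1.0,
                 sensitive=False, has_fallback=True))  # → 43
```

Note how the fallback multiplier captures the redundancy factor: the same incident scores lower when a backup binding exists.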
Learn more about risk scoring →
## Governance

### Governance Frameworks
SignalBreak scores your AI governance against three frameworks:
| Framework | Focus | Who Uses It |
|---|---|---|
| ISO 42001 | AI management systems | Organisations seeking certification |
| NIST AI RMF | Risk management | US federal agencies, enterprises |
| EU AI Act | Regulatory compliance | Organisations operating in EU |
### Controls
Controls are specific governance requirements within a framework.
Example ISO 42001 controls:
- "AI system inventory maintained"
- "Provider risk assessment documented"
- "Incident response procedure defined"
SignalBreak maps your configuration to these controls, showing which ones you satisfy.
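Conceptually, control mapping is a set of checks evaluated against your configuration. The sketch below illustrates that idea with made-up predicates — these are not real ISO 42001 clause texts or SignalBreak logic:

```python
# Each control name maps to a predicate over the configuration (all hypothetical).
CONTROLS = {
    "AI system inventory maintained": lambda cfg: len(cfg["workflows"]) > 0,
    "Provider risk assessment documented": lambda cfg: cfg["risk_assessed"],
    "Incident response procedure defined": lambda cfg: cfg["incident_runbook"],
}

def evaluate(cfg):
    """Return a {control name: satisfied?} map for the given configuration."""
    return {name: bool(check(cfg)) for name, check in CONTROLS.items()}

config = {"workflows": ["chatbot"], "risk_assessed": True, "incident_runbook": False}
results = evaluate(config)
print(results["Incident response procedure defined"])  # → False
```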
### Evidence Packs
Evidence packs are reports that document your governance posture:
- Board Report — Executive summary for leadership
- Operational Report — Detailed view for practitioners
- Audit Evidence — Documentation for auditors
## Relationships Summary
```
Tenant (your organisation)
├── Workflows (your AI systems)
│   ├── Bindings (provider connections)
│   │   └── Providers (AI vendors)
│   └── Constraints (compliance rules)
├── Signals (events affecting you)
└── Reports (governance evidence)
```

## Next Steps
Now that you understand the concepts: