Provider Directory
SignalBreak monitors 50+ AI providers across cloud platforms, specialists, and self-hosted solutions.
Provider Tiers
| Tier | Description | Examples |
|---|---|---|
| Tier 1 | Major cloud AI platforms | OpenAI, Anthropic, Google AI, AWS Bedrock, Azure OpenAI |
| Tier 2 | Established specialists | Cohere, Mistral, Stability AI, Hugging Face |
| Tier 3 | Emerging & regional | Smaller providers, open-source platforms |
What We Monitor
For each provider, SignalBreak tracks:
| Source | Signal Types |
|---|---|
| Status Pages | Outages, incidents, maintenance |
| Changelogs | Deprecations, new features, breaking changes |
| Pricing Pages | Cost changes, plan modifications |
| Documentation | API changes, migration guides |
| Social & Community | Early warnings, user reports |
Provider Selection & Model Enablement
SignalBreak requires providers and models to be configured in a specific order. See Getting Started for the full setup sequence.
Step 1: Select Providers
- Navigate to Providers → Directory in the sidebar
- Browse the provider catalogue (50+ providers available)
- Click Select Provider on each provider you use
- Confirm the provider appears in your Selected Providers list
Examples of available providers:
- Cloud AI Platforms: OpenAI, Anthropic, Google AI, Azure OpenAI, AWS Bedrock
- Specialist Providers: Cohere, Mistral AI, Stability AI, Together AI, Replicate
- Open Source Platforms: Hugging Face, Ollama, vLLM
- Regional Providers: Aleph Alpha (EU), Baidu (China), Naver (Korea)
Step 2: Enable Models
After selecting providers, enable the specific models you use:
- Navigate to Providers → Directory → Products tab
- For each selected provider, expand the provider row
- Review available models (e.g., "GPT-4o", "Claude 3.5 Sonnet", "Gemini 2.0 Flash")
- Click Enable on models you use
- Verify enabled models appear in your Products list
Why model enablement matters:
- Required for workflow bindings — You cannot bind a workflow to a model until it's enabled
- API validation — SignalBreak enforces that only enabled models can be used in workflows
- Cost tracking — Enabled models appear in usage dashboards and cost reports
Required Before Creating Bindings
You MUST enable models here before creating workflow bindings. The binding step validates that models are enabled and will fail with a 400 Bad Request error if you skip this step.
See Quick Start: Step 1 - Enable Your Models for detailed instructions.
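The enablement check described above can be sketched as follows. This is a minimal illustration of the server-side validation, not SignalBreak's actual API: the function name, payload shape, and enabled-model set are assumptions for the sketch.

```python
# Mimic the validation described above: a binding request is rejected with a
# 400-style error unless the model was already enabled in Step 2.
# (Hypothetical function and payload shape, for illustration only.)

ENABLED_MODELS = {"gpt-4o", "claude-3-5-sonnet-20241022"}  # from Step 2

def create_binding(workflow_id: str, model_id: str) -> dict:
    """Return a 201-style result if the model is enabled, else a 400-style error."""
    if model_id not in ENABLED_MODELS:
        return {
            "status": 400,
            "error": f"Model '{model_id}' is not enabled. Enable it under "
                     "Providers -> Directory -> Products before binding.",
        }
    return {"status": 201, "binding": {"workflow": workflow_id, "model": model_id}}

print(create_binding("wf-checkout", "gpt-4o")["status"])          # 201
print(create_binding("wf-checkout", "gemini-1.5-pro")["status"])  # 400
```

Enabling a model first, then binding, avoids the 400 Bad Request described above.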
Example enabled models:
- OpenAI: `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo`
- Anthropic: `claude-3-5-sonnet-20241022`, `claude-3-5-haiku-20241022`
- Google: `gemini-2.0-flash-exp`, `gemini-1.5-pro`
Self-Hosted & Discovered Models
SignalBreak supports monitoring self-hosted AI infrastructure alongside cloud providers.
Discovered Models
What they are: AI models you host on your own infrastructure (on-premises, private cloud, or local servers)
Examples:
- Ollama models: `llama3.2:3b`, `mistral-7b-instruct`, `codellama:13b`
- vLLM deployments: Custom model endpoints on your GPU clusters
- Azure AI self-hosted: Models deployed in your Azure tenant
- Custom endpoints: Any OpenAI-compatible API you host
How to Register Discovered Models
Set up self-hosted connection:
- Navigate to Providers → Discovered Models
- Click Add Connection
- Provide connection details (API endpoint, authentication, model discovery method)
Discover models:
- SignalBreak queries your endpoint to detect available models
- Models are automatically added to your Discovered Models list
Use in workflows:
- Bind workflows to discovered models (same process as platform models)
- Configure fallbacks between cloud and self-hosted models
Monitor health:
- SignalBreak polls discovered models for availability and performance
- Detects outages, latency issues, and capacity problems
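The discovery step above (querying your endpoint to detect available models) might look like the sketch below, which parses the two common response shapes: an OpenAI-compatible `GET /v1/models` list and Ollama's `GET /api/tags` list. The payloads shown are representative samples, not live responses.

```python
# Sketch of automatic model discovery: flatten the JSON returned by an
# OpenAI-compatible /v1/models endpoint or Ollama's /api/tags endpoint
# into a list of model IDs.

def discover_models(payload: dict) -> list:
    if "data" in payload:            # OpenAI-compatible /v1/models shape
        return [m["id"] for m in payload["data"]]
    if "models" in payload:          # Ollama /api/tags shape
        return [m["name"] for m in payload["models"]]
    return []                        # unknown shape: nothing discovered

openai_style = {"object": "list", "data": [{"id": "mistral-7b-instruct"}]}
ollama_style = {"models": [{"name": "llama3.2:3b"}, {"name": "codellama:13b"}]}

print(discover_models(openai_style))  # ['mistral-7b-instruct']
print(discover_models(ollama_style))  # ['llama3.2:3b', 'codellama:13b']
```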
Self-Hosted Monitoring Capabilities
| Feature | Platform Models | Discovered Models |
|---|---|---|
| Availability Monitoring | ✅ Provider status pages | ✅ Direct endpoint polling |
| Signal Detection | ✅ Deprecations, pricing, policy | ✅ Availability, performance |
| Scenario Testing | ✅ Test cloud outages | ✅ Test infrastructure failures |
| Workflow Bindings | ✅ Full support | ✅ Full support |
| Fallback Configuration | ✅ Multi-provider fallbacks | ✅ Cloud↔self-hosted fallbacks |
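The "direct endpoint polling" row in the table above can be sketched as a single probe that measures availability and latency, plus a pure classifier that maps the result to a health state. The thresholds and state names are illustrative assumptions, not SignalBreak's actual values.

```python
import time
import urllib.request
import urllib.error

# Illustrative health classification: hypothetical thresholds, not
# SignalBreak's real ones.
def classify(http_status, latency_ms, slow_ms=2000.0):
    if http_status is None or http_status >= 500:
        return "down"
    if latency_ms is not None and latency_ms > slow_ms:
        return "degraded"
    return "healthy"

# One poll of a self-hosted endpoint: time the request, classify the result.
def probe(endpoint, timeout=5.0):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(endpoint, timeout=timeout) as resp:
            return classify(resp.status, (time.monotonic() - start) * 1000)
    except (urllib.error.URLError, OSError):
        return classify(None, None)   # unreachable endpoint counts as down

print(classify(200, 120))    # healthy
print(classify(200, 3500))   # degraded
print(classify(None, None))  # down
```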
Use cases for discovered models:
- Hybrid cloud/on-prem strategy: Use self-hosted for sensitive data, cloud for scale
- Cost optimization: Route low-priority traffic to self-hosted, critical traffic to cloud
- Compliance requirements: Keep regulated workloads on-premises with full control
- Development workflows: Test on local Ollama before deploying to cloud production
Hybrid Fallback Strategy
Configure workflows with both cloud and self-hosted models:
- Primary: Self-hosted model (cost-effective, full control)
- Fallback: Cloud provider (automatic scaling, high availability)
This strategy keeps costs low during normal operation and fails over automatically to the cloud when your infrastructure has issues.
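The primary/fallback routing above reduces to a simple pattern: try the self-hosted endpoint first, and fall back to the cloud provider on failure. The sketch below uses stand-in functions in place of real client calls.

```python
# Sketch of hybrid fallback routing. `primary` and `fallback` are stand-ins
# for real model clients (e.g., an Ollama call and a cloud provider call).

def route(prompt, primary, fallback):
    """Return (source, response); source records which tier answered."""
    try:
        return ("self-hosted", primary(prompt))
    except Exception:
        return ("cloud", fallback(prompt))

def local_llm(prompt):   # stand-in: simulate a self-hosted outage
    raise ConnectionError("self-hosted endpoint unreachable")

def cloud_llm(prompt):   # stand-in for a cloud provider call
    return f"cloud answer to: {prompt}"

source, answer = route("health check", local_llm, cloud_llm)
print(source)  # cloud
```

In a real deployment the failover decision would also consult the health polling described above, rather than waiting for each request to fail.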
Provider Health & Status
SignalBreak continuously monitors provider health across multiple dimensions:
| Health Indicator | Description | Source |
|---|---|---|
| Operational Status | Current uptime/downtime | Provider status pages |
| Incident History | Past 90 days of outages | Incident reports, signals |
| API Performance | Response time trends | SignalBreak probes (Enterprise) |
| Deprecation Schedule | Upcoming model sunsets | Changelogs, announcements |
| Policy Compliance | Terms of service changes | Documentation monitoring |
View provider health:
- Navigate to Providers → Health in the sidebar
- See real-time status indicators for all selected providers
- Click a provider to view detailed health metrics and incident history
See Provider Health for detailed health monitoring documentation.
Provider-Specific Settings
Some providers require additional configuration beyond basic model enablement:
API Key Management
For cloud providers (OpenAI, Anthropic, etc.):
- Store API keys in Settings → Integrations → API Keys
- Keys are encrypted at rest and used for health monitoring (Enterprise plans)
- Optional: SignalBreak can monitor your API usage and costs via provider APIs
Self-Hosted Connection Settings
For discovered models (Ollama, vLLM, custom endpoints):
- Endpoint URL: Where SignalBreak should connect (e.g., `http://ollama.internal:11434`)
- Authentication: API key, bearer token, or none (for internal endpoints)
- Discovery method:
  - Automatic: SignalBreak queries `/v1/models` or the Ollama list endpoint
  - Manual: You specify exact model IDs to track
- Polling frequency: How often to check health (1 min, 5 min, 15 min, 1 hour)
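Taken together, a self-hosted connection record might look like the sketch below, with a validation pass over the fields listed above. The field names and checks are assumptions for illustration, not SignalBreak's actual schema.

```python
# Hypothetical shape of a self-hosted connection record, with a validation
# check covering the settings described above. Field names are illustrative.

ALLOWED_POLL_MINUTES = {1, 5, 15, 60}   # 1 min, 5 min, 15 min, 1 hour

def validate_connection(conn: dict) -> list:
    """Return a list of validation errors; empty means the record is usable."""
    errors = []
    if not str(conn.get("endpoint_url", "")).startswith(("http://", "https://")):
        errors.append("endpoint_url must be an http(s) URL")
    if conn.get("discovery") not in {"automatic", "manual"}:
        errors.append("discovery must be 'automatic' or 'manual'")
    if conn.get("discovery") == "manual" and not conn.get("model_ids"):
        errors.append("manual discovery requires explicit model_ids")
    if conn.get("poll_minutes") not in ALLOWED_POLL_MINUTES:
        errors.append("poll_minutes must be one of 1, 5, 15, 60")
    return errors

conn = {
    "endpoint_url": "http://ollama.internal:11434",
    "auth": None,                 # none is acceptable for internal endpoints
    "discovery": "automatic",
    "poll_minutes": 5,
}
print(validate_connection(conn))  # []
```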
Regional Configuration
For multi-region providers (Azure OpenAI, AWS Bedrock):
- Specify deployment regions (e.g., "US East", "EU West")
- SignalBreak tracks region-specific outages and deprecations
- Scenario testing can target specific regional deployments
Full Provider Catalogue
Access the Complete Directory
The complete provider catalogue with health status, capabilities, documentation links, and model details is available in the platform:
Providers → Directory
Features:
- 50+ AI providers across all tiers
- Real-time health indicators
- Model availability and version tracking
- Provider comparison (pricing, features, regions)
- Direct links to provider documentation