Provider Directory

SignalBreak monitors 50+ AI providers across cloud platforms, specialists, and self-hosted solutions.

Provider Tiers

| Tier | Description | Examples |
| --- | --- | --- |
| Tier 1 | Major cloud AI platforms | OpenAI, Anthropic, Google AI, AWS Bedrock, Azure OpenAI |
| Tier 2 | Established specialists | Cohere, Mistral, Stability AI, Hugging Face |
| Tier 3 | Emerging & regional | Smaller providers, open-source platforms |

What We Monitor

For each provider, SignalBreak tracks:

| Source | Signal Types |
| --- | --- |
| Status Pages | Outages, incidents, maintenance |
| Changelogs | Deprecations, new features, breaking changes |
| Pricing Pages | Cost changes, plan modifications |
| Documentation | API changes, migration guides |
| Social & Community | Early warnings, user reports |

Provider Selection & Model Enablement

SignalBreak requires providers and models to be configured in a specific order. See Getting Started for the full setup sequence.

Step 1: Select Providers

  1. Navigate to Providers → Directory in the sidebar
  2. Browse the provider catalog (50+ providers available)
  3. Click Select Provider on each provider you use
  4. Confirm the provider appears in your Selected Providers list

Examples of available providers:

  • Cloud AI Platforms: OpenAI, Anthropic, Google AI, Azure OpenAI, AWS Bedrock
  • Specialist Providers: Cohere, Mistral AI, Stability AI, Together AI, Replicate
  • Open Source Platforms: Hugging Face, Ollama, vLLM
  • Regional Providers: Aleph Alpha (EU), Baidu (China), Naver (Korea)

Step 2: Enable Models

After selecting providers, enable the specific models you use:

  1. Navigate to Providers → Directory → Products tab
  2. For each selected provider, expand the provider row
  3. Review available models (e.g., "GPT-4o", "Claude 3.5 Sonnet", "Gemini 2.0 Flash")
  4. Click Enable on models you use
  5. Verify enabled models appear in your Products list

Why model enablement matters:

  • Required for workflow bindings — You cannot bind a workflow to a model until it's enabled
  • API validation — SignalBreak enforces that only enabled models can be used in workflows
  • Cost tracking — Enabled models appear in usage dashboards and cost reports

Required Before Creating Bindings

You MUST enable models here before creating workflow bindings. The binding step validates that models are enabled and will fail with a 400 Bad Request error if you skip this step.

See Quick Start: Step 1 - Enable Your Models for detailed instructions.

Example enabled models:

  • OpenAI: gpt-4o, gpt-4o-mini, gpt-3.5-turbo
  • Anthropic: claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
  • Google: gemini-2.0-flash-exp, gemini-1.5-pro
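Because the binding step rejects models that are not enabled, it can help to validate on the client side before calling the API. A minimal sketch using the example model lists above; the helper name and error message are illustrative, not part of SignalBreak's API:

```python
# Guard against the 400 Bad Request described above by checking a model
# against your enabled-model list before attempting a workflow binding.
# Model IDs mirror the examples in this section.

ENABLED_MODELS = {
    "gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo",                    # OpenAI
    "claude-3-5-sonnet-20241022", "claude-3-5-haiku-20241022",   # Anthropic
    "gemini-2.0-flash-exp", "gemini-1.5-pro",                    # Google
}

def ensure_enabled(model_id: str) -> str:
    """Raise early (client side) instead of waiting for the server's 400."""
    if model_id not in ENABLED_MODELS:
        raise ValueError(
            f"Model '{model_id}' is not enabled; enable it under "
            "Providers > Directory > Products before creating a binding."
        )
    return model_id
```

Calling `ensure_enabled("gpt-4o")` returns the ID unchanged; an unknown ID raises before any API call is made.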

Self-Hosted & Discovered Models

SignalBreak supports monitoring self-hosted AI infrastructure alongside cloud providers.

Discovered Models

What they are: AI models you host on your own infrastructure (on-premises, private cloud, or local servers)

Examples:

  • Ollama models: llama3.2:3b, mistral-7b-instruct, codellama:13b
  • vLLM deployments: Custom model endpoints on your GPU clusters
  • Azure AI self-hosted: Models deployed in your Azure tenant
  • Custom endpoints: Any OpenAI-compatible API you host
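For OpenAI-compatible and Ollama endpoints, automatic discovery amounts to listing models over HTTP and extracting their IDs. A small sketch of parsing the two common response shapes (the OpenAI-style `/v1/models` list and Ollama's tag listing); the sample payload is illustrative:

```python
def discover_models(payload: dict) -> list[str]:
    """Extract model IDs from either an OpenAI-compatible /v1/models
    response ({"data": [{"id": ...}]}) or an Ollama /api/tags response
    ({"models": [{"name": ...}]})."""
    if "data" in payload:
        return [m["id"] for m in payload["data"]]
    if "models" in payload:
        return [m["name"] for m in payload["models"]]
    return []

# Example: an Ollama-style listing (sample payload, not a live call)
sample = {"models": [{"name": "llama3.2:3b"}, {"name": "codellama:13b"}]}
print(discover_models(sample))  # ['llama3.2:3b', 'codellama:13b']
```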

How to Register Discovered Models

  1. Set up self-hosted connection:

    • Navigate to Providers → Discovered Models
    • Click Add Connection
    • Provide connection details (API endpoint, authentication, model discovery method)
  2. Discover models:

    • SignalBreak queries your endpoint to detect available models
    • Models are automatically added to your Discovered Models list
  3. Use in workflows:

    • Bind workflows to discovered models (same process as platform models)
    • Configure fallbacks between cloud and self-hosted models
  4. Monitor health:

    • SignalBreak polls discovered models for availability and performance
    • Detects outages, latency issues, and capacity problems
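The health polling in step 4 can be sketched as a single availability probe; the returned field names are illustrative assumptions, not SignalBreak's schema:

```python
import time
import urllib.error
import urllib.request

def probe(endpoint: str, timeout: float = 5.0) -> dict:
    """One availability probe against a self-hosted endpoint.

    Any connection failure, timeout, or HTTP error status counts as
    unavailable; latency is wall-clock time for a successful response.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(endpoint, timeout=timeout):
            latency_ms = round((time.monotonic() - start) * 1000)
            return {"available": True, "latency_ms": latency_ms}
    except (urllib.error.URLError, TimeoutError, OSError):
        return {"available": False, "latency_ms": None}

# Example against a hypothetical internal Ollama host:
# probe("http://ollama.internal:11434")
```

A monitor would run this in a loop at the configured polling frequency and raise a signal when availability flips or latency trends upward.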

Self-Hosted Monitoring Capabilities

| Feature | Platform Models | Discovered Models |
| --- | --- | --- |
| Availability Monitoring | ✅ Provider status pages | ✅ Direct endpoint polling |
| Signal Detection | ✅ Deprecations, pricing, policy | ✅ Availability, performance |
| Scenario Testing | ✅ Test cloud outages | ✅ Test infrastructure failures |
| Workflow Bindings | ✅ Full support | ✅ Full support |
| Fallback Configuration | ✅ Multi-provider fallbacks | ✅ Cloud↔self-hosted fallbacks |

Use cases for discovered models:

  • Hybrid cloud/on-prem strategy: Use self-hosted for sensitive data, cloud for scale
  • Cost optimization: Route low-priority traffic to self-hosted, critical traffic to cloud
  • Compliance requirements: Keep regulated workloads on-premises with full control
  • Development workflows: Test on local Ollama before deploying to cloud production

Hybrid Fallback Strategy

Configure workflows with both cloud and self-hosted models:

  • Primary: Self-hosted model (cost-effective, full control)
  • Fallback: Cloud provider (automatic scaling, high availability)

This strategy provides cost savings during normal operation with automatic cloud failover during infrastructure issues.
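A minimal sketch of this primary/fallback routing, with stub callables standing in for real model clients (the names are illustrative):

```python
def route(prompt, primary, fallback):
    """Try the self-hosted primary first; fail over to the cloud fallback
    on any error (outage, timeout, capacity)."""
    try:
        return ("primary", primary(prompt))
    except Exception:
        return ("fallback", fallback(prompt))

# Usage with stub clients:
ollama = lambda p: f"[ollama] {p}"

def down(p):
    raise ConnectionError("self-hosted endpoint unreachable")

print(route("hi", ollama, lambda p: "[cloud]"))  # ('primary', '[ollama] hi')
print(route("hi", down, lambda p: "[cloud]"))    # ('fallback', '[cloud]')
```

In practice the fallback path would also emit a signal so the infrastructure issue is investigated rather than silently absorbed.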


Provider Health & Status

SignalBreak continuously monitors provider health across multiple dimensions:

| Health Indicator | Description | Source |
| --- | --- | --- |
| Operational Status | Current uptime/downtime | Provider status pages |
| Incident History | Past 90 days of outages | Incident reports, signals |
| API Performance | Response time trends | SignalBreak probes (Enterprise) |
| Deprecation Schedule | Upcoming model sunsets | Changelogs, announcements |
| Policy Compliance | Terms of service changes | Documentation monitoring |

View provider health:

  1. Navigate to Providers → Health in the sidebar
  2. See real-time status indicators for all selected providers
  3. Click a provider to view detailed health metrics and incident history

See Provider Health for detailed health monitoring documentation.


Provider-Specific Settings

Some providers require additional configuration beyond basic model enablement:

API Key Management

For cloud providers (OpenAI, Anthropic, etc.):

  • Store API keys in Settings → Integrations → API Keys
  • Keys are encrypted at rest and used for health monitoring (Enterprise plans)
  • Optional: SignalBreak can monitor your API usage and costs via provider APIs

Self-Hosted Connection Settings

For discovered models (Ollama, vLLM, custom endpoints):

  • Endpoint URL: Where SignalBreak should connect (e.g., http://ollama.internal:11434)
  • Authentication: API key, bearer token, or none (for internal endpoints)
  • Discovery method:
    • Automatic: SignalBreak queries /v1/models or Ollama list endpoint
    • Manual: You specify exact model IDs to track
  • Polling frequency: How often to check health (1 min, 5 min, 15 min, 1 hour)
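Taken together, these settings might look like the following record; the key names are assumptions for illustration, not SignalBreak's actual configuration schema:

```python
# Illustrative self-hosted connection record mirroring the fields above.
connection = {
    "endpoint_url": "http://ollama.internal:11434",
    "auth": {"type": "none"},        # or {"type": "bearer", "token": "..."}
    "discovery": "automatic",        # automatic: query /v1/models or Ollama's list
    "polling_frequency_minutes": 5,  # 1, 5, 15, or 60
}

assert connection["polling_frequency_minutes"] in (1, 5, 15, 60)
```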

Regional Configuration

For multi-region providers (Azure OpenAI, AWS Bedrock):

  • Specify deployment regions (e.g., "US East", "EU West")
  • SignalBreak tracks region-specific outages and deprecations
  • Scenario testing can target specific regional deployments

Full Provider Catalog

Access the Complete Directory

The complete provider catalog with health status, capabilities, documentation links, and model details is available in the platform:

Providers → Directory

Features:

  • 50+ AI providers across all tiers
  • Real-time health indicators
  • Model availability and version tracking
  • Provider comparison (pricing, features, regions)
  • Direct links to provider documentation