Glossary

A comprehensive reference of terms used in SignalBreak.


A

AI Capability

The type of AI task performed by a workflow. SignalBreak supports 10 capability types:

| Capability | Description | Examples |
|---|---|---|
| text_generation | Text completion, chat, summarisation | GPT-4o, Claude 3.5 Sonnet |
| image_generation | Text-to-image synthesis | DALL-E 3, Stable Diffusion |
| image_to_text | Image captioning, OCR, visual Q&A | GPT-4 Vision, Claude 3.5 Sonnet |
| text_to_speech | Speech synthesis from text | OpenAI TTS, ElevenLabs |
| speech_to_text | Audio transcription, voice recognition | Whisper, Deepgram |
| embeddings | Vector representations for semantic search | text-embedding-3, Cohere Embed |
| reasoning | Multi-step problem solving, chain-of-thought | o1, o1-mini |
| code_generation | Code completion, code explanation | GPT-4o, Claude 3.5 Sonnet |
| vision | Image understanding, visual reasoning | GPT-4 Vision, Gemini Pro Vision |
| multimodal | Combined text, image, audio processing | Gemini 1.5 Pro, Claude 3.5 Sonnet |

See: Workflows & Provider Bindings


AI System

A discrete AI application or service that serves a specific business function. Multiple workflows may belong to a single AI system.

Example: "Customer Support AI" system contains:

  • Chatbot workflow
  • Email response workflow
  • Ticket classification workflow

EU AI Act Context: AI systems are classified into risk tiers (Prohibited, High-Risk, Limited-Risk, Minimal-Risk) for compliance purposes.

See: EU AI Act Guide


AI System Safety (MIT Domain 7)

One of seven MIT AI Risk Repository domains, covering system failures, loss of control, and existential risks.

Subdomains:

  • 7.1 — AI pursuing goals in conflict with human values
  • 7.2 — AI possessing dangerous capabilities
  • 7.3 — Lack of capability or robustness
  • 7.4 — Lack of transparency or interpretability
  • 7.5 — AI welfare and rights

Common Signals: Model deprecations, API outages, performance degradation

See: MIT Risk Framework


Amber Status

Risk status indicating moderate risk (RAG: Red/Amber/Green).

Definition: Risk score 30-70
Action Required: Review within 24 hours, plan mitigation
UI Indicator: Yellow/orange badge

See: Risk Scoring Methodology


API Key

Credential used to authenticate with AI provider APIs. SignalBreak stores API keys encrypted in Supabase Vault for self-hosted discovery.

Security: SignalBreak never stores API keys for cloud providers (OpenAI, Anthropic, etc.)—only for self-hosted connections.

Future: SignalBreak API keys for programmatic access (planned Q2 2026).

See: Self-Hosted Connections


Audit Trail

Timestamped record of all user actions in SignalBreak.

Tracked Events:

  • Workflow created/updated/deleted
  • Scenario executed
  • Provider binding modified
  • Evidence pack generated
  • User invited/removed

Export: CSV format via Settings → Audit Log → Export

Retention: 365 days (all plans)

See: API Reference - Audit


B

Binding Role

The role of a provider binding in a workflow's failover strategy.

| Role | Description | Failover Behavior |
|---|---|---|
| Primary | Main provider for this workflow | First choice for all requests |
| Fallback | Backup provider | Used after 3 primary failures |
| Experiment | A/B test provider | Used for a specified % of traffic |

Best Practice: Every mission-critical workflow should have Primary + Fallback bindings.

See: Workflows & Provider Bindings


C

Capability

See: AI Capability


Change Type

Classification of provider signals into five categories:

| Type | Description | Severity Bias | Examples |
|---|---|---|---|
| deprecation | End-of-life announcements | Warning/Critical | GPT-3.5 Turbo sunset |
| policy | Terms of service updates | Warning/Info | Usage policy restricted |
| pricing | Cost changes | Info/Warning | 20% price increase |
| capability | New features or models | Info | Claude 3.7 launched |
| incident | Outages or degraded performance | Critical/Warning | API downtime |

Detection: Rules-based keyword matching + Claude AI interpretation

See: Provider Signals


Circuit Breaker

Failover mechanism that automatically switches to fallback provider after detecting repeated failures.

Threshold: 3 consecutive failed requests to primary provider
Action: Switch to fallback provider
Reset: After 5 minutes of restored primary provider health

Configuration: Automatic (no user configuration required)
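
A minimal TypeScript sketch of this logic, using the threshold and reset window documented above (class and field names are illustrative, not SignalBreak's implementation):

```typescript
// Illustrative circuit breaker: trips after 3 consecutive primary failures
// and re-arms once the primary has stayed healthy for 5 minutes.
class CircuitBreaker {
  private failures = 0;
  private tripped = false;
  private healthySince: number | null = null;
  private readonly threshold = 3;           // consecutive failures before tripping
  private readonly resetMs = 5 * 60 * 1000; // 5 minutes of restored health

  recordFailure(): void {
    this.failures += 1;
    this.healthySince = null;
    if (this.failures >= this.threshold) this.tripped = true;
  }

  recordSuccess(now: number = Date.now()): void {
    this.failures = 0;
    if (!this.tripped) return;
    this.healthySince ??= now;                     // primary is recovering
    if (now - this.healthySince >= this.resetMs) {
      this.tripped = false;                        // route traffic back to primary
      this.healthySince = null;
    }
  }

  get useFallback(): boolean {
    return this.tripped;
  }
}
```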

See: Workflows & Provider Bindings


Classification

Process of categorizing provider signals by change type and severity.

Stage 1: Rules-based keyword matching
Stage 2: Claude AI interpretation (refines classification)
Stage 3: MIT domain mapping (Ollama classifier)

Confidence Score: 0.0-1.0 (higher = more certain)

See: Provider Signals


Compliance Framework

Formal standard or regulation for AI governance.

SignalBreak Supported:

  1. ISO 42001 — AI Management System (certifiable)
  2. NIST AI RMF — US federal AI risk framework
  3. EU AI Act — Mandatory EU regulation
  4. SOC 2 — Security controls
  5. ISO 27001 — Information security

See: Governance Overview


Concentration Risk

Risk arising from over-dependence on a single AI provider.

Calculation:

Concentration % = (Workflows using primary provider) / (Total workflows) × 100

Risk Levels:

  • Low: < 40% (diversified)
  • Medium: 40-60% (moderate concentration)
  • High: > 60% (single-provider dependency)

Mitigation: Configure fallback bindings to distribute risk across providers.
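
The calculation and thresholds above, as a TypeScript sketch (function names are ours):

```typescript
// Concentration % = (workflows using primary provider) / (total workflows) × 100
function concentrationPercent(onPrimary: number, total: number): number {
  return total === 0 ? 0 : (onPrimary / total) * 100;
}

// Thresholds from this entry: < 40% low, 40-60% medium, > 60% high.
function concentrationLevel(pct: number): 'low' | 'medium' | 'high' {
  if (pct < 40) return 'low';
  if (pct <= 60) return 'medium';
  return 'high';
}

concentrationLevel(concentrationPercent(7, 10)); // 7 of 10 workflows on one provider → 'high'
```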

See: Governance Scorecard


Criticality

Business impact classification for workflows.

| Level | Definition | RTO Target | Fallback Required? |
|---|---|---|---|
| Mission-Critical | Customer-facing, revenue-generating | < 5 minutes | Yes (mandatory) |
| Important | Internal tools, automation | < 1 hour | Recommended |
| Nice-to-Have | Non-essential features | < 24 hours | Optional |

Impact: Used in risk score calculation (Mission-Critical workflows weighted 4× higher).

See: Workflows & Provider Bindings


D

Decision Readiness Framework

Five-dimension scoring system for assessing AI governance maturity.

Dimensions:

  1. Workflow Coverage — % of AI systems documented
  2. Fallback Readiness — % of critical workflows with fallback
  3. Provider Diversity — Inverse of concentration risk
  4. Signal Response — Average scenario creation time after critical signals
  5. Governance Maturity — Framework adoption percentage

Aggregate Score: 0-100 (weighted average)

See: Governance Overview


Deprecation

Planned end-of-life for an AI model, API, or feature.

Typical Timeline: 3-12 months advance notice
Risk: Critical/Warning (high impact)
Action Required: Migrate to replacement model before cutoff date

Example Signals:

  • "OpenAI deprecating GPT-3.5 Turbo on 2025-06-30"
  • "Anthropic retiring Claude 2.0 API endpoints"

See: Provider Signals - Deprecation


Discovered Model

AI model detected via self-hosted discovery (Ollama, vLLM, etc.).

Discovery Process:

  1. Connect to self-hosted endpoint
  2. Query /models or /v1/models API
  3. Extract model identifiers and metadata
  4. Create tenant-scoped model records

Usage: Discovered models can be used in workflow provider bindings.
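
A minimal sketch of step 2, assuming an OpenAI-compatible /v1/models endpoint (the shape exposed by vLLM and by Ollama's compatibility layer); this is not SignalBreak's discovery code:

```typescript
// Query an OpenAI-compatible /v1/models endpoint and extract model IDs.
// Assumed response shape: { data: [{ id: string, ... }] }.
async function discoverModels(endpoint: string, apiKey?: string): Promise<string[]> {
  const res = await fetch(`${endpoint}/v1/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined,
  });
  if (!res.ok) throw new Error(`Discovery failed: HTTP ${res.status}`);
  const body = (await res.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => m.id); // e.g. ["llama3.2:3b", "mistral:7b"]
}
```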

See: Self-Hosted Connections


Domain (MIT Risk)

Top-level category in MIT AI Risk Repository (7 domains total).

| Domain | Name | Focus |
|---|---|---|
| 1 | Discrimination & Toxicity | Bias, harmful content |
| 2 | Privacy & Security | Data breaches, attacks |
| 3 | Misinformation | False content, deepfakes |
| 4 | Malicious Actors | Cyberattacks, fraud |
| 5 | Human-Computer Interaction | Overreliance, agency loss |
| 6 | Socioeconomic & Environmental | Jobs, inequality, resources |
| 7 | AI System Safety | Failures, control loss |

Purpose: Contextualise provider signals with real-world AI risk categories.

See: MIT Risk Framework


E

Embeddings

Vector representations of text for semantic search and similarity.

Use Cases:

  • Document search
  • Recommendation systems
  • Clustering and classification

Common Models:

  • OpenAI text-embedding-3-small/large
  • Cohere embed-english-v3.0
  • Voyage AI voyage-large-2

Workflow Capability: embeddings
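
To make "semantic search" concrete: similarity between two embedding vectors is typically measured with cosine similarity, as in this self-contained sketch:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// 1 = same direction (similar meaning), 0 = unrelated.
// Search ranks documents by this score against a query embedding.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```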


Enrichment

Process of adding context and analysis to provider signals using LLMs.

Enrichment Pipeline (v2):

  1. Interpretation — Severity, categories, affected components
  2. MIT Domains — Risk domain mapping (1-3 domains)
  3. Model Impacts — Specific models affected

LLM Gateway: Claude 3.5 Sonnet → GPT-4o mini → Ollama (fallback chain)

Latency: 30-60 seconds per signal

See: Provider Signals - Stage 5 Enrichment


EU AI Act

Regulation (EU) 2024/1689 — Mandatory AI regulation in the European Union.

Risk Tiers:

  1. Prohibited — Banned AI uses (social scoring, subliminal manipulation)
  2. High-Risk — Strict requirements (safety components, biometric ID, critical infrastructure)
  3. Limited-Risk — Transparency obligations (chatbots, deepfakes)
  4. Minimal-Risk — No specific obligations (spam filters, video games)

Penalties:

  • Prohibited AI: Up to €35M or 7% global turnover
  • Non-compliance: Up to €15M or 3% global turnover

Phase-in Timeline: Feb 2025 (prohibited practices), Aug 2025 (general-purpose AI obligations), Aug 2026 (high-risk and transparency obligations), Aug 2027 (high-risk systems embedded in regulated products)

See: EU AI Act Guide


Evidence Pack

Consulting-grade PDF report demonstrating AI governance maturity.

Contents (10 Sections):

  1. Executive Summary
  2. Governance Scorecard
  3. Provider Dependency Analysis
  4. Signal Analysis
  5. Key Findings
  6. ISO 42001 Mapping
  7. NIST AI RMF Mapping
  8. EU AI Act Readiness
  9. Remediation Roadmap
  10. Methodology

Generation Time: 30-60 seconds
Length: 40-80 pages (PDF)
Frequency: Monthly recommended

Use Cases: Audits, board reports, RFP responses

See: Evidence Packs Guide


F

Fallback Binding

Backup AI provider configured for a workflow to ensure uptime during primary provider outages.

Types:

  • Automatic — Circuit breaker triggers failover after 3 failures
  • Manual — User manually switches provider

Best Practice: Configure automatic fallback for mission-critical workflows.

See: Workflows & Provider Bindings


Feature Gate

Billing limit enforcement mechanism that restricts access based on subscription tier.

Gated Features:

| Feature | Free | Professional | Enterprise |
|---|---|---|---|
| Workflows | 5 | 50 | Unlimited |
| Scenarios | 10 | 100 | Unlimited |
| Signal History | 7 days | 90 days | Unlimited |
| Evidence Packs | 1/month | Unlimited | Unlimited |

403 Response: Returns an error when a limit is exceeded; the response body includes requiresUpgrade: true.
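
A sketch of how a client might handle the gate. Only the requiresUpgrade flag is documented here; the endpoint path and the rest of the response shape are assumptions for illustration:

```typescript
// Hypothetical client-side handling of a feature-gate 403.
// '/api/workflows' is an assumed endpoint path, not a documented one.
async function createWorkflow(payload: object): Promise<unknown> {
  const res = await fetch('/api/workflows', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (res.status === 403) {
    const body = (await res.json()) as { requiresUpgrade?: boolean };
    if (body.requiresUpgrade) throw new Error('Plan limit reached: upgrade required.');
  }
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  return res.json();
}
```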

See: API Reference - Feature Gates


G

Governance Scorecard

Dashboard summarising AI governance maturity metrics.

Components:

  1. Risk Score (0-100) — Aggregate risk level
  2. RAG Status — Red/Amber/Green classification
  3. Provider Concentration — Single-provider dependency %
  4. Top Exposures — Highest-severity scenario impacts
  5. Active Signals — Recent critical/warning signals

API Endpoint: GET /api/governance/scorecard
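
A minimal call to this endpoint. The path is documented above; the typed response fields are inferred from the component list and may differ in practice:

```typescript
// Fields inferred from the scorecard components above; names are assumptions.
interface Scorecard {
  riskScore: number;                     // 0-100
  ragStatus: 'red' | 'amber' | 'green';
  providerConcentration: number;         // single-provider dependency %
}

async function fetchScorecard(token: string): Promise<Scorecard> {
  const res = await fetch('/api/governance/scorecard', {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Scorecard request failed: HTTP ${res.status}`);
  return (await res.json()) as Scorecard;
}
```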

See: API Reference - Governance


Green Status

Risk status indicating low risk (RAG: Red/Amber/Green).

Definition: Risk score < 30
Action Required: Continue monitoring (no immediate action)
UI Indicator: Green badge

See: Risk Scoring Methodology


H

Health Status

Current operational state of an AI provider's services.

Status Values:

  • operational — All systems functioning normally
  • degraded — Reduced performance or partial outage
  • major_outage — Significant service disruption
  • under_maintenance — Planned downtime

Monitoring Frequency: Every 5 minutes (status pages)

Sources:

  • Provider status pages (Statuspage.io JSON API)
  • API response time monitoring
  • Community reports (Reddit, Twitter/X)

See: Provider Health Monitoring


Human-in-Loop

Configuration indicating whether human review is required before AI outputs are used.

Values:

  • true — Human must review AI output before use
  • false — AI output used directly without review

Impact: High criticality + no human-in-loop = higher risk score

Regulatory Context: EU AI Act requires human oversight for high-risk AI systems.

See: EU AI Act - Article 14


I

Impact Score

Calculated risk score (0-240) for a workflow affected by a provider signal.

Formula:

Impact Score = Signal Severity × Workflow Criticality × Binding Centrality

Components:

  • Signal Severity: Critical=40, Warning=25, Info=10
  • Workflow Criticality: Mission-Critical=×4, Important=×2, Nice-to-Have=×1
  • Binding Centrality: Primary=×1.5, Fallback=×0.5

RAG Thresholds:

  • Red: ≥ 160 (immediate action)
  • Amber: 80-159 (plan response within 24h)
  • Green: < 80 (monitor)

Example:

Critical signal (40) × Mission-Critical workflow (×4) × Primary binding (×1.5) = 240 (Red)
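
The formula and thresholds above, expressed as a TypeScript sketch:

```typescript
const SEVERITY = { critical: 40, warning: 25, info: 10 } as const;
const CRITICALITY = { 'mission-critical': 4, important: 2, 'nice-to-have': 1 } as const;
const CENTRALITY = { primary: 1.5, fallback: 0.5 } as const;

// Impact Score = Signal Severity × Workflow Criticality × Binding Centrality
function impactScore(
  severity: keyof typeof SEVERITY,
  criticality: keyof typeof CRITICALITY,
  role: keyof typeof CENTRALITY,
): number {
  return SEVERITY[severity] * CRITICALITY[criticality] * CENTRALITY[role];
}

// RAG thresholds from this entry (0-240 scale).
function ragStatus(score: number): 'red' | 'amber' | 'green' {
  if (score >= 160) return 'red';
  if (score >= 80) return 'amber';
  return 'green';
}

impactScore('critical', 'mission-critical', 'primary'); // 40 × 4 × 1.5 = 240 → 'red'
```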

See: Provider Signals - Impact Analysis


Incident (Signal Type)

Provider signal indicating service outage, degraded performance, or operational issue.

Severity: Critical or Warning (high priority)
Typical Duration: Minutes to hours
Action Required: Activate fallback provider immediately (if automatic failover not configured)

Example Signals:

  • "OpenAI API experiencing elevated error rates (15-minute outage)"
  • "AWS Bedrock: Intermittent 503 errors in us-east-1"

See: Provider Signals - Incident


ISO 42001

ISO/IEC 42001:2023 — AI Management System standard.

Status: Certifiable (requires third-party audit)
Scope: AI lifecycle management (development, deployment, monitoring, decommissioning)
Key Clauses: 10 core requirements (4.1-10.2)

Certification:

  • Timeline: 12-18 months
  • Cost: £18k-43k (with SignalBreak evidence)
  • Auditor: Accredited certification body (BSI, LRQA, etc.)

SignalBreak Coverage: 8/10 clauses fully supported, 2/10 partial

See: ISO 42001 Guide


L

LLM Gateway

Multi-provider fallback chain for LLM-powered enrichment.

Providers (Priority Order):

  1. Claude 3.5 Sonnet (Anthropic) — Primary
  2. GPT-4o mini (OpenAI) — Secondary
  3. Llama 3.2 3B (Ollama self-hosted) — Tertiary

Failover Logic: Circuit breaker switches after 3 consecutive errors

Use Cases:

  • Signal interpretation (Claude Haiku)
  • Signal enrichment (Claude Sonnet)
  • MIT domain classification (Ollama)
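
A simplified per-request sketch of a fallback chain. The documented gateway uses a circuit breaker that switches after 3 consecutive errors; this version just tries providers in priority order, and the call signature is an assumption:

```typescript
type LlmCall = (prompt: string) => Promise<string>;

// Try each provider in priority order, degrading to the next on failure.
async function callWithFallback(prompt: string, chain: LlmCall[]): Promise<string> {
  let lastError: unknown;
  for (const call of chain) {
    try {
      return await call(prompt); // first healthy provider wins
    } catch (err) {
      lastError = err;           // fall through to the next provider
    }
  }
  throw lastError;               // every provider in the chain failed
}
```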

See: Provider Signals - Enrichment


M

MIT AI Risk Repository

Curated database of 1,328 real-world AI incidents and 831 mitigation strategies.

Structure:

  • 7 domains (top-level categories)
  • 24 subdomains (specific risk types)
  • 1,328 incidents (historical failures)
  • 831 mitigations (remediation strategies)

Purpose: Contextualise provider signals with historical AI risk patterns.

Source: MIT AI Risk Repository

SignalBreak Integration: Automatic signal classification into MIT domains using Ollama.

See: MIT Risk Framework


Mitigation

Action taken to reduce or eliminate a risk.

Context in SignalBreak:

  1. Scenario Mitigation — User-defined actions for risk scenarios
  2. MIT Mitigation — Pre-defined strategies from MIT AI Risk Repository

Example Scenario Mitigations:

  • "Migrate workflows from GPT-3.5 to GPT-4o mini by March 2025"
  • "Implement automatic fallback to Anthropic Claude"
  • "Add human-in-loop review for customer-facing outputs"

See: Scenarios


Model (AI)

Specific version of an AI capability offered by a provider.

Examples:

  • OpenAI: gpt-4o, gpt-4o-mini, text-embedding-3-small
  • Anthropic: claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
  • Google: gemini-1.5-pro, gemini-1.5-flash

SignalBreak Context:

  • Workflows bind to specific models via provider bindings
  • Signals may reference specific models (e.g., "GPT-3.5 Turbo deprecation")
  • Self-hosted discovery detects available models

See: Workflows & Provider Bindings


Multimodal

AI capability that processes multiple modalities (text, image, audio, video) in a single request.

Examples:

  • Gemini 1.5 Pro (text + image + audio)
  • Claude 3.5 Sonnet (text + image)
  • GPT-4o (text + image + audio)

Use Cases:

  • Document understanding with diagrams
  • Video content analysis
  • Audio transcription with context

Workflow Capability: multimodal


N

NIST AI RMF

NIST AI Risk Management Framework — US federal guidance for AI risk management.

Structure: 4 core functions

  1. GOVERN — Policies, culture, accountability
  2. MAP — Context, risks, impacts
  3. MEASURE — Metrics, testing, monitoring
  4. MANAGE — Response, recovery, communication

Subcategories: 43 total across 4 functions

Compliance: Voluntary framework, but mandatory for US federal agencies (Executive Order 14110)

Attestation: Self-certification (not third-party audit like ISO 42001)

See: NIST AI RMF Guide


O

Ollama

Open-source, self-hosted LLM platform.

SignalBreak Usage:

  • MIT domain classification (Llama 3.2 3B)
  • Self-hosted model discovery
  • LLM gateway tertiary fallback

Discovery: SignalBreak can connect to Ollama endpoints to detect available models.

See: Self-Hosted Connections


Outage

Complete or partial unavailability of an AI provider's services.

Detection: Provider status pages, API health checks
Severity: Critical (if affecting production systems)
Response: Automatic failover to fallback provider (if configured)

Example Signals:

  • "OpenAI API down globally (complete outage)"
  • "AWS Bedrock degraded performance in eu-west-1 (partial outage)"

See: Provider Health Monitoring


P

Policy (Signal Type)

Provider signal indicating changes to terms of service, usage policies, or compliance requirements.

Severity: Info to Warning (depending on impact)
Typical Timeline: Immediate to 30 days
Risk: Compliance violation if usage doesn't align with new policy

Example Signals:

  • "OpenAI updated Usage Policy to restrict military applications"
  • "Anthropic introduced new data retention requirements"

See: Provider Signals - Policy


Pricing (Signal Type)

Provider signal indicating cost changes, billing updates, or tier modifications.

Severity: Info to Warning (if price increase exceeds 20%)
Impact: Budget forecasting, ROI calculations
Action Required: Recalculate AI spend, evaluate alternatives

Example Signals:

  • "Anthropic increasing Claude 3.5 Sonnet pricing to $3.00/MTok"
  • "Google reducing Gemini Pro pricing by 40%"

See: Provider Signals - Pricing


Provider

AI service provider offering models or APIs.

SignalBreak Monitored (8 providers):

  1. OpenAI
  2. Anthropic
  3. AWS Bedrock
  4. Google AI (Gemini)
  5. Cohere
  6. AI21 Labs
  7. Mistral AI
  8. Perplexity

Self-Hosted: Ollama, vLLM, LM Studio, TGI (via Self-Hosted Connections)

See: Providers


Provider Binding

Connection between a workflow and an AI provider/model.

Components:

  • Workflow ID — Which workflow uses this binding
  • Provider ID — Which provider (OpenAI, Anthropic, etc.)
  • Model Class — Model family (e.g., gpt-4o)
  • Model Name — User-friendly display name
  • Binding Role — Primary, Fallback, or Experiment
  • Is Active — Whether binding is currently enabled

Example:

Workflow: Customer Support Chatbot
├─ Primary Binding: Claude 3.5 Sonnet (Anthropic)
└─ Fallback Binding: GPT-4o mini (OpenAI)
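
The components above, expressed as an illustrative TypeScript type (field names are for this sketch and may not match SignalBreak's schema):

```typescript
interface ProviderBinding {
  workflowId: string;   // which workflow uses this binding
  providerId: string;   // e.g. "anthropic", "openai"
  modelClass: string;   // model family, e.g. "gpt-4o"
  modelName: string;    // user-friendly display name
  bindingRole: 'primary' | 'fallback' | 'experiment';
  isActive: boolean;    // whether the binding is currently enabled
}
```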

See: Workflows & Provider Bindings


R

RAG Status

Red/Amber/Green classification for risk levels.

| Status | Risk Score | Action Required |
|---|---|---|
| Green | < 30 | Monitor (no immediate action) |
| Amber | 30-70 | Review within 24h, plan mitigation |
| Red | > 70 | Immediate action required |

Usage:

  • Governance risk score
  • Impact score for signal-workflow pairs
  • Scenario severity classification

See: Risk Scoring Methodology


Red Status

Risk status indicating high risk (RAG: Red/Amber/Green).

Definition: Risk score > 70
Action Required: Immediate action (activate response plan)
UI Indicator: Red badge

See: Risk Scoring Methodology


Risk Score

Aggregate risk metric (0-100) calculated from multiple factors.

Governance Risk Score Inputs (Weighted):

  1. Provider Concentration (30%)
  2. Untreated MIT Risks (25%)
  3. High-Severity Signals (20%)
  4. Fallback Coverage (15%)
  5. Scenario Maturity (10%)

Impact Risk Score Inputs:

  • Signal severity
  • Workflow criticality
  • Binding centrality

RAG Mapping:

  • Green: < 30
  • Amber: 30-70
  • Red: > 70
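
A sketch of the weighted aggregation, assuming each input has already been normalised to 0-100 where higher means riskier (so fallback coverage and scenario maturity enter as gaps; names are ours):

```typescript
// Weights from this entry: 30/25/20/15/10.
interface RiskInputs {
  providerConcentration: number; // 0-100
  untreatedMitRisks: number;     // 0-100
  highSeveritySignals: number;   // 0-100
  fallbackCoverageGap: number;   // 0-100 (100 = no fallbacks configured)
  scenarioImmaturity: number;    // 0-100 (100 = no scenario practice)
}

function governanceRiskScore(i: RiskInputs): number {
  return (
    0.30 * i.providerConcentration +
    0.25 * i.untreatedMitRisks +
    0.20 * i.highSeveritySignals +
    0.15 * i.fallbackCoverageGap +
    0.10 * i.scenarioImmaturity
  );
}

// RAG mapping above: < 30 green, 30-70 amber, > 70 red.
```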

See: Risk Scoring Methodology


S

Scenario

Formal risk assessment with mitigation plan, owner, and timeline.

Lifecycle:

  1. Draft — Initial creation, planning phase
  2. Active — Approved, monitoring for trigger conditions
  3. Executed — Response plan activated
  4. Resolved — Mitigation complete, risk addressed
  5. Archived — Historical record

Components:

  • Scenario Name — Brief description
  • Scenario Type — Category (outage, deprecation, policy, etc.)
  • Impact Severity — Critical, High, Medium, Low
  • Likelihood — Certain, Likely, Possible, Unlikely
  • Mitigation Actions — Steps to reduce risk
  • Owner — Responsible team member
  • Due Date — Target resolution date

Creation: From signals (click "Create Scenario" button) or manually

See: Scenarios


Self-Hosted Connection

Configuration for accessing self-hosted AI platforms (Ollama, vLLM, LM Studio).

Setup:

  1. Enter endpoint URL (e.g., https://ollama.internal.company.com)
  2. Optionally provide API key
  3. Run discovery to detect available models
  4. Configure workflow bindings

Health Monitoring: SignalBreak polls self-hosted endpoints to track availability.

Limitations: No automatic signal detection (internal models don't publish changelogs).

See: Self-Hosted Connections


Severity

Risk level assigned to provider signals and scenarios.

Signal Severity (5 levels):

  • critical — Service unavailable, data loss, security breach
  • high — Major functionality impaired
  • medium — Partial degradation, workarounds available
  • low — Minor issues, limited impact
  • info — Announcements, no immediate impact

Scenario Severity (4 levels):

  • critical — Business-critical systems at risk
  • high — Significant operational impact
  • medium — Moderate disruption
  • low — Minimal business impact

See: Provider Signals - Severity


Signal

Detected provider event (change, incident, announcement) requiring attention.

Types: Deprecation, Policy, Pricing, Capability, Incident

Detection: Automated monitoring of provider sources

Enrichment: LLM-powered analysis adds context and MIT domain mapping

Lifecycle: Immutable (historical record)—create Scenario for response planning

See: Provider Signals


Signal History Retention

Number of days of past signals accessible based on subscription tier.

| Plan | Retention | Notes |
|---|---|---|
| Free | 7 days | Recent signals only |
| Professional | 90 days | Quarterly trend analysis |
| Enterprise | Unlimited | Full historical audit trail |

Feature Gate: List endpoints filter by retention period.

See: Billing Tier Limits


SOC 2

Service Organization Control 2 — Security audit framework for service providers.

Trust Service Criteria:

  • Security
  • Availability
  • Processing Integrity
  • Confidentiality
  • Privacy

SignalBreak Context: Evidence packs include SOC 2 control mappings for AI-specific risks.

See: Governance Overview


Statuspage.io

Provider status page platform used by OpenAI, Anthropic, AWS, and others.

SignalBreak Integration:

  • Polls JSON API every 5 minutes
  • Parses component statuses and incidents
  • Creates signals only on status changes (not every poll)

Result: ~99% noise reduction vs. naive polling.
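
A sketch of the change-detection idea (data structures are illustrative, not SignalBreak's implementation):

```typescript
// Emit a change only when a component's status differs from the last poll,
// rather than on every poll; this is where the noise reduction comes from.
const lastSeen = new Map<string, string>(); // component id → last status

function detectChanges(
  components: Array<{ id: string; status: string }>,
): Array<{ id: string; from: string; to: string }> {
  const changes: Array<{ id: string; from: string; to: string }> = [];
  for (const c of components) {
    const prev = lastSeen.get(c.id);
    if (prev !== undefined && prev !== c.status) {
      changes.push({ id: c.id, from: prev, to: c.status });
    }
    lastSeen.set(c.id, c.status);
  }
  return changes;
}
```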

See: Provider Health Monitoring


Subdomain (MIT Risk)

Specific risk type within a MIT domain (24 subdomains across 7 domains).

Example:

  • Domain 7: AI System Safety
    • Subdomain 7.1: AI pursuing goals in conflict with human values
    • Subdomain 7.2: AI possessing dangerous capabilities
    • Subdomain 7.3: Lack of capability or robustness (most common for signals)

Purpose: Granular risk classification for signals and scenarios.

See: MIT Risk Framework


T

Tenant

Organisation account in SignalBreak (isolated data namespace).

Isolation: All data (workflows, signals, scenarios) scoped to tenant via Row-Level Security.

Multi-Tenancy: Each user belongs to one tenant; tenants cannot access each other's data.

Tenant ID: UUID identifier for tenant (used in API queries).

See: API Reference - Authentication


Tier (Provider)

Classification of provider reliability and market position.

SignalBreak Tiers:

  • Tier 1 — Industry leaders (OpenAI, Anthropic, Google, AWS)
  • Tier 2 — Established players (Cohere, AI21, Mistral)
  • Tier 3 — Emerging providers (smaller companies, startups)

Impact: Provider tier affects risk scoring (Tier 1 failures = higher impact).

See: Provider Health Monitoring


W

Webhook (Planned)

HTTP callback for real-time event notifications.

Status: Planned for Q2 2026

Planned Events:

  • signal.created — New signal detected
  • signal.high_severity — Critical/Warning signal
  • scenario.executed — Response plan activated
  • workflow.impacted — Workflow affected by signal
  • evidence_pack.generated — Evidence pack ready

See: API Reference - Webhooks


Workflow

Business process or system dependent on AI capabilities.

Core Fields:

  • Workflow Name — Descriptive name (e.g., "Customer Support Chatbot")
  • AI Capability — Type of AI task (text_generation, vision, etc.)
  • Criticality — Mission-Critical, Important, Nice-to-Have
  • Human-in-Loop — Whether human review required
  • Provider Bindings — Primary and fallback providers

Lifecycle:

  • Active — Workflow in use
  • Inactive — Workflow disabled (soft delete)

See: Workflows & Provider Bindings



Last Updated: 2026-01-26
Documentation Version: 1.0