AI Gateway Observability Integrations

OpenObserve integrates with AI gateways and proxy layers to provide observability across all LLM traffic routed through them. Capture token usage, per-model latency, routing decisions, error rates, and cost metadata for every request passing through your AI gateway.

These integrations instrument the gateway's OpenAI-compatible API endpoints using OpenTelemetry, so you get traces regardless of which underlying model the gateway routes to.
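
As an illustration of the pattern, the Python sketch below assumes the `opentelemetry-sdk`, `opentelemetry-exporter-otlp-proto-http`, and `openai` packages: point an OpenAI-compatible client at the gateway, wrap each request in an OpenTelemetry span, and export spans to OpenObserve over OTLP/HTTP. The gateway URL, OpenObserve endpoint, credentials, and attribute names are placeholders, not values prescribed by this guide; the per-gateway guides below cover the exact setup for each product.

```python
# A minimal sketch, assuming a gateway that exposes an OpenAI-compatible
# /chat/completions endpoint and an OpenObserve OTLP/HTTP traces endpoint.
# All URLs, credentials, and attribute names here are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openai import OpenAI

# Export spans to OpenObserve over OTLP/HTTP.
provider = TracerProvider(
    resource=Resource.create({"service.name": "ai-gateway-client"})
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://openobserve.example.com/api/default/v1/traces",  # placeholder
            headers={"Authorization": "Basic <base64 credentials>"},           # placeholder
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ai-gateway-example")

# Point the OpenAI client at the gateway instead of the provider directly;
# the gateway decides which upstream model actually serves the request.
client = OpenAI(base_url="https://gateway.example.com/v1", api_key="GATEWAY_KEY")  # placeholders

with tracer.start_as_current_span("llm.chat_completion") as span:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # the gateway may route or rewrite this
        messages=[{"role": "user", "content": "Hello!"}],
    )
    # Record token usage and model metadata on the span so OpenObserve
    # can aggregate per-model latency, usage, and cost downstream.
    span.set_attribute("llm.model", resp.model)
    span.set_attribute("llm.usage.prompt_tokens", resp.usage.prompt_tokens)
    span.set_attribute("llm.usage.completion_tokens", resp.usage.completion_tokens)
```

Because the span is created around the client call rather than a specific provider SDK, the same instrumentation keeps working when the gateway fails over or routes to a different underlying model.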

Gateway Integration Guides