LLM Provider Observability Integrations

OpenObserve provides per-call observability for LLM provider APIs, capturing token usage (prompt, completion, total), latency, model name, and error details for every request. Monitor cost, performance, and reliability across all the LLM providers your application uses.

Each integration wraps the provider's SDK with an OpenInference or OpenTelemetry instrumentor and exports traces directly to OpenObserve.
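As a concrete illustration, a minimal Python setup for one provider (the OpenAI SDK) might look like the sketch below. It assumes the `openinference-instrumentation-openai` package and the OpenTelemetry SDK with the OTLP/HTTP exporter are installed; the endpoint path, organization, and credentials are placeholders, so copy the real values from your OpenObserve ingestion settings.

```python
# Sketch: route OpenAI SDK traces to OpenObserve via an OpenInference
# instrumentor. Endpoint, org ("default"), and auth header are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.openai import OpenAIInstrumentor

# OTLP/HTTP exporter pointed at an OpenObserve traces endpoint (placeholder URL).
exporter = OTLPSpanExporter(
    endpoint="https://your-openobserve-host/api/default/v1/traces",
    headers={"Authorization": "Basic <base64 user:token>"},  # placeholder credential
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# From here on, each OpenAI SDK call emits a span carrying token usage,
# latency, model name, and error details.
OpenAIInstrumentor().instrument(tracer_provider=provider)
```

Other providers follow the same pattern: swap in that provider's instrumentor class while keeping the exporter and tracer-provider wiring unchanged.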

LLM Provider Integration Guides