# Migrating Logs

## Overview
This section walks you through migrating logs from Loki to OpenObserve. You will:
- Assess how logs currently reach Loki
- Identify the migration path for each source type
- Update configs to point at OpenObserve
- Validate that logs are flowing correctly
## Step 1: Assess Your Current Log Sources
Check how logs currently reach Loki. Common setups:
- Promtail running as a DaemonSet, tailing container logs
- OTel Collector with the `loki` exporter
- Grafana Agent/Alloy with a `logs` block
- Fluent Bit or Vector with a Loki output
- Applications sending logs directly via HTTP
## Step 2: Categorize Your Sources
| Source Type | Migration Path |
|---|---|
| OTel Collector with `loki` exporter | Switch to `otlphttp` exporter |
| Promtail | Update endpoint (Loki push API supported) |
| Grafana Agent / Alloy | Update endpoint |
| Fluent Bit | Change output from `loki` to `http` |
| Vector | Change sink from `loki` to `http` |
| Telegraf | See dedicated guide |
| AWS CloudWatch logs | See dedicated guide |
| Kubernetes container logs | Use OpenObserve Collector Helm chart |
## Step 3: Migrate Each Source

### From OTel Collector
The `loki` exporter in the OTel Collector has been deprecated since July 2024 (Loki v3+ supports native OTLP ingestion). The exporter still exists in `opentelemetry-collector-contrib` but emits deprecation warnings and is scheduled for removal. Use `otlphttp` instead.
Current config:
Copy the exact updated configuration from the Data Sources UI in OpenObserve.
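As a minimal sketch of what the updated exporter block typically looks like (the host, port, org name, receiver name, and Authorization value here are placeholders, not your actual values):

```yaml
# Sketch only: host, org ("default"), and credentials are assumptions.
exporters:
  otlphttp:
    # OpenObserve's OTLP HTTP base path; the exporter appends /v1/logs
    endpoint: http://openobserve:5080/api/default
    headers:
      Authorization: "Basic <base64 of email:password>"

service:
  pipelines:
    logs:
      receivers: [otlp]   # assumes an otlp receiver is already defined
      exporters: [otlphttp]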

### From Promtail
Promtail speaks the Loki push API, which OpenObserve supports natively. You can keep Promtail and just change the endpoint:
Updated config:

```yaml
clients:
  - url: http://openobserve:5080/api/{org}/loki/api/v1/push
    basic_auth:
      username: admin@example.com
      password: Complexpass#123
```

Replace `{org}` with your OpenObserve organization name (e.g. `default`).
Alternatively, replace Promtail with the OpenObserve Collector or Fluent Bit for a more modern, OTel-native setup.
Copy the exact command to deploy O2 Collector from the Data Sources UI in OpenObserve.

### From Grafana Agent / Alloy
Current config:
Updated config (the Agent's `logs` clients speak the Loki push protocol, so point them at the same push endpoint Promtail uses):

```yaml
logs:
  configs:
    - name: default
      clients:
        - url: http://openobserve:5080/api/default/loki/api/v1/push
          basic_auth:
            username: admin@example.com
            password: Complexpass#123
```
### From Fluent Bit
Current config:
Change the output plugin from `loki` to `http` and point it at OpenObserve.
You can copy the exact Fluent Bit configuration from the OpenObserve Data Sources UI.
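A sketch of what the replacement output block typically looks like in classic-config syntax (the host, org, stream name, and credentials are assumptions; copy the exact values from the Data Sources UI):

```ini
# Sketch only: host, org ("default"), stream ("default"), and credentials are placeholders.
[OUTPUT]
    Name        http
    Match       *
    Host        openobserve
    Port        5080
    URI         /api/default/default/_json
    Format      json
    HTTP_User   admin@example.com
    HTTP_Passwd Complexpass#123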
### From Vector
Current config:
Change the sink type from `loki` to `http` and point it at OpenObserve.
You can copy the exact Vector configuration from the OpenObserve Data Sources UI.
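A sketch of the replacement sink in TOML (the input name, host, org, stream, and credentials are assumptions; copy the exact values from the Data Sources UI):

```toml
# Sketch only: names, paths, and credentials are placeholders.
[sinks.openobserve]
type = "http"
inputs = ["my_logs"]   # replace with your existing source/transform ID
uri = "http://openobserve:5080/api/default/default/_json"
method = "post"
auth.strategy = "basic"
auth.user = "admin@example.com"
auth.password = "Complexpass#123"
encoding.codec = "json"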
### From Telegraf
For detailed steps on ingesting data into OpenObserve using Telegraf, see the dedicated guide:

Dedicated guide: Telegraf → OpenObserve
### From AWS CloudWatch Logs
For detailed steps on ingesting AWS CloudWatch logs into OpenObserve, see the dedicated guide:

Dedicated guide: AWS CloudWatch Logs → OpenObserve
### From Kubernetes Container Logs
Use the OpenObserve Collector Helm chart to replace Promtail as a DaemonSet.
You can copy the exact collector installation command from the OpenObserve Data Sources UI.
## Step 4: How to Verify

### Check in the UI
- Open the OpenObserve UI → **Logs** in the left sidebar.
- Confirm each log stream from your source inventory appears in the stream list.
- Run a test query against a known stream: `SELECT * FROM "default"`
- Verify field names look correct, especially that structured fields (like `level`, `service`, `trace_id`) are parsed as columns, not buried in a raw string.

*OpenObserve Logs Explorer: verify log streams and field parsing after migration*

### Verify Field Parsing
If your logs are JSON, OpenObserve auto-parses them into columns. Check that expected fields appear as filterable columns in the Logs explorer. If a field is missing, check whether the raw log line is actually valid JSON.
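For example, to spot-check that structured fields landed as queryable columns (the stream and field names here are illustrative; substitute your own):

```sql
SELECT level, service, trace_id
FROM "default"
WHERE level = 'error'
LIMIT 10
```

If the query errors on a column name, that field was not parsed out of the raw line.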
## Troubleshooting
- **No streams visible**: Check your collector's output logs for HTTP errors. Confirm the URI path matches your org: `/api/default/` for the default org.
- **Fields not parsed**: If logs aren't JSON, add a parsing processor in your collector (e.g. `logstransform` or a JSON parser) before the exporter.
- **Auth errors**: Regenerate the Base64 credential string; whitespace in the original input will break it.
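To rebuild the credential string without stray whitespace (the credentials shown are the example values used throughout this guide):

```shell
# -n suppresses the trailing newline, which would otherwise corrupt the encoding
echo -n 'admin@example.com:Complexpass#123' | base64
```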
## Next Steps
- OpenObserve Logs User Guide: exploring streams, running queries, and configuring stream settings in the UI
- OpenObserve Full-Text Search Functions: complete reference for `match_all()`, `str_match()`, `re_match()`, and more
Back to Overview | Previous: Migrating Traces