Migrating Metrics
Overview
This section walks you through migrating metrics from Mimir (or Prometheus) to OpenObserve. You will:
- Assess what metric sources you currently have
- Identify the migration path for each source type
- Update configs to point at OpenObserve
- Validate that metrics are flowing correctly
OpenObserve supports Prometheus remote write natively and accepts OTLP metrics, so migration typically involves changing an endpoint URL in your collector or Prometheus config.
Step 1: Assess Your Current Metric Sources
Run this PromQL query in Grafana (against your Mimir/Prometheus data source) to see what's active:
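One query along these lines counts active series per scrape job (a sketch; adjust the grouping label if your jobs are identified differently):

```promql
# Active series count, grouped by scrape job
count by (job) ({__name__!=""})
```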
This gives you a list of jobs (e.g. node_exporter, kubernetes-pods, myapp) and series count for each.
Step 2: Categorize Your Sources
Group your metrics by how they're currently collected:
| Source Type | Examples | Migration Path |
|---|---|---|
| Prometheus server with remote_write to Mimir | Scraping exporters, K8s service discovery | Update remote_write URL |
| OTel Collector sending to Mimir | prometheusremotewrite exporter | Switch to otlphttp exporter |
| Grafana Agent / Alloy | Agent metrics block with remote_write | Update endpoint |
| Telegraf | outputs.http with prometheusremotewrite | Update output URL |
| Kubernetes (kube-prometheus-stack) | Prometheus Operator, ServiceMonitors | Update remoteWrite in Helm values |
| AWS CloudWatch metrics | CloudWatch → OTel Collector or Telegraf | See dedicated guide |
| Azure Monitor metrics | Azure Event Hub → OTel Collector | See dedicated guide |
Step 3: Migrate Each Source
From Prometheus Server
If you're running Prometheus with remote_write to Mimir, update the destination URL:
Current config:
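A typical remote_write block pointing at Mimir looks like this (a sketch; the Mimir hostname and tenant header are placeholders for your environment):

```yaml
remote_write:
  - url: http://mimir:9009/api/v1/push
    headers:
      X-Scope-OrgID: tenant-1
```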
Update the url to the OpenObserve remote write endpoint and add your credentials.
You can copy the exact Prometheus remote write configuration from the OpenObserve Data Sources UI.
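The updated block typically looks like this (a sketch; substitute the organization, endpoint, and credentials shown in your Data Sources UI):

```yaml
remote_write:
  - url: http://openobserve:5080/api/default/prometheus/api/v1/write
    basic_auth:
      username: admin@example.com
      password: Complexpass#123
```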
Reload Prometheus after updating (no restart needed):
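If Prometheus was started with the `--web.enable-lifecycle` flag, an HTTP POST triggers a config reload; otherwise, send SIGHUP to the process:

```shell
curl -X POST http://localhost:9090/-/reload
# or, without the lifecycle endpoint enabled:
# kill -HUP $(pidof prometheus)
```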
Your Prometheus exporters (node_exporter, cAdvisor, kube-state-metrics, mysqld_exporter, etc.) don't change at all — they expose metrics, and Prometheus scrapes them the same way. Only the remote_write destination changes.
From OTel Collector
If you're using the OTel Collector with the prometheusremotewrite exporter to send metrics to Mimir, switch to the otlphttp exporter.
Current config:
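A typical current pipeline with the prometheusremotewrite exporter looks roughly like this (the endpoint and tenant header are placeholders):

```yaml
exporters:
  prometheusremotewrite:
    endpoint: http://mimir:9009/api/v1/push
    headers:
      X-Scope-OrgID: tenant-1

service:
  pipelines:
    metrics:
      exporters: [prometheusremotewrite]
```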
Copy the exact updated configuration from the Data Sources UI in OpenObserve.
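As a rough sketch, the otlphttp replacement has this shape (the exact endpoint path and Authorization header come from the Data Sources UI; the values below are placeholders):

```yaml
exporters:
  otlphttp:
    endpoint: http://openobserve:5080/api/default
    headers:
      Authorization: "Basic <base64 of email:token>"

service:
  pipelines:
    metrics:
      exporters: [otlphttp]
```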

From Grafana Agent / Alloy
Grafana Agent reached end of life on November 1, 2025, and has been replaced by Grafana Alloy.
Current Grafana Agent config:
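A typical Agent metrics block sending to Mimir looks like this (a sketch; hostnames and the tenant header are placeholders):

```yaml
metrics:
  configs:
    - name: default
      remote_write:
        - url: http://mimir:9009/api/v1/push
          headers:
            X-Scope-OrgID: tenant-1
```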
Updated config:
```yaml
metrics:
  configs:
    - name: default
      remote_write:
        - url: http://openobserve:5080/api/{org_id}/prometheus/api/v1/write
          basic_auth:
            username: admin@example.com
            password: Complexpass#123
```
Update the remote_write URL to the OpenObserve endpoint. Copy the exact configuration from the Data Sources UI in OpenObserve.
Alternatively, replace the Agent entirely with the OpenObserve Collector.
From Telegraf
Current config:
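A typical outputs.http section sending remote write to Mimir looks like this (a sketch; the URL and tenant are placeholders):

```toml
[[outputs.http]]
  url = "http://mimir:9009/api/v1/push"
  data_format = "prometheusremotewrite"

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
    X-Scope-OrgID = "tenant-1"
```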
Update the output URL to the OpenObserve endpoint. Copy the exact configuration from the Data Sources UI in OpenObserve.

Dedicated guide: Telegraf → OpenObserve
From Kubernetes (kube-prometheus-stack)
If you deployed Prometheus via the kube-prometheus-stack Helm chart with remoteWrite to Mimir:
Create the secret first:

```shell
kubectl create secret generic openobserve-secret \
  --from-literal=username=admin@example.com \
  --from-literal=password=Complexpass#123 \
  -n monitoring
```
Update your Helm values:
```yaml
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://openobserve:5080/api/default/prometheus/api/v1/write
        basicAuth:
          username:
            name: openobserve-secret
            key: username
          password:
            name: openobserve-secret
            key: password
```
Apply the changes:
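Assuming the release is named `kube-prometheus-stack` and installed in the `monitoring` namespace (adjust both to your setup):

```shell
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring -f values.yaml
```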
From AWS CloudWatch
For detailed steps on ingesting AWS CloudWatch metrics into OpenObserve, see the dedicated guide:

Dedicated guide: AWS CloudWatch Metrics → OpenObserve
From Azure Monitor
For detailed steps on ingesting Azure Monitor metrics into OpenObserve, see the dedicated guide:

Dedicated guide: Azure Monitor Metrics → OpenObserve
Step 4: How to Verify
Check in the UI
- Open the OpenObserve UI → Metrics in the left sidebar.
- Verify each job from your source inventory appears.
- Run a test query, e.g. `sum(process_cpu_utilization{})` or any counter you know is active.
- Confirm the values and timestamps match what you'd expect from the source.
OpenObserve Metrics Explorer — verify metrics are flowing after migration
Troubleshooting
- No data: Check collector logs for connection errors or auth failures. Confirm the `remote_write` or `otlphttp` exporter URL is correct and reachable.
- Missing labels: Ensure your collector isn't stripping labels. Remote write passes all Prometheus labels through by default.
- Case mismatch in PromQL: OpenObserve label matching is case-sensitive. Confirm label values match exactly as ingested.
PromQL Compatibility
OpenObserve supports PromQL for metrics queries. Common functions — rate(), histogram_quantile(), sum by(), avg_over_time(), and label matchers — work as expected.
Next Steps
- Migrating Traces — migrate your trace sources next
- Migrating Logs — migrate your log sources
Back to Overview | Previous: Architecture & Terminology | Next: Migrating Traces