Observability Pipelines
Transform, enrich, redact, reduce, and parse your observability data with real-time and scheduled pipelines. Streamline data ingestion, enhance data quality, and gain actionable insights faster.

Why Use OpenObserve Pipelines?
OpenObserve simplifies telemetry processing with flexible pipelines and powerful enrichment capabilities. Handle any data format, enrich data on the fly, and ensure data quality for reliable analysis.

Pipeline Types
Real-time
Transform data as it arrives in your streams in real time. Parse, filter, and enrich on the fly for immediate insights through powerful stream processing capabilities.
Scheduled
Run pipelines at defined intervals for batch processing and periodic data transformations, such as performing aggregations or converting logs to metrics.

Data Transformation
VRL Functions
Create custom transformations using Vector Remap Language (VRL). Parse, enrich, and filter data with powerful VRL functions that support complex logic.
Data Parsing
Parse structured and unstructured data using prebuilt VRL functions to extract meaningful information. Convert complex log formats into structured data for easier querying.

Data Enrichment
Enrichment Tables
Add context to your data using CSV-based lookup tables. Enrich events with location data, user information, and other external metadata.
Dynamic Lookups
Enrich data with external information from APIs and other dynamic sources. Seamlessly integrate with external data sources to enhance the value of your observability data.

Pipeline Components
Function Nodes
Process data with VRL functions that execute custom logic. Create reusable function nodes for common data transformation tasks.
Stream Operations
Route and transform data streams with flexible stream operations. Filter data, clone streams, and route data to multiple destinations.
Get Started with Pipelines
Begin building data processing pipelines with OpenObserve. Start with our free tier or schedule a demo.
OpenObserve Cloud Free Tier
Monthly Limits:
Ingestion - 50 GB logs, 50 GB metrics, 50 GB traces
Query volume - 200 GB
Pipelines - 50 GB of Data Processing
1K RUM & Session Replay
1K Action Script Runs
3 Users
7-Day Retention
Get started in minutes—no credit card required.
Pipeline Management FAQs
What types of pipelines does OpenObserve support?
OpenObserve supports two main types of pipelines: real-time and scheduled. Real-time pipelines process data as it arrives, transforming and enriching it on the fly. These pipelines can include multiple processing steps using functions, conditions, and stream operations. Scheduled pipelines run at defined intervals, allowing for batch processing and periodic data transformations.
How do functions work in pipelines?
Functions in OpenObserve use Vector Remap Language (VRL) for data transformation. Each function contains VRL code that can parse, transform, and enrich data. Functions can access the incoming data fields, perform conditional processing, and modify or create new fields. For example, a function might parse JSON logs, extract specific fields, and enrich them with geographical information using enrichment tables.
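As a sketch of what such a function might look like in VRL (the message and client_ip fields and the geo_by_ip table are hypothetical placeholders, not fixed OpenObserve names):

    # Parse the raw JSON payload; skip field extraction if parsing fails.
    parsed, err = parse_json(.message)
    if err == null {
        .level = parsed.level
        .client_ip = parsed.client_ip
    }
    # Join against a hypothetical "geo_by_ip" enrichment table keyed by IP.
    geo, geo_err = get_enrichment_table_record("geo_by_ip", {"ip": .client_ip})
    if geo_err == null {
        .country = geo.country
        .city = geo.city
    }
    . # return the modified event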
What are enrichment tables and how are they used?
Enrichment tables are CSV-based lookup tables that let you add context to your data. You can upload CSV files containing reference data, which can then be queried within pipeline functions using the get_enrichment_table_record function. Common use cases include IP-to-location mapping, user agent parsing, and adding business context to technical data.
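As an illustration, suppose you upload a small CSV as an enrichment table named user_roles (the table and field names here are hypothetical):

    user_id,role,team
    u-1001,admin,platform
    u-1002,viewer,payments

A pipeline function could then join it against incoming events:

    # Fetch the matching row; leave the event unchanged if no match exists.
    record, err = get_enrichment_table_record("user_roles", {"user_id": .user_id})
    if err == null {
        .role = record.role
        .team = record.team
    }
    . # return the enriched event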
How does data parsing work in pipelines?
Data parsing in OpenObserve pipelines is handled through VRL functions. The platform supports parsing various formats including JSON, structured logs, and custom formats. Functions can include conditional logic to apply different parsing rules based on the data source or content. Error handling is built into the parsing functions, allowing graceful handling of malformed data.
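A minimal sketch of this fallback pattern in VRL, assuming the raw line arrives in a hypothetical message field:

    # Try JSON first, then key=value pairs; tag the event if both fail.
    parsed, err = parse_json(.message)
    if err != null {
        parsed, err = parse_key_value(.message)
    }
    if err == null {
        .parsed = parsed
    } else {
        .parse_error = true
    }
    . # return the event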
What stream operations are available?
Pipeline streams can be configured with various operations including source selection, transformation steps, and destination routing. The platform supports both logs and metrics streams. You can create complex workflows with multiple processing steps, conditions, and parallel processing paths. Stream operations maintain data consistency while allowing for flexible transformation chains.
How do pipeline conditions work?
Conditions in pipelines allow for selective processing based on data attributes. Using VRL expressions, you can create sophisticated routing and processing logic. Conditions can check field values, apply pattern matching, and implement complex business rules. This enables targeted processing of specific data streams or message types.
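For instance, a condition might gate a processing branch with a boolean VRL expression like the following (env and level are hypothetical field names):

    # True only for error events coming from production.
    .env == "production" && .level == "error"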
What monitoring and debugging features are available?
The platform provides visibility into pipeline execution through the pipeline viewer. You can monitor pipeline performance, track processing errors, and debug transformation logic. The system maintains metrics about pipeline throughput and processing latency. Test functions allow you to verify transformation logic before deployment.
How are pipeline changes managed?
Pipeline configurations are managed through the OpenObserve interface, where you can create, edit, and delete pipelines. Changes to pipeline functions can be tested before deployment using sample data. The platform maintains version control for pipeline configurations, allowing you to track changes and roll back if needed.
Want to learn more? Check out our blog.
Explore pipeline development best practices and OpenObserve capabilities.