OpenObserve Reaches 15,000 GitHub Stars: A Journey to Provide Simple, Efficient, and Performant Observability for All

OpenObserve has just surpassed 15,000 stars on GitHub, a milestone that fills me with both pride and gratitude. When we started this project three years ago, the goal was simple yet ambitious: to build an open-source observability platform that is easier, faster, and dramatically more cost-effective than anything out there. Today, as we celebrate this achievement, I want to reflect on the journey from building a humble log management solution to a full-stack observability platform and thank the amazing community of developers, SREs, DevOps engineers, CTOs, and architects who made it possible.
What truly humbles me is the breadth of OpenObserve’s adoption. We now have over 3,600 active deployments worldwide, in use everywhere from hobbyist home labs to massive enterprise data centers. OpenObserve is being used by startups, scale-ups, and even Fortune 100 companies, proving its reliability at every scale. Some folks run OpenObserve on a $35 Raspberry Pi for their home projects, while others deploy it across hundreds of nodes to handle petabytes of data in production clusters. It’s mind-blowing to see the same binary delivering value on such extremes of scale.
Equally inspiring is what OpenObserve is replacing. Every day we hear stories of teams ripping out legacy observability stacks and switching to OpenObserve. There are now thousands of installations globally, and many organizations have replaced products like Splunk, Elasticsearch, OpenSearch, Grafana+Loki, Graylog, Datadog, New Relic, and more with OpenObserve. The fact that one open-source platform can step in for all these specialized tools, and do so with less complexity, validates our core mission. We set out to eliminate the need to stitch together multiple systems for logs, metrics, traces, and dashboards, and the community’s embrace of OpenObserve confirms we’re on the right track.
Another cornerstone of our community is our vibrant Slack workspace, now more than 1,800 members strong. This Slack community has been a constant source of feedback, support, and enthusiasm. Users help each other with setup and queries, share tips, and even contribute code. If you haven’t already, consider this an open invitation to join our Slack (we love welcoming new members!). It’s one of the best ways to learn from others’ experiences and to get help directly from the maintainers, myself included. Our growth to 15k stars is as much a story of this community as it is of the code, and we owe a huge “thank you” to everyone who has joined the journey so far.
From day one, we designed OpenObserve with a focus on extreme performance and efficiency. We chose Rust for the backend implementation to squeeze the most out of every CPU cycle and byte of memory, giving us safety and high performance at scale. The result is a system that can ingest and query data blazingly fast with a minimal resource footprint. In fact, OpenObserve can ingest data at roughly 5-10x the rate of Elasticsearch or Splunk on the same hardware, thanks to optimizations in how we handle writes and avoid heavy indexing overhead. Some users have benchmarked ingestion speeds of 20-30 MB/second per CPU core, which is several times what legacy stacks typically achieve. This means you can handle more data with fewer servers, a clear win for efficiency.
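If you want to get a feel for the ingestion path yourself, the quickest route is the JSON HTTP endpoint. Below is a minimal Python sketch; it assumes a local single-node instance with the default quickstart credentials and the default org/stream, so adjust the URL and auth for your deployment.

```python
import base64
import json
import time
import urllib.request

# Minimal ingestion sketch. Assumptions: a local single-node OpenObserve
# with the default quickstart credentials, org "default", stream "default".
URL = "http://localhost:5080/api/default/default/_json"
CREDS = base64.b64encode(b"root@example.com:Complexpass#123").decode()

# A batch of synthetic JSON log events.
batch = [
    {"level": "info", "message": "request served", "status": 200, "took_ms": i % 50}
    for i in range(10_000)
]

req = urllib.request.Request(
    URL,
    data=json.dumps(batch).encode(),
    headers={"Content-Type": "application/json", "Authorization": f"Basic {CREDS}"},
)

start = time.time()
with urllib.request.urlopen(req) as resp:
    print(resp.status, f"-> {len(batch)} events in {time.time() - start:.2f}s")
```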
But performance isn’t only about speed; it’s also about storage compression and cost savings. OpenObserve stores all data in an open columnar format (Apache Parquet) and compresses it with zstd, enabling jaw-dropping compression ratios. Real-world tests have shown that OpenObserve can deliver about 140x lower storage costs compared to Elasticsearch. That’s not a typo: 140x! By using cheap object storage (like S3, GCS, etc.) in distributed mode, OpenObserve lets you retain more data for longer at a fraction of the cost of traditional solutions. Many users report that the storage they need for logs on OpenObserve is only a few percent of what it was under Elastic.
OpenObserve vs Elasticsearch Storage Comparison: In an internal test ingesting the same data, OpenObserve compressed logs dramatically more than Elasticsearch, resulting in ~140x lower storage cost in a 3-node cluster scenario. This extreme compression translates directly into cost savings without losing data fidelity.
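You can reproduce the columnar-compression effect behind those numbers with a small, OpenObserve-independent experiment: write repetitive, log-like records to Parquet with zstd (using pyarrow here) and compare against the raw JSON size. The exact ratio depends on how redundant your data is, but repetitive fields collapse dramatically.

```python
import json
import os
import random

import pyarrow as pa
import pyarrow.parquet as pq

# Illustration of why columnar + zstd compresses telemetry so well:
# repetitive columns (level, service) encode down to almost nothing.
n = 100_000
rows = {
    "timestamp": list(range(n)),
    "level": [random.choice(["info", "warn", "error"]) for _ in range(n)],
    "service": ["checkout"] * n,
    "took_ms": [random.randint(1, 500) for _ in range(n)],
}

pq.write_table(pa.table(rows), "logs.parquet", compression="zstd")

raw = sum(len(json.dumps({k: rows[k][i] for k in rows})) for i in range(n))
print(f"raw JSON ~ {raw / 1e6:.1f} MB, Parquet = "
      f"{os.path.getsize('logs.parquet') / 1e6:.2f} MB")
```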
Crucially, these savings come without sacrificing query performance. OpenObserve takes a different approach than Elasticsearch by not creating inverted indices for every log field by default. Instead, data is partitioned by time (and optionally by user-defined keys) and stored in columnar format with built-in page indexes. This architecture leverages modern hardware extremely well: sequential scans over compressed Parquet files are surprisingly fast, and aggregations (counts, sums, percentiles, etc.) run directly on columnar data. The proof is in our users’ experiences: one user migrated from a 5-node OpenSearch cluster to a single OpenObserve node and found that queries still completed in about the same time while costing 10x less in infrastructure and storage. When we heard this feedback, we were thrilled but not surprised. It validated that our design (trading heavy indexing for efficient scanning and compression) provides comparable or better query performance for real-world workloads at a tiny fraction of the cost.
“We moved from [a] 5 node OpenSearch cluster to [a] single node OpenObserve and measured using our actual everyday queries... We see that typically they complete in about the same time. OpenObserve costs us 10 times less though (instances + storage).”
For full-text searches, OpenObserve offers flexible options. You can perform full-text queries across data using built-in functions (searching raw data streams), or enable inverted indexes on specific fields if you have particular high-frequency search needs. In practice, our combination of columnar storage, partition pruning, and optional targeted indexing yields fantastic search performance for logs and traces. And when it comes to aggregated analytics, the columnar engine really shines: OpenObserve can crunch through billions of records on the fly for dashboards and reports with ease. The net result is that teams no longer have to choose between retaining lots of data and being able to search it quickly. With OpenObserve you can have both: big data volume, low costs, and fast queries.
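To make the query side concrete, here is a hedged sketch of a full-text aggregation query over the search API. The _search endpoint and the match_all() function follow the shapes in OpenObserve’s documentation, but treat details such as microsecond timestamps and the from/size paging fields as assumptions to verify against your version.

```python
import base64
import json
import time
import urllib.request

# Sketch of a full-text + aggregation query via the _search API.
# Verify the exact request shape (microsecond timestamps, from/size)
# against the docs for your OpenObserve version.
URL = "http://localhost:5080/api/default/_search"
CREDS = base64.b64encode(b"root@example.com:Complexpass#123").decode()

now_us = int(time.time() * 1_000_000)
body = {
    "query": {
        "sql": (
            "SELECT level, count(*) AS n FROM default "
            "WHERE match_all('timeout') GROUP BY level"
        ),
        "start_time": now_us - 3_600_000_000,  # the last hour
        "end_time": now_us,
        "from": 0,
        "size": 100,
    }
}

req = urllib.request.Request(
    URL,
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json", "Authorization": f"Basic {CREDS}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()).get("hits"))
```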
OpenObserve’s feature set has grown by leaps and bounds, transforming it from a logging tool into a complete observability platform. We built it to be an all-in-one solution so you don’t need to glue together half a dozen different products for a comprehensive view. I’m excited to highlight some of the major features and capabilities that OpenObserve offers today:
Logs, Metrics, and Traces: Unified support for all three pillars of observability. Send your application logs, infrastructure metrics, and distributed traces to OpenObserve and correlate them within one system. We fully support OpenTelemetry (OTLP) standards for ingesting metrics and traces, and you can even use SQL to query logs/traces and PromQL for metrics.
Frontend Monitoring (RUM & More): Real User Monitoring is built in. OpenObserve can capture user-centric metrics from web apps, track frontend errors, and even perform session replay. This means you get visibility into client-side performance and user experience (page load times, JS errors, etc.) right next to your backend logs and metrics, with no need for a separate Grafana Tempo or proprietary RUM tool.
Pipelines for Data Processing: A pipeline engine with a powerful GUI lets you process and transform data in real time as it’s ingested, or on a schedule. Under the hood, pipeline functions use Vector Remap Language (VRL) to define transformations. You can parse or enrich logs, filter, normalize, or redact sensitive information on the fly. (For example, a pipeline could automatically redact PII like email addresses from your logs before storage, all via a few lines of VRL.) Pipelines can also route data between streams or down-sample metrics for long-term retention. This level of in-stream processing would typically require an extra tool like Logstash or Vector, but with OpenObserve it’s native and configured right in the UI.
Actions (Python Automation): OpenObserve introduces Actions, user-defined Python scripts that can be triggered by events or schedules. Think of Actions as lightweight serverless functions (similar to AWS Lambda) that run directly within OpenObserve. You can set Actions to execute when an alert condition fires, or run them periodically via cron. This enables powerful automation workflows: for example, automatically creating a Jira ticket when a certain error pattern appears in logs, or archiving old data on a schedule (see the sketch after this list). Actions make OpenObserve not just an observability tool, but an automation platform for operational data.
Dashboards and Visualizations: Observability isn’t complete without rich visualization. OpenObserve comes with a built-in dashboard builder that supports 19 pre-built chart types (time-series graphs, heatmaps, tables, gauges, top-K lists, etc.) plus one fully customizable chart via JSON configuration. You can drag and drop to create dashboards combining logs, metrics, and trace data side by side. This removes the need for external dashboard tools; no separate Grafana deployment is required.
Alerts: Integrated alerting lets you define real-time or scheduled alert rules that notify you and can trigger automated actions. A friendly UI covers most needs, and a SQL/PromQL interface supports highly sophisticated ones.
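As promised in the Actions item above, here is a sketch of the kind of Python script an Action could run: filing a Jira ticket when an alert fires. The entry-point name, the alert payload fields, and the Jira URL and credentials are all illustrative assumptions; check the Actions documentation and your Jira instance for the real interfaces.

```python
import base64
import json
import urllib.request

# Hypothetical Action sketch: the entry-point signature and the alert
# payload fields are assumptions; consult the Actions docs for the real
# interface. The Jira call itself uses the standard /rest/api/2/issue.
JIRA_URL = "https://jira.example.com/rest/api/2/issue"        # placeholder
JIRA_AUTH = base64.b64encode(b"bot-user:api-token").decode()  # placeholder

def handle_alert(event: dict) -> None:
    """File a Jira ticket for a fired alert (illustrative only)."""
    issue = {
        "fields": {
            "project": {"key": "OPS"},
            "summary": f"[OpenObserve] {event.get('alert_name', 'alert fired')}",
            "description": json.dumps(event, indent=2),
            "issuetype": {"name": "Bug"},
        }
    }
    req = urllib.request.Request(
        JIRA_URL,
        data=json.dumps(issue).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {JIRA_AUTH}"},
    )
    urllib.request.urlopen(req)
```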
It’s amazing to step back and realize that all of the above is available in a single platform. One binary, one UI, one set of APIs: complete observability with unified context. This unification not only simplifies operations (fewer moving parts to manage), but also unlocks use cases that are hard to achieve with segregated tools. For instance, in OpenObserve you can jump from a log entry to the related trace, then to the relevant metrics and back, without switching contexts or juggling credentials. This “single pane of glass” approach has been a game-changer for teams, especially those with small DevOps staffs who can’t afford to maintain a complex ecosystem of tools. OpenObserve proves that simplicity and power can coexist.
A common question I get is “How does OpenObserve do it?” How do we manage such high performance and compression while offering a broad feature set? The answers lie in some core design choices we made:
Built in Rust: We chose Rust for its blend of system-level performance and memory safety. Rust’s efficiency allows OpenObserve to utilize CPU and memory optimally, which is evident in our low resource usage (some deployments manage with under 1GB RAM!) and ability to run on modest hardware. Rust gives us confidence in safety (no garbage collector pauses or use-after-free bugs) even under heavy multithreaded workloads. This translates to a stable, fast platform you can trust in production.
Columnar Storage (Apache Parquet): Unlike traditional log stores that use row-oriented storage or custom binary formats, OpenObserve uses the open Apache Parquet format to store all data. Parquet is a columnar format widely used in data analytics: it stores data by columns and applies aggressive compression and encoding. For log and telemetry data, this means fields with repetitive values compress extremely well, and queries that aggregate over a few fields can scan through data very efficiently. An added bonus is interoperability: your data in OpenObserve isn’t locked in a proprietary silo; you could query those Parquet files with other tools or frameworks if needed.
No-Waste Indexing: We took a nuanced approach to indexing. Elasticsearch-style inverted indexes on every field are great for search speed, but they come at a huge cost in storage and ingestion overhead. In OpenObserve, we avoided global full-text indexes by default, which slashes the storage footprint (no massive index files) and speeds up ingestion (no heavy index writes). Instead, we rely on scanning, using partitioning (organizing data by time or other keys) and page indexes (Parquet’s internal indexing on block ranges) to narrow down the search scope; the sketch after this list illustrates the pruning idea. In practice, this is remarkably effective, especially since most log queries include a time range and often additional filters (which we use to skip irrelevant partitions). And for those truly frequent queries where an index makes a difference, we let you turn on an inverted index for specific fields selectively. This way, you get the best of both worlds: lean storage and fast ingestion by default, plus fast search on the fields that matter most.
Stateless, Cloud-Native Architecture: OpenObserve’s services (ingesters, queriers, etc.) are stateless and share nothing, relying on object storage as the source of truth. This cloud-native design means you can scale out by simply adding more stateless nodes, with no complicated cluster state to manage. We’ve focused on making both single-node deployment (for small setups) and HA clustering (for large scale) as simple as possible. You can start with a single binary today and later deploy a multi-node cluster on Kubernetes without changing your data format or reconfiguring clients. This scalability and flexibility have been key to OpenObserve’s adoption, from tiny labs to Fortune 100 companies.
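To illustrate the pruning idea referenced in the indexing point above: because every data file carries time bounds, a query only touches the partitions whose range overlaps the query window, and everything else is skipped without reading a byte. The toy sketch below is not OpenObserve code, just the principle.

```python
from dataclasses import dataclass

# Toy model of time-based partition pruning: keep only the files whose
# [min_ts, max_ts) range overlaps the query window; skip the rest
# without touching storage.

@dataclass
class Partition:
    path: str
    min_ts: int  # microseconds, inclusive
    max_ts: int  # microseconds, exclusive

def prune(parts: list[Partition], start: int, end: int) -> list[Partition]:
    """Return partitions overlapping the half-open window [start, end)."""
    return [p for p in parts if p.max_ts > start and p.min_ts < end]

HOUR = 3_600_000_000  # microseconds
parts = [Partition(f"hour={h:02}.parquet", h * HOUR, (h + 1) * HOUR)
         for h in range(24)]

# A query over hours 09:00-12:00 scans 3 files instead of 24.
print([p.path for p in prune(parts, 9 * HOUR, 12 * HOUR)])
```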
All these technical choices boil down to a guiding philosophy: build a solution that is powerful but also pragmatic. OpenObserve wouldn’t be where it is if it only excelled in benchmarks or on paper. We built it to solve real observability problems we faced ourselves: excessive complexity, runaway costs, and scaling pain with incumbent tools. Each feature and optimization in OpenObserve aims to reduce toil and increase value for the engineers using it.
Reaching 15,000 stars is a perfect occasion to look back on how far we’ve come. It’s been about three years since we started OpenObserve. In the early days, the project was focused on log management, born from frustration with the ELK stack’s operational burden. I wanted logging to be something you could set up in minutes and not spend all day babysitting.
Over time, however, it became clear that logs are just one part of the observability puzzle. Users (and my own team) needed metrics and traces alongside logs for a true end-to-end picture. So we embarked on expanding OpenObserve’s scope: adding a metrics engine, integrating tracing support, and building a unified UI that could correlate across all data types. By mid-2023, after two years of hard work, OpenObserve had evolved into a full-stack observability platform with not only logs/metrics/traces, but also user experience monitoring, alerting, and flexible data processing. It’s been a rapid evolution; delivering natively what would typically require an array of separate tools has been worth the effort.
This journey hasn’t been traveled alone. The open-source community around OpenObserve has been our driving force. Contributors have added features, fixed bugs, written documentation and guides. Our Slack members and GitHub discussants have given invaluable feedback that shaped the product. For example, the idea for the Actions (Python scripting) feature came directly from customers and community discussions about wanting more flexible alert responses. Many of our UI improvements and UX tweaks were guided by user suggestions. OpenObserve today is truly community-driven software, and I’m constantly amazed by the passion and ingenuity of our users and contributors.
As a founder, seeing OpenObserve mature from a logging tool into an all-in-one observability platform trusted by thousands is incredibly fulfilling. We’ve gone from zero to 15k stars, but more importantly, from an idea to a reality that is helping engineers every day. Our mission was (and remains) to make observability 10x easier and 100x more cost-effective for everyone, whether you’re a student just learning or a Fortune 100 company. Hitting this star milestone reassures us that we’re on the right path, but there’s so much more to do.
In celebrating this moment, I want to extend my deepest thanks to all of you, our users, contributors, and community members. Every GitHub star represents someone who believes in what we’re doing. Every Slack message, issue, or PR represents someone who has invested their time to make OpenObserve better. This 15,000-star milestone isn’t the finish line; it’s motivation to push even harder.
Looking ahead, the roadmap is exciting. We plan to continue doubling down on performance (there are still ways to go even faster!), expanding our analytics capabilities, and polishing the user experience. We are working on features like an improved alerting UI, data ingestion agents, and even more visualization options. We’re also committed to keeping OpenObserve open and inclusive, ensuring that a small startup or an individual developer can always use the open-source version to its fullest potential, just as a large enterprise can.
To everyone who has joined us on this journey: thank you for making OpenObserve what it is today. Reaching 15,000 GitHub stars is a celebration of our community’s enthusiasm and a reminder of the responsibility we have to keep delivering. I’m incredibly excited for the next chapter of OpenObserve. With your support and feedback, we’ll continue to revolutionize observability together, making it simpler, faster, and more affordable than ever before.
Here’s to the next 15,000 stars and beyond! 🚀
If you haven’t tried OpenObserve yet, now is the perfect time to join our Slack community, check out our GitHub, and see for yourself why so many are betting on this platform. We can’t wait to see what you do with it, and we’re here to help every step of the way.
Happy observing!