Detecting Frustrated Users Before They Churn: A Deep Dive into OpenObserve's Frustration Signals

Bhargav Patel
April 03, 2026
12 min read


The Problem: Errors Don't Capture Everything

Your monitoring dashboard shows zero errors. Your Lighthouse score is green. Your API response times are healthy. And yet users are churning from your checkout page at twice the rate they were last month.

What's going on?

The answer is invisible friction. Not every UX problem throws a JavaScript error. A button that looks clickable but doesn't respond. A form submission that silently fails. A loading spinner that never disappears. These micro-frustrations don't show up in traditional error tracking, but they drive users away.

OpenObserve's RUM module solves this with Frustration Signals, automatic behavioral detection that identifies three distinct patterns of user distress: rage clicks, dead clicks, and error clicks.

This post is a deep dive into how frustration signals work, how they surface in the UI, and how to use them to find and fix UX problems before they become churn problems.

What Are Frustration Signals?

Frustration signals are behavioral patterns that indicate a user is struggling with your interface. Unlike errors (which are technical events), frustration signals are human events: they reflect what the user is experiencing, not just what the code is doing.

OpenObserve's @openobserve/browser-rum SDK detects three types:

  • Rage Click: the user clicks the same element 3+ times in rapid succession. Fires when repeated clicks land on the same DOM element within a short time window.
  • Dead Click: the user clicks an element that produces no observable response. Fires when a click is followed by no DOM mutation, network request, or navigation.
  • Error Click: the user clicks an element and a JavaScript error is thrown. Fires when a click event handler triggers an uncaught exception.

Why These Three Patterns Matter

Rage clicks are the most emotionally charged signal. When a user hammers a button repeatedly, they've already crossed the frustration threshold. In usability research, rapid repeated clicking is one of the strongest predictors of task abandonment. Common causes include:

  • Buttons with no loading state (user doesn't know the first click registered)
  • Elements that become unresponsive due to JavaScript blocking the main thread
  • Double-submit prevention that freezes the UI without feedback
  • Slow API calls with no visual progress indicator
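The rage-click heuristic can be sketched as a sliding-window counter per element. This is a minimal illustration, not the SDK's actual implementation; the 3-click and 1000 ms thresholds are assumptions:

```javascript
// Minimal sliding-window rage-click detector. Flags a click when the
// same target has received `threshold` clicks within `windowMs`.
// The 3-click / 1000 ms tuning is illustrative.
function createRageClickDetector(threshold = 3, windowMs = 1000) {
  const clicksByTarget = new Map(); // target -> timestamps of recent clicks

  return function onClick(target, now = Date.now()) {
    // Keep only clicks that are still inside the sliding window.
    const recent = (clicksByTarget.get(target) || []).filter(
      (t) => now - t < windowMs
    );
    recent.push(now);
    clicksByTarget.set(target, recent);
    return recent.length >= threshold; // true => rage click
  };
}
```

In a browser you would wire this to a document-level listener, e.g. `document.addEventListener('click', (e) => { if (detect(e.target)) ... })`.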

Dead clicks are the most diagnostic signal. They reveal specific UI elements that users expect to be interactive but aren't. This is pure signal about design mismatches. Common causes:

  • Styled <div> or <span> elements that look like buttons but have no click handler
  • Disabled buttons that don't appear visually disabled
  • Links that have been removed but whose visual styling remains
  • Elements obscured by invisible overlays (modals, tooltips, absolute-positioned elements)
  • Images or cards that look clickable but aren't

Error clicks connect user actions directly to technical failures. They answer the question: "Which bugs are users actually hitting?" An error that fires on page load is different from an error that fires when a user clicks "Purchase": the latter has direct revenue impact. Common causes:

  • Undefined variable in a click handler
  • API call that returns unexpected data structure
  • Race condition triggered by user interaction timing
  • Missing null checks on data that hasn't loaded yet

How Frustration Signals Are Captured

SDK-Level Detection

The @openobserve/browser-rum SDK (v0.3.2+) performs real-time behavioral analysis in the browser:

  1. Click events are intercepted via event listeners on the document
  2. Post-click analysis runs after a short observation window:
    • Was there a DOM mutation? (If no → candidate for dead click)
    • Was there a network request? (If no → stronger dead click signal)
    • Was there a navigation? (If no → dead click confirmed)
    • Did a JavaScript error fire within the window? (If yes → error click)
  3. Click frequency analysis tracks same-element clicks within a time window (If 3+ → rage click)
  4. Frustration records are emitted as part of the action event
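The decision steps above can be condensed into a pure classification function. This is a simplified sketch of the logic, not the SDK's code; the input field names are illustrative:

```javascript
// Classify a click after the observation window, given what was observed.
// Mirrors the post-click analysis steps: dead click when no observable
// response followed, error click when an error fired, rage click when
// the same element was clicked 3+ times. Field names are illustrative.
function classifyClick({ domMutated, networkRequest, navigated, errorFired, clickCount }) {
  const types = [];
  if (clickCount >= 3) types.push('rage_click');
  if (errorFired) types.push('error_click');
  // Dead click: no DOM mutation, no network request, no navigation.
  if (!domMutated && !networkRequest && !navigated) types.push('dead_click');
  return types;
}
```

Note that the types are not mutually exclusive: a single click can be classified as several at once, which is why the emitted field holds a list.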

Data Schema

Each action event in the _rumdata stream includes an action_frustration_type field:

{
  "type": "action",
  "action_type": "click",
  "action_name": "click on Submit Order",
  "action_frustration_type": "[\"rage_click\", \"dead_click\"]",
  "action_id": "abc-123-def",
  "session_id": "sess-456-ghi",
  "timestamp": 1711900800000000
}

A single click can carry multiple frustration types; for example, a rage click that also triggers an error is both rage_click and error_click.
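Because the field is stored as a JSON-encoded string (as in the schema example above), consumers need to decode it before filtering by type. A defensive helper might look like this (a sketch for client-side analysis scripts, not part of the SDK):

```javascript
// Decode an action_frustration_type value into an array of type names.
// The field arrives as a JSON-encoded string, e.g. '["rage_click", "dead_click"]'.
function parseFrustrationTypes(raw) {
  if (!raw) return [];
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? parsed.map(String) : [String(parsed)];
  } catch {
    return [raw]; // tolerate a bare, unencoded value
  }
}
```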

For session replay, frustration signals are also captured as FrustrationRecords (Record Type 9) that include references to the specific click DOM events, enabling precise timeline synchronization during playback.

How Frustration Signals Surface in the UI

Sessions List: The Severity Badge

When you navigate to RUM → Sessions, every session row includes a Frustration Count column. This displays a color-coded badge that immediately tells you which sessions had the most friction:

  • None (count 0): green dash; no frustration detected
  • Low (1–3): yellow; normal friction, minor UX issues
  • Medium (4–7): orange; concerning pattern, needs investigation
  • High (8+): red, pulsing; critical, immediate attention required
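The bucketing behind the badge is straightforward to reproduce. A sketch of the thresholds from the table above (the function name is illustrative):

```javascript
// Map a session's frustration count to the badge severity shown in the
// sessions list, using the 0 / 1-3 / 4-7 / 8+ thresholds described above.
function frustrationSeverity(count) {
  if (count === 0) return 'none';
  if (count <= 3) return 'low';
  if (count <= 7) return 'medium';
  return 'high';
}
```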

The high-severity badge includes a pulse animation, a subtle visual cue that draws your eye to the sessions that need attention first.

How the count is computed: The sessions list runs an aggregation query:

SELECT session_id,
  SUM(CASE WHEN type='action' AND action_frustration_type IS NOT NULL
      THEN 1 ELSE 0 END) AS frustration_count
FROM "_rumdata"
WHERE session_has_replay IS NOT NULL
GROUP BY session_id
ORDER BY MAX(zo_sql_timestamp) DESC

This means the count reflects the total number of frustrated interactions in the session, not the number of unique frustration types.

Session Viewer: The Frustration Summary

When you open a session, the viewer header shows a frustration summary: a sad-face icon with the total count (e.g., "5 Frustration(s)"). This only appears when the count is greater than zero, so clean sessions keep an uncluttered header.

Session Replay Timeline: Orange Markers

The session replay playback bar displays event markers along the timeline. Frustration events are visually distinct:

  • Regular events: 2px wide, standard height
  • Frustration events: 3px wide, taller (1.125rem), orange (#fb923c) with a glow shadow

Hovering over a frustration marker shows a tooltip:

⚠️ FRUSTRATION: Rage Click, Dead Click
click on Submit Button

You can click any marker to jump directly to that moment in the replay. This is the fastest path from "this session looks bad" to "here's exactly what happened."

Events Sidebar: Type-Level Badges

The events sidebar lists every recorded event in the session. Events with frustration signals display a FrustrationEventBadge showing the specific type(s):

  • Rage Click: Orange badge
  • Dead Click: Yellow badge
  • Error Click: Red badge

You can also filter the sidebar by frustration type: select "frustration" from the event type filter to see only frustrated interactions.

Event Detail Drawer

Clicking any event opens a detail drawer with three tabs: Overview, Network, and Attributes. The frustration badge appears in the header next to the event name, and the Attributes tab includes the raw action_frustration_type field for programmatic analysis.

SQL Queries for Frustration Analysis

Because OpenObserve stores RUM data in SQL-queryable streams, you can run sophisticated frustration analysis without leaving the platform. Here are practical queries you can use today:

Find Your Most Frustrated Pages

SELECT
  view_url,
  COUNT(*) as frustration_events,
  COUNT(DISTINCT session_id) as affected_sessions
FROM "_rumdata"
WHERE type = 'action'
  AND action_frustration_type IS NOT NULL
GROUP BY view_url
ORDER BY frustration_events DESC
LIMIT 10

Break Down Frustration by Type

SELECT
  action_frustration_type,
  COUNT(*) as occurrences,
  COUNT(DISTINCT session_id) as unique_sessions
FROM "_rumdata"
WHERE type = 'action'
  AND action_frustration_type IS NOT NULL
GROUP BY action_frustration_type
ORDER BY occurrences DESC

Identify the Specific Elements Causing Frustration

SELECT
  action_name,
  view_url,
  action_frustration_type,
  COUNT(*) as times_triggered
FROM "_rumdata"
WHERE type = 'action'
  AND action_frustration_type IS NOT NULL
GROUP BY action_name, view_url, action_frustration_type
ORDER BY times_triggered DESC
LIMIT 20

This query tells you exactly which button on which page is causing the most frustration, e.g., "click on Add to Cart on /products/summer-sale triggered 47 rage clicks in the last 24 hours."

Frustration by Browser or Geography

SELECT
  browser,
  country,
  COUNT(*) as frustration_events,
  COUNT(DISTINCT session_id) as sessions
FROM "_rumdata"
WHERE type = 'action'
  AND action_frustration_type IS NOT NULL
GROUP BY browser, country
ORDER BY frustration_events DESC
LIMIT 15

If frustration spikes only in Safari or only in Brazil, you've narrowed your investigation dramatically.

Correlate Frustration with Performance

SELECT
  CASE
    WHEN view_largest_contentful_paint < 2500 THEN 'Good LCP (<2.5s)'
    WHEN view_largest_contentful_paint < 4000 THEN 'Needs Improvement (2.5-4s)'
    ELSE 'Poor LCP (>4s)'
  END as lcp_bucket,
  SUM(CASE WHEN action_frustration_type LIKE '%rage_click%' THEN 1 ELSE 0 END) as rage_clicks,
  COUNT(DISTINCT session_id) as sessions
FROM "_rumdata"
WHERE type = 'action'
GROUP BY lcp_bucket
ORDER BY rage_clicks DESC

This reveals whether slow pages correlate directly with frustrated interactions; often they do, and this query gives you the data to prove it to stakeholders.

Real-World Workflow: From Badge to Fix

Here's a concrete example of how frustration signals accelerate debugging:

Step 1: Spot the pattern. In the Sessions list, you notice multiple sessions from the past hour showing red frustration badges (8+ signals). They're all on the /checkout page.

Step 2: Open the worst session. Click the session with the highest frustration count. The viewer header shows "12 Frustration(s)."

Step 3: Jump to the first frustration marker. On the replay timeline, click the first orange marker. You see the user click "Place Order". Nothing happens. They click again. And again. And again. The rage click badge lights up.

Step 4: Check the events sidebar. Filter by "frustration" to see all 12 signals. You notice a pattern: 8 rage clicks on "Place Order" and 4 dead clicks on "Apply Coupon."

Step 5: Inspect the error click. One of the events is an error click. Open the detail drawer: the error is TypeError: Cannot read properties of undefined (reading 'discount'). The coupon API returned an unexpected response.

Step 6: Correlate with traces. The Trace Correlation Card in the error detail shows the backend trace. The /api/coupons/validate endpoint returned a 500 because a downstream service was down.

Step 7: Fix. The coupon service is down, causing the coupon field to silently break. The "Place Order" button depends on coupon validation completing, so it's also frozen. Two fixes: add a loading state to the button, and handle the coupon API failure gracefully.
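A minimal sketch of what those two fixes could look like on the frontend. The endpoint paths, field names, and submitOrder helper are all illustrative, not taken from the actual application:

```javascript
// Two fixes sketched together: (1) give "Place Order" a loading state so
// the first click visibly registers, and (2) degrade gracefully when the
// coupon validation API fails instead of freezing the flow.
async function submitOrder(button, couponCode) {
  button.disabled = true;
  button.textContent = 'Placing order…'; // loading state: no more rage clicks
  try {
    let discount = 0;
    try {
      const res = await fetch(
        '/api/coupons/validate?code=' + encodeURIComponent(couponCode)
      );
      const data = await res.json();
      discount = data?.discount ?? 0; // null-safe: tolerate unexpected shapes
    } catch {
      discount = 0; // coupon service down: proceed without the discount
    }
    await fetch('/api/orders', {
      method: 'POST',
      body: JSON.stringify({ discount }),
    });
  } finally {
    button.disabled = false;
    button.textContent = 'Place Order'; // always restore the button
  }
}
```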

Total time from "something's wrong" to "here's the root cause": under 5 minutes.

Setting Up Frustration Signal Detection

Frustration signals are captured automatically by the @openobserve/browser-rum SDK; no additional configuration is required. Just initialize the SDK:

import { openobserveRum } from '@openobserve/browser-rum';

openobserveRum.init({
  applicationId: 'your-app-id',
  clientToken: 'your-rum-token',
  site: 'https://your-openobserve-instance',
  service: 'web-app',
  env: 'production',
  version: '1.0.0',
  organizationIdentifier: 'your-org',
  insecureHTTP: false,
  apiVersion: 'v1',
});

// Enable session replay to see frustration events in video playback
openobserveRum.startSessionReplayRecording();

Once initialized, the SDK automatically:

  • Monitors all click events
  • Detects rage clicks (3+ rapid clicks on same element)
  • Detects dead clicks (no DOM/network response)
  • Detects error clicks (uncaught exceptions after click)
  • Tags action events with action_frustration_type
  • Creates FrustrationRecords for session replay timeline markers

Best Practices

1. Triage by Severity, Not by Count

A session with 2 error clicks on your payment button is more urgent than a session with 10 dead clicks on a decorative element. Use frustration type + page context to prioritize.

2. Combine with Core Web Vitals

Rage clicks often correlate with poor INP (Interaction to Next Paint) scores. If your rage click count spikes, check whether your INP degraded in the same timeframe.

3. Set Up Alerts on Frustration Spikes

Use OpenObserve's alerting to catch sudden increases. Create an alert with a 15-minute evaluation window and this query:

SELECT COUNT(*) as frustration_count
FROM "_rumdata"
WHERE type = 'action'
  AND action_frustration_type IS NOT NULL

The alert's time range configuration handles the window; you don't need time filters in the query itself. Alert when frustration_count exceeds your baseline by 2x or more.
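The "2x baseline" trigger condition can be expressed as a tiny predicate, which is useful if you evaluate the alert result in a script or webhook handler instead. A sketch; the function name and zero-baseline behavior are assumptions:

```javascript
// Fire when the current window's frustration count is at least `factor`
// times the baseline. With a zero baseline, any frustration is a spike.
function isFrustrationSpike(current, baseline, factor = 2) {
  if (baseline === 0) return current > 0;
  return current >= baseline * factor;
}
```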

4. Track Frustration as a Release Health Metric

After every deployment, compare frustration counts before and after. A new release that introduces dead clicks on previously working buttons is a regression even if zero errors were logged.

5. Use Frustration Data in Product Reviews

Export frustration-by-page data to share with your product and design teams. Dead click data is a direct map of user expectations vs. actual interactivity: gold for UX redesigns.

Conclusion

Errors tell you what broke. Performance metrics tell you what's slow. Frustration signals tell you what's annoying your users right now. They bridge the gap between technical monitoring and user experience, surfacing the invisible friction that drives churn.

With OpenObserve's RUM module, frustration signals are:

  • Automatically detected: no manual instrumentation required
  • Visually surfaced: color-coded severity badges, timeline markers, and sidebar filters
  • SQL-queryable: run any analysis you can imagine on raw frustration data
  • Connected to context: trace correlation links frustrated clicks to backend root causes
  • Included in standard pricing: no premium tier required

Stop guessing why users leave. Start watching what frustrates them.


About the Author

Bhargav Patel

Bhargav Patel is a frontend-focused Software Engineer working on observability platforms. He builds seamless user experiences for visualizing and interacting with system data like logs, metrics, and traces. His focus is on performance, usability, and developer experience.
