Detecting Frustrated Users Before They Churn: A Deep Dive into OpenObserve's Frustration Signals



Your monitoring dashboard shows zero errors. Your Lighthouse score is green. Your API response times are healthy. And yet users are churning from your checkout page at twice the rate they were last month.
What's going on?
The answer is invisible friction. Not every UX problem throws a JavaScript error. A button that looks clickable but doesn't respond. A form submission that silently fails. A loading spinner that never disappears. These micro-frustrations don't show up in traditional error tracking, but they drive users away.
OpenObserve's RUM module solves this with Frustration Signals, automatic behavioral detection that identifies three distinct patterns of user distress: rage clicks, dead clicks, and error clicks.
This post is a deep dive into how frustration signals work, how they surface in the UI, and how to use them to find and fix UX problems before they become churn problems.
Frustration signals are behavioral patterns that indicate a user is struggling with your interface. Unlike errors (which are technical events), frustration signals are human events: they reflect what the user is experiencing, not just what the code is doing.
OpenObserve's @openobserve/browser-rum SDK detects three types:
| Signal | Description | When It Fires |
|---|---|---|
| Rage Click | User clicks the same element 3+ times in rapid succession | Repeated clicks within a short time window on the same DOM element |
| Dead Click | User clicks an element that produces no observable response | Click fires but no DOM mutation, network request, or navigation follows |
| Error Click | User clicks an element and a JavaScript error is thrown | Click event handler triggers an uncaught exception |
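The detection rules in the table are client-side heuristics. As a minimal sketch, the rage-click rule might look like the following; the 3-clicks-in-1-second thresholds here are assumptions for illustration, not the SDK's internal constants:

```typescript
// Illustrative rage-click check: 3+ clicks on the same element within
// a 1-second window. Thresholds are assumed values for this sketch.
interface ClickEvent {
  target: string;    // a selector identifying the clicked element
  timestamp: number; // milliseconds
}

const RAGE_CLICK_COUNT = 3;
const RAGE_WINDOW_MS = 1000;

function isRageClick(clicks: ClickEvent[]): boolean {
  // Scan every run of RAGE_CLICK_COUNT consecutive clicks (assumed sorted).
  for (let i = 0; i + RAGE_CLICK_COUNT <= clicks.length; i++) {
    const run = clicks.slice(i, i + RAGE_CLICK_COUNT);
    const sameTarget = run.every((c) => c.target === run[0].target);
    const withinWindow =
      run[run.length - 1].timestamp - run[0].timestamp <= RAGE_WINDOW_MS;
    if (sameTarget && withinWindow) return true;
  }
  return false;
}
```

The key design point is that the window slides: three fast clicks anywhere in a longer click stream still qualify.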
Rage clicks are the most emotionally charged signal. When a user hammers a button repeatedly, they've already crossed the frustration threshold. In usability research, rapid repeated clicking is one of the strongest predictors of task abandonment. Common causes include buttons that respond too slowly, handlers blocked by long-running JavaScript, and missing loading or disabled states.
Dead clicks are the most diagnostic signal. They reveal specific UI elements that users expect to be interactive but aren't. This is pure signal about design mismatches. A classic cause is <div> or <span> elements that look like buttons but have no click handler.
Error clicks connect user actions directly to technical failures. They answer the question: "Which bugs are users actually hitting?" An error that fires on page load is different from an error that fires when a user clicks "Purchase"; the latter has direct revenue impact. Common causes include null references in click handlers and unhandled failures in API calls triggered by the click.
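The dead-click condition from the table above reduces to a post-click activity check. In a real browser integration the three flags would be fed by a MutationObserver, a fetch/XHR wrapper, and a navigation listener; in this sketch they are plain inputs:

```typescript
// A click is "dead" when nothing observable happens in the follow-up
// window: no DOM mutation, no network request, no navigation.
interface PostClickActivity {
  domMutated: boolean;
  networkRequestStarted: boolean;
  navigated: boolean;
}

function isDeadClick(activity: PostClickActivity): boolean {
  return (
    !activity.domMutated &&
    !activity.networkRequestStarted &&
    !activity.navigated
  );
}
```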
The @openobserve/browser-rum SDK (v0.3.2+) performs real-time behavioral analysis in the browser, classifying each click as the user interacts with the page.
Each action event in the _rumdata stream includes an action_frustration_type field:
{
"type": "action",
"action_type": "click",
"action_name": "click on Submit Order",
"action_frustration_type": "[\"rage_click\", \"dead_click\"]",
"action_id": "abc-123-def",
"session_id": "sess-456-ghi",
"timestamp": 1711900800000000
}
A single click can carry multiple frustration types; for example, a rage click that also triggers an error is both rage_click and error_click.
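Note that action_frustration_type arrives as a JSON-encoded string rather than a native array, so client-side tooling needs to parse it before filtering on individual types. A small defensive helper (the FrustrationType union simply reflects the three signal names used in this post):

```typescript
type FrustrationType = "rage_click" | "dead_click" | "error_click";

// The field is a JSON-encoded string such as "[\"rage_click\", \"dead_click\"]".
function parseFrustrationTypes(
  raw: string | null | undefined
): FrustrationType[] {
  if (!raw) return [];
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? (parsed as FrustrationType[]) : [];
  } catch {
    return []; // treat malformed payloads as "no frustration"
  }
}
```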
For session replay, frustration signals are also captured as FrustrationRecords (Record Type 9) that include references to the specific click DOM events, enabling precise timeline synchronization during playback.
When you navigate to RUM → Sessions, every session row includes a Frustration Count column. This displays a color-coded badge that immediately tells you which sessions had the most friction:
| Severity | Count | Badge Color | Meaning |
|---|---|---|---|
| None | 0 | Green dash (–) | No frustration detected |
| Low | 1–3 | Yellow | Normal friction; minor UX issues |
| Medium | 4–7 | Orange | Concerning pattern; needs investigation |
| High | 8+ | Red (pulsing) | Critical; immediate attention required |
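These thresholds are easy to reproduce when building your own dashboards on top of the _rumdata stream. A sketch mirroring the table above:

```typescript
type Severity = "none" | "low" | "medium" | "high";

// Buckets mirror the sessions-list badge thresholds: 0, 1-3, 4-7, 8+.
function frustrationSeverity(count: number): Severity {
  if (count <= 0) return "none";
  if (count <= 3) return "low";
  if (count <= 7) return "medium";
  return "high";
}
```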
The high-severity badge includes a pulse animation, a subtle visual cue that draws your eye to the sessions that need attention first.
How the count is computed: The sessions list runs an aggregation query:
SELECT session_id,
SUM(CASE WHEN type='action' AND action_frustration_type IS NOT NULL
THEN 1 ELSE 0 END) AS frustration_count
FROM "_rumdata"
WHERE session_has_replay IS NOT NULL
GROUP BY session_id
ORDER BY MAX(zo_sql_timestamp) DESC
This means the count reflects the total number of frustrated interactions in the session, not the number of unique frustration types.
When you open a session, the viewer header shows a frustration summary: a sad-face icon with the total count (e.g., "5 Frustration(s)"). This only appears when the count is greater than zero, so clean sessions have an uncluttered header.
The session replay playback bar displays event markers along the timeline, and frustration events are rendered as visually distinct orange markers.
Hovering over a frustration marker shows a tooltip:
⚠️ FRUSTRATION: Rage Click, Dead Click
click on Submit Button
You can click any marker to jump directly to that moment in the replay. This is the fastest path from "this session looks bad" to "here's exactly what happened."
The events sidebar lists every recorded event in the session. Events with frustration signals display a FrustrationEventBadge showing the specific type(s).
You can also filter the sidebar by frustration type: select "frustration" from the event type filter to see only frustrated interactions.
Clicking any event opens a detail drawer with three tabs: Overview, Network, and Attributes. The frustration badge appears in the header next to the event name, and the Attributes tab includes the raw action_frustration_type field for programmatic analysis.
Because OpenObserve stores RUM data in SQL-queryable streams, you can run sophisticated frustration analysis without leaving the platform. Here are practical queries you can use today:
First, find the pages where frustration concentrates:
SELECT
view_url,
COUNT(*) as frustration_events,
COUNT(DISTINCT session_id) as affected_sessions
FROM "_rumdata"
WHERE type = 'action'
AND action_frustration_type IS NOT NULL
GROUP BY view_url
ORDER BY frustration_events DESC
LIMIT 10
Next, break frustration down by signal type:
SELECT
action_frustration_type,
COUNT(*) as occurrences,
COUNT(DISTINCT session_id) as unique_sessions
FROM "_rumdata"
WHERE type = 'action'
AND action_frustration_type IS NOT NULL
GROUP BY action_frustration_type
ORDER BY occurrences DESC
SELECT
action_name,
view_url,
action_frustration_type,
COUNT(*) as times_triggered
FROM "_rumdata"
WHERE type = 'action'
AND action_frustration_type IS NOT NULL
GROUP BY action_name, view_url, action_frustration_type
ORDER BY times_triggered DESC
LIMIT 20
This query tells you exactly which button on which page is causing the most frustration; e.g., "click on Add to Cart on /products/summer-sale triggered 47 rage clicks in the last 24 hours."
SELECT
browser,
country,
COUNT(*) as frustration_events,
COUNT(DISTINCT session_id) as sessions
FROM "_rumdata"
WHERE type = 'action'
AND action_frustration_type IS NOT NULL
GROUP BY browser, country
ORDER BY frustration_events DESC
LIMIT 15
If frustration spikes only in Safari or only in Brazil, you've narrowed your investigation dramatically.
SELECT
CASE
WHEN view_largest_contentful_paint < 2500 THEN 'Good LCP (<2.5s)'
WHEN view_largest_contentful_paint < 4000 THEN 'Needs Improvement (2.5-4s)'
ELSE 'Poor LCP (>4s)'
END as lcp_bucket,
SUM(CASE WHEN action_frustration_type LIKE '%rage_click%' THEN 1 ELSE 0 END) as rage_clicks,
COUNT(DISTINCT session_id) as sessions
FROM "_rumdata"
WHERE type = 'action'
GROUP BY lcp_bucket
ORDER BY rage_clicks DESC
This reveals whether slow pages directly correlate with frustrated interactions. Often they do, and this query gives you the data to prove it to stakeholders.
Here's a concrete example of how frustration signals accelerate debugging:
Step 1: Spot the pattern. In the Sessions list, you notice multiple sessions from the past hour showing red frustration badges (8+ signals). They're all on the /checkout page.
Step 2: Open the worst session. Click the session with the highest frustration count. The viewer header shows "12 Frustration(s)."
Step 3: Jump to the first frustration marker. On the replay timeline, click the first orange marker. You see the user click "Place Order" and nothing happens. They click again. And again. And again. The rage click badge lights up.
Step 4: Check the events sidebar. Filter by "frustration" to see all 12 signals. You notice a pattern: 8 rage clicks on "Place Order" and 4 dead clicks on "Apply Coupon."
Step 5: Inspect the error click. One of the events is an error click. Open the detail drawer; the error is TypeError: Cannot read properties of undefined (reading 'discount'). The coupon API returned an unexpected response.
Step 6: Correlate with traces. The Trace Correlation Card in the error detail shows the backend trace. The /api/coupons/validate endpoint returned a 500 because a downstream service was down.
Step 7: Fix. The coupon service is down, causing the coupon field to silently break. The "Place Order" button depends on coupon validation completing, so it's also frozen. Two fixes: add a loading state to the button, and handle the coupon API failure gracefully.
Total time from "something's wrong" to "here's the root cause": under 5 minutes.
Frustration signals are captured automatically by the @openobserve/browser-rum SDK; no additional configuration is required. Just initialize the SDK:
import { openobserveRum } from '@openobserve/browser-rum';
openobserveRum.init({
applicationId: 'your-app-id',
clientToken: 'your-rum-token',
site: 'https://your-openobserve-instance',
service: 'web-app',
env: 'production',
version: '1.0.0',
organizationIdentifier: 'your-org',
insecureHTTP: false,
apiVersion: 'v1',
});
// Enable session replay to see frustration events in video playback
openobserveRum.startSessionReplayRecording();
Once initialized, the SDK automatically detects every click, classifies frustration patterns in real time, and tags each action event with action_frustration_type.
A session with 2 error clicks on your payment button is more urgent than a session with 10 dead clicks on a decorative element. Use frustration type + page context to prioritize.
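That prioritization rule can be made concrete as a simple triage score. The weights and the list of revenue-critical pages below are assumptions for illustration, not OpenObserve defaults:

```typescript
type SignalType = "rage_click" | "dead_click" | "error_click";

interface FrustrationSignal {
  type: SignalType;
  viewUrl: string;
}

// Hypothetical weights: error clicks outrank rage clicks, which
// outrank dead clicks; revenue-critical pages multiply urgency.
const TYPE_WEIGHT: Record<SignalType, number> = {
  error_click: 5,
  rage_click: 3,
  dead_click: 1,
};
const CRITICAL_PAGES = ["/checkout", "/payment"]; // hypothetical list

function triageScore(signals: FrustrationSignal[]): number {
  return signals.reduce((score, s) => {
    const multiplier = CRITICAL_PAGES.some((p) => s.viewUrl.startsWith(p))
      ? 3
      : 1;
    return score + TYPE_WEIGHT[s.type] * multiplier;
  }, 0);
}
```

With these weights, two error clicks on /checkout score well above ten dead clicks on a marketing page, matching the intuition above.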
Rage clicks often correlate with poor INP (Interaction to Next Paint) scores. If your rage click count spikes, check whether your INP degraded in the same timeframe.
Use OpenObserve's alerting to catch sudden increases. Create an alert with a 15-minute evaluation window and this query:
SELECT COUNT(*) as frustration_count
FROM "_rumdata"
WHERE type = 'action'
AND action_frustration_type IS NOT NULL
The alert's time range configuration handles the window; you don't need time filters in the query itself. Alert when frustration_count exceeds your baseline by 2x or more.
After every deployment, compare frustration counts before and after. A new release that introduces dead clicks on previously working buttons is a regression even if zero errors were logged.
Export frustration-by-page data to share with your product and design teams. Dead click data is a direct map of user expectations vs. actual interactivity: gold for UX redesigns.
Errors tell you what broke. Performance metrics tell you what's slow. Frustration signals tell you what's annoying your users right now. They bridge the gap between technical monitoring and user experience, surfacing the invisible friction that drives churn.
With OpenObserve's RUM module, frustration signals are detected automatically in the browser, stored in SQL-queryable streams, and surfaced throughout the Sessions list and session replay.
Stop guessing why users leave. Start watching what frustrates them.
Ready to get started?