• RICE scores features by Reach × Impact × Confidence ÷ Effort to produce a single prioritization number.
• It quantifies prioritization, reducing subjective bias in what gets built first.
• Originally developed at Intercom, RICE is now widely used across product and design teams.
RICE Scoring is a quantitative prioritization framework developed by Sean McBride at Intercom. Each initiative is scored across four dimensions: Reach (how many users will this affect in a given time period?), Impact (how much will it affect each user? scored on a scale from 0.25 for minimal to 3 for massive), Confidence (how sure are you about your estimates? expressed as a percentage), and Effort (how many person-months will this take?). The formula, (Reach × Impact × Confidence) / Effort, produces a single score that makes comparing very different initiatives straightforward.
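To make the arithmetic concrete, here is a minimal sketch in Python. The function name, parameter layout, and docstring conventions are illustrative choices for this article, not part of any published RICE tooling:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score: (Reach x Impact x Confidence) / Effort.

    reach      -- users affected in a fixed period, e.g. per quarter
    impact     -- 0.25 (minimal) to 3 (massive)
    confidence -- reliability of the estimates as a fraction, e.g. 0.8 for 80%
    effort     -- estimated person-months of work
    """
    return (reach * impact * confidence) / effort
```

Note that Confidence enters as a fraction, so an 80% confident estimate multiplies the score by 0.8, directly discounting less-certain bets.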
RICE scoring brings quantitative discipline to product prioritization by evaluating ideas across four dimensions — Reach, Impact, Confidence, and Effort — producing a single comparable score that cuts through subjective debates. Without a structured scoring method, teams default to the loudest voice in the room or the most recent customer complaint, leading to roadmaps that swing reactively rather than advancing a coherent strategy. RICE's explicit confidence factor is particularly valuable because it forces teams to acknowledge uncertainty rather than treating every estimate as equally reliable.
A product team scores a simplified onboarding flow with high Reach (every new user), high Impact (activation rate increase), medium Confidence (based on competitor benchmarks but no internal data), and moderate Effort (three sprints). The resulting RICE score places it above a feature request from a single enterprise client, giving the team a defensible rationale for sequencing. After launch, the activation lift validates the score and builds organizational trust in the framework.
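Plugging numbers into the sketch above shows how this comparison falls out. Every figure below is invented for the walkthrough; only the qualitative scenario comes from the example:

```python
# Simplified onboarding: every new user (assume ~5,000 per quarter), high
# impact (2), 80% confidence from competitor benchmarks, three sprints
# (call it roughly 3 person-months of team effort).
onboarding = rice_score(reach=5000, impact=2, confidence=0.8, effort=3)

# Single enterprise client's request: far fewer users, massive impact for
# that client (3), full confidence, about 1 person-month.
enterprise = rice_score(reach=150, impact=3, confidence=1.0, effort=1)

print(f"Onboarding: {onboarding:,.0f}")   # ~2,667
print(f"Enterprise: {enterprise:,.0f}")   # 450
```

Even with conservative assumptions, the broad-reach initiative dominates, which is the defensible sequencing rationale the example describes.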
The design team scores a checkout flow redesign with massive Reach (all purchasing users) and high Impact (reducing cart abandonment by an estimated 15 percent), backed by high Confidence from A/B test data on a prototype. The RICE score clearly outranks lower-traffic feature requests, giving leadership a data-backed reason to allocate two dedicated sprints to the effort. The transparent scoring process also helps the engineering team understand why checkout work takes priority over their preferred technical refactor.
A product manager assigns maximum Confidence to a new social feature despite having no user research, competitive analysis, or prototype validation to support the impact estimate. The inflated score pushes the feature to the top of the backlog, displacing well-researched improvements with genuine data behind them. Three months after launch the feature sees negligible adoption, wasting engineering capacity and eroding team trust in the prioritization process.
• The most common mistake is treating RICE scores as absolute truths rather than conversation starters; teams that skip the discussion and simply sort by score miss important qualitative context like strategic alignment and user pain severity.
• Another frequent error is allowing one person to fill in all four dimensions without cross-functional input, which bakes individual bias into what should be a collaborative assessment.
• Teams also forget to revisit and recalibrate scores as new data arrives, letting stale estimates drive decisions months after the original scoring session.