• OKRs (Objectives and Key Results) align design team goals with company strategy through measurable outcomes.
• Good design OKRs focus on user outcomes, not design outputs: measure impact, not deliverables.
• Design teams should own OKRs that they can directly influence through their work.
OKRs (Objectives and Key Results), popularized by John Doerr's 'Measure What Matters' (2018), are a goal-setting framework where ambitious Objectives are paired with measurable Key Results. For design teams, OKRs translate UX strategy into actionable quarterly goals. The Objective is qualitative and inspiring ('Deliver a world-class onboarding experience'), while Key Results are quantitative and time-bound ('Increase 7-day retention from 40% to 55%'). Well-crafted design OKRs connect craft work to business impact, making the value of design visible to the organization.
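The objective/key-result structure above can be sketched in code. This is a minimal, hypothetical Python model (the class and field names are illustrative, not from any OKR tool): each key result carries a baseline and a target, and the objective is graded 0.0–1.0 as the average of linear key-result progress, in the spirit of Doerr's scoring convention.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A quantitative, time-bound result, e.g. retention from 40% to 55%."""
    description: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        # Linear progress from baseline toward target, clamped to [0, 1].
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.baseline) / span))

@dataclass
class Objective:
    """A qualitative, inspiring statement, graded via its key results."""
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        # Average key-result progress; 0.0 if no key results are set.
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

okr = Objective(
    "Deliver a world-class onboarding experience",
    [KeyResult("Increase 7-day retention", baseline=40.0, target=55.0, current=49.0)],
)
print(round(okr.score(), 2))  # 0.6 — retention moved 9 of the 15 points targeted
```

The baseline field matters: grading against a baseline rather than against zero keeps the score honest about how much the metric actually moved during the quarter.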
OKRs provide design teams with a goal-setting framework that connects daily design work to measurable business and user outcomes. They address a chronic problem: design teams are often perceived as a service function that takes orders rather than a strategic function that drives outcomes. Without a structured goal framework, design teams default to measuring their contribution through output (mockups delivered, sprints completed) rather than impact (user problems solved, business metrics improved), which makes it impossible to demonstrate ROI, prioritize competing requests by strategic value, or argue for resources based on projected impact. Well-implemented OKRs give design teams the language and evidence to participate in strategic planning as equals. When a design team can say 'our objective is to reduce onboarding abandonment, and our key result of improving first-task completion from 45% to 70% requires these specific design investments,' it shifts from order-taker to strategic partner.
Dropbox's design team structured OKRs around user activation — with the objective of making new users successful in their first session and key results measuring first-file-upload rate within ten minutes, tutorial completion rate, and seven-day return rate — tying design work directly to business-critical activation metrics rather than design output measures. This framing gave the design team clear authority to prioritize onboarding improvements over other requests because the OKR explicitly connected their work to revenue-impacting activation metrics, and it gave leadership a concrete way to evaluate whether the design investment was producing returns. The key results were chosen because they were leading indicators of long-term retention, meaning the design team could demonstrate impact within a quarter rather than waiting months for lagging metrics to move.
Atlassian's design and engineering teams share OKRs at the product level — a shared objective like 'Make Jira configuration intuitive for new administrators' with design-owned key results (reduce configuration task time from 45 to 15 minutes, increase configuration completion rate from 60% to 90%) and engineering-owned key results (reduce configuration errors by 80%, achieve sub-second response time for all configuration actions) — ensuring that design quality and engineering quality are measured as complementary contributions to the same user outcome. This shared OKR structure eliminates the dynamic where design 'throws designs over the wall' and engineering implements them without shared accountability for whether users succeed, because both teams are measured on the same outcome metric. Quarterly OKR reviews include joint design-engineering retrospectives that analyze whether the combined effort moved the shared metrics, creating a feedback loop that improves cross-functional collaboration over time.
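As a rough sketch of the shared-OKR pattern described above (hypothetical Python; the data layout and the error/latency baselines are assumptions, not Atlassian's actual tooling), each key result is tagged with the team that owns it, so both teams can be grouped and reviewed against the same objective:

```python
from collections import defaultdict

# One shared objective; key results tagged by owning team.
shared_okr = {
    "objective": "Make Jira configuration intuitive for new administrators",
    "key_results": [
        {"owner": "design", "metric": "configuration task time (min)", "baseline": 45, "target": 15},
        {"owner": "design", "metric": "configuration completion rate (%)", "baseline": 60, "target": 90},
        {"owner": "engineering", "metric": "configuration error rate reduction (%)", "baseline": 0, "target": 80},
        {"owner": "engineering", "metric": "configuration action response time (ms)", "baseline": 2500, "target": 1000},
    ],
}

# Group key results by owner for a quarterly joint review.
by_owner = defaultdict(list)
for kr in shared_okr["key_results"]:
    by_owner[kr["owner"]].append(kr["metric"])

for owner, metrics in sorted(by_owner.items()):
    print(f"{owner}: {', '.join(metrics)}")
```

The point of the shared structure is that both teams roll up to one objective: a review script can score design-owned and engineering-owned key results separately while still grading the objective as a whole.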
A design team writes quarterly OKRs with objectives like 'Deliver all Q3 product designs on schedule' and key results measuring the number of mockups completed, design review meetings held, and Figma files handed off to engineering — creating a goal framework that incentivizes design output speed and volume without any measurement of whether the designs improve user outcomes, solve actual problems, or produce business value. The team achieves 100% of its key results by shipping designs on schedule, but product metrics show declining user satisfaction and increasing support tickets for usability issues, because the OKRs measured whether designs were produced rather than whether they were effective. When leadership questions the design team's value despite their 'perfect' OKR scores, the team cannot point to any evidence of user impact because their goal framework never required them to measure it.
• The most common mistake is writing design OKRs that measure activities and outputs (screens designed, prototypes delivered, design sprints completed) rather than outcomes and impact. Output OKRs are easy to achieve and satisfying to report, but they provide no evidence that the design work improved anything for users or the business.
• Another frequent error is setting key results that the design team cannot directly influence or measure. 'Increase revenue by 20%' depends on far more variables than design quality, making it an unreliable indicator of design impact, while 'improve checkout task completion rate from 65% to 85%' isolates a design-influenceable metric that contributes to revenue.
• Teams also fail to review and adjust OKRs mid-quarter when user research reveals that the original objective was based on incorrect assumptions. Treating OKRs as fixed commitments rather than hypotheses that can be updated with evidence undermines the framework's value as a learning tool.