Chapter 15: Outcome-Driven Roadmapping
Part III — Strategy & Value Design
1. Executive Summary
Traditional roadmaps are feature wishlists divorced from business impact. Outcome-driven roadmapping flips this model: you define measurable customer and business outcomes first, then identify solutions through experimentation. Instead of "Ship SSO by Q2," you commit to "Reduce IT admin onboarding effort by 75% within 90 days." This chapter provides frameworks for building roadmaps anchored in Key Results (KRs), opportunity trees, and hypothesis-driven development. You'll learn to structure roadmaps that align cross-functional teams around shared outcomes, measure progress through leading indicators, and pivot based on evidence rather than opinion. For B2B IT services companies, this approach accelerates time-to-value, reduces wasted engineering cycles, and creates defensible ROI narratives for customer renewals.
2. Definitions & Scope
Core Concepts
Outcome-Driven Roadmap: A strategic plan organized around measurable customer and business outcomes rather than feature delivery dates. Outcomes describe the change in user behavior or business metric you want to achieve.
Key Results (KRs): Quantifiable metrics that indicate progress toward an objective. Example: "Reduce customer support tickets related to login by 60%" vs. "Build SSO."
Opportunity Tree: A hierarchical framework connecting business objectives to measurable outcomes to solution hypotheses. Popularized by Teresa Torres, it visualizes how potential solutions map to desired outcomes.
Hypothesis-Driven Development: Treating each roadmap item as a testable assumption: "We believe [solution] will achieve [outcome] as measured by [metric]."
Now-Next-Later Framework: A flexible roadmap structure that commits to current work (Now), signals near-term direction (Next), and maintains strategic options (Later) without rigid dates.
What This Chapter Covers
- Translating business objectives into measurable outcomes
- Building and maintaining opportunity trees
- Writing effective hypotheses and success criteria
- Roadmap formats for B2B contexts (sales-driven, compliance-heavy, multi-tenant)
- Stakeholder communication when you can't promise specific features
What This Chapter Does Not Cover
- Product vision and strategy creation (see Chapter 13: Experience Strategy)
- Detailed prioritization frameworks like RICE or ICE (see Chapter 14: Value-Based Prioritization)
- Agile sprint planning or release management
- Feature specification and requirements gathering
3. Customer Jobs & Pain Map
| Customer Persona | Job to Be Done | Current Pain Point | Outcome Need |
|---|---|---|---|
| VP Product | Justify product investment to CFO | Roadmap shows features, not business impact | Clear ROI story: "This initiative will reduce churn by X%" |
| Engineering Lead | Align team around valuable work | Engineers question why they're building features no one uses | Shared understanding of success metrics before coding starts |
| Customer Success Manager | Prove value to customers at renewal | Can't connect product updates to customer's business goals | Evidence-based narrative: "We shipped outcomes you asked for" |
| Enterprise Buyer (IT Director) | Evaluate vendor product direction | Vendor roadmap is vaporware wish list | Transparent view of vendor's strategic bets and progress |
| Product Manager | Negotiate feature requests from Sales | Sales promises features to close deals, PM loses autonomy | Framework to say "We're solving that outcome via different approach" |
| UX Designer | Ensure design work solves real problems | Asked to design features without understanding success criteria | Clear problem statement and measurable definition of "better" |
| Sales Engineer | Communicate product direction without overpromising | Roadmap slide deck full of features that may get deprioritized | Outcome-focused messaging: "We're investing in security automation" |
| Compliance Officer | Plan for regulatory changes | Product roadmap doesn't surface compliance work | Explicit outcomes for audit readiness and regulatory timelines |
4. Framework / Model
The Outcome-Driven Roadmap Stack
Layer 1: Business Objectives (Annual/Bi-Annual)
- Strategic goals set by leadership
- Examples: "Expand into Financial Services sector," "Reduce gross churn by 20%," "Achieve FedRAMP certification"
Layer 2: Measurable Outcomes (Quarterly)
- Specific, time-bound KRs that ladder up to objectives
- Must be customer-observable or business-measurable
- Examples: "Reduce IT admin setup time from 4 hours to 30 minutes," "Increase feature adoption in FinServ accounts from 12% to 35%," "Pass SOC 2 Type II audit with zero critical findings"
Layer 3: Opportunity Trees (Quarterly, Living Document)
- Visual map connecting outcomes to solution hypotheses
- Structure: Objective → Outcome → Opportunity → Solution Bet
- Enables parallel exploration of multiple solution paths
Layer 4: Solution Bets (Monthly/Sprint)
- Specific initiatives you're testing to achieve outcomes
- Framed as hypotheses with success criteria
- Example: "We believe adding SSO will reduce IT admin setup time by 80%, as measured by avg. onboarding duration in telemetry"
Layer 5: Delivery Artifacts (Weekly/Sprint)
- The actual features, designs, APIs, and code you ship
- These are the output, not the goal
Building an Opportunity Tree
Business Objective: Increase Annual Recurring Revenue (ARR) by $5M
├─ Outcome 1: Reduce time-to-first-value for new customers from 45 days to 14 days
│ ├─ Opportunity 1.1: Onboarding friction in IT provisioning
│ │ ├─ Solution Bet: SCIM-based auto-provisioning
│ │ ├─ Solution Bet: Pre-built SSO integrations (Okta, Azure AD, Google)
│ │ └─ Solution Bet: Onboarding checklist with progress tracking
│ ├─ Opportunity 1.2: Unclear activation path for end users
│ │ ├─ Solution Bet: Contextual in-app tours
│ │ └─ Solution Bet: Role-based default configurations
│ └─ Opportunity 1.3: Data migration delays
│ ├─ Solution Bet: Async bulk import with error handling
│ └─ Solution Bet: CSV validation tool
│
└─ Outcome 2: Increase expansion revenue from existing accounts by 30%
├─ Opportunity 2.1: Low awareness of advanced features
│ ├─ Solution Bet: Usage-triggered upsell prompts
│ └─ Solution Bet: CS-triggered feature demos
└─ Opportunity 2.2: Friction in adding seats/licenses
├─ Solution Bet: Self-service license management
└─ Solution Bet: Annual true-up automation
Writing Outcome-Focused Hypotheses
Template:
We believe [solution/feature]
will achieve [specific outcome]
for [target user segment]
as measured by [metric and target]
within [timeframe].
Examples:
Poor (Feature-Focused):
- "Build a dashboard for analytics"
- "Add SSO support"
- "Improve mobile app performance"
Good (Outcome-Focused):
- "We believe a real-time usage dashboard will increase feature discovery by finance teams, as measured by a 40% increase in report generation within 60 days of launch."
- "We believe pre-integrated Okta SSO will reduce IT admin onboarding effort from 4 hours to 30 minutes, as measured by time-to-first-login telemetry, achieving this for 80% of Enterprise accounts within 90 days."
- "We believe reducing mobile app cold start time to <2s will increase daily active users by 25%, as measured by DAU/MAU ratio, within 30 days of release."
5. Implementation Playbook
Days 0–30: Establish Outcome-Driven Foundations
Week 1: Audit Current Roadmap
- Export your current roadmap (Jira, Productboard, etc.)
- For each initiative, identify: What customer/business outcome does this achieve?
- Flag items with no clear outcome or weak outcome justification
- Interview 3–5 stakeholders: "What business result are we trying to drive this quarter?"
Week 2: Define Top 3 Outcomes
- Facilitate session with Product, Engineering, CS, Sales leads
- Align on business objectives (from leadership or OKRs)
- Define 3 measurable outcomes per objective (use SMART criteria)
- Document baseline metrics and targets (see Section 8)
- Example output: "Reduce Enterprise customer onboarding from 45 days to 14 days (baseline: 47 days avg. over last 90 days)"
Week 3: Build Your First Opportunity Tree
- Pick one outcome to decompose (choose highest-impact or most urgent)
- Conduct discovery: customer interviews, support ticket analysis, sales loss reviews
- Identify 3–5 opportunities (specific problems or friction points)
- Brainstorm 2–3 solution bets per opportunity (don't commit yet)
- Use Miro, FigJam, or a dedicated opportunity-solution-tree tool
Week 4: Write Hypotheses for Top Solution Bets
- Select 2–3 solution bets to test first (based on evidence, effort, risk)
- Write hypotheses using template in Section 4
- Define success metrics and instrumentation plan
- Create lightweight experiment design (A/B test, prototype, dogfooding)
- Share with stakeholders for feedback before committing resources
Deliverable: Draft outcome-driven roadmap for current quarter with 3 KRs, 1 opportunity tree, and 3 active hypotheses.
Days 30–90: Operationalize and Scale
Month 2: Instrument and Communicate
Week 5–6: Set Up Measurement Infrastructure
- Audit analytics instrumentation for KR metrics (see Chapter 32: Product Analytics)
- Implement missing telemetry (onboarding time, feature adoption, error rates)
- Create dashboard showing KR progress (weekly refresh)
- Set up automated alerts for metric regressions
- Example: If "Time to first value" is a KR, instrument onboarding funnel steps
Week 7–8: Stakeholder Communication Plan
- Create two roadmap views:
- Internal (Team): Full opportunity tree with hypotheses and experiments
- External (Sales/Customers): Now-Next-Later format focused on outcomes
- Train Sales and CS on outcome-based messaging: "We're investing in reducing onboarding friction" vs. "SSO ships in Q2"
- Create FAQ doc: "What if a customer asks for a specific feature?"
- Answer: "Share the outcome we're solving for and invite them to discovery"
Month 3: Test, Learn, Iterate
Week 9–10: Run First Experiments
- Ship smallest testable versions of solution bets (MVPs, prototypes, beta features)
- Monitor KR metrics weekly
- Conduct user interviews post-release: "Did this improve your workflow?"
- Document learnings in experiment log (what worked, what didn't, why)
Week 11–12: Update Roadmap Based on Evidence
- Review KR progress: Are you on track to hit targets?
- For underperforming bets: Pivot to alternative solution or kill initiative
- For successful bets: Double down and scale (expand to more customers, add complementary features)
- Update opportunity tree: Prune dead branches, add new opportunities from discovery
- Communicate changes transparently: "We paused Initiative X because Metric Y didn't move; we're testing Alternative Z"
Deliverable: Live roadmap with instrumented KRs, active experiments, and quarterly review cadence.
6. Design & Engineering Guidance
Product Design Patterns
1. Outcome-Driven Feature Specs
- Start every design brief with: "Job to be done," "Success metric," "Target outcome"
- Example: Instead of "Design SSO login flow," write: "Enable IT admins to provision 100+ users in <10 minutes (current: 4 hours via manual CSV upload). Success metric: Time from 'Add users' click to 'All users active' confirmation."
2. Hypothesis-Driven Prototyping
- Build lo-fi prototypes to test riskiest assumptions before high-fidelity design
- Example: If hypothesis is "Contextual tooltips will reduce support tickets by 40%," prototype tooltip system in Figma and test with 5 users before building
3. Instrumentation-First Design
- Design with analytics in mind: What user actions signal success?
- Embed tracking plan in design specs: "Track: Button clicks, modal dismissals, time on page, completion rate"
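One way to make the tracking plan a reviewable artifact is to express it as data in the spec itself, so design, engineering, and analytics sign off on the same document; the schema below is an illustrative sketch, not a required format:

```python
# Illustrative tracking plan as data, reviewed before any pixels or code exist.
from dataclasses import dataclass, field

@dataclass
class TrackedEvent:
    name: str                      # e.g., "onboarding.completed"
    trigger: str                   # user action that fires the event
    properties: list[str] = field(default_factory=list)
    success_signal: bool = False   # counts toward the KR?

TRACKING_PLAN = [
    TrackedEvent("button.clicked", "user clicks 'Add users'", ["button_id"]),
    TrackedEvent("modal.dismissed", "user closes the setup modal", ["step"]),
    TrackedEvent("onboarding.completed", "'All users active' confirmation",
                 ["duration_seconds"], success_signal=True),
]
```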
Engineering Patterns
1. Feature Flags for Hypothesis Testing
- Gate features behind flags to enable A/B tests and gradual rollouts
- Example: Launch SSO to 10% of Enterprise accounts, measure onboarding time vs. control group
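A hand-rolled sketch of that gradual rollout, hashing the account ID so each account keeps its assignment for the life of the test; in production you would more likely use a flag service such as LaunchDarkly or Unleash:

```python
# Deterministic percentage rollout: the same account always lands in the
# same bucket, which keeps treatment and control groups stable.
import hashlib

def in_rollout(account_id: str, flag: str, percent: int) -> bool:
    digest = hashlib.sha256(f"{flag}:{account_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0-99
    return bucket < percent

# Launch SSO to 10% of Enterprise accounts; the rest are the control group.
if in_rollout("acct_42", "sso_onboarding", percent=10):
    print("show SSO setup flow")   # treatment
else:
    print("show legacy CSV flow")  # control
```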
2. Telemetry for Outcome Metrics
- Emit events for KR-related actions: `onboarding.step_completed`, `feature.first_use`, `error.authentication_failed`
- Aggregate events into KR dashboards (see Section 8)
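A sketch of the aggregation step, assuming funnel events have been loaded from the event store into pandas; column and event names are illustrative:

```python
# Roll raw funnel events up into a weekly KR: average onboarding duration.
import pandas as pd

events = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a2"],
    "event": ["onboarding.contract_signed", "onboarding.first_user_login"] * 2,
    "timestamp": pd.to_datetime(
        ["2025-01-02", "2025-01-20", "2025-01-05", "2025-01-12"]),
})

start = events[events.event == "onboarding.contract_signed"]
end = events[events.event == "onboarding.first_user_login"]
durations = end.merge(start, on="account_id", suffixes=("_end", "_start"))
durations["days"] = (durations.timestamp_end - durations.timestamp_start).dt.days

# Weekly average, keyed by the week in which onboarding finished:
weekly_kr = durations.groupby(
    durations.timestamp_end.dt.to_period("W"))["days"].mean()
print(weekly_kr)
```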
3. API-First for Flexibility
- Design APIs to support multiple solution paths for same outcome
- Example: If outcome is "Reduce integration time," build webhook API that enables Zapier, SCIM, and custom integrations (don't over-invest in one path)
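A minimal sketch of that one-event, many-consumers pattern; the subscriber URLs and payload shape are hypothetical:

```python
# One normalized event schema that Zapier hooks, SCIM bridges, and custom
# consumers can all subscribe to, so no single path gets over-invested.
import json
import urllib.request

def dispatch_webhook(subscriber_url: str, event: str, data: dict) -> None:
    body = json.dumps({"event": event, "data": data}).encode()
    req = urllib.request.Request(
        subscriber_url, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# The same "user.provisioned" event serves every integration path:
for url in ["https://hooks.zapier.example/abc", "https://scim.example/sync"]:
    dispatch_webhook(url, "user.provisioned", {"user_id": "u_17"})
```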
Accessibility Considerations
- Ensure outcome metrics don't penalize accessibility features (e.g., "time to complete task" shouldn't exclude keyboard-only users who may take longer)
- Test hypotheses with diverse user groups: Include screen reader users, keyboard-only users in usability testing
- Outcome example: "Increase task completion rate to 95% for all users, including those using assistive tech"
Performance & Security
Performance:
- Treat performance as an outcome, not a feature: "Reduce mobile app cold start to <2s at the 95th percentile"
- Instrument: Page load time, API response time, time-to-interactive
Security:
- Frame security work as outcomes: "Achieve zero critical vulnerabilities in quarterly pentest" vs. "Implement input validation"
- Use compliance milestones as KRs: "Pass SOC 2 Type II audit by Q3"
7. Back-Office & Ops Integration
Why Back-Office Tools Need Outcome-Driven Roadmaps
Internal tools (admin panels, CS dashboards, billing systems) rarely get roadmaps. Teams default to feature requests from internal stakeholders. Outcome-driven roadmapping ensures back-office investments tie to business impact.
Example Outcome for Back-Office:
- Objective: Reduce customer churn by improving CS team effectiveness
- Outcome: Decrease average time-to-resolution for critical support tickets from 48 hours to 12 hours
- Opportunity: CS agents lack visibility into customer system health
- Solution Bet: Real-time health score dashboard with alerting
Back-Office Roadmap Anti-Patterns
- "Internal tools can wait": Back-office friction directly impacts customer experience. Slow CS tools delay support response; buggy admin panels create data errors customers see.
- No metrics: Internal tools often lack instrumentation. Outcome-driven roadmaps require measuring CS productivity, ops error rates, etc.
- Reactive firefighting: Without a roadmap, ops teams live in ticket hell. Outcomes like "Reduce manual reconciliation time by 80%" justify automation investment.
Operationalizing Back-Office Roadmaps
- Identify Ops Outcomes: Interview CS, Support, Finance, IT teams: "What manual work consumes most time? What errors cause customer escalations?"
- Instrument Internal Tools: Track task completion time, error rates, and user satisfaction (NPS for internal users)
- Tie to Customer Outcomes: Example: "Improving billing system accuracy reduces customer disputes (measurable via dispute ticket volume)"
8. Metrics That Matter
KR Metrics for Outcome-Driven Roadmaps
| Outcome Category | Key Result Metric | Baseline Example | Target Example | Measurement Frequency |
|---|---|---|---|---|
| Time-to-Value | Avg. days from contract signature to first successful user login | 45 days | 14 days | Weekly |
| Feature Adoption | % of accounts using feature X within 30 days of release | 8% | 30% | Weekly |
| Operational Efficiency | Avg. IT admin time to onboard 100 users | 4 hours | 30 minutes | Per onboarding event |
| Customer Satisfaction | NPS for specific workflow (e.g., reporting) | 32 | 50+ | Monthly survey |
| Revenue Impact | Expansion revenue from accounts using feature Y | $200K/quarter | $350K/quarter | Monthly |
| Support Efficiency | Support tickets related to [problem area] | 120/month | 30/month | Weekly |
| Reliability | % of API calls completing in <500ms | 78% | 95% | Real-time (SLO) |
| Compliance | Days to complete security questionnaire for deals | 12 days | 2 days | Per deal |
| Engagement | DAU/MAU ratio for mobile app | 0.28 | 0.45 | Daily |
| Error Reduction | % of onboarding attempts with zero errors | 62% | 90% | Weekly |
Leading vs. Lagging Indicators
- Lagging Indicators: Business outcomes you want to achieve (revenue, churn, NPS). Change slowly; hard to course-correct.
- Leading Indicators: User behaviors that predict lagging outcomes (feature adoption, onboarding completion, error rates). Change quickly; enable rapid iteration.
Example:
- Lagging: Reduce churn from 15% to 10% (annual)
- Leading: Increase % of customers completing onboarding in <14 days from 40% to 80% (monthly)
- Hypothesis: Faster onboarding predicts lower churn (validate with cohort analysis)
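A toy version of that cohort check; the data is fabricated for illustration, and a real analysis would control for segment, plan size, and tenure:

```python
# Compare churn for fast vs. slow onboarders to sanity-check the hypothesis.
import pandas as pd

cohort = pd.DataFrame({
    "onboarding_days": [9, 12, 13, 30, 41, 50, 11, 44, 38, 10],
    "churned_within_1y": [0, 0, 0, 1, 1, 0, 0, 1, 0, 0],
})
cohort["fast_onboarding"] = cohort.onboarding_days < 14

print(cohort.groupby("fast_onboarding")["churned_within_1y"].mean())
# If the fast cohort churns materially less, the leading indicator is worth
# managing toward; if not, revisit the hypothesis before investing further.
```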
Instrumentation Checklist
- Identify 3–5 leading indicators per outcome
- Implement event tracking in product (use Segment, Mixpanel, Amplitude, or custom)
- Build real-time dashboards (Looker, Tableau, Metabase)
- Set up automated weekly reports for stakeholders
- Define "success" and "alarm" thresholds for each metric
9. AI Considerations
AI-Assisted Roadmapping
1. Opportunity Discovery
- Use AI to analyze support tickets, NPS verbatims, and sales call transcripts to identify recurring pain points
- Tools: general-purpose LLMs (e.g., ChatGPT) for summarization, or specialized analysis tools like Thematic and Dovetail
- Example prompt: "Analyze 500 support tickets tagged 'onboarding' and identify top 5 themes"
2. Hypothesis Generation
- Feed AI your opportunity tree and ask for solution ideas
- Example prompt: "Given the outcome 'Reduce IT admin onboarding time by 75%,' suggest 10 potential solutions, ranked by likely impact and feasibility"
- Validate with customer interviews; AI generates options, humans validate
3. Metric Forecasting
- Use AI/ML to predict impact of initiatives based on historical data
- Example: "If we reduce onboarding time to 14 days, predict effect on 90-day retention using cohort data"
4. Roadmap Communication
- AI can draft stakeholder-specific roadmap narratives
- Example prompt: "Write a 2-slide roadmap summary for Sales, emphasizing customer outcomes, avoiding technical jargon"
AI-Powered Features as Outcomes
When AI is part of your solution bet:
- Outcome: "Reduce time to generate customer report from 30 minutes to 2 minutes"
- Solution Bet: "AI-powered report builder with natural language queries"
- Success Metric: Avg. time from 'Generate report' click to PDF download
- Risk: AI hallucination creates inaccurate reports → Mitigate with human-in-the-loop validation, accuracy metrics
10. Risk & Anti-Patterns
Top 5 Pitfalls in Outcome-Driven Roadmapping
1. Vanity Metrics as Outcomes
- Anti-Pattern: "Increase sign-ups by 50%" without tracking activation or retention
- Risk: You hit the metric but don't improve business (sign-ups don't convert)
- Fix: Pair leading indicators with lagging outcomes. Example: "Increase sign-ups by 50% AND increase 30-day retention from 40% to 60%"
2. Outcomes Without Baseline or Target
- Anti-Pattern: "Improve onboarding experience"
- Risk: No way to measure success; team debates whether work is done
- Fix: Always quantify: "Reduce avg. onboarding time from 45 days (baseline) to 14 days (target) within 90 days"
3. Selling Outcomes as Guaranteed Features
- Anti-Pattern: Sales promises "SSO ships in Q2" to close deal, PM is locked in
- Risk: Solution bet fails; PM forced to ship ineffective feature
- Fix: Sales communicates outcomes: "We're investing in reducing IT admin onboarding effort by 75% this quarter." PM retains flexibility on solution.
4. Ignoring Instrumentation Until After Launch
- Anti-Pattern: Ship feature, then realize you can't measure the outcome metric
- Risk: No data to validate hypothesis; can't learn or iterate
- Fix: Instrumentation plan in every design spec; analytics QA before launch
5. Outcome Overload: Too Many KRs
- Anti-Pattern: 15 outcomes for a single quarter; team is scattered
- Risk: Nothing moves meaningfully; team context-switches constantly
- Fix: Limit to 3–5 outcomes per quarter. Use opportunity trees to explore multiple solution paths for same outcome.
11. Case Snapshot
Before: Feature Factory at CloudOps Platform
Context: CloudOps, a B2B infrastructure monitoring SaaS, operated on a feature-driven roadmap. Sales and CS teams submitted feature requests; PM prioritized based on loudest voices. Engineering shipped 12 features per quarter but churn remained at 18%.
Symptoms:
- PM spent 60% of time negotiating feature requests
- Engineering complained: "We ship stuff no one uses"
- CS struggled to prove ROI at renewals: "What did you build for us?"
- Customer interviews revealed: "We don't need more features; we need faster onboarding"
Before Metrics:
- Avg. time-to-first-value: 52 days
- Feature adoption (features used by >10% of accounts): 4 out of 12 launched
- Renewal rate (Enterprise): 82%
- Engineering morale (internal survey): 6.2/10
After: Outcome-Driven Transformation (6-Month Journey)
Changes Implemented:
- Defined 3 Outcomes for H1:
- Outcome 1: Reduce time-to-first-value from 52 days to 14 days
- Outcome 2: Increase feature adoption (>10% usage) from 33% to 60%
- Outcome 3: Reduce customer-reported "critical" bugs from 8/month to <2/month
- Built Opportunity Tree for Outcome 1:
- Opportunity: IT provisioning delays
- Solution Bet 1: Pre-built SSO integrations (Okta, Azure AD)
- Solution Bet 2: Async bulk user import with progress tracking
- Instrumented Metrics:
- Added telemetry for onboarding funnel steps
- Created weekly KR dashboard for exec team
- Tracked feature adoption via usage analytics
- Stakeholder Communication:
- Trained Sales: "We're solving onboarding friction" vs. "We're building SSO"
- Created Now-Next-Later roadmap for customers
After Metrics (6 months post-implementation):
- Avg. time-to-first-value: 18 days (65% improvement; target: 14 days by end of year)
- Feature adoption: 58% (from 33%)
- Renewal rate (Enterprise): 89% (7-point increase)
- Engineering morale: 8.1/10
- Reduced feature output from 12 to 6 per quarter, but 5 out of 6 showed >20% adoption
Key Quote (Head of CS): "We can finally have ROI conversations with customers. We show them: 'You told us onboarding was painful; we cut it by 2/3. Here's the data from your account.'"
12. Checklist & Templates
Outcome-Driven Roadmap Launch Checklist
Strategy & Alignment
- Business objectives documented (from leadership/OKRs)
- 3–5 measurable outcomes defined per objective
- Baseline metrics captured for each outcome
- Stakeholder alignment session completed (Product, Eng, CS, Sales)
Discovery & Opportunity Mapping
- Customer interviews conducted (10+ for top outcome)
- Support ticket analysis completed (last 90 days)
- Sales loss/churn analysis reviewed
- Opportunity tree created for top 2 outcomes
- 3–5 opportunities identified per outcome
- 2–3 solution bets brainstormed per opportunity
Hypothesis & Experimentation
- Hypotheses written for top 3 solution bets (using template)
- Success metrics and targets defined
- Instrumentation plan documented
- Experiment design created (A/B test, prototype, beta)
- Experiment timeline: Start date, evaluation date
Communication & Governance
- Internal roadmap view created (full opportunity tree)
- External roadmap view created (Now-Next-Later format)
- Sales/CS training on outcome-based messaging completed
- FAQ document created for customer questions
- Weekly KR review cadence established
Instrumentation & Measurement
- Analytics instrumentation implemented (or tickets created)
- KR dashboard built and shared with stakeholders
- Automated alerts set up for metric regressions
- Quarterly roadmap review scheduled (calendar invite sent)
Template: Hypothesis Statement
## Hypothesis: [Solution Name]
**We believe:** [Specific solution/feature]
**Will achieve:** [Measurable outcome]
**For:** [Target user segment]
**As measured by:** [Metric and target]
- Baseline: [Current value]
- Target: [Goal value]
- Timeframe: [e.g., within 60 days of launch]
**Riskiest Assumption:**
[What has to be true for this to work?]
**How We'll Test:**
[Prototype, A/B test, beta program, etc.]
**Success Criteria:**
- Primary: [Main metric hits target]
- Secondary: [Other positive signals]
**Failure Criteria:**
[When do we kill this and pivot?]
**Owner:** [PM/Designer/Engineer]
**Start Date:** [YYYY-MM-DD]
**Evaluation Date:** [YYYY-MM-DD]
Template: Now-Next-Later Roadmap (External View)
## Product Roadmap: [Quarter/Year]
### Now (Current Focus)
**Outcome:** [What we're trying to achieve]
**Why it matters:** [Customer/business impact]
**How we'll measure success:** [1–2 key metrics]
**Expected timeline:** [Month/Quarter]
Example:
**Outcome:** Reduce IT admin onboarding effort by 75%
**Why it matters:** Customers report onboarding delays block time-to-value
**How we'll measure success:** Avg. onboarding time <30 min (from 4 hours)
**Expected timeline:** Q2 2025
---
### Next (Near-Term Direction)
**Outcome:** [What's coming soon]
**Why it matters:** [Customer/business impact]
**Confidence level:** [High/Medium/Low]
Example:
**Outcome:** Increase feature discovery and adoption for reporting tools
**Why it matters:** Only 15% of customers use advanced reporting; we're leaving value on the table
**Confidence level:** High (discovery completed; hypothesis testing in progress)
---
### Later (Strategic Options)
**Themes we're exploring:**
- [Theme 1]: [Brief description of outcome area]
- [Theme 2]: [Brief description of outcome area]
- [Theme 3]: [Brief description of outcome area]
Example:
- **Mobile-first experience:** Enable field teams to use platform on iOS/Android
- **AI-assisted insights:** Reduce time to identify system anomalies from hours to seconds
- **Ecosystem integrations:** Connect with tools customers already use (Slack, Jira, etc.)
**Note:** Later items are not commitments; they reflect strategic direction. Priorities may shift based on customer feedback and business needs.
Template: Opportunity Tree (Miro/FigJam)
[Business Objective]
|
├── [Outcome 1]
│ ├── [Opportunity 1.1]
│ │ ├── Solution Bet A
│ │ ├── Solution Bet B
│ │ └── Solution Bet C
│ ├── [Opportunity 1.2]
│ │ ├── Solution Bet D
│ │ └── Solution Bet E
│ └── [Opportunity 1.3]
│ └── Solution Bet F
│
└── [Outcome 2]
├── [Opportunity 2.1]
│ └── Solution Bet G
└── [Opportunity 2.2]
├── Solution Bet H
└── Solution Bet I
Instructions:
- Start at the top with business objective
- Branch into 2–4 measurable outcomes
- For each outcome, identify opportunities (problems/friction)
- For each opportunity, brainstorm 2–3 solution bets
- Prioritize solution bets based on evidence, effort, risk
- Update tree monthly based on learnings
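For teams that outgrow the whiteboard, the same tree can be kept as data, which makes it easy to prune branches, diff quarterly revisions, or export to a visual tool; this is an illustrative sketch, not a prescribed schema:

```python
# Opportunity tree as plain data structures (field names are illustrative).
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    problem: str
    solution_bets: list[str] = field(default_factory=list)

@dataclass
class Outcome:
    key_result: str
    opportunities: list[Opportunity] = field(default_factory=list)

tree = {
    "objective": "Increase ARR by $5M",
    "outcomes": [
        Outcome(
            "Reduce time-to-first-value from 45 to 14 days",
            [Opportunity("IT provisioning friction",
                         ["SCIM auto-provisioning", "Pre-built SSO"])],
        ),
    ],
}

# Pruning a dead branch after a failed experiment is a one-line edit:
tree["outcomes"][0].opportunities[0].solution_bets.remove("Pre-built SSO")
```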
13. Call to Action
3 Actions for the Next 5 Days
Day 1–2: Audit and Align
- Export your current roadmap (Jira, Productboard, spreadsheet, wherever it lives)
- For each initiative, ask: "What measurable customer or business outcome does this achieve?"
- Schedule 60-minute session with Product, Engineering, and CS leads: "What are the top 3 outcomes we need to drive this quarter?"
- Document those 3 outcomes with baseline metrics and targets
Day 3–4: Build Your First Opportunity Tree
- Pick your highest-impact outcome from Day 1–2
- Gather evidence: Review support tickets, interview 3 customers, talk to CS team
- Identify 3–5 specific opportunities (friction points, gaps, problems)
- Brainstorm 2–3 solution bets per opportunity (don't commit to building yet)
- Create visual opportunity tree in Miro, FigJam, or on whiteboard
Day 5: Write Hypotheses and Start Measuring
- Select 1–2 solution bets to test first (pick based on evidence and feasibility)
- Write hypothesis statements using template in Section 12
- Define success metrics: What data proves this worked?
- Create tickets for analytics instrumentation (don't ship features you can't measure)
- Share hypothesis doc with stakeholders and ask: "What are we missing? What assumptions are we making?"
Bonus: Start the Conversation
- Post in your team Slack: "We're shifting to outcome-driven roadmapping. Here's our first outcome: [X]. Feedback welcome."
- Update next roadmap slide deck to lead with outcomes, not features
- At next customer call, try: "We're investing in [outcome]" instead of "We're building [feature]"
Remember: You don't need perfect instrumentation or a complete opportunity tree to start. Begin with one outcome, one opportunity tree, and one hypothesis. Learn. Iterate. The goal isn't a perfect roadmap; it's a roadmap that gets you closer to measurable impact every sprint.
End of Chapter 15