Small Bets and Fast Validation

Prioritization frameworks help teams make better decisions about what to build or work on next. These structured approaches combine data and experience to rank tasks, features, or projects in a consistent, defensible way.

Why Use Prioritization Frameworks?

Teams often struggle to choose between competing options, especially when resources are limited. Good prioritization helps:

  • Make consistent, objective decisions

  • Compare different types of work fairly

  • Communicate choices clearly to stakeholders

  • Balance short-term needs with long-term goals

Common Prioritization Frameworks

ICE Scoring: Rates items based on three factors:

  • Impact: Potential benefit to users or business (1-10)

  • Confidence: How sure we are about the impact (1-10)

  • Ease: How simple it is to implement (1-10)

$$\text{ICE score} = \frac{\text{Impact} \times \text{Confidence} \times \text{Ease}}{10}$$
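
As a quick sanity check, here is a minimal Python sketch of this formula. The ice_score helper and the example ratings are illustrative assumptions, not part of any library or official scoring tool.

def ice_score(impact: int, confidence: int, ease: int) -> float:
    # Each factor is a 1-10 rating, per the definitions above.
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    return impact * confidence * ease / 10

# Hypothetical ratings for 'Auto-save in mobile note-taking app'
print(ice_score(impact=7, confidence=8, ease=6))  # 33.6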

AI Prompt:

Estimate the ICE score for the feature idea:
'Auto-save in mobile note-taking app.'

1. Briefly describe:
- Impact: Who benefits and how?
- Confidence: Why this confidence level? Any key assumptions or risks?
- Ease: What makes this simple or complex to implement?

2. Score each factor (1-10) and calculate the ICE score.

3. Keep the response concise: max 2 sentences per factor.

RICE Scoring: Adds reach and replaces ease with an explicit effort estimate:

  • Reach: Number of users affected per time period

  • Impact: Effect per user (0.25, 0.5, 1, 2, 3)

  • Confidence: Confidence in the estimates (0-100%, applied as a fraction in the formula)

  • Effort: Estimated person-months of work

$$\text{RICE score} = \frac{\text{Reach} \times \text{Impact} \times \text{Confidence}}{\text{Effort}}$$
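
The same idea in Python, using the onboarding example from the prompt below. The rice_score helper is an illustrative sketch; note that the 70% confidence enters the formula as 0.7.

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    # confidence is a fraction (70% -> 0.7); effort is in person-months.
    if not 0 < confidence <= 1:
        raise ValueError("confidence must be a fraction in (0, 1]")
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return reach * impact * confidence / effort

# 'Add progress bar to onboarding': 10,000 users/month, impact 2, 70%, 1.5 months
print(round(rice_score(10_000, 2, 0.7, 1.5), 1))  # 9333.3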

AI Prompt:

Calculate the RICE score for: 'Add progress bar to onboarding' with:

- Reach: 10,000 users/month
- Impact: 2
- Confidence: 70%
- Effort: 1.5 months

1. Explain each factor briefly: why these values? Any uncertainties?
2. Show the RICE calculation step by step.
3. Based on the score, recommend: prioritize, keep on backlog,
   or investigate further. Keep explanations concise.

MoSCoW Method: Groups items into four categories:

  • Must have: Critical for success

  • Should have: Important but not vital

  • Could have: Nice to have if resources permit

  • Won't have: Not planned for this timeframe
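
One lightweight way to keep such a categorization in code is a simple mapping from backlog items to categories. The assignments below are illustrative assumptions for the items in the prompt that follows, not the "right" answers.

from enum import Enum

class MoSCoW(Enum):
    MUST = "Must have"
    SHOULD = "Should have"
    COULD = "Could have"
    WONT = "Won't have"

# Illustrative assignments only; real categories depend on product context.
backlog = {
    "password reset": MoSCoW.MUST,      # users are locked out without it
    "billing export": MoSCoW.SHOULD,    # important, but there are workarounds
    "usage dashboard": MoSCoW.COULD,    # nice to have if resources permit
    "dark mode": MoSCoW.WONT,           # not planned for this timeframe
}

for item, category in backlog.items():
    print(f"{item}: {category.value}")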

AI Prompt:

Categorize these backlog items using MoSCoW: 
'password reset', 'dark mode', 'billing export', 'usage dashboard.'

1. For each item, assign a category (Must, Should, Could, Won’t) based on:
- User/business value
- Urgency/timeline
- Potential risks if omitted

2. Highlight any items that might need further discussion (e.g., if priority is unclear).
3. Keep each justification to 1-2 sentences.

Validating Prioritization Decisions

After using a framework to prioritize:

  • Check scores against team gut feel

  • Review past similar decisions and outcomes

  • Get input from different team perspectives

  • Set checkpoints to evaluate results

Common Prioritization Biases

  • Recency bias: “Just discussed → higher score.” Fix: wait 3 days, then revisit.

  • Effort bias: “Easy tasks score too high.” Fix: rate impact before effort.

  • Stakeholder bias: “VIP asks outrank others.” Fix: apply the same criteria to all requests.

Real-World Example: Feature Scoring

A team used RICE to prioritize three features:

Feature A - Customer Search:

  • Reach: 5000 users/month

  • Impact: 2 (moderate improvement)

  • Confidence: 80%

  • Effort: 2 person-months

  • RICE Score: 4000

Feature B - Export Reports:

  • Reach: 1000 users/month

  • Impact: 3 (major improvement)

  • Confidence: 90%

  • Effort: 1 person-month

  • RICE Score: 2700

Feature C - Dark Mode:

  • Reach: 10000 users/month

  • Impact: 0.5 (minor improvement)

  • Confidence: 100%

  • Effort: 3 person-months

  • RICE Score: 1667

Decision: The team chose Feature A, which had the highest RICE score, keeping Feature B as a backup if the estimates changed.
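
A few lines of Python reproduce this ranking; the numbers come straight from the example above, with confidence expressed as a fraction.

features = {
    "Feature A - Customer Search": (5_000, 2, 0.8, 2),
    "Feature B - Export Reports":  (1_000, 3, 0.9, 1),
    "Feature C - Dark Mode":       (10_000, 0.5, 1.0, 3),
}

# RICE = Reach * Impact * Confidence / Effort, highest first
ranked = sorted(
    ((reach * impact * confidence / effort, name)
     for name, (reach, impact, confidence, effort) in features.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{name}: {score:,.0f}")
# Feature A - Customer Search: 4,000
# Feature B - Export Reports: 2,700
# Feature C - Dark Mode: 1,667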

Key Takeaways

  • Use ICE for quick, simple scoring of similar items

  • Choose RICE when user reach varies significantly

  • Apply MoSCoW for initial project scope decisions

  • Combine frameworks with team experience for best results

  • Watch for biases that can affect scoring accuracy

  • Check results and adjust scoring methods as needed
