RICE Prioritization

TL;DR

RICE Score = Reach × Impact × Confidence × Ease.

Overview

One of the hardest problems in growth is figuring out what to work on next. Most teams have a backlog of ideas that looks something like this:

[Image: an example backlog of growth experiment ideas]

How do you prioritize? RICE is one framework teams use.

Ultimately, it boils down to opportunity cost. If you run one experiment, it means you won’t be able to run another, so picking the best next experiment matters a lot.

Quick definitions

  • Reach: how many people does this touch?
  • Impact: if this works, how big is the impact?
  • Confidence: how likely is this to succeed?
  • Ease: how much work will it take to run?

How it’s used

In a growth meeting, you’ll discuss as a team how to score each backlog experiment. Each category gets a score from 1 to 10, where 1 is bad and 10 is good.

Is the final score totally scientific? No. But in the process of fleshing everything out, you’ll often realize that one experiment is an order of magnitude better than another.

(By the way, it’s very easy for backlogs to get long. You can have a lot of ideas. We find it saves time to just use your gut to pick your 3 to 5 favorite ideas if the backlog is long, then score those. You’ll build your intuition the more you score ideas.)
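To make the scoring concrete, here’s a minimal Python sketch of scoring and sorting a backlog. The experiment names and 1-10 scores are made up for illustration:

```python
# Minimal sketch: score and rank a backlog of experiments with RICE.
# Every name and score below is a made-up example.

def rice_score(reach, impact, confidence, ease):
    """Each input is a 1-10 score; higher is better."""
    return reach * impact * confidence * ease

backlog = {
    "Send waitlist reminder email": (3, 4, 7, 9),
    "Run Google Ads on competitor keywords": (7, 8, 5, 6),
    "Build in-product referral program": (6, 9, 4, 2),
}

# Highest score first -- the top item is your next experiment.
for name, scores in sorted(backlog.items(), key=lambda kv: -rice_score(*kv[1])):
    print(f"{rice_score(*scores):>5}  {name}")
```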

Here’s more detail on RICE.

Reach

This is the most quantifiable one. It also takes the most work.

You’ll want to make some assumptions and boil the experiment down to how much it will ultimately move your North Star Metric.

For example: 5000 waitlist members * 5% email click-through rate * 1% site conversion = 2.5 Monthly Active Users

Here’s some more detail.

Pretend your North Star Metric is “Monthly Active Users.”

Let’s say you’re scoring the experiment “Send a reminder email to everyone on the waitlist”.

To figure out reach, do some back-of-the-napkin math with a few assumptions.

  1. Look up the number of people who are on the waitlist. In our example, let’s say this is about 5000 people.
    • You can get this number from, e.g., MailChimp or Metabase.
  2. Let’s assume 5% of people who read our emails click the link.
  3. Let’s assume 1% of people who land on the site convert into an active user.
    • How do you come up with 1%? You might look at Google Analytics data from the last month (assuming everything is tracked and set up).
  4. Do the math: 5000 * 5% * 1% = 2.5 Monthly Active Users (written out in the sketch below)
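Here’s that math written out as a quick script. All the inputs are the assumptions from the steps above:

```python
# Back-of-the-napkin reach estimate for the waitlist reminder email.
waitlist_size = 5000     # from MailChimp / Metabase
email_ctr = 0.05         # assumption: 5% of readers click the link
site_conversion = 0.01   # assumption: 1% of visitors become active users

reach = waitlist_size * email_ctr * site_conversion
print(reach)  # 2.5 Monthly Active Users
```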

Compare that to another experiment, say “Run Google Ads targeting a competing tool”.

  1. Look up how expensive the bids are for keywords around the competing tool and take a rough average. Let’s say the average is about $5 (to make the math easier).
    • You can find real numbers for this using Google’s Keyword Planner tool. Ping your mentor if you’re having trouble.
  2. Take your company’s budget. Most seed-stage startups budget $5,000 per ad channel as an initial test.
  3. Divide your budget by the average cost per click. $5,000 divided by $5 = 1,000.
  4. That means we’ll get 1,000 clicks to our site.
  5. Make sure enough people search per month to justify getting 1,000 clicks. You generally need at least 20x more searches than clicks. In our example, the keyword gets about 246,000 searches per month, which is way more than 20 * 1,000 = 20,000, so we’re fine.
  6. Assume 1% of landing page visits turn into an active user.
    • This number can come from Google Analytics, like above.
  7. Do the math: 1,000 * 1% = 10 Monthly Active Users (see the sketch below)
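And the same thing for the ads experiment. The $5 CPC, $5,000 budget, 246,000 monthly searches, and 1% conversion are the assumptions from the steps above:

```python
# Back-of-the-napkin reach estimate for the Google Ads experiment.
avg_cpc = 5                 # rough average bid, from Keyword Planner
budget = 5000               # typical seed-stage test budget per channel
monthly_searches = 246_000  # from Keyword Planner
site_conversion = 0.01      # assumption, from Google Analytics

clicks = budget / avg_cpc                # 1,000 clicks
assert monthly_searches >= 20 * clicks   # sanity check: enough search volume
reach = clicks * site_conversion
print(reach)  # 10.0 Monthly Active Users
```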

The Google Ads experiment clearly has a higher reach than the waitlist email, so you’d score it higher, say a 7 vs. a 3.

You’ll usually get a feel for how to map these numbers onto the 1-10 scale after doing it for a few experiments.

Impact

We want experiments that can scale, are repeatable, and feed a growth loop.

What can scale? Cold outreach tests that eventually turn into a full sales team. Digital ads with a large audience. Landing page improvements on pages that could reach millions of people.

What’s not as scalable? Tiny markets. Tweeting from the founder’s account. Reaching out to the 3 influencers in your space.

What's repeatable? Sending a weekly newsletter. Running Google Ads that target new people searching every month. Creating a template for your sales team to close deals.

What's not as repeatable? Launching to Product Hunt. Getting press or going viral (in most cases). Anything one-off.

Rough scores

  • 8-10: feeds a growth loop, and is repeatable and scalable
  • 5-7: checks two of the three boxes
  • 1-4: checks one box or none
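If it helps, you can think of this rubric as counting boxes checked. Here’s a rough sketch; the exact scores it returns (9, 6, 3, 1) are illustrative picks from the bands above, not part of the framework:

```python
# Rough mapping from the three impact criteria to a 1-10 score.
def impact_score(feeds_growth_loop, repeatable, scalable):
    boxes = sum([feeds_growth_loop, repeatable, scalable])
    return {3: 9, 2: 6, 1: 3, 0: 1}[boxes]

print(impact_score(True, True, True))    # 9: checks all three boxes
print(impact_score(False, True, False))  # 3: checks one box
```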

Confidence

How likely is this to succeed? Inform this with your knowledge from past experiments, the industry, user learnings, and gut feel.

Map your percentage confidence to a 1-10 score.

Teams often overlook marketer-channel fit. If your background is in sales, you’ll probably be good at cold-calling. If you used to be a journalist, content and PR may be your superpower. Put simply: you’ll do a better job with channels that fit your background and personality.

One last thing teams undervalue: excitement. Some ideas are just more fun to work on, and you tend to do a better job on them. Give those a 2-3 point bump.

Rough scores

  • 10: 100% confident and excited
  • 5: 50% confident, semi-excited
  • 1: 10% confident, not excited
  • Etc.
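One way to turn this into a formula, as a hedged sketch: divide your percentage confidence by 10, then bump for excitement. The 3-point bump and the cap at 10 are assumptions based on the guidance above:

```python
# Map percentage confidence to a 1-10 score, with an excitement bump.
def confidence_score(percent_confident, excited=False):
    base = round(percent_confident / 10)  # 50% -> 5, 100% -> 10
    bump = 3 if excited else 0            # the 2-3 point bump from above
    return max(1, min(10, base + bump))

print(confidence_score(50))                 # 5
print(confidence_score(50, excited=True))   # 8
print(confidence_score(100, excited=True))  # 10 (capped)
```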

Ease

“Ease” is how many resources an experiment takes and how quickly you can launch it.

For example, if you want to build a referral program into the product, that may take:

  • Designer time
  • Engineering time
  • Making sure attribution works
  • Picking a reward
  • Delivering that reward (e.g., if your reward is branded company swag, you have to find a sweatshirt supplier, come up with a system to automatically ship sweatshirts, etc.)
  • Etc.

Compare that to posting in a Google Group:

  • You write the post
  • You hit “Submit”

Rough scores

  • 8-10: Doesn’t take long, something one person can do on their own without other teams involved.
  • 5-7: Other teams are probably involved, may take a week or so to set up.
  • 3-4: Multi-week project, probably involving multiple teams.
  • 1-2: Major change to the product, usually involves getting a significant part of the company on board.

Again, the goal is to compare experiments in rough orders of magnitude, not to compute precise scores.