Managing conflicting priorities with RICE

When you have a big pile of ideas and conflicting views on their importance, how do you distinguish between what you SHOULD build and what you WANT to build?
2 February 2022 by Leigh Garland, Doubly Good

I recently joined a project as acting PM, working with various conflicting groups of stakeholders. I inherited a huge backlog of features held dear to the hearts of key people. But these were not well described, had little supporting evidence, and were all “super-mega-high priority”. 😱

On top of this, my stakeholders struggled to understand what a product manager was. They tended to think of me as a coordinator rather than a decision maker. They felt they should be able to tell me what the most important thing was, and that it was my job to “make it happen”.

I’m sure this isn’t unusual for many PMs, but in this case, the volume of noise, conflicting voices and a very pressing deadline meant we needed an approach that would:

  • Be simple for stakeholders to understand
  • Give ‘fair’ feedback, so competing stakeholders felt they were being treated equally
  • Be quick to run

For this I chose “RICE scoring”.

Invented at Intercom, RICE is an acronym for “Reach, Impact, Confidence, Effort”, and is a great way to help prioritise early-stage ideas.

  • REACH is “how many relevant users will be affected by the change?”
  • IMPACT is “how much impact will this make to those users’ experience?”
  • CONFIDENCE is “how certain are you of your REACH and IMPACT scores?”
  • EFFORT is “how much work will it take to deliver the change?”

There’s a more detailed breakdown on the Intercom blog, but for our purposes, I built a simple RICE calculator in a spreadsheet and presented it to our stakeholders in our next triage session.
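The calculation itself is the easy part: multiply Reach, Impact and Confidence together, then divide by Effort, so big, well-evidenced wins float up and expensive long shots sink. Here’s a minimal sketch of what sat behind that spreadsheet (the function name and example figures are illustrative, using Intercom’s usual scales):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Return (Reach * Impact * Confidence) / Effort, per Intercom's formula.

    reach      -- relevant users affected per period, e.g. 500
    impact     -- impact per user on Intercom's scale: 0.25 (minimal) to 3 (massive)
    confidence -- certainty in the estimates, as a fraction: 0.5, 0.8 or 1.0
    effort     -- person-months of work; must be positive
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# A feature reaching 500 users, with high impact (2), 80% confidence
# and two person-months of work:
print(rice_score(reach=500, impact=2, confidence=0.8, effort=2))  # 400.0
```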

It started well, but gradually it became clear that the ‘confidence’ score was contentious. Stakeholders always felt totally confident about their figures (despite never actually having evidence). They also feared that a feature’s first RICE score would stick to it forever.

I decided to refactor the tool to give a little more context around the score. I added a text summary that we could copy-paste into our tickets (so we could revisit the scoring later) and that hinted the RICE score was temporary.

“RICE score is 150.19. At this point, we estimate this will reach over 80% of relevant users, with major impact. We have good confidence that this will be effective, and our effort is likely to be large.”

The “At this point” phrase did the trick. It helped our session participants remember that a RICE score could be revised with better evidence.
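In code, generating that summary looked something like the sketch below. The band thresholds and phrasing here are stand-ins rather than the exact values from my spreadsheet, and reach is treated as a percentage of relevant users to match the summary’s wording:

```python
def describe(value: float, bands: list[tuple[float, str]]) -> str:
    """Return the phrase for the first band whose threshold the value meets."""
    for threshold, phrase in bands:
        if value >= threshold:
            return phrase
    return bands[-1][1]  # fall through to the lowest band

def rice_summary(reach: float, impact: float, confidence: float, effort: float) -> str:
    """Render a RICE score as a copy-pasteable, deliberately tentative summary."""
    score = (reach * impact * confidence) / effort
    reach_txt = describe(reach, [(80, "over 80% of relevant users"),
                                 (50, "over half of relevant users"),
                                 (0, "a small group of users")])
    impact_txt = describe(impact, [(2, "major"), (1, "moderate"), (0, "minimal")])
    conf_txt = describe(confidence, [(0.8, "good"), (0.5, "medium"), (0, "low")])
    effort_txt = describe(effort, [(3, "large"), (1, "moderate"), (0, "small")])
    return (f"RICE score is {score:.2f}. At this point, we estimate this will "
            f"reach {reach_txt}, with {impact_txt} impact. We have {conf_txt} "
            f"confidence that this will be effective, and our effort is likely "
            f"to be {effort_txt}.")

print(rice_summary(reach=85, impact=2, confidence=0.85, effort=3.5))
# -> "RICE score is 41.29. At this point, we estimate this will reach over 80%
#     of relevant users, with major impact. We have good confidence that this
#     will be effective, and our effort is likely to be large."
```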

After a few more sessions, I realised there were a few phrases that I was repeating fairly often. So I also added a “Questions” section, which would deliberately provoke discussions around the evidence or value of the feature.

  • “Are we confident we can measure the impact?”
  • “How can we learn more about the work, in order to understand how big it is?”

One thing I did was use an exponential sequence for the score options. This had the effect of making poorly evidenced features look very small next to well-defined ones, which could score in the hundreds.
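The exact sequence matters less than its shape. As a rough illustration (these values are stand-ins, not the ones from my spreadsheet), each step down an exponential scale halves a factor, and because RICE multiplies its factors, a couple of low picks shrink a score geometrically:

```python
# Stand-in exponential option scales. Because RICE multiplies its factors,
# weak, poorly evidenced ideas end up an order of magnitude below
# well-defined ones, rather than a handful of points behind.
IMPACT = {"minimal": 0.25, "low": 0.5, "medium": 1, "high": 2, "massive": 4}
CONFIDENCE = {"low": 0.25, "medium": 0.5, "high": 1.0}

reach, effort = 100, 1  # hold reach and effort equal for comparison

strong = reach * IMPACT["massive"] * CONFIDENCE["high"] / effort   # 400.0
vague = reach * IMPACT["minimal"] * CONFIDENCE["low"] / effort     # 6.25
print(strong, vague)  # a 64x gap between the two
```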

Now my stakeholders felt they were on a level playing field with other groups, and could clearly see the process for prioritising features in the backlog. RICE turned out to be a helpful tool for managing backlogs and stakeholders’ expectations.

There are LOTS of different tools out there, and you should find one that suits your situation. I’ve converted that spreadsheet into a simple online RICE score calculator for anyone to use. If you try it out, please do send me some feedback! 👋
