5 steps to create an evidence-based, automated prioritization model with feedback loops

Ruben de Boer
6 min read · Aug 11, 2021

Conversion rate optimization has been around for a while. Over the years, we went in-depth on statistics, psychology, server-side testing, and automation. However, a topic that has not been touched upon a lot is prioritization.

Sure, many companies moved from a basic framework like PIE and ICE to a more complete framework, like PXL. Most companies have been using such a model for years. Perhaps it is time to optimize and validate our model to realize a more significant impact on the business goals.

In this article, I propose a partly automated, evidence-based prioritization framework with a double feedback loop to run better A/B tests and find more winners.

PIE, ICE & PXL frameworks

We have been using PIE, ICE, and PXL-related frameworks for a long time, and not just in conversion rate optimization: growth hackers use such models as well.

The best thing about these models is their simplicity. Models like PIE and ICE require only three numbers to get your prioritization score. PXL-related frameworks require approximately ten numbers, but as this model is much more fact-based, it has been my favorite framework for years. Still, all these models have a few downsides.

First of all, PIE and ICE are entirely subjective. We give subjective scores to each attribute. Take ‘potential’ within the PIE framework or ‘confidence’ within ICE, for instance. With an A/B test win rate of, let’s say, 25%, how confident can you be? How certain are you about its potential?

Second, there is too much focus on Ease. Within PIE and ICE, Ease contributes 33.3% of the overall score! That surely kills innovative experiments: if something is hard to build, it ends up at the bottom of the backlog. In the PXL framework, Ease has less impact on the overall score, yet it is still the attribute that can get the highest score of all ten attributes. Ease is, of course, important to keep a high testing velocity. However, complex experiments, such as new features, can make a greater impact. A combination of both is essential.

Third, there is little to no alignment with the business. I assume that when you run experiments, the main goal is the same as the goal of the business. Still, aligning with current business OKRs (objectives and key results) or OGSMs (objectives, goals, strategies, and measures) helps you run relevant experiments, which in turn helps with the adoption of experimentation in the business. In most prioritization models, there is no check on the KPI.

And fourth, perhaps most important for PXL-related frameworks: there is a huge lack of evidence and feedback. For instance, in the PXL model, ideas related to issues found in qualitative feedback get a higher prioritization score. However, this might not lead to better experiments. Perhaps in your situation, ideas based on qualitative feedback have a low win rate, yet you keep giving these ideas a higher prioritization score, dragging down your experimentation win rate. Another example is ideas related to user motivation. The PXL framework gives these ideas a higher score, but perhaps experiments related to ability result in many more A/B test winners.

5 steps to create the basics of your new prioritization model

We need a prioritization model that helps us make better decisions, so we run better A/B tests and get better insights, while maintaining the simplicity of the current models. We also need it to be evidence-based, automated to some extent, and equipped with a (double) feedback loop based on the success of finalized experiments.

Step 1. Document the psychological direction for each experiment

Data at Online Dialogue shows that your win rate goes up when you apply psychology in your experimentation program and document it properly. Therefore, the first step is to document the psychological direction of each experiment. For this, you can use the psychological model you prefer.

The most straightforward model is the Fogg Behavior Model. For each experiment, document whether you try to increase motivation, increase ability, or use a prompt.

You could also use Online Dialogue’s Behavioral Online Optimization Method (BOOM) (article in Dutch) or the model I propose in my Online Psychology for Conversion Optimization course on Udemy.
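As a minimal sketch of what this documentation could look like (the field names and records are purely illustrative, not a prescribed Airtable schema), each finished experiment becomes a record that captures its page and psychological direction alongside the outcome:

```python
# Hypothetical experiment log. "direction" follows the Fogg Behavior Model:
# motivation, ability, or prompt. "uplift" is the relative conversion uplift
# on the main KPI (0.0 for losing or inconclusive tests).
experiments = [
    {"name": "Sticky CTA",         "page": "home",     "direction": "prompt",     "winner": True,  "uplift": 0.062},
    {"name": "USP bullets",        "page": "home",     "direction": "motivation", "winner": False, "uplift": 0.0},
    {"name": "Exit-intent prompt", "page": "home",     "direction": "prompt",     "winner": True,  "uplift": 0.040},
    {"name": "Shorter form",       "page": "checkout", "direction": "ability",    "winner": True,  "uplift": 0.048},
    {"name": "Trust badges",       "page": "checkout", "direction": "motivation", "winner": False, "uplift": 0.0},
]
```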

Step 2. Calculate the win rate and impact of each direction for each page

Once you have documented the psychological direction, you can calculate the win rate and impact (average conversion uplift per winner on your most important KPI) for each direction on each page.

At Online Dialogue, we use Airtable as our documentation tool with all our clients. In this tool, it is easy to make these calculations. And as we document everything in Airtable, including experiment results, automation of prioritization scores is effortless (see next step). Of course, you can use the tool you prefer.

Example of an Airtable image
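If you keep such a log outside Airtable, the same calculation is a simple group-by. A sketch in Python, continuing the hypothetical experiments list from step 1 (win rate = winners divided by experiments, impact = average uplift among winners):

```python
from collections import defaultdict

def win_rate_and_impact(experiments):
    """Per (page, direction): win rate and average uplift per winner (impact)."""
    groups = defaultdict(list)
    for exp in experiments:
        groups[(exp["page"], exp["direction"])].append(exp)

    stats = {}
    for key, exps in groups.items():
        winners = [e for e in exps if e["winner"]]
        win_rate = len(winners) / len(exps)
        impact = sum(e["uplift"] for e in winners) / len(winners) if winners else 0.0
        stats[key] = {"win_rate": win_rate, "impact": impact}
    return stats

stats = win_rate_and_impact(experiments)
# e.g. stats[("home", "prompt")] -> {"win_rate": 1.0, "impact": 0.051}
```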

Step 3. Use the scores as the start of your prioritization model and automate (first feedback loop)

The next step is to set up your prioritization model. The start of your model will be the score from the previous step.

For the win rate, you can multiply the number by 10, so a win rate of 41.5% becomes 4.15 points. For the impact, you can multiply the number by 100, so an average uplift per winner of 5.1% becomes a score of 5.1.

Based on the screenshot above, every experiment idea on your backlog that is a prompt on the home page will get a score of 4.15 + 5.1 = 9.25.

Of course, these scores should update automatically. After every experiment, the win rate will change, and after every winning experiment, the impact could change. Your documentation tool should do these calculations automatically and adjust the prioritization score of the ideas on your backlog.

Again, with Airtable, this is relatively easy.
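Outside Airtable, this first feedback loop boils down to a few lines. A sketch using the hypothetical stats dictionary from step 2 (re-running it after every finished experiment keeps the backlog scores current):

```python
def base_priority(stats, page, direction):
    """Base score = win rate x 10 + average uplift per winner x 100."""
    s = stats.get((page, direction), {"win_rate": 0.0, "impact": 0.0})
    return s["win_rate"] * 10 + s["impact"] * 100

# A win rate of 41.5% and an average uplift per winner of 5.1%:
example_stats = {("home", "prompt"): {"win_rate": 0.415, "impact": 0.051}}
print(round(base_priority(example_stats, "home", "prompt"), 2))  # 9.25
```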

Step 4. Add other attributes applicable to your business

Next, you might want to add additional attributes that apply to your business.

Examples:

  • Alignment with business goals and OKRs (important test goals get a higher score)
  • Percentage of traffic that will see the change (above the fold gets a higher score)
  • Minimum detectable effect (lower MDE gets a higher score)
  • Revenue going through the page (a higher percentage receives a higher score)
  • Urgency (more urgent, means a higher score)
  • Ease (make sure to balance easy and complex tests for velocity and impact)

There are three things to keep in mind here. First, ensure that the win rate and impact have the highest weight in the overall priority score. These are based on your previous experiments and should be the best predictor for your next experiment.

Second, don’t add too many attributes. This will slow down the prioritization process.

Third, score these extra attributes as you see fit for your experimentation program, and optimize the scoring in step 5.
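One way to combine the evidence-based base score with such extra attributes, while keeping win rate and impact dominant, is a weighted sum. The weights and attribute names below are only an illustration of that balance, not a prescribed scoring scheme:

```python
def priority_score(base, extra_scores, extra_weight=0.5):
    """Combine the evidence-based base score with additional attributes.

    base         -- win rate + impact score from step 3 (full weight)
    extra_scores -- dict of attribute name -> score on a 0-10 scale,
                    e.g. {"okr_alignment": 8, "traffic": 6, "ease": 4}
    extra_weight -- weight for the extra attributes, kept well below 1.0 so
                    the evidence-based part remains the best predictor
    """
    extra_avg = sum(extra_scores.values()) / max(len(extra_scores), 1)
    return base + extra_weight * extra_avg

print(priority_score(9.25, {"okr_alignment": 8, "traffic": 6, "ease": 4}))
# 9.25 + 0.5 * 6.0 = 12.25
```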

Step 5. Validate and optimize the model (second feedback loop)

We are optimizers! We analyze data and optimize. Why don’t we do this for our prioritization model?

With the proper documentation tool, or with an export function, you can create a pivot table. On the vertical axis, state the priority scores (or a range of scores) of all completed experiments. On the horizontal axis, state the win rate and average impact for these experiments.

The experiments with the highest priority score should have the highest win rate and impact. If that is not the case, adjust your model. For example, change the extra attributes’ scoring or put more weight on the win rate and impact scores.

Example of a pivot table
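With an export of completed experiments, this validation table takes only a few lines with pandas. A sketch, assuming a hypothetical CSV export with columns priority_score, winner (0/1 or True/False), and uplift:

```python
import pandas as pd

# Hypothetical export of completed experiments with their original priority score.
df = pd.read_csv("completed_experiments.csv")  # columns: priority_score, winner, uplift

# Bucket priority scores into ranges, then compare win rate and average uplift per bucket.
df["score_range"] = pd.cut(df["priority_score"], bins=[0, 5, 10, 15, 20])
pivot = df.groupby("score_range").agg(
    win_rate=("winner", "mean"),      # share of winners per score range
    avg_uplift=("uplift", "mean"),
    experiments=("winner", "size"),
)
print(pivot)  # higher score ranges should show a higher win rate and uplift
```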

Better prioritization for better decisions

A more successful experimentation program will create enthusiasm in your organization for experimentation and validation.

The success of your program is often determined by the number of A/B test winners and valuable insights from your experiments. To run the best possible experiments, a proper prioritization framework is essential.

Our prioritization models should be simple, evidence-based, automated to some extent, with a (double) feedback loop based on the success of finalized experiments.

As mentioned, prioritization has not received much attention. With this post, I hope more organizations will start using an evidence-based model, aligned with their business goals, for more success.


Ruben de Boer

As a CRO consultant and online teacher, Ruben works with organizations on a daily basis to set up CRO programs and create a culture of experimentation.