6 Best A/B Scoring Models

Prioritize your team's test hypotheses


You probably have a long list of A/B testing ideas on your roadmap, but you can't test everything at once: every team has a limited amount of traffic and budget.

Focus on the ideas with the greatest potential impact rather than testing small details.

How do you do this? Prioritize! Here are our six favorite A/B test scoring models:


1. PIE Model: Potential, Importance, and Ease


Chris Goward at Widerfunnel created the PIE framework. The model scores each page across three factors:

  • Potential: Uses your web analytics and customer data to identify how much room for improvement a page has.
  • Importance: Focuses on the pages with the highest volume and costliest traffic.
  • Ease: Considers how difficult or easy it would be to implement a test on a page.

Each factor is scored from 1 – 10, and the factors are averaged together to determine the PIE score.
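The PIE calculation is just the mean of the three factor scores. A minimal sketch in Python (the example scores are made up):

```python
# PIE score: each factor rated 1-10, PIE is the simple average.
def pie_score(potential, importance, ease):
    for score in (potential, importance, ease):
        if not 1 <= score <= 10:
            raise ValueError("Each PIE factor is scored from 1 to 10")
    return (potential + importance + ease) / 3

# Example: a page with big upside (8), high traffic (9),
# and a reasonably easy test to build (7).
print(pie_score(8, 9, 7))  # 8.0
```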


2. ICE Model: Impact, Confidence, and Ease


The ICE model was invented by GrowthHackers’ founder, Sean Ellis. The model asks three simple questions:

  • Impact: What will the impact be if this works?
  • Confidence: How confident am I that this will work?
  • Ease: What is the ease of implementation?

Like PIE, the ICE model scores each factor out of 10 and averages the three scores to determine the ICE score. You can learn more about ICE from Sean’s 2015 Startup Fest presentation.
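Because the point of a scoring model is to rank your backlog, a sketch is more useful when it orders several ideas at once. Here the hypothesis names and scores are invented for illustration:

```python
# ICE score: average of Impact, Confidence, Ease (each out of 10),
# then sort the backlog from highest score to lowest.
def ice_score(impact, confidence, ease):
    return (impact + confidence + ease) / 3

ideas = {
    "Simplify checkout form": (9, 6, 4),
    "New hero headline": (5, 7, 9),
    "Add trust badges": (4, 8, 10),
}
ranked = sorted(ideas, key=lambda name: ice_score(*ideas[name]), reverse=True)
for name in ranked:
    print(f"{ice_score(*ideas[name]):.1f}  {name}")
```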


3. T.I.R. Model: Time, Impact, and Resources


Bryan Eisenberg created the T.I.R. model. Factors include:

  • Time: How much time is needed to execute the project? Less time needed earns a higher score.
  • Impact: What is the projected revenue potential or cost savings from the project?
  • Resources: How readily available are the resources the project requires?

Each of the three factors is scored out of 5, and the scores are multiplied together for the total.
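Multiplying rather than averaging means a single low factor drags the total down sharply. A quick sketch (example scores are made up):

```python
# T.I.R. score: each factor rated 1-5, scores multiplied together.
# For Time, less time needed gets the higher score.
def tir_score(time, impact, resources):
    for score in (time, impact, resources):
        if not 1 <= score <= 5:
            raise ValueError("Each T.I.R. factor is scored from 1 to 5")
    return time * impact * resources

print(tir_score(4, 5, 3))  # 60 (out of a maximum of 125)
```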


4. Hotwire’s Points Model


Hotwire uses binary scoring (0 or 1 point) across a number of factors, including:

  • Main metric: Focuses on the company’s primary metric.
  • Location: Focuses on important pages for the company.
  • Fold: Makes a change above the fold.
  • Targeting: Includes 100% of customers in the test.
  • New information: Adds or removes an element from the page.
  • Benchmarking: Borrows from learnings elsewhere.

This model was outlined by Pauline Marol and Josephine Foucher in an Optimizely post.
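A sketch of the points model, assuming the total is simply the sum of the binary points (the factor names mirror the list above; the example answers are made up):

```python
# Hotwire-style points: 1 if the test idea meets the criterion, else 0.
factors = {
    "main_metric": 1,      # touches the company's primary metric
    "location": 1,         # runs on an important page
    "fold": 0,             # change is below the fold
    "targeting": 1,        # includes 100% of customers
    "new_information": 1,  # adds or removes an element
    "benchmarking": 0,     # not borrowed from a learning elsewhere
}
score = sum(factors.values())
print(score)  # 4
```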


5. PXL by CXL


PXL also uses a binary scoring method. Similar to Hotwire’s model, this model assigns 1s or 0s to a number of factors:

  • Above the fold
  • Designed to increase user motivation
  • Running on high traffic pages
  • Addressing insights discovered in user testing, qualitative feedback, digital analytics, or mouse/eye tracking

The PXL model also weights these factors more heavily:

  • Noticeable within 5 seconds
  • Adding or removing an element
  • Ease of implementation
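A sketch of PXL-style scoring: each factor gets a 0 or 1, and the heavier factors multiply that answer by a weight. The weight of 2 used here is an assumption for illustration; CXL's actual template defines its own weights. The example answers are made up:

```python
# PXL-style scoring: (answer, weight) per factor; the weight of 2 on
# the last three factors is an illustrative assumption, not CXL's spec.
checks = {
    "above_the_fold": (1, 1),
    "increases_motivation": (0, 1),
    "high_traffic_page": (1, 1),
    "addresses_research_insight": (1, 1),
    "noticeable_in_5_seconds": (1, 2),
    "adds_or_removes_element": (0, 2),
    "easy_to_implement": (1, 2),
}
score = sum(answer * weight for answer, weight in checks.values())
print(score)  # 7
```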


6. Customizable model by Experiment Zone


All organizations have different goals, limitations, and strengths, so it’s naive to think the same scoring model will work for all teams.

Why not personalize one of these and make a model that works best for your company? With Experiment Zone, you can easily create and customize your own model.

  • You can create up to 25 factors.
  • For each factor, choose whether it is binary or on a scale (5- or 7-point scales are common).
  • You can also weight the factors if they aren’t equally important to your team.
  • You can update the model seasonally or as the business focus changes without having to re-evaluate every test idea.
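To make the idea concrete, here is one way a custom weighted model could work: normalize each factor's score to its own scale, then take the weighted average. The factor names, scales, and weights below are illustrative only, not Experiment Zone's defaults:

```python
# Custom weighted model sketch: each factor has its own scale and
# weight (all values here are hypothetical examples).
factors = [
    # (name, score, max_scale, weight)
    ("Revenue impact", 6, 7, 3.0),
    ("Evidence strength", 4, 5, 2.0),
    ("Easy to build", 1, 1, 1.0),  # binary factor
]
# Normalize each score to 0-1, then take the weighted average.
total_weight = sum(weight for _, _, _, weight in factors)
score = sum((s / scale) * w for _, s, scale, w in factors) / total_weight
print(round(score, 3))  # 0.862
```

Because scores are normalized before weighting, you can mix binary and scaled factors in one model, and adjusting a weight later doesn't force you to re-score every idea.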

Want to give it a try? Sign up for a 30-day free trial today.

What scoring model does your team use? Are we missing any from the list? Please leave a comment below.

Get our tips and tricks for Experience Optimization sent to your inbox!