How to truly learn from your experiments: Meta-analysis done right

Ruben de Boer
5 min read · Sep 9, 2022


Learning from your experiments is essential for your success. But are you really learning from your experiments? Are you really getting more knowledgeable about your customers?

With an easy tweak, we can truly learn much more while also heavily decreasing our biases. The goal is to use meta-analyses to know what hypothesis to address in which step of the customer journey and how. This will help you become much more successful.

This article will show you how, including tips on setting it up in your documentation tool (like Airtable).

The meta-analysis

When working on experimentation, it is important to be aware of the hierarchy of evidence. This model comes from science and is essential for experimentation.

In science, the pyramid is used to rank the strength of results obtained from scientific research. For experimentation, the hierarchy of evidence looks like this.

Higher in the pyramid means a higher quality of proof and, thus, more reliable results and insights.

On top of the pyramid is the meta-analysis. In science, a meta-analysis can be performed when multiple scientific studies address the same question. With each individual study reporting measurements that are expected to have some degree of error, the meta-analysis aims to derive a pooled estimate closest to the truth.

The same applies to experimentation. A single A/B test is prone to errors. You could, for instance, have a false positive: the test declares a winner when in reality there is no difference. An A/B test could also produce a winner for a reason other than the one stated in your hypothesis.

As a single A/B test can be prone to errors, we conduct a meta-analysis to get the closest to the truth about what drives our visitors and our conversion rates.
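The compounding risk of false positives can be sketched in a few lines of Python. This assumes the common 5% significance threshold; the article itself does not fix a number:

```python
# Sketch: why a single A/B test can mislead. The 5% significance
# threshold (alpha) is an assumed convention, not from the article.
alpha = 0.05  # false-positive rate of one test with no real effect

# Probability that at least one of n tests on changes with no real
# effect still reports a "winner":
for n in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** n
    print(f"{n:>2} tests -> P(>=1 false positive) = {p_at_least_one:.2f}")
```

With 10 such tests, the chance of at least one spurious winner is already around 40%, which is why pooling many experiments gets you closer to the truth than trusting any single result.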

Meta-analysis applied

In previous articles on automated, evidence-based prioritization (article 1, article 2), I covered tagging the psychological direction of every experiment you run.

For this, you can use the psychological model you prefer. The most straightforward model is the Fogg Behavior Model. For each experiment, document whether you try to increase motivation, increase ability, or use a prompt.

You could also use the model I propose in my Online Psychology for Conversion Optimization course on Udemy: cognitive ease, motivation & risk, attention & perception, and choices & memory.

If you also document the page where each experiment ran, you can conduct a meta-analysis to see which direction is successful on which page, given the right documentation tool (below is an example from Airtable).

This is your first meta-analysis. Multiple A/B tests show what psychological directions work in which stage of the customer journey. Here we see that prompts and increasing ability work very well on the home page.
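As a sketch of how this first meta-analysis could be computed outside a documentation tool, the following Python groups experiment records by page and psychological direction. All records and field names here are illustrative, not taken from the article's Airtable setup:

```python
# Minimal sketch of the page x psychological-direction meta-analysis.
# The experiment log below is made-up example data.
from collections import defaultdict

experiments = [
    {"page": "home", "direction": "prompt",     "winner": True},
    {"page": "home", "direction": "prompt",     "winner": True},
    {"page": "home", "direction": "ability",    "winner": True},
    {"page": "home", "direction": "motivation", "winner": False},
    {"page": "pdp",  "direction": "motivation", "winner": True},
    {"page": "pdp",  "direction": "prompt",     "winner": False},
]

# Pool results per (page, direction): number of tests and wins.
stats = defaultdict(lambda: {"tests": 0, "wins": 0})
for exp in experiments:
    cell = stats[(exp["page"], exp["direction"])]
    cell["tests"] += 1
    cell["wins"] += exp["winner"]

for (page, direction), cell in sorted(stats.items()):
    win_rate = cell["wins"] / cell["tests"]
    print(f"{page:<5} {direction:<11} {cell['tests']} tests, "
          f"{win_rate:.0%} win rate")
```

The output is the same page × direction grid the Airtable view gives you; a documentation tool simply maintains it automatically as experiments are tagged.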

But we can take it a step further and learn much more!

Behavioral hypothesis

So far, we know what kind of experiments (psychological directions) work where in the customer journey (pages).

We can make this far more insightful by adding general, recurring customer problems as an extra dimension for meta-analysis. The result: you know which customer problem to address where (page) and how (psychological direction)!

To do so, we first need to craft general, overarching customer problems. To get these, cluster insights from A/B tests and research that belong together.

Next, we create hypotheses from these problems, called behavioral hypotheses. A behavioral hypothesis states something about your visitors’ behavior, needs, and motivations. These should be general statements, not tied to a single adjustment.


  • Customer problem: Visitors have a hard time finding the right products.
  • Behavioral hypothesis: By making it easier for the visitor to find the right products, transactions will go up.
  • Customer problem: Visitors require social proof and feel the need to belong.
  • Behavioral hypothesis: By increasing social proof, more visitors are motivated to purchase.
  • Customer problem: Visitors are hesitant to purchase due to feelings of uncertainty regarding the product, delivery, and terms.
  • Behavioral hypothesis: By providing certainty, transactions will increase.
  • Customer problem: Visitors have a hard time choosing the right product.
  • Behavioral hypothesis: When including more guidance and advice on the right product, visitors are more likely to convert.

Aim for five to ten behavioral hypotheses based on your research.

Next, validate these hypotheses with multiple A/B tests on multiple pages.

For instance, if we take the behavioral hypothesis ‘By providing certainty, transactions will increase,’ we could think of several experiments (preferably, base these tests on previous experiments and your research):

  • Display trademarks on the cart page.
  • Show the return policy more prominently on the product detail page (PDP).
  • State the delivery date in the cart.
  • Show microcopy below the call-to-action buttons in the checkout stating that a subscription can be canceled at any time.
  • Etc.

For every experiment, document the page, psychological direction, and behavioral hypothesis.


Once you document it this way, you know which hypothesis is successful on which page, and which kind of experiment tackles it best.

For inspiration, in Airtable, your meta-analyses could look like this, depending on your setup.

For this hypothesis, prompt experiments work great. However, ability does not seem to solve the customer’s problem.

This problem does not seem worth addressing on the list page. On the product page, however, it works very well, with both a high win rate and a high uplift per winner.
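The same pooling idea extends to the behavioral-hypothesis level. The sketch below computes win rate and average uplift per winner, grouped by hypothesis and page; the data and field names are again made up for illustration:

```python
# Sketch of the behavioral-hypothesis meta-analysis: win rate and
# average uplift per winner, per (hypothesis, page). Example data only.
from collections import defaultdict

experiments = [
    {"hypothesis": "certainty", "page": "pdp",  "winner": True,  "uplift": 0.06},
    {"hypothesis": "certainty", "page": "pdp",  "winner": True,  "uplift": 0.04},
    {"hypothesis": "certainty", "page": "list", "winner": False, "uplift": 0.0},
    {"hypothesis": "certainty", "page": "cart", "winner": True,  "uplift": 0.03},
]

groups = defaultdict(list)
for exp in experiments:
    groups[(exp["hypothesis"], exp["page"])].append(exp)

results = {}
for (hyp, page), exps in sorted(groups.items()):
    winners = [e for e in exps if e["winner"]]
    win_rate = len(winners) / len(exps)
    avg_uplift = (sum(e["uplift"] for e in winners) / len(winners)
                  if winners else 0.0)
    results[(hyp, page)] = (win_rate, avg_uplift)
    print(f"{hyp} @ {page}: {win_rate:.0%} win rate, "
          f"{avg_uplift:.1%} avg uplift per winner")
```

Reading such a table row by row tells you exactly where a hypothesis pays off and where it does not, which is the judgment the article draws from its Airtable views.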

Start learning from your experiments and increase your success

With meta-analyses, you know which hypothesis to address in which step of the customer journey and how to do so. You increase the quality of your experimentation program, decrease the influence of your (team’s) biases, and create more enthusiasm within the organization by sharing validated insights.

In short, follow these steps:

  1. Find the five to ten most important customer problems based on your research and completed experiments.
  2. Craft your customer problems into hypotheses (called behavioral hypotheses).
  3. For every experiment, document the page, psychological direction, and behavioral hypothesis.
  4. Set up your meta-analyses in your documentation tool.
  5. Enjoy a lot of extra validated insights. :-)




Ruben de Boer

As a CRO consultant and online teacher, Ruben works daily with organizations to set up CRO programs and create a culture of experimentation.