— 27 Apr 2023

Calculating Sample Size for A/B Testing: Formulas, Examples & Errors

Gabriel Kuriata

One of the most popular questions app publishers ask our customer success managers is how much traffic they need to acquire for their experimental landing pages so that their A/B tests are valid and paint a reasonably accurate picture.

The answer is simple: you need to accumulate enough visits to reach a result with a confidence level appropriate for your test. The testing methodology, the objective of a particular test, and your business goals determine the minimum required sample size (which in practice means the minimum number of visits to a target destination, be it an app’s product page, a custom product page, or a simulation page used in the early stages of development).

Unfortunately, there is no magic number that fits every single experiment. The optimal traffic volume for mobile A/B testing also depends on factors such as the traffic source, the app’s conversion rate, and targeting. There can be significant differences between apps from various App Store categories, which may influence the total cost of testing.

Now let’s get to the main point: how do you determine the sample size for A/B tests? With sophisticated systems, automation takes most if not all of this work off your back (as is the case with our A/B testing service, SplitMetrics Optimize). You can run a sample size calculator if you wish, but it’s good to have a full understanding of the topic, as sample size has a considerable effect on checking the significance of the observed difference in variation performance.


In this post, we explore three areas related to the problem of calculating the right sample size for mobile app experiments and A/B tests. First, we focus on the business aspect. For a more in-depth perspective, we also review a widely used sample size calculation method that helps you make statistically valid decisions based on the results of your mobile A/B testing. Additionally, for the nerd in all of us, we included some math and formulas – we know that among our beloved readers there are people who like to look at the machinery at work.

What is the right sample size for my experiment?

In the world of mobile marketing – app growth and user acquisition in particular – the term “sample size” refers to the number of visitors sent to a given variation of an app’s product page on an app store during an A/B test.

The higher the number of people who view and have a chance to interact with your variations (custom product pages on the App Store or custom store listings on Google Play), the more reliable your test results will be.

From a business perspective, the right sample size for your test is one that is financially viable and allows you to achieve results with a trustworthiness level adequate to a specific goal. This is why the sequential A/B testing method is recommended for pre-launch validation of big design ideas – despite the significantly higher traffic required to reach the minimal confidence level of 80%. It’s also the reason why the cheapest method, the multi-armed bandit, might be best for small seasonal changes, and why the Bayesian method is the industry’s gold standard.

Calculating the minimum sample size required

In our case, choosing the right sample size is all about ROI, not discovering the meaning of life (besides, we know it’s 42, thanks to “The Hitchhiker’s Guide to the Galaxy”). In most cases, we’re striving to find a measurable conversion rate difference… at minimum cost.

To better understand how it all works, we’ll start by showing how it’s done in our platform for A/B testing, SplitMetrics Optimize.

Determining the right A/B test sample size in SplitMetrics Optimize

Return on investment. This is how the biggest app publishers approach the matter, and it makes sense. For a publisher whose ad budget exceeds $200k a month on Apple Search Ads alone, A/B tests are an integral part of the “growth engine”: they come in large numbers and need to stay within the boundaries of a certain budget, yet deliver what is expected of them. This is directly reflected in how our A/B testing platform, SplitMetrics Optimize, works.

An example of a finished A/B test in SplitMetrics Optimize, showing a significant improvement of the winning variation over the baseline conversion rate. Read the full Etermax case study.

As you’d expect, the entire process of calculating the right sample size is standardized and automated. The system requires a certain amount of traffic to return statistically significant results.

In the context of cost-effective A/B testing that’s expected to drive meaningful decisions, a word on the Minimum Detectable Effect (MDE) is necessary. It’s the minimum improvement over the conversion rate of the existing asset (the baseline conversion rate) that you want the experiment to detect. By setting the minimum detectable effect, you define the conversion rate increase sufficient for the system to declare the new asset the winner. This parameter has a dramatic effect on the amount of traffic required to reach statistical significance. Our system can calculate the MDE, but we strongly recommend setting it yourself: the right value depends on your own risks – the money you’re ready to allocate for traffic acquisition and the time you can let the experiment run.
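To get a feel for how dramatically the MDE affects traffic requirements, here is a minimal Python sketch based on the pooled-variance normal-approximation formula presented later in this post. The 20% baseline conversion rate and the MDE values are illustrative assumptions, not SplitMetrics Optimize internals:

```python
import math
from scipy.stats import norm

def sample_size_per_variation(baseline_cr, mde_abs, alpha=0.05, power=0.80):
    """Visitors needed per variation for a one-tailed two-proportion z-test
    (pooled variance), given an absolute minimum detectable effect."""
    target_cr = baseline_cr + mde_abs
    p_bar = (baseline_cr + target_cr) / 2        # average ("pooled") proportion
    z_alpha = norm.ppf(1 - alpha)                # one-tailed critical value
    z_power = norm.ppf(power)                    # Z-score for the chosen power
    n = (z_alpha + z_power) ** 2 * 2 * p_bar * (1 - p_bar) / mde_abs ** 2
    return math.ceil(n)

# Illustrative baseline conversion rate of 20%:
for mde in (0.02, 0.04, 0.06, 0.08):
    print(f"MDE {mde:.0%} -> {sample_size_per_variation(0.20, mde)} visitors per variation")
```

Note how halving the MDE roughly quadruples the required traffic – detecting small improvements gets expensive quickly.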

Additionally, SplitMetrics Optimize offers three methods of A/B testing apps’ product pages.

Bayesian
  • Best for: live apps, periodic & minor changes
  • Interpretability: improvement level, winning probability, chance to beat control
  • Key benefit: the golden middle methodology
  • Recommended sample size*: 500 per variation

Sequential
  • Best for: pre-launch, big changes & ideas
  • Interpretability: improvement level, confidence level
  • Key benefit: high confidence
  • Recommended sample size*: 1000-1500 per variation

Multi-armed bandit
  • Best for: seasonal experiments
  • Interpretability: weight
  • Key benefit: speed & low cost
  • Recommended sample size*: 250 for the least performing variation
*Please note that a certain threshold has to be met first to activate the mechanism: 250 for the Bayesian methodology, 500 for the sequential, and 250 for the multi-armed bandit.

In the case of the sequential methodology, the minimal required sample size also depends on the preset confidence level. In each case, a test can be continued after reaching the minimal threshold.

How sample size influences the trustworthiness of mobile A/B testing results

Why is choosing the right sample size so important? Why does understanding how it works matter for a marketer relying on automated platforms such as SplitMetrics Optimize? Why 500 visits and not 250? We’ll discuss this using the example of the MSQRD app:

Screenshot variations for the MSQRD app.

Let’s assume MSQRD decided to check whether a changed order of screenshots (variation B) yields a better conversion rate, and presume that we got the following results after sending 200 distinct users to each variation:

  • Variation A – 40 converted users;
  • Variation B – 57 converted users.
Calculating sample sizes with a mobile A/B testing calculator.

Thus, the observed difference in variation performance is statistically significant at the 95% confidence level. The picture above shows the result of such a validation performed using an online mobile A/B testing calculator.

Now let’s imagine we didn’t finish the experiment after reaching the above-mentioned result and continued driving traffic. When each variation got 500 users, we got the following results:

  • Variation A – 101 converted users;
  • Variation B – 127 converted users.

In this case, the significance check shows that the observed performance difference is no longer statistically significant at the 95% confidence level.
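Both checks can be reproduced with a standard two-proportion z-test with pooled variance. This is a minimal sketch of the kind of calculation such online calculators perform, not the exact SplitMetrics implementation:

```python
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled proportion
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))          # two-tailed p-value
    return z, p_value

# 200 users per variation: significant at the 95% level (p < 0.05)
print(two_proportion_z_test(40, 200, 57, 200))    # z ≈ 1.98, p ≈ 0.047
# 500 users per variation: no longer significant (p ≥ 0.05)
print(two_proportion_z_test(101, 500, 127, 500))  # z ≈ 1.96, p ≈ 0.050
```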

Calculating sample sizes again with a mobile A/B testing calculator.

Is the example we examined realistic? Sure, it is.

For instance, if the true conversion rates of variations A and B are 20% and 26% respectively, both values fall within the corresponding confidence intervals in the cases with 200 as well as 500 visitors per variation.

Based on this example, if we finished the experiment after reaching 200 visitors for each variation, we could conclude that variation B performed better. However, if we finished the test after 500 visitors on each product page variant, we would conclude that the two variations are interchangeable. Pretty confusing, isn’t it?

This raises a legitimate question:

How many users do we need to run trustworthy mobile A/B tests?

Thus, we need to figure out what sample size is necessary for getting statistically significant results in the course of our mobile A/B testing.


How to calculate A/B testing sample size

Now, let’s review how to calculate a sample size for A/B tests based on statistical hypothesis testing.

First, we need to understand what the null hypothesis really is. In mobile A/B testing, the null hypothesis is normally the assumption that the difference between the performances of variations A and B equals zero.
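Formally, with the conversion rate as the KPI, the hypotheses can be written as:

$$H_0:\ CR(B) - CR(A) = 0$$
$$H_1\ (\text{one-tailed}):\ CR(B) - CR(A) > 0 \qquad\qquad H_1\ (\text{two-tailed}):\ CR(B) - CR(A) \neq 0$$

Which alternative hypothesis applies depends on whether the test is one- or two-tailed – a choice we return to below.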

It has been theoretically proven that the sample size required to accept or reject the null hypothesis for a KPI expressed as a proportion (conversion rate in our case) depends on the following 5 parameters:

  1. the conversion rate of our control variation (variation A);
  2. the minimum difference between the conversion rates of variations A and B that is to be detected;
  3. the chosen confidence / significance level;
  4. the chosen statistical power;
  5. the type of the test: one- or two-tailed.

Determining sample sizes for mobile A/B testing

Let’s clarify the above-mentioned parameters and determine the sample size for our MSQRD example:

  • The conversion rate of variation A: 20% (CR(A) = 0.2);
  • The conversion rate of variation B: 26% (CR(B) = 0.26).

Thus, the conversion rate value of our control variation A is 20% (CR(A) = 0.2). Our example presumes that:

  • the minimum difference between the conversion values of variations A and B is 6% in absolute terms;
  • variation B performed better than variation A (CR(B) = 0.26).

In the course of sample size determination, some A/B testing calculators request the minimum conversion rate difference in relative terms instead of absolute. In our example, the minimum difference of 6% in absolute terms corresponds to a relative difference of 30% (30% of the 20% baseline: 0.2 × 0.3 = 0.06).

As clarified in our post on mobile A/B testing results analysis, the confidence level and the significance level should sum to 100%. Let’s choose a confidence level of 95% and a significance level of 5% for our MSQRD example, as these are the values most commonly used in A/B tests.

For math lovers: A/B test sample size formula

There are multiple methods to calculate a sample size, but let us focus on two: one-tailed and two-tailed. The choice depends on what we want to check.

  • A one-tailed test is used if we want to check the significance of an observed positive difference in the variations’ conversion rates (i.e. our goal is to replace variation A with variation B if the latter has a better conversion rate).
  • A two-tailed test is used if we want to check whether CR(B) and CR(A) differ at all (i.e. we are interested in both a positive and a negative difference).

Example 1: Calculating sample size for one-tailed tests

Here’s an A/B test sample size formula for one-tailed tests:

$$n_1 = \frac{\left(Z_{\alpha} + Z_{1-\beta}\right)^2 \cdot 2\,\bar{p}\,(1-\bar{p})}{\left(CR(B) - CR(A)\right)^2}, \qquad \text{where } \bar{p} = \frac{CR(A) + CR(B)}{2}$$

Example 2: Calculating sample size for two-tailed tests

In the case of two-tailed testing, we use the following A/B testing sample size formula:

$$n_2 = \frac{\left(Z_{\alpha/2} + Z_{1-\beta}\right)^2 \cdot 2\,\bar{p}\,(1-\bar{p})}{\left(CR(B) - CR(A)\right)^2}$$

n1 – the number of visitors for each of variations A and B in the case of a one-tailed test;

n2 – the number of visitors for each of variations A and B in the case of a two-tailed test;

Z – the standard score, or Z-score.

The only difference between these two A/B testing sample size formulas is that Z(α) is used in the first one while the second uses Z(α/2).

The values of Z(α), Z(α/2) and Z(1-β) can be calculated with the help of the Excel function NORM.S.INV:

Z-score calculations with the Excel function NORM.S.INV.

Making these calculations for the MSQRD example mentioned above and rounding the results, we get the following values:

Mobile A/B testing sample size calculations.
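If you prefer code to Excel, the same Z-scores and sample sizes can be reproduced in Python with scipy’s norm.ppf (the equivalent of NORM.S.INV). This sketch simply plugs the MSQRD numbers into the two formulas above:

```python
from scipy.stats import norm

cr_a, cr_b = 0.20, 0.26                # conversion rates of variations A and B
alpha, power = 0.05, 0.80              # significance level and statistical power
p_bar = (cr_a + cr_b) / 2              # average of the two conversion rates

z_alpha = norm.ppf(1 - alpha)          # Z(α)   ≈ 1.64, like NORM.S.INV(0.95)
z_alpha_2 = norm.ppf(1 - alpha / 2)    # Z(α/2) ≈ 1.96, like NORM.S.INV(0.975)
z_power = norm.ppf(power)              # Z(1-β) ≈ 0.84, like NORM.S.INV(0.8)

variance = 2 * p_bar * (1 - p_bar)
effect = (cr_b - cr_a) ** 2

n1 = (z_alpha + z_power) ** 2 * variance / effect    # one-tailed
n2 = (z_alpha_2 + z_power) ** 2 * variance / effect  # two-tailed
print(round(n1), round(n2))                          # 608 772
```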

These calculations can also be made with the free-to-use software G*Power.

Calculating the sample size with G*Power.

Therefore, if CR(A) is 20% and the estimated CR(B) value is at least 26%, we’ll have to run our experiment until each variation gets 608 distinct visitors to check statistical significance at the 5% significance level with 80% statistical power. Thus, the experiment needs 1216 visitors in total.

In case we are interested in both positive and negative conversion rate differences, the results will be slightly different.

If CR(A) is 20% and we need to detect a 6% difference in absolute terms, we’ll have to send 772 distinct visitors to each variation to check statistical significance at the 5% significance level with 80% statistical power. Therefore, this mobile A/B test needs 1544 visitors in total.

Thus, if n is the sample size calculated according to the method described above, then to compare variation B with control variation A we’ll need n visitors for variation A and n visitors for variation B (2*n visitors in total).

But what should we do if the experiment we run has 3 variations?
