Mobile A/B Testing Results Analysis: Statistical Significance, Confidence Level and Intervals

Statistical principles of SplitMetrics mobile A/B testing

The aim of mobile A/B testing is to check whether a modified version of an app page element performs better than the control variation in terms of a certain KPI. In the context of app store page A/B testing, conversion rate is the core KPI most of the time.

However, it’s not enough to create an experiment with 2 variations, run a dozen users through it, and expect distinct, trustworthy results. What, then, turns a split test into an A/B test you can trust?

The main characteristic that defines a successful A/B experiment is high statistical significance, which means you can reasonably expect to get the conversion increase the test promised once you upload the winning variation to the store.
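To make the idea of statistical significance concrete, here is a minimal sketch of how it is commonly checked for two conversion rates: a two-proportion z-test comparing the control (A) and the modified variation (B). The function name and all traffic and conversion numbers below are illustrative assumptions, not SplitMetrics data or its actual calculation.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return (z statistic, two-sided p-value) for a conversion-rate difference."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis "no real difference".
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 420 installs out of 5,000 visitors for the control,
# 505 installs out of 5,000 visitors for the modified variation.
z, p = two_proportion_z_test(420, 5000, 505, 5000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
# A p-value below 0.05 corresponds to the commonly used 95% confidence level.
```

In this made-up example the p-value is well below 0.05, so the observed lift would be considered statistically significant at the 95% confidence level; with only "a dozen users" the same lift would not be.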