Why Negative Results from App A/B Testing May Weigh More than Positive Ones


The fact is that not all app A/B experiments show a boost in app conversions. The two other types of results app developers often get from mobile A/B testing are:

  • No difference between the optimized and control versions.
  • The optimized version performs worse than the control one.

App developers often interpret such zero results as a sign that app A/B testing brings them no benefit. In reality, if a hypothesis performs worse or shows no difference, A/B testing lets you see it in time and saves you the money you would have spent implementing those changes on the live version of the app page.

It also gives you an understanding of which direction leads to your final goal: conversion improvement. Based on the data from unsuccessful tests and App Store analytics, app developers keep developing ideas that will ultimately bring significant results.

Rachelle Garnham, a digital marketing manager at Mallzee, shares the company's case study on gaining valuable audience behavior insights after running a series of tests with negative results.

What’s behind Mallzee’s decision to run app A/B tests?

Mallzee is the UK’s top non-retailer shopping app for iOS and Android, helping users quickly find and buy clothes from hundreds of high street brands. After overwhelming user growth in 2016, Q1 2017 at Mallzee was about further refining each area of our marketing strategy.

With good organic visibility and very strong paid acquisition channels, we identified App Store conversion optimization as a key area of focus that would have an impact across our entire acquisition strategy.

How did you come up with A/B testing ideas?

We formed a thorough plan for our tests, keeping each change minimal so that its impact could be understood clearly, and expecting to iterate and adapt the plan after each result. We tested multiple areas, from screenshots and the description to the icon and titles.

For our tests, we chose the SplitMetrics platform, as it allowed us to build tests very quickly and iterate on our results.

Our icon tests have been particularly interesting. We tested inverting our logo colors for a more vibrant and appealing icon, assuming that this would improve conversion.

Did you see results you expected?

In fact, this variation was 14% less effective than the original. Quite a significant difference for such a small change.

Similarly, we trialed a more fashion-focused icon, with the original this time proving 28% more effective.

Whilst it was initially disappointing to see that our new ideas weren’t having a positive impact, testing elements like this has been invaluable, even when the results are negative. It’s the type of change we would previously have made without testing, assuming it wouldn’t have such an impact, and we would have lost valuable conversions to download.


This taught us the importance of testing any assumptions before implementing them live on the App Store.

We ultimately iterated through enough versions to find success with our brand-name logo, which is now live on the App Store and has improved conversion by 10%.
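For readers who want to reproduce figures like “14% less effective” or “10% improved conversion” from their own test data, here is a minimal Python sketch of how such relative-lift numbers can be computed from raw counts and sanity-checked with a two-proportion z-test. The install and impression counts in it are hypothetical, not Mallzee’s or SplitMetrics’ data.

```python
# Minimal sketch (hypothetical counts, not Mallzee's or SplitMetrics' data) of
# how a relative-lift figure such as "14% less effective" can be derived and
# sanity-checked with a two-proportion z-test.
from math import sqrt, erf


def relative_lift(rate_variant: float, rate_control: float) -> float:
    """Relative change of the variant's conversion rate versus the control's."""
    return (rate_variant - rate_control) / rate_control


def two_proportion_p_value(conv_a: int, imp_a: int, conv_b: int, imp_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / imp_a, conv_b / imp_b
    pooled = (conv_a + conv_b) / (imp_a + imp_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imp_a + 1 / imp_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


# Hypothetical counts: (installs, impressions) for the control icon and the
# inverted-colour variant.
control = (920, 20_000)
variant = (792, 20_000)

rate_control = control[0] / control[1]
rate_variant = variant[0] / variant[1]
print(f"Relative lift: {relative_lift(rate_variant, rate_control):+.1%}")  # about -13.9%
print(f"p-value: {two_proportion_p_value(*control, *variant):.3f}")        # about 0.002
```

A low p-value here suggests the observed drop is unlikely to be random noise, which is why even a visually small icon change is worth confirming in a test before it goes live.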


In your acquisition strategy, you use app advertising alongside organic search. Did you analyze how each channel reacted to the changes?

We were keen to implement new ASO options, and so we also tested title ideas.

However, the two title options that fitted our ASO plans proved 25% and 17.5% less effective than the existing text. This was an important lesson in balancing the value of ASO rankings against on-page conversion optimization. As on-page conversion also impacts our paid acquisition channels, it’s usually more important to prioritize it over ASO elements.

Whilst some of our tests had no significant winner, we now test all App Store changes as standard, with the winners introduced and working together to improve our overall App Store conversion.

What are your key takeaways from doing months of A/B tests?

Your assumptions will usually be wrong! It’s important to test and not assume that your opinion will extend across your entire potential user base.

Small changes can have a big impact. Even if you think a change isn’t significant enough to test, it could have a substantial effect on conversion. You don’t want to make that mistake live on the App Store when thousands of impressions are affected every day (there’s a quick arithmetic sketch after these takeaways).

Don’t make your App Store decisions based entirely on branding rules. You, of course, want to keep your branding consistent, but when small design changes can help convert users, it’s very important to be flexible. It can even inform wider design decisions away from the App Store.

Test individual changes independently and thoroughly so you know exactly which change has had an impact.
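To put the “small changes can have a big impact” takeaway in numbers, here is a quick back-of-envelope sketch. The figures in it are assumed and purely illustrative; they simply show how a modest relative drop in conversion adds up across thousands of daily impressions.

```python
# Back-of-envelope sketch with assumed, illustrative numbers: how an untested
# change that converts 14% worse adds up at App Store scale.
daily_impressions = 10_000      # hypothetical product page impressions per day
baseline_conversion = 0.045     # hypothetical baseline conversion rate
relative_drop = 0.14            # e.g. a variant converting 14% worse

lost_per_day = daily_impressions * baseline_conversion * relative_drop
print(f"Installs lost per day:   {lost_per_day:.0f}")       # about 63
print(f"Installs lost per month: {lost_per_day * 30:.0f}")  # about 1890
```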

What are your results from app A/B testing and ASO? Share them in the comments!