Best Practices for Mobile A/B Split Testing - Are You Using Them?

While A/B split testing was once little more than an industry buzzword, it is now a critical component of any mobile app development project. A/B split testing is the simultaneous use of two variations (variation “A” and variation “B”) of a change to the user's mobile app experience in order to methodically determine which variation best achieves the developer’s goals. These goals can include increasing app downloads, improving user retention, or increasing user purchases. In this article, we will discuss several best practices that must be implemented for an A/B split test to be successful.

A Well-Formed Hypothesis

For an A/B split test to be successful, it must rest on a well-formed, precise hypothesis. A well-formed hypothesis forces the developer to pin down exactly what they are testing and what impact they expect to see. A developer should be able to make the following statement before implementing an A/B test:

“When I make change X, I believe I will see impact Y.”

A clearly stated hypothesis gives the A/B test purpose and allows its results to be analyzed against the impact that was predicted up front.
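As a minimal sketch of what this looks like in practice (the names and fields here are hypothetical, not any specific testing framework's API), the hypothesis can be recorded alongside the experiment configuration before the test ships, so results are judged against the impact that was predicted:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    """A pre-registered statement: 'when I make change X, I believe I will see impact Y'."""
    change: str           # the single change being tested (X)
    expected_impact: str  # the impact we believe we will see (Y)
    metric: str           # the metric used to judge that impact
    minimum_lift: float   # the smallest relative improvement we would act on

# Pinned down before the test launches, not after the data arrives.
checkout_hypothesis = Hypothesis(
    change="Replace the checkout text link with a full-width purchase button",
    expected_impact="More users complete checkout",
    metric="checkout_completion_rate",
    minimum_lift=0.05,  # a 5% relative lift or better
)
```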

Singular Changes

An A/B split test needs to test one change and one change only. For example, if you are trying to optimize conversions on the check-out screen of an ecommerce app, do not change the call-to-action, the purchase button UI, and the credit card form structure all within one A/B test. If you make all of these changes simultaneously, any observed impact (more completed checkouts or fewer completed checkouts) will provide little actionable data. You will not be able to tell which of the changes contributed to the observed impact, and you certainly will not be able to tell how much each change contributed. Instead, test one change per A/B split test, such as a change in the purchase button UI alone. Changes in observed user behavior can then be tied much more closely to variations in that single change.
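To make the idea concrete, here is a toy sketch (the user IDs, experiment names, and button styles are made up) of variant assignment in which the two variations differ in exactly one property. Hashing the user ID keeps the assignment stable across sessions:

```python
import hashlib

# Each variation differs in exactly one property (the purchase button UI);
# everything else on the checkout screen is identical across variants.
VARIANTS = {
    "A": {"purchase_button_style": "text_link"},       # control
    "B": {"purchase_button_style": "full_width_cta"},  # the single change under test
}

def assign_variant(user_id: str, experiment_id: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 100 < 50 else "B"  # 50/50 split

print(assign_variant("user-42", "checkout-button-test"))  # stable across sessions
```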

Statistically Significant Data

Be careful not to let an A/B split test become an A/B/C/D/E/F/G split test if this comes at the price of greatly limiting the number of users that experience each variation. While it is fine to test multiple variations of the same changed component in a mobile app, each additional variation further divides the pool of users presented with any particular variation. If only 10 users see a given variation, the results of your A/B split test will hardly be significant or actionable. Results from such a small sample size are likely to reflect chance rather than a variation's real impact on user behavior.
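One common way to sanity-check whether a difference in conversion rates could plausibly be chance is a two-proportion z-test. The sketch below uses only the Python standard library, and the counts are invented purely for illustration:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# With 10 users per variant, even a 30% vs. 50% conversion rate is not significant.
print(round(two_proportion_z_test(3, 10, 5, 10), 3))  # ~0.361
# The same rates with 1,000 users per variant are overwhelmingly significant.
print(round(two_proportion_z_test(300, 1000, 500, 1000), 6))  # ~0.0
```

With only 10 users per variant, the p-value sits far above any conventional significance threshold, which is exactly why thinly sliced variations produce results you cannot act on.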

Differences in Conversions Across Traffic Sources

It is important to remember that not all traffic is the same. In A/B split tests, one must drill down on split test data to examine conversion differences across traffic sources. This matters because variations in traffic sources - rather than the implemented A/B variation - may be responsible for the observed impact in an A/B split test. For example, imagine that in an A/B split test the checkout form structure is changed and variation B is disproportionately shown to users acquired via paid advertisements. In this case, it will be hard to determine whether any observed impact of variation B is due to the change in the user experience or due to the difference in the types of users that were shown variation B.
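As a toy illustration (the event log below is entirely made up), tallying conversions per variant and per traffic source makes an uneven traffic mix visible before any conclusions are drawn:

```python
from collections import defaultdict

# Hypothetical event log: (variant, traffic_source, converted)
events = [
    ("A", "organic", True), ("A", "organic", False), ("A", "paid_ads", False),
    ("B", "paid_ads", True), ("B", "paid_ads", True), ("B", "organic", False),
]

# Tally conversions per (variant, source) pair.
totals = defaultdict(lambda: [0, 0])  # (variant, source) -> [conversions, users]
for variant, source, converted in events:
    totals[(variant, source)][0] += int(converted)
    totals[(variant, source)][1] += 1

for (variant, source), (conversions, users) in sorted(totals.items()):
    print(f"{variant} / {source}: {conversions}/{users} converted")
```

If one variant's users skew heavily toward a single source, the comparison between variants is confounded before any statistics are run.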

Conclusion

A/B split testing allows mobile app developers to move away from relying on hunches and, instead, to rely on data to make their decisions. However, while A/B testing is a very useful tool in mobile app development, it can lose its effectiveness and even become dangerous if not used properly. If data is misinterpreted and impacts are wrongly attributed to variations in the user experience (rather than being written off as statistically insignificant or as the product of differences in the user types shown each variation), A/B split testing can lead developers in the wrong direction.

To read more from us on mobile app development, particularly on the value of A/B split testing, read our article on how we achieved 1000% growth using mobile app optimization methods here.