Incrementality testing is a strategic approach to app store marketing that allows ASO professionals to accurately assess their ad spend and minimize cannibalization.
It can be extremely difficult to distinguish paid installs from organic ones. Because of this, app store marketers often miscalculate their marketing spend and pay for downloads that would have happened organically anyway. It’s like accidentally flushing a pile of cash down the toilet.
Fortunately, this problem can be solved via incrementality testing, which will show you the true impact of your marketing efforts in contrast to your app’s organic reach. This will allow you to accurately gauge the success of your marketing campaigns and adjust them accordingly.
Still with us? Great, let’s take a deep dive into why incrementality testing is so important…
The Importance of Incrementality Testing in Mobile App Marketing
Customer acquisition in 2021 is a lot more complicated than it was in, say, 2006. These days, marketers often engage users 10+ times via a variety of channels before conversion. This level of complexity is why “first touch” and “last touch” attribution models fall short.
Fractional attribution models, such as giving 70% of the credit for a conversion to the first touch and 30% to the last touch, are much more effective. But they still don’t help us understand the value of each individual touchpoint in the buyer’s journey.
Enter incrementality testing, which will allow you to:
- Separate installs driven by paid marketing campaigns from those generated by organic reach
- Accurately compare the effectiveness of one marketing campaign to another
- Pinpoint the lift your marketing campaigns produce against specific audiences
Stop wasting your money and find out, once and for all, if your app marketing efforts are actually working or not. Incrementality testing has the answers.
How to Run an Incrementality Test For Your App
Incrementality testing is the best way to measure the effectiveness of your app marketing efforts—as long as you do it correctly. Follow this approach to find the answers you seek:
The basic idea is to separate users into two equal groups—a test group and a control group.
One of these groups will be exposed to an ad for your app; the other won’t be. Then the conversion rates for each group are analyzed, revealing the actual cause and effect of your marketing efforts and allowing you to make better marketing decisions.
Here’s how you should set up your test so that the above scenario produces accurate results:
- Randomization: The individuals in your test and control groups should be chosen at random. That way you can properly evaluate the results achieved from each.
- An Initial Hypothesis: Study the data you’ve collected so far and form a prediction about what the test will show. If you don’t have an initial hypothesis, your results will be pointless.
- A Primary Outcome: Understand what your ad is trying to accomplish. If your goal is to generate app downloads, then your primary outcome is a conversion event.
- A Reporting Cycle: Choose a start and end date to test your hypothesis. This will help to minimize potential distortions due to time.
- Expected Use of Outcome: Make sure your efforts advance your knowledge in some way. You should know more about your strategy, budget, etc. by the end of the test.
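To make the randomization step concrete, here’s a minimal sketch of how you might split an audience into equal test and control groups. The user IDs and fixed seed are hypothetical; real platforms handle this assignment for you, but the underlying idea is the same:

```python
import random

def split_audience(user_ids, seed=42):
    """Randomly split a list of user IDs into equal test and control groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)      # random assignment avoids selection bias
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (test, control)

# Hypothetical audience of 10 users
test_group, control_group = split_audience([f"user_{i}" for i in range(10)])
print(len(test_group), len(control_group))  # 5 5
```

The key design point is that assignment is random, not based on any user attribute, so the two groups stay statistically equivalent.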
To understand your test results and benefit from incrementality testing, you need to understand a couple of key terms. We’ll cover them now:
- Lift: This represents the likelihood of conversion. It’s found by subtracting the results of the control group from the results of the test group and dividing this figure by the results of the control group. A lift of 20% means those who see your ad are 20% more likely to download your app than those who don’t see it.
- Incrementality: This represents the percentage of conversions generated by your app’s ad. It’s found by subtracting the results of the control group from the results of the test group and dividing this figure by the results of the test group. 30% incrementality means you would lose 30% of installs if you don’t show your ad.
Let’s illustrate the importance of these terms with a quick example:
Company XYZ gathers together a portion of its audience that shows similar behavior in the app stores. It then randomly separates them into two groups: Group A and Group B. Group A is shown an ad for Company XYZ’s app, Group B is not. This means Group A is the test group and Group B is the control group for the purposes of this test.
Company XYZ analyzes the results achieved over a specific period of time and finds that Group A (the test) had 120 installs, while Group B (the control) had 100.
This means that the lift for this test is 20%, AKA Company XYZ’s audience is 20% more likely to download its app after seeing one of its advertisements. It also means that the incrementality is 16.7% (20 installs is 16.7% of Group A’s total) AKA Company XYZ would have lost 16.7% of its downloads if it had not shown the ad to its target audience.
Now that Company XYZ has accurate data, they can determine if the money they spend on ads is worth a 20% lift in installs. If not, they can reasonably expect to retain 83.3% of the installs generated—even if they don’t run any more of the same ads.
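The two formulas behind this example are simple enough to sketch in code. This is just a restatement of the definitions above, using Company XYZ’s install counts from the example (120 for the test group, 100 for the control group):

```python
def lift(test_installs, control_installs):
    """Lift = (test - control) / control."""
    return (test_installs - control_installs) / control_installs

def incrementality(test_installs, control_installs):
    """Incrementality = (test - control) / test."""
    return (test_installs - control_installs) / test_installs

# Company XYZ's example: Group A (test) = 120 installs, Group B (control) = 100
print(f"Lift: {lift(120, 100):.1%}")                      # Lift: 20.0%
print(f"Incrementality: {incrementality(120, 100):.1%}")  # Incrementality: 16.7%
```

Note that the two metrics share the same numerator (the extra installs the ad produced) and differ only in the denominator, which is why incrementality is always the smaller of the two when the ad has a positive effect.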
Things to Keep in Mind
Before you get too far ahead of yourself and start thinking about how to move forward with your incrementality testing results, make sure you keep these things in mind:
- Control Your Variables: Your two testing groups need to be statistically equivalent. Don’t try to test groups over different time periods, for example.
- Know Your Primary Outcome: Outline your primary outcome before you start incrementality testing. Are you looking to generate installs, in-app purchases, or something else? These tests won’t do you any good without a specific goal.
- Choose Your Approach: Incrementality testing only works if you have a baseline figure to compare results against. You can get this figure by completely pausing your marketing spend or by “blasting” your marketing spend. The first approach will give you a baseline for organic installs, the second will help you determine your potential for success. Choose the approach that works best for you and your company.
What to Do After Your First Test
You’ve just run your first incrementality test—congratulations! But what do you do now?
First, take a long, hard look at your marginal costs. Just because a specific ad produces a lift in installs doesn’t mean you should scale it. You need to assess whether the lift is worth the price you pay to achieve it. If not, kill the ad and try something new.
Next, do whatever you can to combat cannibalization, which you can do in two steps:
- First: Assign someone to track and analyze organic results. That way you have someone to advocate against paid UA when the data calls for it.
- Second: Give this person the authority to stop ad spend when they see signs of cannibalization. This will help to ensure your paid UA strategy is in line with your overall growth objectives. Don’t blindly rely on paid traffic!
From there, continue your testing efforts to find a reliable path forward for your company.
Incrementality testing is the best way to test your ads—but you have to do it right in order to get accurate results and avoid cannibalization. Fortunately, the process isn’t terribly difficult:
- Find your baseline and measure incremental lift.
- Decide what your primary outcome should be.
- Take a long, hard look at your marginal costs.
- Use data to scale up while avoiding cannibalization.
- Adjust your hypothesis as new results come in.
Incrementality testing will give you valuable insight into your marketing efforts, help you determine if they’re worth the costs, and ensure you don’t cannibalize your traffic.
It gets complicated—as you can clearly see if you’ve read the above. But this form of testing is more than worth it because it’s so much more accurate than other methods. Give it a try.