Two popular buzzwords in the digital marketing world these days are “lift” and “incrementality.” And while they may seem novel to performance digital marketers, the underlying concepts have been around for years. After all, brand marketers have always evaluated their performance based on brand growth among target segments, not direct-click conversions. Today we’ll explore why these concepts are suddenly all the rage among digital marketers, and how you can incorporate them into your marketing program.

How Did We Get Here?


For the last decade or so, “performance” digital marketing typically meant any type of marketing where you could evaluate your impact based on direct-click ROAS or CPL. This type of evaluation was never perfect – plenty of conversions slipped through the cracks as users were influenced by ads without clicking, cleared their ad tracking cookies, switched devices, and so on. Despite often being used to evaluate much more, direct-click measurement has really only been effective for low-funnel digital channels like SEM, where users are likely to click an ad immediately before a purchase. For any other type of advertising, from paid social banners to TV and print ads or billboards along the highway, the primary method of user interaction is an impression, not a click, so click-to-conversion measurement is either misleading or impossible.

Additionally, evaluating demand capture channels via direct response posed another serious question: How much is a given tactic driving net new customers versus simply bringing in customers who were already likely to buy? Most SEM programs, for example, run ads for brand searches and image or search remarketing, which in most cases drive much better click-to-conversion performance than other channels or tactics. It’s reasonable to assume, however, that many of those conversions are from people who were planning to purchase anyway and who happened to interact with an ad as they navigated back to your website.

Why Measure for Incrementality?


The purpose of lift or incrementality testing is to understand the relative increase in performance driven by a given marketing campaign or initiative. Whereas standard performance marketing analysis asks how many leads, sales, or users resulted from each ad interaction, incrementality measurement asks how many more of them you got by running each ad. Which is more valuable: spending a dollar on a remarketing click from a visitor who was 75% likely to buy anyway, or spending a dollar on a converted visitor who otherwise never would have known about your brand (0% likely)? Almost certainly the latter. That is what incrementality measurement tries to shed light on. In your reporting, the ROI of those two dollars might appear identical, but your business will grow more by investing in the second opportunity.
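To make that concrete, here is a minimal sketch in Python with hypothetical numbers: two $1 ad interactions that look identical in last-click reporting but add very different amounts of real value.

```python
# Hypothetical illustration: two $1 ad interactions that each "drive" a
# $10 purchase in last-click reporting, but with very different baselines.
AD_COST = 1.00
ORDER_VALUE = 10.00

scenarios = {
    "remarketing click": 0.75,       # user was 75% likely to buy anyway
    "prospecting impression": 0.00,  # user had never heard of the brand
}

for name, baseline in scenarios.items():
    reported_roas = ORDER_VALUE / AD_COST             # what the platform shows
    incremental_value = ORDER_VALUE * (1 - baseline)  # value the ad actually added
    incremental_roas = incremental_value / AD_COST
    print(f"{name}: reported ROAS {reported_roas:.1f}x, "
          f"incremental ROAS {incremental_roas:.1f}x")
```

Both dollars report a 10x ROAS, but the remarketing dollar only added 2.5x of truly incremental value, while the prospecting dollar added the full 10x.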

Incrementality Example: Nurture Remarketing


Let’s say you have a product that requires frequent purchases for upgrades, like an in-app game with upgraded characters or capabilities. It stands to reason that one of your top-performing marketing campaigns is remarketing to current users to encourage them to upgrade. According to your in-platform stats, this remarketing campaign drives lots of revenue each month. However, your current user base is also seeing ads for upgrades in the app and in email alerts. Plus, many of them really like the game and are excited to buy upgrades upon each new release. In such a saturated environment, how can you determine which of these tactics, if any, is actually driving new revenue?

This is a textbook case for an incrementality test. You could split your current user base in half, then keep running remarketing as usual to one half (the treatment group) while turning it off for the other half (the holdout, or control, group). After a set time period, compare the revenue from the two groups. If revenue is about the same, you can conclude that your remarketing ads are not doing much to increase sales, and that your marketing dollars are better spent elsewhere. If the treatment group generates meaningfully more revenue, you can compare that incremental revenue to the remarketing spend to get a better read on the real ROI.
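As a rough sketch of the mechanics (assuming you have stable user IDs; the revenue records here are hypothetical), you could hash each user ID to assign groups deterministically, then compare revenue per user:

```python
import hashlib

def assign_group(user_id: str) -> str:
    """Deterministically split users ~50/50 by hashing their ID,
    so each user always lands in the same group."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < 50 else "holdout"

# Hypothetical revenue records over the test period: (user_id, revenue).
records = [("u1001", 25.0), ("u1002", 0.0), ("u1003", 12.5), ("u1004", 40.0)]

totals = {"treatment": [0.0, 0], "holdout": [0.0, 0]}
for user_id, revenue in records:
    group = assign_group(user_id)
    totals[group][0] += revenue
    totals[group][1] += 1

for group, (revenue, users) in totals.items():
    avg = revenue / users if users else 0.0
    print(f"{group}: {users} users, ${revenue:.2f} total, ${avg:.2f}/user")
```

In practice you would also want to confirm the two groups looked similar on pre-test revenue before trusting the comparison, since an unlucky split can masquerade as lift.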

Suppose, for example, that the treatment group (which received remarketing ads) drove $200,000 in additional revenue over the holdout group (no remarketing). This advertiser could compare the spend on remarketing to that incremental revenue as another method for determining where to spend available budget.
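Pairing that $200,000 with a hypothetical $50,000 of remarketing spend over the same period, the read-out is simple arithmetic:

```python
incremental_revenue = 200_000  # treatment minus holdout, from the example
remarketing_spend = 50_000     # hypothetical spend over the same period

incremental_roas = incremental_revenue / remarketing_spend
print(f"Incremental ROAS: {incremental_roas:.1f}x")  # 4.0x
# If 4.0x beats the incremental return available elsewhere, keep the budget
# in remarketing; if not, reallocate it.
```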


Incrementality Example: Prospecting


Evaluating remarketing for incrementality is relatively easy, since you already know something about the customers and can use your first-party data to create and test against different audience segments. Evaluating prospecting, where you are trying to gain entirely new customers, is much more challenging. There is no perfect way to conduct such an evaluation; the right design will depend on your goals.

Let’s say, for example, that you’re trying to understand the impact of branding spend on a target market. There are nearly limitless ways you could structure this test, but all of them involve comparing a treated audience (which sees the ads) to a control audience (which does not see the ads) and then measuring impact based on lift in leads or revenue.

Perhaps the most straightforward structure is to select a target geography or geo set to receive the treatment, run your campaign only there for a set period, then measure revenue or lead performance in the test region against the overall trend or a comparable set of geos. The benefit of this approach is that it is easy to set up and the data is relatively easy to interpret.

The downside is that it can be tricky to control for confounding variables. For one, you need enough budget and ad volume to produce a detectable impact; a $3,000 YouTube campaign in San Francisco won’t give you confidence that any change in performance in that market was related to your ads, even if there was one. Similarly, if you choose a test area where you already have lots of customers, it will likely outperform the average regardless. You can mitigate this by comparing the test geo to other geos that have shown similar buyer behavior in the past, but there is no way to fully control for fluctuations in customer behavior.
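One simple way to read the results, sketched below with hypothetical figures, is to compare revenue growth in the test geo against the average growth of the comparable control geos (a basic difference-in-differences read):

```python
# Revenue before and during the campaign, by geo (hypothetical figures).
geos = {
    "test_geo":  (500_000, 600_000),  # received the branding campaign
    "control_1": (480_000, 500_000),  # comparable markets, no campaign
    "control_2": (510_000, 530_000),
}

def growth(pre: float, post: float) -> float:
    """Fractional change from the pre-period to the campaign period."""
    return (post - pre) / pre

test_growth = growth(*geos["test_geo"])
control_growths = [growth(*v) for k, v in geos.items() if k != "test_geo"]
baseline_growth = sum(control_growths) / len(control_growths)

# Lift attributable to the campaign: test growth beyond the control baseline.
lift = test_growth - baseline_growth
est_incremental_revenue = lift * geos["test_geo"][0]
print(f"Test geo growth: {test_growth:.1%}, control baseline: {baseline_growth:.1%}")
print(f"Estimated lift: {lift:.1%} (about ${est_incremental_revenue:,.0f} incremental revenue)")
```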

Another method is interval testing: toggling spend on and off over a set period to evaluate impact. You might choose a geographic set to test, then turn ads on and off at specific intervals (a week, two weeks, even monthly). Each interval needs to be long enough for the target KPI to respond, carry enough KPI volume to support a confident read, and fall in a period where seasonality or other external factors are unlikely to distort the data. If performance rises and falls in line with your marketing spend, you can use those swings to estimate the incremental contribution from the ads.
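A minimal sketch of the read-out, again with hypothetical weekly data: average the KPI across the “on” intervals and the “off” intervals, and treat the gap as a rough estimate of the ads’ weekly contribution.

```python
# Weekly leads with ads toggled on/off in alternating two-week intervals
# (hypothetical figures).
weeks = [
    # (ads_on, leads)
    (True, 120), (True, 130),
    (False, 95), (False, 100),
    (True, 125), (True, 135),
    (False, 90), (False, 105),
]

on_leads = [leads for ads_on, leads in weeks if ads_on]
off_leads = [leads for ads_on, leads in weeks if not ads_on]

avg_on = sum(on_leads) / len(on_leads)
avg_off = sum(off_leads) / len(off_leads)

# If leads consistently rise when ads are on and fall when they are off,
# the gap is a rough read on the ads' weekly incremental contribution.
print(f"Avg weekly leads, ads on: {avg_on:.1f}; ads off: {avg_off:.1f}")
print(f"Estimated incremental leads/week: {avg_on - avg_off:.1f}")
```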

Caution for the Budding Incrementality Tester


There are a few things to keep in mind if you’re planning an incrementality test. As noted above, you’ll need to budget thoughtfully and be prepared to commit a meaningful amount to run an effective test. To have an impact, you’ll need to show multiple ads to your target audience over an extended period of time. If you attempt to “invest a little bit to see if it works,” you’ll probably end up with inconclusive data, meaning that your investment was wasted anyway.

Second, you need to give the test time to show results. Prospecting ads expose potential customers to your brand, so that when a need does arise, yours will be the first brand they think of. How long it takes for that need to arise will depend on your product, market, and several other factors, but you’ll typically need a longer window to run the test and analyze the results before you can draw a firm conclusion.

Conclusion


Lift and incrementality testing is a great way to better understand your marketing performance and draw stronger conclusions about the value of your marketing investments. However, it does require a fair amount of upfront effort and investment to establish the tests, collect the data, and interpret the results. Put the time in to set up a strong test, and iterate each round to see if you can improve with experience. No single test or result will tell you everything you need to know, but each test should drive learnings that help you make impactful changes to your marketing program.
