March 11, 2020
Five Ways to Measure Ad Incrementality
All marketers face challenges in accurately measuring the impact each channel has on overall business revenue, especially as purchase paths become increasingly complex. Measuring ad performance by incrementality is one way to get closer to understanding the true lift a given tactic contributes to your business. Our CMO even called it one of the most important tools in a digital marketer’s toolbox in 2020 as cookie-based tracking becomes harder to rely on.
There are many ways you could measure incrementality, but we’ve rounded up five that we’ve used, summarizing how to set up each test, how measurement works, and the pros and cons of each approach.
Important notes: First, with all incrementality testing, you must put enough spend behind the test to get meaningful insights. A small-scale test may seem feasible, but you likely won’t get enough data to develop actionable learnings. Some tests require more spend than others to reach the level of result you need, so be prepared to invest. Second, it’s nearly impossible to design tests that don’t involve some level of potential bias or externality. Don’t let that deter you! There is still great insight to be gained; just treat this data as directional rather than a statement of exact return.
1. Geographical Split Tests

How it’s set up: Geographical split tests are set up by choosing a subset of geographies (states, cities, etc.) to show ads to, and a set of geographies with similar characteristics to the test group (size, region, demographics, behavior, market density, etc.) where you won’t show ads. When selecting what geographies to use, it’s important to make sure you are selecting places where no other tests will be running and where you will be able to hold other marketing efforts relatively steady to get the cleanest possible results.
Measurement: After running your test, you will be able to measure changes in metrics like new site visitors from Direct and Organic channels, as well as brand searches and revenue in the exposed areas compared to the control areas. From there, you can calculate the incremental lift that your ads drove.
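The lift calculation itself is simple arithmetic. As an illustrative sketch (all figures below are hypothetical, not from a real test):

```python
# Illustrative geo-split lift calculation; all figures are hypothetical.

def incremental_lift(exposed_value, control_value):
    """Percentage lift of the exposed (ads-on) geos over the control geos."""
    return (exposed_value - control_value) / control_value * 100

# Hypothetical revenue per matched geo group over the test window
exposed_revenue = 120_000  # geos that saw the ads
control_revenue = 100_000  # comparable geos held out from ads

lift = incremental_lift(exposed_revenue, control_revenue)
print(f"Incremental revenue lift: {lift:.1f}%")  # -> 20.0%
```

The same formula applies to any metric you track in both geo groups, such as brand searches or new Direct/Organic visitors.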
Pros:
· Easy to set up for almost every channel
· Can concentrate spend in a few areas to stretch a relatively small incrementality testing budget

Cons:
· Data sets can easily get muddy if other marketing efforts change in test markets mid-test
· Geographical locations don’t create truly randomized groups – there will always be natural variances in behavior between states
2. Customer List Segmentation

How it’s set up: Customer list segmentation splits your customer list into segments, each served a different combination of channels, to measure the incremental impact each additional channel has on performance.
For example, if you split your list into four segments, the setup might look like this:
· Segment 1: No marketing
· Segment 2: Only email
· Segment 3: Email + Facebook/GDN retargeting
· Segment 4: Email + Facebook/GDN retargeting + YouTube retargeting
Measurement: After running your test, you will be able to compare overall revenue for each segment to determine the incremental revenue driven by each additional channel.
Pros:
· Reliable and clean measurement
· Easy to set up once audience segments are created
· Can help to measure incrementality across various channels with one test

Cons:
· Requires upfront time investment to create audience segments
· Can be difficult to control for unintended bias in audience sets
· Audience is limited to current customers or email lists – in most cases not possible to use this method with prospecting efforts
3. “Dummy Ad” Holdout Tests

How it’s set up: “Dummy Ad” holdout tests are set up by choosing a percentage of your traffic to receive an ad for your company and serving the rest of the traffic an ad for something completely unrelated to your business, typically a public service ad or nonprofit cause. This will allow you to measure conversions for both sets of ads and compare conversion rates for each just like you would do between two standard campaigns.
Measurement: After running your test, you will be able to compare the performance between the two groups and determine the incremental lift in conversion rate, conversions and revenue that seeing your ad drove.
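Because conversion rate normalizes for audience size, the comparison stays valid even if the dummy group is smaller. A sketch of the math, with hypothetical campaign numbers:

```python
# Illustrative "dummy ad" holdout comparison; all figures are hypothetical.

def conversion_rate(conversions, impressions):
    """Conversions as a fraction of impressions served."""
    return conversions / impressions

# Hypothetical results from the two campaigns
brand_cvr = conversion_rate(450, 100_000)  # audience that saw your ad
dummy_cvr = conversion_rate(300, 100_000)  # audience that saw the unrelated "dummy" ad

relative_lift = (brand_cvr - dummy_cvr) / dummy_cvr * 100
print(f"Brand ad CVR: {brand_cvr:.2%}")
print(f"Dummy ad CVR: {dummy_cvr:.2%}")
print(f"Incremental conversion-rate lift: {relative_lift:.0f}%")
```

Multiplying that lift in conversion rate by average order value gives a rough read on incremental revenue per impression.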
Pros:
· Allows you to measure lift in conversions and/or revenue in-channel, which can be preferable when you don’t see a lot of direct conversions in your web analytics software for the channel you want to measure (YouTube is a good example)
· You can serve dummy ads to a small subset of the audience to mitigate wasted spend, then scale the results up to match the volume of the control group

Cons:
· Spend goes toward serving the “dummy ads” rather than promoting your business
4. YouTube Brand Lift Studies

How it’s set up: YouTube Brand Lift Studies are set up right in the Google Ads platform. They are survey-based studies, and the metrics that the advertiser chooses to measure determine what survey questions are asked. For example, if an advertiser chooses to measure brand awareness, viewers would receive a survey question asking something like “which of the following brands have you heard of?” with options for that advertiser and their competitors.
After selecting which metrics you’d like to measure (Brand Awareness, Brand Recall, Brand Consideration, Brand Favorability, and Purchase Intent) and setting up the study, Google will automatically hold out half of your chosen audience from seeing your ads in order to measure the difference in responses between the exposed and holdout groups.
There is also an option to measure Brand Interest, which looks at the increase in brand searches across Google and YouTube from people that see the ads vs. people that don’t. However, this metric requires a higher level of spend to get statistically significant data.
This is different from the other tests noted above. It’s much more useful for understanding the power of your creative in capturing your audience’s attention or moving them to act than for gaining insight into incremental conversions or purchases.
Measurement: Google will calculate the difference in “positive” responses (meaning that people selected your brand from the list of options) between the exposed and holdout groups to calculate lift. You can see metrics directly in the Google Ads interface for relative lift, absolute lift, lifted users and cost per lifted user.
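Google reports these metrics for you, but the underlying arithmetic is roughly as follows. This is a simplified sketch with hypothetical survey numbers; Google’s actual methodology includes modeled adjustments this ignores:

```python
# Simplified brand-lift arithmetic with hypothetical numbers.
# Google's real calculations involve modeled adjustments not shown here.

exposed_users = 200_000        # users reached by the ads
exposed_positive_rate = 0.30   # surveyed exposed users who picked your brand
control_positive_rate = 0.24   # surveyed holdout users who picked your brand
spend = 10_000                 # campaign spend in dollars

absolute_lift = exposed_positive_rate - control_positive_rate  # percentage points
relative_lift = absolute_lift / control_positive_rate * 100    # % over baseline
lifted_users = absolute_lift * exposed_users                   # extra "positives"
cost_per_lifted_user = spend / lifted_users

print(f"Absolute lift: {absolute_lift:.1%}")
print(f"Relative lift: {relative_lift:.0f}%")
print(f"Lifted users: {lifted_users:,.0f}")
print(f"Cost per lifted user: ${cost_per_lifted_user:.2f}")
```

The key distinction: absolute lift is the raw gap in positive-response rates, while relative lift expresses that gap as a share of the holdout group’s baseline.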
Pros:
· Relatively easy to set up in the Google Ads interface
· Multiple lift metrics to choose from
· Can customize which competitors you want to show alongside your brand
· The test setup is one of the most controllable available

Cons:
· Can only be used with YouTube campaigns
· No metric that measures conversion lift
· Minimum spend requirements must be met to use the tool (currently a minimum of $5,000 in the first seven days for one question)
5. Facebook Lift Studies

How it’s set up: Similar to Google’s Brand Lift Study, Facebook Lift Studies are set up in the Facebook platform and are specific to Facebook and Instagram campaigns. However, instead of being survey-based, Facebook Lift Studies automatically split your audience into two groups (one that receives ads and one that doesn’t). Also, unlike YouTube Brand Lift, Facebook will measure incremental conversions and revenue driven by the exposed group compared to the control group.
Measurement: Facebook automatically calculates the lift driven by users exposed to the ad compared to users that didn’t see an ad.
Pros:
· Easy to set up in the Facebook platform
· Allows you to measure conversion lift and determine true incrementality

Cons:
· Only works for Facebook & Instagram
There you have it! Those are five methods now available to any digital marketer to start gaining an understanding of the incrementality of running ads on any number of popular channels. We hope it makes you feel incrementally more in control of your ad budgets! If you want some help tackling tests like these and more, you can contact our team.