June 1, 2020
Google Ads Affinity Audiences for Search
There’s a decent chance that I’ve lost your attention before I’ve even begun. That’s because PPC blog posts espousing the importance of ad copy testing are more common than candy on Halloween. If you’re still with me though, I’m not here to tell you that you should test ad copy. You already know that. The biggest issue is that most advertisers stop at testing.
In almost every new account that we bring on board, we see advertisers testing 2-3 ads in each ad group. The problem is that running a test doesn't, by itself, improve account performance. Let me repeat to be very clear: you do not improve your PPC account performance just by testing ads. You improve performance by acting on the information those tests produce.
If you go to Foot Locker to buy a new pair of running shoes, you're likely going to try on a couple of different pairs (ad copy testing your running shoes). One of those pairs is going to be more comfortable, lighter, and a better fit than the other. That matters because you want to optimize your performance: a shoe that lets you comfortably run farther at a faster pace. Once you figure out which shoe you prefer, would you buy both pairs and rotate them on runs? Would you wear one of each shoe on each foot? Of course not! So why, when advertisers know that a certain selling proposition in ad copy is working better than another, do they continue to run both ads?
Whether you’re running a multivariate test or a split test, you need to gather enough data to make an informed decision on ad copy. Once you have it, the impact of acting can be significant. Suppose ad #1 is the winner, with a 2.43% CTR, a $1.56 average CPC, and a 4.62% conversion rate, while ad #2 received 17,991 impressions and 367 clicks. If you pause ad #2 and anticipate the same performance out of ad #1, you can then apply ad #1’s rates to the 17,991 impressions given to ad #2:
17,991 impressions × 2.43% CTR ≈ 437 clicks (up from 367)
437 clicks × $1.56 CPC ≈ $682 in spend
437 clicks × 4.62% conversion rate ≈ 20 conversions
Now, instead of this ad group producing 44 conversions at $38.73 each, it will produce 52 conversions at $33.77 each. That’s an 18% increase in conversions at a lower CPA. What did you have to do to achieve it? Take one second to set an ad to pause.
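The projection above is simple enough to sanity-check yourself. Here's a quick sketch of that arithmetic in Python, using only the figures quoted in the example (the variable names are mine, for illustration):

```python
# Project what ad #2's 17,991 impressions would have produced
# at ad #1's rates, per the example above.

ad2_impressions = 17_991       # impressions served to the losing ad (#2)
ad1_ctr = 0.0243               # winning ad's click-through rate
ad1_cpc = 1.56                 # winning ad's average cost per click
ad1_conv_rate = 0.0462         # winning ad's conversion rate

projected_clicks = round(ad2_impressions * ad1_ctr)              # ≈ 437
projected_spend = projected_clicks * ad1_cpc                     # ≈ $682
projected_conversions = round(projected_clicks * ad1_conv_rate)  # ≈ 20

print(projected_clicks, round(projected_spend, 2), projected_conversions)
```

The same three lines work for any test: plug in the winner's rates and the loser's impression volume to estimate what you give up by leaving the losing ad running.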
Your work isn’t done though! Maybe you were testing a unique selling proposition (USP) in ad #1 built around price and selection vs. a USP focused on quality in ad #2.
Thus, you have data that shows the USP from ad #1 outperforming the USP from ad #2. Now you should test that winning ad copy around selection and pricing in other ad groups. This allows you to more broadly leverage the winner of your ad copy test and realize better performance account-wide rather than just telling your VP of Marketing, “We’re pretty sure pricing and selection works better in ad copy.”
In this situation, you would want to implement that same headline, "300+ Blue Widgets from $9.99," in other applicable ad groups throughout the account. You can either run it head-to-head with the winning ad copy in each ad group, or, if you have conclusive data, replace existing ads. What you are effectively doing is shifting a higher percentage of your overall account spend from lower-performing ad copy to higher-performing ad copy, which will positively influence overall account performance.
More to come in later posts on how to analyze ad copy tests to determine winners across multiple campaigns. For now, don’t head out to the track with a different shoe on each foot. Act on your ad copy tests!