July 19, 2021
iOS 14 Update: Facebook’s Data Challenges Show the Need for Alternative Measurement in the Privacy Era
With all the factors that go into setting up effective Facebook campaigns, digital marketers need to isolate variables to run successful tests, properly assess performance, and make data-driven decisions. Facebook split testing allows you to compare how different variations perform so you can optimize your strategy.
Similar to an A/B test, split testing allows you to create multiple ad sets and test them against each other to see which produces the strongest outcome according to your campaign objective. Within ad sets, you can test a single variable, such as creative, audience, or placement.
Each ad set will be identical aside from the variable you’re testing to ensure other factors won’t skew the data. If you’re not running an audience test, the split test will divide your audience into random, non-overlapping groups. When the test is completed, you’ll receive a notification including conclusive results to help drive future strategy.
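To build intuition for what "random, non-overlapping groups" means, here is a minimal sketch of hash-based assignment, the standard way experimentation systems split an audience deterministically. This is illustrative only, not Facebook's actual implementation; the function and test names are hypothetical.

```python
import hashlib

def assign_group(user_id: str, test_name: str, n_groups: int = 2) -> int:
    """Deterministically hash a user into one of n mutually exclusive test cells."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_groups

# Each user lands in exactly one cell (no overlap), and the same user
# always gets the same cell for a given test (stable assignment).
cells = [assign_group(f"user_{i}", "creative_test_q3") for i in range(10_000)]
print(cells.count(0), cells.count(1))  # roughly even split
```

Because assignment depends only on the user ID and the test name, the two cells never overlap, which is exactly what keeps one ad set's exposure from contaminating the other's results.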
You can also use split testing to test more than one variable at once, but in that case you can't home in on the exact variable that drove the winning ad set. This can be helpful if you simply want to know which of two ad sets performs best, but less so for identifying broader trends. We recommend running split tests that isolate a single variable to ensure clean data collection.
Because Facebook advertising relies so heavily on automation and machine learning, the split testing feature is the only reliable way to run what you'd think of as an A/B test. If you set up two ads to A/B test against each other outside of a split test, Facebook's ad delivery algorithm would likely influence the result.
As ad costs on Facebook continue to rise over time, regular split testing can help you find new ways to lower your CPM, improve conversion rate, or expand your reach. The results of these tests can inform ad messaging, audience choices, and more, both across Facebook's ad placements and beyond the platform.
Split testing can also help you to experiment with ad options that you or your team might be unsure of. For example, we recently ran a split test for an ecommerce client to assess the potential of automatic placements. Because of our past experience testing automatic placements, our hypothesis was that we would likely reach more users and drive a stronger return. Their team had concerns about placement control. A split test allowed us to agree on a way to give automatic placements a shot.
As it turns out, with automatic placements, we saw a 40% increase in reach and a 5% decrease in cost per order. Because we isolated a variable to test, we were able to definitively understand the impact of this optimization compared directly to the previous way of operating.
So you’re ready to run a split test? There are a few things to keep in mind. Let’s say we’re a clean beauty brand working to understand what type of messaging is most engaging to our target audience.
Note: The audience(s) you use for your split test should not be live in any other campaigns during the test period – that would likely interfere with the validity of the results.
Before you launch your test, always form a hypothesis that states an explicit point of view about your audience. In this case, that hypothesis might be, “We’ve always assumed ads discussing ingredients would work best because we’re a clean beauty brand, but now that this market is more mature and we’re already recognized this way, focusing on our value propositions would be more compelling.” When you outline a hypothesis prior to testing, you’ll have a clearer takeaway when analyzing the outcome.
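When the test wraps up, you can sanity-check whether the difference between the two ad sets is real or just noise. Below is a standard two-proportion z-test sketched in plain Python; the conversion counts are hypothetical numbers for the clean beauty example, not data from the source.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: ingredient-focused ads (A) vs. value-prop ads (B)
z = two_proportion_z(conv_a=180, n_a=10_000, conv_b=240, n_b=10_000)
print(round(z, 2))  # |z| above ~1.96 suggests significance at the 95% level
```

With a hypothesis written down in advance, a significant z-score in the expected direction gives you a clean, defensible takeaway rather than a judgment call.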
At this point, I imagine your mind is racing with possibilities of what to test first! The beauty of split testing is the ability to trust your results because of the steps you’ve taken to isolate variables and ensure clean data collection to drive strategy. Happy testing!