May 2, 2019
How to Derive Business Insights Using SQL, BigQuery, and Google Data Studio
In a previous post, I discussed why a hypothesis is the foundational element of a successful marketing test. With a disciplined system for developing hypotheses in place, you’re now prepared to run tests that deepen your understanding of your market, which will help you win greater market share.
A hypothesis alone, though, will teach you nothing about your market. On the contrary, a hypothesis without strong testing practices can actually lead you to the wrong conclusions, wasting money on a bad test and prompting subsequent decisions that erode your market share. If you want to gain insights that will help you grow your business, follow these five rules when planning each test.
There are already enough uncontrollable variables at play in PPC; you should not add more. Google is always changing which ad extensions are served, updating your quality score, changing the layout of the SERP, and so on. Adding your own variables will just increase the chances that background variables will cloud the true results of your test.
Don’t run automated optimization logic (such as automated bid policies or ad rotation settings) on the test elements. This is especially the case for ad tests: you want to prevent one ad from showing more often during certain parts of the day or to a certain audience. Isolate your tests so that you can be more confident in drawing a conclusion. Launch fresh ads when starting a new ad test, even if you are keeping the old champion ad. You’ll want to reset the cookie pools for each message if you plan to evaluate the impact on conversion rate.
Speaking of variables: broad match! Broad match introduces significant and unpredictable variables to your testing environment. You have little to no control over which search queries a broad match keyword matches, or at what ad position it serves, in an individual experiment. How are you to know whether your new ad message is driving the stronger CTR, or whether it’s just a result of the search terms that broad match keywords have matched to that ad? You can’t! Our tip: run your tests in a more controlled environment, and push the top performers to your broad match ad groups or campaigns.
Make sure your test is worth the time investment. From brainstorming new text ad copy to redesigning your website, every test takes time, money, or both to set up. And then it takes both to run the test.
Don’t waste your time setting up a bunch of tests that change a single word in an ad or a button color on a page, unless you have good reason to expect an impactful difference in the outcome. Spend your time testing where you have the best opportunities for lasting takeaways, or where the outcome has the best chance to move the needle furthest toward your goal.
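A quick back-of-the-envelope sample-size estimate can show why tiny changes are rarely worth testing: the smaller the expected lift, the more impressions the test needs before it can tell you anything. A minimal Python sketch using the standard two-proportion z-test sample-size formula; all CTRs and lifts below are hypothetical numbers, not figures from this post:

```python
from math import ceil, sqrt
from statistics import NormalDist


def required_impressions_per_variant(baseline_ctr, expected_lift,
                                     alpha=0.05, power=0.8):
    """Approximate impressions each ad variant needs so a two-proportion
    z-test can detect a relative CTR lift (hypothetical inputs)."""
    p1 = baseline_ctr
    p2 = baseline_ctr * (1 + expected_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)


# A one-word tweak expected to lift a 3% CTR by 2% relative needs vastly
# more impressions than a rewrite expected to lift it by 20% relative.
tiny_change = required_impressions_per_variant(0.03, 0.02)
big_change = required_impressions_per_variant(0.03, 0.20)
print(tiny_change, big_change)
```

If the tiny change would need more impressions than the ad group sees in a quarter, the test isn’t worth running.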
Testing for impact does not mean that you should only run tests that could hugely impact your sales or lead volume. Think big with your changes, but then find a way to limit the impact of the test on overall performance. If you’re thinking about completely restructuring your Shopping ad targets, start by testing a cross-section of the current feed that mimics the overall diversity and performance of the whole, but only spends 20% of the total. If the outcome of this initial test is positive, continue the rollout over time. If not, return to the legacy structure and re-evaluate.
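One way to carve out such a cross-section is to stratify the current product targets by a dimension like category and sample within each stratum until roughly 20% of its spend is covered, so the slice mirrors the mix of the whole. A minimal Python sketch under that assumption; the feed, field names, and spend figures are all hypothetical:

```python
import random
from collections import defaultdict


def sample_test_slice(products, spend_share=0.20, seed=42):
    """Pick a test slice that mirrors each category's mix while covering
    roughly `spend_share` of that category's spend (hypothetical data)."""
    rng = random.Random(seed)
    by_category = defaultdict(list)
    for p in products:
        by_category[p["category"]].append(p)

    test_slice = []
    for category, items in by_category.items():
        target = spend_share * sum(p["spend"] for p in items)
        rng.shuffle(items)  # random draw within the stratum
        spent = 0.0
        for p in items:
            if spent >= target:
                break
            test_slice.append(p)
            spent += p["spend"]
    return test_slice


# Hypothetical feed: each product target has a category and a daily spend.
feed = [{"category": c, "spend": s}
        for c in ("shoes", "bags") for s in (10, 20, 30, 40, 50)]
slice_ = sample_test_slice(feed)
total = sum(p["spend"] for p in feed)
picked = sum(p["spend"] for p in slice_)
print(f"test slice covers {picked / total:.0%} of total spend")
```

The slice lands near, not exactly at, the 20% target, since whole products are added until each stratum’s threshold is crossed; that’s usually close enough for a rollout decision.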
Also, don’t run disruptive tests during high season! If you’re an ecommerce retailer, the first week of December is not the best time to test a complete transition away from phrase match. Try smaller tests, like optimizing how you shift budgets, or adding a new promotion.
Testing has the power to unlock entirely new levels of market share in digital marketing, but it can just as easily waste your scarce marketing dollars if not properly managed. Build these principles into your marketing organization, and you can be sure your team is passing the test on testing!