Tony Fagan, Director of Research at Google, answers the six quant questions every CMO should be asking in order to maximise their return on search advertising.
Welcome to the age of experiments. At Google, we believe that online advertising is a more measurable medium than television, radio and print. How can we be sure? Because we look at the stats.
Business is about trial and error, but statistics gives us a method to make that process work better. With the data generated from search, click-through and conversion rates, we’re able to assess and improve ad campaigns on the fly.
The process is called ‘test-and-learn’ and it’s the gold standard for determining whether one thing caused another. In marketing, we call these experiments ‘A/B tests’. The idea is simple: test A against B to see which one works better. That gives us the ‘incremental’ effect – the difference between doing something and not doing it.
We can use these experiments to address six commonly asked questions about running search ad campaigns on Google. The answers will give you an insight into how to make online advertising work efficiently for your organisation.
#1: Should I manage my spend through bidding or a daily budget cap?
Some advertisers choose to manage their spend using the daily budget cap feature in AdWords. That’s fine unless you’re hitting your cap, because once you do, we remove you from all remaining auctions for that day – and that can be expensive.
If you were instead to lower your bids so that you just meet your budget cap at the end of each day, you could spend the same amount but get more clicks. How? Lowering your bid lowers both your ad position and your cost-per-click. That saves money throughout the day, which lets you participate in more auctions. You can test this yourself with AdWords Campaign Experiments.
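To see why, here’s a rough back-of-the-envelope sketch in Python. The budget and cost-per-click figures are made up purely for illustration; the point is simply that a fixed daily budget buys more clicks when the average cost-per-click is lower.

```python
# Hypothetical illustration: the same daily budget buys more clicks at a lower bid,
# because a lower bid usually means a lower average cost-per-click.
daily_budget = 500.0  # £ per day (made up)

scenarios = {
    "high bid, capped early": 1.00,   # average cost-per-click in £ (made up)
    "lower bid, runs all day": 0.70,
}

for name, avg_cpc in scenarios.items():
    clicks = daily_budget / avg_cpc   # clicks bought before the budget runs out
    print(f"{name}: ~{clicks:.0f} clicks for £{daily_budget:.0f}")
```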
We conducted an A/B experiment on behalf of an electronics manufacturer to analyse how budget caps affected their AdWords performance. By removing the budget cap, this advertiser was able to spend 170 per cent more with 170 per cent more clicks at the same cost-per-click. Pretty good, right?
#2: How much should I spend to maximise profit?
Auction theory tells us to increase a bid until the ‘marginal’ cost-per-click equals the value-per-click. The marginal cost-per-click is different from the average cost-per-click, and is often higher. So if you’re managing your spend to an average cost-per-click, you’re paying too much – and making less profit. We recently released a few tools to help with this, including Google Bid Simulator, which estimates the traffic you’d get for a keyword at different bids.
We ran a second experiment with the same electronics manufacturer to determine its optimal spend, testing different spend levels by changing bids and adding or removing keywords. We found that cutting bids in half resulted in 37 per cent lower spend but only 20 per cent fewer clicks. Using the results, we were able to draw the ‘marginal cost-per-click curve’, which plots the marginal cost-per-click against spend. The point on that curve where the marginal cost-per-click equals your value-per-click is your optimal spend.
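To make that concrete, here’s a minimal sketch of the calculation, assuming you already have estimated (spend, clicks) pairs at several bid levels – for example from bid experiments or Bid Simulator. All figures, including the value-per-click, are hypothetical.

```python
# A minimal sketch of the marginal cost-per-click logic, assuming we already have
# (daily spend, daily clicks) estimates at several bid levels. Figures are made up.
spend_clicks = [
    (1000.0, 800),   # lowest bids
    (1500.0, 1100),
    (2000.0, 1320),
    (2500.0, 1480),
    (3000.0, 1580),  # highest bids
]

value_per_click = 2.50  # what a click is worth to the business (assumed known)

# Assume marginal cost-per-click rises with spend; keep raising spend while the
# extra clicks still cost less than they are worth.
best_spend = spend_clicks[0][0]
for (s0, c0), (s1, c1) in zip(spend_clicks, spend_clicks[1:]):
    marginal_cpc = (s1 - s0) / (c1 - c0)  # cost of each *extra* click at this step
    avg_cpc = s1 / c1
    print(f"spend £{s1:.0f}: avg CPC £{avg_cpc:.2f}, marginal CPC £{marginal_cpc:.2f}")
    if marginal_cpc <= value_per_click:
        best_spend = s1

print(f"Optimal spend is roughly £{best_spend:.0f}")
```

Note how, in this made-up data, the marginal cost-per-click at each spend level is higher than the average cost-per-click – exactly the gap that makes managing to an average figure misleading.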
“If you’re managing your spend to an average cost-per-click, you’re paying too much – and making less profit.”
#3: Should I buy branded keywords, non-branded keywords or both?
This is a hotly debated topic among advertisers. To answer the question we conducted a geo experiment with Vineyard Vines. In the control group we purchased only generic keywords; in the test group we purchased both generic and branded keywords. The test group generated 14 per cent more total clicks across organic and paid search combined. Conclusion: they should buy branded keywords.
But how much should they pay? Consider a search results page that contains both an organic search link and a paid search ad, where the user clicks on the ad. What would have happened if the paid ad wasn’t there?
Either the user wouldn’t have clicked through, or they would have clicked on the organic link and found the website anyway. The first case generates an incremental click; the second case is called ‘cannibalisation’. If we know how many of the paid clicks are incremental then we can calculate how much to pay for them.
Suppose the test group shows that 63 of the 100 clicks AdWords reports from branded keywords are incremental. Because the full cost of those 100 reported clicks buys only 63 incremental ones, the average cost-per-click of the incremental clicks equals the average cost-per-click reported in AdWords divided by 63 per cent. If the reported average cost-per-click is £1, the average cost-per-click of the incremental clicks is £1 ÷ 0.63 = £1.59. That’s how much to pay for branded keywords.
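The arithmetic is simple enough to check in a couple of lines, using the same illustrative numbers as above:

```python
# The incremental-click arithmetic from the example above: all of the spend buys
# only the incremental clicks, so their effective CPC is higher than the reported one.
reported_clicks = 100        # branded-keyword clicks reported in AdWords
incremental_clicks = 63      # clicks the A/B test says would not have happened anyway
reported_avg_cpc = 1.00      # £, as reported in AdWords

total_cost = reported_clicks * reported_avg_cpc
incremental_cpc = total_cost / incremental_clicks        # £100 / 63
# Equivalently: the reported CPC divided by the incremental share (63 per cent).
assert abs(incremental_cpc - reported_avg_cpc / 0.63) < 1e-9

print(f"Effective cost per incremental click: £{incremental_cpc:.2f}")  # ~£1.59
```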
#4: How much ‘indirect’ traffic am I getting from AdWords?
Suppose someone views your search ad but doesn’t click on it, then later navigates directly to your website by typing your web address into their browser. Now suppose they did so because they had previously seen your ad. This is an indirect effect that won’t show up in your AdWords report, even though it was still caused by your ad.
Some people call this the ‘view-through’ effect, because a user viewed but did not click on the ad. There are many possible combinations of views and clicks across natural search, paid search and other media. Rather than try to decipher this mess, we simply run our usual A/B test to measure the effect in aggregate, which will help us better value our search ads.
In this case, we’ll recruit a panel of users who opt in to participate in the study and install a browser plug-in that controls which ads they do and don’t see. The search ads shown to the test group will be suppressed for the control group.
Then we’ll compare total website visits for each group, counting both visits from users who clicked on an ad and indirect visits from users who saw an ad and came to the site later. For one retailer, we observed a 62 per cent increase in total visits to their website.
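The analysis itself is just an aggregate comparison between the two panels. Here’s a small sketch, with made-up per-panellist visit counts, of how the lift in total visits might be computed; a real study would of course also test the difference for statistical significance.

```python
# A sketch of the aggregate comparison, assuming we have total site visits per
# panellist (ad clicks plus later direct visits) for each group. Data are made up.
import statistics

test_visits = [3, 1, 0, 4, 2, 2, 5, 1, 0, 3]      # panellists who could see the ads
control_visits = [1, 0, 2, 1, 0, 3, 1, 0, 2, 1]   # panellists with the ads suppressed

test_mean = statistics.mean(test_visits)
control_mean = statistics.mean(control_visits)
lift = (test_mean - control_mean) / control_mean

print(f"Test: {test_mean:.2f} visits/user, control: {control_mean:.2f} visits/user")
print(f"Aggregate lift in total visits: {lift:.0%}")
```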
#5: Is the campaign increasing traditional brand metrics?
We can also use A/B experiments to measure traditional brand metrics such as awareness, consideration and favourability. We simply survey the test group who were exposed to the search ads and the control group who weren’t, and then compare the results. This setup is sometimes called a ‘laboratory environment’ because we’re artificially asking the panellists to perform specific searches rather than observing their behaviour in the wild. We’ll use a study with General Electric (GE) as our example.
As part of the experiment, we asked the panellists to search ‘renewable energy’ then changed the search results pages in a variety of ways for the test group and control group. When a GE ad was shown in the top spot on the search results page for the test group but not for the control group, 28 per cent more respondents cited GE as the first company they thought of when it comes to renewable energy. And 36 per cent more respondents correctly recalled GE’s ‘Ecomagination’ tag line.
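A sketch of how such a survey comparison might be analysed is below, using a standard two-proportion z-test. The respondent counts are hypothetical, not the actual GE study data, and the example assumes the statsmodels library is available.

```python
# A sketch of the brand-metric comparison: what share of respondents in the exposed
# (test) and unexposed (control) groups name the brand first? Counts are made up.
from statsmodels.stats.proportion import proportions_ztest

recalled = [320, 250]    # respondents naming the brand first: [test, control]
surveyed = [1000, 1000]  # respondents surveyed in each group

lift = (recalled[0] / surveyed[0]) / (recalled[1] / surveyed[1]) - 1
stat, p_value = proportions_ztest(recalled, surveyed)

print(f"Relative lift in unaided recall: {lift:.0%} (p = {p_value:.3f})")
```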
#6: Are my AdWords ads increasing in-store sales?
This is a good one. We don’t provide a measure of in-store sales in AdWords, but we can measure it with a geo experiment. We’ll need to look at a few months of sales data for each retail store, the amount of search ad inventory available around each store location, and historical AdWords performance.
This information will allow us to determine how much to spend each day, how long to run the experiment and how big an increase in sales we’ll be able to detect (assuming there is one). This is called a ‘power analysis’. The next step is to run the experiment and analyse the results, comparing in-store sales in the test group with those in the control group. A recent experiment with Vodafone found a 1.5 per cent increase in in-store sales, with a 400 per cent return on ad spend.
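As an illustration of the power-analysis step, here’s a minimal sketch using statsmodels. The sales figures and variability are hypothetical, and a real geo experiment would also account for geo-level correlation and seasonality, but it shows how the smallest lift you want to detect translates into the amount of data the test needs.

```python
# A sketch of the power-analysis step, assuming historical store-level sales give us
# a baseline mean and standard deviation. All figures are hypothetical.
from statsmodels.stats.power import TTestIndPower

baseline_weekly_sales = 50_000.0   # average weekly sales per store (made up)
sales_std_dev = 8_000.0            # week-to-week standard deviation per store (made up)
min_detectable_lift = 0.015        # the smallest lift we want to be able to detect

effect_size = (baseline_weekly_sales * min_detectable_lift) / sales_std_dev  # Cohen's d
store_weeks_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)

print(f"Effect size: {effect_size:.3f}")
print(f"Store-weeks needed per group: {store_weeks_per_group:.0f}")
```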