A/B testing helps. When choosing a bidding strategy, you want to be confident that the way you are running your ads is the most efficient one. In this article, we share some of our learnings and tips on how to get started with A/B tests, with a focus on paid search campaigns.
Correct measurement
Before running any tests, understand that what you want to measure are business outcomes. Do not fall into the trap of optimising for goal completions on your analytics property. Measure what matters. Let's put this into context.
B2B or complex products involving sales teams
For complex sales cycles, it pays off to understand the whole customer journey, not just the form fills. Here are two examples of complex customer journeys.
Example one: You are selling an expensive product to a single buyer. Examples here could be education, cars, software or life insurance.
It is important not to treat the form fill as the point of conversion. Instead, focus on the leads that turn into opportunities or closed deals.
Example two: You are selling a complex software product to a company and multiple people are involved in the customer journey.
Optimising on a cost per form fill here will not come close to helping you make efficient decisions. Instead, map each touchpoint to the customer journey and make decisions based on business outcomes.
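To make this concrete, here is a minimal sketch in Python (pandas) of how you could report cost per opportunity instead of cost per form fill. The column names and figures (campaign, cost, stage) are assumptions for illustration; adapt them to your own ad spend export and CRM data.

```python
import pandas as pd

# Hypothetical exports: ad spend per campaign and CRM leads with their pipeline stage.
spend = pd.DataFrame({
    "campaign": ["generic_a", "generic_b"],
    "cost": [5200.0, 4800.0],
})
leads = pd.DataFrame({
    "campaign": ["generic_a"] * 40 + ["generic_b"] * 55,
    "stage": ["form_fill"] * 30 + ["opportunity"] * 10
             + ["form_fill"] * 50 + ["opportunity"] * 5,
})

# Every lead started as a form fill; count how many progressed to an opportunity.
summary = (
    leads.assign(is_opportunity=leads["stage"].eq("opportunity"))
         .groupby("campaign")
         .agg(form_fills=("stage", "size"), opportunities=("is_opportunity", "sum"))
         .join(spend.set_index("campaign"))
)
summary["cost_per_form_fill"] = summary["cost"] / summary["form_fills"]
summary["cost_per_opportunity"] = summary["cost"] / summary["opportunities"]
print(summary)
```

In this made-up example, campaign B looks cheaper per form fill but campaign A is clearly cheaper per opportunity, which is exactly the kind of decision a form-fill metric would get wrong.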
eCommerce conversions
In eCommerce, there are similar pitfalls. Depending on your product and market, optimising on eCommerce revenue can be suboptimal, especially when you are dealing with cancellations or returns.
You should always factor cancellations and returns into your measurement and reporting. Not doing so leads to decisions based on an incomplete data set. We have clients who had more than 40% of their orders returned on specific campaigns. Not knowing which campaigns are causing the returns can put your business at risk.
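As a rough illustration, here is a small sketch that computes return-adjusted revenue and ROAS per campaign, so returns show up in the same report as spend. The column names (campaign, revenue, returned, cost) are assumptions.

```python
import pandas as pd

# Hypothetical order export: one row per order, with a returned/cancelled flag.
orders = pd.DataFrame({
    "campaign": ["shoes", "shoes", "shoes", "bags", "bags"],
    "revenue": [120.0, 80.0, 95.0, 150.0, 140.0],
    "returned": [True, True, False, False, False],
})
spend = pd.DataFrame({"campaign": ["shoes", "bags"], "cost": [100.0, 100.0]})

report = (
    orders.assign(net_revenue=orders["revenue"].where(~orders["returned"], 0.0))
          .groupby("campaign")[["revenue", "net_revenue"]].sum()
          .join(spend.set_index("campaign"))
)
report["gross_roas"] = report["revenue"] / report["cost"]
report["net_roas"] = report["net_revenue"] / report["cost"]
report["return_rate"] = 1 - report["net_revenue"] / report["revenue"]
print(report)
```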
Types of tests
We want to look at two types of tests:
- Account-level: running one account on a given bidding strategy against another account on a different bidding strategy
- Campaign-level: running different bids against each other with a 50/50 split on budget
Account-level bidding strategy test
The goal of an account-level test is to understand which bid strategy works best at driving business outcomes. This test is best suited to agencies or owners of multiple similar brands.
Automated bidding strategies such as target CPA can outperform manual CPC or eCPC bidding. However, if you cannot connect your CRM conversion feed to ad platforms like Google Ads or Facebook, you should be extra careful.
Here is how to set this up, using generic search campaigns as an example:
Go into Google Ads and switch all campaigns in one account to target CPA. In the other account, switch all campaigns to eCPC or manual CPC.
Now let the campaigns run for a minimum of two weeks and see how they perform against your KPIs.
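Two weeks is a minimum, not a guarantee of enough data. As a rough sanity check, here is a sketch using statsmodels to estimate how many clicks per account you would need to detect a given lift in conversion rate; the baseline rate and minimum detectable lift below are assumptions you should replace with your own numbers.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed baseline conversion rate and the smallest lift worth detecting.
baseline_rate = 0.05            # e.g. 5% of clicks convert
minimum_detectable_rate = 0.06  # we want to detect a lift to 6%

effect_size = proportion_effectsize(minimum_detectable_rate, baseline_rate)
clicks_needed_per_account = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, ratio=1.0
)
print(f"Roughly {clicks_needed_per_account:.0f} clicks per account are needed")
```

If two weeks of traffic falls well short of that number, extend the test before drawing conclusions.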
Results
Here is an example of how target CPA compares with eCPC in terms of CRM outcomes. In this case, the product has an average customer lifetime value of $60,000:
Target CPA bids
eCPC bids
Here the eCPC bids outperformed target CPA because there was no way of sending the contributing GCLIDs (Google Click Identifiers) back to Google. In this case, it was better for the client to use manual bidding strategies.
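If you can capture the GCLID on your forms, you can close this gap by importing offline conversions from your CRM. Below is a sketch that builds an import file from closed-won deals; the column names reflect Google's offline conversion import template as we understand it, and the conversion name, timestamp format and values are assumptions, so verify everything against the template in your own account before uploading.

```python
import csv
from datetime import datetime

# Hypothetical CRM export: deals that closed, with the GCLID captured at form fill.
won_deals = [
    {"gclid": "EXAMPLE_GCLID", "closed_at": datetime(2021, 3, 12, 14, 30), "value": 60000},
]

# Column names based on Google's offline conversion import template
# (verify against the template downloadable in your Google Ads account).
with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for deal in won_deals:
        writer.writerow([
            deal["gclid"],
            "CRM - Closed Won",  # must match a conversion action defined in Google Ads
            deal["closed_at"].strftime("%Y-%m-%d %H:%M:%S"),  # assumed timestamp format
            deal["value"],
            "USD",
        ])
```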
Campaign-level bid strategy test
This section is best suited to users who are already on the Windsor.ai platform. We will cover how to benchmark the performance of your existing CPC bids against the Windsor.ai keyword-level bids.
Here is how to get started:
Step 1: Set up an experiment draft in Google Ads
- Sign in to your Google Ads account.
- From the page menu on the left, click Drafts & experiments, then click Campaign Drafts.
- Click on the plus button (new draft).
- Name your draft the same as your original campaign, just append ‘_windsor’ at the end of the name, and click Save.
More information from Google help can be found here.
Step 2: Set up your experiment in Google Ads
- From the page menu on the left, click Drafts & experiments, then click Campaign Experiments at the top of the page.
- Click the plus button.
- Click Select draft, and select the draft you have set up in Step 1.
- Name your experiment the same as your original campaign, just append ‘_windsor_exp’ at the end of the name, and click Save.
- Choose a start date for your experiment.
- If you’d like to manually end your experiment, select None. Otherwise, choose an end date for your experiment. We recommend running experiments for at least 4 weeks.
- Allocate 50% of your budget to the experiment.
- For Search campaigns, under “Advanced options,” choose a Search-based split for the experiment.
- Click Save to finish creating the experiment.
More information from Google help can be found here.
Repeat Step 1 and Step 2 for all the campaigns you want to test.
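Once you have set up several campaigns this way, it can help to double-check that the naming convention was applied everywhere. Here is a minimal sketch using the official google-ads Python library, assuming you already have a configured google-ads.yaml and know your customer ID; it lists the campaigns whose names end with ‘_windsor_exp’.

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes an existing google-ads.yaml with your developer token and OAuth credentials.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT campaign.id, campaign.name, campaign.status
    FROM campaign
"""

# Replace with your own customer ID (digits only, no dashes).
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        if row.campaign.name.endswith("_windsor_exp"):
            print(row.campaign.id, row.campaign.name, row.campaign.status)
```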
Step 3: Filter the Windsor.ai bid optimiser to only optimise your experiment
After setting up the 50/50 split, we recommend waiting 7 days before doing the actual bid optimisations.
Now you need to exclude the original campaigns from the bid optimiser.
- Go to your Windsor.ai dashboard
- Head to the Optimise Bids section and click on the three dots (see arrow)
- Click inside the filter box on the right side
- Add a filter to only show campaigns ending with ‘_windsor_exp’
- Click Save
- Select overwrite existing chart and click Save
Step 4: Optimise your bids
Now you can refresh your dashboard and click the send to Google Ads button.
Step 5: Track original versus experiment
To see how the campaigns behave over time, you can track them in the campaigns section of the dashboard. You can also set up line charts to see performance over time. We recommend running the optimisations at least once per week.
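If you prefer to track this outside the dashboard, here is a small sketch that splits a daily campaign export into original vs. experiment using the ‘_windsor_exp’ suffix and plots weekly cost per conversion. The file name and columns (date, campaign, cost, conversions) are assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical daily export with columns: date, campaign, cost, conversions.
daily = pd.read_csv("campaign_daily.csv", parse_dates=["date"])

# Group campaigns into original vs. experiment by the naming convention.
daily["variant"] = daily["campaign"].str.endswith("_windsor_exp").map(
    {True: "experiment", False: "original"}
)

daily["week"] = daily["date"].dt.to_period("W")
weekly = daily.groupby(["week", "variant"])[["cost", "conversions"]].sum()
weekly["cpa"] = weekly["cost"] / weekly["conversions"]

weekly["cpa"].unstack("variant").plot(marker="o")
plt.ylabel("Cost per conversion")
plt.title("Original vs. experiment: weekly CPA")
plt.show()
```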
Results
After 6 weeks you should have a good indication of which bids perform better. If you want to switch all bids to Windsor.ai, you just need to remove the filter you put in place for the experiments.
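Before rolling out the winner, it is worth checking that the difference is not just noise. Here is a minimal sketch using a two-proportion z-test from statsmodels, with made-up conversion and click totals for the original and experiment arms.

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up totals after 6 weeks: [original, experiment].
conversions = [180, 214]
clicks = [9500, 9400]

stat, p_value = proportions_ztest(count=conversions, nobs=clicks)
print(f"Conversion rates: {conversions[0]/clicks[0]:.2%} vs {conversions[1]/clicks[1]:.2%}")
print(f"p-value: {p_value:.3f}")
if p_value < 0.05:
    print("The difference is unlikely to be noise at the 5% level.")
else:
    print("Not enough evidence yet - consider running the experiment longer.")
```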