What is ad testing?
Advertising research, often simply called ad testing, establishes an ad’s effectiveness based on consumer responses, feedback, and behaviour. It can be carried out on an ad-by-ad basis, or with periodic or continuous in-market research that monitors the performance of your advertising campaign over time.
Why test your ad?
No matter how much money you throw at your advertising campaign, which celebrity endorses your product, or how many Hollywood-style special effects it includes, if it doesn’t reach and influence your target audience, it’s wasted effort. Ad testing helps make sure your advertisement resonates with your target audience, leading to better conversion rates, a stronger brand, and more of the positive associations that come with it.
How do you test an ad?
- Take a pre-selected segment of your audience that represents the target group for your campaign, such as people in a particular age group, those in specific job roles, or a combination of your preferred criteria.
- Show respondents the ad – different versions of it, or different ad concepts
- Survey respondents to find out what they thought of it
- Test before, during and after launch. Pre-testing ad concepts can help you make sure your campaign starts on the right note and avoid costly pitfalls. In-flight monitoring of a campaign shows you the performance arc across its lifecycle. This helps you pinpoint where conversions occurred and how sentiment and purchase decisions evolve over time in response to your ad.
How do you design an ad-testing survey?
People’s perception of ads is so subjective that it can be difficult to measure exactly what makes an ad ‘click’ with the people who see it. A well-constructed survey is a powerful tool for collecting a range of highly personal responses in consistent, quantifiable ways. Here’s how to create one.
1. Set your goal
A good survey always works towards a clear goal – in this case, measuring how well the ad is fulfilling your objectives. This desired outcome is known as the ‘advertising effect’, and it’s a key ingredient to include in your survey questions. So, first of all, decide what you want your ad to achieve:
- Raising awareness of a new product?
- More web traffic?
- Driving more sales of an existing product?
- Promoting your brand?
- Something else?
2. Cover these five elements of ad effectiveness
To generate a complete view of your ad, these are the five areas to cover:
- Service or product attributes: These are the product features and qualities the ad conveys to the target audience. Your questions should gauge how well the message is being communicated. Can they pick up from the ad that your product is cheap, delicious, built to last, versatile or convenient to use?
- Benefits: These are the actual experiences a customer expects the product to deliver. It could be having more money in their pocket, comfort, a better night’s sleep, warm, cosy toes or a more convenient way to feed the family.
- Personal values: These are the positive emotional associations that come with the product and brand. You may want to see if the customer feels cared about, in safe hands, that their needs are met, or that they’re going to be inspired.
- Higher order values: How well does the product tap into the personal and identity-related values its target customers hold? Does it make them feel secure, wise, that they have great taste or maybe that they’re an astute bargain hunter?
- Advertising effect: This is what the ad itself does. Depending on your ad and product, these could include: being entertaining, realistic, attention-grabbing, good at prompting a purchase, or inspiring people to seek more information.
3. Select the right style of question
Use a mix of question styles. For your ad testing, you’ll be dealing with ideas or constructs that range from positive to negative, such as:
- How appealing is the ad?
- How believable is the ad?
- How relevant is this ad to you personally?
- A 5-point Likert scale (strongly agree / agree / neutral / disagree / strongly disagree) is a good way to ask these types of questions, since it gives you comparable, specific results without too much respondent effort. A scale also avoids the agreement bias that yes/no questions can introduce.
- Open text fields: Include one or two of these for respondents to fill in with their own words. Text analysis software can gather and interpret text, giving insight into your respondents’ more nuanced feelings about your ad. For example: What is this ad saying to you?
- Radio buttons: Particularly useful for sounding out the clarity of your ad’s messaging, with options such as confused / unclear / didn’t understand. Where the respondent selects such an option, follow up with an open text field where they can elaborate in their own words on which part of the ad was confusing or wasn’t clear.
- Keep questions neutral: Don’t use any more words than necessary, and avoid emotive or descriptive terms that might bias the response.
- Capture demographic information: As well as collecting data about opinions and reactions, be sure to add a few fields at the end of the survey that request age bracket, gender, profession and any other demographic metrics you’re interested in. This information can help you target your ad towards the right people once it has been created.
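Once Likert responses come in, they can be converted to numbers and compared across ad versions. Here’s a minimal sketch; the 1–5 mapping and function names are illustrative assumptions, not any particular survey platform’s schema:

```python
# Illustrative sketch: scoring 5-point Likert responses numerically.
# The label-to-score mapping below is an assumption, not a standard API.

LIKERT_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def mean_likert_score(responses):
    """Convert Likert labels to 1-5 scores and return the average."""
    scores = [LIKERT_SCORES[r.strip().lower()] for r in responses]
    return sum(scores) / len(scores)

# Example: five answers to "How appealing is the ad?"
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
print(mean_likert_score(responses))  # → 3.6
```

An average like this lets you compare, say, the ‘appeal’ of ad concept A against ad concept B on the same scale.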
What do you measure in your ad testing program?
- Is there anything confusing, unclear or difficult to understand about the ad? Ask at least one question that explores the shortcomings of your ad, emphasising that you’re actively interested in any criticism they might have. This breaks down any reluctance to be negative and sets the tone for a constructive and honest survey response.
- How likely are respondents to recommend the product to a friend based on this advertising? Because social proof (copying what others do, particularly when we’re not sure) is so powerful, recommendations are a strong indicator of perceived product quality. If someone is willing to advocate your product to a friend, it’s likely they think it will create a positive result not just for the friend, but for the friendship itself. When a recommendation is made, values like trust and reciprocity are being staked on your product, albeit in a small way. The best way to measure this is by the ever-reliable Net Promoter Score (NPS): On a scale of 0-10, how likely are you to recommend [product/service] to friends or family, based on this ad?
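The NPS arithmetic itself is simple: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), with passives (7–8) counted in the total but not in either group. A minimal sketch:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but neither group.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Example: ten responses to the 0-10 recommendation question
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(ratings))  # → 10 (40% promoters - 30% detractors)
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters).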
- Qualitative data from open text fields. Including at least one open text field is a bit like adding a safety net to catch anything important that’s fallen through gaps in your survey design. It’s also a way of giving respondents free rein to tell you anything you haven’t explicitly asked, and capturing questions or opinions that your survey has triggered while they’re fresh in the respondent’s mind. Collecting qualitative data like this is increasingly valuable, especially since natural language can be processed at scale and turned into sentiment analysis with tools like Qualtrics iQ. In the Qualtrics survey platform, you can choose between different open field text box sizes, from one-line fields that encourage a quick, succinct response, to larger essay-style boxes for when there is more to say.
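To make the idea of turning free text into sentiment concrete, here is a deliberately simple toy tally. Real tools like Qualtrics iQ use far more sophisticated natural language processing; the word lists and logic below are illustrative assumptions only:

```python
# Toy illustration: tagging open-text answers as positive/negative/neutral
# by counting matches against small keyword sets. The word lists are
# illustrative assumptions, not how production sentiment analysis works.

POSITIVE = {"love", "great", "clear", "funny", "memorable"}
NEGATIVE = {"confusing", "boring", "unclear", "annoying"}

def tally_sentiment(answers):
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for answer in answers:
        words = set(answer.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        if pos > neg:
            counts["positive"] += 1
        elif neg > pos:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    return counts

answers = [
    "I love the music, very memorable",
    "the message felt confusing to me",
    "it was fine",
]
print(tally_sentiment(answers))  # → {'positive': 1, 'negative': 1, 'neutral': 1}
```

Even this crude tally shows why aggregating open-text responses is useful: it surfaces the overall balance of sentiment across many individual answers.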
How do you feed survey insights back into your ad design process?
You can further hone your ad concept by iterating it through multiple rounds of testing, and tweaking it in response to what your survey audience tells you. These iterative results are valuable in the longer-term too – they can help steer your creative team away from ideas that weren’t successful in testing and focus them on what works well for a particular audience.
How do you choose the right platform for ad testing?
Generally, whichever channel you run your ad on (social media, pay-per-click (PPC) search ads, sponsored listings, etc.) will come with built-in tools to quantify your ad’s performance in specific, measurable ways.
This is ‘what happened’ operational data (O-data), which reports factually on the interactions you care about. But it isn’t a joined-up solution: the ad that was a roaring success on Instagram is unlikely to work the same way if you transplant it as-is to Facebook.
The more joined-up solution is to use experience data (X-data), which can unravel the deeper meanings behind operational facts and figures. It’s centred on the people you care about rather than the things that happen: the experiences they’re having, what they value, what puts them off a purchase, and what drives their decisions. Data flows into a single platform through sentiment, rankings and ratings, or in the customer’s own words through reviews and customer service conversations.
In the case of ad testing, the people you care about are likely to be your prospective customers, or the audience groups you’re targeting with your ads.