A key marketing concept to get your head around is testing.
Testing to improve results has been an accepted marketing practice for years now, but it still surprises me how many businesses don’t do any testing at all. There are a number of areas where comparing different responses should be standard operating procedure for your business, regardless of whether you’re a big company looking to consolidate or a small one wanting to grow.
Testing, whether A/B or multivariate, is usually championed as a way to increase conversion rates. This could be increased sales, leads, clicks or engagement. The benefit is that the same amount of traffic or views yields a better result. Who doesn’t want that?
And while analysts and marketers can geek out on the details of different split-testing results, the real benefit is that testing provides insights into what your prospects or customers react to. It means you can measure ‘real’ responses rather than what people say they want or how you or they think they would react in a given situation.
Focus groups are a traditional way to get qualitative research from potential customers. There is no doubt that they are valuable and continue to provide useful insights. But they are limited.
Participants know that they are being watched and their responses monitored by the company doing the testing. They are also being watched by other participants. Their reactions approximate a live situation, but a key reason they are not the same is that no one is reaching for their wallet to pay. Instead, they are being paid for their participation.
The standard mantra in sales and marketing is to find out what people want and give it to them. The challenge is working out exactly what that is, because people often don’t know what they want until they see it.
The best and most often quoted illustration of people not knowing exactly what they wanted is the iPhone. Steve Jobs believed that people wanted a more responsive phone and was determined that his products would be stylus free.
Before the iPhone was launched the BlackBerry was ubiquitous, and featured a full QWERTY keyboard. As a result, there probably weren’t many people asking for a touchscreen phone with the keyboard built into the display. It wasn’t until Apple created it that people realised they wanted it.
In fast food circles it’s common to hear that consumers “talk healthy but eat fat”. In focus groups and vox pops people talk about wanting to make a healthier choice when ordering takeaway. But when it comes down to placing the order they will choose the option that they know isn’t as healthy for them, even when there is a healthy option on the menu in front of them.
Another example is the rise of home-brand products in supermarkets. People may say they’ll stay loyal to the brand they’ve known and used for years, but when they’re standing in front of the olive oil in the supermarket aisle and the home-brand version is $3 cheaper and they need 2 bottles (a potential saving of $6), the choice is going to be home brand most times, despite their proclaimed loyalty. And because this decision is played out across the majority of items they need to buy, it can add up to a big price differential.
Without data from running tests, you’re guessing. And while educated guesses can get you a fair way, it isn’t as conclusive as taking a structured approach to determining what works and what doesn’t.
The other aspect is that people are surprising. They don’t always act or react in ways that you expect. Testing allows for that element of surprise and stops you from operating in your own self-referencing bubble.
The insights from testing can be used to create new products and to structure your site in a way that is more appealing to your target audience. And while comparing responses is most easily done online, the results can be used to improve offline marketing as well. For example, a headline that gets a better result for an online ad can also be used in a printed publication.
Another way testing can help is in determining how much copy is needed to trigger a response. Perhaps your target market is busy and responds better to a page that is mostly images and headlines compared to a page with several paragraphs of text.
Or maybe your research indicates the opposite. If that was the case then a printed brochure would need to include plenty of explanatory text or perhaps a booklet is needed to educate your prospects before they are ready to buy.
There are a number of different elements that you can test. The ones you could include in your experimenting are headlines, images, button colors and the amount of copy on a page.
If you haven’t done any testing before and looking through the above list makes you feel overwhelmed, take a breath and zero in on one of the elements listed.
While you’ll want to work your way through each of the different elements, trying to test everything at once is a recipe for disaster and will likely result in you throwing your hands in the air and abandoning it altogether.
Are you spending money on advertising?
If you answered ‘no’ to advertising, do you send emails to your list?
If you answered ‘no’ to emailing your list, do you have a website?
If you don’t have a website, what else can you test?
Think about how you can run other tests that will let you get to know your prospects and customers better. Things you can try are:
If you choose one or several of the options listed above or come up with your own test, make sure you track what you’re doing. A simple spreadsheet will suffice so you can measure the results.
There are 2 main types of testing you can use: A/B split-testing and multivariate testing.
A/B split-testing is where you send 50% of traffic to option A and 50% to option B. The aim is to find out which version will result in better engagement, usually measured in terms of conversions. It can also refer to a setup where 90% of traffic is sent to the ‘control’ and 10% to a test.
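To make the mechanics concrete, here is a minimal Python sketch of how visitors could be assigned to a 50/50 (or 90/10) split. The function name and the hashing approach are illustrative assumptions, not the API of any particular testing tool:

```python
import hashlib

def assign_variant(visitor_id: str, test_share: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'A' (the test) or 'B' (the control).

    Hashing the visitor id means the same person always sees the same
    version on repeat visits. test_share=0.5 gives a 50/50 split;
    test_share=0.1 sends only 10% of traffic to the test, as in a
    90/10 setup.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    return "A" if bucket < test_share else "B"

# Rough check of the split across many simulated visitors
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
print(counts)  # roughly 5,000 in each bucket for a 50/50 split
```

Real testing tools handle this assignment for you; the sketch just shows why each visitor consistently lands on one version.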
Multivariate testing involves sending traffic to a number of different options each with a different combination of testing elements. Let’s say you want to test 2 different headlines with 2 different button colors. Those 2 testing elements are then tested in different combinations with each other resulting in 4 versions of a particular page.
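The combinations in that example can be enumerated in a few lines of Python; the headline and colour values below are hypothetical placeholders:

```python
from itertools import product

headlines = ["Save time today", "Cut your costs"]  # hypothetical copy
button_colors = ["green", "orange"]                # hypothetical colours

# Every combination of the two elements becomes its own page version.
versions = list(product(headlines, button_colors))
for i, (headline, color) in enumerate(versions, start=1):
    print(f"Version {i}: '{headline}' with a {color} button")

print(len(versions))  # 2 headlines x 2 colours = 4 versions
```

Adding a third element with 2 options would double the count again to 8 versions, which is why multivariate tests multiply quickly.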
A/B testing is best if you’re just starting out, because it is the simplest to set up and interpret.
Multivariate is best if you’re experienced at testing and you have a lot of traffic. Because traffic is split across more variations (4 pages compared to 2 in our example above), you need a lot more traffic to pick a conclusive winner.
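To see why more variations demand more traffic, here is a rough two-proportion z-score check in Python, a common way of judging whether a difference in conversion rates is likely to be real. The 1% versus 1.2% rates are illustrative:

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: an absolute value above roughly 1.96
    suggests the difference between the two conversion rates is
    unlikely to be chance (about 95% confidence)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # combined conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# A 1% vs 1.2% conversion rate with 5,000 visitors per page...
print(round(z_score(60, 5000, 50, 5000), 2))  # ≈ 0.96, not yet conclusive
# ...and the same lift when the visitors are split across twice as many pages
print(round(z_score(30, 2500, 25, 2500), 2))  # ≈ 0.68, even less conclusive
```

Halving the visitors per variation shrinks the z-score, so a multivariate test spreading the same traffic over 4 pages needs to run much longer to reach a confident verdict.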
Running a variety of tests has become progressively easier and simpler as new technology becomes available. Which is great, because you don’t want ‘technology overwhelm’ to get in the way of getting started.
I’ve listed some tools you can use to get going today:
Google has a free tool called Optimize that can get a split-test up and running on any page of your website within 30 minutes. The most challenging part is adding a snippet of code to your website, but it only needs to be done once if you’re using a content management system like WordPress, which uses the same header on each page.
Optimize provides instructions but if adding the code to your site is a stumbling block then hire a developer to add it for you.
These are the steps involved in getting started with testing on Optimize:
If you’re advertising with a service like Google Adwords you can easily run alternative ads for each ad group. If you do, make sure you change the ad-rotation setting so that Google shows your two ads evenly for 90 days and then displays the better-performing option.
There is another option where you select the winner yourself, but the automatic setting is a good backup in case you don’t get back to it.
Many email marketing services have an inbuilt ability to run split-tests. Some have better setups than others and make it easier but most have the scope to do it. Programs like Mailchimp have training information and so do others like Active Campaign or Infusionsoft.
A simple A/B test can yield results that have a direct impact on your bottom line. What does that look like? We’ll go through an example so you can see how it could work.
In the example below, let’s say that over a one-week period your site receives 10,000 visitors. In the A/B test, that means there will be 5,000 visitors to each page. Your existing conversion rate for sales is one per cent, so for every 100 visitors you get one sale; 10,000 visitors to the site would normally result in 100 sales.
However, version A achieved a twenty per cent increase over version B (your existing home page), which means that instead of fifty sales there were sixty sales. If each sale was worth $100 profit then this test yielded $1,000 more profit for version A.
Using version A as the new homepage without doing any other split-tests would mean your new conversion rate would be 1.2 per cent, or 120 sales for 10,000 visitors. That means an increase of twenty sales which equates to $2,000 extra profit.
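The arithmetic above can be checked with a short Python sketch; the figures simply mirror the worked example:

```python
visitors = 10_000       # weekly traffic from the example
base_rate = 0.01        # existing 1% conversion rate (version B)
lift = 0.20             # version A's 20% relative improvement
profit_per_sale = 100   # $100 profit per sale

# During the test, traffic is split evenly between the two pages.
per_page = visitors // 2
sales_b = per_page * base_rate               # 50 sales on the control
sales_a = per_page * base_rate * (1 + lift)  # 60 sales on version A
print(f"Extra profit during the test: ${(sales_a - sales_b) * profit_per_sale:,.0f}")

# After rolling version A out to all traffic:
new_rate = base_rate * (1 + lift)                # 1.2% conversion rate
extra_sales = visitors * (new_rate - base_rate)  # 20 more sales per week
print(f"Extra weekly profit after rollout: ${extra_sales * profit_per_sale:,.0f}")
```

Swapping in your own traffic, conversion rate and profit figures shows quickly what a given lift would be worth to your business.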
Testing is a crucial marketing concept to get your head around. It should be a regular part of the way you do business because of the customer insights it can provide and the potential increase in leads and sales. You can start with advertising, email marketing, your website or offline testing. The most important thing is to start and foster a testing mindset.
If you can’t fathom any other marketing concept, make testing the one that you do pursue.
This testing overview hopefully provides the impetus to get started with experimenting in your business. Let me know how you go.