Editor’s Note: In a recent interview with MarketingSherpa Editor Sean Donahue, Research Analyst Corey Trent outlined errors even experienced email marketers make when conducting tests. (My personal favorite – #5.) We thought this information was valuable and wanted to share it right here on the blog for those who do not have a MarketingSherpa membership. Special thanks to our sister company for allowing us to republish the article below…
SUMMARY: Before you conduct your next email test, make sure you’re not falling into a trap that can muddy your results or limit the gains you might otherwise achieve.
We spoke with an email testing expert from our sister company, MarketingExperiments, to uncover common mistakes marketers make when running email tests. Read why good analytics and segmentation are crucial forerunners to testing, and why a blockbuster discovery from one test actually can be a risky thing for a marketing team.
by Sean Donahue, Editor, MarketingSherpa
Testing is an essential component of a strong email marketing strategy. But only if the tests are conducted and analyzed properly to ensure you’re helping – not hurting – your email performance.
“There is a cost for bad testing,” says Corey Trent, Research Analyst, MarketingExperiments. “Bad assumptions based on bad tests can cost you a lot of money and cause you to lose out on a lot of business.”
Trent routinely conducts email tests as a member of the MarketingExperiments sales and marketing optimization research team. Through this work, he’s seen how mistakes, misconceptions and simple oversights can derail a well-meaning marketer’s testing strategy.
We asked him to share his advice for avoiding testing pitfalls, so you can achieve your goal of improving email performance. Here are seven common mistakes he’s observed:
Mistake #1. Not having the right analytics behind your tests
Sometimes marketers must recognize they are just not ready for email testing, says Trent.
In particular, he says, marketers often don’t have the processes or the analytics systems in place to capture a complete data picture — you must be able to tie your email metrics to business goals, such as revenue.
For example, if you’re only measuring email metrics, you might test a new subject line treatment that increases open rates by a few percentage points. Sounds exciting, but without tying subsequent clicks from those messages to website sales, you might not notice that you’re actually decreasing conversions and total revenues because the subject line is creating the wrong expectations about your message.
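As a hypothetical illustration of this point (the variant names and figures below are invented, not from the article), comparing two subject-line treatments by revenue per email sent, rather than open rate alone, can reverse the verdict:

```python
# Invented results for two subject-line variants of the same campaign.
variants = {
    "A (current)":  {"sent": 10000, "opens": 1800, "revenue": 4500.00},
    "B (new line)": {"sent": 10000, "opens": 2100, "revenue": 3150.00},
}

for name, v in variants.items():
    open_rate = v["opens"] / v["sent"]
    revenue_per_email = v["revenue"] / v["sent"]
    print(f"{name}: open rate {open_rate:.1%}, "
          f"revenue per email ${revenue_per_email:.3f}")
```

With these made-up numbers, variant B "wins" on open rate (21% vs. 18%) yet earns less revenue per email sent ($0.315 vs. $0.450) – exactly the trap an opens-only test would miss.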
The good news is that it is now relatively easy to track email activity through to website metrics using tools such as Google Analytics. This program lets marketers tie clicks from their email messages to specific goals on their site, such as product pages viewed or time spent on site (see Related Resources below).
“Email tracking all the way through to business goals is available for everyone, and doesn’t take a long time to set up or require having an IT team to do a lot of programming for you,” says Trent.
So make sure your analytics systems are robust enough to support your objectives before you begin email testing.
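One common way to make email clicks visible in Google Analytics is to tag each landing-page link with UTM parameters. A minimal sketch, using only the Python standard library – the campaign and variant names are placeholders, not values from the article:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_link(url, campaign, variant):
    """Append Google Analytics UTM parameters to a landing-page URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": "newsletter",   # where the traffic comes from
        "utm_medium": "email",
        "utm_campaign": campaign,     # e.g. the mailing or promotion name
        "utm_content": variant,       # distinguishes test treatments A/B
    })
    return urlunparse(parts._replace(query=urlencode(query)))

tagged = tag_link("https://example.com/product", "spring_sale", "subject_b")
print(tagged)
```

Every click on a tagged link then shows up in your analytics reports attributed to the specific campaign and treatment, which is what lets you follow a test through to on-site goals.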
Mistake #2. Not segmenting email lists for tests
Many marketers have large email databases, but don’t know a lot about the records held within. In these cases, they may conduct an email test using their entire database – and create a muddy results picture.
“Without segmented lists you don’t get good test results,” says Trent. “You get all these people responding differently to your emails, which pulls your results in all different directions.”
Spend the time to segment your database and understand the different characteristics of the segments before you embark on email testing. The more you know about unique segments within your database prior to testing, the better chance you’ll have of finding the right messages to appeal to them.
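To make the idea concrete, here is a minimal sketch of bucketing a list into segments before testing. The fields (`last_open_days`, `purchases`), thresholds, and segment names are all hypothetical – your own database will dictate what characteristics matter:

```python
from collections import defaultdict

def segment(subscriber):
    """Assign a subscriber to a hypothetical engagement segment."""
    if subscriber["purchases"] > 0 and subscriber["last_open_days"] <= 30:
        return "active buyers"
    if subscriber["last_open_days"] <= 30:
        return "engaged non-buyers"
    return "lapsed"

# Invented sample records for illustration.
subscribers = [
    {"email": "a@example.com", "last_open_days": 5,   "purchases": 2},
    {"email": "b@example.com", "last_open_days": 12,  "purchases": 0},
    {"email": "c@example.com", "last_open_days": 200, "purchases": 1},
]

segments = defaultdict(list)
for s in subscribers:
    segments[segment(s)].append(s["email"])
```

Running a test within one of these buckets, rather than across the whole mixed list, keeps differently behaving groups from pulling the results in opposite directions.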
Mistake #3. Stopping tests after one big win
“With email more than anything, we see people get a big win and stop testing,” says Trent.
As exciting as those big wins may be, they shouldn’t be the end of your testing process. The makeup of your email lists is constantly changing; external factors, such as the economy, also impact subscriber behavior; and your competitors’ campaigns and tactics are always changing as well.
This constant state of change means you must routinely work on the messaging, layout, calls-to-action and other elements of your email messages to ensure you’re getting the full benefit of a testing program.
Mistake #4. Testing too often
The flip side of stopping tests too quickly is falling into a pattern of repeated testing, to the point that you simply can’t keep up with all the data you’re generating.
When finding the right pace for testing new email treatments, think critically about the tests you want to perform. Focus on the ones with the greatest potential impact on your specific business goals.
Trent suggests researching your current email strengths and weaknesses: Examine your own email and website metrics and study industry research to benchmark your own performance against your industry peers. This process can uncover where your email programs are delivering weak performance – and where you have a bigger chance of improving your company’s bottom line through testing.
“Those deep dives into the data you have available help you prioritize that list of potential tests,” says Trent.
Mistake #5. Overlooking email copy tests
Today’s email messages are so versatile that marketers have a wealth of options to test: Layout changes, images, fonts, colors, social media integration and so on.
But the ability to tweak these elements can make marketers forget about the old standby: copy tests.
“What often gets left out is how important the copy and communication pieces of an email are,” says Trent.
Copy changes can deliver surprising results, so keep them on your list of test options. You can try using a different voice for your message, long copy vs. short, different copy layouts, or even a personal letter from someone in your company.
Mistake #6. Always testing additions, rather than subtractions
The pressure to generate more engagement with subscribers can push marketers to continually test new additions to their emails: More images, more buttons, more links or more colorful text.
Too many additions can actually confuse recipients, making it hard for them to decide what to do with your message.
Instead, Trent has seen marketers achieve strong performance gains by eliminating elements from an email and reducing the number of decisions a recipient has to make.
By focusing a message on a single goal — such as inviting recipients to an event or generating sales within a specific product category — you can see whether there are extraneous elements that might be distracting recipients from the desired action. You can then test the impact of removing those distractions.
Mistake #7. Not asking other customer-facing teams for test ideas
Great test ideas don’t always come from the marketing team. It’s likely you have other sources within your company whose insights can shape a good email test.
Trent recommends talking to sales people or customer service representatives to learn what they’re hearing from customers. The feedback these teams receive from customers about your products and services — or why they chose to buy from you — can spark ideas for new subject lines, email copy, or specific products to feature in email promotions.
“Don’t be scared to go down and talk to those people to get some good ideas.”
Related Resources:
The Magical Metrics Tour: Demystifying the secrets behind analytical “tricks” to help you drive ROI (describes how to track email clicks with Google Analytics)
What Else Can I Test….To Increase Email Click-through?
Discover the Best Time to Send Email: 4 Test Ideas (for MarketingSherpa members)
Improve Your Email Programs: 5 Test Ideas (for MarketingSherpa members)