Better tests, better results — here’s how
With our 2008 analyst roundup clinic close at hand, one of its prominent themes is adopting and applying effective testing methods.
Many marketers still need help with this, so we turned to our research analyst, Andy Mott, for a blog post with some guidance from the trenches.
Here’s what he suggests …
It’s not hard to apply the MarketingExperiments methodology to design effective tests. However, it can be much more difficult to identify the best things to test and the proper sequence for testing.
As marketers adopt a scientific approach to testing and measuring their marketing communications, we've seen that even tests that are designed, implemented, and executed well can fall short because the wrong element was chosen for testing in the first place.
Here are three things to consider when starting a testing cycle:
- Put the company’s objective first. When you begin a testing cycle for a particular campaign or website, create a record of all the ideas you have for testing. These ideas are the framework for your test plan. The test plan serves as a “living document,” meaning that it is in a constant state of flux. Be flexible enough to change the plan, create new tests, and scrap others as you learn from the results of the test battery. At the top of this plan should be your primary research question – the company’s objective for testing. This can be simple, such as: “ABC company’s revenue model is ad based. The objective is to get more users seeing our partners’ ads.” Notice that this goal doesn’t say “increase pageviews” or “increase e-mail open rate.” Those are ways to accomplish the primary objective, not the objective itself.
- Measure, Learn, Test, Repeat. Understanding and clearly defining the success metric for your test is just as important as the primary research question. But secondary metrics also help you understand the reasoning behind your results. Think of it this way: we marketers are often asked to appear before the powers-that-be and justify what we’ve done. As rewarding as it is to say something like, “We tested our offer page, and our treatment design resulted in a 53% increase in revenue!” it is equally discouraging to be asked why and not have a great answer. Sometimes these “whys” are the key to making additional changes that push 50% gains up to 150%. Failing to measure everything that can be measured, and to compare those results, limits your ability to design effective follow-up tests (for a practical example, see the brief “Optimizing headlines, part 2”).
- Testing discipline also means having the discipline to change. One of the hardest parts of testing is resisting the temptation to interpret results before they are valid. You need discipline to keep a test running all the way through to validity so you can provide a scientific interpretation of the results. And that is only one half of testing discipline; the other half is being able to admit when you are wrong and change your strategy. I can tell you that it’s deflating to work hard on a test strategy that you think will get great results, only to see the control trounce your brainchild treatment design. But that’s when marketing scientists need to shake it off, re-evaluate their test objectives, absorb what they learned from the test, and apply it to the next one.
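To put a number behind “running a test through to validity”: the brief doesn’t prescribe a method, but one common approach is a two-proportion z-test on the conversion rates of control and treatment. Here’s a minimal sketch with made-up visitor and conversion counts (not from any actual MarketingExperiments test):

```python
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Z-statistic for the difference between two conversion rates,
    using the pooled conversion rate to estimate the standard error."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical sample: control converts 120 of 4,000 visitors,
# treatment converts 156 of 4,000.
z = two_proportion_z(120, 4000, 156, 4000)

# |z| > 1.96 corresponds to roughly 95% confidence, two-sided.
print(f"z = {z:.2f}; significant at 95%: {abs(z) > 1.96}")
```

If the sample were cut in half partway through, the same observed rates would no longer clear the 1.96 bar – which is exactly why calling a winner early is so tempting and so dangerous.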
Remember: The only failed test is one that doesn’t teach you anything. With experience, you begin to realize that you sometimes learn even more from tests that don’t perform as expected.
For more insights from Andy and our team, you can now access our free web clinic — Lessons Learned: Our analysts reveal the top takeaways from our 2008 research.