Online Testing

5 steps to launching tests and being your own teacher


Testing is the marketer’s ultimate tool. It allows us to not just guess what coulda, woulda, shoulda worked, but to know what actually works. But more than that, it gives us the power to choose what we want to know about our customers.

“As a tester, you get to be your own teacher, if you will, and pick tests that make you want to learn. And structure tests that give you the knowledge you’re trying to gain,” said Benjamin Filip, Senior Manager of Data Sciences, MECLABS.

So what steps do we take if we want to be our own teacher?

While conducting interviews about the live test run at MarketingSherpa Email Summit 2014, I recently had the chance to discuss testing processes with Ben, as well as Lauren Pitchford, Optimization Manager, and Steve Beger, Senior Development Manager, also both of MECLABS. The three of them worked together with live test sponsor BlueHornet to plan, design and execute the A/B split test they validated in less than 24 hours.

Read on to learn what they had to share about the testing process, and what marketers can take away from this email live test. We’ll break down each step of the live test and help you apply it to your own testing efforts.


Step #1. Uncover gaps in customer insights and behavior

As Austin McCraw, Senior Director of Content Production, MECLABS, said at Email Summit, “We all have gaps in our customer theory. Which gap do we want to fill? What do we want to learn about our customer?”

What do you wish you knew about your customers? Do they prefer letter-style emails or design-heavy promotional emails? Do they prefer a certain day of the week to receive emails? Or time of day? Does one valuable incentive incite more engagement than three smaller incentives of the same combined value?

Think about what you know about your customers, and then think about what knowledge could help you better market to them and their needs and wants.


Step #2. Craft possible research questions and hypotheses

When forming research questions and hypotheses, Ben said, “You have to have some background info. A hypothesis is an educated guess, it’s not just completely out of the blue.”

Take a look at your past data to interpret what customers are doing in your emails or on your webpages.

Lauren wrote a great post on what makes a good hypothesis, so I won’t dive too deeply here. Basically, your hypothesis needs three parts:

  • Presumed problem
  • Proposed solution
  • Anticipated result


Step #3. Brainstorm ways to answer those questions

While brainstorming will start with you and your group, don’t stop there. At MECLABS, we use peer review sessions (PRS) to receive feedback on anything from test ideas and wireframes, to value proposition development and results analysis.

“As a scientist or a tester, you have a tendency to put blinders on and you test similar things or the same things over and over. You don’t see problems,” Ben said.

Having potential problems pointed out is certainly not what any marketer wants to hear, but it’s not a reason to skip this part of the process.

“That’s why some people don’t like to do PRS, but it’s better to find out earlier than to present it to [decision-makers] who stare at you blinking, thinking, ‘What?’” Lauren explained.

However, peer review is about more than discovering problems; it’s also about discovering great ideas you might otherwise miss.

“It’s very easy for us to fall into our own ideas. One thing for testers, there is the risk of thinking that something that is so important to you is the most important thing. It might bother you that this font is hard to read, but I don’t read anyway because I’m a math guy, so I just want to see the pretty pictures. So I’m going to sit there and optimize pictures all day long. That’s going to be my great idea. So unless you listen to other people, you’re not going to get all the great ideas,” Ben said.


Step #4. Decide the test plan order

It’s unlikely you’re going to poll an audience of hundreds of marketers to determine which test to run if you have multiple research questions you want to answer.

Instead, Lauren provided three areas to take into consideration when testing on a tight timeline: whether the ideas build upon each other, their potential revenue impact, and the IT and development resources they require.

“The first thing I look at is can anything build on each other, or will anything you’re going to learn in one test change how you’re going to test the next thing. If the second question is going to change how I test the first question, then those need to flip,” Lauren explained.

She added, “Then, you need to take in practical considerations of which one is more revenue-impacting. So a lot of times, you want to get the higher revenue gains first, especially if you’re trying to prove testing is worthwhile.”

Finally, keep development resources and time in mind.

“If you test a really high-impacting idea but it’s going to take you six months to develop, you should have other tests in there beforehand,” she said.
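To make Lauren’s three considerations concrete, here’s a minimal sketch of how they might be turned into a simple prioritization pass: run a test only after the tests it depends on, and among the tests that are ready, favor the best revenue impact per week of development. The `Test` structure, the scores and the example tests are all illustrative assumptions, not a MECLABS tool.

```python
# Sketch of Lauren's test-ordering heuristic: dependencies first,
# then revenue impact weighed against development time.
from dataclasses import dataclass, field

@dataclass
class Test:
    name: str
    revenue_impact: int  # rough 1-10 estimate of potential gain
    dev_weeks: int       # development time needed
    depends_on: list = field(default_factory=list)  # tests whose results change this one

def prioritize(tests):
    """Order tests so dependencies run first; among runnable tests,
    pick the best revenue impact per week of development."""
    ordered, placed = [], set()
    while len(ordered) < len(tests):
        ready = [t for t in tests
                 if t.name not in placed
                 and all(d in placed for d in t.depends_on)]
        best = max(ready, key=lambda t: t.revenue_impact / t.dev_weeks)
        ordered.append(best)
        placed.add(best.name)
    return ordered

tests = [
    Test("subject-line length", revenue_impact=4, dev_weeks=1),
    Test("redesigned template", revenue_impact=9, dev_weeks=24),
    Test("send-time", revenue_impact=6, dev_weeks=2,
         depends_on=["subject-line length"]),
]
for t in prioritize(tests):
    print(t.name)
```

Note how the high-impact redesign lands last: at 24 weeks of development, it’s exactly the kind of test Lauren suggests scheduling other tests ahead of.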


Step #5. Build out a test plan

While the MECLABS team had to build out all three test options, you can build them as needed.

According to Steve, the key to test execution success is planning.

“Plan ahead, plan early,” he said. “The later you let things go, the more hurdles you’re going to encounter later in the game. You’re not going to allow enough time to tackle those challenges and you’re going to have to take shortcuts. Shortcuts are only going to impact the end user’s experience, and are going to create data that will be a validity threat.”

Steve also suggested a quality assurance (QA) check of the webpages, emails, or whatever medium you’re testing. Don’t just check for obvious bugs, but also “ensure the user experience you envisioned is the same user experience that they actually experience. Because what you may see as something [that’s] very clear, somebody else may have anxiety or friction you may not have thought about from the front end.”

You also want to make sure that all the data is being collected properly.


You’re ready to go

After all the testing of the pages is complete and the data collected during QA has been deleted, you’re ready to launch your test. Just don’t forget to monitor your data to make sure things stay on track.
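Part of monitoring is checking whether the difference you’re seeing is real or just noise. Here’s a rough sketch of the kind of check you might run on an A/B email split: a standard two-proportion z-test, built only from Python’s math library. The conversion numbers are made up, and this is a generic statistical test, not the specific validation methodology MECLABS used.

```python
# Two-proportion z-test: is treatment B's conversion rate
# significantly different from control A's?
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z score, two-sided p-value) for the difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split: 200 conversions of 10,000 vs. 260 of 10,000
z, p = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests the lift is real
```

A caution in the same spirit as Steve’s advice on shortcuts: peeking at this number repeatedly and stopping the moment it dips below 0.05 inflates your false-positive rate, so decide your sample size and stopping rule before launch.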

No matter if you experience a win or see a loss, you should gain some insights into your customers and their behaviors by creating a research question and hypothesis around a gap in your customer theory. Happy testing!


You might also like

Online Testing: 3 test options, 3 possible discoveries, 1 live test from Email Summit 2014 [More from the blogs]

Lead Gen: 17% lift in lead capture by including more details in email [Email Summit 2014 live test] [Case study]

Digital Marketing: Stop ignoring data and start learning [More from the blogs]
