The Importance of Testing: How one test applied to two email sends resulted in different audience responses
At MarketingExperiments, sister company of MarketingSherpa and parent company of MECLABS, we believe in testing. Our Web clinics are a testament to this belief – every month we share research that is designed to help marketers do their jobs better.
This culture of testing encouraged me to run my own test.
First, I ran the test on a MarketingSherpa send. After the results of that test came back, I applied the same test to a MarketingExperiments newsletter.
By running virtually the same test twice for two different audiences, I not only learned more about the preferences of our readers, but also how incredibly important it is to test, even when you are sure you know what the results will be.
As the copy editor at MECLABS, I get to see all of the copy produced by both MarketingExperiments and MarketingSherpa. One of my responsibilities is overseeing the email newsletter sends. Every Monday, MarketingSherpa sends out a Best of the Week newsletter which features the most popular articles of the previous week and introduces the new book for the MarketingSherpa Book Giveaway.
The copy in these newsletters was extensive. Every article listed had a full summary, which I imagined would feel overwhelming to a reader opening this email on a Monday morning.
With the help of Selena Blue, Manager of Editorial Content, and Daniel Burstein, Director of Editorial Content, I decided to reduce the length of the Best of the Week newsletter and use an A/B split test to determine which format our readers preferred.
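For anyone curious about the mechanics of an A/B split test, here is a minimal sketch of how an email list might be randomly divided 50/50 between a Control and a Treatment. The subscriber list and group labels are purely illustrative and not drawn from our actual sending platform.

```python
import random

# Hypothetical subscriber list; in practice this would come from the email platform.
subscribers = ["reader1@example.com", "reader2@example.com",
               "reader3@example.com", "reader4@example.com"]

random.shuffle(subscribers)                  # randomize order so the split is unbiased
midpoint = len(subscribers) // 2
control_group = subscribers[:midpoint]       # receives the existing long-copy newsletter
treatment_group = subscribers[midpoint:]     # receives the shortened version

print(f"Control: {len(control_group)} readers, Treatment: {len(treatment_group)} readers")
```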
Below is a side-by-side of the Control and the Treatment.
After running this test for six weeks, we saw a consistent trend of higher total and unique clickthroughs for the shorter Treatment versions. My hypothesis held: less copy in the email meant higher engagement from readers.
The MarketingExperiments test
Based on the success of the MarketingSherpa test, I thought it would be interesting to run the same test on the MarketingExperiments Best of the Month newsletter, which is sent to readers on the first of every month. Similar to the Best of the Week send, the Best of the Month features the most popular articles of the month, recaps the latest Web clinic and describes the upcoming Web clinic. This was done with the help of Ken Bowen, Managing Editor, MarketingExperiments.
The Best of the Month send contains more information, so it is naturally longer than the Best of the Week email previously tested. It therefore seemed like a natural leap to assume that the shorter Treatment would once again earn higher clickthrough rates, since it appeared more reader-friendly.
As you can see from the August Best of the Month Control below, there was a lot of copy in these sends.
For the Treatment, we once again cut the article summaries. In addition to losing this copy, we also cut the recap of the previous month’s research and instead just had the upcoming month’s Web clinic description. We ran this A/B split test for three months.
Because of the results of the Best of the Week testing, I was sure I knew what the results of this test would be. If one audience responded positively to the reduced copy, certainly another audience would, too. The fact that we were virtually re-running the same test seemed to me to be just a formality.
Spoiler Alert: This is why testing is so imperative.
If I had applied the results from one test to all newsletter sends, I would have been doing our readers a disservice. Below are the results broken down by month.
Of the three months of testing, only one month validated. Although the Treatment had slightly higher clickthrough rates, the difference wasn't significant enough to validate consistently, as it had with the Best of the Week send. It appears that readers of the Best of the Month newsletter may find value in the summaries and extended copy of the Control version.
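To make the idea of a test "validating" more concrete, here is a minimal sketch of the kind of two-proportion significance check that underlies it, assuming simple click and send counts. The numbers are made up, and this is not necessarily the exact validation methodology MECLABS uses.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical clickthrough counts; a test "validates" when the difference
# between Control and Treatment is unlikely to be due to chance.
control_clicks, control_sends = 300, 10000      # 3.0% clickthrough (made-up numbers)
treatment_clicks, treatment_sends = 330, 10000  # 3.3% clickthrough (made-up numbers)

p1 = control_clicks / control_sends
p2 = treatment_clicks / treatment_sends
pooled = (control_clicks + treatment_clicks) / (control_sends + treatment_sends)

# Standard two-proportion z-test
se = sqrt(pooled * (1 - pooled) * (1 / control_sends + 1 / treatment_sends))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference validates at the 95% confidence level")
else:
    print("Difference does not validate; it could be due to chance")
```

With these illustrative numbers, a 3.3% vs. 3.0% clickthrough rate does not reach significance, which mirrors what we saw: a slightly higher Treatment clickthrough that still failed to validate.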
Testing allowed us to change the MarketingSherpa Best of the Week send to better serve our readers and also learn that the same changes made to the Best of the Month send did not make a statistically significant difference to MarketingExperiments’ readers.
Results
Testing is so important. I work for a company that values testing and is continually striving to best suit the needs of our readers. The editors and directors overseeing both MarketingSherpa and MarketingExperiments were very open to my experimenting with the format of a send that has been with the company much longer than I have.
And while the format change itself was important, so was the testing involved. It allowed me to really see how our readers digest the information I send them each week or month. Better understanding the patterns and behaviors of our audience only benefits me as a copy editor.
This meant that one audience preferred the reduced copy while the other seems to find value in more text. Without re-running the test, I would have blindly assumed that the MarketingExperiments audience had the same reading preferences as our MarketingSherpa audience.
Testing allows us to validate even what we are sure is true, and it humbles us when that turns out to be false.
You might also like
MarketingSherpa Summit 2016 — At the Bellagio in Las Vegas, February 22-24
A/B Testing: Ecommerce site’s 3,000 positive comments show why you can’t trust just one test
Testing and Optimization: Welcome send test results in 46% open rate for CNET