Metrics and Analytics: 4 questions every marketer should ask their data analysts


As a manager for the MECLABS data sciences division, I often work with marketers who understand the value of metrics but have a limited background in statistics.

And, from a data analyst’s perspective, this can have a profound impact on the accuracy of information being used to make some very important (and expensive) decisions.

In short, marketers are from Mars and data analysts are from Venus, but at the end of the day, our ability to communicate effectively through a complex web of math, marketing and money determines one shared outcome … success or failure in meeting the goals set before us.

So, in today’s MarketingExperiments Blog post, I wanted to cover four questions every marketer should ask their data analysts.

I should also add that the goal here is not alchemy …

There’s no need to turn a marketer into a data analyst or vice versa, but instead, let’s cover some very important ground before any testing begins.

 

Question #1. How are we using quality assurance to mitigate risk?     

It stands to reason you want to address data accuracy and appropriate metric interpretation before you begin testing.

And quality assurance (QA) is a key step in doing that.

Yet often a landing page only gets checked to make sure it works for the user, leaving the functionality of your tracking and metrics platform as an afterthought.

Now what if, instead, you thought of your tracking and metrics as indispensable to maximizing your ROI?

Because in most cases they are and here’s why…

A/B testing is an investment with an expected ROI and your tracking and metrics are the first line of risk assessment.

These are tools to help you understand (and even mitigate) some risks beforehand, so checking to make sure they are working before you start running A/B tests is vital.
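
To make that concrete, here is a minimal sketch, in Python, of the kind of automated pre-launch check a team might run. The URLs and the tracking marker below are hypothetical placeholders, not our actual platform setup.

    import requests

    # Hypothetical variant URLs for a split test, and the snippet your analytics platform injects
    VARIANT_URLS = [
        "https://example.com/landing?variant=control",
        "https://example.com/landing?variant=treatment",
    ]
    TRACKING_MARKER = "analytics.js"  # placeholder for your platform's tracking tag

    def qa_check(url):
        # Confirm the page loads and the tracking snippet is present before the test goes live
        response = requests.get(url, timeout=10)
        return response.status_code == 200, TRACKING_MARKER in response.text

    for url in VARIANT_URLS:
        loads, tracked = qa_check(url)
        print(f"{url}: loads={loads}, tracking present={tracked}")

A check this simple won’t catch every setup error, but it forces the conversation about what should be firing on each variant before any traffic is spent.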

For example, our team recently narrowly avoided a $30,000 mistake (the combined cost of research, development and implementation) because they discovered the reporting had been set up incorrectly before anything went live.

We were able to determine it was a reporting issue (usually can be fixed in retrospect) and not a tracking issue (extremely hard to fix in retrospect) because of the quality assurance measures that go into our tracking and metrics platform before we split test.

 

Question #2. How will results vary?

Would you want to make a multimillion-dollar decision knowing there is a good chance you could be off by a fairly large margin?

Probably not, and this is why understanding the variance between what your metrics platform reports and the true value is so important.

What is variance, you ask?

In the world of statistics, variance tells you how spread out your data is – how far individual values tend to fall from the average – but it gives you no sense of direction and is expressed in squared units.

Standard deviation is simply the square root of variance, which puts that spread back into the same units as your data, so it works as a practical reference point for how far a reported value is likely to sit from the true one.

Consequently, the further your data sits from the true value, the further your interpretation will be from what is actually happening.

Therefore, having a discussion with your data team about how much the reported values are likely to vary from the true values, and how that affects your results, will help your team make decisions driven by much more accurate data.
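
As a rough illustration (the numbers below are invented, not from any client data), Python’s built-in statistics module shows how the two measures relate:

    import statistics

    # Made-up daily conversion rates for one landing page
    daily_conversion_rates = [0.021, 0.024, 0.019, 0.031, 0.022, 0.027, 0.018]

    mean_rate = statistics.mean(daily_conversion_rates)
    variance = statistics.variance(daily_conversion_rates)  # spread, in squared units
    std_dev = statistics.stdev(daily_conversion_rates)      # square root of variance, same units as the data

    print(f"mean: {mean_rate:.4f}")
    print(f"variance: {variance:.8f}")
    print(f"standard deviation: {std_dev:.4f}")
    # The wider the spread, the less confident you can be that any single
    # reported number sits close to the true value.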

 

Question #3. Do we know how the metrics we use are calculated on our platform?   

Misinterpretation of metrics is epidemic in online testing – and it’s often preventable.

I say this because the accuracy of interpreting information is generally driven by an understanding of it.

Let’s use bounce rates for example …

A bounce rate is usually calculated by taking the number of people who only saw a specific page and dividing it by the total number of entrances to that page.

However, I have seen, albeit rarely, metrics platforms reporting a bounce rate based on the amount of time spent on the page.

These platforms pick a threshold, say 20 seconds, count any visit shorter than that as a bounce, and divide those visits by total entrances.
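
Here is a small sketch of both definitions side by side; the session records and field names are made up for illustration, not any particular platform’s schema:

    # Hypothetical session records for a single entry page
    sessions = [
        {"pages_viewed": 1, "seconds_on_page": 95},   # read one page thoroughly
        {"pages_viewed": 1, "seconds_on_page": 12},   # left almost immediately
        {"pages_viewed": 1, "seconds_on_page": 5},    # left almost immediately
        {"pages_viewed": 3, "seconds_on_page": 40},   # continued deeper into the site
        {"pages_viewed": 2, "seconds_on_page": 30},   # continued deeper into the site
    ]
    entrances = len(sessions)

    # Method 1: a bounce is a visit that saw only the entry page
    bounce_rate_by_pages = sum(1 for s in sessions if s["pages_viewed"] == 1) / entrances

    # Method 2 (rarer): a bounce is a visit shorter than a time threshold, e.g. 20 seconds
    THRESHOLD_SECONDS = 20
    bounce_rate_by_time = sum(1 for s in sessions if s["seconds_on_page"] < THRESHOLD_SECONDS) / entrances

    print(f"single-page definition:    {bounce_rate_by_pages:.0%}")  # 60%
    print(f"time-threshold definition: {bounce_rate_by_time:.0%}")   # 40%
    # Same traffic, two different "bounce rates" - which is why you need to know
    # which calculation your platform uses before you interpret the number.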

In short, how much you know about the way a metric is calculated determines how accurately you can interpret it.

So, try to make sure your marketing team and data analysts are on the same page when it comes to how your metrics software calculates and reports each metric, and what those numbers actually mean.

For example, if your metrics platform uses the first method, then a high bounce rate on an information or contact page might not be a bad sign. It could mean the page is delivering the content you hoped it would, and visitors don’t need to go anywhere else to answer their question.

 

Question #4. How can we avoid placing too much faith in one calculated metric?   

I often talk to people who use average time on page as a measure of engagement and they put far too much faith in it.

The problem is that, from a statistical standpoint, any average is highly susceptible to outside forces, because a single extreme outlier will skew the average value.

For example, let’s say I visit your landing page to watch a five-minute video demo of one of your new products.

And, as soon as I land on the page, my phone rings and it’s my grandmother wishing me a happy birthday.

I get up and leave my desk to catch up with my grandmother on the latest goings on in her life … and I remain on your site, idle, for 15 minutes.

So, after I hang up and return to your demo, I spend about 30 seconds perusing your site and then I leave.

Your platform would report a visit of 15 minutes and 30 seconds, which would make you assume I was there engaging with your content, when in fact I wasn’t watching any videos at all.

Not only would the value for my visit be inaccurate, but I would also skew the average time reported because my visit was three times longer than normal and I ultimately didn’t watch anything.
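
To put rough numbers on that story (these are invented values, purely for illustration), here is how one idle visit drags the average:

    import statistics

    # Four typical ~5-minute video views (in seconds), plus my 15:30 idle visit
    times_on_page = [280, 310, 295, 305, 930]

    print(f"mean time on page:   {statistics.mean(times_on_page):.0f} seconds")   # 424
    print(f"median time on page: {statistics.median(times_on_page):.0f} seconds") # 305
    # One outlier pulls the mean well above what engaged visitors actually did,
    # while the median barely moves - one reason not to lean on a single calculated metric.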

My whole point here is that placing too much faith in any of your metrics, especially calculated engagement metrics, can give you an artificial perception of true customer behavior.

 

The road to gaining customer insight is paved by great communication

What all four of these questions lead to is essentially one thing – building great communication.

It only takes a relatively small investment of time to get your marketing and data teams in sync before testing begins, and it can save you some costly and embarrassing misinterpretations in the future.

 

Related Resources:

A/B Testing: Example of a good hypothesis

Marketing Analytics: 6 simple steps for interpreting your data

Marketing Analytics: Why you need to hire an analyst

Marketing Analytics: Frequently asked questions about misunderstood and misinterpreted metrics

4 Comments
  1. charles says

    Great post. Can you elaborate on this statement a little?

    reporting issue (usually can be fixed in retrospect) and not a tracking issue (extremely hard to fix in retrospect)

    does that mean you can correct the report but not the data once processed? Thx!

    1. Kayla Cobb says

      Hey Charles,

      Here’s the answer from the author Ben Filip: “Yes, if the report is wrong, it is most likely because of human error, and you can re-run the report to fix it. If the data wasn’t tracked properly to begin with, then you can’t retroactively collect it again.” Please let me know if you have any other questions.

      Thanks,
      Kayla
