When good stats go bad

I was faced with an interesting situation this afternoon: I found a blog post critiquing some work I had overseen from a point of view that didn’t align with the goals we had, and it made the project look bad. The project was, in reality, a big success.

It’s an easy mistake to make. Heck, I’ve made it. We judge work based on the standards we have for it. We think we know the desired outcomes, and we make assumptions based on that.

But building a case study from external stats is a dangerous path, especially when you don’t know what the point of the endeavour was.

The figures are, I imagine, faultless. There’s no arguing the number of times words have appeared in a feed, or the number of status updates on a single platform. But that yardstick didn’t match the one we were using.

When reading a case study, some of the most important questions you can ask of it are: Was it written by, or with the help of, someone who was part of the project? Can the writer tell me the point of the project? Do they know the measures used to judge “success” in this instance?

A one-size-fits-all approach is tempting, but not always the right move when it comes to social strategy. Getting ‘heaps of mentions’ on Twitter, or ‘1,000 views on YouTube’, is not always the goal of the exercise. Something that looks great from the outside may not be achieving any of the objectives, or telling the whole story.

A Facebook page with 100,000 fans may look good on paper – but how many of those fans click the links, interact with the brand, or buy the product? How much did you spend in ads to get those fans? A Twitter account with 50,000 followers seems legendary – but are those followers in an active relationship with the account holder? Are they even real accounts? Is it the old follow-followback trick?
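To put some rough numbers on that, here’s a quick back-of-the-envelope sketch. The figures are entirely hypothetical (not from any real campaign), but they show how cost per engaged fan can tell a very different story than the headline fan count:

```python
# Hypothetical figures for illustration only - not drawn from any real campaign.
fans = 100_000          # the headline fan count
ad_spend = 20_000.00    # assumed total ad spend to acquire those fans
engaged_fans = 1_500    # assumed fans who actually click, comment, or buy

cost_per_fan = ad_spend / fans                  # what the vanity metric cost
cost_per_engaged_fan = ad_spend / engaged_fans  # what the meaningful metric cost
engagement_rate = engaged_fans / fans           # share of fans doing anything at all

print(f"Cost per fan: ${cost_per_fan:.2f}")                  # $0.20
print(f"Cost per engaged fan: ${cost_per_engaged_fan:.2f}")  # $13.33
print(f"Engagement rate: {engagement_rate:.1%}")             # 1.5%
```

Same page, same 100,000 fans; whether 20 cents or $13 is the number that matters depends entirely on what the project was actually trying to achieve.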

One of the issues with social media in New Zealand is that so few case studies are released (fewer still with in-depth strategic commentary) that any data that does surface gets picked apart until it loses meaning. But a fundamental mistake in reading too much into that data is assuming the project manager’s intention was to achieve “A”, then failing them for not doing so, ignoring that they may well have been aiming for “B” all along.

I learned a valuable lesson about my own assumptions today.

The next time you’re reading an analysis of a project, read it critically, asking yourself whether the author really has the authority to offer the context required. If you don’t, you could miss out on the real insights.

Seems basic, right?