Product Manager: Spider-Man or Peter Parker?

Most people in the tech industry can attest that role definitions vary across industries and companies. This definitely holds true for product management. In smaller companies, it isn't unusual for a product manager to be an analyst, a UX designer, and a project manager rolled into one. In larger companies, however, the role of a product manager is more strictly defined. But regardless of how you define the role, one thing always lies firmly in the product manager's bucket: the metrics.

Analysts can certainly help dig out relevant data, but I strongly believe that a product manager should be just as familiar with the tools for data analysis. In any case, getting the data isn't the hard part; making sense of it is. There are two main points in the product development lifecycle where data comes into play:

  1. While deciding which feature to build, i.e., identifying the metrics the feature aims to move
  2. While conducting a post-mortem of sorts, to see if the feature achieved its objectives

But in my own experience, as the primary stakeholder of a feature, I tend to be biased towards its success. That's when I unconsciously start to mold the numbers into a narrative that fits my own agenda and ignore data that doesn't neatly fit that narrative. Fortunately, I've become hyper-aware of this over time, and I've come up with a few internal checks to make sure it doesn't happen again.

Some of these might be obvious to more experienced PMs, but they help me draw honest conclusions from data.

  1. Before the feature goes into development, identify the metrics the feature aims to impact and make a projection of what those numbers will be. This could be anything from increasing engagement by 10% to reducing the bounce rate by 5%.
  2. Put this down in writing and don't touch it until your feature is rolled out. This will help you test your hypothesis without any bias.
  3. Looking at the data immediately post-release is important to make sure that everything is working as it should (no major bugs, no obvious issues). However, don’t draw conclusions about the success or failure of the feature just yet.
  4. The feature needs to be used by a statistically significant number of users over a pre-determined window of time to obtain results at a given confidence level. I'm borrowing from how most A/B tests are conducted; I believe the same principles apply here (see the sketch after this list).
  5. Once you’re convinced of the accuracy of the data, compare it to your initial measurement and projections. Seek the truth and nothing else.
  6. If the numbers clearly indicate that your feature didn't work as expected, trust that the numbers are correct (assuming you did the due diligence in steps 1 to 5) and accept that you were wrong.
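
To make step 4 concrete, here is a minimal sketch of the kind of significance check I have in mind: a standard two-proportion z-test comparing an engagement rate before and after a release. The function and the numbers are hypothetical, purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(hits_before, n_before, hits_after, n_after):
    """Two-sided z-test: did the engagement rate really change post-release?"""
    rate_before = hits_before / n_before
    rate_after = hits_after / n_after
    # Pooled rate under the null hypothesis that nothing changed
    pooled = (hits_before + hits_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (rate_after - rate_before) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return rate_after - rate_before, p_value

# Hypothetical numbers: 4,000 of 50,000 users engaged before the release,
# 4,600 of 52,000 after. Is the lift real at the 95% confidence level?
lift, p = two_proportion_z_test(4_000, 50_000, 4_600, 52_000)
print(f"lift: {lift:+.2%}  p-value: {p:.4f}  significant: {p < 0.05}")
```

The point isn't the specific test; it's that the sample size, time window, and confidence level are decided before the feature ships, so the verdict in step 6 can't be negotiated after the fact.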

[Image: spidey senses]

I'll caveat this by saying that there might be situations where your data is completely off, and this is where your intuition and PM spidey senses come into play. Qualitative analysis is important, but don't let it overshadow what your data clearly shows. I'd also recommend reading this excellent blog post by John Vars, who talks about The Bipolar Nature of Product Management (I was hugely inspired by it).

Image credit: http://mercenarytrader.com/
