Product Management

How much data is enough?

It's hard to find a balance between 'quick and dirty' research and more time-consuming in-depth studies. Where do you draw the line to make product decisions?

  • The Team at JAM

    For a healthy gut feeling, invest in good prebiotics. In product, a gut feeling comes with practice. Here is your PM prebiotic regimen.

    Collect the right data.

    Depending on what you’re researching, one type of data may serve you better than another. Learn what the data is telling you. Take A/B testing the performance of a landing page: if you have high enough traffic, you can rely on quantitative data like the number of clicks. But for assessing how intuitive a feature is, it will be better to talk to the users.
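
    A quick way to check whether your traffic is "high enough" is a significance test on the click counts themselves. A minimal sketch of a two-proportion z-test in Python; the visitor and click numbers are invented for illustration:

```python
import math

def two_proportion_z_test(clicks_a, visitors_a, clicks_b, visitors_b):
    """Two-sided z-test: is variant B's click-through rate different from A's?"""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical A/B test: 2,000 visitors per landing-page variant
z, p = two_proportion_z_test(120, 2000, 165, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so the lift is unlikely to be noise
```

    With thin traffic the same lift would not reach significance, which is exactly when qualitative methods earn their keep.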

    Maintain balance.

    More often than not you’ll need both qualitative and quantitative data. But be sure you know how they interact. Three out of five people you interviewed might find your pin-to-top feature useless. But, if the numbers show 65% of app users pin daily, you know to take your interviewees' opinion with a grain of salt.
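
    That grain of salt can be quantified: with only five interviewees, the uncertainty around "3 out of 5" is enormous. A rough sketch using the Wilson score interval, which behaves sensibly for tiny samples (the numbers mirror the example above):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p_hat = successes / n
    denom = 1 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(3, 5)   # 3 of 5 interviewees found the feature useless
print(f"{lo:.0%} to {hi:.0%}")  # roughly 23% to 88% -- far too wide to act on alone
```

    The interval comfortably contains the 35% of app users who don't pin daily, so the interviews and the usage numbers don't actually contradict each other.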

    Set a cut-off point.

    Research, like editing or perfecting UX, can be an endless process. Before you start, decide how much time you will devote to research and how much data you will collect. Predetermine the number of customers to talk to. Use a calculator to establish the right sample size and ensure statistically significant results. Yes, you might need to refresh your high school stats for that. But hey, this time it’s actually for a better cause than getting a pat on the back from your math teacher.
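
    The sample-size calculation itself is one line of high school stats. A sketch for estimating a proportion (e.g. "what share of users would use this feature?") to a given margin of error, assuming the standard normal approximation:

```python
import math

def sample_size(margin_of_error=0.05, z=1.96, p=0.5):
    """Minimum respondents to estimate a proportion within +/- margin_of_error
    at 95% confidence (z = 1.96); p = 0.5 is the conservative worst case."""
    return math.ceil(z * z * p * (1 - p) / (margin_of_error ** 2))

print(sample_size())      # 385 respondents for +/-5%
print(sample_size(0.10))  # 97 respondents for +/-10%
```

    Online calculators do exactly this; the point is that halving the margin of error roughly quadruples the sample you need, so set the cut-off before you start recruiting.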

    Separate collection from analysis.

    These are two different processes. Analysing data mid-collection will introduce bias, for example confirmation bias: seeking out results that prove what you want to see.

    Initially, err on the side of having too much data rather than too little. And in case you didn’t have enough to learn yet, here is one more thing to add to your list: investigate how others use data to arrive at their decisions. Read case studies, and talk to other PMs (how about at the JAM afterparty?).

  • Eva, Product Manager at V&A

    For the product team at the Victoria & Albert Museum (V&A), this really depends on exactly what we're testing and why. For anything to do with our programme, we find it easiest to do ‘guerrilla’ research and speak to visitors just outside our office. For a project like Search, we recruited a number of users according to our target-audience segmentation to see whether our search results were organised in an easily understood way. For Collections Online, we needed users to validate that our categorisation made sense to a regular punter, not just our target audience, well before we started any dev work. For understanding and improving UX, we have found Hotjar really helpful. And for Search the Collections, we're running a five-question survey to help us identify our users and what they're after from the site.

  • Andrea, Freelance UX consultant at Eurostar

    User interviews are great for helping you make a call on what features to test. I often use insights from just one round of interviews as the starting point for a brainstorm, where we get down all our ideas for solving a user problem, then narrow them down into what ideas to test first, live on our site or app.

    For example: At the Guardian, we ran some user interviews to learn about what people who read the news find “relevant”. We learned that “relevance” meant a number of things, from recommendations, to editor’s picks, to the ability to control news alerts you receive, and much more. I summed up what we learned in a simple illustration to help the team keep it front of mind, and we used this as a starting point for a brainstorm on how we could make the Guardian more relevant.

    We narrowed our ideas down to our five favourites, which were rapidly prototyped and shown to users. Of these, three ideas showed promise, so we turned those three into live tests.

    I always try to ensure we test multiple ideas, each with a clear hypothesis and success metric. This helps us make a call: rather than testing just one idea and having to decide whether or not to progress it further, we can choose the best-performing idea of the bunch and throw the others away. Because we keep our tests lean, without too much code or intense design work, it’s not a big deal to test a few things at once and decisively throw away the losers.
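
    That decision rule (each test carries its own hypothesis and success metric, and only the best passing idea survives) can be sketched as follows; the idea names, metrics and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class LiveTest:
    idea: str
    observed: float  # measured value of the success metric
    target: float    # threshold stated in the hypothesis up front

def pick_winner(tests):
    """Keep only ideas that met their own success metric, then take the best."""
    passing = [t for t in tests if t.observed >= t.target]
    return max(passing, key=lambda t: t.observed, default=None)

# Hypothetical results from three lean tests (e.g. click-through rates)
tests = [
    LiveTest("editor's picks rail", observed=0.031, target=0.020),
    LiveTest("controllable alerts", observed=0.018, target=0.025),
    LiveTest("personal recommendations", observed=0.044, target=0.030),
]
winner = pick_winner(tests)
print(winner.idea)  # personal recommendations; the rest are thrown away
```

    Writing the target down before the test runs is what makes the call decisive: an idea either met its own bar or it didn't.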

    I use this cycle with my product teams often, to ensure we’re taking action on what we’re learning rather than getting bogged down in indecision. User research sessions always result in a decision about what to prototype and test; the prototypes and tests are always as lean as we can make them, so that we can get them out there and make our ultimate decision.

