Hot Tips is a collection of candid advice by and for product people.
Found a great new way to build your roadmap? Got an awesome design or research tool you can’t live without? Unearthed the holy grail of prioritisation techniques?
Tell the JAM community! Sharing a Hot Tip is the best, fastest way to pay it forward to 3,000+ makers from all over Europe. Your daily grind might be their ‘aha moment’!
Think of it as a precious piece of advice you wish you had received when you started building products. It’s a short snippet of wisdom that helps you do things differently. With Hot Tips, we hope to show there’s no ‘one best way’ — and that’s ok!
📖 Be as open as you can. Share your insider knowledge. Something people won’t have come across before. A Hot Tip reveals how you do things.
🎨 Show, don’t (just) tell! Talking about your roadmapping process? How about including a screenshot of the tool you use? There’s nothing like seeing your ‘behind-the-scenes’.
💌 Keep it short and personal. Aim for 200 words max, and word it like you’re helping a friend out.
Every week, we’ll curate the best Hot Tips and share them with the community.
It's hard to find a balance between 'quick and dirty' research and more time-consuming in-depth studies. Where do you draw the line to make product decisions?
For a healthy gut, invest in good prebiotics. A healthy gut feeling in product, though, comes with practice. Here is your PM prebiotic regimen.
Depending on what you’re researching, one type of data might be better than another. Learn what the data is telling you. For example, when A/B testing the performance of a landing page, if you have high enough traffic, you can rely on quantitative data like the number of clicks. But for assessing the intuitiveness of a feature, it will be better to talk to the users.
More often than not you’ll need both qualitative and quantitative data. But be sure you know how they interact. Three out of five people you interviewed might find your pin-to-top feature useless. But if the numbers show 65% of app users pin daily, you know to take your interviewees’ opinion with a grain of salt.
Research, like editing or perfecting UX, can be an endless process. Before you start, decide how much time you will devote to research, and how much data you will collect. Predetermine the number of customers to talk to. Use a calculator to establish the right sample size and ensure statistically significant results. Yes, you might need to refresh your high school stats for that. But hey, this time it’s actually for a better cause than getting a pat on the back from your math teacher.
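If you’d rather skip the online calculator, the classic sample-size formula behind those tools is easy to run yourself. A minimal sketch in Python, using the standard formula for estimating a proportion (the function name and defaults here are illustrative, not from any particular tool):

```python
from math import ceil

def sample_size(confidence_z=1.96, margin_of_error=0.05, proportion=0.5):
    """Minimum number of respondents needed to estimate a proportion.

    Standard formula: n = z^2 * p * (1 - p) / e^2
    - confidence_z: z-score for the confidence level (1.96 ~ 95%)
    - margin_of_error: how far off you can tolerate being (0.05 = ±5%)
    - proportion: expected share; 0.5 is the most conservative choice
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return ceil(n)

# 95% confidence, ±5% margin of error:
print(sample_size())  # 385
```

Loosening the margin of error shrinks the required sample fast (±10% needs only 97 respondents), which is why ‘quick and dirty’ research with a handful of users can still be honest about what it can and can’t tell you.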
Collecting data and analysing it are two different processes. Analysing data mid-collection will introduce bias — for example, confirmation bias, where you seek out results that prove what you want to see.
Initially, err on the side of having too much data rather than too little. And, in case you didn’t yet have enough to learn, here is another thing to add to your list: investigate how others use data to arrive at their decisions. Read case studies, and talk to other PMs (how about at the JAM afterparty?).
For the product team at the Victoria &amp; Albert Museum (V&amp;A), this really depends on what we’re testing and what for, exactly. For anything to do with our programme, we find it easiest to do ‘guerrilla’ research and speak to visitors just outside our office. For a project like Search, we recruited a number of users according to our target audience segmentation to see if our search results had been organised in an easily understood way. For Collections online, we needed users to help us validate whether our categorisation made sense to a regular punter, not just our target audience, well before we started any dev work. For understanding and improving UX, we have found Hotjar really helpful. For Search the Collections, we’re running a five-question survey to help us identify our users and what they’re after from the site.
User interviews are great for helping you make a call on what features to test. I often use insights from just one round of interviews as the starting point for a brainstorm, where we get down all our ideas for solving a user problem, then narrow them down into what ideas to test first, live on our site or app.
For example: At the Guardian, we ran some user interviews to learn about what people who read the news find “relevant”. We learned that “relevance” meant a number of things, from recommendations, to editor’s picks, to the ability to control news alerts you receive, and much more. I summed up what we learned in a simple illustration to help the team keep it front of mind, and we used this as a starting point for a brainstorm on how we could make the Guardian more relevant.
We narrowed our ideas down to our five favourites, which were rapidly prototyped and shown to users. Of these, three ideas showed promise, so we turned those three into live tests.
I always try to ensure we test multiple ideas, each with a clear hypothesis and success metric. This helps us make a call - rather than testing just one idea and having to decide whether or not to progress it further, we can choose the best performing idea of the bunch and throw the others away. The fact that we’re keeping our tests lean, without too much code or intense design work, means it’s not a big deal to test a few things at once, and decisively throw away the losers.
I use this cycle with my product teams often, to ensure we’re taking action on what we’re learning rather than getting bogged down in indecision. User research sessions always result in a decision about what to prototype and test; the prototypes and tests are always as lean as we can make them, so that we can get them out there and make our ultimate decision.