Why Your 'Correct' Discovery Method Might Be Wrong

Dear Reader,

"Which experiment should we run next?" This question comes up in almost every Discovery coaching session I facilitate. Teams often focus on finding the methodologically perfect way to test their assumptions. But here's the thing: the most technically correct experiment isn't always the right one to run.

When choosing methods for Product Discovery, we often focus on what fits our research question or assumption best. Say you want to understand how users perceive features in your product. A diary study might be the perfect method—capturing real usage patterns over time. But what if it takes three months to get reliable insights? A series of well-structured interviews or shadowing sessions might get you 80% of the way there in just two weeks. Several factors determine your lead time to insight.
Here's another way to think about it: an A/B test might seem quick to start, taking just hours or days to implement. But depending on your traffic and conversion rates, it could take weeks or months to reach statistical significance. In contrast, while recruiting participants for qualitative interviews might take two weeks, you could have reliable insights within days of completing them.

Neither method is inherently better. What matters is the total time to reliable insight, not just how quickly you can get started. So, when picking your next Discovery priority, ask yourself how long each option will actually take to get you from today to a reliable insight.
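To make the A/B-test side of that comparison concrete, here is a minimal sketch of how long such a test might need to run. It uses the standard two-proportion sample-size formula; the baseline conversion rate, detectable lift, and daily traffic are purely illustrative assumptions, not numbers from the newsletter.

```python
from math import ceil, sqrt
from statistics import NormalDist

def days_to_significance(baseline, lift_abs, daily_visitors,
                         alpha=0.05, power=0.80):
    """Estimate how long a two-variant A/B test must run.

    Uses the standard two-proportion sample-size formula;
    traffic is assumed to be split evenly between variants.
    """
    p1, p2 = baseline, baseline + lift_abs
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    n_per_variant = ceil(
        (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        / (p2 - p1) ** 2
    )
    # Each variant receives half of the daily traffic.
    return n_per_variant, ceil(n_per_variant / (daily_visitors / 2))

# Illustrative scenario: 5% baseline conversion, hoping to detect
# a lift to 6%, with 1,000 visitors per day reaching the test.
n, days = days_to_significance(baseline=0.05, lift_abs=0.01,
                               daily_visitors=1000)
```

With these assumed numbers, the formula asks for roughly 8,000 visitors per variant, i.e. over two weeks of runtime before the test can even be read reliably — exactly the hidden lead time the paragraph above warns about.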
Here's a practical tip: when evaluating lead time, avoid abstract scoring systems. Instead, estimate the actual duration by adding up the time each phase will take, from setting up the study and recruiting participants (or waiting for traffic) through running it to analyzing the results.
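The tip above boils down to simple addition. As a sketch, here is what that estimate might look like for two methods; the phase names and day counts are illustrative assumptions only, not a prescribed breakdown.

```python
# Hypothetical lead-time estimates, in days, for two Discovery methods.
# Phases and numbers are illustrative assumptions, not recommendations.
ab_test = {"implement": 3, "reach significance": 30, "analyze": 1}
interviews = {"recruit": 10, "conduct": 5, "analyze": 3}

def lead_time(phases):
    """Total days from kickoff to a reliable insight."""
    return sum(phases.values())

for name, phases in [("A/B test", ab_test), ("Interviews", interviews)]:
    print(f"{name}: {lead_time(phases)} days to insight")
```

Laying the phases out like this makes the trade-off visible: the method that starts fastest (the A/B test) is not the one that delivers an insight soonest.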
This concrete timeline helps you make practical trade-offs between methods.

Remember: the goal isn't to achieve perfect certainty. It's to reduce uncertainty enough to make confident decisions about what to build next.

Did you enjoy the newsletter? Please forward it. It only takes two clicks. Creating this one took two hours.

Thank you for Practicing Product,
Tim

Good News! Some last tickets are available for my in-person Product Discovery workshop on March 10 in London (as part of the Mind the Product conference).
As a Product Management Coach, I guide Product Teams to measure the real progress of their evidence-informed decisions. I focus on better practices to connect the dots of Product Strategy, Product OKRs, and Product Discovery.
1 tip & 3 resources per week to improve your Strategy, OKRs, and Discovery practices in less than 5 minutes.