Assumptions vs. Experiments vs. Hypotheses

Dear Reader,

"Make sure you treat this as an experiment." "Our working hypothesis is that people want this." Do any of these sound familiar? Your team (or the entire organization) might regularly mix up the terms assumptions, experiments, and hypotheses, which creates confusion at best. Let's clarify what each of them means.

An Assumption is a statement about what we believe to be true about an idea, typically stated in a format like "We believe…". Your assumptions usually center on an idea's feasibility, usability, viability, desirability, or ethicality.

An Experiment is a technique for testing your most critical but least proven assumptions so that you can collect reliable evidence about whether a specific assumption is valid. Your experiment technique needs to match the nature of your assumption instead of dogmatically defaulting to A/B tests.

A Hypothesis explicitly defines success for a given experiment and ties it back to the assumption. It describes the measurable change you expect to see through the chosen experiment technique, which means it has to be falsifiable. By incorporating your initial assumption, you stay focused instead of chasing opportunistic ideas.

There are countless formats, but a simple one is:

Based on [evidence], we believe [idea] will encourage [target audience] to [change behavior = outcome]. Our confidence in this solution increases when we see [metric change] during [experiment].

Your experiments (and metrics) might change or expand as you test the idea from different angles.

Let's assemble the pieces: We're a European car marketplace looking to expand to the US, and we will use Private US Sellers of Vintage Premium Cars as a strategic wedge to break into this market. An AR-based car intake scanner is a feature idea that addresses the need for people to get their cars vetted without searching for in-person experts.

The two most critical assumptions are "We believe car owners trust us to evaluate their cars digitally" and "We can automatically recognize 90% of a vintage car's details through a digital smartphone scan."

One experiment to test the former is a Wizard of Oz MVP, in which human experts evaluate sent-in photos manually and deliver a prediction asynchronously back to the owners. That leads us to this hypothesis:

We believe an AR scanner will encourage US vintage car owners to list their cars online without a physical inspection. Our confidence in this solution increases with an acceptance rate of 80% for our manually delivered photo-based evaluations.

Did you enjoy this one or have feedback? Do reply. It's motivating. I'm not a robot; I read and respond to every subscriber email I get (just ask around).

Thank you for Practicing Product,
Tim

PS: I messed up last week's link, so here we go again. Do you interview users? Do you have "no shows"? Fill out this short survey to learn more about a free productized solution to that.