Assumptions vs. Experiments vs. Hypotheses

Dear Reader,

"Make sure you treat this as an experiment." "Our working hypothesis is that people want this." Are any of these familiar? Your team (or the entire organization) might regularly mix up the terms assumptions, experiments, and hypotheses, which creates confusion at best. Let's clarify what each of them means.

An Assumption is a statement of what we believe to be true about an idea, typically phrased in a format like "We believe…". Your assumptions usually center on an idea's feasibility, usability, viability, desirability, or ethicality.

An Experiment is a technique for testing your most critical but least proven assumptions, so you can collect reliable evidence about whether a specific assumption is valid. Your experiment technique needs to match the nature of your assumption instead of dogmatically defaulting to A/B tests.

A Hypothesis explicitly defines success for a given experiment and ties it back to the assumption. It describes the measurable change you expect to see through the chosen experiment technique, which means it has to be falsifiable. By incorporating your initial assumption, you stay focused instead of chasing opportunistic ideas.

There are countless formats, but a simple one is: Based on [evidence], we believe [idea] will encourage [target audience] to [change behavior = outcome]. Our confidence in this solution increases when we see [metric change] during [experiment].

Your experiments (and metrics) might change or expand as you test the idea from different angles.

Let's assemble the pieces: We're a European car marketplace looking to expand to the US, and we will use Private US Sellers of Vintage Premium Cars as a strategic wedge to break into this market. An AR-based car intake scanner is a feature idea that addresses people's need to get their cars vetted without searching for in-person experts.
The two most critical assumptions are "We believe car owners trust us to evaluate their cars digitally" and "We can automatically recognize 90% of a vintage car's details through a digital smartphone scan." One experiment to test the former is a Wizard of Oz MVP, in which human experts evaluate sent-in photos manually and deliver an evaluation back to the owners asynchronously. That brings us to this hypothesis: An AR scanner will encourage US vintage car owners to list their cars online without a physical inspection. Our confidence in this solution increases with an acceptance rate of 80% for our manually delivered photo-based evaluations.

Did you enjoy this one or have feedback? Do reply. It's motivating. I'm not a robot; I read and respond to every subscriber email I get (just ask around).

Thank you for Practicing Product,
Tim

PS: I messed up last week's link, so here we go again. Do you interview users? Do you have "no shows"? Fill out this short survey to learn more about a free productized solution to that.

As a Product Management Coach, I guide Product Teams to measure the progress of their evidence-informed decisions. I identify and share the patterns among better practices to connect the dots of Product Strategy, Product OKRs, and Product Discovery.
1 tip & 3 resources per week to improve your Strategy, OKRs, and Discovery practices in less than 5 minutes. Explore my new book on realprogressbook.com
Product Practice #397: 3 Things to Put into Your Next Strategy Document PUBLISHED Feb 27, 2026 READ ON HERBIG.CO Dear Reader, The most effective strategy document I've seen doesn't worry about looks or format. Whether it's a scrappy Google Doc or a fancy Miro template, what matters is the quality and cohesiveness of the information it contains. Make sure what you cover aligns with your company's expected standards to ensure stakeholder understanding and, consequently, buy-in. But make sure...
Hello dear reader, You ship features and get asked about KPIs — with no connection to success for users or the business. Your company's strategy is either too vague or missing entirely. The result: alibi progress instead of real impact. In my workshop "Strategic Execution Instead of Grinding Through KPIs — Developing & Measuring Product Strategy" on May 4 as part of the Product Owner Days 2026...
Product Practice #396: MECE: Double the Usefulness of Your Metrics Trees PUBLISHED Feb 19, 2026 READ ON HERBIG.CO Dear Reader, Many resources say your metrics trees need to be "MECE." But how do you do it? MECE stands for: Mutually Exclusive, Collectively Exhaustive. In the context of metrics trees, this means mapping the individual drivers of an overarching goal in a way that allows us to identify and improve domain-specific levers through selective focus, while creating holistic...