🛠️ Assumptions vs. Experiments vs. Hypotheses



READ ON

​HERBIG.CO​

PUBLISHED

May 3, 2024

READING TIME

3 min & 48 sec

​Dear Reader,​

“Make sure you treat this as an experiment.”

“Our working hypothesis is that people want this.”

Do any of these sound familiar? Your team (or the entire organization) might regularly mix up the terms assumption, experiment, and hypothesis, which creates confusion at best.

Let’s clarify what each of these means.

An Assumption is a statement about what we believe to be true about an idea, stated in a format like “We believe…”. Typically, your assumptions center on an idea’s feasibility, usability, viability, desirability, or ethicality.

An Experiment is a technique we use to test the most critical but least proven assumptions and collect reliable evidence about whether a specific assumption holds. Your experiment technique needs to match the nature of your assumption instead of dogmatically defaulting to A/B tests.

A Hypothesis explicitly defines success for a given experiment and ties it back to the assumption. It describes the measurable change you expect through the chosen experiment technique, which means it has to be falsifiable. By incorporating your initial assumption, you stay focused instead of chasing opportunistic ideas. There are countless formats, but a simple one is:

Based on [evidence].

We believe [idea] will encourage [target audience] to [change behavior = outcome].

Our confidence in this solution increases when we see [metric change] during [experiment].

Your experiments (and metrics) might change or expand as you test the idea from different angles.

Let’s assemble the pieces:

We’re a European car marketplace looking to expand to the US and will use Private US Sellers of Vintage Premium Cars as a strategic wedge to break into this market.

An AR-based car intake scanner is a feature idea that addresses the need for people to get their cars vetted without searching for in-person experts.

The two most critical assumptions are “We believe car owners trust us to evaluate their cars digitally” and “We can automatically recognize 90% of a vintage car’s details through a digital smartphone scan.”

One experiment to test the former is a Wizard of Oz MVP, which has human experts evaluate sent-in photos manually and deliver a verdict asynchronously back to the owners.

Which has us arrive at this hypothesis:

An AR scanner will encourage US vintage car owners to list their cars online without a physical inspection.

Our confidence in this solution increases with an acceptance rate of 80% for our manually delivered photo-based evaluations.
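If it helps to see the template’s slots as structured data, here’s a small illustrative Python sketch that assembles the pieces above into a hypothesis statement. The field names and the evidence value are invented for illustration; the rest paraphrases the example:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """Illustrative sketch of the newsletter's hypothesis template as a record.

    Field names are hypothetical; they mirror the template's slots:
    evidence, idea, audience, behavior change, metric change, experiment.
    """
    evidence: str
    idea: str
    audience: str
    behavior_change: str
    metric_change: str
    experiment: str

    def render(self) -> str:
        # Fill the template's slots into the three-sentence format.
        return (
            f"Based on {self.evidence}, "
            f"we believe {self.idea} will encourage {self.audience} "
            f"to {self.behavior_change}. "
            f"Our confidence in this solution increases when we see "
            f"{self.metric_change} during {self.experiment}."
        )


h = Hypothesis(
    # "evidence" is invented here; the example in the text doesn't name it.
    evidence="conversations with private US sellers of vintage premium cars",
    idea="an AR-based car intake scanner",
    audience="US vintage car owners",
    behavior_change="list their cars online without a physical inspection",
    metric_change="an 80% acceptance rate for our manually delivered "
                  "photo-based evaluations",
    experiment="the Wizard of Oz MVP",
)
print(h.render())
```

Writing the hypothesis this way makes the falsifiability check mechanical: if the `metric_change` slot doesn’t contain a number, the statement isn’t testable.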

HOW TO PUT THIS THEORY INTO PRACTICE

  • What are you assuming? Statements that include the words assume or believe but don't contain a number are not testable.
  • Look for the hard-to-scale option. If you're too worried about scaling an experiment, you've drifted too far toward committing to actually build the idea.
  • Hypotheses have to be falsifiable. And the only way to make this discussion objective is a metric.

Did you enjoy this one or have feedback? Do reply. It's motivating. I'm not a robot; I read and respond to every subscriber email I get (just ask around). If this newsletter isn't for you anymore, you can unsubscribe here.

Thank you for Practicing Product,

​Tim​

PS: I messed up last week's link, so here we go again. Do you Interview Users? Do you have “no shows”? Fill out this short survey to learn more about a free productized solution to that.

Content I found Practical This Week

How Monzo does assumption testing

It’s important to check that your respondents are the right persona. Are they the decision maker? Are they a first-time user or a second-time user? These sorts of breakdowns make sure that the person you’re talking to is the right persona and you can be precise with your analysis. It lets the researchers break down answers by new/returning mortgagers, i.e. whether they’re a newbie or an established player in the field in question. This not only leads to more insights, but sifting out the wrong people means you increase the validity and accuracy of your results.

A PM's Guide to Wireframes

The 3 key jobs of a product manager: Recognize the problem, Structure a solution, and Execute

As a PM, I need to make sure that the solution actually ships. I have to create a sense of urgency and a system that holds us accountable as a team. This means clarifying and documenting what needs to happen and when, dividing up all the needed workstreams amongst the team, and picking up the leftovers myself. I’ve had the fun of following up on bugs, checking dashboards to understand retention, writing marketing copy, or just bringing donuts to the launch event — all ways to learn (and test-drive) other disciplines. My mental image is “flowing to wherever the water is lowest” — whatever the rest of the team can’t cover, that’s what a PM needs to carry.

What did you think of this week's newsletter?

👎 Bad · 🤷‍♂️ Meh · 👍 Great

Who is Tim Herbig?

As a Product Management Coach, I guide Product Teams to measure the progress of their evidence-informed decisions.

I identify and share the patterns among better practices to connect the dots of Product Strategy, Product OKRs, and Product Discovery.

Enjoy the newsletter? Please forward it. It only takes 2 clicks. Coming up with this one took 2 hours.

Product Practice Newsletter

1 tip & 3 resources per week to improve your Strategy, OKRs, and Discovery practices in less than 5 minutes.
