How it feels being Interviewed by AI

Dear Reader,

Last week, I invited you to join an AI-led concept test of a side project idea. Here's what participants shared with me about their experience:

"It was like getting interviewed by someone who is doing this one of the first times."

"I felt free to say what I wanted without 'hurting' the interviewers. I felt listened (as the AI was repeating my points). But sometimes the conversation was cut off weirdly."

"The AI often interrupted me when I was thinking. In addition, it often started to respond, realized I wasn't finished, and then took a completely different approach, forgetting what it had said before. After two or three interruptions, I naturally lost the desire to speak naturally and fell into a telephone agent's way of speaking."

So, what can you make of this? Obviously, this is an n=3 sample, so it's not a definitive judgment on AI interviewers' capabilities. With that grain of salt, here's what I'm observing:
The main caveat I shared last week remains true: be intentional about where, and for what purposes, you insert AI into your discovery practices. "Would users actually adopt it?" is a red-flag question in interview-style research, regardless of scale, because humans suck at predicting their future behavior, especially when it comes to trading time or money for a new solution.

Concept testing provides insights into fundamental customer sentiment toward a visual design, or surfaces gaps in understanding. But it won't help you predict what will get adopted or bought. To get strong evidence about feature adoption or willingness-to-pay, you need to turn to behavioral methods, not attitudinal methods like interviews (no matter whether a human or an AI conducts them).

With that in mind, here's how I would summarize the current state of AI tooling in the context of Product Discovery:

Oh, and in case you're curious: here's the live link to the (messy and work-in-progress) prototype I used for concept testing: Pour Over Diary - A Platform for Specialty Coffee Enthusiasts.

Thank you for Practicing Product,
Tim

As a Product Management Coach, I guide Product Teams to measure the real progress of their evidence-informed decisions. I focus on better practices to connect the dots of Product Strategy, Product OKRs, and Product Discovery.
1 tip & 3 resources per week to improve your Strategy, OKRs, and Discovery practices in less than 5 minutes. Explore my new book on realprogressbook.com