How it feels being Interviewed by AI



PUBLISHED

Feb 13, 2026

READ ON

HERBIG.CO

Dear Reader,

Last week, I invited you to join an AI-led concept test of a side project idea. Here's what participants shared with me about their experience:

"It was like getting interviewed by someone who is doing this one of the first times."

"I felt free to say what I wanted without 'hurting' the interviewers. I felt listened (as the AI was repeating my points). But sometimes the conversation was cut off weirdly."

"The AI often interrupted me when I was thinking. In addition, it often started to respond, realized I wasn't finished, and then took a completely different approach, forgetting what it had said before. After two or three interruptions, I naturally lost the desire to speak naturally and fell into a telephone agent's way of speaking."

So, what can you make of this? This is an n=3 sample, so it's not a definitive judgment of AI interviewers' capabilities. With that grain of salt, here's what I'm observing:

  1. The AI ignores basic principles, such as rotating the starting variant, which skews the feedback you'll get. Not helpful (and borderline harmful for your insights).
  2. It's seemingly easier for people to drop out of AI-led interviews because there's no friction. Closing a browser tab is less uncomfortable than leaving a room.
  3. Getting summaries with specific playback markers from the feedback shared brings insights alive - but you don't need AI-led interviews for that.

The main caveat I shared last week remains true: be intentional about where, and for what purposes, you insert AI into your discovery practices.

"Would users actually adopt it?" is a red flag in interview-style research, regardless of scale. Humans are terrible at predicting their own future behavior - especially when it comes to trading time or money for a new solution. Concept testing provides insights into fundamental customer sentiment toward a visual design or gaps in understanding. But it won't help you predict what will get adopted or bought. To get strong evidence about feature adoption or willingness-to-pay, you need to turn to behavioral methods, not attitudinal methods like interviews (no matter whether a human or an AI conducts them).

As a result, here's how I would summarize the current state of AI tooling in the context of Product Discovery:

Oh, and in case you're curious: Here's the live link to the (messy and work-in-progress) prototype I used for concept testing: Pour Over Diary - A Platform for Specialty Coffee Enthusiasts.

Thank you for Practicing Product,

Tim

Get my Book

⭐️⭐️⭐️⭐️⭐️

"What makes Real Progress particularly special to me is that it’s not just a source of inspiration or a collection of ideas, but a practical guide you can return to again and again – no matter what area you’re currently focusing on."

If you consume one thing this week, make it this...

Fixing Strategy

Russian Matryoshka dolls — each one nested within the next larger one. The outer doll is like the highest-level corporate strategy choice. Its importance is consistent with the doll’s size and position. It is the most important choice — and it sets the context for all the choices below. The next doll can’t be bigger than the first, or a different shape. The second doll, like the next level of strategy choices (let’s say BU strategy), must fit with (and reinforce) the first doll/corporate choice.

Who is Tim Herbig?

As a Product Management Coach, I guide Product Teams to measure the real progress of their evidence-informed decisions.

I focus on better practices to connect the dots of Product Strategy, Product OKRs, and Product Discovery.

Product Practice Newsletter

1 tip & 3 resources per week to improve your Strategy, OKRs, and Discovery practices in less than 5 minutes. Explore my new book on realprogressbook.com
