How it feels being Interviewed by AI


PUBLISHED Feb 13, 2026 · READ ON HERBIG.CO

Dear Reader,

Last week, I invited you to join an AI-led concept test of a side project idea. Here's what participants shared with me about their experience:

"It was like getting interviewed by someone who is doing this one of the first times."

"I felt free to say what I wanted without 'hurting' the interviewers. I felt listened (as the AI was repeating my points). But sometimes the conversation was cut off weirdly."

"The AI often interrupted me when I was thinking. In addition, it often started to respond, realized I wasn't finished, and then took a completely different approach, forgetting what it had said before. After two or three interruptions, I naturally lost the desire to speak naturally and fell into a telephone agent's way of speaking."

So, what can you make of this? This is an n=3 sample, so it's not a definitive judgment of AI interviewers' capabilities. With that grain of salt, here's what I'm observing:

  1. The AI ignores basic principles, such as rotating the starting variant, which impacts the feedback you'll get. Not helpful (even borderline harmful for your insights).
  2. It's seemingly easier for people to drop out of AI-led interviews because there's no friction. Closing a browser tab is less uncomfortable than leaving a room.
  3. Getting summaries with specific playback markers from the feedback shared brings insights alive - but you don't need AI-led interviews for that.

The main caveat I shared last week remains true: be intentional about where, and for what purposes, you insert AI into your discovery practices:

"Would users actually adopt it?" is a red flag in interview-style research, regardless of scale. That's because humans suck at predicting their future behavior - especially when it comes to trading time or money for a new solution. Concept testing provides insights into fundamental customer sentiment toward a visual design or gaps in understanding. But it won't help you predict what will get adopted or bought. To get strong evidence about feature adoption or willingness-to-pay, you need to turn to behavioral methods, not attitudinal methods like interviews (whether a human or an AI conducts them).

As a result, here's how I would summarize the current state of AI tooling in the context of Product Discovery:

Oh, and in case you're curious: Here's the live link to the (messy and work-in-progress) prototype I used for concept testing: Pour Over Diary - A Platform for Specialty Coffee Enthusiasts.

Thank you for Practicing Product,

Tim

Get my Book

⭐️⭐️⭐️⭐️⭐️

"What makes Real Progress particularly special to me is that it’s not just a source of inspiration or a collection of ideas, but a practical guide you can return to again and again – no matter what area you’re currently focusing on."

If you consume one thing this week, make it this...

Fixing Strategy

Russian Matryoshka dolls — each one nested within the next larger one. The outer doll is like the highest-level corporate strategy choice. Its importance is consistent with the doll’s size and position. It is the most important choice — and it sets the context for all the choices below. The next doll can’t be bigger than the first, or a different shape. The second doll, as with the next level of strategy choices (let’s say BU strategy) must fit with (and reinforce) the first doll/corporate choice.

Who is Tim Herbig?

As a Product Management Coach, I guide Product Teams to measure the real progress of their evidence-informed decisions.

I focus on better practices to connect the dots of Product Strategy, Product OKRs, and Product Discovery.

Product Practice Newsletter

1 tip & 3 resources per week to improve your Strategy, OKRs, and Discovery practices in less than 5 minutes. Explore my new book on realprogressbook.com
