AI PM TEMPLATES

AI Customer Interview Template: How to Research Users of AI Products

By Institute of AI PM · 10 min read · Apr 18, 2026

TL;DR

Customer interviews for AI products require a different question set than traditional product research. You need to surface trust signals, understand how users calibrate AI reliability, identify the gap between stated behavior and actual behavior, and uncover the adoption blockers that users won't volunteer unprompted. This template covers the full interview guide for AI product research — from opening to close — with the specific AI questions that unlock the most valuable insights.

What Makes AI Research Different

Traditional product research assumes users can accurately describe what they do and what they want. AI research requires extra depth because users often misattribute AI failures ("the product is broken"), misunderstand AI capabilities ("I didn't know it could do that"), and have social desirability bias around AI use ("I checked everything it told me").

Trust calibration

Do users trust the AI appropriately — or do they over-trust (accept wrong outputs) or under-trust (verify everything unnecessarily)? This requires behavioral observation, not just self-report.

Actual vs. stated use

Users often describe how they think they should use AI, not how they actually do. Asking about a specific recent interaction ("tell me about the last time you used this feature") gets closer to the truth than "how do you use this?"

Adoption blockers

Users won't volunteer that they stopped using a feature because they didn't trust it, or because their manager doesn't trust AI outputs. You have to ask specifically about workflow integration failures and organizational context.

The AI Customer Interview Guide

1. Opening: context and role (5 min)

- Tell me about your role and what you use [product] for day-to-day.
- Before [product], how did you do [the task the AI helps with]?
- What made you decide to try the AI features?

2. Current usage patterns (10 min)

- Walk me through the last time you used [specific AI feature] — what were you trying to accomplish?
- How often do you use it in a typical week?
- Are there times you start to use it but decide not to — what makes you hesitate?

3. Trust and verification (10 min)

- When you get an AI output, what do you do next — do you use it directly, or do you review it first?
- Can you give me an example of a time the AI output wasn't quite right — how did you know, and what did you do?
- Are there certain types of outputs you trust more or less? Why?

4. Workflow integration (10 min)

- Is the AI part of your daily workflow, or do you use it occasionally? What would have to be true for you to use it more?
- Does anyone else on your team use this feature? How do they feel about it?
- Has anyone ever questioned or pushed back on work you did using the AI?

5. Limitations and unmet needs (10 min)

- Is there anything you've tried to use the AI for that didn't work well?
- If you could change one thing about how the AI works, what would it be?
- Are there tasks you still do manually that you wish the AI could help with — even if you know it can't do them yet?

Analysis Framework for AI Interview Findings

Trust calibration score

Rate each user as: over-trusting (accepts AI outputs without verification), appropriately calibrated (verifies high-stakes outputs, trusts routine ones), or under-trusting (verifies everything, low confidence in AI). Track the distribution across your user base and its correlation with usage and retention.
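To make the distribution step concrete, here is a minimal Python sketch of how it could be tallied once interviews are coded. The field names and sample records are illustrative assumptions, not data from any real study:

```python
from collections import Counter

# The three calibration buckets described above.
TRUST_LABELS = ("over_trusting", "calibrated", "under_trusting")

# Hypothetical coded interviews; "trust" holds the label assigned after each session.
interviews = [
    {"user": "u01", "trust": "calibrated"},
    {"user": "u02", "trust": "over_trusting"},
    {"user": "u03", "trust": "under_trusting"},
    {"user": "u04", "trust": "calibrated"},
]

def trust_distribution(sample):
    """Return the share of users in each trust calibration bucket."""
    counts = Counter(record["trust"] for record in sample)
    return {label: counts.get(label, 0) / len(sample) for label in TRUST_LABELS}

print(trust_distribution(interviews))
# {'over_trusting': 0.25, 'calibrated': 0.5, 'under_trusting': 0.25}
```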

Workflow integration depth

Score each user's AI integration: 0 = never uses it / tried once, 1 = uses occasionally for standalone tasks, 2 = part of regular workflow for 1-2 tasks, 3 = central to daily workflow. Correlate with retention and NPS. This is your most important adoption metric.
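As a sketch of the correlation step — assuming Python 3.10+ (for statistics.correlation) and made-up depth scores paired with a binary retention flag:

```python
from statistics import correlation  # available since Python 3.10

# Hypothetical per-user data: integration depth (0-3 rubric above) and a
# retention flag coded as 1 (retained) or 0 (churned).
depth_scores = [0, 1, 3, 2, 0, 3, 1, 2]
retained = [0, 1, 1, 1, 0, 1, 0, 1]

# Pearson correlation; with a binary variable this is the point-biserial
# correlation between integration depth and retention.
r = correlation(depth_scores, retained)
print(f"integration depth vs. retention: r = {r:.2f}")  # r = 0.81
```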

Failure memory

Note every AI failure the user mentions unprompted — these are the failures that have lodged in memory and are most likely to drive churn. A user who mentions 3 specific AI failures in an interview is at high churn risk even if they say they're generally satisfied.
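A simple flagging pass might look like the sketch below, with a hypothetical `unprompted_failures` count per interview; the threshold of three follows the heuristic above:

```python
# Hypothetical coded interviews; "unprompted_failures" counts the distinct
# AI failures the user raised without being asked.
interviews = [
    {"user": "u01", "unprompted_failures": 0},
    {"user": "u02", "unprompted_failures": 1},
    {"user": "u03", "unprompted_failures": 3},
]

def churn_risk_flags(sample, threshold=3):
    """Flag users whose unprompted failure mentions meet the risk threshold."""
    return {r["user"]: r["unprompted_failures"] >= threshold for r in sample}

print(churn_risk_flags(interviews))
# {'u01': False, 'u02': False, 'u03': True}
```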

Organizational trust context

Does the user work in an environment where AI use is encouraged, neutral, or discouraged by management or peers? Organizational context is often the largest predictor of adoption ceiling — a user in a skeptical team will never become a deep workflow user regardless of product quality.

AI Research Interview Mistakes

Asking about AI preferences instead of AI behavior

"How would you like the AI to behave?" produces hypothetical feature requests. "Tell me about the last time the AI gave you an output you didn't use" produces real behavioral insight. Always anchor questions in specific recent experiences, not hypothetical preferences.

Not probing the gap between "I check it" and how they actually check

Almost every user will say they verify AI outputs. When you probe "what specifically do you check?" and "give me an example," you often find that "checking" means a 3-second glance, not a rigorous review. Understanding how users actually verify is essential for designing appropriate trust and verification UX.

Only interviewing power users

Power users who engage deeply with AI features have self-selected for trust and capability. Their research data is valuable but can't be generalized to users who adopted more reluctantly or who stopped using the AI. Balance your interview sample so that at least 30% of participants are not active AI feature users.

Skipping the organizational context questions

For B2B AI products, the user's own attitude toward AI is often less predictive than their organization's attitude. A PM who doesn't ask about team and management context will miss the organizational blockers that are preventing deep adoption — blockers that product improvements alone can't address.

AI Interview Program Checklist

1. Interview setup

- Sample mix: active AI users (40%), light AI users (30%), non-AI-feature users (30%) (see the sketch after this list).
- Guide reviewed by team before first interview.
- Recording and transcription process in place.
- Analysis framework defined before interviews start (what will you code for?).
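A small helper, with illustrative segment names, that turns the 40/30/30 mix into per-segment interview counts:

```python
# The 40/30/30 sample mix from the checklist above; segment keys are assumptions.
MIX = {"active_ai_users": 0.40, "light_ai_users": 0.30, "non_ai_feature_users": 0.30}

def sample_targets(total, mix=MIX):
    """Convert mix shares into whole interview counts summing to `total`."""
    targets = {segment: round(total * share) for segment, share in mix.items()}
    # Give any rounding remainder to the active-user segment.
    targets["active_ai_users"] += total - sum(targets.values())
    return targets

print(sample_targets(20))
# {'active_ai_users': 8, 'light_ai_users': 6, 'non_ai_feature_users': 6}
```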

2. Interview execution

- Specific recent interactions elicited (not just general preferences).
- Trust calibration assessed behaviorally (specific verification behavior, not self-report).
- Organizational context covered.
- Failure memories surfaced through "tell me about a time" questions.

3. Synthesis and action

- Trust calibration distribution calculated across the sample.
- Workflow integration depth scored per user.
- Top failure memories cataloged.
- Organizational context patterns identified.
- Findings translated into at least 3 product decisions or hypotheses to test.

Build User Research Skills for AI in the Masterclass

Customer interviews, user research, and AI product discovery — all covered in the AI PM Masterclass. Taught by a Salesforce Sr. Director PM.