AI PM TEMPLATES

AI Customer Discovery Script Template: Questions to Validate AI Demand

By Institute of AI PM · 13 min read · May 7, 2026

TL;DR

AI customer discovery is uniquely deceptive. Customers say they want AI features they'd never use. Enthusiasm doesn't convert. Willingness-to-pay is hard to surface. This script template gives you the 20 questions AI PMs use to separate AI hype from real demand — including the AI-specific traps and the behavioral evidence that beats stated preferences.

Why AI Discovery Is Different

In normal discovery, customers tell you what they want and you build a smaller version. In AI discovery, customers tell you they want AI for everything — because AI is exciting in 2026 and saying yes feels forward-thinking. Most of that enthusiasm doesn't convert to usage or willingness-to-pay. The script needs to filter it out.

Behavior beats stated preference

"What do you currently do when X?" beats "Would you use AI for X?" Past behavior is the strongest predictor of future usage.

Specific stories beat hypotheticals

"Tell me about the last time you..." produces real signal. "Would you ever..." produces theatrical answers.

Willingness-to-pay needs concrete anchoring

"Would you pay $20/month for this?" produces less signal than "What would you stop using if this cost $20/month?"

Trust questions matter more in AI

Will users trust AI for this task? Often the deciding factor — and customers don't volunteer their distrust.

Discovery Script Structure (45 Minutes)

Phase 1 (10 min): Workflow context

What does the user actually do today? Walk through a recent specific instance. Establishes ground truth before AI is mentioned.

Phase 2 (10 min): Pain-and-cost mapping

Where does it hurt? How much time? How much money? What workarounds exist? Prioritize specifics over generalities.

Phase 3 (10 min): Solution tour

Show your concept. Watch reactions; capture exact words. Don't sell; observe.

Phase 4 (10 min): Trust and adoption probes

Ask about edge cases, failure modes, escalation paths. Surfaces hidden objections.

Phase 5 (5 min): Willingness-to-pay anchoring

Concrete prices, concrete tradeoffs, concrete next steps. End with a behavioral commitment if possible.

The 20 Questions Worth Asking

1. "Walk me through the last time you did X."

Specific past instance; ground truth.

2. "What did you use to do that?"

Surfaces real workflow, not idealized version.

3. "What was the worst part?"

Pain becomes specific.

4. "How long did it take? How much did that cost?"

Quantify the pain.

5. "What workarounds have you tried?"

Existing solutions reveal seriousness.

6. "If you could wave a wand, what would change?"

Surfaces ideal-state desires.

7. "Have you tried any AI tools for this?"

Existing AI usage reveals openness.

8. "What worried you about using AI for this?"

Surfaces trust concerns explicitly.

9. "What would the AI need to get right?"

Defines the quality bar.

10. "What would the AI getting wrong cost you?"

Quantifies risk; clarifies trust requirements.

11. "Can you imagine ever fully trusting AI for this?"

Surfaces the ceiling on trust — how far adoption could ever go.

12. "Who would need to approve using this?"

Surfaces buying committee.

13. "What would make this a no for your company?"

Surfaces hidden disqualifiers.

14. "Show me how you'd use this if it existed today."

Behavioral. Watch what they actually do.

15. "What would you stop doing if this worked?"

Distinguishes replacement value from nice-to-have additive value.

16. "If this cost $X/month, what would you stop using?"

Anchored willingness-to-pay.

17. "Who else have you talked to about this problem?"

Indicates how much they actually care.

18. "Would you be willing to be a beta user?"

Behavioral commitment beats stated interest.

19. "Would you introduce me to two peers facing this?"

Real interest is shareable interest.

20. "What would have to be true for you to buy in 90 days?"

Closes on concrete conditions.

Run Discovery Like a Senior PM

The AI PM Masterclass walks through real discovery sessions and synthesizes findings — taught by a Salesforce Sr. Director PM.

Reading the Room — Behavioral Signals

Strong signals

User volunteers a specific past pain. Asks about timeline. Names budget. Offers introductions. Wants to be a beta user. These are real.

Weak signals

"Cool, sounds great." Generic excitement. Hypothetical phrasing. No commitment. Common; mostly noise.

Negative signals worth heeding

Energy drops when you describe the solution. Asks about competitors instead of features. Dodges willingness-to-pay questions. Listen.

Behavioral micro-tests

"Want me to send the prototype link?" — do they click? "Should we set up a 30-min beta intro?" — do they accept the calendar invite?

Discovery Anti-Patterns

Pitching instead of asking

Thirty minutes describing your idea is thirty minutes in which you learned nothing. Discovery is asking, not telling.

Leading questions

"Don't you think AI would be great for this?" produces useless yes answers. Open-ended only.

Counting enthusiasm as signal

"That sounds amazing" is the cheapest sentence in English. Discount accordingly.

Skipping behavioral commitment asks

Without a real ask at the end, you walk away with vibes, not signal. Always ask for something concrete.

Not synthesizing across calls

Single calls produce noise; patterns across 5-10 calls produce signal. Plan for the synthesis time.
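The pattern-over-noise rule can be sketched as a simple tally. This is a minimal illustration, not a prescribed tool: it assumes each call's notes have already been coded into signal tags, and the tag names and threshold here are made up for the example.

```python
from collections import Counter

# Hypothetical coded notes: one set of signal tags per discovery call.
# The tag taxonomy is an assumption -- use whatever codes your team applies.
calls = [
    {"specific_past_pain", "asked_timeline", "beta_yes"},
    {"generic_excitement"},
    {"specific_past_pain", "named_budget", "beta_yes"},
    {"generic_excitement", "dodged_pricing"},
    {"specific_past_pain", "offered_intro"},
]

# Count how many calls each signal appeared in.
tally = Counter(tag for call in calls for tag in call)

# Example rule: a signal seen in at least half the calls is a pattern, not noise.
threshold = len(calls) / 2
patterns = sorted(tag for tag, n in tally.items() if n >= threshold)
print(patterns)  # -> ['specific_past_pain']
```

The point is not the code but the discipline: no single call's enthusiasm (or skepticism) counts until it recurs across the set.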

Validate AI Demand With Discipline

The Masterclass walks through customer discovery practice with real transcripts and synthesis exercises — taught by a Salesforce Sr. Director PM.