AI Feature Request Intake Template: Triage Customer Requests at Scale

By Institute of AI PM · 11 min read · May 9, 2026

TL;DR

Most AI PMs are drowning in feature requests because customers, sales, and support all funnel ideas through Slack DMs. This intake template forces every request into the same eight fields. The result: comparable, triageable requests, fewer 'can we add AI to X' Slack pings, and a clear paper trail when you say no.

The Intake Form: Required Fields

Implement this as a Linear/Jira/Airtable form, a Notion database, or a typed Slack workflow. The form is non-optional — no field, no triage.

1. Problem Statement (one paragraph, customer-language)

Example: 'When I onboard a new client, I spend 90 minutes writing the kickoff plan from scratch even though every kickoff plan I've written looks 80% the same.' No solution language. No 'we should add AI to.'

2. Customer Segment

Pick one: SMB / Mid-market / Enterprise / Internal. List 2–3 named customers who reported it. Anonymous 'a customer asked' requests get bounced.

3. Frequency

Daily / weekly / monthly / once per workflow / one-time. Quantify: 'happens 4–6 times per week per AE.'

4. Current Workaround

What do they do today? 'Copy last quarter's plan and edit by hand.' If there is no workaround, the problem may not be real — investigate before triaging.

5. AI-vs-Rules Check

Why does this need AI rather than a deterministic rule, template, or workflow? Write the answer. If the answer is 'because AI is cool' — close the ticket.

6. MVP Sketch

What is the smallest thing that would solve 60% of the pain? '1-click generate kickoff plan from CRM data, user edits inline, ship.'

7. Success Metric

How will we know it worked? 'Time to complete kickoff plan drops from 90min → 20min for >50% of users in 30 days.' No metric, no shipping.

8. Risks & Failure Mode

If the AI is wrong, what is the cost? Reputation? Compliance? Money? 'Hallucinated client name in the kickoff plan = embarrassing but not catastrophic — accept with confidence indicator.'
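If you back the form with code (for example, behind a typed Slack workflow or an Airtable sync), the eight required fields can be sketched as a dataclass. The field names and the validation helper below are illustrative assumptions, not part of the template itself; the only rule they encode is the one stated above: no field, no triage.

```python
from dataclasses import dataclass, fields

# Hypothetical schema for the eight required intake fields.
@dataclass
class IntakeRequest:
    problem_statement: str       # 1. one paragraph, customer-language
    customer_segment: str        # 2. SMB / Mid-market / Enterprise / Internal
    named_customers: list        # 2. 2-3 named customers; empty = bounced
    frequency: str               # 3. daily / weekly / monthly / per-workflow / one-time
    current_workaround: str      # 4. what they do today
    ai_vs_rules_rationale: str   # 5. why AI beats a deterministic rule
    mvp_sketch: str              # 6. smallest thing solving 60% of the pain
    success_metric: str          # 7. measurable and time-boxed
    risk_of_wrong_answer: str    # 8. cost when the AI is wrong

def is_triageable(req: IntakeRequest) -> bool:
    """No field, no triage: every field must be non-empty."""
    return all(getattr(req, f.name) for f in fields(req))
```

A request with an empty `named_customers` list fails `is_triageable` the same way a missing field does, which is exactly the "anonymous requests get bounced" rule.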

Customer Segment Tags (Required)

Force the submitter to tag the segment. This is how you spot 'one loud customer' patterns before you build for them.

SMB (1–50 employees)

Often want AI to replace work entirely. Low patience for setup. Drop-off after 2 minutes of friction.

Mid-market (50–500)

Often want AI to augment a process they already have. Have an admin who will configure. More tolerant of approval flows.

Enterprise (500+)

Want AI inside their existing workflow with audit trails, RBAC, BYOK. Will not adopt without a champion-led pilot.

Internal users

Salespeople, CSMs, support — your own teammates. High volume, low willingness-to-pay. Use them to dogfood, not to validate demand.

Frequency & Workaround Capture

Multiple times per day

High-leverage automation candidate. Even small time savings compound. Default: priority A.

Multiple times per week

Standard automation candidate. Worth a 4–6 week build if MVP is scoped tightly. Default: priority B.

Monthly or quarterly

Often not worth automating — the cost of building and maintaining the feature exceeds the cost of doing the task manually. Default: priority C.

One-time / setup task

Almost never worth an AI feature. Often better as documentation or a service offering. Default: close with explanation.

Workaround exists and works

If the workaround is a 5-minute task, the bar for shipping is high. Quantify the time saved before scoping.

No workaround at all

Investigate. Either the problem is hypothetical, or the user gives up — both change the priority calculus.
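The defaults above are mechanical enough to encode. The sketch below is one possible interpretation, not the template's prescription: frequency sets the starting priority, a missing workaround triggers investigation, and a fast existing workaround demotes the priority one step (the one-step demotion is my assumption for "the bar for shipping is high").

```python
from typing import Optional

# Hypothetical mapping from the frequency table to default priorities.
DEFAULT_PRIORITY = {
    "multiple_per_day": "A",
    "multiple_per_week": "B",
    "monthly_or_quarterly": "C",
    "one_time": "close",
}

def default_priority(frequency: str, workaround_minutes: Optional[int]) -> str:
    """Return the default triage outcome for a request.

    workaround_minutes is None when no workaround exists at all,
    which triggers an investigation rather than a priority.
    """
    if workaround_minutes is None:
        return "investigate"  # problem may be hypothetical, or users give up
    priority = DEFAULT_PRIORITY.get(frequency, "C")
    # A workaround that already takes <= 5 minutes raises the bar:
    # demote one letter before scoping (assumed rule).
    if workaround_minutes <= 5 and priority == "A":
        priority = "B"
    elif workaround_minutes <= 5 and priority == "B":
        priority = "C"
    return priority
```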

The AI-vs-Rules Decision Gate

Half of incoming 'AI feature' requests are deterministic problems with an AI label slapped on. Run every request through this gate before it hits the roadmap.

Could a SQL query, regex, or rule engine solve this?

If yes — ship the rule. AI adds cost, latency, and unpredictability. Use AI when the input is unstructured or the rule has too many branches to maintain.

Is the input variable enough that you cannot pre-define the rules?

If yes — AI is a candidate. Examples: free-text customer messages, mixed-format documents, voice transcripts, code in unknown languages.

Can the user tolerate a wrong answer 5–10% of the time?

If no — either AI is the wrong tool, or you need a human-in-the-loop wrapper. Many AI features fail because the team underestimated the cost of being wrong.

Do you have the data to validate the AI is right?

If no — you cannot ship. Eval data is a precondition, not a follow-up. Build the eval set during scoping, not after launch.
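The four gate questions are sequential: failing an earlier one makes the later ones moot. A minimal sketch of the gate as ordered checks (function and return strings are illustrative, not from the template):

```python
def ai_vs_rules_gate(rule_solvable: bool,
                     input_unstructured: bool,
                     tolerates_errors: bool,
                     has_eval_data: bool) -> str:
    """Run a request through the four gate questions, in order."""
    if rule_solvable:
        return "ship the rule"   # AI adds cost, latency, unpredictability
    if not input_unstructured:
        return "ship the rule"   # rules are pre-definable after all
    if not tolerates_errors:
        return "wrong tool, or needs human-in-the-loop"
    if not has_eval_data:
        return "blocked: build the eval set during scoping"
    return "AI candidate"
```

Only a request that survives all four checks reaches the roadmap as an AI candidate.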

MVP Sketch Section (Required)

The submitter does not have to be right — they have to commit to a smallest version. This forces specificity and reveals scope creep early.

Where does it live?

In which surface? 'Inline button on the deal record page' beats 'somewhere in the app.'

What is the input?

'CRM deal data + last 5 emails with the customer.' Get specific about which fields and which records.

What is the output?

'A 200-word kickoff plan in markdown, editable inline.' If you cannot describe the output in one sentence, scope is too big.

What is the cheapest acceptable model?

'GPT-4o mini at $0.15/1M tokens, fallback to a frontier model only if eval pass rate <80%.' This rules out the 'just use the most expensive model' default.

What is explicitly out of scope for v1?

'No multi-language. No bulk action. No mobile.' This is the highest-leverage line in the form.

Success Metric & Triage Output

Every accepted intake produces a one-line decision. This is what closes the loop with the submitter and creates a triage history you can audit.

ACCEPTED — added to the next quarter's bet list

Required: success metric, customer segment, MVP scope, eval-set owner. Submitter is tagged as the design partner.

PARKED — valid problem, no current capacity

Logged in the parked queue with a re-evaluation date (next QBR). Submitter receives the timeline. No silent rejection.

REJECTED — does not need AI

Send the deterministic alternative (rule, template, automation). Loop in the team that owns that surface. Close with a one-paragraph rationale.

REJECTED — wrong segment / not enough demand

Most rejections live here. Cite the specific gate it failed (frequency too low, only one customer, no measurable success metric).

RETURNED — incomplete intake

Sent back to the submitter with the missing fields highlighted. No incomplete request gets triaged. This is the discipline that makes the system work.
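To make the triage history auditable, each of the five outcomes above can be captured as a one-line record. The shape below is a sketch under my own naming assumptions; it encodes two rules the text does state: every decision carries a rationale for the submitter, and a PARKED decision must carry a re-evaluation date (no silent rejection).

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical status codes mirroring the five triage outcomes above.
STATUSES = {"ACCEPTED", "PARKED", "REJECTED_NO_AI", "REJECTED_DEMAND", "RETURNED"}

@dataclass
class TriageDecision:
    request_id: str
    status: str
    rationale: str                         # one paragraph max, sent to the submitter
    reevaluate_on: Optional[date] = None   # required for PARKED (e.g. next QBR)

    def __post_init__(self):
        if self.status not in STATUSES:
            raise ValueError(f"unknown status: {self.status}")
        if self.status == "PARKED" and self.reevaluate_on is None:
            raise ValueError("PARKED requires a re-evaluation date")
```

Validating at construction time means an undated PARKED decision cannot even be logged, which is the discipline that prevents silent rejections.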

Stop Drowning in AI Feature Requests

Discovery, triage, and saying no with discipline are taught live in the AI PM Masterclass — by Ata Tahiroglu, Salesforce Sr. Director PM and former Apple Group PM.