How to Use Claude and ChatGPT as Your AI PM Study Partner
TL;DR
Most PMs use LLMs as smarter Google. The ones who level up fastest use them as a 24/7 study partner: explainer of papers, mock interviewer, code-reading buddy, flashcard machine, and adversarial critic. Below are 12 copy-pasteable prompts that work today on Claude and ChatGPT, plus the meta-rules for getting useful answers instead of confident fluff.
The Five Rules That Make This Work
Vague prompts get vague answers. Apply these rules to every prompt below, and to the ones you write yourself; a short sketch after the fifth rule shows all five applied in one API call.
Set the persona
Always tell the model who it's playing (senior AI PM hiring manager, skeptical staff engineer, hostile CFO). Specificity here drives most of the quality difference.
Set your level
'I'm an AI PM, technical but not an ML researcher' is a different conversation than 'explain like I'm five.' Calibrate or you'll get the wrong altitude.
Demand structure
Ask for tables, numbered lists, or specific output formats. 'Output as TSV ready for Anki' is far better than 'make me flashcards.'
Ask for the hard part
Models default to safe, balanced answers. Say 'don't soften your feedback' or 'steelman the opposite position' to get past the diplomatic crouch.
Verify when it matters
LLMs hallucinate confidently — especially on paper details, benchmark numbers, and dates. For anything you'd cite in a meeting, double-check the primary source.
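To make this concrete, here's how the five rules land in a single API call. This is a minimal sketch using the Anthropic Python SDK; the model name and the eval-plan placeholder are illustrative assumptions, not a prescribed setup.

```python
# All five rules in one request, via the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: any current Claude model works here
    max_tokens=1024,
    # Rule 1: set the persona.
    system="You are a skeptical staff engineer who has shipped three production LLM products.",
    messages=[{
        "role": "user",
        "content": (
            "I'm an AI PM, technical but not an ML researcher. "   # Rule 2: set your level
            "Review the eval plan below. Output a table with columns: "
            "risk | severity | concrete fix. "                     # Rule 3: demand structure
            "Don't soften your feedback.\n\n"                      # Rule 4: ask for the hard part
            "[paste eval plan]"
        ),
    }],
)
print(response.content[0].text)
# Rule 5: verify any number or citation in the reply before repeating it in a meeting.
```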
Prompts for Reading and Writing
Explain a paper in three layers
When: First read of any ML/AI paper. Use before opening the PDF.
I'm an AI Product Manager — technical but not an ML researcher. Explain the paper [TITLE / arXiv link] to me in three layers, in this order: (1) one-paragraph summary a smart non-technical person could repeat at dinner; (2) the core technical contribution in 5 bullets, including the math when it's load-bearing; (3) three product implications for AI teams shipping LLM features in 2026. End with two questions I should be able to answer about this paper but probably can't yet — those are my study targets.
Critique my PRD ruthlessly
When: Before sharing any AI-feature PRD with engineering.
You are a skeptical staff engineer who has shipped three production LLM products. Read this PRD and tell me everything wrong with it. Specifically: (1) where I'm hand-waving over technical decisions, (2) what evals I'm missing, (3) what failure modes will show up during rollout, (4) what a smart adversarial user could do to break this, (5) what's underspecified that will bite us in week three. Be specific and use line numbers. PRD below:

[paste PRD]
Prompts for Interviews and Pressure-Testing
Mock AI PM technical screen
When: Day before a phone screen at an AI-first company.
Act as a senior AI PM hiring manager at a frontier lab. Give me a 30-minute mock technical screen. Ask one question at a time. After each answer, rate it 1–5 on substance and 1–5 on communication, give specific feedback, then move to the next question. Cover: model selection tradeoffs, eval design, an open-ended product case, and one curveball on inference economics. Don't soften your feedback.
Steelman the opposite position
When: You've made a model/architecture decision and want to pressure-test it.
I've decided to [decision, e.g., 'use RAG instead of fine-tuning for our customer support bot']. Steelman the opposite position. Make the strongest possible case for [opposite decision], assuming a competent, well-resourced team. Include: when the opposite choice is actually the right call, three pieces of evidence I'm probably ignoring, and two questions I should answer before sticking with my original choice.
Roleplay a hostile stakeholder
When: Practice before any contentious review.
Roleplay [stakeholder type, e.g., 'a skeptical CFO who thinks AI features are vendor lock-in']. I'm pitching [feature]. Push back with the toughest objections this person would actually raise. After we go three rounds, drop the roleplay and tell me which of my responses was weakest and how to fix it.
Get the Full AI PM Prompt Library
Masterclass students get 60+ tested prompts plus weekly office hours where a Salesforce Sr. Director PM critiques your real prompts on real work. The prompt is the leverage; we sharpen yours.
Prompts for Code Reading and Eval Design
Read this code and tell me what it does
When: Anytime you need to understand a repo without bothering an engineer.
I'm an AI PM who reads code but doesn't write it daily. Walk me through [paste file or function]. Tell me: what this code does in plain English, why it's structured this way, what the non-obvious decisions are, and what would break if I changed [specific line/section]. End with two things a PM should ask the engineer who wrote this.
Eval brainstorm for a feature
When: Right before writing the eval section of a PRD.
I'm shipping [feature, e.g., 'an AI assistant that summarizes customer support tickets']. Brainstorm 30 evaluation cases — 20 normal-flow inputs and 10 adversarial or edge cases. For each, give me: input, expected behavior, how I'd score it (binary, scale, LLM-as-judge), and what failure on this case would tell me about the model. Group them by category at the end.
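Once the model hands back the 30 cases, they're most useful as structured data you can actually run. Here's a minimal sketch of that shape in Python; `EvalCase`, `score`, and the judge callback are hypothetical names for illustration, not an existing library.

```python
from dataclasses import dataclass
from typing import Callable, Literal

@dataclass
class EvalCase:
    input: str
    expected: str    # expected behavior, in plain English
    scoring: Literal["binary", "scale", "llm_judge"]
    category: str    # e.g. "normal-flow" or "adversarial"

def score(case: EvalCase, output: str,
          judge: Callable[[str, str], float]) -> float:
    """Return a 0-1 score for one model output against one case."""
    if case.scoring == "binary":
        return 1.0 if case.expected.lower() in output.lower() else 0.0
    # "scale" and "llm_judge" both delegate to a judge model that
    # compares expected behavior to actual output and returns 0-1.
    return judge(case.expected, output)

cases = [
    EvalCase("Ticket: customer cannot reset password after 3 tries.",
             "summary mentions password reset", "binary", "normal-flow"),
    EvalCase("Ticket: 10,000 tokens of pasted log spam.",
             "degrades gracefully, invents no details", "llm_judge", "adversarial"),
]
```

The point is less the code than the discipline: every case the model brainstorms should name how it's scored before anything ships.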
Compare two papers/techniques
When: Choosing between two architectures, methods, or tools.
Compare [Technique A] and [Technique B] across these dimensions: (1) what each is best at, (2) cost and latency, (3) where each fails, (4) how I'd evaluate which is better for my use case, (5) the consensus view in 2026. Output as a table with these dimensions as rows. End with a one-sentence recommendation tree: 'Use A if X. Use B if Y. Use both if Z.'
Prompts for Daily Practice
Generate flashcards from notes
When: End of any study session. Pipes directly into Anki.
Convert these notes into atomic Anki flashcards. Rules: one fact per card, question on the front, concise answer on the back, no compound questions, prefer 'why' and 'when' framings over 'what is'. Output as a tab-separated list (Front\tBack) ready to paste into Anki. Notes:

[paste notes]
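Anki imports plain text files with a tab separator (File > Import), so the model's output needs almost no glue. A minimal sketch, assuming you've pasted the model's reply into `tsv_output`; the sample cards are illustrative:

```python
# Save the model's TSV reply as a file Anki can import directly.
tsv_output = """What tradeoff does RAG make versus fine-tuning?\tFreshness and lower training cost, at the price of retrieval latency and plumbing
When does LLM-as-judge beat binary scoring?\tWhen quality is graded rather than exact-match"""

with open("flashcards.txt", "w", encoding="utf-8") as f:
    for line in tsv_output.strip().splitlines():
        if "\t" in line:  # drop any preamble lines the model added
            f.write(line + "\n")
```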
Daily AI news briefing
When: Morning standup prep. Use ChatGPT with web search or Claude with the web search tool enabled.
Give me a 5-minute briefing on the most important AI/ML news from the last 24 hours, written for an AI PM. Focus on: model releases with real benchmarks, eval results that change the landscape, infrastructure shifts, and serious safety/policy moves. Skip funding announcements unless the round changes a competitive dynamic. End with the one thing I should bring up in standup.
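If you'd rather script this than open a chat tab, Claude's API exposes a web search tool. A sketch follows; the tool type string and model name follow Anthropic's docs as of this writing, and both should be treated as assumptions to re-check against the current docs.

```python
# The same briefing via the API, with Anthropic's web search tool attached.
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: swap in your available model
    max_tokens=2048,
    # Assumption: tool type string per Anthropic's web search docs at time of writing.
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 5}],
    messages=[{"role": "user", "content": (
        "Give me a 5-minute briefing on the most important AI/ML news "
        "from the last 24 hours, written for an AI PM."
    )}],
)
# The reply interleaves search-result blocks with text; keep only the text.
print("".join(block.text for block in response.content if block.type == "text"))
```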
Explain it to my CEO
When: Before any executive briefing on an AI feature or risk.
I need to explain [topic] to a non-technical CEO in 90 seconds. Write three drafts: (1) the safe, conservative version, (2) the version that emphasizes opportunity, (3) the version that emphasizes risk. Each must be under 200 words, use no jargon, and end with one specific decision the CEO should make. Then tell me which draft you'd use and why.
Find my knowledge gaps
When: Quarterly self-audit. Honest answers required.
Act as a senior AI PM coach. Ask me 12 questions across the AI PM technical surface area — transformers, training, RAG, evals, inference, agents, safety. After each answer, score me 1–5 and note what a stronger answer would have included. At the end, give me a ranked list of my three biggest gaps and one specific resource for each.
Claude vs ChatGPT in 2026: Claude tends to be stronger on long-document reading and nuanced critique; ChatGPT is faster on web-grounded tasks and image inputs. For most prompts above, run them through both and keep the better answer — that's a 30-second habit that pays off forever.
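If you want the run-it-through-both habit as a script instead of two browser tabs, here's a minimal sketch. It assumes both SDKs are installed with API keys in the environment; the model names are placeholders for whatever you have access to.

```python
# Send one prompt to both Claude and ChatGPT, print both answers side by side.
import anthropic
import openai

PROMPT = "Steelman the case for fine-tuning over RAG for a customer support bot."

claude_reply = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

gpt_reply = openai.OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

print("=== Claude ===\n" + claude_reply)
print("=== ChatGPT ===\n" + gpt_reply)
```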