How to Build an AI Product Manager Portfolio That Hiring Managers Actually Read
TL;DR
A resume gets a recruiter 6 to 9 seconds of attention. A focused portfolio of three to five deep case studies extends that to 5 to 10 minutes with the hiring manager who actually decides. Aspiring AI PMs win interviews when their portfolio shows three things: evidence they can scope an ambiguous problem, evidence they understand model behavior beyond surface prompts, and evidence they can ship something that real users touch. This guide covers what to include in an AI PM portfolio, the four case study formats that convert, the technical artifacts that separate serious candidates from prompt tinkerers, and how to publish the portfolio so it actually reaches hiring managers.
Why Most Aspiring AI PM Portfolios Get Skimmed and Closed
Most portfolios fail in the first 30 seconds because they look like classroom assignments rather than product work. Hiring managers reviewing 80 candidates per role have pattern-matched the failure modes. Here are the four most common ones to avoid.
The prompt engineering screenshot collage
A grid of ChatGPT screenshots with clever prompts and impressive-looking outputs. This shows the candidate can use a chat product. It does not show product judgment, scoping ability, or any understanding of model behavior under load. Hiring managers at companies like OpenAI, Anthropic, Google, and Salesforce see hundreds of these every month and now treat them as a negative signal. The candidate is signaling that prompt engineering is the deepest part of AI PM work, which it is not.
Tradeoff: Prompts belong in the appendix of a case study, not the headline. If you must show prompt work, show the evaluation harness around it: what variants you tested, what metric you used to compare them, and how you decided which one shipped. That reframes the work from tinkering to product engineering.
The hypothetical product spec with no model
A 12-page PRD for an imagined AI feature with personas, user flows, and KPIs but no actual prototype, no evaluation, and no contact with a real model. This format was acceptable in traditional PM portfolios because the bar was strategic clarity. AI PM hiring managers now want evidence the candidate has wrestled with hallucinations, latency, and cost in a real environment. A spec without artifacts reads as someone who has not done the work.
Tradeoff: If you cannot ship a working prototype, at minimum run an evaluation against an off-the-shelf API. Generate 50 to 100 sample inputs, score the outputs against your acceptance criteria, and write up what you learned about where the model fails. This converts a hypothetical spec into a piece of grounded research.
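The loop above can be sketched in a few lines of Python. Everything here is a hypothetical placeholder: `call_model` stands in for whatever API client you use, and the keyword-match rubric is a deliberately simple stand-in for your real acceptance criteria.

```python
# Minimal evaluation sketch. call_model() is a hypothetical stand-in for
# any off-the-shelf model API; the rubric is a toy keyword check.

def call_model(prompt: str) -> str:
    """Placeholder for a real API call (swap in your SDK client here)."""
    return "stub output for: " + prompt

def passes_rubric(output: str, expected_keywords: list[str]) -> bool:
    """Toy acceptance criterion: every required keyword must appear."""
    text = output.lower()
    return all(kw.lower() in text for kw in expected_keywords)

# 50 to 100 sample inputs would go here; two shown for illustration.
cases = [
    {"prompt": "Summarize: revenue grew 12% year over year.",
     "expected_keywords": ["revenue", "12%"]},
    {"prompt": "Summarize: churn fell from 5% to 3%.",
     "expected_keywords": ["churn", "3%"]},
]

results = [passes_rubric(call_model(c["prompt"]), c["expected_keywords"])
           for c in cases]
pass_rate = sum(results) / len(results)
print(f"pass rate: {pass_rate:.0%}")
```

The write-up matters more than the code: the interesting output is not the pass rate itself but the list of inputs that failed and your analysis of why.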
The Notion page wall of text
A single very long Notion document covering background, problem, solution, metrics, and roadmap with no visual hierarchy, no diagrams, and no skim path. Hiring managers read on phones between meetings. A wall of text gets closed before anyone reads paragraph two. The portfolio format itself signals product taste and writing discipline.
Tradeoff: Use a 90-second TL;DR at the top, three to five section headers a manager can scan, one diagram or screenshot per major decision, and a clear link out to the prototype. Length is fine if structure is good. Treat each case study like a press release plus an FAQ rather than a research paper.
The portfolio of solo class projects from generic bootcamps
Three nearly identical case studies on summarization, sentiment analysis, and a RAG chatbot built during the same six-week course. These show that the candidate can complete coursework. They do not differentiate. Hiring managers can tell when work came from the same template, and they will assume the candidate has not formed independent product opinions.
Tradeoff: Pick at least one project that comes from outside any course: a real bug in an existing AI product you analyzed, a teardown of a competitor, a tool you built for your current team, or an evaluation you ran on a public benchmark. Originality is a stronger signal than polish.
The Four Case Study Formats That Convert
An AI PM portfolio needs three to five case studies. Five is the cap. Beyond that, hiring managers stop reading and you dilute the strongest pieces. Mix at least two of the four formats below so the portfolio shows range across discovery, evaluation, shipping, and analysis.
Format 1: The shipped prototype with usage data
Build a working AI feature, put it in front of 20 to 100 real users, and write up what you learned. This can be a Streamlit app, a small Next.js site, a Slack bot, or a Chrome extension. The technical complexity matters less than the fact that real humans used it. Show the design decisions, the metrics you tracked (task completion, user corrections, latency p95), and at least one tradeoff you made between cost, quality, and speed. Two weeks of usage data beats six months of theoretical analysis.
Tradeoff: Shipping is hard for working PMs without engineering support. The unlock is to scope brutally small. Pick a tool you and five colleagues will actually use (a meeting notes summarizer, a PRD reviewer, a customer ticket triager) rather than a consumer product. Internal tools are perfectly acceptable portfolio pieces and are often more impressive because they show product instinct on a known user.
Format 2: The evaluation deep dive
Pick a category of AI behavior (hallucination on factual questions, refusal patterns, code generation accuracy) and run a structured evaluation across two or three models. Write up the methodology, the scoring rubric, the results table, and the product implications. This format shows you understand that AI products are built on top of unreliable components. Hiring managers at AI native companies value this format heavily because it mimics the daily work of an applied AI PM.
Tradeoff: Evaluations require effort and discipline (designing the rubric, generating inputs, scoring consistently). Plan 25 to 40 hours for a good one. The payoff is that this format is rare in candidate portfolios, so a well executed evaluation case study is one of the strongest differentiators available.
Format 3: The teardown of an existing AI product
Pick a shipped AI product (Notion AI, GitHub Copilot, Perplexity, ChatGPT search, a vertical AI tool in your industry) and write a 1,500 to 2,500-word teardown. Cover the user problem, the model choices you can infer from behavior, the guardrails you can detect, the failure modes you found through structured probing, and what you would change as the PM. This format works because it shows you can reverse engineer product decisions and form your own opinions, both core AI PM skills.
Tradeoff: Teardowns can come across as armchair criticism if not handled carefully. Lead with what the team got right before what you would change, ground every claim in observed behavior rather than speculation, and acknowledge the constraints they were probably operating under. A respectful teardown reads as senior. A snarky one reads as junior.
Format 4: The internal tool case study from your day job
If you currently work as a PM, designer, or engineer, build an AI-powered tool that solves a real problem on your team and write it up. Even if you cannot share the code or screenshots due to confidentiality, you can write the case study in a sanitized form. This format is the most credible because it shows you applied AI to a problem with real stakeholders, real constraints, and real measurement. Hiring managers prefer this over consumer demos because it maps directly to the work you would do at their company.
Tradeoff: The risk is confidentiality. Get explicit permission from your manager before publishing. Sanitize numbers (use ranges or percentages rather than raw values), generalize the company description, and never include screenshots of internal data. A vague but honest internal case study is more valuable than a polished consumer demo.
The Technical Artifacts Hiring Managers Look For
Technical artifacts are the difference between a portfolio that says "trust me" and a portfolio that proves it. None of these require an engineering degree. They require a few weekends and a willingness to do work that most candidates skip.
An evaluation harness with at least 50 inputs
A spreadsheet or notebook with 50 to 200 test inputs, the model output for each, and a score against your rubric. Bonus points for testing two or three models on the same inputs. This artifact alone moves a candidate from interesting to interview-ready because it proves they understand AI products are built on probabilistic components and need systematic measurement.
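If you work in a notebook rather than a spreadsheet, the multi-model version of this artifact is just a score table keyed by model. A minimal sketch, assuming hypothetical stub models and a deliberately trivial rubric (swap in real API clients and your own scoring function):

```python
# Sketch of a multi-model evaluation table: same inputs, per-model scores.
# Both "models" are hypothetical stubs standing in for real API clients.

def model_a(prompt: str) -> str:
    return prompt.upper()   # stand-in behavior: shouts everything back

def model_b(prompt: str) -> str:
    return prompt           # stand-in behavior: echoes the input

def score(output: str) -> int:
    """Toy rubric: 1 if the output preserves the original casing, else 0."""
    return 1 if output != output.upper() else 0

inputs = [
    "Summarize the Q3 board update.",
    "Draft a PRD section on guardrails.",
]

# One row of scores per model, one column per input.
table = {name: [score(fn(p)) for p in inputs]
         for name, fn in [("model_a", model_a), ("model_b", model_b)]}

for name, scores in table.items():
    print(f"{name}: mean score {sum(scores) / len(scores):.2f}")
```

The resulting table, exported as a CSV alongside the raw outputs, is exactly the artifact a hiring manager can skim in a minute and probe in an interview.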
A cost and latency table
For any prototype, include a small table showing cost per request and latency p50 and p95 across the model options you considered. Even if the numbers come from public pricing pages, the act of building the table shows you think about unit economics. Most candidates skip this entirely.
A failure mode catalog
A documented list of the 10 to 20 ways your AI feature fails, with examples and frequency estimates. Failure mode thinking is the single strongest signal of AI PM maturity. A candidate who can articulate where their own product breaks is more credible than one who only shows the happy path.
A guardrails or safety section
Even on a small prototype, document what you did about prompt injection, sensitive content, and PII. The answer can be a one paragraph honest assessment of what you implemented and what you punted on. The exercise of thinking through these issues separates serious candidates from people building demos.
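One honest thing to document is a lightweight PII screen on inputs before they reach the model. The sketch below is illustrative only: crude regexes like these miss plenty and are not production-grade, which is exactly the kind of limitation your guardrails section should state plainly.

```python
import re

# Illustrative, not production-grade: crude regexes for common PII shapes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact_pii("Contact jane@example.com or 555-867-5309."))
```

Even punting is fine if it is documented: "inputs are redacted with three regexes; prompt injection is unmitigated" is a stronger signal than silence.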
Code is optional, judgment is not
Hiring managers do not expect AI PM candidates to write production code. They do expect candidates to read code well enough to understand model integrations, to modify a prompt template, to read a JSON config, and to evaluate whether an engineering proposal makes sense. If your portfolio shows that level of technical fluency through small commits, notebook screenshots, or clear architecture diagrams, you can skip the deep engineering work and still clear the bar.
Build a Portfolio That Lands Interviews
Portfolio strategy, case study writing, and shipping your first AI prototypes are core curriculum in the AI PM Masterclass, taught by a Salesforce Sr. Director of Product Management.
How to Publish So Hiring Managers Actually See It
A portfolio that nobody finds is a journal entry. Distribution is half the work. Aspiring AI PMs consistently underinvest here, building beautiful Notion pages that get 12 lifetime views. The tactics below are what actually move a portfolio from your drafts folder to a hiring manager's inbox.
Host on a custom domain, not Notion or Medium
A custom domain (yourname.com or yourname.dev) reads as serious. It costs 12 dollars a year and signals that the candidate has done this before. Use a static site (Next.js, Astro, or even plain HTML) so pages load fast and look polished on mobile. Hiring managers click links from LinkedIn on phones; if the page feels heavy or breaks the layout, they close it within 5 seconds.
Make the homepage scannable in 30 seconds
The homepage should show your name, current role, three sentences about what you focus on in AI PM, and links to the three to five case studies. Do not bury the work behind a long bio. Hiring managers know what AI PM work is; they want to see what you have done. Aim for the homepage to load and answer the question "what does this person work on?" within 30 seconds of arrival.
Distribute the work where AI PMs gather
Post one case study at a time to relevant communities (the AI PM channels in Lenny's Slack, the Latent Space Discord, the AI PM subreddit, LinkedIn). Lead with the insight, not the link. A post that says "I evaluated three models on financial summarization and found X performs N times better on Y" will get 10x the engagement of "here is my new portfolio." Hiring managers monitor these channels for talent.
Send the portfolio to specific people, not job applications
For every company you want to interview at, identify the AI PM hiring manager or a senior AI PM on the team via LinkedIn. Send a four-sentence message: who you are, the one case study most relevant to their work, the link, and a specific question. This converts at 10 to 20 percent for thoughtful messages. Cold applications convert at under 2 percent.
Update one case study per quarter
Portfolios decay. A portfolio that was last updated 14 months ago signals that the candidate stopped doing the work. Add or refresh one case study every quarter. The cadence keeps your skills sharp and gives you a reason to reach out to your network with new work to share. This compounds over 18 to 24 months into a body of work that becomes hard to ignore.