The AI Compliance Checklist Template for Product Launches
By Institute of AI PM · 15 min read · May 3, 2026
TL;DR
Compliance for AI products is not a single checkpoint — it is a continuous process that starts before you write a single line of code and extends well past launch. This template organizes compliance requirements into four phases (pre-development, during development, pre-launch, post-launch) so you know exactly what to verify at each stage. The PM who owns this checklist is the PM who does not get blindsided by a regulatory hold two weeks before ship date.
Why Compliance Is a PM Responsibility — Not Just Legal’s
Here is the uncomfortable truth: legal cannot protect you from compliance failures they do not know about. And they will not know about them unless the PM surfaces them proactively. Legal teams understand regulations. They do not understand your model architecture, your data pipeline, or the specific decisions your AI makes for users. That context lives with the product team.
This means the PM is the compliance integration point. You do not need to be a lawyer, but you do need to know enough to ask the right questions at the right time, translate between engineering and legal, and ensure nothing falls through the gap between “that is technically compliant” and “that will actually hold up under scrutiny.”
PMs own the what and when
You decide what the AI does, who it affects, and when it ships. These decisions have compliance implications. If you make them without compliance input, you are creating risk the company does not know about.
Legal owns the how
Legal tells you how to meet requirements, what documentation is needed, and what language to use in disclosures. But they can only do this if you give them the technical and product context they need.
Together you own the outcome
The best AI PM-legal partnerships work like this: PM identifies what needs review, legal provides the framework, PM implements it in the product, legal verifies. Neither can do the other's job.
The cost of getting this wrong: Regulatory fines are the obvious risk, but the bigger cost is usually launch delay. A compliance issue discovered at pre-launch review can push your ship date by months. The entire purpose of this checklist is to surface those issues early, when they are cheap to fix.
The 4-Phase Compliance Checklist
Each phase has specific compliance items that must be verified. Items are ordered by when they should be completed, not by importance — they are all important. A skipped item in Phase 1 becomes a blocking issue in Phase 3.
Phase 1: Pre-Development
Complete before any model training or data collection begins.
- Regulatory landscape mapped. Identify all applicable regulations for your product’s geography and industry: EU AI Act risk classification, GDPR data processing requirements, CCPA consumer rights, sector-specific rules (HIPAA for health, ECOA for lending, EEOC for employment). Document which apply and why.
- AI risk classification completed. Under the EU AI Act, determine if your system is unacceptable risk, high risk, limited risk, or minimal risk. This classification determines your entire compliance obligation. Get legal sign-off on the classification.
- Data legal basis established. Document your legal basis for collecting and processing each category of data: consent, legitimate interest, contractual necessity, or legal obligation. If using personal data for training, verify consent covers this use.
- Data Protection Impact Assessment (DPIA) initiated. Required under GDPR for high-risk processing. Even if not legally required, a DPIA forces you to think through data risks before building. Start this now; it takes weeks, not days.
- Third-party AI vendor compliance verified. If using external models (OpenAI, Anthropic, Google), verify their data processing agreements, confirm where data is stored and processed, and ensure their terms allow your intended use case.
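The "data legal basis" item above lends itself to a simple register you can hand to legal. Below is a minimal sketch of such a register in Python; the category names, field names, and `phase1_gaps` helper are all illustrative, not a standard schema.

```python
# Phase 1 sketch: a legal-basis register for each data category.
# Every category must map to a documented GDPR basis before collection
# begins, and training use must be explicitly covered by that basis.
LEGAL_BASES = {"consent", "legitimate_interest", "contract", "legal_obligation"}

data_register = {
    "support_transcripts": {"basis": "consent",  "covers_training": True},
    "payment_history":     {"basis": "contract", "covers_training": False},
    "browsing_events":     {"basis": None,       "covers_training": False},  # gap
}

def phase1_gaps(register):
    """Return categories with no valid basis, or with training use not covered."""
    gaps = []
    for category, record in register.items():
        if record["basis"] not in LEGAL_BASES:
            gaps.append((category, "no documented legal basis"))
        elif not record["covers_training"]:
            gaps.append((category, "basis does not cover model training"))
    return gaps

print(phase1_gaps(data_register))
```

Running this against a real register before development begins surfaces exactly the "consent does not cover the new use case" gap described later in this article, while it is still cheap to fix.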
Phase 2: During Development
Verify continuously as you build. These are not one-time checks.
- Training data provenance documented. For every dataset used in training or fine-tuning, record: source, collection method, consent status, potential biases, and date range. This is your audit trail. If you cannot explain where your training data came from, you have a compliance gap.
- Bias testing conducted across protected classes. Run fairness evaluations across all applicable protected attributes: race, gender, age, disability, religion, national origin. Document results, thresholds, and any mitigations applied. This is not optional for high-risk systems.
- Model card or technical documentation drafted. Document model architecture, intended use, known limitations, performance metrics across subgroups, and training data characteristics. The EU AI Act requires this for high-risk systems. Best practice for all systems.
- Data minimization verified. Confirm you are collecting only the data necessary for the AI function. If you are collecting more than needed “in case it is useful later,” you are violating GDPR’s data minimization principle and creating unnecessary risk.
- Human oversight mechanisms built. For high-risk AI decisions, implement human-in-the-loop or human-on-the-loop controls. Document how a human can override, correct, or shut down the AI system. This is a hard requirement under the EU AI Act for high-risk systems.
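For the bias-testing item above, one widely used screen is the EEOC "four-fifths rule": flag any group whose selection rate falls below 80% of the reference group's. A minimal sketch, with illustrative group names and numbers (real evaluations need more than this single metric):

```python
# Disparate impact check (four-fifths rule) -- one of several fairness
# tests a Phase 2 bias evaluation should include.
def selection_rate(selected, total):
    return selected / total

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    Ratios below 0.8 are conventionally flagged for review."""
    ref = selection_rate(*outcomes[reference_group])
    return {g: round(selection_rate(s, t) / ref, 3)
            for g, (s, t) in outcomes.items()}

# (selected, total_applicants) per group -- illustrative data
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = disparate_impact_ratios(outcomes, reference_group="group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b at 0.625 falls below the 0.8 threshold
```

Whatever metrics you choose, the compliance-relevant part is the same: document the thresholds, the results per protected group, and the mitigations applied when a group is flagged.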
Phase 3: Pre-Launch
Complete at least 2 weeks before your target launch date. These take time to remediate if issues are found.
- User-facing transparency implemented. Users must know when they are interacting with AI, what data is being used, and how to opt out. Verify that AI disclosure is prominent (not buried in terms of service), clear (not euphemistic), and accurate (describes what the AI actually does).
- User rights mechanisms functional. Verify that users can: access their data, request correction, request deletion, opt out of AI processing, and receive a human review of automated decisions. Test each pathway end-to-end. Broken rights mechanisms are compliance violations.
- Audit trail system validated. Confirm that all AI decisions are logged with: timestamp, input data (or reference), model version, output, and confidence score. Logs must be tamper-resistant and retained for the period required by applicable regulations.
- Legal review of all user-facing copy. Every piece of copy that describes what the AI does, how it uses data, or what users can expect must be reviewed by legal. This includes tooltips, help articles, marketing pages, and in-app disclosures.
- Incident response plan documented. Define what happens when the AI produces a harmful, biased, or incorrect output that affects users. Include: who is notified, how fast the response must happen, who decides whether to disable the feature, and how affected users are informed.
Phase 4: Post-Launch
Ongoing obligations that continue for the life of the product.
- Continuous bias monitoring active. Production data distributions shift over time. Set up automated monitoring for performance disparities across protected groups. Define alert thresholds and response procedures. Review results at minimum quarterly.
- Model drift detection operational. Monitor for changes in model input distributions and output patterns. When drift is detected, trigger a review cycle that includes compliance re-evaluation. Model updates must go through the same compliance checks as the initial launch.
- User complaint tracking and resolution. Track complaints related to AI decisions separately from general product complaints. Analyze complaint patterns for compliance-relevant issues: bias, accuracy, transparency, and consent. Report trends to legal quarterly.
- Regulatory change monitoring. AI regulation is evolving rapidly. Assign someone (usually the PM) to monitor regulatory changes that affect your product. When new requirements emerge, assess impact and create a remediation timeline within 30 days.
- Annual compliance audit scheduled. Conduct a comprehensive review annually: re-run bias tests, verify all documentation is current, confirm user rights mechanisms still work, and update the risk classification if the product has changed. Document findings and remediation actions.
How to Customize the Checklist for Your Regulatory Environment
The checklist above covers the common baseline. Your specific product will have additional requirements depending on three factors: geography, industry, and AI risk level. Here is how to determine what to add.
Geography-specific additions
EU: Full EU AI Act compliance including conformity assessment for high-risk systems, CE marking, and registration in the EU database. US: State-level AI laws (Colorado AI Act, NYC Local Law 144 for hiring). China: Algorithm registration and security assessments. Map every market you serve.
Industry-specific additions
Healthcare: HIPAA compliance for PHI, FDA pre-market review if the AI qualifies as a medical device. Financial services: Fair lending rules, model risk management (SR 11-7), anti-money laundering screening. Employment: EEOC guidance on AI in hiring, adverse impact analysis.
Risk-level adjustments
Minimal risk (spam filters, recommendation engines): baseline checklist is sufficient. Limited risk (chatbots, emotion detection): add transparency obligations. High risk (hiring, credit, healthcare): add conformity assessment, ongoing monitoring, and mandatory human oversight. Unacceptable risk: do not build it.
Practical tip: Create a compliance matrix with your legal team that maps your product features to applicable regulations. Update it quarterly. This becomes your single source of truth and eliminates the “I thought legal was handling that” problem that kills launch timelines.
Common Compliance Gaps That Delay or Kill Launches
These are the gaps that most frequently cause last-minute launch delays. Each one seems minor until it blocks your release.
Training data consent does not cover the new use case
You collected data under one consent framework and are now using it for a different AI purpose. This is the single most common compliance issue I see. Example: customer support transcripts collected to “improve service quality” being used to train an AI agent. The original consent likely does not cover this. Fix: audit every data source against your specific AI use case before development begins, not at pre-launch review.
No mechanism for users to contest AI decisions
Under GDPR Article 22, users have the right to not be subject to purely automated decisions with significant effects, and to obtain human intervention. If your AI makes decisions about credit, employment, insurance, or access to services, you need a documented contestation process. Many teams build the AI but forget the appeals workflow.
Bias testing done once, never repeated
Bias testing at development time is necessary but not sufficient. Production data distributions change. User demographics shift. Model behavior drifts. If your last bias test was six months ago, it is stale. Regulators expect ongoing monitoring, not a one-time snapshot.
AI disclosure is technically present but practically invisible
Putting “powered by AI” in 8-point font at the bottom of a page does not meet the spirit of transparency requirements. The EU AI Act requires that AI disclosure be “timely, clear, and intelligible.” If users do not actually notice the disclosure, you have a compliance risk regardless of whether it is technically present.
No data processing agreement with your AI vendor
If you send user data to an external AI API, you need a Data Processing Agreement that specifies: what data is sent, how it is processed, where it is stored, how long it is retained, and who is responsible for what. Many teams start building with a vendor API on a standard terms-of-service agreement and never upgrade to a proper DPA. This is a GDPR violation if you handle EU user data.
Compliance Readiness Verification Checklist
Run through this final verification before requesting launch approval. Every item must be “yes” or have an approved exception documented.
Data and Privacy
- Legal basis for all data processing documented
- Data Protection Impact Assessment complete
- Data minimization principle verified
- Data retention and deletion policies active
- Vendor DPAs executed for all third-party AI services
Fairness and Bias
- Bias testing completed across all protected classes
- Fairness metrics defined and thresholds documented
- Disparate impact analysis results within tolerance
- Ongoing bias monitoring pipeline configured
- Bias mitigation strategies documented and applied
Transparency and User Rights
- AI disclosure is prominent and user-tested
- User opt-out mechanism functional end-to-end
- Human review pathway tested for automated decisions
- Data access, correction, and deletion requests work
- Explanation capability matches regulatory requirement
Documentation and Governance
- Model card or system documentation complete
- Audit trail capturing all AI decisions
- Incident response plan reviewed by legal
- Compliance review sign-off from legal obtained
- Post-launch monitoring and audit schedule set
Ship AI Products Without Compliance Surprises
Regulatory strategy, responsible AI frameworks, and compliance planning are core curriculum in the AI PM Masterclass. Learn to navigate the EU AI Act, GDPR, and industry-specific requirements with confidence.
Explore the Program