Ethics reviews for AI products go beyond compliance checkboxes. They require systematic evaluation of how your AI system impacts users, society, and vulnerable populations. This template helps you document and address ethical considerations before launch.
Why AI Ethics Reviews Matter
Beyond Compliance
- Prevent costly recalls and PR disasters before they happen
- Build user trust through demonstrated responsibility
- Stay ahead of emerging AI regulations globally
- Identify blind spots your team may have missed
- Create documentation for legal and audit requirements
AI Ethics Review Template
Copy and paste this template for your ethics review documentation:
╔══════════════════════════════════════════════════════════════╗
║ AI ETHICS REVIEW DOCUMENT                                    ║
╠══════════════════════════════════════════════════════════════╣

FEATURE/PRODUCT: [Name]
REVIEW DATE: [Date]
REVIEWER(S): [Names and roles]
REVIEW TYPE: [ ] Pre-development  [ ] Pre-launch  [ ] Post-launch

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. FEATURE OVERVIEW
───────────────────────────────────────────────────────────────
Purpose: [What does this AI feature do?]
Users: [Who will use it?]
Decisions: [What decisions will AI make or influence?]

Automation Level:
[ ] Fully automated (no human review)
[ ] Human-in-the-loop (human approves AI recommendations)
[ ] Human-on-the-loop (human can override)
[ ] AI-assisted (human makes decision with AI input)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2. DATA & TRAINING ASSESSMENT
───────────────────────────────────────────────────────────────
Training Data Sources:
┌─────────────────────┬───────────────┬──────────────────────┐
│ Source              │ Size          │ Collection Method    │
├─────────────────────┼───────────────┼──────────────────────┤
│                     │               │                      │
│                     │               │                      │
└─────────────────────┴───────────────┴──────────────────────┘

Data Consent:
[ ] Users consented to this use of their data
[ ] Data was anonymized/pseudonymized
[ ] Data licensing permits this use
[ ] Data was synthetically generated

Representation Check:
[ ] Training data represents all user demographics
[ ] Underrepresented groups identified: [List]
[ ] Mitigation for underrepresentation: [Describe]

Data Recency:
Last training data date: [Date]
Refresh frequency: [Frequency]
Concept drift monitoring: [ ] Yes [ ] No

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
3. BIAS ASSESSMENT
───────────────────────────────────────────────────────────────
Protected Attributes Evaluated:
[ ] Age            [ ] Gender     [ ] Race/Ethnicity
[ ] Disability     [ ] Religion   [ ] Sexual orientation
[ ] Socioeconomic  [ ] Geography  [ ] Language
[ ] Other: _______________

Bias Testing Results:
┌─────────────────┬────────────┬────────────┬───────────────┐
│ Attribute       │ Metric     │ Result     │ Pass/Fail     │
├─────────────────┼────────────┼────────────┼───────────────┤
│                 │ Dem Parity │            │               │
│                 │ Equal Opp  │            │               │
│                 │ Pred Parity│            │               │
└─────────────────┴────────────┴────────────┴───────────────┘

Disparate Impact Analysis:
Highest benefiting group: [Group]
Lowest benefiting group: [Group]
Performance gap: [Percentage]
Acceptable threshold: [Percentage]
[ ] Within threshold
[ ] Exceeds threshold - mitigation needed

Bias Mitigation Applied:
[ ] Pre-processing (data rebalancing)
[ ] In-processing (algorithmic constraints)
[ ] Post-processing (output adjustments)
[ ] None required
Mitigation details: [Describe]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
4. FAIRNESS EVALUATION
───────────────────────────────────────────────────────────────
Fairness Definition Selected:
[ ] Individual fairness (similar inputs → similar outputs)
[ ] Group fairness (equal outcomes across groups)
[ ] Counterfactual fairness (same decision if attribute changed)
Justification: [Why this definition fits your use case]

Fairness Metrics:
┌────────────────────────┬─────────────┬─────────────────────┐
│ Metric                 │ Target      │ Actual              │
├────────────────────────┼─────────────┼─────────────────────┤
│ Demographic parity     │             │                     │
│ Equalized odds         │             │                     │
│ Calibration            │             │                     │
│ Individual fairness    │             │                     │
└────────────────────────┴─────────────┴─────────────────────┘

Edge Cases Tested:
┌────────────────────────────────────────┬────────────────────┐
│ Scenario                               │ Outcome            │
├────────────────────────────────────────┼────────────────────┤
│ Non-native language speakers           │                    │
│ Users with disabilities                │                    │
│ Low-bandwidth connections              │                    │
│ Cultural/regional variations           │                    │
│ [Custom edge case]                     │                    │
└────────────────────────────────────────┴────────────────────┘

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
5. TRANSPARENCY & EXPLAINABILITY
───────────────────────────────────────────────────────────────
User Disclosure:
[ ] Users know they're interacting with AI
[ ] AI vs human distinction is clear
[ ] Confidence levels are communicated
Disclosure method: [Describe]

Explainability Level:
[ ] Black box (no explanation available)
[ ] Local explanations (per-decision reasoning)
[ ] Global explanations (overall model behavior)
[ ] Full transparency (complete logic visible)

Explanation Provided to Users:
[ ] Why this recommendation/decision
[ ] What data influenced it
[ ] How to get different outcome
[ ] How to contest/appeal
Explanation format: [Text/Visual/Interactive]

Documentation Available:
[ ] Model card published
[ ] Data sheet available
[ ] API documentation includes limitations
[ ] User-facing FAQ exists

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6. STAKEHOLDER IMPACT ANALYSIS
───────────────────────────────────────────────────────────────
Direct Users:
Impact: [ ] Positive [ ] Neutral [ ] Negative [ ] Mixed
Benefits: [List]
Risks: [List]
Mitigation: [Describe]

Indirect Stakeholders:
┌────────────────────┬────────────┬─────────────────────────┐
│ Stakeholder        │ Impact     │ Consideration           │
├────────────────────┼────────────┼─────────────────────────┤
│ Employees          │            │                         │
│ Competitors        │            │                         │
│ Society            │            │                         │
│ Environment        │            │                         │
│ Vulnerable groups  │            │                         │
└────────────────────┴────────────┴─────────────────────────┘

Vulnerable Population Assessment:
Identified vulnerable groups: [List]
Additional protections: [Describe]
Opt-out available: [ ] Yes [ ] No

Job Displacement Risk:
[ ] No jobs affected
[ ] Jobs augmented (productivity increase)
[ ] Jobs partially displaced
[ ] Jobs fully displaced
Affected roles: [List]
Transition support: [Describe]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
7. SAFETY & HARM PREVENTION
───────────────────────────────────────────────────────────────
Potential Harms Identified:
┌──────────────────────────────┬──────────┬────────────┬──────────┐
│ Harm Type                    │ Severity │ Likelihood │ Priority │
├──────────────────────────────┼──────────┼────────────┼──────────┤
│ Physical safety              │          │            │          │
│ Psychological harm           │          │            │          │
│ Financial harm               │          │            │          │
│ Privacy violation            │          │            │          │
│ Discrimination               │          │            │          │
│ Manipulation                 │          │            │          │
│ Misinformation               │          │            │          │
└──────────────────────────────┴──────────┴────────────┴──────────┘

Misuse Scenarios:
┌────────────────────────────────────────┬────────────────────┐
│ Potential Misuse                       │ Prevention         │
├────────────────────────────────────────┼────────────────────┤
│                                        │                    │
│                                        │                    │
└────────────────────────────────────────┴────────────────────┘

Safety Controls:
[ ] Content filters implemented
[ ] Rate limiting in place
[ ] Abuse detection active
[ ] Human escalation path exists
[ ] Kill switch available
Details: [Describe each control]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8. PRIVACY & DATA PROTECTION
───────────────────────────────────────────────────────────────
Data Minimization:
[ ] Only necessary data collected
[ ] Retention period defined: [Duration]
[ ] Deletion process documented

User Rights:
[ ] Access their data
[ ] Correct their data
[ ] Delete their data
[ ] Export their data
[ ] Opt out of AI processing

Privacy by Design:
[ ] Anonymization applied where possible
[ ] Encryption at rest and in transit
[ ] Access controls implemented
[ ] Audit logging enabled

Regulatory Compliance:
[ ] GDPR compliant
[ ] CCPA compliant
[ ] HIPAA compliant (if applicable)
[ ] Industry-specific: [List]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
9. ACCOUNTABILITY & GOVERNANCE
───────────────────────────────────────────────────────────────
Ownership:
Ethics owner: [Name and role]
Technical owner: [Name and role]
Escalation path: [Describe]

Review Cadence:
[ ] One-time review
[ ] Quarterly review
[ ] Triggered by significant changes
[ ] Continuous monitoring
Next review date: [Date]

Incident Response:
Reporting channel: [Describe]
Response SLA: [Time]
Remediation process: [Describe]

Audit Trail:
[ ] All decisions logged
[ ] Model versions tracked
[ ] Changes documented
[ ] Audit accessible to: [Roles]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
10. REVIEW DECISION
───────────────────────────────────────────────────────────────
Overall Assessment:
[ ] APPROVED - No significant ethical concerns
[ ] APPROVED WITH CONDITIONS - See required actions
[ ] REQUIRES CHANGES - Cannot proceed until addressed
[ ] REJECTED - Fundamental ethical issues

Required Actions Before Launch:
┌───┬───────────────────────────────────┬───────────┬─────────┐
│ # │ Action                            │ Owner     │ Due     │
├───┼───────────────────────────────────┼───────────┼─────────┤
│ 1 │                                   │           │         │
│ 2 │                                   │           │         │
│ 3 │                                   │           │         │
└───┴───────────────────────────────────┴───────────┴─────────┘

Ongoing Monitoring Requirements:
[List metrics and frequency]

Sign-offs:
Product Manager:  _____________ Date: _______
Engineering Lead: _____________ Date: _______
Legal/Compliance: _____________ Date: _______
Ethics Reviewer:  _____________ Date: _______

╚══════════════════════════════════════════════════════════════╝
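The disparate impact analysis in section 3 of the template boils down to one ratio. Here is a minimal sketch in Python (the article specifies no language; the function name and the 0.8 cutoff from the common "four-fifths rule" are illustrative — use the threshold your own review defines):

```python
def disparate_impact_ratio(rate_lowest, rate_highest):
    """Ratio of the lowest-benefiting group's positive-outcome rate
    to the highest-benefiting group's rate."""
    return rate_lowest / rate_highest

# Hypothetical approval rates from the template's example fields
ratio = disparate_impact_ratio(0.58, 0.60)

# A widely used heuristic (the "four-fifths rule") flags ratios below 0.8;
# your acceptable threshold may be stricter
needs_mitigation = ratio < 0.8
```

A 58% vs. 60% approval rate gives a ratio of about 0.97, well above the 0.8 heuristic, so this hypothetical pair would pass.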
Understanding Bias Metrics
Demographic Parity
Positive outcome rates should be equal across groups.
P(Ŷ=1|A=0) = P(Ŷ=1|A=1)

Example: Loan approval rates
• Group A: 60% approved
• Group B: 58% approved
• Difference: 2% ✓ Acceptable
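In code, the check reduces to comparing positive-prediction rates per group. A minimal sketch (Python; data and the 5-point threshold are made up for illustration):

```python
def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

# Hypothetical loan-approval predictions, already split by group
group_a = [1, 1, 1, 0, 0]  # 60% approved
group_b = [1, 1, 0, 0, 0]  # 40% approved

gap = abs(positive_rate(group_a) - positive_rate(group_b))
acceptable = gap <= 0.05   # example threshold; pick your own in the review
```

Here the 20-point gap would fail an example 5-point threshold and trigger mitigation.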
Equal Opportunity
True positive rates should be equal across groups.
P(Ŷ=1|Y=1,A=0) = P(Ŷ=1|Y=1,A=1)

Example: Qualified candidate selection
• Group A TPR: 85%
• Group B TPR: 72%
• Gap: 13% ✗ Needs work
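Unlike demographic parity, this metric needs ground-truth labels: it compares true positive rates, not raw selection rates. A sketch with hypothetical labels and predictions:

```python
def tpr(y_true, y_pred):
    """True positive rate: share of actual positives predicted positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_true)

# Hypothetical qualified-candidate labels (y_true) and selections (y_pred)
a_true, a_pred = [1, 1, 1, 1, 0], [1, 1, 1, 0, 0]  # TPR = 3/4
b_true, b_pred = [1, 1, 1, 1, 0], [1, 1, 0, 0, 0]  # TPR = 2/4

gap = abs(tpr(a_true, a_pred) - tpr(b_true, b_pred))  # 0.25
```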
Predictive Parity
Precision should be equal across groups.
P(Y=1|Ŷ=1,A=0) = P(Y=1|Ŷ=1,A=1)

Example: Fraud prediction accuracy
• Group A precision: 78%
• Group B precision: 81%
• Difference: 3% ✓ Acceptable
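Predictive parity conditions on the prediction rather than the label: among the cases the model flags, what share are truly positive? A sketch with hypothetical fraud labels:

```python
def precision(y_true, y_pred):
    """Share of positive predictions that are actually positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_pred)

# Hypothetical fraud labels and model flags per group
a_true, a_pred = [1, 1, 1, 0, 0], [1, 1, 1, 1, 0]  # precision = 3/4
b_true, b_pred = [1, 1, 0, 0, 0], [1, 1, 1, 1, 0]  # precision = 2/4
```

Note that demographic parity, equal opportunity, and predictive parity generally cannot all be satisfied at once when base rates differ across groups, which is why the template asks you to justify which fairness definition fits your use case.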
Calibration
Confidence scores should reflect true probabilities.
P(Y=1|S=s,A=0) = P(Y=1|S=s,A=1)

Example: 80% confidence means
• Group A: 79% actually positive
• Group B: 82% actually positive
• Both calibrated ✓
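A simple calibration check buckets predictions by confidence score and compares the observed positive rate in each bucket to the stated confidence, per group. A tiny sketch (Python; the bucket bounds and data are invented for illustration):

```python
def empirical_rate(y_true, scores, lo, hi):
    """Observed positive rate among examples scored in [lo, hi)."""
    bucket = [t for t, s in zip(y_true, scores) if lo <= s < hi]
    return sum(bucket) / len(bucket)

# Among predictions scored around 0.8, roughly 80% should be positive
y_true = [1, 1, 1, 1, 0]
scores = [0.82, 0.81, 0.80, 0.83, 0.80]
rate = empirical_rate(y_true, scores, 0.8, 0.9)  # 4/5 = 0.8, well calibrated
```

In practice you would run this per bucket and per protected group, flagging buckets where one group's observed rate drifts far from the score.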
Stakeholder Impact Framework
Use this matrix to systematically evaluate impact across stakeholder groups:
STAKEHOLDER IMPACT MATRIX
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMPACT LEVELS:
++  Strong positive
+   Mild positive
0   Neutral
-   Mild negative
--  Strong negative
?   Unknown

STAKEHOLDER CATEGORIES:
┌─────────────────────┬────────┬────────┬────────┬──────────┐
│ Stakeholder         │ Access │ Quality│ Agency │ Overall  │
├─────────────────────┼────────┼────────┼────────┼──────────┤
│ Primary users       │        │        │        │          │
│ Secondary users     │        │        │        │          │
│ Non-users affected  │        │        │        │          │
│ Internal employees  │        │        │        │          │
│ Vulnerable groups   │        │        │        │          │
│ Future generations  │        │        │        │          │
└─────────────────────┴────────┴────────┴────────┴──────────┘

DIMENSION DEFINITIONS:
• Access: Can they use/benefit from the AI equally?
• Quality: Does AI quality vary by group?
• Agency: Do they maintain control over AI decisions?
• Overall: Net impact considering all factors

VULNERABLE GROUP CHECKLIST:
[ ] Low digital literacy users
[ ] Users with disabilities
[ ] Non-native language speakers
[ ] Low-income users
[ ] Elderly users
[ ] Children/minors
[ ] Marginalized communities
[ ] Users in crisis situations
Common Ethical Issues to Watch
High-Risk Patterns
- Proxy discrimination: Using seemingly neutral features, such as ZIP codes, as a proxy for race or other protected attributes
- Automation bias: Users over-trusting AI decisions
- Feedback loops: Biased predictions reinforcing bias
- Consent washing: Burying AI disclosure in terms
- Performative ethics: Checkbox compliance without substance
Warning Signs
- No representative testing across demographics
- Rushing ethics review to meet deadlines
- Single reviewer without diverse perspectives
- No clear accountability for ethical issues
- Lack of user recourse for AI decisions
Ethics Review Cadence
WHEN TO CONDUCT ETHICS REVIEWS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
MANDATORY REVIEWS:
┌────────────────────────────────────┬──────────────────────┐
│ Trigger                            │ Review Type          │
├────────────────────────────────────┼──────────────────────┤
│ New AI feature development         │ Full review          │
│ Model architecture change          │ Full review          │
│ Training data source change        │ Data & bias review   │
│ Expansion to new user segment      │ Impact review        │
│ Regulatory change affecting AI     │ Compliance review    │
│ Major incident or complaint        │ Incident review      │
│ Quarterly (high-risk features)     │ Monitoring review    │
│ Annually (all AI features)         │ Comprehensive audit  │
└────────────────────────────────────┴──────────────────────┘

REVIEW TEAM COMPOSITION:
• Product Manager (owner)
• ML/AI Engineer
• Legal/Compliance representative
• User researcher or customer advocate
• Diverse reviewer (different background/perspective)
• External ethics advisor (for high-risk features)

ESCALATION CRITERIA:
Escalate to leadership if:
• Any "strong negative" stakeholder impact
• Bias metrics exceed acceptable thresholds
• No clear mitigation for identified harms
• Regulatory compliance gaps
• Reviewer disagreement on approval
Quick Ethics Review Checklist
Before Development
- [ ] Ethical purpose clearly defined
- [ ] Stakeholder impacts considered
- [ ] Data consent verified
- [ ] Bias risks identified
Before Launch
- [ ] Bias testing completed
- [ ] Fairness metrics met
- [ ] User disclosure ready
- [ ] Appeal process documented
Post-Launch
- [ ] Monitoring active
- [ ] Feedback channel open
- [ ] Incident response ready
- [ ] Review cadence scheduled
Ongoing
- [ ] Regular bias audits
- [ ] Stakeholder feedback reviewed
- [ ] Model drift monitored
- [ ] Documentation updated