AI Audit Checklist

This checklist provides a structured framework for auditing AI systems across six key categories. Each item lists the evidence required to demonstrate compliance. Use this as a baseline and adapt it to your organization's regulatory requirements, risk profile, and AI maturity level.

Data Governance

| Audit Item | Evidence Requirements | Status |
| --- | --- | --- |
| Data inventory and classification | Data catalog with classification labels; data lineage documentation; data owner assignments. | Required |
| Data quality management | Data quality metrics and thresholds; data validation procedures; quality monitoring dashboards. | Required |
| Consent and legal basis | Consent records; legal basis documentation for each data processing activity; data processing agreements. | Required |
| Data retention and deletion | Retention policy; deletion procedures and logs; evidence of periodic data review and purge. | Required |
| Training data documentation | Training dataset cards; data source provenance; annotation guidelines and quality reports; bias assessment of training data. | Required |
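
Data quality thresholds are easier to audit when they are enforced in code rather than described in prose. The sketch below is a minimal, hypothetical quality gate over a list-of-dicts dataset; the field names and threshold values are illustrative assumptions, not part of this checklist.

```python
# Hypothetical data quality gate: per-field completeness checks against
# documented thresholds. Field names and limits are illustrative only.

def completeness(records, field):
    """Fraction of records with a non-null value for `field`."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) is not None)
    return filled / len(records)

def passes_quality_gate(records, thresholds):
    """Return (ok, failures) where failures maps each field that fell
    below its documented completeness threshold to its observed score."""
    failures = {}
    for field, minimum in thresholds.items():
        score = completeness(records, field)
        if score < minimum:
            failures[field] = score
    return (not failures, failures)
```

Gate results like these, logged per pipeline run, can serve directly as the "quality monitoring" evidence named above.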

Model Documentation

| Audit Item | Evidence Requirements | Status |
| --- | --- | --- |
| Model card or factsheet | Completed model card per NIST/Google/IBM template; intended use, limitations, performance metrics, ethical considerations documented. | Required |
| Architecture and design documentation | System architecture diagrams; model architecture description; feature engineering documentation; hyperparameter records. | Required |
| Version control and change history | Model version registry; change logs; training run records with reproducibility information (seeds, configurations, data snapshots). | Required |
| Deployment documentation | Deployment architecture; infrastructure requirements; rollback procedures; A/B testing methodology (if applicable). | Required |
| Decommissioning plan | End-of-life criteria; data disposal procedures; stakeholder notification plan; transition procedures. | Recommended |
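
Reproducibility records are most useful to an auditor when they are tamper-evident. Below is one possible sketch of a training run record capturing the fields named above (seed, configuration, data snapshot); the schema is an assumption, not a mandated format.

```python
# Illustrative model version registry entry with a content fingerprint,
# so an auditor can verify a run record has not been altered.
# The field names here are assumptions, not a required schema.

import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class TrainingRun:
    model_name: str
    version: str
    seed: int
    config: dict
    data_snapshot_id: str

    def fingerprint(self):
        """Stable SHA-256 over the canonicalized record contents."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Identical inputs always yield the same fingerprint, so the hash can be stored alongside the change log as integrity evidence.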

Testing and Validation

| Audit Item | Evidence Requirements | Status |
| --- | --- | --- |
| Pre-deployment testing | Test plans; test results with pass/fail criteria; performance benchmarks against baseline; edge case testing results. | Required |
| Adversarial and red-team testing | Red-team test plans and results; adversarial input testing; prompt injection testing (for LLM systems); robustness evaluation. | Required |
| User acceptance testing | UAT results from intended user population; usability assessment; feedback documentation and resolution. | Recommended |
| Regression testing | Regression test suite; results from each model update; comparison with prior version performance. | Required |
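
The prior-version comparison in the last row can be automated as a simple gate. This is a hedged sketch under the assumption that metrics are higher-is-better scalars; the tolerance value is illustrative and should come from your documented pass/fail criteria.

```python
# Hypothetical regression gate: flag any metric where the candidate
# model regresses beyond a documented tolerance versus the prior version.
# The tolerance default is an illustrative assumption.

def regression_check(prior, candidate, tolerance=0.01):
    """Return a dict of regressed metrics -> (baseline, candidate).

    `prior` and `candidate` map metric name -> score, higher is better.
    A metric missing from the candidate also counts as a regression.
    """
    regressions = {}
    for metric, baseline in prior.items():
        new = candidate.get(metric)
        if new is None or baseline - new > tolerance:
            regressions[metric] = (baseline, new)
    return regressions
```

An empty result means the update passes; a non-empty result is exactly the "comparison with prior version performance" evidence the table calls for.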

Bias and Fairness Monitoring

| Audit Item | Evidence Requirements | Status |
| --- | --- | --- |
| Fairness metrics definition | Selected fairness metrics (demographic parity, equalized odds, etc.) with justification; threshold definitions. | Required |
| Pre-deployment bias assessment | Bias evaluation results across protected characteristics; disaggregated performance metrics; identified disparities and mitigations. | Required |
| Ongoing bias monitoring | Monitoring dashboard or reports; alert thresholds; frequency of assessment; drift detection for fairness metrics. | Required |
| Bias incident response | Documented incidents; root cause analysis; corrective actions taken; evidence of resolution effectiveness. | Required |
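
To make the metrics concrete, here is a minimal sketch of demographic parity computed from paired lists of group labels and binary predictions. This is a simplified illustration; a real assessment would typically use a dedicated library such as Fairlearn and disaggregate across all relevant protected characteristics.

```python
# Minimal demographic parity sketch: compare positive-prediction rates
# across groups. Simplified for illustration; not a full bias assessment.

def selection_rates(groups, preds):
    """Positive-prediction rate per group."""
    totals, positives = {}, {}
    for g, p in zip(groups, preds):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if p else 0)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(groups, preds):
    """Max gap in selection rates across groups; 0 means parity."""
    rates = selection_rates(groups, preds).values()
    return max(rates) - min(rates)
```

Tracking this difference over time, with an alert threshold, is one concrete form of the "drift detection for fairness metrics" evidence above.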

Incident Management

| Audit Item | Evidence Requirements | Status |
| --- | --- | --- |
| Incident reporting procedure | Defined reporting channels; severity classification criteria; escalation procedures; response time SLAs. | Required |
| Incident log and tracking | Centralized incident register; status tracking; resolution documentation; trend analysis. | Required |
| Root cause analysis | Completed RCA for significant incidents; contributing factor identification; preventive action plans. | Required |
| Lessons learned | Post-incident reviews; knowledge base updates; policy or procedure revisions triggered by incidents. | Recommended |
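
Severity criteria and response-time SLAs are easier to audit when expressed as an explicit mapping. The sketch below is purely illustrative: the tier names, the three risk signals, and the SLA hours are assumptions to be replaced by your organization's own classification matrix.

```python
# Illustrative incident triage: map risk signals to a severity tier and
# its response-time SLA. Tiers, signals, and hours are assumptions.

SEVERITY_SLA_HOURS = {"critical": 1, "high": 4, "medium": 24, "low": 72}

def classify_incident(user_harm, regulatory_exposure, service_down):
    """Return a severity tier from three boolean risk signals,
    checked in descending order of severity."""
    if user_harm:
        return "critical"
    if regulatory_exposure:
        return "high"
    if service_down:
        return "medium"
    return "low"
```

Encoding the matrix this way makes the "severity classification criteria" row directly testable rather than a prose-only policy.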

Third-Party AI Vendor Assessment

| Audit Item | Evidence Requirements | Status |
| --- | --- | --- |
| Vendor due diligence | Vendor risk assessment questionnaire; security certifications (SOC 2, ISO 27001); AI-specific governance documentation. | Required |
| Contractual protections | Data processing agreements; liability clauses; SLAs for performance and availability; right to audit provisions. | Required |
| Model transparency | Vendor-provided model cards or documentation; performance benchmarks; known limitations and failure modes. | Required |
| Ongoing vendor monitoring | Periodic reassessment schedule; vendor performance reports; contract compliance reviews; incident notification records. | Recommended |
| Exit strategy | Data portability provisions; transition plan; alternative vendor assessment; business continuity procedures. | Recommended |

Using This Checklist

  • Items marked Required represent baseline expectations for any AI system audit. Items marked Recommended reflect best practices that strengthen governance posture.
  • Evidence types are illustrative. Acceptable evidence formats depend on your organization's documentation standards and the auditor's requirements.
  • This checklist aligns with but does not replace requirements from specific frameworks (NIST AI RMF, EU AI Act, ISO 42001). Cross-reference with applicable regulatory requirements.