NIST AI Risk Management Framework (AI 100-1)

The NIST AI Risk Management Framework, published in January 2023, provides a voluntary, flexible structure for managing AI risks throughout the AI system lifecycle. It is designed to be used by organizations of any size, in any sector, and for any type of AI system. The framework is organized around four core functions: Govern, Map, Measure, and Manage.

Key Principles

  • Risk-based approach -- Proportional governance based on the level of risk an AI system poses, not a one-size-fits-all mandate.
  • Lifecycle coverage -- Applies from design and development through deployment, operation, and decommissioning.
  • Trustworthy AI characteristics -- Valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.

GOVERN

Establish and maintain AI risk management policies, processes, and organizational structures.

ID   | Sub-Category                    | Suggested Actions
GV-1 | Policies and Procedures         | Document AI risk management policies; define roles and responsibilities; establish escalation procedures for AI-related risks.
GV-2 | Accountability Structures       | Assign AI risk ownership; create cross-functional governance committees; define decision-making authority for AI deployment.
GV-3 | Workforce Diversity and Culture | Foster interdisciplinary teams; provide AI literacy training; integrate diverse perspectives in AI development.
GV-4 | Organizational Context          | Align AI risk management with enterprise risk; map AI systems to business objectives; document organizational risk tolerance.
GV-5 | Stakeholder Engagement          | Identify affected communities; establish feedback channels; incorporate external input into AI governance.
GV-6 | Legal and Regulatory Compliance | Map applicable regulations; monitor regulatory changes; document compliance evidence and audit trails.

MAP

Identify and categorize AI risks in context, including intended and unintended uses.

ID   | Sub-Category                 | Suggested Actions
MP-1 | AI System Context            | Document intended purpose; identify deployment environment; catalog data sources and dependencies.
MP-2 | Intended and Unintended Uses | Define acceptable use boundaries; anticipate misuse scenarios; document foreseeable failure modes.
MP-3 | Benefits and Costs           | Assess expected value; quantify implementation costs; evaluate opportunity costs of alternatives.
MP-4 | Risk Identification          | Catalog technical risks (bias, drift, adversarial attack); identify societal risks (discrimination, privacy); map operational risks (availability, accuracy).
MP-5 | Impact Assessment            | Evaluate potential harms to individuals and groups; assess reversibility of negative outcomes; consider cumulative effects across populations.
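The MAP outputs above lend themselves to a machine-readable risk register. The sketch below is one hypothetical way to structure such a record in Python; the field names and example entry are our own illustrative assumptions, not part of the NIST AI RMF.

```python
from dataclasses import dataclass

# Hypothetical risk-register entry capturing MAP sub-category outputs.
# All field names are illustrative assumptions, not NIST terminology.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    category: str            # "technical", "societal", or "operational" (MP-4)
    affected_groups: list    # who may be harmed (MP-5)
    reversible: bool         # can the negative outcome be undone? (MP-5)
    intended_use: str        # documented system purpose (MP-1/MP-2)

register = [
    RiskEntry(
        risk_id="R-001",
        description="Model drift degrades loan-approval accuracy over time",
        category="technical",
        affected_groups=["loan applicants"],
        reversible=True,
        intended_use="consumer credit scoring",
    ),
]

# MP-5 suggests extra scrutiny for irreversible harms to groups: a register
# like this makes those entries easy to filter out for review.
high_scrutiny = [r for r in register if not r.reversible]
```

Keeping MAP findings in a structured register rather than free-form documents makes the later MEASURE and MANAGE steps (metrics, prioritization) straightforward to automate.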

MEASURE

Analyze, assess, and track identified AI risks using quantitative and qualitative methods.

ID   | Sub-Category           | Suggested Actions
MS-1 | Risk Metrics           | Define performance benchmarks; establish fairness metrics; set thresholds for acceptable performance degradation.
MS-2 | Testing and Evaluation | Conduct pre-deployment testing; run adversarial testing and red-teaming; validate against diverse datasets and populations.
MS-3 | Continuous Monitoring  | Implement runtime performance tracking; monitor for data and concept drift; track incident rates and near-misses.
MS-4 | Independent Evaluation | Engage third-party auditors; participate in external benchmarking; document evaluation methodology and results.
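MS-1 and MS-3 combine naturally: a benchmark set before deployment becomes the baseline that runtime monitoring checks against. The sketch below shows one minimal form of such a check; the 0.05 threshold and function name are illustrative assumptions, since the framework deliberately leaves thresholds to each organization.

```python
# Hypothetical MS-1/MS-3 check: compare live accuracy against the
# pre-deployment benchmark and flag degradation beyond an agreed threshold.
def check_degradation(baseline_accuracy: float,
                      live_accuracy: float,
                      max_drop: float = 0.05) -> bool:
    """Return True if live performance has dropped past the acceptable threshold."""
    return (baseline_accuracy - live_accuracy) > max_drop

# A drop from 0.92 to 0.85 exceeds the 0.05 threshold and should trigger
# escalation under the organization's GOVERN procedures.
alert = check_degradation(0.92, 0.85)
```

In practice the same pattern applies to fairness metrics and drift statistics, not just accuracy; what matters is that the threshold is documented in advance rather than decided after an incident.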

MANAGE

Prioritize and act on identified AI risks to maximize benefits and minimize negative impacts.

ID   | Sub-Category        | Suggested Actions
MG-1 | Risk Prioritization | Rank risks by severity and likelihood; allocate resources to highest-priority risks; reassess priorities as conditions change.
MG-2 | Risk Treatment      | Implement technical mitigations (debiasing, guardrails); establish operational controls (human oversight, fallback procedures); accept, transfer, or avoid residual risks.
MG-3 | Risk Communication  | Report risk status to leadership; disclose material risks to affected stakeholders; publish transparency reports where appropriate.
MG-4 | Incident Response   | Define incident detection and escalation procedures; conduct root cause analysis; implement corrective actions and lessons learned.
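The MG-1 ranking step is often implemented as a simple severity-times-likelihood score. The sketch below assumes illustrative 1-5 scales and made-up risk names; the framework does not prescribe any particular scoring scheme.

```python
# Hypothetical MG-1 prioritization: rank risks by severity x likelihood
# on assumed 1-5 scales. Risk names and scores are illustrative only.
risks = [
    {"name": "biased outputs", "severity": 5, "likelihood": 3},
    {"name": "service outage", "severity": 3, "likelihood": 4},
    {"name": "data drift",     "severity": 4, "likelihood": 4},
]

ranked = sorted(risks,
                key=lambda r: r["severity"] * r["likelihood"],
                reverse=True)
# The top-ranked risk receives resources first; rerun the ranking as
# conditions change, per the MG-1 suggested actions.
```

A multiplicative score is only one option; some organizations use risk matrices or add factors such as detectability, which this sketch omits for brevity.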

NIST AI RMF Playbook

The NIST AI RMF Playbook, a companion resource to the framework (NIST AI 100-1), provides detailed suggested actions for each sub-category. Organizations should select and adapt actions based on their specific context, risk tolerance, and the nature of their AI systems.

The framework also defines AI RMF Profiles, which allow organizations to align the framework functions with specific use cases and requirements. Comparing a current profile (the organization's existing risk posture) against a target profile (the desired posture) reveals gaps and helps prioritize implementation efforts.
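That current-versus-target comparison can be reduced to a simple gap analysis. The sketch below assumes a hypothetical 0-3 maturity score per sub-category; the scale and scores are our own illustration, not defined by NIST.

```python
# Hypothetical profile gap analysis: score each sub-category's maturity
# (assumed 0-3 scale) in the current and target profiles, then rank gaps.
current = {"GV-1": 2, "MP-4": 1, "MS-3": 0, "MG-2": 1}
target  = {"GV-1": 3, "MP-4": 3, "MS-3": 2, "MG-2": 2}

gaps = sorted(
    ((sub, target[sub] - current[sub]) for sub in target),
    key=lambda item: item[1],
    reverse=True,
)
# Sub-categories with the widest gaps are the natural implementation
# priorities when planning the move from current to target profile.
```

Even a coarse scoring like this gives leadership a defensible, repeatable basis for sequencing the work across the Govern, Map, Measure, and Manage functions.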