EU Artificial Intelligence Act

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based approach to AI regulation, categorizing AI systems into four risk tiers with corresponding obligations. The regulation applies to providers, deployers, importers, and distributors of AI systems in the EU market, regardless of where the provider is established.

Risk Classification

Unacceptable Risk

AI systems that pose a clear threat to safety, livelihoods, or fundamental rights. These are banned outright.

  • Examples: Social scoring by governments; real-time remote biometric identification in public spaces (with limited exceptions); manipulation of vulnerable groups; exploitation of age, disability, or socio-economic circumstances.
  • Requirements: Prohibited. Must not be placed on the EU market, put into service, or used.

High Risk

AI systems that pose significant risk to health, safety, or fundamental rights. Subject to strict obligations before market placement.

  • Examples: AI in critical infrastructure (transport, energy); educational access and vocational training scoring; employment and worker management; essential services access (credit scoring, insurance); law enforcement; migration and border control; administration of justice.
  • Requirements: Risk management system; data governance; technical documentation; record-keeping; transparency and information to deployers; human oversight; accuracy, robustness, and cybersecurity; conformity assessment before market placement; registration in the EU database.

Limited Risk

AI systems with specific transparency obligations. Users must be informed they are interacting with AI.

  • Examples: Chatbots and conversational AI; emotion recognition systems; biometric categorization systems; AI-generated or manipulated content (deepfakes).
  • Requirements: Transparency obligations: clearly inform users of AI interaction; label AI-generated content; disclose emotion recognition or biometric categorization use.

Minimal Risk

AI systems that pose little or no risk. No specific regulatory requirements beyond existing law.

  • Examples: AI-enabled video games; spam filters; inventory management systems; AI-assisted manufacturing optimization.
  • Requirements: No additional requirements under the AI Act. Voluntary codes of conduct encouraged.
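
The four tiers can be restated as a simple lookup. The sketch below is purely illustrative -- it maps a few of the example use cases from the text to their tier and headline obligation, and is not a legal classification tool:

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, with their headline obligation."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict pre-market obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional requirements"


# Illustrative mapping of example use cases (drawn from the tiers above).
EXAMPLE_TIERS: dict[str, RiskTier] = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Look up the headline obligation for a known example use case."""
    return EXAMPLE_TIERS[use_case].value
```

For instance, `obligations("chatbot")` returns "transparency obligations", matching the Limited Risk tier above.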

Prohibited AI Practices (Article 5)

  • Subliminal, manipulative, or deceptive techniques that distort behavior and cause significant harm.
  • Exploitation of vulnerabilities due to age, disability, or specific social or economic situations.
  • Social scoring by public authorities leading to detrimental or unfavorable treatment.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for serious crime, missing persons, and imminent threats).
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
  • Emotion recognition in the workplace and educational institutions (with narrow exceptions).
  • Biometric categorization to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation.

Conformity Assessment for High-Risk Systems

Before a high-risk AI system can be placed on the EU market, providers must complete a conformity assessment to demonstrate compliance with the Act's requirements. The assessment process depends on the type of AI system:

  • Internal conformity assessment -- Available for most Annex III high-risk systems. The provider self-certifies compliance based on internal quality management and technical documentation.
  • Third-party conformity assessment -- Required for biometric identification systems and certain safety-critical applications. A notified body must certify compliance.
  • EU database registration -- All high-risk AI systems must be registered in the EU public database before market placement.
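
The routing above can be sketched as a small decision function. The boolean flag below stands in for the Act's actual criteria (e.g. biometric identification systems) and its name is illustrative:

```python
def conformity_steps(needs_notified_body: bool) -> list[str]:
    """Return the pre-market steps for a high-risk AI system.

    `needs_notified_body` is an illustrative stand-in for the Act's
    criteria routing a system to third-party assessment (e.g. biometric
    identification); most Annex III systems take the internal route.
    """
    if needs_notified_body:
        assessment = "third-party conformity assessment by a notified body"
    else:
        assessment = "internal conformity assessment (self-certification)"
    # Registration in the EU public database is required in either case.
    return [assessment, "registration in the EU public database"]
```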

Implementation Timeline

  • August 1, 2024 -- AI Act enters into force (20 days after publication in the Official Journal).
  • February 2, 2025 -- Prohibitions on unacceptable-risk AI systems apply.
  • August 2, 2025 -- Obligations for general-purpose AI (GPAI) models apply; governance structures must be established.
  • August 2, 2026 -- Most provisions apply, including requirements for high-risk AI systems listed in Annex III.
  • August 2, 2027 -- Obligations apply for high-risk AI systems that are safety components of products covered by existing EU harmonization legislation (Annex I).

Transparency Obligations

The AI Act imposes transparency obligations at multiple levels. Providers of AI systems that interact directly with individuals must ensure users are informed they are interacting with an AI system. AI-generated or manipulated content (including deepfakes) must be labeled as artificially generated or manipulated; where such content is part of an evidently artistic, satirical, or fictional work, the disclosure obligation is limited to a form that does not hamper the display or enjoyment of the work. Providers of general-purpose AI models must publish sufficiently detailed summaries of their training data and comply with EU copyright law.

Penalties for non-compliance can reach 35 million EUR or 7% of global annual turnover, whichever is higher, for prohibited-practice violations, and 15 million EUR or 3% for other infringements.
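
The penalty caps amount to simple arithmetic: for undertakings, Article 99 sets the ceiling at the fixed amount or the turnover percentage, whichever is higher. A minimal sketch (the function name is illustrative):

```python
def max_fine_eur(annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound on an administrative fine for an undertaking.

    Caps from the text above: 35M EUR / 7% of global annual turnover for
    prohibited-practice violations, 15M EUR / 3% for other infringements,
    taking whichever amount is higher (Article 99).
    """
    if prohibited_practice:
        return max(35_000_000.0, 0.07 * annual_turnover_eur)
    return max(15_000_000.0, 0.03 * annual_turnover_eur)
```

For a company with 1 billion EUR in global annual turnover, the cap for a prohibited-practice violation is 70 million EUR, since 7% of turnover exceeds the 35 million EUR floor.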